In statistics, if a population X has a distribution that is not normal, or if its distribution is unknown, you can't automatically say that the distribution of the sample means is normal. But incredibly, you can use a normal distribution to approximate the distribution of the sample mean, x̄, as long as the sample size is large enough. This momentous result is due to what statisticians know and love as the Central Limit Theorem.
The Central Limit Theorem (abbreviated CLT) says that if X does not have a normal distribution (or its distribution is unknown and hence can't be assumed to be normal), the shape of the sampling distribution of x̄ is approximately normal, as long as the sample size, n, is large enough. That is, you get an approximately normal distribution for the means of large samples, even if the distribution of the original values (X) is not normal.
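To see the theorem in action, here is a quick simulation sketch in Python (not part of the theorem itself; the exponential population, the sample size of 30, and the 10,000 repetitions are all arbitrary choices for the demonstration). It draws many samples of size n = 30 from a strongly right-skewed population and looks at the resulting sample means.

import numpy as np

rng = np.random.default_rng(42)

# Population X: exponential with mean 1 (strongly right-skewed, clearly not normal).
n = 30                # sample size ("large enough" by the usual rule of thumb)
num_samples = 10_000  # number of repeated samples to draw

# Draw num_samples samples of size n and record each sample's mean.
sample_means = rng.exponential(scale=1.0, size=(num_samples, n)).mean(axis=1)

# The CLT predicts these means are approximately normal, centered at the
# population mean (1) with standard deviation sigma / sqrt(n), about 0.18 here.
print("mean of sample means:", round(sample_means.mean(), 3))
print("sd of sample means:  ", round(sample_means.std(ddof=1), 3))

Plotting a histogram of sample_means (with matplotlib, for example) shows the familiar bell shape, even though the exponential population itself is heavily skewed.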
Most statisticians agree that if n is at least 30, this approximation will be reasonably close in most cases, although different shapes of the distribution of X require different sample sizes. The less "bell-shaped" or "normal looking" the distribution of the original values of X is, the larger the sample size needs to be. The larger the sample size (n), the closer the distribution of the sample means is to a normal distribution.
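One rough way to check this sample-size effect (again only a sketch, using the same arbitrary exponential population and a few arbitrary values of n) is to measure the skewness of the sample means for several sample sizes; skewness closer to 0 suggests a more bell-shaped, normal-looking distribution.

import numpy as np
from scipy import stats

rng = np.random.default_rng(7)

# Skewness of the sampling distribution of the mean for a right-skewed population,
# at several sample sizes; values closer to 0 indicate a more normal-looking shape.
for n in (5, 30, 100):
    sample_means = rng.exponential(scale=1.0, size=(10_000, n)).mean(axis=1)
    print(f"n = {n:3d}: skewness of sample means = {stats.skew(sample_means):.3f}")

The skewness shrinks as n grows, matching the statement that larger samples give sample means whose distribution is closer to normal.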