In statistics, every confidence interval (and every margin of error, for that matter) has a percentage associated with it, called a confidence level. This percentage represents how confident you are that the results will capture the true population parameter, depending on the luck of the draw with your random sample.
A confidence level helps you account for the other possible sample results you could have gotten when you’re making an estimate of a parameter using the data from only one sample. If you want to account for 95% of the other possible results, your confidence level would be 95%.
What level of confidence is typically used by researchers? Confidence levels range from 80% to 99%, with the most common confidence level being 95%. Often, the particular choice of confidence level depends on your field of study or the journal your results would appear in. In fact, statisticians have a saying that goes, “Why do statisticians like their jobs? Because they have to be correct only 95% of the time.” (Sort of catchy, isn’t it? And let’s see weather forecasters beat that.)
Variability in sample results is measured in terms of the number of standard errors. A standard error is similar to the standard deviation of a data set, except a standard error applies to sample means or sample percentages that you could have gotten if different samples were taken. (The standard deviation applies to individuals, not samples, although the standard deviation of the individuals does affect the standard error.)
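If you'd like to see that distinction in action, here is a rough simulation sketch (my own illustration, with a made-up population whose individuals average 50 with a standard deviation of 10): the spread among individuals stays near 10, while the spread among sample means of size 100 shrinks to roughly 10 divided by the square root of 100, or 1.

```python
# A rough simulation (an illustrative sketch, not from the text) of the
# difference between a standard deviation and a standard error.
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical population: individuals average 50 with standard deviation 10.
population = rng.normal(loc=50, scale=10, size=100_000)

n = 100  # size of each sample
sample_means = [rng.choice(population, size=n).mean() for _ in range(2_000)]

print(f"standard deviation of individuals: {population.std():.2f}")      # about 10
print(f"standard error of the sample mean: {np.std(sample_means):.2f}")  # about 10 / sqrt(100) = 1
```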
Standard errors are the building blocks of confidence intervals. A confidence interval is a statistic plus or minus a margin of error, and the margin of error is determined by the number of standard errors you need to get the confidence level you want.
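To make that concrete, here is a minimal sketch using made-up numbers (a sample mean of 7.5 hours of sleep, a sample standard deviation of 2.0, and a sample size of 100); the 1.96 multiplier is the number of standard errors needed for 95% confidence, as the table below shows.

```python
# A minimal sketch (hypothetical numbers) of assembling a confidence interval:
# statistic plus or minus (number of standard errors) x (standard error).
import math

sample_mean = 7.5   # the statistic (hypothetical sample mean)
sample_sd = 2.0     # hypothetical sample standard deviation
n = 100             # hypothetical sample size

standard_error = sample_sd / math.sqrt(n)   # 2.0 / 10 = 0.2

num_standard_errors = 1.96                  # needed for 95% confidence (see table below)
margin_of_error = num_standard_errors * standard_error

lower = sample_mean - margin_of_error
upper = sample_mean + margin_of_error
print(f"95% confidence interval: ({lower:.2f}, {upper:.2f})")
# 95% confidence interval: (7.11, 7.89)
```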
Every confidence level has a corresponding number of standard errors that have to be added or subtracted. This number of standard errors is called a critical value. In a situation where you use a Z-distribution to find the number of standard errors, you call the critical value the z*-value (pronounced z-star value). The following table shows a list of z*-values for some of the most commonly used confidence levels.
As the confidence level increases, the number of standard errors increases, so the margin of error increases.
z*-values for Various Confidence Levels

| Confidence Level | z*-value |
|---|---|
| 80% | 1.28 |
| 90% | 1.645 (by convention) |
| 95% | 1.96 |
| 98% | 2.33 |
| 99% | 2.58 |
Note that these values are taken from the standard normal (Z-) distribution. The area between each z*-value and the negative of that z*-value is (approximately) the confidence percentage. For example, the area between z* = 1.28 and z* = -1.28 is approximately 0.80. Hence, this table can be expanded to other confidence percentages as well; it shows only the confidence levels most commonly used.
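If you want to check these values yourself, the following sketch (assuming you have Python with scipy available, which the text doesn't require) recovers each z*-value by asking the standard normal distribution for the point that leaves half of the leftover area in each tail.

```python
# Reproducing the table's z*-values from the standard normal distribution:
# for a given confidence level, z* leaves (1 - confidence) / 2 in each tail.
from scipy.stats import norm

for confidence in (0.80, 0.90, 0.95, 0.98, 0.99):
    tail_area = (1 - confidence) / 2
    z_star = norm.ppf(1 - tail_area)   # upper critical value
    print(f"{confidence:.0%}: z* = {z_star:.3f}")
# 80%: z* = 1.282
# 90%: z* = 1.645
# 95%: z* = 1.960
# 98%: z* = 2.326
# 99%: z* = 2.576
```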
If you want to be more than 95% confident about your results, you need to add and subtract more than about two standard errors. For example, to be 99% confident, you would add and subtract about two and a half standard errors (2.58, from the table) to obtain your margin of error. The higher the confidence level, the larger the z*-value, the larger the margin of error, and the wider the confidence interval (assuming everything else stays the same). You have to pay a certain price for more confidence.
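Here is an illustration of that price, using a hypothetical poll in which 52% of 1,000 respondents answer yes: the same data produce a wider interval as the confidence level, and therefore z*, goes up.

```python
# The price of confidence: the same hypothetical poll result (52% of
# n = 1,000) gives wider intervals at higher confidence levels.
import math

p_hat, n = 0.52, 1000
standard_error = math.sqrt(p_hat * (1 - p_hat) / n)   # about 0.016

for confidence, z_star in ((0.90, 1.645), (0.95, 1.96), (0.99, 2.58)):
    margin = z_star * standard_error
    print(f"{confidence:.0%}: {p_hat - margin:.3f} to {p_hat + margin:.3f} "
          f"(margin of error ±{margin:.3f})")
# 90%: 0.494 to 0.546 (margin of error ±0.026)
# 95%: 0.489 to 0.551 (margin of error ±0.031)
# 99%: 0.479 to 0.561 (margin of error ±0.041)
```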
A common question among folks first learning about confidence intervals is, “Why not just always choose a 100% confidence interval?” Remember that a confidence interval gives a range of plausible values for some unknown population parameter. Suppose the desired population parameter is the proportion of all teenagers who own a cell phone. What would the range of proportions have to be in order for you to be 100% confident that it contains the true, unknown proportion? The interval would have to contain all possible proportions. Yes, it would have to go all the way from 0 to 1 (which is equivalent to 0% to 100%)! But that isn’t very useful for narrowing down the plausible values of the proportion.
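One way to see the problem (a small demonstration of my own, not a calculation from the text) is to watch what happens to the required critical value as the confidence level creeps toward 100%: it grows without bound, so the interval would have to stretch to cover every possible value.

```python
# As the confidence level approaches 100%, the required critical value
# from the standard normal distribution grows without bound.
from scipy.stats import norm

for confidence in (0.95, 0.999, 0.999999, 1.0):
    z_star = norm.ppf(1 - (1 - confidence) / 2)
    print(f"{confidence:.4%}: z* = {z_star:.2f}")
# 95.0000%: z* = 1.96
# 99.9000%: z* = 3.29
# 99.9999%: z* = 4.89
# 100.0000%: z* = inf
```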