When is a difference between two percentages more likely to be statistically significant?

#1
I am just wondering whether there are general rules regarding when the difference between two percentages is more likely to be significant, apart from when the sample size increases. Any help would be very much appreciated.
 

Buckeye

Active Member
#2
Well, it depends on a few factors. If we are trying to detect small differences (i.e., small effect sizes) between the percentages, we will need larger samples than we would for larger differences. It also depends on the Type I error rate, the Type II error rate, and power. Typically, we fix alpha at .01, .05, or .10 and then look at the sample sizes needed for various levels of power, or vice versa. Generally, we like power to be 80% or higher; power is the probability that we detect the effect given that the effect truly exists. All these calculations are driven by the context of the problem/research question.
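As a rough illustration of how these pieces interact, here is a small Python sketch of the standard normal-approximation sample-size formula for comparing two independent proportions; the proportions, alpha, and power used below are just assumed example values.

```python
# Sketch: approximate sample size per group for a two-sided two-proportion z-test.
# The proportions, alpha, and power below are assumed example values.
from math import sqrt, ceil
from scipy.stats import norm

def n_per_group(p1, p2, alpha=0.05, power=0.80):
    """Approximate n per group needed to detect p1 vs p2 at the given alpha and power."""
    z_a = norm.ppf(1 - alpha / 2)   # critical value for the two-sided Type I error rate
    z_b = norm.ppf(power)           # quantile corresponding to the desired power
    p_bar = (p1 + p2) / 2           # pooled proportion used under the null
    numerator = (z_a * sqrt(2 * p_bar * (1 - p_bar))
                 + z_b * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return ceil(numerator / (p1 - p2) ** 2)

# Smaller differences need much larger samples at the same alpha and power:
print(n_per_group(0.50, 0.60))   # 10-point gap: a few hundred per group
print(n_per_group(0.50, 0.52))   # 2-point gap: close to ten thousand per group
```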
 

Dason

Ambassador to the humans
#3
And there are other things that matter, but they are things we can't actually control, so it probably doesn't matter much to mention them.
 

Miner

TS Contributor
#4
I believe there are also differences depending on whether you are in the middle (e.g., p ~ 0.5) versus the extremes (e.g., p ~ 0.1 or 0.9).
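One way to see why the location matters: the variance of a sample proportion, p(1 − p)/n, is largest at p = 0.5. A tiny sketch of the standard errors, with an assumed n = 100 used only for illustration:

```python
# Standard error of a sample proportion at different values of p, for an assumed n = 100.
# p*(1 - p) peaks at p = 0.5, so proportions near the middle are estimated less precisely.
from math import sqrt

n = 100  # assumed sample size for illustration
for p in (0.1, 0.5, 0.9):
    print(f"p = {p:.1f}  SE = {sqrt(p * (1 - p) / n):.3f}")
# p = 0.1  SE = 0.030
# p = 0.5  SE = 0.050
# p = 0.9  SE = 0.030
```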
 

noetsi

No cake for spunky
#6
What do you mean by significant? If you mean statistically significant, some tests are more powerful at detecting an effect than others.

If you mean substantively significant, the answer is whether there really are differences :)
 
#7
Miner said:
I believe there are also differences depending on whether you are in the middle (e.g., p ~ 0.5) versus the extremes (e.g., p ~ 0.1 or 0.9).
Thank you, this is what I thought. Are you more or less likely to get a statistically significant result if the percentages are closer to 50%? Thanks in advance.
 

Miner

TS Contributor
#9
Thank you, this is what I thought. Are you more or less likely to get a statistically significant result if the percentages are closer to 50%? Thanks in advance.
The power of a proportions test decreases when you are in the middle, so you have to compensate by increasing the sample size.
 
#10
Miner said:
The power of a proportions test decreases when you are in the middle, so you have to compensate by increasing the sample size.
Thank you. So if two percentages are close to 50%, say 46% vs. 54%, then all things being equal, the difference between them would be less likely to be statistically significant than between, say, 76% and 84%? Many thanks
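A quick numerical check of this, using the normal approximation for a two-sided two-proportion test and an assumed n = 200 per group (the 200 is arbitrary, chosen only for illustration):

```python
# Approximate power of a two-sided two-proportion z-test for an 8-point difference,
# once near the middle (46% vs 54%) and once nearer an extreme (76% vs 84%).
# n = 200 per group is an assumed value for illustration.
from math import sqrt
from scipy.stats import norm

def approx_power(p1, p2, n, alpha=0.05):
    se = sqrt(p1 * (1 - p1) / n + p2 * (1 - p2) / n)   # SE of the difference in proportions
    return norm.cdf(abs(p1 - p2) / se - norm.ppf(1 - alpha / 2))

n = 200
print(approx_power(0.46, 0.54, n))   # about 0.36: middle, harder to detect
print(approx_power(0.76, 0.84, n))   # about 0.52: nearer the extreme, easier to detect
```

Same 8-point gap, but noticeably lower power in the middle, which lines up with the point above about compensating with a larger sample.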
 

fed2

Active Member
#13
What about if the relative risk is held constant, rather than the risk difference? For example, comparing 0.5 to 1.3*0.5 = 0.65, versus 0.1 to 0.13 or something. Doesn't this just reflect a sort of preference for the risk difference? Is the decision about what constitutes an 'apples to apples' comparison arbitrary here?
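To make that concrete under the same assumptions as the sketches above (a two-sided two-proportion z-test, alpha = 0.05, 80% power), here is the approximate sample size per group when the relative risk is held at 1.3; the baselines 0.5 and 0.1 are the hypothetical values from the post.

```python
# Approximate n per group (alpha = 0.05, power = 0.80) when the relative risk is fixed at 1.3.
# Same normal-approximation formula as the earlier sketch; the baselines 0.5 and 0.1
# are hypothetical values used only for illustration.
from math import sqrt, ceil
from scipy.stats import norm

def n_per_group(p1, p2, alpha=0.05, power=0.80):
    z_a, z_b = norm.ppf(1 - alpha / 2), norm.ppf(power)
    p_bar = (p1 + p2) / 2
    numerator = (z_a * sqrt(2 * p_bar * (1 - p_bar))
                 + z_b * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return ceil(numerator / (p1 - p2) ** 2)

rr = 1.3
print(n_per_group(0.5, rr * 0.5))   # 0.50 vs 0.65: roughly 170 per group
print(n_per_group(0.1, rr * 0.1))   # 0.10 vs 0.13: roughly ten times as many
# With the relative risk held fixed, the comparison near the extreme is the hard one,
# which is the opposite of what happens when the absolute difference is held fixed.
```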