Significance test to compare two effect sizes (Cohen's d)

#1
I have two Cohen's d effect sizes from the same sample. How can I test whether the difference between the magnitudes of these two effect sizes is statistically significant?

As the confidence intervals of the two estimated effect sizes overlap, can I assume that the difference between them is not statistically significant (see output)?

Code:
Cohen's d

d estimate: 1.320214 (large)
95 percent confidence interval:
   lower    upper
1.055358 1.585070

Cohen's d

d estimate: 1.744894 (large)
95 percent confidence interval:
   lower    upper
1.416700 2.073088
If not, how could I test whether the difference between the two Cohen's d values is significant?
 

katxt

#2
can I assume that the difference between them is not statistically significant
Not necessarily. Two CIs can overlap and yet the difference between the estimates can still be statistically significant.
If we can assume that the two estimates are independent of each other, we can do this.
First find the difference: Diff = 1.745 - 1.320 = ...
Next find the 95% margin of error (MoE) for each estimate: MoE1 = 1.320 - 1.055 = ... and MoE2 = 1.745 - 1.416 = ...
Now find the MoE of the difference: MoEDiff = sqrt(MoE1^2 + MoE2^2)
Finally make a 95% CI for the difference: Diff +/- MoEDiff
If this CI includes 0, there is no significant difference.
If this CI does not include 0, there is a significant difference.
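
For what it's worth, here is a minimal R sketch of the recipe above, plugging in the numbers from post #1 (the variable names are mine, and the whole calculation rests on the independence assumption just stated):

Code:
# Sketch only: assumes the two d estimates are independent.
d1 <- 1.320214; d2 <- 1.744894
moe1 <- d1 - 1.055358              # half-width of the first 95% CI
moe2 <- d2 - 1.416700              # half-width of the second 95% CI
diff <- d2 - d1                    # about 0.425
moe_diff <- sqrt(moe1^2 + moe2^2)  # about 0.422
c(lower = diff - moe_diff, upper = diff + moe_diff)
# roughly (0.003, 0.846) -- only just excludes 0

On these numbers the interval barely excludes 0, so the verdict is sensitive to the independence assumption.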
 
#3
Thanks a lot for the answer!
Both my Cohen's d estimates come from the same sample. Is there a similar approach for dependent samples?


 

hlsmith

#4
Can you describe your data in more detail? In particular what they represent and why you are using cohen's d.

A note: the "overlapping CIs but still significantly different" phenomenon arises because the standard errors get combined (in quadrature) when calculating an SE for a difference.
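
To put rough numbers on that (made-up figures, not from this thread): if two independent estimates each have a 95% margin of error of 1, their CIs overlap whenever the estimates differ by less than 1 + 1 = 2, yet the difference is significant whenever they differ by more than sqrt(1^2 + 1^2) ≈ 1.41. Estimates about 1.7 apart would therefore have overlapping CIs and a significant difference.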
 
#5
Sure! Subjects took two tests (a & b); then there was a manipulation, after which the same subjects took both tests again. Since the two tests use different metrics, I want to compare the increase in each test score pre/post-manipulation. So first I calculate Cohen's d for each test, and then I'd like to see if the difference between these two effects is significant.

 

hlsmith

#6
Two different tests pre and the same two tests post, correct? And you can match each person's pre and post tests? Lastly, everyone got the 'manipulation'?
 
#7
Two different tests; all subjects take them before and after the manipulation. Yes, I can match each person to their pre-/post-scores. Everyone got the manipulation.
 

katxt

#8
Is there a similar approach for dependent samples?
This approach is not much more than a useful rule of thumb. It could probably be extended with a correlation and some degrees of freedom stirred in. However, a resampling (bootstrap) test would take care of all that.
Do what you have done and find the difference in the two d's = DDiff.
Resample the subjects with replacement and recompute DDiff for that resample.
Do this several thousand times, recording all the DDiffs. This is the bootstrap sampling distribution of DDiff.
Find the 95% CI from the DDiffs and compare it with 0. (Or find an actual p value to be more formal.)
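
A minimal R sketch of that recipe, assuming a data frame dat with one row per subject and columns pre_a, post_a, pre_b, post_b (all hypothetical names), and using the paired version of Cohen's d (mean change over SD of change; substitute whichever definition your original d estimates used):

Code:
# Sketch under the assumptions above; dat and its columns are hypothetical.
cohens_d <- function(pre, post) mean(post - pre) / sd(post - pre)  # paired d

d_diff <- function(df) {
  cohens_d(df$pre_b, df$post_b) - cohens_d(df$pre_a, df$post_a)
}

set.seed(1)
boots <- replicate(5000, {
  idx <- sample(nrow(dat), replace = TRUE)  # resample whole subjects,
  d_diff(dat[idx, ])                        # keeping each person's scores together
})
quantile(boots, c(0.025, 0.975))  # bootstrap 95% CI for DDiff; check whether it covers 0

Resampling whole subjects preserves the dependence between the two tests, which is exactly what the rule of thumb above cannot handle.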