Unequal sample size among sub-groups: Effects on significance levels?

#1
Hello,

I just came across a finding that I find somewhat strange: I ran a fairly simple multiple regression analysis on a dataset that consists of three groups (n=90, n=120, and n=650, respectively). When I run the analysis over the whole dataset in SPSS, I get a significant result for a certain coefficient. But if I run the analysis for each group separately ("compare groups"), none of them is significant (the n=90 and the n=650 groups are even very far from significant). How can this be? I do control for group membership.

Thank you!
 

Karabiner

TS Contributor
#2
When I just run the analysis over the whole dataset with SPSS, I get a significant result for a certain coefficient.
This is not very informative. What were b and p values?
But if I run the analysis for each group separately ("compare groups"), none of them is significant (the n=90 and the n=650 group are even very far away from being significant).
Same as before.
How can this actually be?
Maybe the dispersion of the variables is considerably limited if only subgroups are included?

With kind regards

K.
 
#3
Thanks for your answer!

This is not very informative. What were b and p values?
Here one example:

In the full sample, the predictor in question has a B=-.007, SE= .002, Beta=-.102, p=.006.
For the sample n=90 the respective figures are: B=-.014, SE=.007, Beta=-.257, p=.059.
For the sample n=613 the respective figures are: B=-.004, SE=.003, Beta=-.057, p=.204.
(The third sample is actually not included in this model - but let's keep it simple.)

Maybe the dispersion of the variables is considerably limited if only subgroups are included?
For the predictor of sample n=90: Range=40, Min=19, Max=59, Mean=42, SE=1.08, SD=10.3 (the predictor is age, by the way).
For the dependent of sample n=90: Range=3.13, Min=1.63, Max=4.75, Mean=3.67, SE=.06, SD=.55 (this is a Likert-scale score, 1-5).

For the predictor of sample n=613: Range=48, Min=21, Max=69, Mean=40, SE=.465, SD=11.5
For the dependent of sample n=613: Range=3.88, Min=1.13, Max=5.00, Mean=3.17, SE=.031, SD=.77

Do you see an indication of that here?

Thank you!
 

Lazar

Phineas Packard
#4
The formula for the standard error includes n. Thus it is not really surprising that you get a significant result in your full sample but not in your subsamples.
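To illustrate Lazar's point (a quick sketch in Python rather than SPSS, with invented numbers): the standard error of a regression slope shrinks roughly as 1/sqrt(n), so the same underlying effect can clear the significance threshold in a pooled sample of ~700 but not in a subsample of 90.

```python
import numpy as np

rng = np.random.default_rng(0)

def slope_se(n, true_b=-0.007, noise_sd=0.6, n_sims=2000):
    """Monte-Carlo estimate of the slope's standard error at sample size n.
    true_b and noise_sd are invented, loosely inspired by the posted output."""
    slopes = []
    for _ in range(n_sims):
        x = rng.uniform(19, 69, size=n)            # an age-like predictor
        y = true_b * x + rng.normal(0, noise_sd, size=n)
        b, _ = np.polyfit(x, y, 1)                 # fitted slope
        slopes.append(b)
    return np.std(slopes)

se_small = slope_se(90)
se_large = slope_se(700)
# The SE ratio comes out close to sqrt(700/90), about 2.8: the same
# true slope is measured far more precisely in the large sample.
print(se_small, se_large)
```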
 
#5
Thanks for your comment!

The formula for standard errors includes n. Thus it is not really surprising that on your full sample you get a significant result but in your subset samples you do not.
While I was (somewhat) aware of this, I still find it bewildering, given that the subsample which makes up almost 90% of the total sample is far from significant (p=.204; see the example above), and the other subsample is not significant either. Reviewers will certainly slap me if I say "while the total sample was significant, the results for the subsamples were not". I don't really know how to interpret this meaningfully (and I need to dig deeper, as the overall result was contrary to the hypothesis).
 

Lazar

Phineas Packard
#6
There are better ways to run these models. Consider including interaction terms or, if you have access to SEM software, running multigroup models.
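As a sketch of the interaction-term idea (in Python/statsmodels rather than SPSS, on simulated data; the variable names `age`, `score`, and `group` are invented here): fitting one model with an age x group interaction tests directly whether the slope of age differs between groups, instead of eyeballing separate per-group regressions.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n1, n2 = 90, 613
df = pd.DataFrame({
    "age": np.concatenate([rng.uniform(19, 59, n1), rng.uniform(21, 69, n2)]),
    "group": np.repeat(["A", "B"], [n1, n2]),
})
# Simulate group-specific slopes (-.014 vs -.004, as in the posted output)
slope = np.where(df["group"] == "A", -0.014, -0.004)
df["score"] = 3.8 + slope * df["age"] + rng.normal(0, 0.6, n1 + n2)

# One model for both groups: the age:group interaction coefficient is
# the estimated difference in slopes, with its own standard error and test.
model = smf.ols("score ~ age * C(group)", data=df).fit()
print(model.summary())
```

The interaction coefficient answers the question the separate regressions cannot: not "is the slope significant in each group?" but "do the slopes actually differ?".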
 
#7
Reviewers will certainly slap me if I say "while the total sample was significant, the results for the subsamples were not". I don't really know how to interpret this meaningfully (and I need to dig deeper, as the overall result was contrary to the hypothesis).
I think they won't, necessarily. This kind of result appears sometimes: the cumulative effect across a number of groups reaches the significance level, but with the lower power of the subsamples, none of them reaches it on its own.

szidel said:
In the full sample, the predictor in question has a B=-.007, SE= .002, Beta=-.102, p=.006.
For the sample n=90 the respective figures are: B=-.014, SE=.007, Beta=-.257, p=.059.
For the sample n=613 the respective figures are: B=-.004, SE=.003, Beta=-.057, p=.204.
It is obvious that your n=90 sample is actually very close to significant. Given its small size (n=90), there is some clear effect in that group. The p value of your n=613 group is not too high either, and the predictor's beta is negative in both groups. So it is plausible that when these two groups are joined, their aggregated effect becomes significant. As shown, the slope of the predictor in the full sample (-0.007) lies between the slopes seen in the subsamples (-0.004 and -0.014). The power of the large, full sample suffices to give a significant result, while the power of the smaller subsamples does not.
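The posted B and SE values make this power story concrete. A small Python sketch recovering t = B/SE and an approximate two-sided p (df is taken as n - 2 here, and B and SE are rounded in the post, so these will not match the SPSS output exactly; n for the full model is taken as 90 + 613 = 703):

```python
from scipy import stats

# B, SE, and n as posted above (rounded values)
cases = {"full": (-0.007, 0.002, 703),
         "n=90": (-0.014, 0.007, 90),
         "n=613": (-0.004, 0.003, 613)}

results = {}
for label, (b, se, n) in cases.items():
    t = b / se                               # recovered t statistic
    p = 2 * stats.t.sf(abs(t), df=n - 2)     # approximate two-sided p
    results[label] = (t, p)
    print(f"{label}: t = {t:.2f}, p = {p:.4f}")
```

Even with the rounding, the pattern holds: the full sample is clearly significant, n=90 sits right at the boundary, and n=613 is not significant despite a same-signed slope.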

(The third sample is actually not included in this model - but let keep it simple.)
Do you mean "included in this model" or "included in this explanation"? Maybe the third group also has a negative beta, which would be consistent with the significant result you obtained from the final model.

While I was (somewhat) aware of this, I find it still bewildering, given that the subsample - which makes up almost 90% of the total sample is highly insignificant (p=.204) (see the example above). And the other sample is not significant either.
We don't know much about the subsamples, the predictor, the outcome, or what is going on in each group. It is possible that the two groups have something in common such that, when you mix them, the new larger group shows not just the sum of their effects as expected, but an effect stronger (or in places weaker) than expected. Are your groups positively correlated in some regard? Please explain your model in more detail. It would also help to examine the distributions of your subsamples and your full sample: it is possible that the two groups complement each other in a way that makes the SE of the new, larger sample smaller than you would expect from the increase in size alone.

And the other sample is not significant either.
However, it is clearly indicative of some effect there, with a p value of .059.