Comparing test results between subgroups with those between main groups


We have two groups of subjects, A and B, and N measures.
If we test all N measures for differences between A and B, we obtain N p-values (let's call this set P0).

Then, we propose a way to split one of the two groups, say A, into subgroups A1 and A2 (whose intersection is empty and whose union is A).
Now we have N p-values for the test A1 vs B, and N p-values for the test A2 vs B (let's call these sets P1 and P2, respectively).

What is the correct way to compare P0, P1, and P2?
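For concreteness, the setup above could be sketched like this (the data, group sizes, and the choice of Welch's t-test per measure are all hypothetical; any per-measure two-sample test would give the same structure):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
N = 5  # number of measures (hypothetical)

# Hypothetical data: one row per subject, one column per measure
A = rng.normal(0.0, 1.0, size=(40, N))
B = rng.normal(0.3, 1.0, size=(40, N))

# Disjoint subgroups of A whose union is A
A1, A2 = A[:20], A[20:]

def pvals(X, Y):
    # Welch's t-test on each of the N measures -> set of N p-values
    return np.array([stats.ttest_ind(X[:, j], Y[:, j], equal_var=False).pvalue
                     for j in range(X.shape[1])])

P0 = pvals(A, B)   # A  vs B
P1 = pvals(A1, B)  # A1 vs B
P2 = pvals(A2, B)  # A2 vs B
```

So the question is how to meaningfully compare the three length-N sets P0, P1, and P2.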



Less is more. Stay pure. Stay poor.
Agreed, I was just getting a feeling for what you had done so far.

I am not sure of the project's context, but the use of p-values is considered a faux pas these days, and that may be relevant in your setting. With two-sided tests, you could have a test statistic that completely flips direction yet end up with a comparable p-value (e.g., 3.66 versus -3.66). Also, you are dealing with different sample sizes, which affects your precision. Why can't you just plot the effect estimates on a graph and articulate the differences?
Thanks a lot, this is actually a better description! Would you use histograms or something else?
Also what about a hierarchical Bayesian model?


I was thinking of something closer to a forest plot.

I am not extremely versed in multilevel models, but people seem to throw out numbers like needing at least 40-60 clusters, whereas you seem to have two clusters for the As and one for B. I can't quite see how it would fit your data, but I may be missing something.
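A minimal sketch of the forest-plot idea, assuming the comparison of interest is a simple per-measure mean difference with a normal-approximation 95% interval (the data and group sizes are made up). It also illustrates the precision point: the subgroup intervals come out wider than the full-group ones, because A1 and A2 each have half the sample size of A:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 5  # number of measures (hypothetical)

# Hypothetical data: one row per subject, one column per measure
A = rng.normal(0.0, 1.0, size=(40, N))
B = rng.normal(0.3, 1.0, size=(40, N))
A1, A2 = A[:20], A[20:]  # disjoint subgroups of A

def effect_ci(X, Y, z=1.96):
    """Per-measure mean difference X - Y with an approximate 95% CI."""
    diff = X.mean(axis=0) - Y.mean(axis=0)
    se = np.sqrt(X.var(axis=0, ddof=1) / len(X) + Y.var(axis=0, ddof=1) / len(Y))
    return diff, diff - z * se, diff + z * se

comparisons = {"A vs B": A, "A1 vs B": A1, "A2 vs B": A2}
results = {label: effect_ci(X, B) for label, X in comparisons.items()}

# Draw the forest plot with matplotlib's errorbar
import matplotlib
matplotlib.use("Agg")  # non-interactive backend
import matplotlib.pyplot as plt

fig, ax = plt.subplots()
for k, (label, (d, lo, hi)) in enumerate(results.items()):
    y = np.arange(N) + (k - 1) * 0.2  # offset rows so markers don't overlap
    ax.errorbar(d, y, xerr=[d - lo, hi - d], fmt="o", label=label)
ax.axvline(0.0, color="grey", linewidth=1)
ax.set_yticks(range(N))
ax.set_yticklabels([f"measure {j}" for j in range(N)])
ax.set_xlabel("mean difference (95% CI)")
ax.legend()
fig.savefig("forest.png")
```

Reading it is then direct: for each measure you see whether the A1 vs B and A2 vs B estimates agree in sign and magnitude with the overall A vs B estimate, rather than comparing three sets of p-values.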

@GretaGarbo - you seem to know about submodels and comparisons - do you have any suggestions for danae8?