P-value Question

#1
Hi!

I have a question involving p-values in a specific context. Suppose you have a one-way ANOVA with k groups. If you obtain a significant p-value (<0.05) for the omnibus test, you usually follow up with multiple comparisons to find exactly which of the k groups differ from each other. Is there a mathematical formulation of this? In general, if you have a significant omnibus, should you have at least one significant comparison? What's the theory/mathematical result behind this?

Of course, I know this isn't always the case in practice (you can get a significant omnibus but no significant multiple comparison).
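To make the question concrete, here is a small constructed (not random) example, sketched in Python with scipy (the tooling choice is my own), where the one-way ANOVA omnibus test is significant at 0.05 but none of the Bonferroni-corrected pairwise t-tests are. As a partial theoretical note: for Scheffé's method there is, if I recall correctly, an exact correspondence (the omnibus F is significant iff at least one contrast reaches Scheffé significance), but that contrast need not be a simple pairwise difference, so pairwise procedures like Bonferroni carry no such guarantee.

```python
# Constructed data: three groups of n = 20, each with within-group values
# at mean ± 1, so every group has exactly the same spread.
# Group means: 0, 0, 0.75 -- two groups agree, one is offset.
from itertools import combinations

from scipy import stats

n = 20
a = [-1.0] * 10 + [1.0] * 10      # mean 0
b = [-1.0] * 10 + [1.0] * 10      # mean 0
c = [-0.25] * 10 + [1.75] * 10    # mean 0.75

# Omnibus one-way ANOVA across all three groups.
f_stat, f_p = stats.f_oneway(a, b, c)
print(f"omnibus: F = {f_stat:.3f}, p = {f_p:.4f}")   # p < 0.05

# All pairwise two-sample t-tests, Bonferroni-corrected (3 comparisons).
groups = {"A": a, "B": b, "C": c}
for (n1, g1), (n2, g2) in combinations(groups.items(), 2):
    t_stat, p = stats.ttest_ind(g1, g2)
    p_adj = min(1.0, 3 * p)
    print(f"{n1} vs {n2}: p = {p:.4f}, Bonferroni p = {p_adj:.4f}")
# Every Bonferroni-corrected pairwise p stays above 0.05, even though
# the omnibus test is significant.
```

So the answer to "should you have at least one significant comparison?" depends entirely on which post-hoc procedure you pair with the omnibus test.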

Just thinking, please post if you have some insights!
 

Karabiner

TS Contributor
#2
Is there a mathematical formulation of this?
I am not sure what you mean by this. There are a number of different
post-hoc tests (for group comparisons), so there are different
formulas. See for example
http://pages.uoregon.edu/stevensj/posthoc.pdf
http://en.wikipedia.org/wiki/Post-hoc_analysis#List_of_post-hoc_tests
In general, if you have a significant omnibus, should you have at least
one significant comparison?
The number of subjects involved in each
two-group comparison is smaller than the number
of subjects in the overall analysis. Maybe that has an
influence.
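One concrete way to see the sample-size point: a plain two-sample t-test between two groups uses only those groups' 2n − 2 degrees of freedom, whereas a comparison that borrows the pooled within-group variance from the full ANOVA (Fisher's-LSD style) gets N − k degrees of freedom and hence slightly more power for the same t statistic. A sketch, using a constructed three-group dataset of my own (Python/scipy assumed):

```python
# Same t statistic evaluated against two reference distributions:
# df = 2n - 2 (plain two-sample t-test) vs df = N - k (Fisher's-LSD-style
# comparison that pools the within-group variance across all k groups).
import math

from scipy import stats

n, k = 20, 3
a = [-1.0] * 10 + [1.0] * 10      # mean 0
c = [-0.25] * 10 + [1.75] * 10    # mean 0.75
b = [-1.0] * 10 + [1.0] * 10      # only contributes to the pooled variance

# All groups are built with identical spread, so the pooled variance is the
# same under both approaches and only the degrees of freedom differ.
msw = sum(sum((x - sum(g) / n) ** 2 for x in g) for g in (a, b, c)) / (k * n - k)
t = (sum(c) / n - sum(a) / n) / math.sqrt(msw * (1 / n + 1 / n))

p_two_sample = 2 * stats.t.sf(abs(t), 2 * n - 2)   # df = 38
p_pooled = 2 * stats.t.sf(abs(t), k * n - k)       # df = 57
print(f"t = {t:.3f}, p(df=38) = {p_two_sample:.4f}, p(df=57) = {p_pooled:.4f}")
# With more degrees of freedom, the same t yields the smaller p-value.
```

So a pairwise comparison run in isolation really is a weaker test than one that reuses the full analysis's error estimate.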

Kind regards

K.
 

Mean Joe

TS Contributor
#3
In general, if you have a significant omnibus, should you have at least one significant comparison? What's the theory/mathematical result behind this?
Hi JM! I don't know if I have any insight, but I hope I can post in here ;)
I'm not sure statistics (as it is commonly practiced) is true mathematics. It leaves one of the fundamental things undefined: what counts as a significant p-value? Definitions are essential in true mathematics.

If the omnibus test is significant, but none of the multiple group comparisons are significant, then it seems we've lost significance. Or did we? Did we ever have significance? We had a p<.05, and everyone says p<.05 is significant, but that has no mathematical basis.

The omnibus test says that not all groups are equal (at some critical p-value). So then we logically conclude (having "proven" that not all groups are equal*) that there must be a difference between some groups, but why do we think we should see p < critical value for them too? What's so magical about .05, or any critical value, that it has to hold for all tests?

*Remember: a significant omnibus does not prove that not all groups are equal. There is a probability that the conclusion is wrong.