Multiple t-tests vs ANOVA with Dunnett's post-hoc test

#1
Hello,

I am a bit confused about the use of multiple t-tests vs ANOVA with Dunnett's post-hoc test (on normally distributed data).

Let's take the following made-up example:

A scientist tests the effect of substance A vs no treatment on n biological replicates. He then performs a t-test to find out whether substance A has a significant effect (with p < 0.05 considered significant).
Independently, another scientist runs the same experiment and statistical analysis for substance B, and yet another scientist for substance C.
They all find that their substance has a statistically significant effect.

Now let's assume there hadn't been three scientists but only one, testing all three substances in the same way. He is not interested in differences between the substances, only in whether each has a significant effect vs the untreated control. Therefore, he uses ANOVA and Dunnett's post-hoc test to analyze his data. Because of the correction for multiple comparisons, he does not get a statistically significant effect for substances A, B, or C.

How can this be? This does not appear logical to me.
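
To make the question concrete, here is a minimal sketch of the two analyses with invented numbers (group size, effect sizes and the seed are my own assumptions, not from the post; for simplicity the same control group is reused, whereas in the story each scientist would have had his own). Dunnett's test needs SciPy 1.11 or newer (scipy.stats.dunnett). Whether the adjusted p-values end up above 0.05 depends on the effect size and seed, but the Dunnett-adjusted p-values will generally be larger than the unadjusted ones, because one family of three comparisons is being charged for.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n = 8                                     # biological replicates per group (assumed)
control = rng.normal(0.0, 1.0, n)         # untreated control
# assume each substance shifts the mean by a modest, borderline amount
substances = {name: rng.normal(1.0, 1.0, n) for name in "ABC"}

# Three independent scientists: one unadjusted t-test per substance
for name, data in substances.items():
    p = stats.ttest_ind(data, control).pvalue
    print(f"substance {name}: unadjusted t-test p = {p:.3f}")

# One scientist: overall ANOVA, then Dunnett's comparisons vs. control
anova_p = stats.f_oneway(control, *substances.values()).pvalue
dunnett_p = stats.dunnett(*substances.values(), control=control).pvalue
print(f"ANOVA p = {anova_p:.3f}")
for name, p in zip(substances, dunnett_p):
    print(f"substance {name}: Dunnett-adjusted p = {p:.3f}")
```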
 
#2
Another example I can think of would be this:

Substance X was tested for an effect at 5 different concentrations or doses (vs no treatment). Alternatively, only the highest concentration of substance X was tested. Again, you may get no significant effect for any of the 5 concentrations when using Dunnett's post-hoc test, but a significant effect for the single highest concentration when using a t-test. OK, here there are no multiple t-tests, but it is still confusing to me that the effect of the highest concentration of substance X may be significant in one case and not in the other.
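
A sketch of this second example, again with invented dose levels, effect sizes and seed (all assumptions, not from the post; SciPy >= 1.11 needed for stats.dunnett). The same data for the highest dose get one adjusted p-value in the five-dose design and one unadjusted p-value in the single-dose design:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
n = 8                                          # replicates per group (assumed)
control = rng.normal(0.0, 1.0, n)
shifts = [0.0, 0.1, 0.2, 0.4, 0.9]             # assumed true effects, low to high dose
groups = [rng.normal(s, 1.0, n) for s in shifts]

# Design 1: all five concentrations tested vs. control, Dunnett-adjusted
p_all = stats.dunnett(*groups, control=control).pvalue
print("Dunnett-adjusted p, highest dose:", round(p_all[-1], 3))

# Design 2: only the highest concentration tested, one unadjusted t-test
p_single = stats.ttest_ind(groups[-1], control).pvalue
print("single t-test p, highest dose:   ", round(p_single, 3))
```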
 

rogojel

TS Contributor
#3
hi,
referring to the first case: let us assume that there is no effect for any of these substances. If 3 scientists independently test them, there is a 1 - 0.95^3 ≈ 0.14 chance that at least one of them will falsely claim an effect. If they are all tested with an ANOVA, the chance of a false positive will be only 0.05. As, AFAIK, a post-hoc test only makes sense if the overall ANOVA is significant, we would still be at a false positive chance of 0.05 after the post-hoc test.
So this only tells us that ANOVA handles multiple tests better than pairwise t-tests, which is not that surprising.
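
For reference, a quick numerical check of that 0.14 figure, plus a simulation of three independent scientists each running one t-test on pure-noise data (the group size and number of runs below are arbitrary choices, not from the post):

```python
import numpy as np
from scipy import stats

print("familywise error rate:", 1 - 0.95**3)        # ~0.143

rng = np.random.default_rng(0)
n, trials, at_least_one = 8, 2000, 0
for _ in range(trials):
    # three independent experiments, each with its own control, no true effect
    pvals = [stats.ttest_ind(rng.normal(size=n), rng.normal(size=n)).pvalue
             for _ in range(3)]
    at_least_one += any(p < 0.05 for p in pvals)
print("simulated chance of >=1 false positive:", at_least_one / trials)
```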