Why is the chance of a Type 1 error equal to alpha?

#1
Hi all:

First post on here, so hope it's not too incoherent.

Most sources I've looked at while trying to learn statistics and probability describe Type 1 errors as cases where we falsely attribute significance, and state that the chance of this happening is the alpha value we choose when setting the significance level of a test (e.g. 5% for a 95% confidence level). It might sound bizarre, but I genuinely struggle with this.

At one level I get it: if we are designing a test in advance, and we know that a random sample from a given population (the one defined by the null hypothesis) has only a 1 in 20 chance of producing an observation of a given value or higher, then it makes sense to say we will wrongly reject the null hypothesis only 5% of the time. After all, we will only see such a result 5% of the time when drawing from the H0 population, and every one of those rejections will be wrong.
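
To check my own understanding, here's a quick simulation (my own sketch; the one-sample t-test and the normal population are just assumptions for illustration) showing that when H0 really is true, the test rejects at almost exactly the alpha rate:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
alpha, n, n_sims = 0.05, 30, 100_000

# Draw every sample from the H0 population (mean 0), then test H0: mean = 0.
samples = rng.normal(loc=0.0, scale=1.0, size=(n_sims, n))
_, p_values = stats.ttest_1samp(samples, popmean=0.0, axis=1)

# Since H0 is true in every simulation, every rejection is wrong,
# so the rejection rate is the Type 1 error rate.
print((p_values < alpha).mean())  # ~0.05
```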

But is there not another way to look at this? Let's say I have now drawn my sample and I have an extreme result: P(observation | H0) < 0.05. How can I use this to conclude that, if I reject the null hypothesis, there is only a 5% chance of me being wrong, i.e. P(H0 | data) < 0.05?

Does it not also depend on other things, such as which other distributions the sample might have been drawn from, and the relative chances of it coming from one population versus another given the data? Something like P(observation | H0) / P(observation)?
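
To make that concrete, here's a toy Bayes calculation (every number invented purely for illustration) showing that P(H0 | data) depends on the prior and on how likely the data are under the alternative, not just on P(observation | H0):

```python
# All numbers below are invented purely for illustration.
p_data_given_h0 = 0.04   # "significant" at the 5% level
p_data_given_h1 = 0.10   # data not much more likely under the alternative
prior_h0 = 0.90          # assumed: H0 is true in 90% of studies like this

# Bayes' rule: P(H0 | data) = P(data | H0) P(H0) / P(data)
p_data = p_data_given_h0 * prior_h0 + p_data_given_h1 * (1 - prior_h0)
print(p_data_given_h0 * prior_h0 / p_data)  # ~0.78, nowhere near 0.05
```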

To put it another way: if I picked a random case from the set of humans wearing pink puffer jackets, the chance of them being over 7ft tall is minuscule. But if I saw a hominid wearing a pink puffer jacket and they were over 7ft, should I reject the hypothesis that they are human?
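
Putting rough, made-up numbers on the jacket example:

```python
# All numbers below are invented for illustration.
p_tall_given_human = 1e-5     # assumed: over 7ft is ~1 in 100,000 among humans
p_tall_given_other = 0.5      # assumed: a non-human hominid is often over 7ft
prior_human = 0.999999        # assumed: jacket-wearers are almost always human

posterior_human = (p_tall_given_human * prior_human) / (
    p_tall_given_human * prior_human + p_tall_given_other * (1 - prior_human)
)
print(posterior_human)  # ~0.95: tiny likelihood, yet still very probably human
```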

Am I way off the mark here, and if so, could someone explain? Both positions seem to make sense, and I'm struggling to reconcile them. Is this where Bayesianism (which I've generally avoided until now) comes in, or am I just making a mistake?

Thanks for any help you can offer.

Billy.

Dason

Ambassador to the humans
#3
How can I use this to conclude that, if I reject the null hypothesis, there is only a 5% chance of me being wrong, i.e. P(H0 | data) < 0.05?
In the standard hypothesis testing framework you can't and don't make that conclusion.
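
A quick simulation (my own invented setup) makes the gap concrete: among the tests that reject, the fraction of wrong rejections depends on how often H0 is true and on the test's power, not just on alpha:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
alpha, n, n_studies = 0.05, 30, 100_000

# Assumed: H0 is true in 80% of studies; when it's false, the true mean is 0.3.
h0_true = rng.random(n_studies) < 0.8
true_means = np.where(h0_true, 0.0, 0.3)
samples = rng.normal(loc=true_means[:, None], scale=1.0, size=(n_studies, n))

_, p_values = stats.ttest_1samp(samples, popmean=0.0, axis=1)
rejected = p_values < alpha

# Among rejections, the fraction that are wrong (H0 was actually true):
print((h0_true & rejected).sum() / rejected.sum())  # well above 0.05 here
```

So alpha controls how often you reject when H0 is true, not how often you're wrong when you reject; the latter also depends on the prior odds and the power.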