Alpha 0.1 and power 75%: is it acceptable?

#1
I have a retrospective cohort analysis where a sample size calculator says that, for the sample I have collected, I would get a result with an alpha of 0.1 and power of 75%. Collecting more samples to improve power and significance is impossible now. Is that OK?
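For concreteness, here is a rough sketch (with entirely made-up numbers) of the kind of calculation the post describes: the achieved power of a two-group comparison of incidence proportions at a given alpha, using statsmodels. `p_exposed`, `p_unexposed` and `n_per_group` are placeholders, not the OP's actual values.

```python
# Hypothetical power check for comparing incidence in exposed vs. unexposed groups.
from statsmodels.stats.proportion import proportion_effectsize
from statsmodels.stats.power import NormalIndPower

p_exposed, p_unexposed = 0.30, 0.20   # hypothetical incidences
n_per_group = 120                     # hypothetical number of subjects per group

# Cohen's h effect size for two proportions
effect_size = proportion_effectsize(p_exposed, p_unexposed)

# Solve for power given the sample size actually collected and alpha = 0.10
power = NormalIndPower().solve_power(effect_size=effect_size,
                                     nobs1=n_per_group,
                                     alpha=0.10,
                                     ratio=1.0,
                                     alternative='two-sided')
print(f"Achieved power at alpha = 0.10: {power:.2f}")
```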

....
 
#2
The acceptable values of the significance level (alpha) and power are dictated by the research standards in your field. You get a sense of that by seeing what kind of research is publishable... Typically, an alpha of 10% is too high: you are allowing yourself to be wrong 10% of the time when the null hypothesis is correct. That is not really synonymous with pushing the frontiers of science and increasing human knowledge, is it?
 
#3
Isn't post hoc analysis of sample size in a retrospective cohort study the wrong thing to do? That is, after completing your study, putting your results into the sample size calculator for incidence in the exposed and unexposed groups to see whether the sample size is adequate? Shouldn't it be done a priori only?
 
#5
Quite the opposite. By "a priori" you mean "before seeing the data used in the main analysis". However, think about this. Determination of the sufficient sample size must be based on some data (not just opinion of wise men). Once those pilot data have been collected and the sample size has been calculated, ignoring other information in the data would be wrong (statistically inefficient). That is what the statistical theory teaches us. So for any analytical purposes, the pilot data must be combined with the data collected subsequently. To avoid introduction of any biases, it is important to keep track of which data have been collected at the pilot stage and which data have been collected afterwards.
 

hlsmith

Omega Contributor
#8
jm,

You could provide more information. Given your posts, I imagine you conducted your study and failed to reject the null, then ran a sample size calculation which appeared to give a weak alpha/beta. Those values can be traded off, though: decrease one and you increase the other.
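A quick sketch of that trade-off, with hypothetical numbers (a two-sided z-test for a standardised mean difference; the effect and group size are placeholders): for a fixed sample size and effect, lowering alpha lowers power, and raising alpha raises it.

```python
# Illustrating the alpha/power trade-off for a fixed n and effect size.
from scipy.stats import norm

effect, sd, n = 0.4, 1.0, 50          # hypothetical standardised effect and per-group n
se = sd * (2 / n) ** 0.5              # SE of the difference between two group means

for alpha in (0.01, 0.05, 0.10):
    z_crit = norm.ppf(1 - alpha / 2)
    power = norm.cdf(effect / se - z_crit)   # ignoring the negligible other-tail term
    print(f"alpha = {alpha:.2f}  ->  power = {power:.2f}")
```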
 

CowboyBear

Super Moderator
#9
You could provide more information. Given your posts, I imagine you conducted your study and failed to reject the null, then ran a sample size calculation which appeared to give a weak alpha/beta. Those values can be traded off, though: decrease one and you increase the other.
Is this the case, OP? I hope you haven't decided to switch to a higher alpha level after observing a non-significant result?
 

noetsi

Fortran must die
#10
Actually, I think wise men (they were all men, I am sure, given when this was done) did make the decision on what is and is not an acceptable alpha level. There is no mathematical reason that an alpha of .1 is too high and .05 is correct; researchers just decided that .05 was reasonable.

I think there is a move away from this whole process in general.
 

CowboyBear

Super Moderator
#11
There is a move away from statistical significance testing, and setting alpha at 0.05 is indeed somewhat arbitrary. Some of us think it can be fine to commit to an alpha level other than 0.05, depending on the circumstances of a particular study...

But you still have to set the alpha level in advance of data collection and analysis (such that you're committed to an error rate). If you specify a hypothesis, settle on an alpha of 0.05, collect data, find p = 0.06, and then decide after the fact that the most sensible alpha level would actually be 0.10... then you're p-hacking. It's this kind of behaviour that allows researchers to find "support" for their hypotheses no matter what the data say.
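A minimal simulation (with made-up settings: a one-sample t-test on null data) of why that matters: if you relax alpha to 0.10 only after the result misses 0.05, your realised false-positive rate under the null is the full 10%, not the 5% you claimed.

```python
# Simulating the false-positive rate of "switch alpha to 0.10 after seeing p just above 0.05".
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n_sims, n_obs = 20_000, 30

strict = 0
opportunistic = 0
for _ in range(n_sims):
    x = rng.normal(loc=0.0, scale=1.0, size=n_obs)   # null is true: mean = 0
    p = stats.ttest_1samp(x, popmean=0.0).pvalue
    strict += p < 0.05          # alpha fixed at 0.05 in advance
    opportunistic += p < 0.10   # what the "relax alpha after the fact" rule amounts to

print(f"Type I error, alpha fixed at 0.05:    {strict / n_sims:.3f}")
print(f"Type I error, alpha relaxed post hoc: {opportunistic / n_sims:.3f}")
```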
 

noetsi

Fortran must die
#12
Well then you should just run the analysis on the whole population, not a sample, as I do.

I long accepted p values as "sacred" because I was taught that way, but I don't anymore. Arguably, you should just report the p values (and effect sizes) and let others decide whether they matter.
 

CowboyBear

Super Moderator
#13
I don't see how the population-sample distinction is relevant here?

No one is saying p values are sacred. You don't have to think they're sacred to be against p hacking.

Yes, you can just report effect sizes and p values and leave it to others to decide what they mean. But then the OP wouldn't be able to describe the findings as "statistically significant", or supportive of an effect, or anything like that. Which is fine, if that's your kind of thing, but it makes it a bit hard to write a discussion section in that case.