p value of .051

Dason

Ambassador to the humans
#2
If you're using an alpha of .05, then a p-value of .051 wouldn't allow you to reject the null.
 

bruin

New Member
#3
But you could just report the p-value that you got.

(If I was reading a journal article and all they reported was "fail to reject H0," and I later found out that they actually got p=.051, I would be kind of annoyed at their level of NHST fundamentalism.)
 

hlsmith

Omega Contributor
#4
I agree with Dason and bruin. Reporting the p-value and the effect size (or its components) will also let the reader judge whether your test may have been underpowered.
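A minimal sketch of this reporting style, using simulated data (the groups, sample sizes, and means here are all made up for illustration): report the test statistic, the exact p-value, and a standardized effect size such as Cohen's d, rather than only the reject/fail-to-reject decision.

```python
import numpy as np
from scipy import stats

# Hypothetical data for illustration -- two groups of 30 observations
rng = np.random.default_rng(0)
group_a = rng.normal(loc=0.0, scale=1.0, size=30)
group_b = rng.normal(loc=0.5, scale=1.0, size=30)

# Two-sample t-test: report t and the exact p-value
t_stat, p_value = stats.ttest_ind(group_a, group_b)

# Cohen's d from the pooled standard deviation (equal group sizes)
pooled_sd = np.sqrt((group_a.var(ddof=1) + group_b.var(ddof=1)) / 2)
cohens_d = (group_b.mean() - group_a.mean()) / pooled_sd

print(f"t = {t_stat:.3f}, p = {p_value:.4f}, d = {cohens_d:.2f}")
```

With the effect size in hand, a reader can run their own power calculation and see whether a nonsignificant result plausibly reflects a small sample rather than a negligible effect.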
 
#5
Depending on your field of research, you might consider reporting the p-value you got, while noting that it is not significant but close to significance. This is sometimes called marginal significance. You can usually do this with p-values up to about 0.075 or 0.1, again depending on your field of research.

Besides, significance isn't everything. Another important factor is effect size. Sometimes a p-value of 0.051 is much more exciting than one of 0.001, if the first has a big effect size and the second a small one.
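The contrast above can be made concrete with made-up summary statistics (none of these numbers come from a real study): a small study with a large standardized effect can land just above p = .05, while a huge study with a trivial effect produces a p-value far below .001.

```python
from scipy import stats

# Study 1 (hypothetical): large effect (d = 0.8), only 13 subjects per group
big_effect = stats.ttest_ind_from_stats(mean1=0.8, std1=1.0, nobs1=13,
                                        mean2=0.0, std2=1.0, nobs2=13)

# Study 2 (hypothetical): tiny effect (d = 0.05), 10,000 subjects per group
tiny_effect = stats.ttest_ind_from_stats(mean1=0.05, std1=1.0, nobs1=10_000,
                                         mean2=0.00, std2=1.0, nobs2=10_000)

print(f"large effect, small n: p = {big_effect.pvalue:.4f}")
print(f"tiny effect, huge n:   p = {tiny_effect.pvalue:.6f}")
```

The second study is "more significant" by p-value alone, yet the first describes a far more substantial difference. That is why the p-value and the effect size need to be read together.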
 
#6
Depending on your field of research, you might consider reporting the p-value you got, while noting that it is not significant but close to significance. This is sometimes called marginal significance. You can usually do this with p-values up to about 0.075 or 0.1, again depending on your field of research.
I find this practice similar to people claiming "quasi-randomized" when describing group allocation. The process is either random or it is not. Similarly, a given p-value shouldn't be called "marginally significant" when it exceeds the alpha cutoff: it is simply nonsignificant at the chosen alpha level, per the a priori decision criterion. That is, the test is either significant at a given alpha level or it is not. However, reporting the p-value alongside your decision lets the reader see exactly what that decision means and mitigates the dichotomization issue people worry about.

Besides, significance isn't everything. Another important factor is effect size. Sometimes a p-value of 0.051 is much more exciting than one of 0.001, if the first has a big effect size and the second a small one.
This is definitely a good point: p-values should be reported together with an effect size, ideally alongside a confidence interval or the values needed to calculate one.
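A minimal sketch of that last recommendation, again on simulated data (the groups and sample sizes are assumptions for illustration): report the mean difference with a pooled-variance 95% confidence interval next to the p-value, so readers can see both the size of the effect and how precisely it was estimated.

```python
import numpy as np
from scipy import stats

# Hypothetical data for illustration
rng = np.random.default_rng(1)
group_a = rng.normal(loc=0.0, scale=1.0, size=30)
group_b = rng.normal(loc=0.5, scale=1.0, size=30)

n_a, n_b = len(group_a), len(group_b)
diff = group_b.mean() - group_a.mean()

# Pooled variance and standard error of the mean difference
pooled_var = ((n_a - 1) * group_a.var(ddof=1) +
              (n_b - 1) * group_b.var(ddof=1)) / (n_a + n_b - 2)
se = np.sqrt(pooled_var * (1 / n_a + 1 / n_b))

# 95% CI from the t distribution with n_a + n_b - 2 degrees of freedom
df = n_a + n_b - 2
t_crit = stats.t.ppf(0.975, df)
ci_low, ci_high = diff - t_crit * se, diff + t_crit * se

t_stat, p_value = stats.ttest_ind(group_a, group_b)
print(f"difference = {diff:.2f}, "
      f"95% CI [{ci_low:.2f}, {ci_high:.2f}], p = {p_value:.3f}")
```

A CI that barely crosses zero tells a very different story from one spanning a wide range of implausible values, even if both correspond to the same "nonsignificant" verdict.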