The ASA's statement on p-values is FINALLY HERE

spunky

Doesn't actually exist
#1
even the magic of Disney World can't prevent me from nerding out a little bit, especially when something THIS BIG just came out.

as some people may remember, last year the journal Basic and Applied Social Psychology banned the use of p-values (unfortunately the actual editorial is no longer available for free :( )

this prompted a big kerfuffle among many applied researchers who use statistics. given the dire situation in areas like psychology and political science with the whole "replicability crisis", the American Statistical Association (ASA) took it upon itself to issue an official statement on where things stand with p-values and inference.

so time went by... days became weeks, weeks became months, and now, finally, fresh out of the oven, we can all read the accepted manuscript of the ASA's official statement on the uses and misuses of p-values in applied scientific practice:

http://www.tandfonline.com/doi/abs/10.1080/00031305.2016.1154108

i just downloaded it on my phone so i can read it while i wait in those endless Disney lines. i can't wait to go through the whole thing, and i'm sure a few of you are also gonna be very interested!

the paradigm is changing, people!
 

Miner

TS Contributor
#2
It is interesting that the big "kerfuffle" appears to be in the social sciences. There is no similar replicability crisis in industrial statistics of which I am aware even though NHST is a mainstay approach (i.e., Bayes who?). I suspect that may be due to the fact that in industrial statistics, we are only interested in large effect sizes and are forced to run confirmation experiments (i.e., replications) before committing large monies toward making changes to processes. Of course, our job security comes from getting it right, not in getting published.

Added: That is not a "we are superior" attitude coming through. The fact is that if we screw up a study, it is our job, not just our reputation, that is at risk.
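The protective effect of large effects plus mandatory confirmation runs is easy to see in a quick simulation. This is my own toy sketch in plain Python (not anything from the ASA statement), using a one-sample z-test as a stand-in for whatever test a real study would run: with a big effect, both the original study and its replication come out significant nearly every time; with a small effect, the replication filters most findings out.

```python
import random, math

random.seed(1)

def p_value_two_sided(xs, mu0=0.0):
    """One-sample z-test p-value (normal approximation, reasonable for n >= 30)."""
    n = len(xs)
    mean = sum(xs) / n
    sd = math.sqrt(sum((x - mean) ** 2 for x in xs) / (n - 1))
    z = (mean - mu0) / (sd / math.sqrt(n))
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

def fraction_confirmed(effect, n=30, trials=2000):
    """Fraction of studies where BOTH the original run and a
    confirmation run (i.e., a replication) reach p < .05."""
    confirmed = 0
    for _ in range(trials):
        first  = [random.gauss(effect, 1) for _ in range(n)]
        second = [random.gauss(effect, 1) for _ in range(n)]  # confirmation experiment
        if p_value_two_sided(first) < 0.05 and p_value_two_sided(second) < 0.05:
            confirmed += 1
    return confirmed / trials

# large, industrial-scale effect vs. small, social-science-scale effect
print(fraction_confirmed(effect=1.0))  # near 1: almost always confirmed
print(fraction_confirmed(effect=0.2))  # small: replication weeds most of these out
```

The effect sizes (1.0 vs. 0.2 standard deviations) and n = 30 are illustrative choices, but the asymmetry they produce is the point Miner is making.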
 

spunky

Doesn't actually exist
#3
Miner said:
It is interesting that the big "kerfuffle" appears to be in the social sciences. There is no similar replicability crisis in industrial statistics of which I am aware even though NHST is a mainstay approach (i.e., Bayes who?). I suspect that may be due to the fact that in industrial statistics, we are only interested in large effect sizes and are forced to run confirmation experiments (i.e., replications) before committing large monies toward making changes to processes. Of course, our job security comes from getting it right, not in getting published.

Added: That is not a we are superior attitude coming through. The fact is that if we screw up a study, it is our job, not our reputation that is at risk.
i agree wholeheartedly with you here, although i stopped short of saying "social sciences" because i know many areas of medicine also have their share of controversy, especially fields like the one i'm working in now, public health. but yeah, 99% of the kerfuffle is definitely coming from the social sciences. from the linked manuscript i LOVED this quote, which speaks to your point:

"The problem is not that people use P-values poorly, it is that the vast majority of data analysis is not performed by people properly trained to perform data analysis"

i have seen some of those "not properly trained" people. it is very scary, particularly when you consider that many of them are tenured professors at reputable universities who, every now and then, teach their own research methodology/statistics courses to unsuspecting students, making sure the next big "crisis" is brewing just around the corner.
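for illustration, here's one classic way "not properly trained" analysis goes wrong: measure several outcomes under a true null, report whichever p-value comes out smallest, and the nominal 5% false-positive rate quietly balloons. this is my own toy standard-library sketch (not from the manuscript), reusing a simple z-test:

```python
import random, math

random.seed(2)

def p_value_two_sided(xs, mu0=0.0):
    """One-sample z-test p-value (normal approximation)."""
    n = len(xs)
    mean = sum(xs) / n
    sd = math.sqrt(sum((x - mean) ** 2 for x in xs) / (n - 1))
    z = (mean - mu0) / (sd / math.sqrt(n))
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

def min_p_over_outcomes(k, n=30):
    """Test k independent outcomes (all true nulls) and keep only the smallest p."""
    return min(p_value_two_sided([random.gauss(0, 1) for _ in range(n)])
               for _ in range(k))

def false_positive_rate(k, trials=2000):
    """How often 'at least one p < .05' happens when every null is true."""
    return sum(min_p_over_outcomes(k) < 0.05 for _ in range(trials)) / trials

print(false_positive_rate(1))   # ~0.05, roughly as advertised
print(false_positive_rate(10))  # far higher: "significance" by sheer persistence
```

the p-value itself is behaving exactly as designed here; it's the selective reporting over 10 outcomes that wrecks the error rate, which is the training problem the quote is about.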
 

hlsmith

Less is more. Stay pure. Stay poor.
#4
So I am guessing they dislike confidence intervals as well. I don't think there can be an immediate switch, in that not every question currently solved with frequentist methods has a Bayesian analog ready, unless they just want credible intervals or something not so radically different.
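The "not so radically different" point is easy to see for a simple binomial proportion, where the two kinds of interval can be computed side by side. This is a standard-library sketch of my own; the Wald interval and the flat Beta(1, 1) prior are illustrative choices, and the credible interval is computed by brute-force numerical integration of the Beta posterior rather than a library quantile function:

```python
import math

def wald_ci(successes, n, z=1.96):
    """Frequentist 95% Wald confidence interval for a proportion."""
    p = successes / n
    se = math.sqrt(p * (1 - p) / n)
    return p - z * se, p + z * se

def beta_credible_interval(successes, n, a=1, b=1, level=0.95, grid=100_000):
    """Equal-tailed Bayesian credible interval under a Beta(a, b) prior,
    found by numerically accumulating the Beta posterior density on a grid."""
    a_post, b_post = a + successes, b + n - successes
    # log of the Beta normalizing constant, via lgamma
    log_norm = math.lgamma(a_post) + math.lgamma(b_post) - math.lgamma(a_post + b_post)
    lo_tail, hi_tail = (1 - level) / 2, (1 + level) / 2
    cdf, lo, hi = 0.0, None, None
    for i in range(1, grid):
        x = i / grid
        cdf += math.exp((a_post - 1) * math.log(x)
                        + (b_post - 1) * math.log(1 - x) - log_norm) / grid
        if lo is None and cdf >= lo_tail:
            lo = x
        if hi is None and cdf >= hi_tail:
            hi = x
            break
    return lo, hi

print(wald_ci(12, 40))                 # ≈ (0.158, 0.442)
print(beta_credible_interval(12, 40))  # numerically similar under the flat prior
```

With a flat prior and moderate n the two intervals nearly coincide, which is exactly why moving from one to the other would not feel radical; the interpretations, of course, differ.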
 

rogojel

TS Contributor
#5
Miner said:
It is interesting that the big "kerfuffle" appears to be in the social sciences. There is no similar replicability crisis in industrial statistics of which I am aware even though NHST is a mainstay approach (i.e., Bayes who?). I suspect that may be due to the fact that in industrial statistics, we are only interested in large effect sizes and are forced to run confirmation experiments (i.e., replications) before committing large monies toward making changes to processes. Of course, our job security comes from getting it right, not in getting published.

Added: That is not a we are superior attitude coming through. The fact is that if we screw up a study, it is our job, not our reputation that is at risk.
The German statistician Gigerenzer interviewed an airline CEO, who said that if airlines had the safety standards of hospitals, a plane would crash every week. When asked why this is so, he said: if the plane crashes, the pilot dies too.