Statistical Errors in Research

#1
http://www.smw.ch/docs/pdf200x/2007/03/smw-11587.pdf

What are your opinions on this? How can this even be possible? Aren't there supposed to be expert statisticians reviewing the researchers' work or something? I know in fields such as accounting there are huge amounts of regulations and rules to follow, such as GAAP and IFRS, but from reading this article about statistical practices, it seems like it's a completely unregulated mess.
 

hlsmith

Less is more. Stay pure. Stay poor.
#2
It just says these are threats and some can occur in peer-reviewed publications. I don't find these too surprising. A good list to look at before hitting submit.
 
#3
Aren't there supposed to be expert statisticians reviewing the researchers' work or something? ... it seems like it's a completely unregulated mess.
This is nothing new and something we just have to accept. If there is one thing I have learned in statistics, it is that there is always a better method but never a best one. While there are some big culprits as far as problems are concerned (e.g. pseudoreplication), the biggest issue comes with the peer review process. The main problem is that not all journals have a group of statisticians at their beck and call during review.
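
To make the pseudoreplication point concrete, here's a quick simulation sketch (my own toy numbers, not from any paper): two arms with no true treatment effect, but observations within clusters share a random effect, and a naive t test wrongly treats every observation as independent. The empirical Type I error comes out far above the nominal 5%.

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_sims, clusters_per_arm, obs_per_cluster = 2000, 5, 20

def one_arm():
    # Cluster-level random effect shared by all observations in a cluster.
    cluster_means = rng.normal(0.0, 1.0, clusters_per_arm)
    noise = rng.normal(0.0, 1.0, (clusters_per_arm, obs_per_cluster))
    return (cluster_means[:, None] + noise).ravel()

false_positives = 0
for _ in range(n_sims):
    # No true treatment effect, yet the t test below treats all
    # observations as independent rather than clustered.
    _, p = stats.ttest_ind(one_arm(), one_arm())
    false_positives += p < 0.05

print(f"empirical Type I error: {false_positives / n_sims:.3f} (nominal 0.05)")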
 
#4
A lot of flaws listed, indeed. But when has statistics ever been about being absolutely precise? You're never going to find applied statistics following identical guidelines. All methods have drawbacks, but if the approach is well-grounded, then the results should be valid, too.
 

CB

Super Moderator
#5
I think in general that a lot of errors do end up in published research. Some of these errors are deliberate. Here is a good discussion of one specific error, incorrect rounding of p values near 0.05: https://peerj.com/articles/1935/
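
For anyone curious what that kind of consistency check looks like, here's a minimal sketch (the t statistic, degrees of freedom, and claim are hypothetical, not taken from that paper): recompute the p value from the reported test statistic and see whether it actually supports the reported "p < 0.05".

from scipy import stats

# Hypothetical values of the kind one might see reported in a paper.
reported_t, df = 1.82, 58
reported_claim = "p < 0.05"

# Recompute the two-sided p value from the t statistic.
p = 2 * stats.t.sf(abs(reported_t), df)
print(f"recomputed p = {p:.4f}")  # about 0.074, i.e. not below 0.05

if reported_claim == "p < 0.05" and p >= 0.05:
    print("inconsistent: the recomputed p value does not support the claim")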

On the other hand, the article you mention is not a good discussion of the problem. A good chunk of the "statistical" errors it mentions aren't statistical in the first place; they're errors of design (e.g., "Failure to use and report randomisation"). In other cases, the authors describe things as "errors" where the 'correct' practice is really a matter of debate ("Missing discussion of the problem of multiple significance testing if done"). And then there are basic misunderstandings about mathematical terminology ("Failure to prove (sic) test assumptions").
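
Just to show why the multiple-testing issue matters, here's a back-of-the-envelope sketch (mine, not from the article): with m independent tests each run at alpha = 0.05, the chance of at least one false positive grows quickly, and a Bonferroni correction simply divides alpha by m to hold the family-wise rate down.

alpha = 0.05
for m in (1, 5, 20, 100):
    # Probability of at least one false positive across m independent tests.
    fwer = 1 - (1 - alpha) ** m
    print(f"m={m:3d}  FWER={fwer:.3f}  Bonferroni per-test alpha={alpha / m:.5f}")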

On the whole the article presents a view of statistical practice where there are clear and well-established "correct" things to do and "incorrect" things to avoid. IMO this view isn't remotely consistent with statistics as a field - there are certainly some statistical practices that are just clearly wrong (like rounding a p value of 0.07463 to 0.05), but many other choices you can make in statistics are subject to lots of debate.

Aren't there supposed to be expert statisticians reviewing the researchers' work or something?
Peer review involves research outputs being reviewed by (usually) 2-3 experts in the area. Sometimes one of these may be a statistician, but often they aren't.