Oh wow, I've missed some interesting posts on this thread.
TheEcologist, you have some very good points (and have altered my outlook on some of them), especially the double-blind point about reviewing big names.
Ditto. My reason for suggesting that peer reviewers' names should perhaps be published was to improve the quality (i.e. decrease the laziness) of some reviews. I find lazy reviews quite irritating (e.g. one vague and incomprehensible paragraph). Negative but well-justified reviews are easier to live with: they teach you something. However, as TheEcologist points out, printing reviewers' names could have the unintended consequence of discouraging reviewers from writing frankly negative reviews, so perhaps it wouldn't be such a good idea. But perhaps journals could still publish peer reviewers' reports anonymously in online supplements, so as to demonstrate the comprehensiveness and quality of the reviews...?
Spunky, I understand you may have a vested interest in non-PhDs reviewing articles, but I really don't believe the majority of non-PhDs are capable of properly reviewing an article and providing the feedback necessary for a thorough review of someone's work. Even new PhDs may lack the depth of knowledge to review an article.
This article actually found that the quality of peer reviewers' reports seems to decrease over time. I don't imagine that many of the peer reviewers included would have been pre-PhD, but I think the bigger point is that the quality of peer review doesn't necessarily increase with experience (or not linearly, anyway!)
No, likely the only way to do this is to let the process of Science run its course: if your results cannot be reproduced, your theories will die. And here lies the heart of the modern problem, in my opinion: Science needs people to reproduce results before it can progress. However, high- and mid-tier journals only want the most novel results possible. So who is going to reproduce the forged study's experiments and find out that something is fishy? It's seen as a waste of time, because whoever wants job security must have a couple of high-ranking journal publications. This is an increasing problem in my opinion (directly caused by efforts to measure a scientist's productive output as if he/she were a factory worker).
I sometimes wonder whether we need journals with titles like "Journal of Replication in Psychology" (I've just googled this to check it doesn't already exist!). I actually don't think this would necessarily be a bad avenue for commercial publishers to take, as well as being good for science: I'm not sure that the pragmatic/commercial bias against replications is that well justified either. Basically, publishers and authors at the moment tend to have explicit or implicit policies against replications, probably because, for any given research finding, the original finding is likely to get much more attention than the replication. More citations of an article are better for the journal, and better for the author too...
But the thing is... replications, I would think, are surely much more likely to be directed at new, interesting, and important findings (agreed?). So while we'd expect the replications of these findings to get less attention than the originals, they may still get more citations than the average original article (by piggybacking on the popularity of the particular original studies they are replicating). Publishers and authors might even be able to predict, with reasonable accuracy, the number of citations a given replication is likely to get based on how many citations the original got.
All of this is just speculation, but it'd be possible to check whether these arguments actually have any validity... anyone up for writing an article on the business case for replications?
