I'd like to thank Rob Gressis for drawing this paper to my attention. Here's the abstract:
A growing interest in and concern about the adequacy and fairness of modern peer-review practices in publication and funding are apparent across a wide range of scientific disciplines. Although questions about reliability, accountability, reviewer bias, and competence have been raised, there has been very little direct research on these variables.
The present investigation was an attempt to study the peer-review process directly, in the natural setting of actual journal referee evaluations of submitted manuscripts. As test materials we selected 12 already published research articles by investigators from prestigious and highly productive American psychology departments, one article from each of 12 highly regarded and widely read American psychology journals with high rejection rates (80%) and nonblind refereeing practices.
With fictitious names and institutions substituted for the original ones (e.g., Tri-Valley Center for Human Potential), the altered manuscripts were formally resubmitted to the journals that had originally refereed and published them 18 to 32 months earlier. Of the sample of 38 editors and reviewers, only three (8%) detected the resubmissions. This result allowed nine of the 12 articles to continue through the review process to receive an actual evaluation: eight of the nine were rejected. Sixteen of the 18 referees (89%) recommended against publication and the editors concurred. The grounds for rejection were in many cases described as “serious methodological flaws.” A number of possible interpretations of these data are reviewed and evaluated.
In short, when psychology papers already published in top journals in the field were resubmitted to the very same journals for review, with nothing changed but the author names and institutions (fictitious, lesser-known ones substituted for the originals), the papers were:
- In most cases, not recognized by editors or reviewers as having already been published in the journal, but
- Rejected 89% of the time.
So much for "blind" review! I wonder if something similar happens in philosophy. Anyway, here's one thing I do know: I know people who've had papers rejected by more than one "bad" journal only to have the very same paper accepted by a top-5 generalist journal. The lesson? Probably this: peer review is a crapshoot, and not always a very fair one.
To be fair, the study says they sent the papers to journals with "nonblind" (I don't like the ableism, so I prefer 'non-anonymous') referee practices.
Posted by: Rachel | 05/28/2013 at 03:38 PM
Rachel: fair point -- but, as we've discussed on this blog before, "blind" review is often far from truly blind! Although the evidence is only anecdotal, it strongly suggests that "Google reviewing" is something of an epidemic in philosophy...
Posted by: Marcus Arvan | 05/28/2013 at 03:54 PM
If those papers had been largely Google reviewed, the resubmissions would have been detected much more often. In part for the reason Rachel mentions, it's not at all clear to me what lessons, if any, this has for philosophy reviewing.
Posted by: Daniel | 05/28/2013 at 04:20 PM
I think it's hard to disentangle the "prestige effect" from the "different reviewer effect." That is, how much of the change is because the papers are no longer coming from authors at prestigious institutions, and how much of it is simply because the papers are going to different reviewers? In philosophy, at least, I imagine the latter issue is a big deal.
I'd be interested to see what would happen if you resubmitted papers from non-prestigious institutions under the names of equally non-prestigious institutions.
Posted by: David Morrow | 05/28/2013 at 07:23 PM
If the review was anonymous, then this says nothing more than that, before the Internet, plagiarism detection was unlikely (the article is 30 years old), and that back then there was a lot of luck involved in psychology journal review. I'd like to see a replication, and one that surveys multiple areas of academia.
Posted by: T.M. | 05/31/2013 at 11:15 PM