In my last three posts in this series, I investigated ways the peer-review process might be reformed to:
- Incentivize better behavior by authors: incentivizing the submission of fewer and better papers for review.
- Incentivize better behavior by reviewers: incentivizing faster and more thoughtful referee reports.
- Improve turnaround times: giving editors resources to secure reviewers and completed reviews more quickly.
In brief, the general proposal I floated is that peer review might be improved by creating a central peer-review platform--an interface (perhaps at Philpapers?) which might use a clever author and reviewer scoring system to match new papers with reviewers whose scores are closest to the authors' own.
The idea here is, first, that this system would incentivize better author and reviewer behavior, as it would (finally!) create actual incentives to being a good author (submitting papers ready for review) and being a good reviewer (being timely in responding to review requests, and timely and thoughtful in submitting reviews). The idea then is that this would in turn make things easier on editors: instead of having to spend copious hours emailing possible referees, the system would recommend reviewers to editors and enable them to send out requests to reviewers who are likely to say yes and do a good job more quickly!
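To make the matching idea concrete, here is a minimal sketch of how a platform might recommend reviewers by score proximity. Everything here is hypothetical: the post does not specify a scoring scale or formula, so the 0-100 scale, the function name, and the example names are all illustrative assumptions.

```python
# Hypothetical sketch of score-based reviewer matching. The scoring
# scale (0-100) and all names are assumptions, not part of any real
# PhilPapers system.

def rank_reviewers(author_score, reviewers):
    """Return reviewers sorted by how close their score is to the
    author's score; the closest match is recommended first.

    `reviewers` is a list of (name, score) pairs, where each score
    reflects a reviewer's track record (timeliness, thoughtfulness).
    """
    return sorted(reviewers, key=lambda r: abs(r[1] - author_score))

# Example: an author with score 72 is matched with the nearest reviewer.
candidates = [("Reviewer A", 90), ("Reviewer B", 70), ("Reviewer C", 55)]
print(rank_reviewers(72, candidates)[0][0])  # prints "Reviewer B"
```

The design choice here is just nearest-neighbor matching on a single score; a real system would presumably also filter by area of expertise and current reviewing load.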
At present, this is just a pie-in-the-sky fantasy. However, David Bourget (who co-runs Philpapers/Philpeople) chimed in saying that such a system would actually be quite doable--and I see no reason not to think big. Although the current system of peer-review has some positive features, there are also many things about it that everyone--authors, reviewers, and editors--seems to recognize to be suboptimal. We should therefore think creatively about what could be done to make things better for everyone--and I am optimistic that the kind of system outlined above might help a great deal. However, it is increasingly clear to me--based on David Velleman's and others' remarks--that there is a fundamental issue that needs to be dealt with: the way in which journals are simply over-flooded with submissions. This is the issue I'd like to grapple with a bit today.
As Velleman points out, the number of paper submissions journals get appears to have skyrocketed in recent years. This is probably in large part due to how competitive the job-market has become, as well as due to ease of submission (in the old days, one actually had to send papers by mail!). If Velleman is right, this is simply unsustainable. Something must be done to reduce the number of submissions journals receive--otherwise, the experience that authors, editors, and reviewers have in the process is only going to get worse, not better. What can be done about this problem?
I'd like to propose the following as a constraint on an adequate solution to this problem: whatever strategies we use to reduce the number of submissions, they should not further disadvantage the most vulnerable members of the profession--specifically, graduate students, job-candidates, and adjuncts. In his post at Daily Nous, Velleman proposed that "philosophy journals should adopt a policy of refusing to publish work by graduate students." However, this would almost certainly run afoul of the above constraint: it would make it harder for graduate students at lower-ranked and unranked programs to do what it takes to get a job. Similarly, over at this thread, Velleman noted that "Some editors have discussed forming a consortium that allows a paper to be submitted to only a limited number of journals within the consortium: N rejections from journals in the consortium would rule out further submissions to journals within it." However, this also seems to me to run afoul of the above constraint: given that journal rejection-rates are over 90%, and the fact that many very good papers (indeed, a wide variety of Nobel-prize winning economics papers!) may be systematically rejected before finding a home, it seems to me that this policy is far too draconian--and would disproportionately harm those who can least afford it: early-career people who need to publish to get jobs and earn tenure!
Is there a better solution available? I'd like to note, first, that I think the author/reviewer scoring system described earlier might help a lot. Given that authors would receive scores on the basis of whether they submit papers "ready for review," submitting unready papers would lower their overall score (thus leading them to be paired with worse reviewers), and that might deter some submissions. However, all by itself this might not be sufficient to make an enormous difference. What could make a difference?
Here's one possibility: suppose an author places a paper, X, under review at a journal in this system. Then suppose both of the referees at the first journal flag it as "not meeting minimal standards for submission at the journal." Then suppose the author tries submitting the same paper to a couple of other journals, and a certain percentage of reviewers (say, >50%) flag it as "not ready to be placed under review." At that point, the author might be prohibited from submitting the paper to certain journals (say, 'Tier 1 journals') for a period of time. Would this run afoul of the constraint of not further disadvantaging the most vulnerable members of the profession? I am not sure. It might--but then again, if a person is submitting a paper that most reviewers think is nowhere near ready for submission, then it may be in the author's best interest to put more work into the paper or send it to lower-tier journals (thus leading submissions to be more distributed rather than concentrated in a small number of top-ranked journals).
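The rule floated above can be sketched in a few lines. The >50% threshold comes from the post itself; the function name, the data shape, and the idea of pooling flags across journals into one list are illustrative assumptions.

```python
# Hypothetical sketch of the "not ready for review" rule: if more than
# half of a paper's completed reviews (across journals in the system)
# flag it as not meeting minimal standards, the paper is temporarily
# barred from Tier-1 journals. Only the >50% threshold is from the
# post; everything else is an assumption.

def barred_from_tier1(flags, threshold=0.5):
    """`flags` is one boolean per completed review: True if that
    reviewer flagged the paper as not ready for review."""
    if not flags:
        return False  # no reviews yet, so no bar
    return sum(flags) / len(flags) > threshold

print(barred_from_tier1([True, True, False]))  # 2/3 flagged -> True
print(barred_from_tier1([True, False]))        # exactly 50% -> False
```

Note that a strict inequality means an even split does not trigger the bar, which seems in the spirit of "a certain percentage (say, >50%)."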
Still, I'm a bit worried about whether this might run afoul of the aforementioned constraint. So here's another possibility I floated previously (and which David Bourget said he has thought of as well): a two-tiered system of review where, at the first stage, editors send papers to two 'junior reviewers' who are asked to simply read the paper quickly (say, within 1 week) and report back whether the paper should be sent out for review by 'senior reviewers.' Although this wouldn't reduce the number of submissions to journals, it plausibly would help alleviate the main problems Velleman identifies: the fact that editors and reviewers are overwhelmed with submissions. For, on the 'two-tiered' system I'm now suggesting, editors wouldn't have to read and desk-reject papers themselves: they would outsource that to junior reviewers who could do so quickly. That might make it easier for editors to determine which papers should be sent out for formal review (stage 2), and send out fewer papers, making it easier for them to find willing reviewers and secure reviews more quickly. Further, it would make things better for authors, as they might find out much more quickly than at present whether their paper is 'desk-rejected' at stage 1 (in a week rather than months!).
Anyway, it seems to me that this system might work--especially if it were built into a central review-system platform of the sort I've proposed: a system where editors could log in and select "stage 1 reviewers," reviewers would receive some sort of credit for returning a first-round verdict within a one-week deadline, and reviewers could then qualify as "senior reviewers" if selected as such by a given editor at a journal.
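The two-stage flow described above can be summarized in a toy sketch. The one-week stage-1 window and the junior/senior division are from the post; the majority-vote rule and all names are my own illustrative assumptions (the post does not say how junior reviewers' verdicts would be aggregated).

```python
# Toy sketch of the two-tiered review flow: junior reviewers return a
# quick "send out for full review?" verdict (within ~1 week); only
# papers passing stage 1 go on to senior reviewers. The majority-vote
# aggregation rule is an assumption for illustration.

def stage_one_verdict(junior_votes):
    """Each junior reviewer votes True ("send out for full review")
    or False ("desk-reject"); the paper advances on a majority."""
    return sum(junior_votes) > len(junior_votes) / 2

def route_paper(junior_votes):
    if stage_one_verdict(junior_votes):
        return "stage 2: full review by senior reviewers"
    return "desk-rejected at stage 1 (within ~1 week)"

print(route_paper([True, True]))   # advances to stage 2
print(route_paper([False, True]))  # desk-rejected quickly
```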
But these are just a few of my own thoughts. This seems to me the most difficult problem to grapple with in improving peer-review--and this is the best I have for now. What do you all think?
Marcus -
I thought Nous (and maybe also therefore PPR) already engaged in this "two tiered" system of review. They send the paper out initially and ask referees for a simple thumbs up or down - should this paper be refereed? It is supposed to be turned around in something like a week, at least the last time I was asked to do this for them. Then if it gets a yes at this stage it gets sent to referees for a more complete review.
Maybe they stopped doing this? I can't tell from their website. But for a while, at least, they had something like this two tiered system.
I guess one difference between Nous and your suggestion is that you distinguish it as "junior" vs. 'senior' reviewers.
Posted by: Chris | 01/28/2019 at 07:38 PM
Hi Chris: Interesting, I hadn't heard that about Nous and PPR before. It would be great to know! Can anyone confirm whether that's the case? If they do use the practice, it would also be good to hear a bit more about how it works.
Posted by: Marcus Arvan | 01/29/2019 at 08:58 AM
My experience with Nous and PPR has always been that, for desk rejections (i.e. rejections without comments), they tend to reliably arrive around 2.5-3 months after submission. When I have received comments on submissions it has actually taken less time than this (maybe I have just been lucky in that respect). But that leads me to be skeptical about the hypothesis that their two tiered system allows them to get verdicts to authors quicker. Perhaps they decide on desk rejections quickly but put off notifying authors so they don’t immediately submit something else? This might make sense given the limited time window within which you can submit to Nous and PPR. Or perhaps my experience is non-standard.
Posted by: Random job market person. | 02/01/2019 at 09:14 AM