In my post sharing the results of the informal survey I ran on peer-review, I reported that a full 93.67% of respondents at least somewhat agreed with the statement that journal turnaround times are too long and inconsistent (with 67% strongly agreeing). Given that so many people seem to think this is a serious problem, I think it could be good to have a discussion about what could be done to best address it.
In order to think about how journals might address the issue, we need to know its causes. From what I have heard on social media (including remarks from various editors), long and inconsistent turnaround times appear to have the following two primary sources:
1. Difficulty and delays in finding willing reviewers.
2. Problems with unresponsive reviewers who fail to get their reviews in or respond to reminders.
Oftentimes, when I've heard (1) discussed on social media, it has involved editors imploring people to respond to review requests more quickly (like, within 24 hours). However, imploring people seems to me likely to be an ineffective solution--and, judging from people's responses to the survey, it has been pretty ineffective in practice. No, effective systems of compliance don't just implore people to do the right thing: they incentivize it. Moreover, good systems are also efficient. Instead of looking haphazardly for people who might say yes (but just as well might say no), an efficient system would also drive editors toward people likely to say 'yes.' All of which raises a first question: how might journals most efficiently (A) find competent reviewers likely to accept an assignment request, and (B) incentivize quick answers to review requests?
I'd like to brainstorm answers to this question, and am curious what readers think...
Here's a thought that just occurred to me. What if there were a central system--a "reviewer bank" (say, at philpapers/philpeople)--that all journal editors might use to solicit reviewers? That central system could organize potential reviewers by topic, making it easier for journals to find reviewers. But, more importantly, it might rank reviewers by how quickly they respond to review requests, as well as how often they say 'yes' as opposed to 'no.' Such a system--if it aggregated those two things into a single Reviewer Score--might effectively drive editors to people who are likely to say yes, and do so quickly. "Okay", you say, "but that system would probably lead editors to always ask the same people highest in the list, which would inevitably lower their score because they would have to start saying no." Indeed, but I wonder whether this might be addressed by building a component into the system that both incentivizes saying yes and gives editors reasons in some cases to explicitly select reviewers lower down in the list. Allow me to explain.
Suppose that once the central system were set up, journal editors adopted a Matching System pairing authors with 'reviewers like them.' For example, suppose you are a reviewer who takes forever to respond to review requests, and when you do, you say no the vast majority of the time--giving you a low Reviewer Score. Now suppose you have a low Reviewer Score and you submit a paper to a journal for review as an author! Instead of asking reviewers high in the list (i.e. those with good Reviewer Scores), the editor would instead reach out to someone with a low Reviewer Score just like you, matching you with the kind of reviewer you are. Here's my thought: this would incentivize you to be a better reviewer, because as a Bad Reviewer you would be matched with Bad Reviewers yourself. This would incentivize everyone to respond to review requests more quickly, and say yes more often--or at least it would for those who want to have a good experience as authors!
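To make the matching idea concrete, here's a minimal sketch in Python. Everything here is illustrative and assumed--the function name, the candidate names, and the 0-100 score values are all made up, and a real system would of course be more sophisticated:

```python
def match_reviewer(author_score, reviewers):
    """Pick, from a topic-filtered candidate list of (name, score) pairs,
    the reviewer whose Reviewer Score is closest to the submitting
    author's own score."""
    return min(reviewers, key=lambda r: abs(r[1] - author_score))

# Hypothetical candidates, already filtered by AOS/topic.
candidates = [("Prompt Reviewer", 95), ("Average Reviewer", 60), ("Slow Reviewer", 25)]

# An author with a low score (20) is matched with the similarly
# low-scoring reviewer; a high-scoring author gets a prompt one.
print(match_reviewer(20, candidates))  # ('Slow Reviewer', 25)
print(match_reviewer(90, candidates))  # ('Prompt Reviewer', 95)
```

The point of the nearest-score rule is just that it rewards good behavior symmetrically: improving your own score automatically improves the pool of reviewers you draw as an author.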
Interestingly, I think this sort of system might also address problem (2): irresponsible reviewers who miss deadlines and are unresponsive to reminders. For notice: this too might be a component of your Reviewer Score (along with how long you take to respond to initial queries, how often you accept assignments, etc.). On this model, if you have a bad Reviewer Score, you would once again be matched as an author with bad reviewers--people who take forever to review your paper. Conversely, if you have a good Reviewer Score, editors would preferentially match you with good reviewers: people who get their reviews in quickly.
Maybe I'm missing something--but it seems to me that this kind of central system, if put in place (perhaps by philpapers) and utilized by editors properly, would dramatically incentivize better reviewer behavior at all stages, reducing the time that people take to respond to requests, how often they say yes, and how long they take to review papers. For it would attach real consequences to these things, improving one's experience as an author for being a Good Reviewer and disincentivizing being a Bad Reviewer.
Anyway, what do you all think? Is my proposal (which, to be frank, I just put together on the fly) a good one? Or, do you think there's a better way for journals to address these issues? I'm really curious to hear your thoughts!
This thought is also off the cuff, but with the added constraint of trying to think of a particularly simple practice (because I suspect philosophers would never converge on complex ones--but maybe I'm being too pessimistic).
So here's the thought: In your initial message to the referee, say that a non-response within 24 hours counts as a rejection (you can even put this information in the subject line of the email; at this point, people who act like they aren't aware of the deadline are probably just being willfully ignorant). Immediately move on to the next referee if you must. Moreover, develop a list of likely candidates in advance so that moving on to the next referee doesn't take a lot of thought.
Rationale for this thought: it gets us away from waiting for a long time for a potential referee to say "no" or to never respond, and for the editor to make the judgment call about when they should move on, etc.
Posted by: Anon | 12/05/2018 at 12:25 PM
Anon: Thanks for weighing in!
I think your proposal may be better than nothing, but I'm not entirely sure. Given how slow many people are with email, an editor following your suggested policy might often go through reviewer after reviewer--as person after person might simply ignore the initial email, leading to a long string of initial emails that accomplish nothing until the editor finally lucks upon someone who *does* say yes within 24 hours. In which case the policy might not solve the problem and might, indeed, make things even more frustrating for editors. But who knows, maybe it would work!
In any case, I guess I am more sanguine that a more complex system (such as the one I proposed) might work, particularly one involving philpapers/philpeople.
They have been very proactive over there developing the site as a central resource in the discipline--so my thought is that if a case could be made that some central system there (such as the one I propose) had serious advantages for everyone involved (making editors' jobs easier, improving the process for authors, etc.), then perhaps they might actually try to develop something like it.
But who knows - I think it's probably a good idea to brainstorm both simple and more complex solutions, so thanks again for weighing in!
Posted by: Marcus Arvan | 12/05/2018 at 12:55 PM
Right, I definitely think that the scenario you imagine could obtain, but that we lack reason to think it *probably* would, and so it's worth trying. I'm just tempted to see what this process would look like if the slow responders were cut out of it entirely.
But I'm also curious to see what other suggestions there are, too!
Posted by: Anon | 12/05/2018 at 01:34 PM
Fair enough - me too! :)
Posted by: Marcus Arvan | 12/05/2018 at 02:25 PM
Marcus,
Something like the system you describe already exists. I am an academic editor for PLOS ONE. The journal allows editors to search for reviewers, and the system is set to find matches with the paper under review, matches with respect to expertise. You get a near endless list of matches to choose from. And each person has their e-mail listed, and other relevant information.
Further, when the editor gets the referee report, s/he can rate the reviewer on a scale from 0 to 100. These ratings are averaged, so you can learn something about people's track records. Further, there are details about how often the reviewer is late, etc.
This system, though, is designed for a large journal publishing many papers. Indeed, I believe there may be more than 5000 academic editors working for PLOS ONE.
Posted by: Brad | 12/06/2018 at 01:33 AM
Brad: very cool - I wasn't aware of that! As an editor, do you find the system helpful?
One thing that seems to me suboptimal about the PLOS ONE system is that it leaves the rating up to the editor to complete at the end of the process. There are two potential problems with this. First, it can open up ratings to bias (such as if the editor knows the referee personally). Another more basic (and perhaps more salient) issue is that it leaves the rating until the editor receives the referee report. By that time, the editor may have forgotten how long it took the referee to respond to the initial referee request--and it doesn't base the referee's score in any way on how often they accept or turn down requests (which is important, as I've heard both contribute substantially to delays in turnaround times).
I'm inclined to think a better system would base an overall Reviewer Score on an aggregate of several component scores:
(1) Initial response time (in accepting/declining)
(2) Proportion of accepts/declines
(3) Promptness of review time
(4) Quality of referee report
In the model I'm proposing, the central system would keep track of (1)-(3) automatically, and editors would only play a role in evaluating (4).
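To make the aggregation of (1)-(4) concrete, here's one way the components might be combined into a single 0-100 score. The weights, time caps, and scales below are pure placeholders I've made up for illustration, not a proposal for the actual formula:

```python
def reviewer_score(response_days, accept_rate, review_days, report_quality,
                   weights=(0.25, 0.25, 0.25, 0.25)):
    """Combine the four components (1)-(4) into a single 0-100 score.

    response_days:  average days taken to answer an initial request (lower is better)
    accept_rate:    fraction of requests accepted, 0.0-1.0 (higher is better)
    review_days:    average days taken to return a report (lower is better)
    report_quality: editor's rating of the reports, 0-100 (higher is better)

    The weights and time caps (14 days to respond, 90 days to review)
    are arbitrary placeholders; a real system would need to tune them.
    """
    # Convert the "lower is better" components onto 0-100 scales.
    response_part = max(0.0, 1 - response_days / 14) * 100
    review_part = max(0.0, 1 - review_days / 90) * 100
    parts = (response_part, accept_rate * 100, review_part, report_quality)
    return sum(w * p for w, p in zip(weights, parts))
```

On this sketch, a referee who always answers immediately, always accepts, reviews instantly, and writes top-rated reports would score 100, while one who maxes out every cap would score 0; components (1)-(3) can be logged by the system automatically, with only (4) requiring editor input.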
I'm also inclined to think a better system wouldn't just sort referees by AOS (providing editors a really long list of potential referees on the paper's topic). That still makes an editor's choice difficult, and would seem to make editors liable to always select the best referee they can find. My worry about this is that it doesn't incentivize better reviewer behavior--it might even punish the best reviewers by leading editors to select them first all the time. On the contrary, for reasons given in the OP, I'm inclined to think that a better system would populate a list of potential referees for editors to choose from that is not only sorted by AOS, but also prompts editors to select reviewers who have a Reviewer Score similar to that of the author whose paper has been submitted.
In any case--setting aside whether my version of the proposal is optimal or not--what would you think of something like philpapers functioning as a central "referee recruitment" system like this? Not a compulsory system, mind you--but a system that journal editors (from any journal) could choose to use if it were developed?
Posted by: Marcus Arvan | 12/06/2018 at 09:36 AM
I second Anon's suggestion in the first comment!
It could maybe be 48 hours instead of 24--to accommodate prospective reviewers who are having a busy day. But beyond that, I think it's fine to move past the reviewer.
It's not a huge tragedy or injustice to miss a chance to review, and if some reviewers do not respond to emails quickly, they will learn over time that they need to respond promptly in order to review papers.
Further, just as it is unethical to never accept requests to review, it would be unethical (on such a system) to never answer request emails within the allotted time.
Reading and replying to short emails quickly would become a basic requirement of the profession, just as many of us now require that of our students.
Posted by: Chris | 12/06/2018 at 04:39 PM
Marcus
PLOS ONE gives only 10 days for people to agree to referee a paper.
Further, the editor's assessment is intended to be of the report (and, by extension, of the referee, because the scores are averaged for specific referees).
Posted by: Brad | 12/08/2018 at 11:58 AM
Anon, Chris,
Speaking from personal experience, an email that lets me off the hook by doing nothing is going to make it vastly, vastly more likely that I do nothing. That might be unethical, but insofar as we're concerned about getting referees, and not just in blaming those who fail to referee, setting up a system which is likely to induce a much greater number of failures to referee seems like a bad idea.
Maybe I'm idiosyncratic?
Posted by: Craig | 12/08/2018 at 03:03 PM
Brad: that’s really interesting. But I just want to clarify: is that a 10 day limit on answering the initial request (accepting the referee assignment), or is it a 10 day deadline to get the referee report in? If it’s the former, that sounds to me like too long. If 3 referees say no in a row, that could be 30 days before a referee is even successfully recruited to review the paper. On the other hand, if it’s a 10 day deadline to get the referee report in, I’d be curious how well it would work in philosophy—as I would worry that referees would tend to turn down most requests, on the grounds that they don’t feel they could meet a 10 day turnaround.
I’m just curious to get a better idea how PLOS One works, to get a better handle of whether it might be a good kind of model to centralize somehow.
Posted by: Marcus Arvan | 12/08/2018 at 03:18 PM
Also, I noticed by looking at PLOS One’s Wikipedia page that they have very different review standards (which appear to be unique to the journal). From what I can tell the sole criterion for acceptance is that the paper’s scientific methods are competent—not necessarily good or excellent. Their publication model is then to publish everything that meets this minimal standard, and to let academic audiences evaluate the paper post-review.
This actually seems quite a bit like the ArXiv model I've advocated on this blog, as it minimizes the role of referees and adopts a more "crowd sourced" approach to evaluating the overall quality and importance of papers. The only real difference is that in the ArXiv model the crowd sourcing comes before peer review at journals, whereas in the PLOS One model the referees get to judge basic competence first and then the "crowd" gets to judge. And actually, even this isn't so different from the ArXiv, as the ArXiv has a minimal system of peer review for evaluating basic competence that must be passed for an article to be posted there.
Whatever the case may be, the kinds of standards referees are supposed to use at PLOS One seem very different from those standard in philosophy, and so emulating that model would (as far as I can tell) require a pretty radical change of process and standards. In any case, I'd like to learn more, as I'd like to pin down some plausible alternatives to current peer-review processes in philosophy that might improve upon the issues those polled have with our current system!
Posted by: Marcus Arvan | 12/08/2018 at 03:30 PM
Craig, thanks for the comment. My hunch is this: right now, *already*, there are enough people who never respond to the request email that significant time would be saved by just moving on after a very brief initial waiting period.
I guess your specific worry could be handled by requesting a response in the initial email (rather than letting someone off the hook in the sense that you describe), but then, after not hearing back for 24-48 hours, explicitly canceling that request and moving on.
(I guess this discussion would be better informed if we had on hand statistics from a range of journals that speak to this issue.)
Posted by: Anon | 12/08/2018 at 05:05 PM
These sound like interesting ideas. But I think a few basic things would help first. One is that authors could be encouraged to suggest reviewers themselves, with justification and contact details, as in other fields. Another is that reviewers could be better instructed to write shorter reports. People often tell me they are writing 1,000, 2,000 or even 5,000-word reports, which wastes resources that could be better invested in the system.
Posted by: Wesley Buckwalter | 12/15/2018 at 04:11 PM