A number of recent posts at different blogs have raised concerns about prevailing peer-review practices, and what might be done to improve things. Among other things, concerns have been expressed about excessively long turnaround times, desk-rejections, unnecessarily harsh reviews, and how anonymous "anonymized" review truly is.
I'm inclined to like some of the proposals people have floated at Daily Nous, particularly those that would permit people who submit papers to rate reviewers. However, such ratings are only likely to work if there is some "force" behind them--that is, some form of consequence (positive or negative) for reviewers. After all, suppose Jones is a bad reviewer and gets rated poorly by people who submit to Journal X. Maybe, if this occurred, Jones would no longer be asked to review papers for X. But what of it? Why should Jones care? Indeed, Jones might even be happy to no longer be asked to review. Jones' bad ratings may not only be "no skin off his nose" (as it were): they might actually benefit Jones, by letting him off the hook.
What, then, could actually be done to incentivize reviewers to do a good job? Here's a thought I would like to float out there for discussion: why not attach real conditions to reviewer performance? That is, why not attach "carrots" and "sticks" to reviewing, rewarding good reviewing in some way and discouraging poor reviewing with some form of consequence? How might this work? I'm not sure, but here are a few possibilities...
First, here's one way a journal might incentivize good reviewing: it might implement a stated policy of expediting the peer-review process for good reviewers. For example, it might promise reviewers who (a) have a high reviewer rating, and (b) submit timely reviews (<8 weeks), a turnaround time of <8 weeks when they submit papers of their own. According to Andrew Cullison's journal wiki, many journals--even those with good average turnaround times--have incredibly variable decision times (with some decisions reached in a few weeks, and others over a year later). I actually avoid submitting to many such journals because, even if they have a decent average, I don't want to risk waiting 8-12+ months. If, however, I were promised (as a "good reviewer") that my paper would be reviewed within two months, I would almost certainly submit to those journals. Thus, for someone like me--and, I imagine, there are others like me--the above editorial policy might really work: it would significantly incentivize submitting good, timely reviews (because, again, there are real benefits attached!).
Is such a "carrot" system to reward good reviewers feasible? Obviously, I've never run a journal, but I have a hard time seeing why it wouldn't be. From my vantage point, the most difficult part of the proposal would be ensuring the reward (i.e. following through on the editorial promise that good reviewers' own submissions will be expedited). However, there are two reasons why I think such promises might be feasibly met. First, some of us are conscientious reviewers. To the best of my recollection, I have never turned in a review late (i.e. after the 6-8 week requested time-frame). I also know of friends who say they are never late either--and, as Cullison's wiki suggests, there is a pretty decent number of good reviewers out there. Thus, as long as a journal has a decent number of conscientious reviewers (people who always get in good reviews on time), it should be possible to ensure that the above promise is satisfied: the editor could simply send good reviewers' submitted papers to other good reviewers, noting that it is an expedited case where the review must be completed within 8 weeks. Second, or so I will now suggest, there are also "sticks" that journals could attach to reviewing to improve it.
What kinds of "sticks" (or negative consequences) might be attached to poor reviewing? Here's an obvious possibility: if someone either (a) consistently refuses to review for a given journal, (b) fails to get their reviews in within a reasonable time-frame, or (c) reviews but receives low ratings noting that their reviews are needlessly inflammatory or otherwise irresponsible (e.g. a sentence long, without any rationale given), then the reviewer could be suspended or barred from submitting material of their own to the journal. While punishing bad reviewers in some such manner might seem heavy-handed, it seems far more bizarre to me that, at present, there are (to my knowledge) no negative consequences for being an irresponsible reviewer. Reviewing, it seems to me, should be (A) a privilege that (B) carries with it some real responsibilities. As I see it, as professionals, we owe each other a system that is fair and responsible--and, or so it seems to me (given, again, the kinds of concerns that are raised again and again), there should be some consequences for failing to discharge one's responsibilities appropriately.
In conclusion, I don't mean to suggest that the above proposals are the right kinds of "carrots and sticks" to have. All I am suggesting is that it might be worthwhile to seriously consider whether there should at least be some carrots and sticks involved. The fact that there have never been consequences attached to reviewing is no reason, in itself, to continue with the status quo. We owe each other a fair peer-review system, one which--through some compliance mechanism(s) or other--better ensures timely, conscientious reviewing practices.
Or so say I. What say you?
Journals run by the same company--e.g. Springer or Wiley--could share their ratings. So being a bad reviewer for Journal A could not only bar one from submitting to A, but also from submitting to an associated journal B.
A few additions:
1) Give bans--or whatever the stick--an expiration date. This not only seems just, but it allows a journal to implement another policy which would help start the system of carrots and sticks:
2) Apply the bans and benefits retroactively, where possible. For example, pay some grad student a few bucks to find all the late reviews over the past two years. Then apply the relevant policy.
Posted by: recent grad | 08/04/2015 at 04:09 PM
There is one glaring problem with your "carrot" suggestion, it seems to me. The problem is that your suggestion has the potential to delay reviews for those who need to publish most. Graduate students need to publish to have a good chance on the job market, but are less likely to be asked to review papers, especially for top journals, and thus won't count as "good reviewers". It then looks as if grad students and early-career philosophers without much reviewing experience are more likely to get reviewers who take too long. Long review times are arguably more harmful to such philosophers than to those with tenure, as review times may impact their career prospects. This will be especially problematic for those (like me) in graduate programs with short timelines--waiting a year for a review of a paper submitted in year 3 of your program will be less of an issue if you have 4 years left as opposed to one.
That said, I agree with the general carrot-and-stick framework, just not your particular suggestions.
Posted by: Aaron Thomas-Bolduc | 08/05/2015 at 02:53 PM
Hi Aaron: That's a fair point, and something I thought of, too. My initial thought in reply is that the combination of carrots and sticks I'm suggesting might well improve reviewing on the whole, so that--compared to the status quo--*everyone* benefits, including grad students. Yes, it might benefit some more than others (namely, those who are good reviewers)--but, for all that, if it incentivizes good reviewing, then everyone might be more likely to enjoy better turnaround times, more constructive comments, etc.
Indeed, I think it's important not merely to evaluate proposals in the abstract, but always to do so comparatively--comparing alternatives to the actual baseline one starts with. After all, the fact that my proposal may not be perfect is, in itself, no reason to think it would be worse than the status quo. The status quo, after all, is pretty horrible: long turnaround times, bad reviewers, etc.!
In any case, since you like the carrot/stick idea but disagree with my suggestions, do you have any thoughts on potentially better alternatives?
Posted by: Marcus Arvan | 08/05/2015 at 07:10 PM
Hi Marcus, You're right that it would probably be better than the status quo. I don't have any ideas as far as incentives go, though one simple fix might be to start off new submitters/reviewers in the "good" column. This might require a more fine-grained rating system in the long run, but that would probably be a good thing.
Posted by: Aaron Thomas-Bolduc | 08/06/2015 at 07:40 PM