A number of recent posts at different blogs have raised concerns about prevailing peer-review practices, and what might be done to improve them. Among other things, concerns have been expressed about excessively long turnaround times, desk-rejections, unnecessarily harsh reviews, and how anonymous "anonymized" review truly is.
I'm inclined to like some of the proposals people have floated at Daily Nous, particularly those that would permit people who submit papers to rate reviewers. However, such ratings are only likely to work if there is some "force" behind them--that is, some form of consequence (positive or negative) for reviewers. After all, suppose Jones is a bad reviewer and gets rated poorly by people who submit to Journal X. Maybe, if this occurred, Jones would no longer be asked to review papers for X. But what of it? Why should Jones care? Indeed, Jones might even be happy to no longer be asked to review. Not only may Jones' bad ratings be "no skin off his nose" (as it were): they might actually benefit Jones, by letting him off the hook.
What, then, could actually be done to incentivize reviewers to do a good job? Here's a thought I would like to float for discussion: why not attach real conditions to reviewer performance? That is, why not attach "carrots" and "sticks" to reviewing, rewarding good reviewing in some way and discouraging poor reviewing with some form of consequence? How might this work? I'm not sure, but here are a few possibilities...
First, here's one way a journal might incentivize good reviewing. It might implement a stated policy of expediting the peer-review process for good reviewers. For example, it might promise reviewers who (a) have a high reviewer rating, and (b) submit timely reviews (within 8 weeks) a turnaround time of under 8 weeks when they submit papers of their own. According to Andrew Cullison's journal wiki, many journals--even those with good average turnaround times--have incredibly variable decision-times (with some decisions reached in a few weeks, and others over a year later). I actually avoid submitting to many such journals because, even if they have a decent average, I don't want to risk waiting 8-12+ months. If, however, I were promised (as a "good reviewer") that my paper would be reviewed in two months, I would almost certainly submit to those journals. Thus, for someone like me--and, I imagine, there are others like me--the above editorial policy might really work: it would significantly incentivize submitting good, timely reviews (because, again, there are real benefits attached!).
Is such a "carrot" system to reward good reviewers feasible? Obviously, I've never run a journal, but I have a hard time seeing why it wouldn't be. From my vantage-point, the most difficult part of the proposal would be to ensure the reward (i.e., following through on the editorial promise that good reviewers' own submissions will be expedited). However, there are two reasons why I think such promises might be feasibly met. First, some of us are conscientious reviewers. To the best of my recollection, I have never turned in a review late (i.e., after the 6-8 week requested time-frame). I also know of friends who say they are never late either--and, as Cullison's wiki suggests, there is a decent number of good reviewers out there. Thus, as long as a journal has a decent number of conscientious reviewers (people who always get in good reviews on time), it should be possible to keep the above promise: the editor could simply send good reviewers' submitted papers to other good reviewers, noting that it is an expedited case where the review must be completed within 8 weeks. Secondly, or so I will now suggest, there might be "sticks" that journals could attach to reviewing to improve it as well.
What kinds of "sticks" (or negative consequences) might be attached to poor reviewing? Here's an obvious possibility: if someone either (a) consistently refuses to review for a given journal, (b) fails to get their reviews in within a reasonable time-frame, or (c) reviews but receives low ratings noting that their reviews are needlessly inflammatory or otherwise irresponsible (e.g., a sentence long, without any rationale given), then the reviewer could be suspended or barred from submitting material of their own to the journal. While punishing bad reviewers in some such manner might seem heavy-handed, it seems far more bizarre to me that, at the present time, there are (to my knowledge) no negative consequences for being an irresponsible reviewer. Reviewing, it seems to me, should be (A) a privilege, that (B) carries with it some real responsibilities. As I see it, as professionals, we owe each other a system that is fair and responsible--and, or so it seems to me (given, again, the kinds of concerns that are raised again and again), there should be some consequences for failing to discharge one's responsibilities appropriately.
In conclusion, I don't mean to suggest that the above proposals are the right kinds of "carrots and sticks" to have. All I am suggesting is that it might be worthwhile to seriously consider whether there should at least be some carrots and sticks involved. The fact that there have never been consequences attached to reviewing is no reason, in itself, to continue with the status quo. We owe each other a fair peer-review system, one that--through some compliance mechanism or other--better ensures timely, conscientious reviewing practices.
Or so say I. What say you?