In the comments section of a post today at Leiter Reports on how much time and effort referees should expend, a commenter writing under the pseudonym 'Out to pasture' wrote:
Here's a brusque (but I hope not brutal) headline thought. Referee's reports for journals are written for editors. They should offer a judgement about whether the paper is worth publishing. They are not, primarily, tutorial reports for authors. If the judgement is negative, then it is a work of supererogation to spell out the reason in any detail: a couple of lines for the editor's eye is enough. The editor should trust your judgement if s/he has chosen you to report. It is nice if you have the time to say more, but there is no need...
If my first-, second-, and third-hand experience is any indication, a decent proportion of philosophy journal referees appear to subscribe to Out to pasture's general line of thought. I have not only had papers rejected with one or two lines to the effect of, "The arguments are no good," without any substantiation of the reviewer's judgment. I have also had numerous colleagues in the discipline--including some people in my social media feeds recently--recount similar experiences: waiting anywhere from 3 months to over a year for a rejection consisting of a couple of sentences that amount to little more than, "Trust me, this paper is not publishable--though I cannot be bothered to give you any explanation why." Interestingly, although I do not have much experience in other fields, I do have some first- and second-hand experience with one other field: psychology. I have submitted papers to psychology journals myself, and my spouse is a researcher in Industrial-Organizational Psychology. And, although my data points might not be representative, I have yet to come across a single instance of this kind of reviewer behavior there. Even the bad papers I submitted to psychology journals were met with clear explanations of what was wrong--in some cases, explanations by a reviewer, in other cases a brief explanation by the editors. And every referee report my spouse has shown me for her papers is similar: there is always a real explanation--a justification--of the referee's recommendation.
Of course, these differences between fields might be justified. Maybe philosophers have special reasons--special qualifications or abilities--that should exempt them from having to justify their editorial recommendations. Or maybe there are more general reasons to think referees shouldn't have to give detailed justifications for their recommendations. This seems to be Out to pasture's thought: that in soliciting a referee's recommendation to begin with (especially, let's say, if the person has a good philosophical reputation), philosophy editors have grounds to trust the judgment of that referee. Yet this seems to me to fly in the face of everything we know, empirically--and we know a lot--about the pernicious effects of tacit biases. People tend to think that they are "objective, impartial" judges of things--and in philosophy, it seems, we take ourselves to judge "the arguments on their merits." But there are so many empirical reasons to doubt this. People tend to think that they do not judge people differentially on the basis of race, gender, social class, etc.--yet, time and again, empirical research shows that they do. Similarly, although a given referee might not think they judge papers in biased ways, there are all kinds of other ways their judgments might be biased. First, one recent study of doubly-anonymized review showed that,
[A]uthors often could be identified by reviewers using a combination of the author's reference list and the referee's personal background knowledge...[identifying] authors correctly 40-45% of the time. One main motivation for double-blind review is to eliminate bias in favor of well-known authors. However, identification accuracy for authors with substantial publication history is even better (60% accuracy for the top-10% most prolific authors, 85% for authors with 100 or more prior papers).
Second, many papers are shared publicly--at conferences, department colloquia, etc.--in ways that can clue a reader into the author's identity. Third, there are "graduate student" ways of writing, or so I hear--features of a person's writing that can signal that the author is an early-career philosopher. And so on.
This, in brief, is why I think referees are obligated to give actual, detailed justifications for their recommendations, and why editors should expect them to fulfill that obligation. The point of anonymized review is (or should be) to mitigate bias and ensure that papers are judged on their merits. Having to give an actual justification is one way of ensuring this: it is a way to (A) hold referees accountable for their recommendations, and (B) enable second and/or third parties (an editor or multiple editors) to judge whether the referee actually has a good case to make for the recommendation, or whether their recommendation is due to biases they may or may not be aware of. I just have a hard time seeing how anything less is appropriate scholarship. Given the kinds of biases that can--and are known to--afflict human judgment, "trust me" shouldn't be considered good enough, especially when it comes with a 3-6+ month wait time (which, as we all know, is common with journals).
Or so I'm inclined to think. What do you think?
Do you have any details on what "graduate student writing" tends to consist of?
Posted by: recent grad | 03/28/2016 at 06:31 PM
Also, some research has suggested that people expecting to be held accountable for their decisions and judgments may exhibit less bias in those judgments:
"Among those [approaches to debiasing] for which research evidence suggests the possibility of successful debiasing outcomes include:[...] Having a sense of accountability, that is, “the implicit or explicit expectation that one may be called on to justify one’s beliefs, feelings, and actions to others,”
can decrease the influence of bias (T. K. Green & Kalev, 2008; Kang, et al., 2012; Lerner & Tetlock, 1999, p. 255; Reskin, 2000, 2005)."
(Source: http://kirwaninstitute.osu.edu/wp-content/uploads/2014/03/2014-implicit-bias.pdf)
So there's reason to think that reviewers' judgments may improve if we consistently expect them to provide justifications for those judgments.
Posted by: Stacey Goguen | 03/28/2016 at 11:58 PM
I saw two approaches being advocated in the comments: one to write "a couple of lines," the other to write "1500-2000 words." I would advocate for something in between. In my view, the main job of a reviewer is to evaluate whether a paper has met the minimal standard for publication. So, in most cases of rejection, I think the reviewer should clearly articulate the main ways the paper failed to meet that standard. But this need not, and probably should not, take "2-6 pages" to communicate! That many pages begins to seem like clandestine co-authorship, and it strays from the main job of the reviewer. It might also be less than optimal for the system as a whole if fewer people accept assignments, or complete them punctually, because of extensive commenting.
Is there any evidence that making people justify their responses reduces rejections that result from the biases you list? It could lead people to rethink their reactions, but it seems more likely that they would just look for reasons to justify them. You can find reasons to sink any paper if you really want to, and often any reason is all it takes to get rejected. In any event, if the reviewer is going to recommend rejection, I'd rather it just be a few quick sentences so I can move on elsewhere, rather than wait around for someone to write 2000 words about my submission before I go ahead and do that anyway.
Posted by: Wesley Buckwalter | 03/29/2016 at 01:33 AM
"Out to Pasture" (... "Lunch" ?) does not seem to understand the concept of due process, or think it applicable in refereeing. Due process requires that one not only decide, but give (more or less) public reasons for one's decision. The strength of the argument they assert may be contestable; but the argument must be made.
Posted by: Hugh Miller | 03/29/2016 at 12:53 PM
'Here's a brusque (but I hope not brutal) headline thought. Referee's reports for journals are written for editors. They should offer a judgement about whether the paper is worth publishing. They are not, primarily, tutorial reports for authors. If the judgement is negative, then it is a work of supererogation to spell out the reason in any detail: a couple of lines for the editor's eye is enough. The editor should trust your judgement if s/he has chosen you to report. It is nice if you have the time to say more, but there is no need.'
In reply, I have just two things to say.
First, I have no problem with this, as long as it's applied to both rejections and acceptances. I had a paper basically accepted at a decent journal, and the editor ignored the verdict because the referee had not written enough justifying it. On the other hand, I've had papers rejected for little reason. Sometimes I am provided with no reason whatsoever, so I can only assume there was no reason, or no reason they dared share.
Second, some top journals say 'We cannot provide comments on all rejected papers. We focus rather on arriving at a well-informed judgment without undue delay.'
But there seems to be an inconsistency between 'arriving at a well-informed judgment' and 'not providing comments.' If your judgment is informed, then you have comments and can provide them. So I can only surmise that this policy is designed so they can reject papers for bad reasons, keeping their rejection rates high so they continue to be perceived as prestigious.
That's all
Posted by: Postdoc | 03/29/2016 at 01:36 PM
recent grad: That's a great question! I hope to write a post on it in the next couple of days. As I explained in a post a while back, "On not acting like a grad student" (http://philosopherscocoon.typepad.com/blog/2014/03/on-writing-and-acting-like-a-professional-rather-than-a-grad-student.html ), my sense is that there are two general qualities that people associate with grad students: (1) underconfidence, and (2) overconfidence. My own sense, in learning how to publish, is that indeed, one must learn to avoid these pitfalls as a writer as well. In my upcoming post, I will try to further explain my sense (which could be wrong!) as it pertains to writing specifically. In any case, thanks for asking: it's a great question!
Posted by: Marcus Arvan | 03/29/2016 at 05:16 PM
Hi Wesley: I think Stacey put it well.
First, accountability can plausibly incentivize people to be more careful. A person may think a paper is bad to begin with, but when they sit down and actually have to compose a *detailed* review, they have to provide an argument for their recommendation--and they may find that their argument is difficult to make, which could in turn get them to reconsider their recommendation. This has happened to me before as a reviewer. Sometimes I initially have a negative opinion of a paper, but when I find myself having to *explain* my initial reaction, I have to be more critical of my own reactions--and come to understand the paper better.
Second, even if justifying a recommendation doesn't make a reviewer more careful, it can reduce bias by subjecting that evaluation to a second set of eyes: the editor(s), who can then *compare* that review with reviews from other referees to see whether it is actually a competently performed, unbiased review. In some cases, editors appear to have disregarded a middling or even negative review of my work because (A) the other reviews were very positive and (B) the negative review gave shoddy justifications for rejection that the other reviews undercut. So this is a second way actual justifications can plausibly lead to better results: they give editors actual justifications to judge for themselves, alongside other reviews. Failing to give any information at all (besides "trust me") deprives editors of important information.
Posted by: Marcus Arvan | 03/29/2016 at 05:27 PM
Hey Marcus, I am familiar with some of that research, but I'm not sure how it would extend to this particular environment or set of questions. I'm also not sure I see how much more "accountability," if any, there is for an anonymous reviewer who writes a 1000-word report than for one who writes a 100-word report. How exactly are anonymous reviewers being held accountable in either case?
Posted by: Wesley Buckwalter | 03/29/2016 at 06:56 PM
"Second, some top journals say 'We cannot provide comments on all rejected papers. We focus rather on arriving at a well-informed judgment without undue delay.'"
Yep, and I love it when I get a rejection without comments eight weeks after submission. Really? It takes a reviewer that long to give an up or down vote? I guess they don't consider two months to be "undue delay."
Posted by: Scott Clifton | 03/30/2016 at 11:07 AM
I get a lot of referee requests. I accept what I believe is a fair number (maybe about 10-12 a year). Lately I have revised my refereeing practice to be less micromanaging and briefer, in part to be more efficient with my time. I do not think reports of 3000 words for an 8000-word paper are ultimately helpful for the author, and they are a huge time sink for me.
So I currently write referee reports that are about a page, and never more than 2 pages (about 500-1000 words) long, even if I recommend revisions. They are shorter if I recommend rejection, especially if the paper is of very poor quality. I begin by saying what I think is good about the paper, and then briefly review the worries I have about it. I try to balance being honest and being useful: if a paper is poorly written, the author has to know this, or else they will keep sending around a paper that gets rejected on those grounds. If it misses discussion of key portions of the literature, I give a few examples, but I do not think it is my task to point the author to all the sources they have missed. If there are flaws in the argument, I point them out. I try to restrict myself to no more than 3 big comments and a couple of smaller comments.
Posted by: Helen De Cruz | 04/01/2016 at 05:51 AM