
Douglas W. Portmore

I think that the referee's "main job" and, indeed, only job is to do what he or she has agreed to do. And keep in mind that a referee gets the job of refereeing at the invitation of an editor who asks him or her to perform some specific task within a certain time frame. Moreover, in my experience, different editors (given the different editorial policies of the journals that they run) ask for different things. Some ask for just a very brief summary judgment about whether the paper should be published. Others ask for helpful comments for the author in addition to a recommendation on whether to publish. Still others ask for a justification of one's judgment, explaining what the paper is about and why it makes, or fails to make, an important contribution to the literature. Now, is it your view that (1) if someone turns down an invitation to review, they are under no obligation to provide helpful comments for the author (and let's assume that you could provide these to the editor for him or her to pass along to the author while refusing to provide a recommendation about publication), but that (2) if you accept an invitation to review, you are obligated to provide helpful comments for the author even if you only agreed to provide a summary judgment for the editor? Why is that? Why is one obligated to do more than just what one agreed to do when one wasn't even obligated to agree to do that much?

In any case, it seems to me that a lot of what you give as reasons for thinking that reviewers are obligated to provide helpful comments for authors are just as much reasons for people to provide helpful comments to people who post drafts of their work online. For instance, both would help to ensure "that more papers in the 'peer-review pipeline' are better papers." So I wonder why not argue instead that we all just have an imperfect duty to help our peers produce better work and to improve the profession, where we can fulfill this duty in many different ways. My view is that we all have an imperfect duty to help each other and the profession to some substantial extent, but we can choose to do this in various different ways so long as we give some priority to the worse off in our profession. Some will do more editorial work. Some will do more to comment on drafts of people's papers. Some will do more refereeing work. And some will do more work to provide helpful comments to authors in their referee reports. But I see no good reason to think that someone who is already doing an extraordinary amount of service to the profession and to those coming up in the profession is obligated to provide helpful comments to authors in their referee reports even when they only agreed to provide the editor with a brief judgment as to whether the paper should be published.

Marcus Arvan

Hi Douglas: Thanks for weighing in! Here are some thoughts in reply.

I am a bit puzzled by your first statement: "I think that the referee's "main job" and, indeed, only job is to do what he or she has agreed to do."

The general principle, "X's only job is to do what he or she agreed to do", seems to me implausible, as it implies that our obligations to each other (in the workplace) end at what we contractually agreed to do. That cannot be right. We have all kinds of background obligations to others as human beings that may or may not be included in the contract--including, let's say, the "imperfect duties" to be kind and helpful.

Anyway, getting more to the heart of the matter, I find it interesting that you refer to "duties" and "obligations." I didn't utilize either of those words in my post, and for good reason. In my new book, I argue that--with a few important exceptions--it is a mistake to think that our duties to others are things to be discovered through pure thought or debate. Rather, I argue that our duties are constituted through fair negotiation. So, what I am attempting to do in the current post is not *assert* what our obligations are. I am suggesting that the status quo conception of reviewer duties in philosophy wasn't fairly negotiated (it is mainly decided by editors and people in positions of power), and I am attempting to negotiate in favor of what I take to be a fairer conception--one that I think would be mutually beneficial for us all, realizing a better and more helpful discipline, and reducing inequalities in access to feedback. It's okay if you disagree--but I'm not sure I see an argument in your comment against either of the claims I'm trying to negotiate for (the "more beneficial" claim or the "fairer" claim). I suppose your assertion that people are already doing "extraordinary" work here is intended to address the fairness claim--but I don't think what I am suggesting requires extraordinary work. I always give detailed reviewer reports, and it doesn't seem extraordinary to me: it seems to me like I'm just doing my job as a good reviewer. And if there are people who are doing an extraordinary amount of work as reviewers (such a great amount of work that they cannot provide helpful feedback)? In their case, I would suggest they should be doing less reviewing, such that it is not extraordinary for them to do a more helpful job in the reviewing jobs they do take on.

Finally, to answer your series of questions, I am actually *very* open to the idea of negotiating more helpful and equitable norms of feedback and discussion outside of peer-review as well. Indeed, many times in the past I've referred to how much I admire contemporary physics--a discipline in which a great deal of discussion and feedback happens publicly via articles posted publicly on the arxiv (https://arxiv.org/). My general suggestion is that some other disciplines (psychology and physics) are much more mutually beneficial and fair--in how work is disseminated, discussed, and cited--than philosophy. In physics and psychology, helpful feedback is much more readily available, and papers are much more widely cited and discussed. That's precisely why I think they are disciplines to be emulated!

Wesley Buckwalter

Hey Marcus, glad to see my comments inspired a post on the main purpose of peer review -- though I'm not sure the quotes highlight the difference between our views. I definitely want reviews to be professional, productive, and ultimately reflect "helpful" norms. I also think we would be wise to emulate certain norms from other fields, and I agree with much of the statement from Cognition. I just think these things should be mainly in the service of evaluating a piece according to the standards of publishing. Professionally evaluating a piece for publishing in these ways is an importantly different job in several respects than that of an advisor, a commenter, a collaborator, or a coauthor giving feedback. This was also in response to reviewers trying to "surreptitiously co-author" or "signal jam" papers by giving 1500-2000 word reviews/demands. I would argue that several referees doing this, rather than focusing on evaluating papers, decreases the likelihood that all of your criteria are met: it is less likely to produce better philosophy for all of us to enjoy, to improve the prestige of journals or the experience of peer review, or to make things less arbitrary or oppositional and more constructive. Lastly, I don't think facts about the availability of opportunities to get feedback elsewhere have any relevance to this.

Pendaran Roberts

'What has been incredibly useful for me--indeed, indispensable over the years--is helpful feedback from referees. Indeed, feedback from referees has, by and large, been the only means for feedback that I have been able to consistently draw upon--and, as I mentioned above, I think it has played a critical role in helping me not only publish, but publish better work than I otherwise would have.'

This has more or less been my situation as well. Towards the end of graduate school, my work was at the point that non-experts in my field seldom could help to improve it further. I did try emailing my papers to experts, but this is hit or miss, usually miss. I have basically never gotten a useful comment from a conference. It seems conferences are mainly for networking.

Referees who are experts and give you their time though, can really help to improve a paper. All of my published work has benefitted from referee input, some more than others. But one thing is for sure, without comments I wouldn't have half the publications I do, and my others would be worse than they are.

I was unemployed for a year too. During that time, I had even less resources. So, that's another thing to consider. Many many PhDs are adjuncts or unemployed or have little access to experts in their fields. So, if we want to have any upward mobility in philosophy, referees must provide comments.

It's a moral requirement. In fact, boycotting journals that do not provide comments wouldn't be a bad idea. I kind of do this personally.

Marcus Arvan

Hi Wesley: I am beginning to suspect that our views are much closer than I initially thought!

You write, "I definitely want reviews to be professional, productive, and ultimately reflect "helpful" norms. I also think we would be wise to emulate certain norms from other fields, and I agree with much of the statement from Cognition. I just think these things should be mainly in the service of evaluating a piece according to the standards of publishing."

I entirely agree! The problem that I'm reacting to is that, all too often, peer-review seems to be understood in philosophy as merely enforcing minimum standards for publication. Hence, all too often, we merely get rejections with no comments, or blithe indifference to our hard work [viz. waiting 6 months for a paragraph that says, without any detailed justification, "this paper is terrible"].

I realize you qualified your initial endorsement of the "enforcing minimum publication standards" view with the claim that, "Issuing an evaluation might and maybe should end up helping authors when done well, but in my view, as a consequence of its main function." But I guess I want to advocate for something stronger than this: namely, that it's not merely true that a good review "might and maybe should" help authors. I want to advocate for the view that a good review absolutely should be helpful for authors--either by providing careful, considered critique of the author's argument [explaining, respectfully, where its shortcomings lie], and/or helpful advice for improving it. Further, to me, a discipline-wide commitment to something like Cognition's norms--a psychological and social commitment to the general aim of helping each other produce better work [rather than just brutally dismissing/rejecting work with a few hasty lines of text]--would probably lead to this [good, responsible reviews] happening more often.

So, perhaps you and I are mostly on the same page. We both appear to think that good referees provide some detailed justification for the editorial recommendations they provide--and that to the extent that this occurs, it tends to help authors. I would just add that I think this kind of responsible reviewing is best advanced by a psychological and social commitment [norms] to something like Cognition's expectations: i.e. expecting referees to aim to help authors, rather than merely reject papers with no comment, or dismissive comments.

Finally, I do think facts about the availability of opportunities to get access to feedback are relevant, and would suggest that one should not dismiss their relevance out of hand. Here, in brief, is why.

First, as comments by Pendaran Roberts, myself, and Dan Kaufman make clear, these facts do seem relevant to some [many?] of us in small departments. We want to do good research--and are often told that we should only be sending work to journals that we have sufficiently refined through peer feedback--and yet we do not have many opportunities at all to receive such feedback. Which puts us in a very hard position, especially relative to those who are better positioned [those who can walk down the hall to experts in their department, those who get invited everywhere to give colloquium talks, those who get funding to go to lots of conferences, etc.].

Second, and relatedly, this is plausibly a matter of fairness--at least according to some standard egalitarian arguments on justice. Whether someone ends up in a large department or a small department is, at least in part, a matter of luck [the job-market is, as they say, a crapshoot]. And academic philosophy--and academic publishing--is a set of institutions. So, according to many egalitarians [who dominate social and political philosophy], it seems like justice/fairness requires reducing resultant inequalities of opportunity. Since such inequalities are rather extensive [in my experience, and Pendaran's, and Kaufman's], this suggests to me that we should want the publishing game to approximate a more equal playing field--which I believe would be better approximated if there were stronger norms for detailed referee feedback [which, again, there are in other academic fields, in ways that--again in my experience--make for a better, more equal experience for researchers].

Third, I would add that, at least on my own favored moral view [Rightness as Fairness], moral relevance is not something to be properly settled through rational debate. Rather, since [on my view] morality is partly a matter of negotiating what is morally relevant--through fair negotiating processes [which enable each person to lobby effectively for their own favored conception of relevance given the costs/benefits they face]--we should instead give each other reasons for favoring one conception of relevance over another, and then negotiate in favor of whatever conception we are ultimately most convinced by. This is why I am trying to enunciate the reasons why "accessibility to feedback" is relevant to me, as well as [it appears] to other people similarly placed as I am. That conception of relevance may not fit what seems relevant to others, but [at least on my view] it is important to clarify the reasons for considering something relevant/irrelevant, so that we can better understand the other side's point of view, and potentially, adjust our own views in light of the arguments.

Wesley Buckwalter

Hey Marcus,

Yes, I think we agree that reviewers should clearly communicate the grounds on which, in their view, a paper is to be accepted or rejected, providing a critique of the argument, its shortcomings, or necessary improvements for publication. I take it that bad behaviours such as extended rejections without comments, unprofessional reports, and blithe indifference are orthogonal to our disagreement, since we both tend to condemn those things.

To locate the source of this disagreement, let me ask the following question. What, in your view, is the difference between a referee on the one hand, and a thesis advisor or collaborator, on the other?

There is probably a lot of subtle overlap to what people in these roles do, but also clearly important differences. To me, the referee is mainly tasked with evaluating a paper as presented for publication, while the job of the latter two is more focused on helping authors write great papers, and perhaps the best paper they could potentially write. To complete the analogy, these roles are less like a “referee” and more like a “coach”.

You seem to be advocating the reviewer-as-coach model. The reason why I object to this model is that reviewers-as-coaches take many liberties that are beyond the scope of referees. They often try to significantly mold papers, taking over papers in their own image or according to their own subjective preferences over standards for publishing, with thousands of words of comments. Of course, coaches have different styles in which they train and educate, so with 2-3 reviewers this quickly gets out of hand. Conversely, the reviewer-as-referee can still behave professionally, competently, and helpfully to authors. They acknowledge that the paper they are evaluating is not their own. They can provide maybe five hundred words succinctly and concretely communicating the specific ways that papers meet, could meet, or fail to meet the standards for publishing in a way that is still perfectly helpful to authors, but with a more objective perspective involving enforcing rules given a state of play, rather than as coaches trying to develop their own athletes.

Lastly, I strongly disagree with your point that reviewers have a moral imperative to give extensive feedback to correct for networking "injustices". It is an important issue to generally address, if insularity, cliques, physical distances, and so on, are limiting research opportunities in our field in ways that might be avoidable. I just reject that the peer review system for publishing in scholarly journals is the place to remedy such injustice. I have already given you my criteria for relevance. The criteria are that I think the main purpose of submitting a paper to a journal is to publish it. And I think the main job of a reviewer is to evaluate it for that purpose. Thus I deny there is a moral imperative, during the publishing process, to correct for injustice in people's opportunity to get extensive comments on their work prior to submitting it for publishing. I think it is a serious misuse of the purpose of the system to submit papers to a publisher as a means to solicit extensive comments, or before you honestly think a paper is publishable. Speaking for myself, I personally think I have a moral imperative not to do these things. On a practical level, it also seems like an incredibly ineffective way to address networking injustices, given some of the common bad behaviours you noted.


Marcus Arvan

Hi Wesley: Thanks for clarifying your view, and for the question. Here are some thoughts in reply.

Your initial question is, "What, in your view, is the difference between a referee on the one hand, and a thesis advisor or collaborator, on the other?"

My answer is: that's something to be negotiated by real human beings, not stipulated unilaterally. On the one hand, there are obvious reasons for advisors and collaborators to focus solely on helping the author, as they are not involved in another task (that of justifying an editorial recommendation to editors). Referees, obviously, have that extra task. But I do not think it follows that that extra task is their main or sole job. It all depends on the norms we negotiate to apply to reviewers--and I am trying to negotiate/advocate one according to which referees fundamentally have a dual role: (1) to justify an editorial recommendation, and (2) help authors improve their papers. This is the dual role endorsed in Cognition's editorial statement, and one that I have suggested would be more mutually beneficial, and more fair, than the single, "just justify an editorial recommendation" model that you seem to favor (though, at times you seem to agree that reviewers should help authors in that "single role"...which again makes me wonder how far we disagree).

Our disagreement here seems clearer to me with respect to the following passage: "There is probably a lot of subtle overlap to what people in these roles do, but also clearly important differences. To me, the referee is mainly tasked with evaluating a paper as presented for publication, while the job of the latter two is more focused on helping authors write great papers, and perhaps the best paper they could potentially write."

I'm suggesting that there should be more overlap than you prefer--that referees should see themselves simultaneously (and roughly equally) as judgers and coaches. Which again seems to me what the Cognition editorial endorses (it says referees' job is to do both!).

However, I think I am now beginning to better understand what you take to be problematic about that model, when you write, "The reason why I object to this model is that reviewers-as-coaches take many liberties that are beyond the scope of referees. They often try to significantly mold papers, take over papers in their own image or according to their own subjective preferences over standards for publishing, with thousands of words of comments. They acknowledge that the paper they are evaluating is not their own. They can provide, maybe five hundred words succinctly and concretely communicating the specific ways that papers meet, could meet, or fail to meet the standards for publishing in a way that is still perfectly helpful to authors, but with a more objective perspective involving enforcing rules given a state of play, rather than as coaches trying to develop their own athletes."

I guess this is where I am puzzled. I'm not sure what justifies saying--in a system of peer-review--that "X is beyond the scope of referees", or indeed, that there is some "more objective perspective" for evaluating papers. I'm not sure what justifies this for two reasons. First, it is a system of *peer*-review, whereby one's peers are tasked to judge the paper as a *peer* (given their own philosophical judgment of what should be published), not according to some "objective" standard. Second, given that I believe in negotiation, I think we should be open to (and probably prefer) a system in which reviewers are encouraged to both (1) be helpful, but (2) not try to make the paper into their own piece of work. Your worry seems to be that by endorsing the "helpful" model, reviewers would thereby be free to wantonly turn a paper into their own piece of work. I am happy to agree that this is a very bad thing, but it is not something I seem to see very much of in a system with more helpful reviewing norms (e.g. psychology). And indeed, it is not something that I have encountered at *all* with helpful reviewers in philosophy. Maybe my experience isn't representative here--and if it isn't, that's an important data point--but I've found just about every detailed review I've received to be genuinely helpful, not unreasonable attempts by reviewers to foist their personal philosophical idiosyncrasies upon me. And indeed, I suspect this is a big reason why I support the helpful model: my overwhelming experience has been that helpful, conscientious reviewers are indeed helpful! If that's not others' experience, then that is indeed something to seriously consider--and which I would be totally happy to consider. But, I also think there could be good editorial guidelines to deter reviewers from doing it.

Finally, on the notion of correcting institutional injustices, I think we will probably agree to disagree here. In a broadly analogous domain (though the inequalities there are much more serious), one often hears opponents to affirmative action say things like, "Well, we should really fix education, mass incarceration, etc., the *real* sites of injustice." Yet, one natural response is, "Okay, but those sites of injustice are really hard to change, and have never been fixed. Affirmative action is one place that positive change can effectively be made." I want to say something similar about good reviewing. Asking reviewers to be helpful isn't especially onerous; it's good practice, and (in my view) has the added benefit of giving people who have little to no access to professional feedback a place to get it. Like many people in my position, I try not to submit things until I think they are publishable. But, people in my position are also in a very difficult position when it comes to making that call, precisely because it can be so hard to get feedback. We are, in other words, in a double-bind: either we don't send stuff out because we cannot get adequate feedback as to whether it is ready, or we do send things out given our best judgment that it is ready...only to (all too often) receive little or no helpful feedback from referees explaining why it is not ready, or what we might do to improve the work at issue. At which point the problem only compounds itself: we find ourselves sending out the (not very much improved) work yet again, rinse and repeat. My suggestion is that this is a bad system for everybody. It is bad for people in my situation because we have a hard time getting any feedback. It is bad for everyone who submits papers, as all too often feedback is unhelpful. And it is bad for referees, as all too often they get papers that haven't been improved because referees haven't been very helpful.

The suggestion then is: let's intervene at one place in the process--the point at which reviewers provide feedback--and break that vicious cycle, by encouraging/requiring helpful, detailed comments.

John Turri

Hi Marcus,

It's good when referees provide the sort of positive and valuable feedback you have in mind. The problem, however, is that, at least in the current regime of anonymous and unaccountable refereeing, referees provide far too much useless and harmful feedback. There is also a pernicious tendency for many people to confuse the role of referee with that of surreptitious co-author or research director.

In my experience, approximately 60% of referee requests/demands are neither helpful nor harmful in themselves, about 30% are harmful, about 5% lead to minor improvements, and less than 5% lead to noteworthy improvements. Comments that are neither helpful nor harmful in themselves tend to be harmful in the context of a long referee report, because they contribute to a perception that a lot of work remains to be done. Overall, I'd say that my published work tends to be no better, and perhaps slightly worse, than it would have been without referees' involvement. Given the significant time and energy the process demands, that's a really disappointing outcome.

(I submit papers only when they are publishable and not in order to get feedback, so my experience might be unrepresentative.)

My advice to referees would be to, as a general rule, limit their reports to an overall assessment, an informative but concise explanation of the assessment, and, if things go beyond that, a couple substantive suggestions that they are most confident in. I also suggest approaching refereeing tasks with humility and a healthy sense of one's own fallibility.

Marcus Arvan

Hi John: Thanks for weighing in. Those are interesting data points!

However, I'm curious whether you might clarify something. Your experience roughly matches mine for "unconscientious" reports, i.e. reports that are relatively brief, with little careful summary of my argument and/or little justification for an editorial recommendation (suggesting that the reviewer did not read the paper carefully, etc.). It's these kinds of reports that I too have problems with. However, when it comes to conscientious reports--ones that actually bother to state my argument, raise detailed concerns, and provide helpful comments for improving the paper--my experience is (seriously) that 90%+ of them are truly helpful. What I'm trying to argue for is that we (and editors) should expect reviewers--and we should expect ourselves when we are reviewers--to do the latter.

This is also something I have experienced in psychology, where detailed referee reports are common. *Sometimes* one still gets bad reports--but, the level of conscientiousness expected there leads (in my first and second-hand experience) to reports that are more helpful.

So, I wonder: are the experiences/percentages you are reporting the actual percentages for conscientious referee reports (ones that are actually detailed), or are they merely representative of the total percentage (of all referee reports) that you find helpful? I ask, again, because when it comes to total reports, I have a similar experience (60%+ are unhelpful), whereas with truly conscientious reports (detailed reports, even negative ones) my experience is the opposite (they are almost always helpful).

John Turri

Hi Marcus,

Those are overall percentages.

As an author and as an editor, I do not expect referees to help improve a paper. It is appreciated but not expected as part of the job.

I think it is the author's responsibility to submit a paper only if it is publishable. If a long and detailed report is required to explain how a paper could be made publishable, then the author has egregiously failed in his or her responsibility. Egregiously failing in one's responsibility does not generate a (time-consuming and onerous) responsibility for others to help one avoid failing next time.

Marcus Arvan

Hi John: Thanks for your reply!

I'm a bit puzzled by the strength of your main claims, specifically, "I think it is the author's responsibility to submit a paper only if it is publishable. If a long and detailed report is required to explain how a paper could be made publishable, then the author has egregiously failed in his or her responsibility."

I would certainly agree that it is an author's responsibility to submit something they have good reason to believe *may* be publishable (due care and all--though I think we may disagree about what this involves, more on this below). But, isn't the point of peer-review to discover whether a given paper is actually publishable? Isn't that what peer-review is: a system through which peers *review* papers, determining whether they are fit for publication?

Indeed, I have three related concerns here:

(A) Determining whether something is publishable is not easy and a matter both of differing opinions and luck (different people/peers have very different standards, etc.).

(B) On your strong claims, any reject-worthy paper automatically entails an egregious failure of responsibility on the author's part (which I don't think can be right, as again different peers have different views about what is publication-worthy).

(C) It can be very difficult for some of us (those of us working in small, less-prestigious schools) to get adequate feedback on whether our papers are publishable, in which case the strong claims entail that those of us in these positions should either rarely ever submit papers (which would put us at an extreme disadvantage), or else commit egregious failures of responsibility due to little fault of our own (which also puts us at a severe disadvantage).

I'm entirely with you that people should not egregiously fail to meet their professional responsibilities. But, I think it is an open and difficult question what those responsibilities are, in part because of the very different standpoints we have in the discipline.

Consider the situation of an author such as myself at a small school. I had to publish (a lot) here (in a non-TT position) just to get a job, and now that I am in a TT position I have to continue publishing in order to keep my job/get tenure. However, being at a small school, I have found it *very* difficult to get much peer feedback prior to sending out papers for review. This is not to say I haven't tried. I have! But it is very difficult. When I was at UBC (a big research department), it was so easy to get feedback: I could literally walk down the hall and ask an expert their thoughts or for their feedback on a paper draft... I would usually get the help I needed with little effort. Once I moved to a small school, things changed dramatically. I would try to get feedback from peers by email, I would regularly submit papers to conferences... and yet, I found myself unable to get the kind of feedback I would need to do anything other than use my own judgment of whether my papers were publishable. Moreover, on the few occasions I did receive some feedback, it was not terribly helpful. When I sent my paper, "A New Theory of Free Will", to some people, instead of getting detailed feedback on the paper's main argument, the feedback I got was super brief: break up the paper into a bunch of smaller papers. I thought (and still think) that advice was wrong, and so sent out the (78-page) paper for review. It got desk-rejected at Phil Review, and then accepted without revisions by Philosophical Forum. Did I fail egregiously in my professional responsibilities? I suppose you could say so at the point I sent it to Phil Review (since it got rejected without comments). But it was ultimately accepted because *some* peer(s) of mine out there judged that, however strange of a paper it may be, it was publication-worthy. Why isn't this the kind of call I should be able to make for myself as a researcher?
Isn't that why we are trained and receive PhDs--to recognize each other as peers/experts who can recognize for ourselves when our papers *may* be publishable? Why, as someone with a PhD, shouldn't I be able to use my philosophical judgment on what is publishable (I am, after all, one of the peers that evaluates other people's papers!)? Alternatively, if some more onerous standard of professional responsibility is imposed on authors at small schools (viz. "Don't send out a paper unless you've received a lot of feedback indicating it is publishable"), doesn't that put those authors at a terrible disadvantage (again, one may not be able to get much feedback, and one might not have a job if one doesn't send stuff out!)?

John Turri

Hi Marcus,

The point of peer-review is not for *the author* to discover whether a paper is publishable. Rather, if there is a point of a journal’s review process, it is for *the editor* to make an informed judgment on whether to accept the manuscript for publication at that particular journal.

In response to your lettered concerns (retaining the same lettering):

(A) I think that most judgments of publishability are actually pretty easy. (Disagreement and luck are separate issues.)

(B) I did not say or imply that a reject-worthy paper entails an egregious failure. Instead, I said, “If a long and detailed report is required to explain how a paper could be made publishable, then the author has egregiously failed in his or her responsibility.” It’s the “long and detailed” part that entails an egregious failure. And oftentimes a paper is publishable but, for various reasons, not in a particular venue (e.g. not quite right for the audience, the advance is not significant enough).

(C) My view does not entail this either. From the fact that it can be difficult to get feedback on whether a paper is publishable, it does not follow that it is difficult to judge whether a paper is publishable. Furthermore, even if it were difficult, it does not follow that it will rarely happen or result in egregious failure.

For years I taught at a small school that most academics have never heard of, so I understand this issue all too well. But the journal review process is not, and should not be, a mechanism to rectify disparities in feedback opportunities.

Incidentally, it can also be very difficult for people at large, prestigious schools to get feedback too.

Wesley Buckwalter

Hey Marcus,

I'm glad my analogy helped reveal the source of our disagreement. Your answer to my question was that “referees should see themselves simultaneously (and roughly equally) as judgers and coaches” whereas I think referees should mainly see themselves as referees. Not anonymous coauthors, thesis advisors, research directors, or conference commenters that I’ve never met and have no idea who they are.

You said you were puzzled by my position for two reasons. You write “First, it is a system of *peer*-review, whereby one's peers are tasked to judge the paper as a *peer* (given their own philosophical judgment of what should be published), not according to some "objective" standard”. Are you saying there are no objective standards for publishing in philosophy? I would have thought appealing to and enforcing those things are largely what makes one a peer.

The second reason was that you’ve had such overwhelming positive experiences with peer review that my worries regarding being someone’s coach rather than someone’s referee could be exaggerated. All I can say is that I’m glad you’ve had such positive experiences getting extensive feedback on your work that you feel has helped you.

Regarding the “moral duty” to give extensive comments to correct for “injustice”, we will indeed have to agree to disagree. In addition to my view that it is a misuse, though, I will give some other reasons against it: First, there are plenty of ways to “break the cycle” you describe without potentially misusing the peer review system (Do you not have friends or email or Skype? Do you not go to conferences, talk with visiting researchers, hold reading groups, or working paper series?) Second, a paper is more likely to get rejected if it receives extensive comments, since that indicates it needs a lot more work. So this could end up harming the people you are trying to help. Third, an explicit duty to give extensive comments (let’s define this as > 1000 words) is a significant deterrent to accepting an assignment, shrinking resources for the system. This limits progress for the field as a whole. Fourth, as I mentioned above, it is incredibly ineffective. Doing this might require waiting six months or even years for feedback from someone you've never met, which may or may not be helpful or trustworthy, and sometimes none is issued at all. Those are some pretty big risks. On a purely practical level there have got to be more reliable means of procuring extensive comments.



Wesley seems puzzled that anyone could have trouble getting feedback on drafts. He asks Marcus,

"Do you not have friends or email or Skype? Do you not go to conferences, talk with visiting researchers, hold reading groups, or working paper series?"

I can only speak to my own situation, but all of those avenues are more difficult than you might think. I have asked friends and colleagues for feedback. Very few ever respond. Folks are busy and feedback takes time, I guess. I am a member of a couple of working paper groups. But given that we meet 3-6 times a year, I can at best present my own work once every year or two. I do get a lot of feedback at conferences, some of it very good. But as others have pointed out, oftentimes the audience members are not familiar with the area and cannot be that helpful. As for 'visiting researchers', that is a luxury many schools do not have.

I am not making any statement about reviewing. I just wanted to note that getting feedback is not always as easy as it may seem.


Of course I am not puzzled about those difficulties. Everyone faces them to some degree, even those working at larger universities. This blog has made some excellent contributions as well, in fact, in terms of meetups and conferences (did you also have a digital reading group, Marcus?). In any event, how difficult it is to get extensive feedback is orthogonal to my view that the moral proposal on offer is a misuse of the peer review system.

Marcus Arvan

Hi Wesley and John: It seems we have fundamentally different views on a lot of things. It also seems that I’m losing in the court of public opinion—as there haven’t exactly been commenters rushing to defend my position. So, perhaps, indeed, I’m in the wrong! ;)

As I implied at the end of the OP, I’m happy to reconsider my position, and you’ve both given me a lot to think about. I still disagree with many of your claims/premises, and would be happy to keep debating those issues if you’d like. But, if I may, can I perhaps reframe my concerns in a way that might be more productive?

As I will now try to explain, I'm not exactly wedded to the "helping model", and don't want to (unreasonably) "defend it to the death" (as it were)--as you two have raised reasonable concerns. Rather, my reason for proposing the model in the first place was as a (perhaps clumsy) attempt to discuss and help address certain problems people in our field regularly seem to report having with the status quo.

When I think about why I proposed the “helping model” to begin with, here’s broadly where I’m coming from…

First, in my experience, many people, both inside and outside of philosophy, seem to think there is something amiss with current peer-review norms in our field--particularly compared to some other fields.

Here is one set of data points: I was just at the Pacific APA last week and I happened upon at least a half-dozen conversations of people reporting really bad peer-review experiences (i.e. waiting 2 years for rejection with no comments, or a few dismissive remarks, rather than any detailed justification). See also today's Daily Nous post: http://dailynous.com/2016/04/13/improving-journal-author-communication/

I have experienced my share of these sorts of things myself: at least half the time I send a paper out, I wait for many months to either get no comments, or a rejection with one or two lines of comments basically saying, "This paper is terrible", without any further justification. I also know it's not just me, as I regularly see people in my social media feeds reporting similar things. Further, a recent study indicated that peer-reviewers in philosophy tend to use inflammatory comments more than those in psychology (http://philosopherscocoon.typepad.com/blog/2015/07/philosophys-peer-review-practices-some-comparative-data.html ). On the other hand, on the occasions that I have received detailed reviewer reports, I have almost always found them helpful (even those that misunderstand my paper--as they can help me revise the paper so future reviewers don't misunderstand it!).

Next, here is a second set of data points: I have submitted papers to psychology journals--and my wife is a PhD student in psychology--and my experience is that virtually all papers come back with detailed, fairly helpful comments *and* a relatively quick turnaround time (2-3 months tops). And indeed, just yesterday in my social media feed, a psychologist I'd never met attested more or less to this fact, saying, "Sometimes we get bad comments, but we always get them--and people I know in my discipline are appalled to hear the kinds of stories that happen in philosophy."

Finally, insofar as reviewer reports in psychology tend to be detailed, my sense is that their system (i.e. reviewer norms) broadly does have the “four benefits” I allude to in my OP: (A) improving the science, (B) improving discussion (and which journals are included in discussion), (C) giving people a better experience in the peer-review process, and (D) giving more people (including those at smaller schools) more, and better, feedback on their work, etc.

So, here, very roughly, is what I was thinking when I wrote the OP, and am still sort of inclined to think now:

1. Peer review in philosophy is a comparatively negative experience for a lot of people involved, who have many (reasonable) concerns: long turnaround times, rejections with either no comments or very few (and often dismissive) comments that one cannot use to improve their work, etc.

2. Peer-review in some other fields (e.g. psychology) is a much more positive, productive experience for those involved, largely preventing the kinds of frustrations found in philosophy: they have good turnaround times, and reviewers (and sometimes editors) virtually always provide useful comments, giving people more/better feedback, improving people’s work, increasing discussion, etc.

3. The situation in (2) is clearly better than the situation in (1), and philosophy would be a happier, more productive place if we tried to realize a similar situation.

4. Psychology seems to me to realize the situation in (2) by having peer-review norms closer to the “helping model” than the “just judge the paper” model (which seems to me to lend itself to situation 1).

5. So (from 1-4), philosophy would be a happier, more productive place if we were more like psychology, moving closer to the “helping model” than the “just judge the paper” model.

This, in brief, was where I was trying to come from in the OP. My aim was to discuss and try to "find a better way" than we currently have. I thought the helping model was a good idea (and I'm not yet entirely convinced it's not)--but, do you two (or anyone else) have another, perhaps better idea of how we might address these types of problems with peer review (that some other disciplines, again, appear to have better resolved)?

Wesley Buckwalter

Hey Marcus,

I strongly agree with you that the peer review system is currently in very bad shape in our field. I agree, and have had experiences similar to yours, where it seems like other fields are in some degree of less-bad shape. I also agree deep and perhaps radical reforms are needed to improve the practice. I even think this should involve asking questions and studying data about deeply held assumptions about anonymous review, for example http://blogs.plos.org/absolutely-maybe/2015/05/13/weighing-up-anonymity-and-openness-in-publication-peer-review/

To answer your initial question of the post, I think the purpose of peer review is to evaluate papers for publication. But I also definitely understand your desire to want to help authors. So a different question you might now ask is how we could change peer review to achieve a maximal amount of improvement for authors. I do not think conceptualizing a peer-reviewer basically as somebody's anonymous research director, in which thousands of words of comments/demands are given without any accountability, will lead to the kind of improvements we need. I suspect that doing this, on the whole, is bound to drain resources, take time, make papers worse, prevent good papers from being published, reduce professionalism, and breed abusiveness and bias.

I suggest that in addition to more closely aligning with the purpose of peer review, the reviewer-as-referee model will also lead to better improvements for authors. This involves focusing primarily on evaluating pieces, giving short (say, 500-word) assessments of the publishable nature of the piece assigned, and a few comments closely in the service of achieving that goal. I submit that this will ultimately be the most “helpful” model for authors and our field.


Marcus Arvan

Hi Wesley: I'm glad to hear you agree the peer-review system is currently in bad shape. At the end of the day, that's what I care about addressing--and so I am happy to engage in continued dialogue about the best way to fix those problems. I thought the "helping model" might be a good way to do it, but clearly my arguments for the model did not appear to persuade, and I appreciate the concerns you raise. What steps do you think could be/should be done to improve the system? And, do you have any thoughts on how people in the discipline could lobby effectively to have those steps taken?

Wesley Buckwalter

Excellent questions! One thing I find so fascinating is that two junior philosophers who publish articles quite regularly could have such different experiences from peer review, conceptions of its purpose, and moral/prudential visions for how it might change. This also seems borne out partially in the original Leiter thread you linked, in which these two camps emerged. To me, this suggests the first steps would be a dialogue that exposes the substantive differences people have about approaching this job. That would go a long way to understanding, and hopefully, standardizing the process. It would be great if there were APA sessions or other conferences on the topic. Perhaps a special issue of JAPA.

John Turri

One thing that would help fix problems is for philosophy journals to provide criteria for publication and instructions to reviewers, as many science journals do. I recently read the instructions given to reviewers by Nature Human Behavior — http://www.nature.com/nathumbehav/info/for-referees — which I found to be exceedingly clear and helpful. Here is part of what they say:

"The primary purpose of the review is to provide the editors with the information needed to reach a decision. It should also instruct the authors on how they can strengthen their paper to the point where it may be acceptable. As far as possible, a negative review should explain to the authors the weaknesses of their manuscript, so that rejected authors can understand the basis for the decision. This is secondary to the other functions, however, and referees should not feel obliged to provide detailed advice to authors of papers that do not meet the criteria for Nature Human Behavior."

They go on to identify which questions "the ideal reviewer should answer."

Marcus Arvan

Hi John: I think that's an excellent suggestion! I actually reviewed for a journal that used that practice, and found the criteria incredibly helpful. The journal also had specific quantitative questions I found helpful (viz. "On a scale of 1-10, how original of a contribution to the literature is the paper?", "On a scale of 1-10, how strong was the argument?").

Here, though, is a more general question: what do you think is the best way to combat professional inertia here, that is, to get journals to actually adopt proposals like yours? I think Wesley has a good idea in terms of APA sessions or a special issue of JAPA, as those kinds of thing might bring these issues to the fore in the profession, get them taken more seriously, and incentivize reform. What do you think? Do you have any ideas about advocating effectively for reform?

John Turri

Hi Marcus,

Sadly, I only have speculation to offer about what would work. I think that an important first step is for people to, as Wesley suggested above, be open to rethinking assumptions about what "should" happen in peer review and to pay attention to actual evidence about the effects of relevant practices and procedures.

If I had to single out one assumption for special scrutiny, it would be that reviewers should be anonymous (and, as a result, largely unaccountable). As in other social situations, sunlight might be the best disinfectant. Of course, there would be trade-offs to consider, such as the suspicion that abandoning anonymity will diminish the pool of people willing to review. It's an empirical question whether the downside would be greater than the upside.

Another possibility is for philosophy editors to stop simply outsourcing their decisions to referees. In other fields, it is standard for action editors to write action letters reflecting their own evaluation of the manuscript and, when appropriate, critically synthesize reviewer reports. Philosophy editors rarely do this. Instead, they tend to just give a verdict and, in some cases, append reviewer reports. That contributes to a culture of unaccountability.

Derek Bowman


I'm curious about your (upthread) distinction between whether or not a paper is publishable (which authors should know in advance) and whether or not "the advance is [...] significant enough" (which authors might discover through the peer review process).

If the advance isn't significant enough, in what sense is the paper still "publishable," despite not being worth publishing?

John Turri

Hi Derek,

A paper can be publishable without meriting publication in the most selective venues. For instance, most publishable science papers don't merit publication in Nature. That's basically all I meant.

Derek Bowman

Thanks; I mistakenly took the "not significant enough" to have a wider scope than you intended.

