

I'd like to disagree with the comment about not needing a summary. Even the best philosophers can be sloppy or lazy readers. The summary is, I believe, a good way to make sure the referee read the paper well, and the whole thing. Just a 100-word summary is fine.

Sometimes reviewer

(Assuming questions are allowed): does this advice depend on the verdict being recommended? It strikes me that this advice seems right if my verdict is either a straight reject or an accept without revisions—in which case speed is the most important consideration. But it is less obviously right to me if the verdict is an R&R of some kind: then it seems important that I write comments that give the author the best chance of improving the paper in the ways required to get it accepted after revision. It doesn't seem all that helpful, for example, to give only brief overall comments when changes are needed at the passage-by-passage level in order for the paper to be publishable.

Elizabeth Harman

Completely right!

Richard Yetter Chappell

Thanks to the Editors for this very helpful (and sensible-seeming) post!

It's especially helpful to hear that summaries are unnecessary (I wouldn't have guessed that). And I strongly concur with the request for frankness in communicating negative evaluations.

One thing I'm always curious about is how referees (and editors!) "explicitly assess the quality and overall significance of the paper." The confidential sections of reports might speak more to this, but I worry -- at least based on what is shared with the author -- that many referees seem to fall into a default mode of assessing papers by the metric of "can I find an objection?" rather than "does this advance the debate?"

In a separate blog post, I suggest some alternative questions that I hope might prompt referees to better focus their evaluations on the "big picture", and less on whether they personally agree with the argument. I'd be curious to hear whether the Editors agree that it could be helpful to prompt their referees to bear these sorts of questions in mind when assessing papers.

Kris McDaniel

It might be that summaries are unnecessary given the length of the typical Analysis paper. But as a subject editor for a different journal--Ergo, which publishes substantially longer pieces--I appreciate referees' summaries of lengthier pieces because they provide me with a good sense of whether the referee and I are on the same page with respect to the aims and arguments of the paper in question.


Mostly, as a referee, I appreciate explicit instructions (like these), which tell me how the editors want me to evaluate the paper. Often, I am asked to review a paper with few instructions at all... by top journals!


I think there is something to be said for A) providing a brief summary of the paper, and B) giving extensive comments. I believe that these two practices play an important communicative role, and that giving up on them could potentially make the situation even worse. This is mainly because, with so much pressure on journals to reject papers, a journal rejection sends only a very weak signal about the quality of an article. Even top fancy megastars are reporting that their papers often get rejected multiple times before being accepted these days. Moreover, most of us think that our papers are good. So, if we just receive a rejection, or rejection with very minimal comments, most of us will take this as only very weak evidence that our paper is not publishable. So, we send our papers out again with minimal changes and clog up the peer review system.

Summaries and detailed comments communicate to both the author and the editor that the referee has read the paper in detail. As an author, when I receive minimal comments with a rejection my first instinct is almost always just to send the paper elsewhere, perhaps with a few small changes. If the referee has very little to say I assume that they have not read the paper especially closely. This might not be entirely rational, but it is entirely natural. I believe the implicit reasoning is something like this: I have plenty of evidence that referees often don’t read papers (from my own experience and reports of others regarding referees just misrepresenting the content of submissions). Moreover, I have evidence that they haven’t read my paper closely –after all, if they had read my paper closely then they would have recognized its obvious brilliance. This reasoning (which I am sure most of us engage in, consciously or not) is only really defeated by evidence of a close reading. All this is to say – a rejection with only brief superficial comments sends a very weak signal regarding the quality of the paper. One referee didn’t really like it but may not have read it very closely. There is very little reason not to send such a paper out again almost immediately.

Beyond this, detailed comments, if done well, give the author a clear idea of exactly where their paper stands. No author believes that their paper is a no-hoper. If it is a no-hoper then they need to be convinced of this. If they are not convinced of it they will just send their paper out again. Clear and detailed comments are needed to persuade authors not to just send their papers out again. Similarly, if a paper needs very substantial changes to ever stand a chance of being publishable then this needs to be clearly communicated to the author so that they don’t send a half-baked paper out again immediately, but also so that they do not give up on it for good. I think this should be done with kindness, but also with a clear statement of where the paper stands – the type of statement that the Analysis editors suggest is often saved for the editors’ eyes only.

So, whilst brief reports may take less time to write, I am not convinced that they save time overall because they fail to take bad or half-baked papers out of circulation. As long as these papers are in circulation they will be taking up the time of editors and referees (not to mention the authors who are repeatedly revising these papers to no avail).

As a final small point: I actually think detailed comments send a similarly helpful message to editors in the case of a recommendation to accept. Several times I have written very brief referee reports recommending acceptance or minor revisions, only to have my report entirely disregarded by editors. This has never happened when I have provided more extensive comments. I can only conclude that this is because editors (perhaps with the exception of the editors at Analysis) assume that minimal comments = superficial reading of the paper.


This is useful. Like Richard, I wouldn't have guessed that summaries are unhelpful. I also like the idea of being frank in the comments to the author, and not just to the editor. Presumably that helps keep the author's comments more professional and less vitriolic.

Here are two questions I have:
1. I am sometimes tempted, as a referee, to note how the manuscript compares to the average article published in the journal for which I'm refereeing. Is this something that you like? dislike? encourage? discourage?
2. In the comments only to the editor, I sometimes remark (when it is relevant) on the level of confidence I have in my opinion. (E.g., I might say "I've never published on this topic and my knowledge of this literature is incomplete, so if the other referee(s) are more competent, you should put more weight on their opinions.")
Is this helpful? And is this the kind of thing you'd also like me to pass on to the author?


I'm all for reports that frankly and "explicitly assess the quality and overall significance of the paper, and set out any suggested changes." But I don't buy the argument for refraining from doing more than that. I tend to write detailed reports, and doubt that doing so makes me review fewer papers or take significantly more time on reports. What makes refereeing time-intensive is the fact that carefully reading and thinking through a paper is time-intensive. So long as I'm reading a paper carefully--which I have to do in order to competently assess its quality--I might as well write up helpful suggestions and small criticisms for the author as I go. (After all, I have frequently been deeply appreciative of such suggestions/criticisms from referees of my papers!)


For what it's worth, I understand the editors' suggestion that making reports briefer in certain ways can help with the publishing backlog. But I think the other culprit on this score is the fact that some places have suggested that authors send out many, many papers to journals at a time. If people just submitted fewer papers that would serve to address the problem. Remember Wittgenstein's quip about how philosophers should "take their time."


I tend to write longer reports than other referees, but I always think of my job as advising the editors first. The length is required to simply say what I think about the "quality and overall significance" of the paper. In particular, it's hard for me to see, in a typical case, how to characterize the contribution the paper makes without something like a summary.

With respect to quality, if the paper is good but needs improvement in various ways, then I need to say what those are in a way that will actually lead to improvement. This is guidance, too, for the editors, who will soon be looking at a resubmission.

Sometimes, however, the length of my reports is unnecessary for the purpose of advising the editors. In particular, it is hard for me to know what to do when the submission is half-baked. What I want to say is, "please think through the issues more thoughtfully." But that would be both rude and unhelpful. So, what I do instead is provide a blast of objections and requests for clarification. In every case, I give only a sample of the problems that come up -- life's too short to complete the list -- and I always try to find a polite way of saying that the author should approach writing the paper more carefully. Sometimes I say something less polite in the remarks to the editors.

It is the prospect of refereeing a paper of this sort that prevents me from taking on more refereeing. I would be especially grateful to Stacie, David, and Lee if they could offer advice on how to provide a shorter report (and one less painful to produce) in such a case. The kinds of supportive things I say to my students in cases like these -- platitudes about how everyone's early drafts bear improvement, that a little sweat really improves the final product, etc. -- would be inappropriate in the context of a referee report. In the rare case in which it's a colleague who gives me something half-baked, I always preface my comments to them by indicating my conviction that this is an early draft. Again, this sort of remark would be inappropriate in the context of a referee report. So, any tips would be much appreciated.


I largely agree with Andy. Although a brief review with only a general assessment of quality may allow an individual editorial decision to be made more quickly, these gains will likely be largely offset by the fact that such reviews do not incentivize authors to do any revision before immediately submitting again to another journal.

However, one counterpoint is that authors will in some cases resubmit without revisions even if they are given more substantive comments. Personally, I do this if I do not agree with the referee comments, since I have not yet had a case where a paper was rejected for the very same reason by two different referees. So, if I don't agree with the comments, I'll just ignore them unless other readers have a similar reaction. If these kinds of cases are widespread, then efficiency gains from quicker referee decisions will not be offset by quicker resubmissions.

Matt Bennett

I tend to write long reports to help the author, so this was a very interesting read for me.

Could a useful compromise be: (1) an explicit recommendation to reject/accept/R&R, followed by (2) brief reasons for the recommendation, (3) a brief account of the most important virtues of/problems with the paper, and then (4) any detailed comments the referee thinks could be useful to the author?

If the editors are pressed for time, perhaps they could just read 1, 2, maybe 3. The rest is for the author.

veteran referee

Thanks, Editors.
Like others, I think a brief summary is worth including, as it makes clear what the referee thinks the paper is about. I have sometimes even used the summary to highlight the fact that there are two separate lines of argument (that do not fit together) in the paper.
I also think that briefer, more pointed comments are better. I assume many editors agree, as I am asked to referee quite often. I tend to complete a refereeing job in 5 days or less. I have refereed about 200 papers, over 25 for both Philosophy of Science and Synthese, and 10 or more for EJPS, SHPS, BJPS, and Erkenntnis. And many more ...
I am distressed to see how often people just keep resubmitting the same bad paper, largely unrevised. I tend to decline offers to referee papers I have refereed before, but some authors change the title, and thus trick me, and sometimes editors beg me, knowing I have refereed the paper before. People need to send fewer papers in - aim for quality, not quantity. Especially after one has a few publications, a bunch more makes little difference to one's file or one's career. A few well placed, well argued, and original papers will go a long way.


Re: leaving out the summary: I agree with Kris McDaniel. In the case of a very short paper, a summary might not be needed, but based on my experience in editing, for longer or more ambitious papers it is actually very useful to have a summary, preferably of course a very concise one.

Re: avoiding long reports: Like grymes, I sometimes write long reports with many comments for the authors, and writing those down is not that much more work for me if it's the kind of paper which requires a close reading and inspires thoughts. Still, I also see the Analysis editors' point that for them, it's actually more work to have to read a report with a long list of comments.

I think there is a simple way to address this problem: referees could be asked to put decision-relevant comments in one section and comments which are "merely" supposed to help the author in a separate section of their report. Relevance is of course a vague notion, but I think it can be reasonably sharpened in this context:

In the case of a rejection, the only decision-relevant comments are those which directly say why the paper is not fit for publication where it was submitted. In the case of an R&R or CA, the decision-relevant comments are those which a) say why the paper can be made publishable by revisions and b) explicitly specify those revisions. In the case of acceptance, the only decision-relevant comments are those which say why the paper should be published.

I know some people already do this in their reports and to them this might sound trivial, but I've seen enough referee reports which could be improved (from an editor's perspective) by clearly making this distinction.


Another option to reduce submissions to 'top' journals is for people in 'top' philosophical countries, especially senior scholars, to submit their papers to journals published in different parts of the world. There is space available for publication, and if this practice becomes more regular, it can enhance the rating of those journals, ultimately creating more 'top' journals and more publication opportunities. Furthermore, inviting scholars from outside the philosophical 'top' centers to review papers could help address the issue of overwhelming review requests often reported by individuals from English-speaking countries and top centers. Scholars outside these centers are often available and willing to contribute as reviewers. We are also cultivated and smart enough to do reviews!

Top dog

I think you misconceive the notion of a "top" journal - that is truly a zero-sum game. You cannot have twice as many, or 10 times as many, top journals.

Nathan Salmon

I’ve long complained that the system of peer review in philosophy is broken, partly (but only partly) because too many reviewers misunderstand the nature of their task. It is not the task of the reviewer to rewrite the submission. It is fine for the reviewer to offer improvement, but that too is not the reviewer's task. In many cases, the author has more expertise on the topic than the reviewer does, often significantly more. The peer-review process isn’t, or shouldn’t be, a contest between author & reviewer. Neither is the reviewer really a “referee” in the sports sense. It also isn’t the reviewer’s task to declare whether they have been persuaded by the submission to change their pre-existing view. It certainly isn’t the reviewer’s task to declare what sort of submission they’d prefer to see, given the reviewer’s idiosyncratic interests & preferences. And it isn’t the reviewer’s task to do the editors’ job for them by looking for a rationale for rejection--any rationale. The reviewer's task, rather, is to provide a professional assessment concerning the quality of the submission and its suitability for publication in the outlet in question, specifying in a helpful way the rationale for their judgment. The reviewer has an obligation to the author to be ever professional, but the reviewer is performing a task for the editors, not for the author. Editors must then make selections from among the submissions that have been deemed suitable and of sufficiently high quality. Not a trivial task, to be sure, but reviewers are in no position to perform that task for the editors.

There are no rules

What a strange understanding of the situation! Even setting aside the fact that the number of A&HCI- and SSCI-indexed journals has grown larger over time, do you really think there can be no growth at the top to account for the massive growth of the profession, especially internationally?

Do you think the same for other prestige systems? Should we just pretend that the Academy Awards didn't double the number of nominations for the Best Picture category?

referee with a question

As a rather junior and inexperienced referee (I have refereed fewer papers than my academic stage would indicate), I have observed a strange phenomenon in myself, and wonder if it is just me. In general, I tend to write longer reports trying to help authors improve their papers, regardless of the quality of the papers. This is not the strange bit. The strange bit is that my evaluations tend to be more frank and brutal when I review rather high-quality submissions that are not necessarily without problems. When it comes to low-quality submissions, my tone changes to sugarcoating. This leads to the undesirable result that, from an external point of view, it may be hard to tell the difference in quality from my report, and it may even suggest the opposite judgment. But I really cannot control this tendency without significant effort. Am I alone? If not, this can be a serious problem for editors. If so, can someone maybe offer some suggestions for overcoming it?


First, everything Nathan Salmon said sounds about right; more referees should think like that. The process is not about changing your mind.

Second, someone above suggested something along the lines of 'if you write shorter reports, then writers won't revise and they'll just send their paper back out'. SO WHAT?! If we engage in this broken practice with good faith, then you should expect that the author took a lot of time to work on their paper, and thus the opinion of one or two persons (expressed quickly, in shorter reports, as the Editors ask for) should not motivate them to majorly revise their paper. Major revisions should happen after 4-5 rejections.


I wonder if the "good intentions" of writing longer reports to help authors is really just a cover for wanting to micromanage other people's writing and approach to doing philosophy. Maybe, we should have a discussion asking authors whether longer reports actually helped them write better papers.



I mean, it’s peer review; refs are themselves authors. I write detailed reports because such reports from others have often helped me write significantly better papers.

Executive Summarizer

I wonder whether it would make editors’ jobs easier if long-winded referees included “executive summaries” in their reports that delivered verdicts and briefly summarized the main strengths/weaknesses of each paper. This would allow those referees to divide their reports into two parts, one primarily for the editor(s), and one primarily for the author(s). The part aimed at the author could then be as long as the referee wanted it to be.

on being informative

It's a good practice that sometimes reviewers point to relevant literature, as long as it is done well. Here are a few ways things can be frustrating.

"x has a paper on F, you should engage with it."

This is uninformative on two different levels. How should the revision engage with it? And which paper exactly? x may have 50+ papers. So I guess editors should make it clear that if reviewers want to suggest more engagement with the literature, they should at least identify the title of the paper.

Another frustrating experience I had was a reviewer suggesting a book chapter that is only available in paper format, not held by any library within 8 hours' driving distance, and the book costs something like $250. I don't know where the line should be drawn, but insisting that authors engage with sources that are extremely difficult to access is somewhat problematic.

partial expert

I'm a little late to the party, but hopefully someone might see this and weigh in.

I agree that frankness is important when writing review reports, but to what extent should reviewers be frank about their own limitations? Let's say that I have been asked to review a paper drawing on two bodies of literature, X and Y, and I am an expert on X but not Y.

Should I state this in my review or in my comments to the editors? What if, say, the engagement with the literature on Y seems a little superficial to my admittedly untrained eye? Should I restrict my comments to my own field of expertise? Or should I somehow flag if I raise worries that lie at the periphery of my expertise?

I'd be curious to hear from both authors and editors.


It reads like editors should do more vetting. If there is such a thing as a "no hope" paper, it should not be sent out to peer review. That just wastes everyone's time, including the author's. Even if the editor is wrong about it being no hope, it is unlikely a positive review will see the paper published over that editorial evaluation, and it is better for the author to just submit elsewhere, where there may indeed be hope.

And if a reviewer feels the obligation to warn an editor that their opinion may carry little weight due to lack of expertise in an area, they should not be doing a review of that paper. It is always helpful to know whether a paper makes sense to someone who does not have specialized knowledge - if it does not, it may well be a poorly written paper - but that is the editor's job, and a waste of the peer reviewer's time, which could be better used reviewing a paper on a topic where they can confidently weigh in. Peer review is a great way for a reviewer to learn and keep up with the latest in her field, but that is not the purpose of peer review.

As an author, I find detailed comments very helpful. Even if I disagree, they show me how many other readers will likely perceive the paper, and they fine-tune my thinking so that those other unperceptive souls who don't appreciate my genius will, after rewriting, better get any point I might actually have.

I do see the value of somewhat sugar-coating reviews - an experienced author knows a rejected article will likely get published elsewhere, with or without revisions, but for a first-time author a rejection is a devastating blow.
