

Thank you very much for sharing my post. I look forward to hearing other people's thoughts on the peer review process in philosophy.

Here are my thoughts on the 3 suggestions for publishing.

Re 1. This is great advice, but easier said than done, especially for those working in full-time or even part-time temporary teaching positions.

My personal story is that for many years I worked hard and graduated with multiple top 20 publications and over half a dozen papers under review. I kept up my output for a year or so. Despite this, my job applications all seemed to disappear into a void, never to be heard from again. I have been unable to find a job paying anything close to a median income. It eventually becomes difficult to motivate oneself to continue writing. I do not feel appreciated, and I do not have much hope that my work will be rewarded in the future. I am considering dropping out after this year if I do not have better success than I did last year. I imagine many bright PhD graduates face the same decision.

I have had faculty look at my record and say, 'Surely you're getting a lot of interviews.' This would have been true a decade ago, but not now. I've had very little success on the job market. So, PhD students need to be aware of just how bad it is out there. I knew it was bad going into the PhD, but I didn't realize just how bad. I'm sure it's different if you've graduated from a top program.

Re 2. I've had more successes with top 20 journals than I have with bottom 50 journals. I think that the lower ranked journals may on average have worse referee pools, but this is just anecdotal evidence. Of course, I don’t wish to mislead. The problem with the peer review process is not limited to bottom ranked journals. I’ve had completely incompetent reports from top 10 philosophy journals on more than one occasion.

Re 3. I totally agree. Early career folk should not waste their time with the likes of Mind, Philosophical Review, or the Journal of Philosophy. I've never sent anything to any of these journals and don't plan to.

At the end of the day, a big part of why the peer review process is so abysmal (but not the only reason) is that there just are way too many young PhDs submitting and reviewing articles. We've trained way too many PhDs for very few jobs. With the increase in adjunct faculty and 70% of tuition going to admin, there just aren't the tenure-track jobs available for all the bright young candidates. So, we are all inundating the peer review process with articles hoping to get a slight advantage over our competition. This, of course, we must do or drop out of academia. But it does partly explain the problem.


Why not do what I used to do, and use (1) as a way to mitigate (2)? The success rates at top 10 journals are lower than at those further down the ladder. But if you have 4 papers out at a time, and are fairly confident that they will find homes in this round or the next, you can afford to spend extra time developing an especially good paper to send to a top 10 journal.

It's worth adding that though some of the top 10 journals are among the worst offenders for review times and quality of reports, some are among the best. The opportunity costs for submitting to, say, PPR are lower than at many lower ranked journals. My worst experiences (losing my paper at one journal, a review time of more than 48 months at another) have been with journals well outside the top 20.

Enzo Rossi

I've fantasised about a pooling system to get around some of the issues mentioned above: http://enzo-rossi.tumblr.com/post/130816262210/journal-pooling-a-peer-review-pipe-dream

More importantly, the system of review transfer currently in place at the Nature Group journals seems very promising to me. I explain it briefly in the post.


Enzo, I like the pooling idea, but it doesn't solve the problem that so many referees are of poor quality. Maybe this could be solved by having more referees? 3 would be too few. 4-6 maybe...

Really, I think what we need is a way to rank referees. Perhaps authors should be required to write a short report to the journal on the referees, and these reports should be shared among journals. Referees who are consistently ranked poorly should be eliminated from the referee pool. Editors could also contribute to these reports, so it's not just the authors' opinions.


Moreover, referee quality ranking could be made available to referees. High rankings could be mentioned on CVs, as evidence of good service to the profession.


I support Enzo's ranking suggestion.


The AJP does rank referees. Or rather it scores them. The score is based on the timeliness and helpfulness of the report. Referees with lower scores are avoided.

Those who are suggesting more referees have clearly never been involved with running a journal. It often takes weeks to find two referees willing to accept the request. And much longer to get reports out of them. The system is far too slow as it is. I would far rather have quicker reports, so that the opportunity costs of a submission are lower than currently.



You should read Enzo's idea. It involves multiple journals having access to the same referee reports. Check it out. http://enzo-rossi.tumblr.com/post/130816262210/journal-pooling-a-peer-review-pipe-dream

Re the AJP, how do they score referees? I've never had them ask me to score the reports. A worry I have is that a report could look helpful while in fact being completely incompetent.


I read Enzo's idea. It will work only for a very narrow subset of journals and a narrow subset of papers: the papers would have to be rejected on grounds of fit.

Scoring is done by the associate editors, who have at least some expertise in the field of the paper.

Enzo Rossi

Hi Postdoc and Neil,

Within some journals referees are scored, if not ranked. When I get a review, I have the option to rate it for both timeliness and quality. This is then recorded in our reviewers' database. I suppose it might be a good idea to share this information more widely, but since this is done through software licensed to our publisher, I don't know what the legalities would be. In any case, I have a rule of thumb of one review per person per year, so it'd be hard to build a reliable dataset unless journals shared all their data on reviewers. But I anticipate several objections there, some reasonable.

It's true that the pooling system, even in the Nature version, requires a group of journals with comparable interests and so on. And yes, the most relevant cases will be those of papers rejected for reasons of fit. But that's not such a random subset: lots of fairly promising R&Rs become rejections for reasons of fit, for instance.


I propose 8 relatively simple changes to drastically improve the refereeing process.

1. Authors and editors rank referees. These rankings are shared between journals. Referees who consistently get poor rankings are no longer used.

2. After an article is accepted, referees can choose to reveal their names and be formally thanked and acknowledged in the article. This should be done in a way that's noticeable, so that referees get more credit for their work. It's also more honest: almost all published articles are really the result of collaboration with the referees.

3. As a profession, we should take refereeing more seriously as a part of our jobs. We should place more weight on it with regard to hiring and promotion. (Change 2 would help with this.)

4. Journals should have clear scopes or remits, and desk rejections need to be based on reasons. Journals like Phil Imprint have a very obscure scope and desk reject a great deal. You never really know why, and so you can't make improvements.

5. Editors need to stop intentionally aiming for super-high rejection rates. Philosophy has the highest rejection rates of any field I know of, and there is no need for it. Instead, editors should adopt the mindset of trying to discover whether an article is actually worth publishing. This will decrease the number of articles circulating in the system.

6. No article should be accepted or rejected based on one review. That's not peer review; that's one person's opinion. There is a difference. (If the other suggestions were implemented, referees would be quicker, so this could be done.)

7. PhD programs need to admit only as many students as they think can find permanent jobs. So we're talking about cutting admissions in half or more. Fewer desperate PhD graduates means fewer articles clogging up the system.

8. You should be able to share referee reports with another journal. So, if highly ranked journal X gets two reports that say R&R but rejects you anyway (which happens), you should be able to share those reports with another journal that might be interested in R&Ring you. This would cut down on referee use. It would also cut down on the lottery element.

I can't think of anything else right now. What do people think?
