In the comments section of her recent post at NewAPPS on data she collected on journal submissions (cross-posted here), Helen De Cruz writes:
Jason Stanley...wrote (in a comment published on this blog a while ago): "I'm reviewing Kieran Healy's citation data, and it reminds me again how weird journal acceptance is. My book *Knowledge and Practical Interests* is the fifth most cited work of philosophy since 2000 in Phil Review, Mind, Nous, and the Journal of Philosophy (book or article). Yet the book itself is the result of three revise and resubmits, and finally a rejection from Phil Review. One of those drafts was also rejected from Mind, and also from Nous. All of those journals have accepted papers discussing, in many cases very centrally, a work those very journals have deemed unpublishable."
I find this very disturbing. I would wager Jason's experience is not some weird outlier. I know several senior philosophers who don't publish in general philosophy journals (anymore) but mainly in their own monographs or invited publications in handbooks etc. The reason is that they find the peer review process is not productive for getting their best work out. The peer review process is geared towards finding mistakes rather than identifying bold new ideas (which invariably have some flaws), in this way encouraging work that extends existing debates and topics, and discouraging new ideas.
Barry Lam then added:
In my experience, which seems to be corroborated by others on various blogs, the reason the process is so geared toward finding mistakes is because peer reviewers feel that, given what we are told about the selectivity and acceptance rates at these journals, we are sticking our necks out quite a lot by recommending acceptance. Speaking for myself, many peer reviewers are also submitters who have had experiences like Jason's. These experiences make me think, "Wow, if the peer reviews for my rejected papers look like this, and I thought my paper was deserving of publication (that's why I submitted it!), then my standards for reviewing papers from this journal should be similarly severe, since that appears to be the level that is correct for this journal." It doesn't occur to me enough to think, "Wow, maybe I shouldn't be reviewing submissions in the same way, but reviewing by the standards that I think my papers deserve."
These remarks (and Jason Stanley's example) raise the following question: does our peer-review system focus too much on avoiding false positives (i.e. publishing bad papers), and not enough on avoiding false negatives (i.e. failing to publish good papers)? Stanley's example is -- as De Cruz notes -- particularly striking. His book is one of the most-cited works in the "Healy four" journals since 2000, and it was not considered good enough to publish in them. How many other false negatives are there out there? How many of them have never seen the light of day because they ended up in "bad journals no one reads"?
Anyway, I'm curious to hear what everyone thinks. Does our peer-review process work as it should, at least on average? Are reviewers systematically too uncharitable, and unwilling to take bold, unconventional ideas seriously? I'd be particularly curious to see if there are more examples like Stanley's -- examples of really influential papers that were serially rejected by the Healy four or other top journals. Does anyone have similar examples to share? Please do share...the more examples the better!
In a way, however, it might be thought that Stanley's story vindicates the current peer review practices of the top journals. For "[Stanley's Knowledge and Practical Interests] is the result of three revise and resubmits, and finally a rejection from Phil Review. One of those drafts was also rejected from Mind, and also from Nous." And had his work not been so heavily vetted, it may not have come together to receive as much attention as it did, such that "[it]...is the fifth most cited work of philosophy since 2000 in Phil Review, Mind, Nous, and the Journal of Philosophy (book or article)."
Posted by: Eugene | 09/19/2014 at 04:08 PM
Hi Eugene: nice point, and well taken. But I still can't help but wonder. Stanley's work may have benefited tremendously from the vetting he received at those journals, but for all that -- given its influence -- it seems his work was likely *good* enough to be accepted by all of them, and yet all of them rejected it.
Isn't this disturbing? Rigorously vetting work is one thing. Systematically *rejecting* great work that later becomes wildly influential is another! It is a string of false negatives across several influential journals. Shouldn't strings of false negatives like this worry us, especially given the effects that they may have on people less well-placed than Stanley (i.e. people at small schools, etc., who may not have the opportunity to publish the work as a book with OUP)?
Posted by: Marcus Arvan | 09/19/2014 at 04:43 PM
But I guess the point I was making was precisely that Stanley's experience wasn't "a string of false negatives." His work, on this line of thought, wasn't up to snuff while it was still being refereed at Nous, et cetera. Eventually, with enough feedback, his work was ready to be published. And it was!
Posted by: Eugene | 09/19/2014 at 07:59 PM
It seems pretty plausible that Stanley's previous work was good enough to warrant publication in a top journal -- that his rejection constituted a 'false negative'. I'm not particularly inclined to conclude that journals are too selective on this basis. It's probably not an appropriate use of this blog to name names, but I think that there are lots (and lots) of 'false positives' in the top journals too. It seems to me that the conclusion to draw is that top journals are highly fallible at choosing the best papers, without exhibiting any particular bias in favour of accepting or rejecting.
(Indeed, given the fact that appearance in top journals is pretty close to a zero-sum game -- there is a pretty inelastic availability of slots -- it's hard to make sense of the idea that they're not getting things right "on average".)
Posted by: Jonathan Jenkins Ichikawa | 09/19/2014 at 08:16 PM
This post won't directly respond to Marcus's questions, but I want to support the worry raised in his post.
So, I'll just say that I find this to be a problem. The culture of philosophy seems to be that one must play it safe in order to survive. But safe work is not only dull, it doesn't say or do much. And the history of philosophy seems to show that new approaches to old problems are what matters, even if the arguments in favor of those approaches aren't perfect. (Others, including Marcus, have expressed similar worries in the past.)
The journals' practices here are representative of how many (not all) philosophers genuinely think philosophy should be done. As such, I'm not sure whether to continue on to a Ph.D. or not, since I'm not certain I'll be happy to work in the discipline (I have my MA, fyi).
Posted by: Anon. | 09/19/2014 at 08:23 PM
I worry that journals are too selective at some times and insufficiently selective at others. Some terrible stuff seems to slip through the peer review process, and this same process seems to block very good stuff. I think there are various causes for this, but one problem is that referees aren't given sufficient guidance by editors. It might be good if editors distributed something like a grading rubric to try to standardize things a little. As someone who referees a lot of work, I'd appreciate it if editors distributed something like a series of questions about the paper instead of just asking for comments. I think it could actually speed the refereeing process and level out the quality of reports a little.
Posted by: Clayton | 09/20/2014 at 05:23 AM
I want to second Jonathan's point. It's not a problem of false negatives per se. The general issue is just that, in my view, the so-called top journals do not have a particularly good track record of selecting very good papers. Part of the problem is, of course, that people disagree, and maybe even reasonably so, about what constitutes a very good paper. That said, I have read many papers (in my area) in the 'top journals' where I was truly astonished at how the paper could have made it past referees and editors. (I will add, FWIW, that more often than not such papers are written by famous people at prestigious programs.)
It seems to me that minimally, people should stop pretending that there is a magical formula by which the top journals scan papers for the best qualities. The decisions are ultimately made by people in the light of their own presumptions, biases, controversial criteria of 'quality' etc.
Posted by: Tom | 09/20/2014 at 06:54 AM
Hi Clayton: I was thinking the same thing. In my understanding, some journal editors used to take a much more active role in shaping journal reviewing standards (Ryle was well-known for taking an active -- some would say too active -- role while at Mind). Anyway, I think it is a great idea. When reviewers are left entirely to their own devices, bad tendencies can arise, be perpetuated, etc. (and indeed, seem to have done so). I think it would be great if editors took a more active role, and that a series of questions to direct reviewers would be a great place to start.
Posted by: Marcus Arvan | 09/20/2014 at 09:57 AM
I'm sympathetic with this worry, and wonder if it could just as easily be framed in the opposite way: Are journals insufficiently selective (on other criteria than those they are too selective about)?
As De Cruz suggests, maybe "the peer review process is geared towards finding mistakes rather than identifying bold new ideas." But that means journals should be much harsher about rejecting submissions that present--without mistakes--cautious, bland, predictable ideas. Even the language of "mistakes" here needs examination. What kinds of mistakes do and should journals focus on--stylistic, argumentative, factual?
Consider the point that Stanley's book was improved through the peer review process. Improved in what ways? In my experience, peer review improves my work above all in its clarity and style. It doesn't substantially change my idea -- my thesis and argument -- but how well I communicate it to the audience. Even where peer review finds mistakes in my argument, these are still primarily mistakes in presentation, which are fixed not by substantially changing my thesis but by tailoring and refining it.
Now, that kind of improvement is perfectly compatible with a "bad" paper, understood as one without anything new and insightful to say. A blank page has no mistakes of that kind. And so: even if peer review does "improve" a paper, does it improve it in the most important ways?
(This should also lead us to wonder: are the "best" journals really that great? What kind of "good" qualities are they selecting for, and might they select against other good qualities that are more philosophically important than those selected for?)
Posted by: Anon | 09/20/2014 at 10:37 AM
Hi Eugene: Thanks for your reply!
You write: "But I guess the point I was making was precisely that Stanley's experience wasn't "a string of false negatives." His work, on this line of thought, wasn't up to snuff while it was still being refereed at Nous, et cetera. Eventually, with enough feedback, his work was ready to be published. And it was!"
Following Anon 10:37, I sort of doubt this. Stanley's work might have benefited greatly from the three revise-and-resubmits at Phil Review and subsequent submissions (and rejections) from Nous, JPhil, etc., but presumably the paper's main *argument* -- the argument that has now become wildly influential -- was there all along. It's hard for me to believe it wasn't "up to snuff." It seems far more likely to me -- based on my own experience with uncharitable reviewers, as well as the historical record of ridiculous rejections in other fields (J.K. Rowling's string of rejections, Andy Warhol's, U2's, etc.) -- that Stanley ran into a string of people who failed to appreciate the importance of the paper's basic ideas. Alas, maybe the problem here isn't at all unique to philosophy. Given that the problem crops up in most fields of work, maybe the real lesson is that people in *general* are too critical and often unable to recognize great ideas when they're staring them in the face!
Just a few fun examples:
“We feel that we don’t know the central character well enough.” The author does a rewrite and his protagonist becomes an icon for a generation as The Catcher In The Rye by J.D. Salinger sells 65 million.
“Too different from other juveniles on the market to warrant its selling.” A rejection letter sent to Dr Seuss. 300 million sales and the 9th best-selling fiction author of all time.
“An absurd story as romance, melodrama or record of New York high life.” Yet publication sees The Great Gatsby by F. Scott Fitzgerald become a best-selling classic.
http://www.literaryrejections.com/best-sellers-initially-rejected/
http://mentalfloss.com/article/55416/10-rejection-letters-sent-famous-people
Posted by: Marcus Arvan | 09/20/2014 at 11:32 AM
I think the lesson to draw here is that publishing in a "top" journal =/= good work, and not publishing in a "top" journal =/= bad work. Unfortunately, those are the inferences far too many in our profession tend to draw.
Posted by: Rachel | 09/20/2014 at 11:25 PM
I agree with some of what's been said about the process itself. I'll just add that top philosophy journals could absolutely remain super selective (rightly or wrongly) and also simply accept more papers per year. By comparison, the most prestigious journals in other fields, such as Science and Nature, clock in at around 8-10% acceptance rates, whereas, say, Mind is at 6%, and others probably less. I suspect it's not due to lack of quality in philosophy papers.
Posted by: Wesley Buckwalter | 09/22/2014 at 12:53 PM
I agree with Wesley that it would be no problem for the most selective journals to print (or publish online) a few more issues per year, without perceptible loss of quality. Australasian J of Philosophy has an acceptance rate of 5%, for instance. I think Mind and the other top 5 have even lower acceptance rates. And yet, I'm not super-excited by most of the stuff I read in Mind, or JPhil or PRev. Most of it is, like much of philosophy, incremental. There are, of course, wonderful papers in there too. But by accepting a bit more, we might get a few more daring papers into good journals that otherwise journey from journal to journal. Now that many libraries are moving to online issues and cancelling their physical subscriptions, cost isn't an issue. PhilStudies publishes lots of issues, and it's a good quality journal. So if all top journals moved to the PhilStudies model instead of 4 issues per year, I don't think it would be to the detriment of quality.
Posted by: Helen De Cruz | 09/22/2014 at 01:45 PM
I often wonder where the line is between a paper putting forth an implausible argument/account and a paper putting forth an argument/account to which the reviewer has substantive objections. Recently I had a paper receive a revise and resubmit, which had four sets of referee comments. The first two said that the paper was outstanding scholarship, the third missed the point of the paper entirely, and the fourth cited work in cognitive neuropsychology that purportedly cast doubt on the argument I was making. I looked up and read the literature on the work cited in the fourth referee's comments, and the findings are less cut and dried than the referee suggested. This seemed to me exactly the kind of thing that should be discussed in a public forum--an author makes an argument drawing on empirical work, another person provides an objection based on different empirical work, and the author replies by trying to show that the objector's cited empirical work doesn't cast sufficient doubt. Alas, two sets of glowing referee comments + one clueless set + one set citing questionable empirical work purportedly refuting the argument = R&R. (This after nine months under review. I suspect that the journal's editors were looking for a reason not to accept the paper.)
Posted by: Scott Clifton | 09/23/2014 at 08:38 PM
Eh, slap in a footnote raising the objection and saying pretty much what you just said here (but fleshed out a bit). And in your response to the referees, note that referee #3 whiffed on their interpretation of your paper. Should be a slam-dunk publication decision out of this.
Posted by: Rachel | 09/23/2014 at 10:03 PM
Unfortunately, Rachel, the particular journal I had submitted to handles R&R's like new submissions. That's a whole new review process and I cannot wait another nine months for a decision that might not go my way (if, say, the fourth reviewer is again asked to review it).
Posted by: Scott Clifton | 09/24/2014 at 12:49 AM
But even if R+Rs are sent to new referees, don't the editors send the original referee reports to them as well? Surely the journal doesn't treat the R+R literally like a fresh submission, does it? I've had about 5 R+Rs, and even in cases where the papers were sent to fresh referees, they always saw the original referee reports along with my resubmission report.
Posted by: Rachel | 09/24/2014 at 08:55 PM