Over the weekend, a number of my philosopher friends shared the article, "Why Professors Are Writing Crap That Nobody Reads", on social media. The long and short of the article is that academic publishing has mostly turned into a scoring system for tenure and promotion:
Professors usually spend about 3-6 months (sometimes longer) researching and writing a 25-page article to submit to an academic journal. And most experience a twinge of excitement when, months later, they open a letter informing them that their article has been accepted for publication, and will therefore be read by…Yes, you read that correctly. The numbers reported by recent studies are pretty bleak:
- 82 percent of articles published in the humanities are not even cited once.
- Of those articles that are cited, only 20 percent have actually been read.
- Half of academic papers are never read by anyone other than their authors, peer reviewers, and journal editors.
So what’s the reason for this madness? Why does the world continue to be subjected to just under 2 million academic journal articles each year?
Well, the main reason is money and job security. The goal of all professors is to get tenure, and right now, tenure continues to be awarded based in part on how many peer-reviewed publications a professor has. Tenure committees treat these publications as evidence that the professor is able to conduct mature research.
A lot of my philosopher friends commented that this is disturbing--but I'm not sure it has been adequately appreciated just why it should disturb us. The real problem isn't that this system has turned academic publishing into a cynical and depressing mechanism for personal advancement. The problem is that it undermines the epistemic credentials of our discipline. Allow me to explain.
Consider the following thought-experiment. Suppose that researchers in physics--say, people who work at the Large Hadron Collider--published empirical findings in academic journals year after year. Suppose then that, because competition for journal space is fierce, only some of these findings found their way into "top journals" in the field. Suppose, further, that, moving forward, 80+% of all of the findings were systematically ignored, and never discussed or evaluated in the literature. Suppose, next, that some of that ignored 80% contradicted the findings in the 10-20% of the literature that is discussed, providing alternative findings. Finally, suppose that the field itself--including its history books--reported only the 10-20% of findings that were discussed as "progress in the field." There are fairly obvious reasons to think that if this were the case in physics, it would not be a field in good epistemic standing. The scientific method only works to the extent that findings and hypotheses are attended to and tested, not summarily ignored on the basis of personal or group judgments. If some "findings" are discussed but others are summarily ignored, a serious question arises of whether the "progress" the field reports is more a matter of sociology than of sound science or inquiry.
Might philosophy's epistemic credentials be similarly imperiled? I have suggested before that there are some reasons to wonder whether this may indeed be the case, as sociological forces can plausibly affect the subfields of philosophy that people go into, the arguments they make, which arguments "win out", and so on. But instead of making that case again, I'd like to address a common reply I've come across on more than a few occasions: the reply that most articles in philosophy are "crap" that doesn't warrant any response in the literature. Yes, unfortunately, I have heard this more than a few times, from more than a few people. But here's the problem. Aside from being, in my view, offensively dismissive (we owe our colleagues better than simply dismissing their work), there is a deeper problem with this reply, which I will call the "Trust-Me" Model of philosophical progress and evaluation.
Go back for a moment to the thought-experiment above. Suppose once again that, in physics, 80+% of articles reporting new findings were summarily ignored. Then suppose that a common reply in the discipline were, "Most of those new findings are crap"...despite the fact that almost no one ever actually engaged with the work in question or showed that it is "crap." Here is the basic epistemic problem with this. Good science, and good epistemic practice, is not based on principles like, "Trust me, I know X", or even "Trust us, we know X." Sound epistemic practice involves carefully testing and evaluating hypotheses. It may well be that only 10-20% of philosophy articles are any good. But, if we are to be an epistemically responsible discipline, this cannot simply be asserted on the basis of "personal judgment"--for human beings are notoriously biased creatures. One possible reason that article X did not get published in Phil Review is that it is a bad article. Another possibility is that it is a good article and an important contribution, but that some other factor led it to end up in a lower-ranked journal (perhaps the author needed publications to get a job, or tenure, etc.). In order to know which of these (or other) hypotheses is true, the work in question must be tested: its merits and deficiencies must be examined. And again, a critical part of scientific and epistemically responsible practice is that a suitable examination is not merely, "Trust me, it doesn't deserve discussion." We wouldn't accept this model in physics or biology or psychology, and for good reason. "Trust me" is not an argument. If a work is not worthy of discussion, it should be fairly easy to publish something showing why--and we should not assume that it is not worth discussing until someone actually shows it.
If this is right, philosophical norms should change. I've written before about how journal restrictions on "reply" pieces are a problematic barrier to philosophical discussion. I want to suggest that this is only a small part of the problem. Another big part of the problem is disciplinary norms: we don't currently expect ourselves or others to engage with and evaluate work in a broad range of venues. Currently, academic philosophy consists largely in each person publishing free-standing articles, one after another--most of which, again, are summarily ignored. Replies are the exception, not the rule. And the result is not so much a philosophical discussion--an open exchange and testing of philosophical ideas and arguments--as a series of unanswered monologues, in which only a select few are engaged with, according to a "trust me" model of philosophical evaluation and significance.
This is not the way things have to be, and not, I think, how they should be. In some other fields, such as physics and psychology, articles in lower-ranked, out-of-the-way journals are regularly engaged with. For instance, my most-cited paper by far (with 25 citations since 2013) is a psychological study in an interdisciplinary journal. Although it did not appear in anything like a top journal in the field, academic psychologists read and engaged with the work--because those are the norms of their discipline. They do not work with a "trust-me" model of evaluation and engagement. Psychologists expect themselves and each other to engage with new work wherever it appears. We philosophers can--and, I believe, should--do the same. If we want to be an epistemically defensible discipline, we should not appeal to the Trust-Me Model of philosophical evaluation. We should engage with published work in general, and demonstrate the merits and deficiencies of that work.
How might this be done? Let me float a few possibilities. First, journals might reduce the number of pages allotted to standalone articles and increase the number allotted to reply pieces, including replies to pieces in other journals. This would not only make publishing standalone articles more competitive, helping to ensure the quality of those pieces; it would plausibly increase incentives for philosophical discussion. Second, we must develop a culture and norms of engaging with each other's work, including work in lower-ranked journals. This might be done by reviewers and editors imposing more stringent controls on citation and discussion practices within standalone articles. If, for instance, a new standalone piece is submitted to a journal (including a high-ranking journal), but that piece cites and discusses only articles in the area from a few high-ranking journals, ignoring relevant publications from other journals, then the piece should not be accepted. It should, at the very least, be given a revise-and-resubmit with suggestions or requirements to cite and discuss relevant pieces in other journals. While I expect this might sound onerous to some readers, it is important to recognize that it is standard research practice in other fields. In fields like psychology and physics, journals and editors do not allow one to cite and discuss only articles by "famous people" in "top journals." One is expected to know and engage with the relevant recent literature in general, wherever it appears.
I propose that these kinds of changes in practice would not only plausibly make philosophy a more enjoyable and inclusive place to work (as it can surely be dispiriting to defend arguments that people simply ignore), but, perhaps more importantly, a more epistemically defensible discipline. If we are after philosophical truth and telling good arguments from bad--as I hope we are--we should not rely on a "trust me" or "trust us" model of evaluation. We should engage with philosophical research broadly and publicly demonstrate the merits and deficiencies of that work. Or so say I. What say you?