In response to my post, "Baz (and Mizrahi!) on the method of cases" -- specifically, in response to concerns about citation practices that I have raised before -- our own Moti Mizrahi proposed in the comments section that I begin a campaign for better citation practices, broadly analogous to the Gendered Conference Campaign for gender equity in philosophy conferences.
I think this is a potentially worthwhile idea. I am far from the first to note that citation practices in philosophy are problematic (see here, here, here, and here). Broadly speaking, citation practices in philosophy arguably unfairly privilege:
- Men over non-men
- The "famous" over the non-famous
- Articles that appear in "good journals" over those that appear in "bad journals", and
- "in groups" (i.e. people citing "friends" but not "people they've never heard of")
Furthermore, when I've raised this issue here before (and to people in person), I've elicited what I think are mistaken views about what citations are for. In response to my earlier post on this, one commenter suggested that citations are a form of giving "professional kudos" for good work (i.e. work in good journals). In my view -- and by the standards of every other academic field -- this is a profound misunderstanding of how citations are supposed to work. Citations exist for a very different reason: to point out to the reader that someone has published on the relevant idea(s) previously. They give credit for the simple fact that previous work on the subject has appeared. Failing to cite a paper because you think it is "bad work," or not worth paying attention to, is not what citations are for -- it simply misleads the reader into thinking that work on the subject has not appeared when in fact it has. A more fundamental problem with the practice of citing "only things you find relevant" is that it invites bias, exclusion, institutional capture (i.e. "publication rings" of people citing only their friends' work), and so on -- biases and exclusions which, we have empirical evidence, have systematically occurred in citation practices in philosophy.
All told, the evidence is that 82% of all humanities articles go uncited. This is in contrast to only 12% of articles in medicine. And I think it is unconscionable. Nobody should put great time and effort into publishing things that are then systematically ignored for reasons of gender, prestige, etc. Something ought to be done. But what? I think Moti has a good suggestion. Little seems to change in this world until some body of people stands up and says (as it were), "Enough." We should press for better citation practices in philosophy, and we can be the ones to do it -- if we so choose.
Should we so choose? How might we do it? Here is one possibility I have entertained: a new blog (maintained by me) where people can -- respectfully, and without "shaming" -- draw attention to what they feel to be wrongful citation omissions. As far as I am concerned, the posts on such a blog could be anonymous (to protect the poster from professional blowback). The only constraints might be that posts must not involve "shaming" (they would simply say: I think article X should have cited articles Y and Z, but didn't) or any kind of slander or defamation.

What does everyone think of this idea? Would it be a good idea? A bad idea? At this point, in my mind, it's just a thought -- but I think it is a thought worth discussing. I've personally seen far too many articles that fail to cite recent, clearly relevant work by lesser-known authors.

In closing, I want to emphasize that I do not regard myself as above reproach when it comes to citations. On the contrary, if I have ever failed to cite someone I should have, I would like to know it! I would like to do a better job of giving credit where credit is due -- and this is precisely why I think a blog of the sort described above might be a good idea.
Hi Marcus,
I hope you won’t take this the wrong way but I think that, if we start a new blog for the Biased Citations Campaign, it will probably suffer the same fate as the blog for underappreciated philosophy. So I think we should make the Biased Citations Campaign (and the Underappreciated Philosophy Campaign, too, by the way) a regular feature on the Cocoon. Cocooners who are contributors can post on papers that they think fit the bill and readers can write in with requests to post on papers that they think fit the bill. I take it that’s more or less what they do over at Feminist Philosophers as far as the Gendered Conference Campaign is concerned.
Posted by: Moti Mizrahi | 04/24/2014 at 09:03 PM
Just speculating here, but is it plausible that the lower citation rates in the humanities, as compared to medicine, are due to the length of articles? After doing a quick search through the New England Journal of Medicine, I found that a typical original article is about 10 pages long.
Although I know very little about the other humanities, I do know that philosophy articles are often much longer than 10 pages (aside from articles in Analysis and Thought) and sometimes extremely long-winded. Some papers in Mind, for example, give one the impression of having committed to a monograph rather than a research article. And since the trend toward lengthy writing seems to persist in philosophy, article length limits how many papers one can feasibly read in a day. Consequently, some articles on a particular issue will not be read before one submits one's own work to a conference or journal.
None of this goes against your main points, however. I think that the biases that you described above play a role in how one goes about screening the articles that will and will not be read.
Posted by: PhD Candidate | 04/25/2014 at 08:44 AM
While I appreciate some of the problems with the status quo, I'm not sure what reasonable norm would help. I don't think that a norm that says you should cite everything that's ever been published on your topic is a reasonable norm.
Suppose I'm writing a paper on the contextualist response to skepticism. This is actually one of the more manageable topics, since it is a very young idea by philosophical standards; basically everything comes from the past twenty-five years or so. Even so, we're talking about a LOT of papers. There are 126 entries in the PhilPapers directory under that topic; I'm sure there are lots more not recorded as such, but let's assume that's the comprehensive canon. Do you really think I need to cite all of those papers? I don't think I should cite them unless I've at least looked at them; do you agree with me about that? If so, you must think I have an obligation to read those 126 papers before I can publish anything on contextualism and skepticism. Assuming it takes me zero time to track them down and one hour to look at each paper, that's just over three 40-hour weeks of reading merely to prepare a bibliography on one topic. And assuming it takes an average of 15 words to cite an article, those 126 papers take up almost 2,000 words -- a very substantial proportion of the word limit at most journals.
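(To spell out that back-of-the-envelope arithmetic, taking the figures above at face value: 126 papers x 1 hour = 126 hours of reading, i.e. a bit more than three 40-hour weeks of 120 hours; and 126 citations x 15 words = 1,890 words of bibliography, which is indeed close to 2,000.)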
It sucks when one's work doesn't get cited, and you're certainly right that lots of biases come into play when one is selectively choosing which works to cite. But I just can't see that a norm requiring you to cite everything is realistic.
Posted by: Jonathan Jenkins Ichikawa | 04/25/2014 at 12:42 PM
PhD Candidate: I don't think that's very plausible. My wife works in I-O psychology; their papers are long, and they cite *everything* recent.
Jonathan: Thanks for your comment. I appreciate your worries. But how about this rule: you must cite anything that is *directly* on your paper's topic and that has been published in the last 3-5 years or posted on philpapers as forthcoming.
A couple of examples. In 2012, I published a short paper, "Unifying the Categorical Imperative." I think the argument in the paper is sound -- yet I have seen several papers published in the past couple of years on the relationship between the C.I.'s different formulas that fail to cite it. I think this is wrong.
Or take Moti's papers on intuition-mongering and the method of cases. There has been a ton of work on these issues lately, yet Moti's work systematically fails to be cited by people working in these areas. This too is wrong.
Anyone who writes on a topic should *have* to cite any relevant papers on that *exact* topic that have appeared in the last 3-5 years -- or, at least, make a clear good-faith effort to do so (my and Moti's papers, for instance, are among the very first things that show up if you do a philpapers search on the topics I mentioned). I don't mean to single out Moti and myself, by the way -- ours are just two examples that immediately come to mind!
I think this 3-5 year/good faith rule is a good one. Do you not? If so, why not?
Posted by: Marcus Arvan | 04/25/2014 at 01:19 PM
Hi Moti: Thanks for the suggestion. You are probably right! What I think I will do is offer to put together a monthly report here at the Cocoon based on reader submissions (i.e. readers emailing me about papers they think have used poor citation practices). How does that sound?
Posted by: Marcus Arvan | 04/25/2014 at 03:24 PM
Jonathan,
To add to Marcus's examples, I will mention just one more. The following paper just appeared in SHPS:
Ilkka Niiniluoto, Scientific progress as increasing verisimilitude, Studies in History and Philosophy of Science Part A, Available online 12 March 2014.
It engages specifically with a debate between Alexander Bird and Darrell Rowbottom on scientific progress. It does not cite two of my papers:
Mizrahi, Moti (2013). What is Scientific Progress? Lessons from Scientific Practice. Journal for General Philosophy of Science 44 (2):375-390. Published online: 17 November 2013.
Mizrahi, Moti & Buckwalter, Wesley (2014). The Role of Justification in the Ordinary Concept of Scientific Progress. Journal for General Philosophy of Science 45 (1):151-166. Published online: 30 January 2014.
This is so even though they are the *only* two papers in print that engage with Bird and Rowbottom on progress. Moreover, my papers present evidence against the very view that the Niiniluoto paper defends. Arguably, that's not just bad citation practice but also bad scholarship (not to mention bad philosophy). This sort of thing has to stop! Philosophers should not be able to get away with failing to cite papers that don't mesh nicely with their cherished views.
Marcus,
Sounds great to me!
Posted by: Moti Mizrahi | 04/25/2014 at 09:09 PM
One complication worth mentioning is that it's a lot easier to overlook recently published work. When I write a paper, there's a research phase during which I actively look for published work on the topic at hand, then a writing phase, then a revising phase, and then finally a waiting-for-months-on-end-while-the-paper-is-reviewed phase. The result is that by the time my paper is published, other papers on the same topic may have appeared in print that I won't have cited. I don't think this practice is unreasonable; so it might be unreasonable to expect a given paper to cite everything published within 3-5 years of its publication date.
Posted by: B.M. | 04/26/2014 at 02:27 PM
Hi B.M.: Thanks for your comment. So, I've had a few papers bounce around at journals for a while. One of them bounced around for 6 years. During that time, other people were publishing on that paper's topic left and right. I think it was my obligation to be aware of those developments in the literature and to update my paper's references to reflect the work that came out. And indeed, I was once upbraided by a reviewer for failing to do precisely this (i.e. for not citing material that had just come out). I think these expectations are entirely reasonable. Before submitting a paper anywhere (say, during revisions), one should do a brief philpapers search to see if new work has appeared. How long does that take? A couple of minutes! And how long does it take to skim a few articles to see if they should be cited? Not that long. So I still think the 3-5 year rule is reasonable (obviously, one cannot add citations while a paper is already under review at a journal, but one can always do so after the paper is accepted, at the final-revisions stage before receiving one's proofs).
Posted by: Marcus Arvan | 04/26/2014 at 02:35 PM
B.M.,
I don't think it is a serious complication. Oftentimes papers are presented at conferences and are available online through PhilPapers, PhilSci Archive, Academia.edu, etc., long before they are officially published.
The two papers I mentioned in my previous comment were both available through PhilSci Archive approximately two years before they appeared in print. I would expect a philosopher of science to check PhilSci Archive for work that is relevant to his or her research.
Posted by: Moti Mizrahi | 04/27/2014 at 11:14 AM
I take it that the problem is (roughly) that things that ought to be cited aren’t cited. Here are two possible explanations of this problem. The first is that our citation practices should be improved. The second is that the problem arises further upstream: work that ought to be engaged with is not engaged with, and so is not cited. Perhaps both explanations are relevant, but in neither case, it seems to me, would solving the problem require authors to cite all recent work on their paper's topic.
What is required for the citation process to go well? Plausibly citations perform more than one function. But I don't think that the following is true: "Citations exist … to point out to the reader the fact that someone has published on the relevant idea(s) previously. They are to give credit for the mere fact that previous work on the subject exists, and has appeared." Perhaps the citations in some papers ought to do this kind of work, e.g., survey articles. But otherwise it seems to me that journal articles don’t need to be databases of recent work: this job can be left to things like philpapers. So I don’t think the fact that some (recent) work on a topic is not cited is in itself evidence that our citation practices need improving.
What seems to me a more important role for citations is that we use them to (in Marcus's words) "giv[e] credit where credit is due". I think that what deserves credit is (something like) influencing the way the author thinks about and approaches the topic they deal with: such work should be cited. (No doubt citations can serve other purposes too.) It seems plausible that, when it comes to putting down on paper who has influenced how we think about the topic we're engaged with, biases might come into play and steer us towards the famous, the men, the people we know, etc. If so, then solving the problem requires coming up with ways to encourage people to acknowledge influences that, at present, are not being acknowledged.
But even if all goes well with the citation process, it is still possible that things that ought (in some sense) to be cited might not be cited because things go wrong before we get to the citation process. If work that ought to be read and engaged with is not, then this work won’t get cited. It seems plausible that, in deciding who to read and who to engage with, biases can once again interfere and direct us towards some works rather than others. If so, then solving the problem requires coming up with ways to encourage people to engage with work that, at present, isn’t being engaged with.
Depending on where the problem arises (and it could be from both sources, and perhaps from other places too), then different responses are called for. But in neither of the cases that I’ve discussed does it seem that citing (or reading) all recent work on the topic of one’s paper is required. (It would, of course, be good if we could do this, but perhaps, as previous posters have suggested, it is too much to demand of people.) What would help? I don’t have any positive suggestions, I’m afraid. Hopefully others do.
Posted by: Jonathan Farrell | 04/27/2014 at 07:14 PM
Jonathan: Thanks for your comment. However, I think your points are in real tension with one another.
First, you write that (1) we needn't cite everything recent because "journal articles don’t need to be databases of recent work: this job can be left to things like philpapers."
You then write that (2) what matters is "giving credit where credit is due."
And you admit that (3) people often fail to engage with work they *should* engage with because of biases (in favor of men, famous people, etc.).
In short, you say:
(1) We should give credit where credit is due.
(2) People often *don't* give credit where credit is due by citing mainly men, famous people, etc.
But (3) We don't need to cite everything recent.
The problem here is this. If (1) citations should exist to give credit, but (2) people *aren't* giving appropriate credit, then (3) is false: we should expect people to cite everything recent, not just the famous men who may have most influenced their way of thinking.
It is, I believe, your denying this entailment that leaves you without a positive suggestion for how to solve the problem. For here's the thing: how *could* we possibly solve a problem of failing to cite and engage with people's work if not by expecting people to cite work *besides* merely that which "influences" them? And that is just my proposal!
In other words, I want to say: if you recognize that there is a problem here (and you seem to), then there is but one solution. If people fail to cite and engage with other people's work, we have to *expect* them to.
Posted by: Marcus Arvan | 04/27/2014 at 07:50 PM
Jonathan: I would also add (just to reiterate something I said earlier) that I don't know of any serious scientific field in which people are merely expected to cite those who have "influenced their thinking." In every other field I know of, authors are expected to know, and cite, all of the recent literature relevant to the topic. I would also add that this convention in other fields exists precisely to protect against citation biases.
Posted by: Marcus Arvan | 04/27/2014 at 07:53 PM
Marcus, the practice you point to in other fields has its own limitations (even though I believe you're quite right). I've read somewhere (in a French scientific magazine pitched between the lay and academic levels) that all too often, erroneous citations are "repeated" throughout the subsequent literature, which suggests that many authors refer not to the original paper itself but to its summary as found in other (more recent) papers.
For your proposal to be robust, I believe one thing should be added: "... and cite the *original* paper you are drawing on, not the version of it you read in a subsequent paper." (That's reasonable too, and quite in line with your proposal.)
Now perhaps a more modest approach is worth exploring. In philosophy, we could assume that a bibliography often contains both what we might label "mandatory" and "personal" items. The mandatory items correspond to a restricted version of your proposal: the most important papers/books on the subject, those we can't pretend to be ignorant of. (This need not correspond to "big names" or "top journals.") The personal items correspond to our more personal intellectual background, to the "paths" we have followed. So, for example, I take Anderson's "Point of Equality" to be mandatory if one is working on egalitarianism (you just can't pretend to be ignorant of her contribution, and it would be deeply wrong of you to do so), while, *as of now*, I take (say) Laura Valentini's work to be more personal (although I hope she will soon be viewed as "mandatory").
To be sure, I'm not fully convinced by this more modest proposal. In its favor, I'd say that it meets the demandingness objection; but the absence of a clear-cut distinction between "mandatory" and "personal" creates a difficulty: what could/should we take the scope of the mandatory to be? If it is too restricted (big names in top journals), we meet the demandingness objection but probably fail to avoid the biases you point to; if it is too large, we might no longer meet the demandingness objection (which isn't so bad after all), and we lose sight of the point of the personal background.
Posted by: Pierre | 04/27/2014 at 08:56 PM
I agree with Jonathan Farrell. I am puzzled by the suggestion that the point of citations is "to point out to the reader the fact that someone has published on the relevant idea(s) previously." This is the point of article databases, not article bibliographies. The point of citations, in my opinion, is to record the sources that one has used. This also implicitly conveys information about what sources one considers worth using, which is why there's room to discuss proper citation practices, and why it can make sense to criticise someone for failing to cite what she should. If the discipline adopted Marcus's proposal, citation rates would tell us nothing at all about the quality or importance of the work cited; they would only tell us about the popularity of the topic.
Sometimes bad papers get published. Sometimes when bad papers are published, it's helpful to engage with them in subsequent papers, to point out their errors, but not always. Sometimes doing so would just be a distraction. I'll go so far as to say that sometimes, one does a disservice to the profession by drawing attention to neglected published work. Some work is best forgotten. Obviously I don't think it'd be appropriate for me to name papers here, but I think we have all come across at least a few papers like this. I don't want to cite them in my subsequent papers, and I don't think I should.
Again, this means that choosing what to discuss and cite is a value judgment. Sometimes people will get it wrong, and sometimes they'll get it wrong because of systematic biases. This is a serious problem, and one that we should struggle with. And yes, it can be completely legitimate to complain that someone has treated you unjustly by not citing your work. But I don't think the fact that a journal has published your work on the topic is sufficient for you to achieve that standing. (And I *certainly* don't think the fact that a journal has communicated to you an *intention* to publish your work is sufficient.)
Marcus finds a tension in Jonathan's commitment to:
(1) We should give credit where credit is due.
(2) People often *don't* give credit where credit is due by citing mainly men, famous people, etc.
(3) We don't need to cite everything recent.
I share Jonathan's commitment to these three. And indeed, they seem to me to be entirely consistent. Marcus's line seems to be something like this: denying (3) is a way to solve the problem of (1) and (2); therefore (1) and (2) are in tension with (3). Only under an extremely weak reading of 'tension' does anything like this seem remotely plausible. Many weaker responses than denying (3) would do the job just as well. For example, one could maintain (3) and add (4): we need to cite everything worth citing.
Compare this argument:
(1) We should treat candidates fairly in making hiring decisions.
(2) People often *don't* treat candidates fairly in making hiring decisions, by favouring white people, men, people from wealthier backgrounds, people from more prestigious departments, etc.
(3) We don't need to make hiring decisions by random lottery.
I think that (1) and (2) are obviously true, and that there is no interesting sense in which they 'put pressure' against (3), which is also obviously true. This even though denying (3) would be a way to solve the problem of (1) and (2).
Posted by: Jonathan Jenkins Ichikawa | 04/28/2014 at 12:35 PM
Jonathan writes that the following claim is "obviously true":
"(2) People often *don't* treat candidates fairly in making hiring decisions, by favouring white people, men, people from wealthier backgrounds, people from more prestigious departments, etc."
I take (2) to mean not only that it would be unfair _if_ people were to favour candidates of these kinds, but also that candidates of these kinds do in fact get favoured (and that this is unfair). But this is not at all obvious to me. First of all, in order for this (possible) form of treatment to be obviously unfair (if it were to be actual), we'd need to add that the white-male-rich-pedigreed candidates are being favoured _because_ they have these traits. The traits are not relevant to the fair assessment of candidates, let's agree, so favouring people for irrelevant reasons such as these would be unfair.
But it could be that these kinds of people are being "favoured" for other reasons that are relevant -- the quality of their work, say -- and that when people are favoured for those relevant reasons the results skew towards hires in those categories. Now is it obvious that this is _not_ what's going on here? Surely in order to draw that conclusion we need some way of gauging the merits of candidates _other_ than the evidence provided by demographic data about hiring or publication or citation. (If we look to that kind of evidence, it will not show that others have just as much merit as white-male-rich-pedigreed people.) So what is that other evidence?
The same holds for the original claim that
"(2) People often *don't* give credit where credit is due by citing mainly men, famous people, etc."
That might well be true. But how can it be "obviously true"? What is the evidence that the "over-represented" types of people being cited really are _over_ represented rather than simply being the ones whose work most merits citation?
My questions are not rhetorical. It wouldn't surprise me at all if some types of people were being greatly over-valued and others greatly under-valued for irrelevant reasons. But I have no idea how we are supposed to evaluate this, or why it should be "obvious" that the status quo is unfair in this way.
Posted by: Ambrose | 04/28/2014 at 01:08 PM
Oh boy, our profession is more messed up than I had thought. Not only do we not know if our methods are any good, we don't even know what citations are for. Speaking of methods, Jonathan J.K., a question for you: How can one judge what counts as a good paper worthy of citation/engagement if one does not know what counts as good philosophy (given that philosophical methodology is very much a matter of dispute)? Is it simply a matter of belonging to a certain club that does things a certain way?
Posted by: Moti Mizrahi | 04/28/2014 at 01:30 PM
I meant Jonathan J.I. Sorry for the typo.
Posted by: Moti Mizrahi | 04/28/2014 at 01:31 PM
I'm not sure I understand what you're asking, Moti. Are you expressing skepticism about our ability to recognize whether a philosophy paper is any good? That seems a fairly radical stance. Are you also concerned about whether I have any chance of grading my students fairly, or providing accurate advice to editors who ask me to perform peer reviews? While I don't know how to give a general answer to your question, it's not at all hard to tell the good from the bad in many instances. For example, some papers have central arguments irredeemably based in obvious fallacies.
And no, it has nothing to do with clubs. I find your insinuation otherwise insulting.
Sorry if you think that's messed up.
Posted by: Jonathan Jenkins Ichikawa | 04/28/2014 at 02:18 PM
Hi everyone: I understand this is a divisive issue, and that passions are involved, but let's ensure that we discuss the issue in a safe and supportive way, without insinuations or insults.
Disagreement is fine, and encouraged, but let's critique the arguments, not the people!
Posted by: Marcus Arvan | 04/28/2014 at 02:20 PM
Jonathan, I am not sure why you were insulted. I said nothing about *you*. I was talking about the profession. In any case, if you were offended, then I sincerely apologize.
I was thinking about something like this: http://philosophycommons.typepad.com/xphi/2014/04/cogsci-2014-referee-report.html
In this case, the referee seems to think that experimental philosophy is not a good method to apply to certain questions. As a result, s/he rejects the paper. The same probably holds for published papers. Someone who is not sympathetic to experimental philosophy won’t cite such work in his/her own papers because s/he thinks that experimental philosophy is a bad way of doing philosophy. That’s messed up!
By the way, I asked *how* you can tell the difference between good and bad philosophy, and your response was “I just can.”
Posted by: Moti Mizrahi | 04/28/2014 at 02:47 PM