I recently posted some concerns about citation practices in philosophy that have been echoed in many other places (see here, here, here, and here). In response to my post -- in which I argued that authors should be expected to cite every relevant article from the past 3-5 years (following standards in other fields) -- several commenters have suggested that this is the wrong way to think about citations. In their view, authors should only cite work that (A) has influenced their paper, and/or (B) they judge to be "worthy" of citing (as good work). Although I respect their difference of opinion, I am unpersuaded by their arguments, and want to explain why in this follow-up post.
I want to suggest that however well-intentioned these views on citation practices may be, it is those very views in our profession that have resulted in unfair exclusion and bias in the literature. I believe that however right these people may be in principle, in practice their view of the purpose of citations has landed us in the very spot we are in: a spot where people are systematically ignored or excluded on the basis of author biases. Allow me to explain by reference to a distinction in moral and political philosophy: the ideal/nonideal theory distinction.
I am happy to accept that in an ideal world, people would cite all and only "good" work. As Jonathan Ichikawa points out in his comment on the previous post, there are costs to citing "bad" work: that work may get undeserved uptake into the literature. Okay, all fine and well.
What I want to deny is that this is good policy in the nonideal world in which we live. I want to say: it is precisely because people appear to be systematically corrupted by biases that, in the nonideal world, it is wrong to leave it up to a matter of individual judgment who is "worth" citing. In other words, however right such a policy (i.e. "cite good work!") would be in an ideal world, the moral hazards of continuing to implement this policy in the real, nonideal world are too high.
Allow me to use a few real-life examples to illustrate. John Rawls' paper "Justice as Fairness" has received 903 citations according to Google Scholar. In the several decades since Rawls published that paper, famous philosophers including Amartya Sen, Michael Sandel, Will Kymlicka, and Thomas Pogge have received thousands of citations for raising problems with Rawls' theory -- many of them problems that Everett Hall raised in his initial commentary on "Justice as Fairness" in the Journal of Philosophy: "Justice as Fairness: A Modernized Version of the Social Contract." As I pointed out in my previous post on Rawls and peer-review, Hall literally anticipates most of the objections that people have raised to Rawls' theory in the literature: (1) Rawls' problematic focus on ideal theory, (2) Rawls' gerrymandering the original position to yield his principles, (3) the weakness of the argument for the principles, etc. But...do you know how many times Hall's article has been cited in 60 years? Nine times. People literally went on to make careers out of lodging similar objections against Rawls, and Hall received basically no credit for his contribution. Yes, his contribution may have been small, but that does not make it undeserved, or worthy of being ignored.
And what of judgments of "quality"? I am not the first person to suggest that Michael Sandel's famous communitarian critique of Rawls' theory of justice is clearly wrong. Indeed, Sandel's critique is so obviously wrong it is, in my view, baffling that it received so much attention. Sandel, for those of you who might have never come across the objection, alleges that Rawls treats people as essentially "disembodied" rational actors behind the veil of ignorance -- presupposing, in Sandel's view, a metaphysically problematic analysis of the individual. But this is plainly wrong. Rawls imagines each person behind the veil of ignorance as presupposing that they are a real person, with a real gender, race, talents, communal attachments, views of the good, etc.; they just don't know which individual they are. And yet, somehow, Sandel became famous for this critique.
Anyway, I digress. The point is simply this: sociological factors surely affect who is discussed, who is ignored, etc., and we cannot expect people to effectively self-police any more than we can expect the financial industry to self-police. The best that we can do in the nonideal world is institute norms and policies that counteract bias and function to ensure that people are not ignored. How can we do this? Answer: we can do it in the same way that every other legitimate academic field has done it. By not leaving it up to individual authors to decide -- on the basis of potentially biased grounds -- "who is worth citing", "who influenced their paper", "whose argument is good enough to cite", and so on. For all of these grounds are open to extreme, and systematic, forms of bias! This, again, is why other fields have such policies in place. One does not simply cite those who influenced you, because you may have a biased set of influences. No, you are expected to cite everything, so as to counteract such biases. No, none of this is perfect. Bad stuff may get cited, and discussed. But, bad stuff gets cited and discussed now...and in part due to biases!
In other words, I agree with those people who believe that, in principle, it would be great if we could just cite "the right people" -- people who deserve to be cited for good work, and not those who have done bad work. What I deny is that this is a good way to go in practice. To take an analogy from Team America: World Police, in an ideal world we might be able to solve international problems by sending governments really, really mean letters. Similarly, in an ideal world, we might be able to get people to cite appropriately by teaching them not to be so biased. But this is not an ideal world. In a nonideal world, we need policies to deal with things, and the policy of citing everything -- to counteract bias -- is, I believe, one of the only effective policies for good citation practices that I can imagine...and it is a policy accepted in many fields for this very reason.
Hi Marcus, I'm not sure I fully understand your proposal, so could you briefly clarify what you mean by "relevant" articles?
If I'm writing a paper defending view X against objection Y, then I think I should certainly cite whoever first came up with objection Y, along with any subsequent papers that successfully improve upon the objection. It's also clear that I should cite any papers that partly anticipate my response in defense of X (whether or not those authors actually causally influenced me), along with any serious objections that would apply to my intended response, if such objections are already extant in the literature. I expect this much is uncontroversial. So what are you claiming beyond this?
Is the thought that I ought to be citing every (recent) paper that discusses X at all? Or at least every (recent) paper that discusses objection Y to X? But surely not every discussion of a topic is relevant to my discussion (which might take things in a very different direction from some previous discussions, for example).
So isn't a large part of the dispute here just going to come down to differing judgments about what other work is "relevant" to a paper?
I guess value judgments do come in at two points in the above: in determining what the best statement of objection Y is, and in determining what (if any) "serious objections" could in turn apply to my intended defense of X. So is your thought that we must cite every variation of objection Y, and every extant objection to the kind of moves I make in my paper? If so, am I also required to discuss all these possibilities, or should I merely cite them without discussion? The former demand seems ridiculous (a recipe for terrible, bloated papers), and the latter: pointless busy-work.
Posted by: Richard Yetter Chappell | 04/28/2014 at 04:23 PM
I really think you need to think through the social epistemology here a bit more, because I think your proposal would actually be radically _counter-productive_ to the goals you are trying to support. For, suppose an academic community adopted this sort of maximalist citation practice. That means that there would no longer be any signal value for paper X that paper X is getting cited anywhere. X's appearance in a bibliography would mean no more and no less than that X was published in an appropriate forum. (We'd still need to formulate policies about what sort of publication is sufficient to be included in that "everything". Something weaker than appearing in a select list of journals, but stronger than, like, being a blog post on the topic. Let's just stipulate that this can be done.)
So, what happens now, when philosophers try to decide what to read when they want to consider working in a new area; or what to direct their students to read; or when they want to evaluate how well-received someone's work is in an area where they are not themselves deep in the relevant literature? As of now, citation practices give some decent-but-highly-fallible guidance for those questions. But in this hypothetical situation, there is no such signal. And what will happen is people will have their own informal lists, which they might circulate a bit, or maybe occasionally publish a blog post of the form "20 works you have to read on topic Y". Then they will ask their friends, perhaps via email or at an APA smoker. Indeed they will see who is getting on APA programs, and use that as a proxy for who should be read. And so on.
The long and the short of it is, under the hypothetical citation practice, (i) citations will cease to have any value as a guide to where one's philosophical attention should be directed, and (ii) people will turn to other guides, which by and large are _even more susceptible to these biases than current citation practices are_. One nice thing about our current citation practices is that they are a pretty easy way to signal to the profession at large (or, at least, whoever reads one's paper) that paper X is worthy of some of one's time & consideration. So it only takes one or two people noticing & approving of an under-attended piece of work, and then citing it in their own papers, to get that signal out there. But the citation practice you are advocating would destroy the pathway for that signal, while leaving only dimmer and more biased pathways available. But people are going to be looking for signals about what to pay attention to no matter what!
I think there is a parallel kind of debate between more radical and more moderate critiques of objectivity in science. Both take as a starting point that norms of objectivity, as standardly deployed, have significant pro-male, pro-white, heteronormative, etc. biases. But one kind of response to that situation is "let's get rid of those norms of objectivity -- they are just weapons for those in power!" and another kind is "let's take what measures we can to correct for those biases!" What you are suggesting seems to me like the former, radical approach, which I fear would have the consequence of only further empowering those biases. (If everything is just a power play, then those who have the power will be the ones who get to play.) I think it would be far wiser to pursue the latter, more moderate strategy. Maybe that means that we won't totally eliminate biases. But any such total elimination of biases was not going to be possible anyway -- all we could accomplish, on the radical proposal, would be to drive the biases under cover, where they would be even harder to detect & correct for.
Posted by: jonathan weinberg | 04/28/2014 at 04:29 PM
I agree with Richard, and with Jonathan Jenkins Ichikawa's comments from the previous thread. I'd add a couple of points.
First, citations can have different functions in different fields, just as papers have different functions in different fields. Citation practices that make sense in particular scientific fields, where papers mainly function as reports on the outcomes of experiments, might not make sense in philosophy.
Second, I don't see how your recommendation that we cite everything recent really helps with the problem of unfair bias. We only care about citation counts because they are evidence of a paper's actual influence. If we all just cited everything, then citation counts won't be much evidence about a paper's influence anymore. You won't find evidence from citation counts that there are biases affecting how influential someone's work is, but that's just because we'll have covered up the evidence--not because we'll have done away with the real problem. Maybe the idea is that citing everything will help to reduce bias indirectly, because we use each other's citations as a guide to what to read--so if you don't cite a relevant paper, then others will be less likely to read it. But as others pointed out in the last thread, someone who wants to read *every* paper on a topic can get a list of them easily on philpapers. The reason people use citations as a guide to what to read is that it works as a sorting mechanism. It gives us some (imperfect) evidence about what papers are most worth reading. If citations stop functioning as sorting mechanisms, then *maybe* the result will be a world where everybody makes a point of reading everyone else's work...but I doubt it. Instead, people will just rely on other sorting mechanisms even more, like the institutional affiliation of the author, the journal a paper is published in, whether the author seemed smart when he/she gave that APA talk 5 years ago, etc. Some of these are even more prone to unfair bias than citations are!
Posted by: David Barnett | 04/28/2014 at 07:17 PM
Hi Richard: Thanks for your comment. I'd be happy to clarify. You suggest a few different possibilities for the scenario you present:
(1) "cite whoever first came up with objection Y, along with any subsequent papers that successfully improve upon the objection"
(2) "It's also clear that I should cite any papers that partly anticipate my response in defense of X"
(3) "[cite] every (recent) paper that discusses X at all."
(4) "[cite] every (recent) paper that discusses objection Y to X."
My proposal was that there should be a rule to cite every paper in the past 3-5 years on the topic the paper is about. Since the topic of the paper you describe is Objection *Y* (and X is taken as a background view), I would endorse (1), (2), and (4), but reject (3). In other words, yes, every paper in the past 3-5 years that has *advanced* objection Y should be cited. I do not think they all need to be discussed, and I also do not think it is pointless busy-work. If people have published on Y, they should be cited!
Posted by: Marcus Arvan | 04/28/2014 at 07:18 PM
Hi Jonathan: Thanks for your comment. Your main contention is that if my proposal were adopted, "there would no longer be any signal value for paper X that paper X is getting cited anywhere."
I submit that this is false. Suppose Jones publishes a bad paper on topic X in 2014, and suppose that Smith publishes a *great* paper on X in 2014.
On my proposal, people would have to cite Jones' (bad) paper for 3-5 years. After that, since the paper is bad, it will presumably fade into the netherworld. No one will cite it anymore, but at least it will have been recognized.
On the flip-side, Smith's *great* paper may continue to be cited for years hence -- decades even.
Furthermore, my proposal does not dictate who should be discussed. If Jones' paper is awful, it will (literally) be relegated to footnotes in history. But, if Smith's paper is great, his paper will not only be cited for a longer period of time; it will also receive *discussion* that Jones' bad paper does not.
In other words, my proposal would allow not one but *two* ways of signalling which papers are good and which are not.
This point also undercuts your suggestion that my proposal is a radical form of "getting rid" of all objective standards. It manifestly does not do that. It allows *great* papers to be (1) discussed more, and (2) cited more in the long run. It just insists that less well-known papers are given *credit* in the short run.
Posted by: Marcus Arvan | 04/28/2014 at 07:24 PM
Oops, looks like Jonathan made the same point while I was writing. I agree!
Posted by: David Barnett | 04/28/2014 at 07:35 PM
Hi David: Thanks for your comment!
You write: "[C]itations can have different functions in different fields, just as papers have different functions in different fields. Citation practices that make sense in particular scientific fields, where papers mainly function as reports on the outcomes of experiments, might not make sense in philosophy."
I reply: Maybe, but in *our* field dominant citation practices appear to be systematically biased in favor of (1) famous, (2) white, (3) male philosophers from (4) prestigious schools. Any discipline ought to have *policies* to prevent this. My proposal would work to do so.
You write: "Second, I don't see how your recommendation that we cite everything recent really helps with the problem of unfair bias. We only care about citation counts because they are evidence of a paper's actual influence. If we all just cited everything, then citation counts won't be much evidence about a paper's influence anymore. You won't find evidence from citation counts that there are biases affecting how influential someone's work is, but that's just because we'll have covered up the evidence--not because we'll have done away with the real problem."
I reply: it does not merely cover up the evidence of bias. Failure to cite people is plausibly one of the main *causes* of why work by women, minorities, etc., is ignored/not discussed.
Here's why (I've seen it happen). People tend to READ what they see cited. So, for instance, if I read a paper by Male Philosopher, and his footnotes on Philosophical Problem X almost entirely cite Other Male Philosophers, who am I most likely to read if I want to publish on Problem X? ANSWER: The Male Philosophers the person cited in the paper!
This is how it all goes down. Paper 1 is published and cites mainly high-status white males. Everyone who reads Paper 1 then reads all those white males and cites them in subsequent papers (Papers X, Y, Z, etc.). Suddenly, the whole literature is dominated by citations/discussions of prestigious white males. It is *not* an accident. The whole snowball of bias/exclusion is generated by the very fact that people do not cite less prestigious non-white, non-male people in less prestigious journals.
There is but one way to stop the snowball. Insist that people cite fairly. If people cite fairly, people will read the stuff they are presently ignoring -- and the problem will resolve itself. This is what every other major academic discipline has done. I see no reason why we cannot, or why we should not think it would have precisely the effects I suggest (i.e. leading people to actually read and take seriously stuff they are presently ignoring!).
Posted by: Marcus Arvan | 04/28/2014 at 07:37 PM
Hi Marcus,
I intended the first part of my comment just to be addressing some side remarks of yours about citation practices in other fields like psychology. I just meant that they have other reasons for citing everything, aside from the ones that you think we have. (For example, they need to report what the currently available data is.)
Concerning the second part, when I wrote the bit you quoted I wasn't entirely clear on what your view was. Now that I understand it, I take back that bit (which was aimed at the wrong view), and stick to the rest (which is more or less the same thing Jonathan said).
I should add that I don't think your response to Jonathan really addresses his and my main concern, which is that as citations become less helpful as a method of determining what to read, people will simply turn to other methods that could be even more biased. I take it as a given that people will be somewhat selective in what papers they read. It's just not possible to read everything. So the question is not *whether* to rely on each other's fallible value judgments in determining what to read, but *how* to go about it. I agree that there is evidence of unfair bias among citations, and so far as I know, there isn't any evidence about other methods people use to figure out what to read. But I think that's just because citations leave a paper trail, and that many of the other methods don't. From the armchair, I'm tentatively with Jonathan in thinking that the alternatives are probably even more biased than citations.
I do appreciate your effort to make people more aware of this problem, though! Even if there is no systematic solution, individual behavior can still improve.
Posted by: David Barnett | 04/28/2014 at 09:12 PM
Perhaps the practice of review essays --- similar to what economists do in the Journal of Economic Literature --- would help. Marcus’s proposal would work perfectly fine for this kind of work: review essays are usually expected to account for almost all that has been written on their subject matter. So it is not only perfectly reasonable, but no less than required, to apply Marcus’s proposal (and maybe an even stronger proposal) to review essays.
Unfortunately we don’t have a journal similar to the JEL. (And neither do we have a classification system similar to JEL codes[*].) We do have the Compass, but the essays are extremely short (yet no less helpful for getting a quick overview of an issue). If there were such a journal, one could be expected to cite the most recent and up-to-date review essays on the topics one is exploring. And one could be expected to browse these essays’ bibliographies to find out what the most relevant papers are, rather than relying on the worse-than-imperfect practice of reading what we find in a footnote.
[*] Every paper that appears in economics must indicate its JEL classification, which covers the whole field (including philosophy of economics). See here: http://www.aeaweb.org/econlit/jelCodes.php?view=jel
Posted by: Pierre | 04/28/2014 at 10:54 PM
Hi David: Thanks for your reply, and I appreciate your supporting the effort!
You say: "I should add that I don't think your response to Jonathan really addresses his and my main concern, which is that as citations become less helpful as a method of determining what to read, people will simply turn to other methods that could be even more biased. I take it as a given that people will be somewhat selective in what papers they read. It's just not possible to read everything. So the question is not *whether* to rely on each other's fallible value judgments in determining what to read, but *how* to go about it. I agree that there is evidence of unfair bias among citations, and so far as I know, there isn't any evidence about other methods people use to figure out what to read. But I think that's just because citations leave a paper trail, and that many of the other methods don't. From the armchair, I'm tentatively with Jonathan in thinking that the alternatives are probably even more biased than citations."
Here's my problem with what you and Jonathan are saying. You are both arguing from the *armchair*. You are saying, "Well, I think if we make people cite more fairly, they will be more likely to engage in other, potentially more biased forms of how to select what to read."
First, I think that even from the armchair, this is false. People decide what to read mainly on the basis of *what* they read. I know I do this, and I know other people do it. If I come across a citation in a paper that is relevant to what I am working on, I *read* it. And I would do the same thing if more people were cited. And I think -- from the armchair -- that other people would do the same thing. People have neither the time nor the energy to go about all kinds of other byzantine means of deciding what to read. The most natural way to decide what to read is citations, and there is every a priori reason to think that if more people were cited, more people would get read. I've personally come across many people -- a "famous" philosopher just last week in fact -- who have said to me, "I'm surprised I've never come across your paper on X." I'm not surprised at all. No one cited the paper. If they had, the person would have come across it.
So, that's the first problem I have.
The second problem I have is the type of armchair reasoning you are engaging in. It is more or less an argument from ignorance (viz. "Well, things might get worse if we adopted your proposal...so we shouldn't"). Get worse? How? What we have *now* is a professional norm that predictably (and empirically) leads to systematic favoritism of famous white men at top universities, to the exclusion of everyone else. Other fields have adopted something like the norm I have proposed, and they do *not* suffer the same kind of systematic exclusion.
So, I say, the only empirical evidence we have supports my proposal, too.
Posted by: Marcus Arvan | 04/29/2014 at 09:22 AM
But -- as David and I have hit on independently, I am gratified to see -- the connection between reading and citation itself seems highly dependent on the existing practices by means of which a citation is a positive (if highly fallible) informational sign of value & relevance. Destroy that signal, and there is no reason to think that the connection between reading and citation will be maintained. And, we are arguing, there are clear mechanisms by which that connection will deteriorate. If all bibliographies swell by dozens of references, then people will decrease their use of bibliographic presence as a cue to what they will read. This is an empirical claim but not, I think, an obviously controversial one: as a behavioral cue becomes both less valid and more expensive to use, people will rely less on that cue.
The other empirical claim, which again I think is not at all controversial, is that the other easy-to-hand cues as to what to read are _even more susceptible to bias_ than citation, like reading work that has a lot of "buzz", or, indeed, work published by famous people at famous places. In order to overcome bias, you need to provide people with a signal that is less biased than what we can get from citations. Destroying the citation signal simply doesn't do that.
The proposed citation norm doesn't seem likely to make people more likely to read cited works, and for that matter, it won't even make people more likely to read the works that they will now be required to cite. Why should they? Their citing of the work would no longer confer any sort of signal, not even the minimal signal that they read the work & found it of some minimal value. I would expect the entire process of assembling bibliographies within that n-year window to become totally automated via Google Scholar or whatever.
Posted by: jonathan weinberg | 04/29/2014 at 01:25 PM
I agree with this last comment by Jonathan (what would be the purpose of an automated or mechanical list of papers one has not read?), but I am writing to add a further concern. I am afraid that the duty to mention all articles on X written in the last 3-5 years would amount to a license to ignore whatever was written before that date (after all, life is short, and if one has to read the former, one will at least partly neglect the latter). This seems to me a great evil, since it would make philosophy a matter of fashion, with people citing recent works while ignoring their roots, and raising objections to, say, the concept of personal identity that have already been discussed, albeit not in the last 5 years.
Posted by: Elisa Freschi | 04/29/2014 at 05:15 PM