
04/24/2014

Comments

Moti Mizrahi

Hi Marcus,

I hope you won’t take this the wrong way, but I think that if we start a new blog for the Biased Citations Campaign, it will probably suffer the same fate as the blog for underappreciated philosophy. So I think we should make the Biased Citations Campaign (and the Underappreciated Philosophy Campaign too, by the way) a regular feature on the Cocoon. Contributors could post on papers they think fit the bill, and readers could write in with requests for posts on such papers. I take it that’s more or less what they do over at Feminist Philosophers with the Gendered Conference Campaign.

PhD Candidate

Just speculating here, but is it plausible that the relative lack of citations in the humanities, as compared to medicine, is due to the length of articles? After a quick search through the New England Journal of Medicine, I found that a typical original article is about 10 pages long.

Although I know very little about the other humanities, I do know that philosophy articles are often much longer than 10 pages (aside from articles in Analysis and Thought) and sometimes extremely long-winded. Some papers in Mind, for example, give one the impression of having committed to reading a monograph rather than a research article. And since the trend toward lengthy writing seems to persist in philosophy, lengthy articles ultimately limit the number of articles one can feasibly read in a day. Consequently, some articles on a particular issue will not be read before one submits one's work to a conference or journal.

None of this goes against your main points, however. I think that the biases that you described above play a role in how one goes about screening the articles that will and will not be read.

Jonathan Jenkins Ichikawa

While I appreciate some of the problems with the status quo, I'm not sure what reasonable norm would help. I don't think that a norm that says you should cite everything that's ever been published on your topic is a reasonable norm.

Suppose I'm writing a paper on the contextualist response to skepticism. This is actually one of the more manageable topics, since it's a very young idea by philosophical standards; basically everything comes from the past twenty-five years or so. Even so, we're talking about a LOT of papers. There are 126 entries in the PhilPapers directory under that topic; I'm sure there are lots more not recorded as such, but let's assume that's the comprehensive canon. Do you really think I need to cite all of those papers? I don't think I should cite them unless I've at least looked at them; do you agree with me about that? If so, you must think I have an obligation to read those 126 papers before I can publish anything on contextualism and skepticism. Assuming it takes me zero time to track them down and one hour to look at each paper, that's just over three 40-hour weeks of reading just to prepare a bibliography on a single topic. And assuming it takes an average of 15 words to cite an article, those 126 papers take up almost 2,000 words -- a very substantial proportion of the word limit for most journals.

It sucks when one's work doesn't get cited, and you're certainly right that lots of biases come into play when one is selectively choosing which works to cite. But I just can't see that a norm that you cite everything is realistic.

Marcus Arvan

PhD candidate: I don't think it's that plausible. My wife works in I-O psychology; their papers are long, and they cite *everything* recent.

Jonathan: Thanks for your comment. I appreciate your worries. But how about this rule: you must cite anything published in the last 3-5 years, or posted on PhilPapers as forthcoming, that is *directly* on the topic your paper is on.

A couple of examples. In 2012, I published a short paper, "Unifying the Categorical Imperative." I think the argument in the paper is sound -- yet over the past couple of years I have seen several papers published on the relationship between the C.I.'s different formulas that fail to cite mine. I think this is wrong.

Or take Moti's papers on intuition-mongering and the method of cases. There has been a ton of work on these issues lately, yet Moti's work systematically fails to be cited by people working in these areas. This too is wrong.

Anyone who writes on a topic should *have* to cite any relevant papers on that *exact* topic that have appeared in the last 3-5 years -- or, at least, make a clear good-faith effort to do so (e.g., my and Moti's papers are among the very first things that show up if you do a PhilPapers search on the topics I mentioned). I don't mean to single out Moti and myself, by the way; ours are just two examples that immediately come to mind!

I think this 3-5 year/good faith rule is a good one. Do you not? If so, why not?

Marcus Arvan

Hi Moti: Thanks for the suggestion. You are probably right! What I think I will do is offer to put together a monthly report here at the Cocoon based on reader submissions (i.e. readers emailing me about papers they think have used poor citation practices). How does that sound?

Moti Mizrahi

Jonathan,

To add to Marcus’ examples, I will mention just another one. The following paper just appeared in SHPS:

Niiniluoto, Ilkka (2014). Scientific Progress as Increasing Verisimilitude. Studies in History and Philosophy of Science Part A. Available online: 12 March 2014.

It engages specifically with a debate between Alexander Bird and Darrell Rowbottom on scientific progress. It does not cite two of my papers:

Mizrahi, Moti (2013). What is Scientific Progress? Lessons from Scientific Practice. Journal for General Philosophy of Science 44 (2):375-390. Published online: 17 November 2013.

Mizrahi, Moti & Buckwalter, Wesley (2014). The Role of Justification in the Ordinary Concept of Scientific Progress. Journal for General Philosophy of Science 45 (1):151-166. Published online: 30 January 2014.

And this even though they are the *only* two papers in print that engage with Bird and Rowbottom on progress. Moreover, my papers present evidence against the very view that the Niiniluoto paper defends. Arguably, that’s not just bad citation practice but also bad scholarship (not to mention bad philosophy). This sort of thing has to stop! Philosophers should not be able to get away with failing to cite papers that don’t mesh nicely with their cherished views.

Marcus,

Sounds great to me!

B.M.

One complication worth mentioning is that it's a lot easier to overlook recently published work. When I write a paper there's a research phase during which I actively look for published work on the topic at hand, then a writing phase, then a revising phase, and then finally a waiting-for-months-on-end-while-the-paper-is-reviewed phase. The result is that by the time my paper is published, other papers on the same topic may have appeared in print, but I won't have cited them. I don't think this practice is unreasonable; so it might be unreasonable to expect a given paper to cite everything published within 3-5 years of its publication date.

Marcus Arvan

Hi B.M.: Thanks for your comment. So, I've had a few papers bounce around at journals for a while. One of them bounced around for 6 years. During that time, other people were publishing on that paper's topic left and right. I think it was my obligation to be aware of those developments in the literature, and to update my paper's references to reflect the stuff that came out. And indeed, I was once upbraided by a reviewer for failing to do precisely this (not citing stuff that had just come out). I think these things are entirely reasonable. Before submitting a paper anywhere (say, during revisions), one should do a brief PhilPapers search to see if new stuff has appeared. How long does that take? A couple of minutes! And how long does it take to skim a few articles to see if they should be cited? Not that long. So, I say, I still think the 3-5 year rule is reasonable. (Obviously, one cannot add citations to a paper while it is under review at a journal, but one can always do so after the paper is accepted, at the final-revisions stage, before receiving one's proofs.)

Moti Mizrahi

B.M.,

I don't think it is a serious complication. Oftentimes papers are presented at conferences and are available online through PhilPapers, PhilSci Archive, Academia.edu, etc., long before they are officially published.

The two papers I mentioned in my previous comment were both available through PhilSci Archive approximately two years before they appeared in print. I would expect a philosopher of science to check PhilSci Archive for work that is relevant to his or her research.

Jonathan Farrell

I take it that the problem is (roughly) that things that ought to be cited aren’t cited. Here are two possible explanations of this problem. The first is that our citation practices should be improved. The second is that the problem arises further upstream: work that ought to be engaged with is not engaged with, and so is not cited. Perhaps both explanations are relevant, but in neither case, it seems to me, would solving the problem require authors to cite all recent work on their paper's topic.

What is required for the citation process to go well? Plausibly citations perform more than one function. But I don't think that the following is true: "Citations exist … to point out to the reader the fact that someone has published on the relevant idea(s) previously. They are to give credit for the mere fact that previous work on the subject exists, and has appeared." Perhaps the citations in some papers ought to do this kind of work, e.g., survey articles. But otherwise it seems to me that journal articles don’t need to be databases of recent work: this job can be left to things like philpapers. So I don’t think the fact that some (recent) work on a topic is not cited is in itself evidence that our citation practices need improving.

What seems to me a more important role for citations is that we use them to (in Marcus's words) "giv[e] credit where credit is due". I think that what deserves credit is (something like) influencing the way the author thinks about and approaches the topic they deal with: such work should be cited. (No doubt citations can serve other purposes too.) It seems plausible that, when it comes to putting down on paper who has influenced how we think about the topic we're engaged with, biases might come into play and steer us towards the famous, the men, the people we know, etc. If so, then solving the problem requires coming up with ways to encourage people to acknowledge influences that, at present, are not being acknowledged.

But even if all goes well with the citation process, it is still possible that things that ought (in some sense) to be cited might not be cited because things go wrong before we get to the citation process. If work that ought to be read and engaged with is not, then this work won’t get cited. It seems plausible that, in deciding who to read and who to engage with, biases can once again interfere and direct us towards some works rather than others. If so, then solving the problem requires coming up with ways to encourage people to engage with work that, at present, isn’t being engaged with.

Depending on where the problem arises (and it could be from both sources, and perhaps from other places too), then different responses are called for. But in neither of the cases that I’ve discussed does it seem that citing (or reading) all recent work on the topic of one’s paper is required. (It would, of course, be good if we could do this, but perhaps, as previous posters have suggested, it is too much to demand of people.) What would help? I don’t have any positive suggestions, I’m afraid. Hopefully others do.

Marcus Arvan

Jonathan: Thanks for your comment. However, I think your points are in real tension with one another.

First, you write that we needn't cite everything recent because "journal articles don’t need to be databases of recent work: this job can be left to things like philpapers."

You then write that what matters is "giving credit where credit is due."

And you admit that people often fail to engage with work they *should* engage with because of biases (in favor of men, famous people, etc.).

In short, you say:
(1) We should give credit where credit is due.
(2) People often *don't* give credit where credit is due by citing mainly men, famous people, etc.
But (3) We don't need to cite everything recent.

The problem here is this. If (1) citations should exist to give credit, but (2) people *aren't* giving appropriate credit, then (3) is false: we should expect people to cite everything recent, not just the famous men who may have most influenced their way of thinking.

It is, I believe, your denying this entailment that leaves you without a positive suggestion for how to solve the problem. For here's the thing: how *could* we possibly solve a problem of failure to cite and engage with people's work, if not by expecting people to cite work *besides* merely that which "influences" them? But that is just my proposal!

In other words, I want to say: if you recognize that there is a problem here (and you seem to), then there is but one solution. If people fail to cite and engage with other people's work, we have to *expect* them to.

Marcus Arvan

Jonathan: I would also add (just to reiterate something I said earlier) that I don't know of any serious scientific field in which people are merely expected to cite those who have "influenced their thinking." In every other field I know of, authors are expected to know, and cite, all of the recent literature relevant to the topic. I would also add that this convention in other fields exists precisely to protect against citation biases.

Pierre

Marcus, the practice you point to in other fields has its own limitations (even though I believe you’re quite right). I’ve read somewhere (in a French scientific magazine pitched between layman and academic) that all too often, erroneous citations are “repeated” throughout the subsequent literature, suggesting that many authors referred not to the paper itself but to its summary as found in other (more recent) papers.

For your proposal to be robust, I believe it should be added: “... and cite the *original* paper you’re drawing on, not the version of it you read in a subsequent paper.” (That’s reasonable too, and quite in line with your proposal.)

Now perhaps a more modest approach could be worth exploring. In philosophy, we could assume that a bibliography often contains both what we could label “mandatory” and “personal” items. The mandatory items correspond to a restricted version of your proposal: the most important papers/books on the subject, those we can’t pretend to be ignorant of. (This need not correspond to “big names” or “top journals.”) The personal items correspond to our more personal intellectual background, to the “paths” we have followed. So, for example, I take Anderson’s “Point of Equality” to be mandatory if one is working on egalitarianism (you just can’t pretend to be ignorant of her contribution, and if you did, this would be deeply wrong of you), while, *as of now*, I take (say) Laura Valentini’s work to be more personal (although I hope she will soon be viewed as “mandatory”).

To be sure, I’m not fully convinced by this more modest proposal. In its favor, I’d say that it meets the demandingness objection; but the absence of a clear-cut distinction between “mandatory” and “personal” creates a difficulty: what could/should we take the scope of the mandatory to be? If it is too restricted (big names in top journals), we meet the demandingness objection but probably fail to avoid the biases you point to; if it is too large, we might no longer meet the demandingness objection (which isn’t so bad, after all), and we lose sight of the point of the personal background.

Jonathan Jenkins Ichikawa

I agree with Jonathan Farrell. I am puzzled by the suggestion that the point of citations is "to point out to the reader the fact that someone has published on the relevant idea(s) previously." This is the point of article databases, not article bibliographies. The point of citations, in my opinion, is to record the sources that one has used. This also implicitly conveys information about what sources one considers worth using, which is why there's room to discuss proper citation practices, and why it can make sense to criticise someone for failing to cite what she should. If the discipline adopted Marcus's proposal, citation rates would tell us nothing at all about the quality or importance of the work cited; they would only tell us about the popularity of the topic.

Sometimes bad papers get published. Sometimes when bad papers are published, it's helpful to engage with them in subsequent papers, to point out their errors, but not always. Sometimes doing so would just be a distraction. I'll go so far as to say that sometimes, one does a disservice to the profession by drawing attention to neglected published work. Some work is best forgotten. Obviously I don't think it'd be appropriate for me to name papers here, but I think we have all come across at least a few papers like this. I don't want to cite them in my subsequent papers, and I don't think I should.

Again, this means that choosing what to discuss and cite is a value judgment. Sometimes people will get it wrong, and sometimes they'll get it wrong because of systematic biases. This is a serious problem, and one that we should struggle with. And yes, it can be completely legitimate to complain that someone has treated you unjustly by not citing your work. But I don't think the fact that a journal has published your work on the topic is sufficient for you to achieve that standing. (And I *certainly* don't think the fact that a journal has communicated to you an *intention* to publish your work is sufficient.)

Marcus finds a tension in Jonathan's commitment to:

(1) We should give credit where credit is due.
(2) People often *don't* give credit where credit is due by citing mainly men, famous people, etc.
(3) We don't need to cite everything recent.

I share Jonathan's commitment to these three. And indeed, they seem to me to be entirely consistent. Marcus's line seems to be something like this: denying (3) is a way to solve the problem of (1) and (2); therefore (1) and (2) are in tension with (3). Only under an extremely weak reading of 'tension' does anything like this seem remotely plausible. Many weaker responses than denying (3) would do the job just as well. For example, one could maintain (3) and add (4): we need to cite everything worth citing.

Compare this argument:

(1) We should treat candidates fairly in making hiring decisions.
(2) People often *don't* treat candidates fairly in making hiring decisions, by favouring white people, men, people from wealthier backgrounds, people from more prestigious departments, etc.
(3) We don't need to make hiring decisions by random lottery.

I think that (1) and (2) are obviously true, and that there is no interesting sense in which they 'put pressure' against (3), which is also obviously true. This even though denying (3) would be a way to solve the problem of (1) and (2).

Ambrose

Jonathan writes that the following claim is "obviously true":

"(2) People often *don't* treat candidates fairly in making hiring decisions, by favouring white people, men, people from wealthier backgrounds, people from more prestigious departments, etc."

I take (2) to mean not only that _if_ people were to favour candidates of these kinds that would be unfair but, in addition, that these kinds of candidates do get favoured (and that that is unfair). But this is not at all obvious to me. First of all, in order for this (possible) form of treatment to be obviously unfair (if it were to be actual), we'd need to add that the white-male-rich-pedigreed candidates are being favoured _because_ they have these traits. The traits are not relevant to the fair assessment of candidates, let's agree, so favouring people for irrelevant reasons such as these would be unfair.

But it could be that these kinds of people are being "favoured" for other reasons that are relevant -- the quality of their work, say -- and that when people are favoured for those relevant reasons the results skew towards hires in those categories. Now is it obvious that this is _not_ what's going on here? Surely in order to draw that conclusion we need some way of gauging the merits of candidates _other_ than the evidence provided by demographic data about hiring or publication or citation. (If we look to that kind of evidence, it will not show that others have just as much merit as white-male-rich-pedigreed people.) So what is that other evidence?

The same holds for the original claim that

"(2) People often *don't* give credit where credit is due by citing mainly men, famous people, etc."

That might well be true. But how can it be "obviously true"? What is the evidence that the "over-represented" types of people being cited really are _over_ represented rather than simply being the ones whose work most merits citation?

My questions are not rhetorical. It wouldn't surprise me at all if some types of people were being greatly over-valued and others greatly under-valued for irrelevant reasons. But I have no idea how we are supposed to evaluate this, or why it should be "obvious" that the status quo is unfair in this way.

Moti Mizrahi

Oh boy, our profession is more messed up than I had thought. Not only do we not know if our methods are any good, we don't even know what citations are for. Speaking of methods, Jonathan J.K., a question for you: How can one judge what counts as a good paper worthy of citation/engagement if one does not know what counts as good philosophy (given that philosophical methodology is very much a matter of dispute)? Is it simply a matter of belonging to a certain club that does things a certain way?

Moti Mizrahi

I meant Jonathan J.I. Sorry for the typo.

Jonathan Jenkins Ichikawa

I'm not sure I understand what you're asking, Moti. Are you expressing skepticism about our ability to recognize whether a philosophy paper is any good? That seems a fairly radical stance. Are you also concerned about whether I have any chance of grading my students fairly, or providing accurate advice to editors who ask me to perform peer reviews? While I don't know how to give a general answer to your question, it's not at all hard to tell the good from the bad in many instances. For example, some papers have central arguments irredeemably based in obvious fallacies.

And no, it has nothing to do with clubs. I find your insinuation otherwise insulting.

Sorry if you think that's messed up.

Marcus Arvan

Hi everyone: I understand this is a divisive issue, and that passions are involved, but let's ensure that we discuss the issue in a safe and supportive way, without insinuations or insults.

Disagreement is fine, and encouraged, but let's critique the arguments, not the people!

Moti Mizrahi

Jonathan, I am not sure why you were insulted. I said nothing about *you*. I was talking about the profession. In any case, if you were offended, then I sincerely apologize.

I was thinking about something like this: http://philosophycommons.typepad.com/xphi/2014/04/cogsci-2014-referee-report.html

In this case, the referee seems to think that experimental philosophy is not a good method to apply to certain questions. As a result, s/he rejects the paper. The same probably holds for published papers. Someone who is not sympathetic to experimental philosophy won’t cite such work in his/her own papers because s/he thinks that experimental philosophy is a bad way of doing philosophy. That’s messed up!

By the way, I asked *how* you can tell the difference between good and bad philosophy, and your response was “I just can.”
