
recent grad

My guess is that some of the drop in citations per article has a lot to do with the arms race going on among grad students and early career philosophers. More people are being pressured to publish without having read the relevant literature. So, slow the arms race and I bet citation practices will improve.


How have you corrected for the fact that more recent articles have been around for less time? You can't compare citation rates per article between articles that have had 10 years of attention on one hand and 1 year (or less) on the other.
Given the slow pace of getting something published in philosophy journals, even (eventually) seminal papers would be hard-pressed to collect any citations within 12 months.
You'd have to compare the proportions of articles that had been cited within x months of their publication to get any meaningful comparison here.

Lee Walters

Could you explain what data precisely the first graph is showing? Is it showing the % of all philosophy articles in the database cited in the relevant year? If so, that might demonstrate your point, but I find it hard to believe that that is what the graph is showing, unless there has been a huge increase in the database coverage or some such.


Hi Moti,

It's very frustrating. It happens quite often that I'll hear a paper or read a paper where someone offers an argument or point that I've seen made before without any acknowledgement. (And there is also the annoying habit that some have of trying to get others to cite their work without reciprocating when it's clear to both parties that the citations should mention the work of both parties.) One question about the data, though. Do we have any idea whether the number of articles published during this period increased? Two thoughts. First, as published papers pile up like corpses, you'd expect that if the same number of citations are used across time the percentage of published papers cited would decrease dramatically. Second, even if we look within a time frame (say papers published within 5 years) an increase in number of papers published within this time frame would account for the effect even if the number of citations per paper remained constant across that time.


Put pressure on journals to exclude citation details, or at least the bibliography, from the word limit. I have had papers with a bibliography of close to 800 words. For a journal with an 8000 word limit, that is 1/10 of the entire article.

Marcus Arvan

Hi Joe: With respect, I think you've misinterpreted Moti's first graph. It is not about how many citations the average philosophy paper contains. It is about how many citations the average paper in philosophy *receives*. This is important because, in many different areas of the philosophical literature, it is the same few dozen papers [usually by a few top people in a few top journals] that are cited over and over again. Moti's concern is about work being ignored when it would be scholarly appropriate to cite it, and about people exercising due diligence in scholarship.

Scott Clifton

I find this hard to believe, since I have received several reviewer comments that claim (sometimes correctly, sometimes not) that my paper fails to consider entire bodies of research or crucial pieces of literature. So now, I cite, cite, cite. As Jonathan points out above, that often produces a practical problem of word count, but I learned my lesson. How are people getting papers published when the papers fail to cite things that are directly relevant to the main argument or account? Yet again, a sign that journal reviewers pay close attention to different things and often for no discernible reason. Those papers that don't appropriately cite have just found venues that care little about this practice, but my experience suggests that other venues do care about it a lot--too much, in some cases.


There's a difference between philosophy and many higher-citing disciplines in that in philosophy, it's often a "point" or an argument that fails to be cited, rather than an experiment or a substantial conclusion. Suppose that I argue that X's conclusion "doesn't follow." Should this be cited by others who make the same point en passant? I don't think so. My own practice is to cite substantial positive theses in the general vicinity, whether they are in agreement or not, but not to worry too much about others who might have trodden on the same ground. I find that this is pretty much in line with how others treat me. Do you feel that I am too casual?


I am not a professional philosopher, but I intend to become one if I enter graduate school, so I would like to contribute from the perspective of a recent graduate who is also worried about the publishing arms race mentioned by 'recent grad.'

1. It is my assumption that well known philosophers submit their work in top journals. I also assume that well known philosophers occasionally narrow their research to work found in top journals. If a paper published in a top journal contains an argument that bears either a strong or weak resemblance to an argument made in an earlier paper, but the earlier paper was published in a less well known journal, then there is probably an innocent explanation for the resemblance.

Yet Moti describes an author who uses an example that Moti also uses; implicit in his description is either an accusation that someone illegitimately copied his example, or the (weaker) accusation that some authors unfairly advance in the profession by putting in less effort to find others who mention, discuss, or disagree with their views.

As Scott Clifton mentioned, citation practices significantly impact word count, especially since philosophers are encouraged to be charitable when interpreting or describing someone's view on a given topic. During my undergraduate studies, it was always recommended to me that I narrow my papers to a debate between few authors. In turn, I was encouraged to argue for a small, but robust point.

In other words, some undergraduates might feel pressured to research until they think their points can be made, unless that exact point was obviously found somewhere else.

Marcus Arvan

I have unpublished a couple of comments, and ask that commenters focus discussion on the issues rather than on individuals.

Jonathan Ichikawa

Like some of the other commenters, I don't understand what these graphs are supposed to be demonstrating or where they come from. I'm willing to be convinced that citation patterns have changed in the direction indicated—prior to being shown data I'm completely agnostic on this question; my anecdotal experience does not discriminate between this hypothesis and the hypothesis that we're all citing much more—but if these graphs are supposed to make the case, they need to be explained much more thoroughly.

another take

I have a different view. (Am I the only one?) I think we need to worry about ideas more, and about citations (that is, who proposed those ideas) less. There are many, many, articles out there, and we only have so many hours in a day. I would prefer we spend our limited time perfecting arguments rather than obsessing over giving (or getting) credit. I don't know, it just seems petty.

Of course, if one is purposely not citing relevant work, that is another story.

Marcus Arvan

another take: I think the concern is about ideas, not just [or even primarily] about citations.

If you look at my earlier discussion of Healy and Bloom's citation data--and indeed, at Healy's own discussion of his data [see links below]--the concern with citations reflects a broader concern about ideas: a concern about the field focusing on a relatively narrow set of authors in a narrow set of journals, not engaging with the ideas of a diverse variety of authors outside of narrow "citation-networks."

As Healy put it,

"As I’ve said before, an academic discipline is a kind of exclusive conversation. Even for very successful entrants, “participation” usually means being present as a contributor, but not as a topic-maker. It’s a little like attending an ongoing public debate. You need a ticket to enter and sit in the audience. You may be called on by the moderator to ask a question or make a point from the floor. But the agenda for the discussion is set by a much smaller panel of people up on the stage—people who started out as audience members. Most of academic life has this structure, from departmental talks to conference panels and plenaries to journal exchanges. A key issue then becomes how work gets selected for attention up on the stage, who gets engaged with from the stage, so to speak, and how this process plays out as new audience members come in the door."




Recent Grad,

There definitely is an arms race, that is true. But I'm not sure it explains the pattern. My impression--though I don't have data to back this up--is that the young philosophers are the ones who do the most citing. Only the very well-established philosophers seem to be able to get away with citing few or no other philosophers.


It's not surprising that the percentages decrease as you get closer to the present: there has been less time for an article to be cited. It takes at least 6 months to get anything published in philosophy, with a year or two being more realistic.

Or am I misunderstanding the data?


"Second, the first chart shows the percentage of cited articles in a given year."

Does this mean that for each year on that graph, the graph is showing the percentage of all articles throughout time that are cited that particular year?

Lee Walters

Hello Moti,

The updates aren't really helping. You say "the first chart shows the percentage of cited articles in a given year". The way I would read this is that it shows what % of philosophy articles were cited in the relevant year. As I said in my earlier comment, this is the type of data you'd need to make your point, but I don't believe that this is what the data are. Moreover, you seem to concede this in update 2. So chart 1 is misleadingly labelled and irrelevant to the point you are trying to make.

You then try to respond to the point made by some commentators that the decline in citations is due to the fact that it takes time to read and cite articles. This is obviously true. You try to resist this by comparing philosophy with a different/overlapping field, but how is that supposed to help? You were claiming that philosophy practices have changed over time. If so you need to compare philosophy practices at one time with philosophy practices at another time not with some other discipline's practices. And this you haven't done.

There is, of course, the separate point about how philosophy citation practices compare with other fields, but that is made by chart 2.

So none of the data you have linked to supports the point about changing citation practices in philosophy.


Unfortunately, I have to agree with Lee Walters.

Brian Weatherson

Can you post the raw data for this? I tried to replicate it and completely failed, but that's largely (entirely?) due to my lack of skill at using the database.

It did seem that the database was missing a lot of citations, when I tried to look up how many times particular articles were listed as being cited on Scopus compared to Google Scholar, or Web of Science, or in some cases just my own knowledge of seeing where the article had been referred to in print.

Shane J. Ralston

The tendency among philosophers not to cite others' work in Philosophy is confounding. The real frustration comes when you're going up for tenure and you have to explain to your committee why so few fellow philosophers cited your work. One way around this problem is to write for an audience of scholars outside Philosophy, scholars who are more likely to cite your work. I even took the step of redescribing myself as an interdisciplinary scholar rather than a philosopher, just to attract more citations. Very few of us can reinvent the wheel, but for some reason philosophers are more likely to believe that they can, even when they are really standing on the shoulders of an army of fellow philosophers (past and present) that helped to make that wheel in the first place.

Mohan Matthen

To repeat a question I asked on Daily Nous (and should more properly have asked here, my apologies): what is the x-axis in your second graph (the one entitled "H index")? And what does the size of the various circles indicate?

Charles Pigden

Dear Moti, It seems to me that your first two graphs show three things.

Graph one shows that after a few years (how many?) the majority of papers (about 80%) get cited by someone. (For the reasons others have given it absolutely does not demonstrate a radical drop-off in citation rates.) That’s good news (sort of) but not as good as all that given the second graph.

Graph two suggests a) that after a few years (how many?) the average citation rate for philosophy papers is somewhere between one and two. That’s bad news, especially for young philosophers, hoping to make their mark.

Graph two also suggests b) that the average citation rate for Philosophy papers is a bit lower than the average citation rate for Language & Linguistics and History papers but significantly higher than the citation rate for Religious Studies. Indeed for a wide range of humanities subjects the average citation-rate per paper is less than three.

If this is correct, then citation rates in the humanities are dismally low but Philosophy is not particularly unusual in this respect. Furthermore, whatever the explanation is for Philosophy’s low citation rates, it is not unique to Philosophy. So if we want to fret about it we should be fretting about the academic culture in the humanities generally, not the specific culture of Philosophy in particular.

Charles Pigden

Well, it really would help, Moti, if you deigned to explain your graphs. You say ‘the second chart about Philosophy (not HPS) does not show that “after a few years (how many?) the average citation rate for philosophy papers is somewhere between one and two.” It cannot possibly show that.’ If you have a blob representing Philosophy articles, and if the center of that blob is situated at about 1.5 on a scale which is supposed to represent the average number of cites per article in that subject, then the obvious way to read it is that the average number of cites per article (after some undeclared period of time) is between one and two. *Some* time period must be involved, because it would be a bit silly to include freshly published articles that nobody has had time to read or respond to in the calculation. They are, of course, uncited (most of them), but that’s because they have not had a chance to *be* cited. If your second graph shows that the average number of cites per paper is less than five (which you claim) but *doesn’t* show that it is about 1.5, then I am at a loss to understand how it should be read. Having checked out citation rates at various journals on the SJR, an average of 1.5 cites after four or five years sounds about right to me, though I don’t know how they aggregate the data.

What about the first graph? You say ‘the first chart about Philosophy (not HPS) does not show that “the majority of papers get cited by someone.” It cannot possibly show that. Rather, it shows the percentage of cited philosophy articles in a given year’. Well, it surely cannot show the percentage of EXTANT philosophy articles that are cited in a given year. Why not? Because if roughly 80% of extant articles in most years were cited at least once, then over a period of five years most articles would accumulate three or four citations, and THAT claim is belied by your second graph. So what the graph must show (for instance) is that 80% of North American articles PUBLISHED IN 2004 have been cited at least once, with similar figures for the papers published in most years through to 2011. Does this suggest that most (North American) articles get cited sometime by someone? Yes it does. It can easily take four or five years for an article to be cited, if it is cited at all. (I have a now reasonably well-cited paper which went totally uncited for the first *eleven* years after its initial publication in 1990, with 80% of its citations dating from 2006.) Thus the fact that 75% of articles published in 2010 have been cited at least once strongly suggests that the majority of North American articles published since that date can expect at least one citation, though they may have to wait for it.

If you want to demonstrate a drop-off, here’s what you have to do. You have to show that the papers published in 2005 or 2006 (for instance) had on average N citations *after three years* and that papers published in 2011 had on average M citations *after three years* where M is significantly less than N. It may be that you can do this and demonstrate your drop-off. But so far you have totally failed to prove your point that citation-rates in philosophy are in sharp decline.

One other point. You don’t say whether your statistics exclude self citations. If they don’t then it may very well be that the citation some time by somebody that most authors can expect is a citation by themselves.
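The cohort test Pigden describes (comparing mean citations accrued within a fixed window, say three years, across publication-year cohorts) is easy to make concrete. Here is a minimal sketch; the data format, function name, and toy numbers are all hypothetical illustrations, not drawn from the actual Scopus data under discussion:

```python
from collections import defaultdict

def mean_citations_within(records, window=3):
    """Mean citations accrued within `window` years of publication,
    grouped by publication-year cohort."""
    totals = defaultdict(int)   # cohort year -> citations landing in window
    counts = defaultdict(int)   # cohort year -> number of articles
    for pub_year, citing_years in records:
        counts[pub_year] += 1
        totals[pub_year] += sum(1 for y in citing_years
                                if pub_year <= y <= pub_year + window)
    return {year: totals[year] / counts[year] for year in counts}

# Toy data: each record is (publication year, years of each citation received).
records = [
    (2005, [2006, 2007, 2010]),  # 2 citations within 3 years
    (2005, [2008, 2012]),        # 1 citation within 3 years
    (2011, [2015]),              # 0 citations within 3 years
    (2011, [2012]),              # 1 citation within 3 years
]
print(mean_citations_within(records))  # {2005: 1.5, 2011: 0.5}
```

A genuine drop-off would show up as the later cohort's mean being significantly lower than the earlier cohort's, with the time-in-print confound removed by the fixed window.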

Dr. Fred Young

I first heard about the practice of basing merit increases on an individual's citation appearances back in the '80s. There was also a study back then (no, I don't have it at my fingertips, so won't cite it) that indicated that as of the mid-'80s, the main difference between tenured and non-tenured professors was age. This was based on a computation of the number of citations and on the prestige of the journals published in. I'm still being cited, as of 2012, although I haven't been a professional philosopher since 1990. I'm sorry to see that the trends haven't changed. Whatever happened to judging a person by the quality of his or her thought? ::sigh::

Dan Hicks

I'm a PhD philosopher who has spent a large amount of time over the last two years working in bibliometrics. I've published in the area (DOI 10.1371/journal.pone.0168597) and in September will be starting a postdoc as the sort of in-house bibliometrician for UC Davis. (This is all just to note that I have some professional experience with bibliometrics.)

Scopus doesn't index books, and so is known by bibliometricians to be unreliable for humanities in general and philosophy specifically.

I also agree with the comments requesting more details about the methods used to generate these plots.
