
Comments


Neil

The problem with citation data is that some areas are bigger than others, and some topics interest people outside philosophy. Bad work in bioethics routinely gets many more citations than better work in metaphysics: I know, having produced both.

Ben Bryan

I’m skeptical. It’s true that citation data are easier to track than other things. But that doesn’t give us good reason to care about them. We should, I think, be wary of embracing standards merely because they are more objective. This is, to beat your favorite drum, probably part of why we fetishize rigor: it is easier for readers to agree on what is rigorous than on what is insightful. If we ought to care about citation data it cannot be because citations are easy to track. So a defense of tracking citation numbers needs to say more about why we should care about citation data. This raises a few questions:

1: What do citation data track that matters? And for what purpose are they supposed to matter? Influence might matter when we’re trying to think about the merit of papers that have been around for a decade or more, but it strikes me as a terrible way to evaluate a paper published a year or two ago by someone on the job market. When we’re considering the merit of especially recent work, we want to think about what influence it could have, or perhaps should have, not what influence it has had.

2: (Neil’s point is a worry about this.) Suppose citations track something that matters for some purpose. Do they do so consistently enough across the discipline to be helpful? You might suggest, for example, that influence is a good way to think about at least some part of the merit of the publication records of philosophers who are up for promotion to full professor. Even if that's so, we have to address the fact Neil points to: some subdisciplines are quite different from others, and this is bound to make measures of influence favor some subdisciplines over others.

3: What effects, if any, would it have if we began encouraging people to aim for influence? What kinds of papers should we expect to be written in a discipline that encourages the production of "influential" work?

B.M.

Wouldn't all the same biases that distort what papers get accepted also distort what papers get cited? Someone who is not well connected is less likely to have his/her published work read in the first place. And those who do read it will be more inclined to ignore it (presumably if you know someone and already think highly of him/her, you will be less likely to ignore his/her paper on X when you write a paper on X).

Marcus Arvan

Neil, Ben, and B.M.: Thanks, all, for your comments. I certainly understand the worries that each of you raise. But here's an initial thought in reply.

It seems to me that we face something of a Scylla/Charybdis (or "rock and a hard place") dilemma here.

On the one hand, we can go with seemingly "subjective" measures of impact and quality -- i.e. judgments by people in the discipline at large. This seems to me to be the emphasis in philosophy. Yet this approach has costs, as there are all kinds of ways that judgments about quality and impact can be biased (e.g. in favor of people from top departments, or people in TT positions who have the luxury of time to send things out to top journals, etc.).

Some (though by no means all) of these problems would seem to be addressable in terms of more "objective" measures (i.e. citation rates), as citation rates aren't based on subjective judgments about what is "good" but rather on actual *impact* in the discipline (i.e. discussion, etc.). Despite the problems you all raise -- and I don't deny they exist -- these things seem to me relevant. A paper in a top journal which, say, ten years later has one or two citations has, I think, clearly made less of a contribution to the field than a paper in a lesser journal that has been cited a lot (even if it is a bad paper: bad papers can contribute to *better* future work, whereas a "good" paper that is never cited never does much of anything).

But, as you all point out, there are problems here too. Citation rates can be misleading on several grounds: differences in the size of subfields, insensitivity to differences in the quality of work, etc.

Here, then, is my suggestion. If both approaches have benefits and drawbacks -- and I think this is right -- there should be some *balance* to how they are used in evaluating people.

This is, I think, how it's done in many other fields. My wife, for instance, works in psychology. In her field, they make both kinds of judgments -- judgments about paper quality and journal placement, but also judgments based on citation numbers.

I didn't mean to suggest that we should do away with judgments of quality (though I realize the tenor of my post might have indicated that). My worry, rather, was that philosophy perhaps focuses too much on reputational judgments and not *enough* on numbers (viz. I've never heard someone say something like, "Yeah, so-and-so's paper was in Phil Review, but nobody's ever written anything about it" -- which still seems to me a relevant kind of judgment to make).

Finally, B.M., a quick reply to your comment: that is of course possible. However, my (admittedly anecdotal) research suggests that bias affects citation rates less than more subjective judgments of "quality." In my research, I regularly came across articles by well-known people that have hardly been cited at all and articles by lesser-known people, in lower-ranked journals, that have a lot of citations. In other words, I get the (anecdotal) impression that bias affects what people write about and cite less than, say, judgments about how good an article is based on what journal it has appeared in. But again, this is just an impression. It could be wrong.

Anyway, thanks again for all your comments!

Michael Cholbi

Citations are an imperfect measure. I've been told that one paper of mine put an end to debate concerning a particular topic, and that's why it's not cited! Cited work tends to be first or come at a critical juncture, but need not be the best. Indeed, I suspect some papers are heavily cited *because* they're wrong.

Nevertheless, I agree that we should rely more on objective measures. There's definitely an echo chamber effect in philosophy: this is a good paper because it's published in a top journal, and we all "know" the top journals publish the best material (as though even the most selective journals don't make mistakes about what to publish!). So yes, less of the reputational bias.

Chris Stephens

Incidentally, I don't think you've got the h-index definition right in your post. If you published 10 papers, each of which is cited 10 times, your h-index is 10. If, like Gettier, you published just one article, but it has 1,700 citations, your h-index is the same as someone who published one article with just one citation - namely, 1.

Your h-index can't be higher than the number of articles you've published, no matter how well cited they are.

So the idea behind the h-index is that it rewards people who write a lot of papers that are cited a lot of times. Those who write only a few papers, regardless of how well cited they are, cannot get a high h-index.
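To make the mechanics concrete, here is a minimal Python sketch of the computation being described (the citation counts below are made up purely for illustration):

```python
def h_index(citations):
    """Largest h such that h publications each have at least h citations."""
    counts = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank  # this paper's citations still meet or exceed its rank
        else:
            break
    return h

# Ten papers, each cited ten times -> h-index of 10
print(h_index([10] * 10))              # 10

# A Gettier-style record: one paper with 1,700 citations -> h-index of 1
print(h_index([1700]))                 # 1

# Twenty papers, ten of them cited at least ten times -> still 10
# (uncited or rarely cited papers don't lower the index)
print(h_index([15] * 10 + [2] * 10))   # 10
```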

Helen De Cruz

Your definition of the h-index is not accurate. The h-index is defined as "the largest number h such that h publications have at least h citations." So a person with 20 papers, 10 of which are cited at least 10 times, has an h-index of 10; a person with only 10 papers, all of which are cited 10 times or more, has the same h-index. In other words, you aren't penalized for writing papers that aren't cited.
Also, the total number of citations plays only a modest role. To take an extreme example: if, like Gettier, you have few papers but one of them is very influential (amassing thousands of citations), your h-index cannot rise above, say, 1 or 2, even if your total citation count is over 2,000.
The h-index is a good measure of combined productivity and impact for mature researchers. For instance, in Google Scholar you can find that people like Dan Dennett and David Chalmers have high h-indices. Dennett's h-index (66) is higher than Chalmers' (44) probably because the former is cited to a greater extent outside of philosophy than the latter (although both have a lot of influence outside of our discipline).
I don't think the h-index is a good evaluation tool for young researchers, but on the other hand, I think it should count for something if you are early career and have a high h-index (e.g., in tenure reviews). It is possible to bump your h-index up by self-citation, since self-citations are included, but it is very hard to raise one's h-index significantly through self-citation alone.
A while ago I looked at the h-index of journals in philosophy. You can find these data on Google Scholar. What I found particularly interesting is that a journal's citation-based rank does not correlate particularly well with its perceived prestige in our discipline.
Here is the top ten list of journals in philosophy according to their h-index over 5 years:
1. Synthese
2. Philosophical Studies
3. Mind & Language
4. Noûs
5. The Journal of Philosophy
6. Mind
7. Journal of Consciousness Studies
8. Philosophy Compass
9. Philosophy and Phenomenological Research
10. Phenomenology and the Cognitive Sciences
See here for my blogpost about this: http://www.newappsblog.com/2013/02/the-h-index-of-philosophy-journals-and-their-relative-prestige.html

Marcus Arvan

Chris and Helen: Thanks for the correction!

Dan Dennis

This is an important question.

Bad papers may trigger a number of replies pointing out the errors. So replies and citations are flawed measures. But then so are other measures...


I wonder whether it would be possible to institute a sort of Amazon-style review-and-rating system for philosophy papers and books, where only bona fide academics could do the reviewing and rating? Academia.edu would be the obvious place to do this, as it already has a large body of registered users…

Clearly there are potential problems with this, but it might be worth adding to the other flawed methods of assessment...

