Comments


Tim O'Keefe

One problem here is that the citation information (by PhilPapers' own admission) is in beta testing mode, and skimming my own citations there just now, it seems *quite* incomplete. Take a book of mine, for instance: its citation count on PhilPapers is less than 6% of its count on Google Scholar.

Now, the Google Scholar count has its own problems, including some double-counting, and maybe you'd say it includes some citations you'd want to strip out--for instance, in MA theses, undergrad honors theses, or popular books in fields other than philosophy. (It also misses some things.) But on the whole, Google Scholar gives a much better sense of how cited a piece is than PhilPapers does. (One other example: an old paper of mine has 69 citations according to Google Scholar and only 6 on PhilPapers--and only a handful of the Google Scholar citations are ones a person might want to strip out.)

Ed

I've used PhilPeople publication metrics during the tenure and promotion processes at my institution. Admins and colleagues alike report finding them useful.

Tom Cochrane

I also recently used my publication metrics in a (successful) promotion application. It's particularly useful how they break things down by sub-field. Just one of the many useful services PhilPapers provides! I wish there were a bit more philosophical discussion on there.

Daniel Weltman

The PhilPapers database (at least for many subfields, maybe for all of them) is not presently in a state where these numbers are particularly accurate, and at least for my tastes they are so inaccurate as to be useless.

To illustrate: I have 3 citations, total. Out of all the things I have ever published, they have been cited a combined total of 3 times. And all 3 of those citations are me citing me!

These self-citations put me in the 93rd percentile for citations in the entirety of value theory in the past 5 years, and in the 3rd quartile for citations in value theory over the entire history of philosophy. Both of those are very impressive numbers--value theory is a giant part of philosophy! But obviously 3 citations (all of which are me citing me) is not an impressive number. My statistics get even more impressive if you narrow the category down from "value theory," because that's where problems like the lack of accurate categorization really start to come into play. (I am in the top 3% for citations in social and political philosophy in the past 5 years! I'm in the top 1% of publications in social and political philosophy for the past 5 years!)
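To see how that can happen, here is a minimal sketch in Python (with made-up numbers--not PhilPapers' actual data or ranking method): when the pool of profiles counted in a subfield is dominated by zero-citation entries, even 3 self-citations land near the top of the distribution.

    import numpy as np

    # Hypothetical subfield pool (illustration only): most profiles are
    # one-off entries with zero tracked citations.
    rng = np.random.default_rng(0)
    pool = np.concatenate([
        np.zeros(900),             # one-off / untracked authors: 0 citations
        rng.poisson(5, size=100),  # actively cited researchers: a handful each
    ])

    my_citations = 3  # all self-citations, as above
    # Percentile rank = fraction of the pool strictly below your count.
    percentile = (pool < my_citations).mean() * 100
    print(f"3 citations ~ {percentile:.0f}th percentile in this pool")
    # Prints roughly the low 90s: a "top decile" rank from 3 self-citations.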

There are lots of varied and overlapping reasons why the numbers are not very accurate or useful, but the biggest ones are the one Tim O'Keefe mentioned (the vast majority of citations are not tracked); under-categorization (many works that ought to be in a category are not, and so your works in that category are treated as having a higher rank than they ought to have); and over-categorization (lots of people end up in the database because they authored or co-authored a single non-philosophical work that ends up on PhilPapers, so as far as PhilPapers is concerned they are very under-performing philosophers, and you do much better than all of them because all of your publications and citations are philosophical ones that can end up on PhilPapers).

In a hypothetical world where every work of philosophy (and no works of non-philosophy) were accurately categorized on PhilPapers, the numbers might be kind of useful in a "how am I doing vis-à-vis others" sort of way, but not for much else, I think. And if one has no qualms about misleading administrators, who generally do not seem equipped to evaluate this sort of thing, then these numbers can be used to impress and mystify them when compiling a tenure file or something like that. If I just fed the impressive numbers I mentioned above to administrators, I could make myself look good. Whether impressing administrators in this way is ethical is a bit iffy, I think. Ditto for colleagues who may not be aware of how misleading the numbers are, given the state of the database.

The best solution, I think, is for everyone to diligently keep their PhilPapers profile in order, which means doing the tedious work of going through all of your publications and adding everything you cite to PhilPapers's "what this work cites" list. I do this for all my publications, so if, for instance, I cite Tim, my citation appears for Tim's work. If everyone did this, then all of our citations of Tim would appear, and PhilPapers would be accurate. Another fix is to go to the page for every work that cites you (which you can find via Google Scholar) and add the fact that it cites you. If everyone did this, it would again solve the problem. But this is also tedious. Aside from this, I don't think there's any real solution, except perhaps integrating PhilPapers with Google Scholar somehow.

Tammo

Just to illustrate the problem Daniel mentioned a bit more: I just looked at my metrics, and they track my record for value theory -- I am in Q1 (the lowest quartile). I don't work in value theory, and I would never claim it as an AOS, BUT I once published an article about John Mackie that had a section about his moral error theory in it. That article has the tag "moral error theory," which PhilPapers rightly classifies as part of metaethics, which is in turn classified as part of value theory. There's no real mistake here: all the classifications are accurate, and PhilPapers just compares all the users who have at least one article relating to value theory. But there are presumably a number of people like me who don't "work in" field X but have one or two articles with little or no citations that are classified in a way that makes them show up in that field, and they make the people who actually work in field X look pretty good.

I don't think this makes these metrics useless; we just need to be aware that being in the top 25% for a field isn't a remarkable achievement. Maybe I'll become part of the top quartile for value theory once someone cites my Mackie paper :)

OP

OP here. These are all really good points re: citations and subject metrics. Google Scholar is probably a better guide for citation counts. I imagine that classifying a paper so as to make it more discoverable in search is somewhat at odds with classifying publications for the purpose of comparing metrics, because discoverability favors overly inclusive classificatory schemes that put the people who actually work in a given area in the minority.
I wonder if the metrics for publication volume or downloads are more informative. Certainly PhilPapers seems to be in a pretty good position to measure *downloads on PhilPapers*, and those do reflect some level of interest in one's work. Publication volume metrics might also be apt, so long as people actually upload their publications to their profiles.

Daniel Weltman

Re OP: Publication volume is ranked against other people on PhilPapers, which means in practice you're ranked against a lot of non-philosophers who wrote or co-wrote a single paper or book that ended up on PhilPapers (and who thus have 1 publication, as far as PhilPapers is concerned), and against a lot of philosophers who have published things that aren't on PhilPapers. So in both cases your rank ends up rather inflated. And of course retired people, dead people, etc. are also counted, along with (as Tammo noted) some people who aren't really in your subfield (and so count as having just 1 publication).

(Again, I'm in the top 1% of publications in political philosophy over the past 5 years, but believe me when I tell you I am not one of the rising superstars of my subfield!)

Download numbers, as far as I can tell, are influenced quite a bit by whether the paper is 1) open access, 2) not open access but in a journal people can easily access via their institution or whatever, or 3) in a journal that not many people have access to (like a lot of the journals hosted on PDCnet). Three papers that would otherwise be downloaded equally often will be downloaded in different amounts if they fall into different categories: the papers in categories 2 and 3 will have more downloads than the paper in category 1, and the paper in category 3 will have more downloads than the paper in category 2.

(For instance, I have two articles on very similar topics, published at basically the same time in similarly prestigious journals. The one in the open-access journal has 77 downloads and the one in the difficult-to-access journal has 133 downloads. If anything, I think the paper with fewer downloads is more important and has likely been read by more people.)

I adore PhilPapers, and I think it's one of the best things ever to have happened to the profession. I diligently upload all of my publications and I add every single citation in them, which takes a while. So I'll be the first person to shout from the rooftops about how useful PhilPapers is. But I think the publication metrics measure the vagaries of the site's spotty classification system, and nothing else. If someone tried to tell me they deserved a promotion because of some of those statistics, I would charitably assume they hadn't really thought it through, or uncharitably assume they were trying to put one over on me.

In practice I think any philosopher with a pulse who is publishing anything tends to have pretty good numbers SOMEWHERE on that metrics page, for the various reasons that have been noted, so it's tempting to invest the numbers with a degree of importance, doubly so when administrators who know nothing about philosophy are liable to be dazzled by nice-looking statistics. But a number's no good unless it's generated in a useful manner, and I think, unfortunately, none of these PhilPapers metrics is at that point yet.

OP

Daniel, all very good points. I'm becoming increasingly convinced that these metrics are pretty much worthless. Perhaps it would be better not to have them at all...

Anon44

I personally do not take the metrics seriously -- at all! They say I'm top 1% in downloads for the past 12 months in Social & Political and top 2% all time. But here are a couple of things I've noticed. First, as pointed out by others, if you've uploaded a version of your published paper, you're likely to attract more downloads. Second, and more important in my mind, is this: when I look at the web analytics, I see some really strange things. There are periods where a certain IP address hits one of my papers every single day. I suspect these sorts of hits are computers that simply scrape the internet, not people actually looking at the paper. I had one paper being "read" from a location in China nearly every day for about a year. I'd be surprised if even half the reported downloads involve actual human beings deliberately clicking the download button. Finally, a note about citations: my papers tend to have a citation count lower than what is listed on Google Scholar. Despite my citation count not being impressive at all (even if one includes all actual citations), I'm somehow still in the 89th percentile over the last 5 years in S&P philosophy. I don't trust any of it, and I'd personally be embarrassed to reference the metrics in any materials whatsoever. That said, apparently some people have success doing so.

M

The metrics are not TOTALLY useless. If they tell you that you are in the top 1% or 5% in one subfield and in the second quartile in another, then that is a pretty good indicator that your work in the first subfield is having a bigger impact than your work in the second. That is valuable information for you, if you desire to have an impact.

Daniel Weltman

@M: I don't think we can conclude this at all. The three metrics PhilPapers measures are # of publications, # of citations, and downloads.

The # of publications measure is not a measure of impact. It's possible to publish a lot and have little impact or a little and have lots of impact. (The most famous example is Gettier.) Even if it were a measure of impact, the categorization issues noted above make it impossible to compare subfields against each other. The top 1% of a subfield that does not have a topic editor on PhilPapers is very easy to achieve, because without a topic editor to add papers to that subfield, the subfield will be under-populated. And the top 1% of a large subfield is also very easy to achieve, because a huge amount of stuff will be added to that subfield (rightly so, but whatever) and so you'll easily beat all the people who publish 1 thing and nothing else. (Again, I am in the top 1% of social and political philosophy for the past 5 years, but I am a nobody!)

The # of citations measure is to some degree a measure of impact, but the majority of citations are not tracked by PhilPapers, which means your less-cited stuff could be much higher in the metrics than your more-cited stuff. Right now the only thing this measure tracks is a mixture of how the PhilPapers auto-citation finder works (not very well) and how assiduously people add their citations to PhilPapers (for most people, not at all).

The # of downloads is determined to a large degree by how available your paper is elsewhere, as I noted above. If you've published mostly in open access journals in one subfield and mostly in closed journals in another subfield, your downloads will be much higher in the second subfield even if your work in the first subfield is much more impactful. (Again, taking the two papers I mentioned above: what I suspect is the more impactful paper has been downloaded only about half as much as the less impactful paper, despite the two being otherwise more or less the same, because the more impactful paper is in an open access journal and the less impactful paper is in a journal that is tough to get access to.) And as Anon44 points out, it's not even clear how many of those downloads are legitimate human beings.

