
08/26/2013

Comments


eyeyethink

Google Scholar is based only on recent citations, and no other distinctions are made. Thus, Phil Review is ranked below Phil Compass, even though the former's reputation is pretty indisputable and the latter (while excellent and incredibly useful) is meant to contain little if any original research.

ESF/ERIH is fairly coarse-grained--it divides things into only three ranks (or four if you count W = not evaluated). Even then, it puts APQ and PPQ in the mid-range, behind many others I've never heard of. (But granted, many could be excellent specialist journals of which I'm simply ignorant.) SJR is not as coarse-grained, but it also puts APQ and PPQ surprisingly low. (Of course, I'm American, and perhaps these journals don't have the same reputation elsewhere.)

Brooks' Blog and Leiter seem to fit better with my experience, fwiw. (They are significantly in agreement as well, apparently, at least more than any other pairing.) Brian Weatherson also has a journal ranking, but he refers the reader to Leiter's as more up to date. So I'm most confident about Brooks/Leiter, though clearly these things are very hard to measure.

Michel X.

I guess it tells us that an unrepresentative sample of the philosopher-population thinks it's pretty good. That would be worth something for anyone who knew nothing about how various journals are regarded, but, as the tone of your post indicates, there's no way it could tell us anything more fine-grained than that.

I guess that's useful?

David Morrow

For starters, Leiter's ranking and the Google ranking are measuring two different aspects of journal quality—viz., prestige and citation density, respectively. So it's unsurprising that they find different results. Citation density is easier to measure but harder to interpret, IMO.

When it comes to prestige-based rankings, some significant disagreement is to be expected, especially when the rankings are compiled through unscientific surveys, as Leiter's and Weatherson's were. This introduces noise that should make us skeptical of fine-grained distinctions, as Michel points out. But it's no coincidence that Phil Review shows up in all of these rankings while the Transylvanian Journal of Vampire Studies does not.

What are prestige rankings good for? On a practical level, they help you gauge how people on hiring committees or tenure & promotion committees would view publications in various journals, especially if they lack the time or specialized expertise to read and evaluate your work for themselves.

Moti Mizrahi

Thanks for the comments, everyone. They're very helpful.

Would you guys agree, then, with the following? If one wants to impress other philosophers (especially senior philosophers like those who usually serve on search committees, etc.), then rankings of perceived prestige can be useful. If one wants one's work to be read and cited, then citation rankings can be useful.

Does that sound about right?

David Morrow

I agree with the first part of that, Moti, but I'm not sure about the second. Being read and being cited might come apart. There are probably many papers in prestigious journals that get read far more than they're cited, since I imagine that more people read the prestigious journals regularly, rather than just when they're looking for something in their research areas. So publishing in the prestigious journals might be the best way to get read. Also, I'm a little skeptical that papers in the most cited journals are being cited *because* they're in those journals, except insofar as the journal's prestige attracts readers to it in the first place. Maybe some most-cited journals—like Philosophy Compass—tend to publish articles of the kind that get cited. But if you had such an article, then it would presumably get cited widely wherever it was published, as long as it was being read. These sorts of considerations are the reason that I suggested that citation-based rankings are harder to interpret.

A more useful metric, if someone were to compile it, might be something like the correlation between the number of views a paper gets on PhilPapers and the journal it's published in.

Daniel

Like David, I agree that it's not so clear that "if one wants one's work to be read and cited, then citation rankings can be useful." Citation rankings usually employ metrics, like the h-index, that are biased in favor of journals that publish a lot.

The h-index is essentially a measure of how many highly cited articles one has in absolute terms. (And the h5-median, also used by Google Scholar, is somewhat misleadingly titled, as it is only the median citation count among a journal's most cited articles.) In rankings like that, of course Phil Review, which publishes four issues a year with just 2-4 articles per issue, will do worse than a journal like Synthese, which publishes 18 issues per year with far more articles per issue. Likewise with Phil Studies, which is also ranked highly.

I'm not sure what citation rankings that correct for this sort of bias would look like. Incidentally, ranking individuals in terms of research productivity by h-index makes a lot of sense. You don't want a purely average-based metric there, since it makes sense that, e.g., David Lewis ought to be counted as more influential than Ed Gettier, even if Gettier's average article was cited more (just guessing). So some way of giving extra weight to producing more in absolute terms, as the h-index does, seems to me to make sense for ranking individuals. But not for journals, since a journal, unlike an individual, can always publish more articles at will.
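
To make the arithmetic behind Daniel's point concrete, here is a minimal sketch of how an h-index and h5-median can be computed from a journal's per-article citation counts (assuming the counts cover the relevant five-year window, as Google Scholar's metrics do). The citation numbers and journal labels below are made up purely for illustration, not real data; the point is only to show why a high-volume journal can outscore a lower-volume one on the h-index even when its typical article is cited less.

```python
# Illustrative sketch: h-index and h5-median from per-article citation counts.
# All numbers below are invented for demonstration.

def h_index(citations):
    """Largest h such that at least h articles have at least h citations each."""
    counts = sorted(citations, reverse=True)
    h = 0
    for i, c in enumerate(counts, start=1):
        if c >= i:
            h = i
        else:
            break
    return h

def h5_median(citations):
    """Median citation count among the h most-cited articles (the 'h core')."""
    counts = sorted(citations, reverse=True)
    h = h_index(counts)
    core = counts[:h]
    if not core:
        return 0
    mid = len(core) // 2
    if len(core) % 2:          # odd-sized core
        return core[mid]
    return (core[mid - 1] + core[mid]) / 2

# A hypothetical low-volume journal: few articles, each cited fairly often.
low_volume = [40, 35, 30, 28, 25, 22, 20, 18]
# A hypothetical high-volume journal: many articles, each cited less.
high_volume = [15] * 60 + [10] * 120

print(h_index(low_volume), h5_median(low_volume))    # 8 26.5
print(h_index(high_volume), h5_median(high_volume))  # 15 15
```

On these made-up numbers, the high-volume journal gets the higher h-index (15 vs. 8) simply because it publishes many moderately cited articles, even though the low-volume journal's typical article is cited far more, which is the bias in favor of volume that Daniel describes.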
