As you all know, there are a few rankings for philosophy journals: Leiter’s top 20, the Brooks Blog rankings, SJR, ESF/ERIH, Google Scholar, and so on.
Now take, for example, Synthese. On Leiter’s top 20 list, it is ranked 16. On Google Scholar, it is ranked number 1! What does that tell us about the quality of Synthese as a journal? Absolutely nothing, as far as I can tell. Am I wrong?
Are there good reasons to prefer one ranking over another? What are these rankings good for?
Google Scholar is based only on recent citations, and there are no other distinctions made. Thus, Phil Review is ranked below Phil Compass, even though the reputation of the former is pretty indisputable, and the latter (while excellent and incredibly useful) is meant to contain little if any original research.
ESF/ERIH is fairly coarse-grained--it divides things only into 3 ranks (or 4 if you count W = not evaluated). Even then, it puts APQ and PPQ in the mid-range, behind many others I've never heard of. (But granted, many could be excellent specialist journals of which I'm simply ignorant.) SJR is not as coarse-grained, but also puts APQ and PPQ surprisingly low. (Of course, I'm American and perhaps these journals don't have the same reputation elsewhere.)
Brooks' Blog and Leiter seem to fit better with my experience, fwiw. (They are significantly in agreement as well, apparently, at least more than any other pairing.) Brian Weatherson also has a journal ranking, but he refers the reader to Leiter's as more up to date. So I'm most confident about Brooks/Leiter, though clearly these things are very hard to measure.
Posted by: eyeyethink | 08/26/2013 at 08:51 PM
I guess it tells us that an unrepresentative sample of the philosopher-population thinks it's pretty good. That would be something for anyone who didn't know anything about how various journals are regarded but, as the tone of your post indicates, there's no way it could tell us anything more fine-grained than that.
I guess that's useful?
Posted by: Michel X. | 08/26/2013 at 10:20 PM
For starters, Leiter's ranking and the Google ranking are measuring two different aspects of journal quality—viz., prestige and citation density, respectively. So it's unsurprising that they find different results. Citation density is easier to measure but harder to interpret, IMO.
When it comes to prestige-based rankings, some significant disagreement is to be expected, especially when the rankings are compiled through unscientific surveys, as Leiter's and Weatherson's were. This introduces noise that should make us skeptical of fine-grained distinctions, as Michel points out. But it's not a coincidence that Phil Review shows up in all of these rankings, but the Transylvanian Journal of Vampire Studies does not.
What are prestige rankings good for? On a practical level, they help you gauge how people on hiring committees or tenure & promotion committees would view publications in various journals, especially if they lack the time or specialized expertise to read and evaluate your work for themselves.
Posted by: David Morrow | 08/27/2013 at 08:01 AM
Thanks for the comments, everyone. They're very helpful.
Would you guys agree, then, with the following? If one wants to impress other philosophers (especially senior philosophers like those who usually serve on search committees, etc.), then rankings of perceived prestige can be useful. If one wants one's work to be read and cited, then citation rankings can be useful.
Does that sound about right?
Posted by: Moti Mizrahi | 08/27/2013 at 09:20 AM
I agree with the first part of that, Moti, but I'm not sure about the second. Being read and being cited might come apart. There are probably many papers in prestigious journals that get read far more than they're cited, since I imagine that more people read the prestigious journals regularly, rather than just when they're looking for something in their research areas. So publishing in the prestigious journals might be the best way to get read. Also, I'm a little skeptical that papers in the most cited journals are being cited *because* they're in those journals, except insofar as the journal's prestige attracts readers to it in the first place. Maybe some most-cited journals—like Philosophy Compass—tend to publish articles of the kind that get cited. But if you had such an article, then it would presumably get cited widely wherever it was published, as long as it was being read. These sorts of considerations are the reason that I suggested that citation-based rankings are harder to interpret.
A more useful metric, if someone were to compile it, might be something like the correlation between the number of views a paper gets on PhilPapers and the journal it's published in.
Posted by: David Morrow | 08/27/2013 at 11:16 AM
Like David, I agree that it's not so clear that "if one wants one's work to be read and cited, then citation rankings can be useful." Citation rankings usually employ metrics, like the h-index, that are biased in favor of journals that publish a lot.
The h-index is essentially a measure of how many highly cited articles one has in absolute terms: it's the largest number h such that h articles have each been cited at least h times. (And the h5-median, also used by Google Scholar, is somewhat misleadingly titled, as it is only the median citation count among the articles that make up a journal's h5-index.) In rankings like that, of course Phil Review, which publishes 4 issues a year with just 2-4 articles per issue, will do worse than a journal like Synthese, which publishes 18 issues per year, with far more articles per issue. Likewise with Phil Studies, which is also ranked high.
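To make the volume bias concrete, here's a minimal sketch of how those two quantities are computed, with purely made-up citation counts for two hypothetical journals (the numbers and names are illustrative, not data about any real journal):

```python
from statistics import median

def h_index(citations):
    """Largest h such that h articles have at least h citations each."""
    counts = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

def h_median(citations):
    """Median citation count of the articles that make up the h-index
    (Google Scholar's h5-median does this over a five-year window)."""
    counts = sorted(citations, reverse=True)
    h = h_index(counts)
    return median(counts[:h]) if h else 0

# Hypothetical: a small, selective journal vs. a high-volume journal.
small_selective = [40, 35, 30, 25, 20]   # few articles, each heavily cited
large_volume = [12] * 60                 # many articles, moderately cited

print(h_index(small_selective), h_median(small_selective))  # 5, 30
print(h_index(large_volume), h_median(large_volume))        # 12, 12
```

On this toy example the high-volume journal "wins" on the h-index simply by publishing many more articles, even though the selective journal's articles are each cited far more heavily, which is exactly the bias being described.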
I'm not sure what citation rankings that correct for this sort of bias would look like. Incidentally, ranking individuals in terms of research productivity by h-index makes lots of sense. You don't want a pure average sort of metric there, since it makes sense that, e.g., David Lewis ought to get counted as more influential than Ed Gettier, even if Gettier's average article was cited more (just guessing). So some way of giving extra weight to producing more in absolute terms, as the h-index does, seems to me to make sense for ranking individuals. But not for journals, since a journal can always publish more articles at will, unlike an individual.
Posted by: Daniel | 08/27/2013 at 05:15 PM