[*Definition of h-index corrected (Tue, 4/2 1:45pm EST)]
The research I did on facts about this year's tenure-track hires (so far) got me thinking about something else: how our profession evaluates philosophical quality and impact. In many (I think most) other academic disciplines, individual articles and journals appear to be evaluated on the basis of measurable impact on the discipline. Journals appear to be ranked, for instance -- that is, their relative prestige in the discipline appears to be largely determined by -- their impact factor, which is simply the average number of citations to that journal's recent articles (another, possibly better, measure is the journal's eigenfactor). The impact of particular articles is typically also measured on these grounds. Finally, the impact of a given scholar is often understood in terms of what is known as the h-index (or Hirsch index), which defines a scholar's impact as "the largest number h such that h publications have at least h citations."
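For readers who (like me) find these metrics easier to grasp in concrete terms, here is a minimal sketch of how the two measures just described would be computed, using made-up citation numbers. The function names and figures are purely illustrative assumptions on my part, not any official bibliometric tool:

```python
# A rough sketch of the two metrics described above, assuming we already
# have raw citation counts in hand. Numbers below are hypothetical.

def h_index(citation_counts):
    """Largest h such that h publications have at least h citations."""
    counts = sorted(citation_counts, reverse=True)
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

def impact_factor(citations_to_recent_articles, num_recent_articles):
    """Average number of citations to a journal's recent articles."""
    return citations_to_recent_articles / num_recent_articles

# A scholar with papers cited 25, 8, 5, 3, and 0 times:
print(h_index([25, 8, 5, 3, 0]))   # -> 3 (3 papers have >= 3 citations, but not 4 with >= 4)

# A journal whose recent 100 articles drew 150 citations in the counting window:
print(impact_factor(150, 100))     # -> 1.5
```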
Here's what got me started thinking about this stuff. First, the more research I did on people's publications (both in TT-hiring and beyond), the more I noticed papers published in highly-ranked journals (e.g., Philosophical Quarterly) that have few, if any, citations -- fewer citations than papers in lower-ranked journals. Second, very few philosophy journals seem to post their impact factor. Third, a lot of evaluation in our discipline -- in particular, evaluation of how "good" someone is -- seems to be a matter of reputation. People say things like, "Awesome Journal is better than So-So Journal because Awesome Journal tends to have better papers in it" (I've heard this sort of thing a lot). Similarly, one often hears about how one must publish in top-25 journals, how search committees (particularly at R1 institutions) care about this, etc.
Here's the thing, though. Consider a paper published 10 years ago in, say, Phil Review. Suppose this paper has, I don't know, two citations. If the research I've done is any indication, you'd be surprised how often this happens. Lots of journal articles -- even articles in "top" places -- fall stillborn from the presses. Few, if any, people cite them. On the other hand, suppose a paper published two years ago in So-So Journal has 10 citations, two of which are standalone papers based on it (i.e. responding to it, etc.). Whereas the former paper (the one in the top journal) appears to have had little to no impact on the profession, the latter paper has had considerably more.
Now, here's the thing: in other disciplines, this sort of difference would presumably be recognized. People in, say, psychology and physics make a big deal out of how many times particular papers have or have not been cited. Yes, publishing in a top journal is an honor -- but if a paper published there makes no impact on future work, people notice that. Is this true in our discipline? My (admittedly anecdotal) impression is that it isn't really. One constantly hears about where people have published ("S/he published an article in Phil Review"). But one rarely hears any mention of articles' impact (aside from "famous" articles).
Here's why I worry about this. There are many well-discussed issues with the peer-review process in our discipline. For one thing, reviewers are widely reported to engage in the abominable practice of "Google reviewing" (breaking blind review by Googling paper titles). For another thing, in contrast to people in tenure-track positions -- particularly at R1 institutions -- who don't come up for tenure for 5-6 years and thus can afford to have papers out at top journals for a few years, people in adjunct positions, post-docs, and VAPs don't have this luxury: they need publications now, and so are often likely to send papers to lower-ranked journals where they have a better shot of getting accepted. One could go on.
Given these and many other issues of bias (e.g. "in-crowds" in the discipline, etc.), it seems to me that our discipline should take a good, hard(-er) look at how we evaluate philosophical impact and merit. Offhand, it seems to me that our discipline should attempt to be more objective in these matters than at present. We should not look primarily at where articles have been published (even if a journal's stature is admittedly defeasible evidence of the general quality of work that appears within it); we should look, primarily, at the impact that articles have actually had in the profession -- which can be measured in many relatively objective ways (not just raw citation numbers, but eigenfactors, which measure, roughly, how "influential" a given article is).
What say you all?