Over at Helen De Cruz's recent NewAPPS post raising questions about the role that "pedigree" appears to play in the hiring process for academic jobs, Charles Pigden offered the following proposal to counteract bias in hiring (based, he writes, on something written previously on a Leiter thread):
May I suggest a ‘first cut’ heuristic for TT positions that would at least diminish the classist (and therefore racist) biases inherent in a pedigree-based system?
1) Rank the journals on a scale of 1.0 to (say) 0.4, ranking Mind and Analysis at 1.0 and The NoName Journal of Philosophy at 0.4.
2) Multiply each candidate’s actual and forthcoming publications by the rank number of their venues. Thus a publication in Mind counts as 1 and a publication in the NoName Journal counts as 0.4.
3) Sum the (multiplied) publications for each candidate.
4) For each candidate divide the result by the years out from their PhD (perhaps adjusted for any time out they may have taken to have babies, feed the starving or to save the family business from bankruptcy).
5) Select the top ten candidates for a more detailed examination.
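(To make the arithmetic concrete, here is a minimal sketch of what steps 1-5 might look like in code. The weights, the default value for unlisted journals, and the data layout are my own hypothetical choices, not Pigden's.)

```python
# A minimal sketch of Pigden's first-cut heuristic. All weights and the
# default value for unlisted journals are hypothetical.
JOURNAL_WEIGHTS = {
    "Mind": 1.0,
    "Analysis": 1.0,
    "NoName Journal of Philosophy": 0.4,
}

def first_cut_score(venues, years_out, years_off=0.0):
    """Steps 2-4: sum the venue weights, divide by adjusted years from the PhD."""
    weighted_total = sum(JOURNAL_WEIGHTS.get(v, 0.4) for v in venues)
    return weighted_total / max(years_out - years_off, 1.0)

def first_cut(candidates, keep=10):
    """Step 5: keep the top `keep` candidates for more detailed examination."""
    ranked = sorted(
        candidates,
        key=lambda c: first_cut_score(c["venues"], c["years_out"], c.get("years_off", 0.0)),
        reverse=True,
    )
    return ranked[:keep]
```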
Although I replied to Pigden's proposal over in the comments section there -- and although I agree with quite a lot of what he writes in his initial comment -- I want to comment on it again here in order to draw greater attention to how misguided I think the proposal is. I think it is a very dangerous proposal that would not only fail to eliminate biases, but would probably make things worse. Allow me to explain why.
Carolyn's analysis of job-hiring data in the past has shown that men from top programs are more likely to get prestigious post-docs than non-men or people from lower-ranked programs (this is, I think, common knowledge by now). Consider, then, the resources that someone has at a prestigious post-doc. One usually has 2-3 years to publish, along with an immense amount of resources (mentorship, etc.). In a post-doc, one has the time to send stuff to top-20 journals and wait. One doesn't have to rush to get stuff on one's CV...at least not like adjuncts or VAPs do.
Now consider the position of an adjunct or VAP (in a 1-year position). People in these positions not only have very few resources (mentorship, etc.); they often do not have the luxury to consistently send stuff to top-20 journals. Why not? Simple. If you're in a 1-year job, you need to get stuff on your CV. Sending stuff to top-20 journals is a very, very high-risk strategy. You can wait anywhere from 4-8 months to hear back, and chances are your paper will get rejected. And you can't afford that. You need to go on the job market every year, and so you need to get stuff on your CV ASAP. So, what's the rational thing to do? Maybe send a paper to a top-20 journal from time to time. But the far more rational thing to do in general is to send stuff to lower-ranked journals, so that you can get some publications on your CV.
Here, then, is the thing. I don't think that making first cuts by assigning numerical values to publications on the basis of journal quality is likely to counteract bias. Rather, it is likely to favor those who have the time and resources to target top-20 journals: namely, males from top programs in prestigious post-docs. Indeed, I shudder at what Pigden's proposal would mean for me. Although I am male, I have worked my tail off under sub-optimal conditions (a heavy teaching load, few mentors, etc.), and published a lot of work I think is quite good (I've gotten a number of literature citations, some online discussion, etc.). I just didn't have the luxury of waiting to get that work in top-20 journals. I'm not saying any of my work would have gotten into those journals -- though I'd like to think so! :) My point is simply that, like many people in 1-year jobs, I faced a great deal of pressure to send my work to places it would be more likely to get accepted. Anyway, Pigden's formula would basically have search committees cut people like me -- and candidates in even tougher professional positions -- in their first cut without actually looking at our work. Far from counteracting bias, Pigden's formulaic approach would seem to stack the deck even more in favor of candidates in more privileged professional positions.
Or so say I. What say you?
One obvious reason for thinking that making first cuts based on publications would reduce bias is that the publication process is blind, whereas graduate school admissions are not. My impression is that committees have to make a first cut without actually looking at your work; I for one would rather they replaced the current practice of focusing on where applicants went to school and how famous their letter writers are, with something focused on blind-reviewed publications.
Posted by: B.M. | 04/16/2014 at 01:54 PM
B.M.: Thanks for your comment. Here's another practice: read CVs, read each person's research statement, and then judge -- in a holistic manner -- whether it might be worth reading a candidate's work...and make first cuts on that basis.
Here is why I think this is better. I hate to use myself as an example, but since I'm the example I know best, what the heck. ;) Although I published a few short replies in a top-20 journal early in my career, my work since then has been outside of top-20 journals (at least in part, I believe, because I had to rush to get stuff on my CV). Pigden's method (the one you seem to favor) would almost certainly lead me to be cut at the very first stage. But I think I'm doing interesting work, and that someone who bothered to make a first cut based on actually reading my CV and research statement might see this.
Although I could of course be wrong about the merits of my work, I think there are good reasons to prefer this kind of holistic approach to first cuts. Rather than basing a first cut on pedigree *or* numerical values based on publication venue -- both of which, I believe, favor people in fortunate professional positions -- a first cut that evaluates candidates in a holistic way on the basis of CV, research statement, and *perhaps* a quick read of a piece of one's work, would better enable candidates who haven't been so lucky to make it past the first cut.
And that, I think, is desirable. It is hard enough to get noticed these days by search committees without telling them to essentially cut people out on the basis of a formula.
Posted by: Marcus Arvan | 04/16/2014 at 02:10 PM
It seems to me that there is a fallacy at the heart of almost any proposal grounded in journals’ rankings. The underlying thesis seems to be that “If J is a high-ranked journal, then papers that appear in J are good papers”. That may be right, but there is no serious prima facie reason to believe that good papers appear only, or even primarily[*], in top journals, even if we admit that top journals publish *only* top papers. Unless a paper appears in a journal we don’t even bother to read *because it usually publishes crap*, not getting published in top-10 or top-20 venues is *not* a sufficient indication of poorer quality.
Briefly, we can accept “If paper P has been accepted at journal J, then P meets *at least* the requirements for J”, but not “If P has been accepted in J, then P does not meet requirements stronger than J’s”. Thus we could accept that a publication in Mind or Ethics or P&PA has the highest possible value, but not that a publication in the NoName Journal has a lower value *due to the venue*. In this case, papers should usually be evaluated on their own merits.
(Moreover, as we discussed in an earlier thread, proposals of this kind induce a bias in favor of “productive” writers.)
[*] Since there are many more non-top-20 journals than top-20 journals, it might be argued that non-top journals publish more good papers *overall* than top journals themselves.
Posted by: Pierre | 04/16/2014 at 02:46 PM
You may be right that reading research statements at the first stage would be best, but the impression I get is that members of hiring committees typically do not have the time to read a hundred or more research statements. Another potential issue is that a person's evaluation of the strength of a candidate's research statement is going to be heavily influenced by where the candidate went to school. E.g., if a candidate from a highly ranked department defends unpopular theories or methods this might count in his/her favor (he/she is "an original thinker"), whereas if a candidate from a less well regarded department defends the same theories or methods it is much more likely to count against him/her (he/she is ignorant of the present consensus, or is some kind of nut).
Posted by: B.M. | 04/16/2014 at 02:47 PM
B.M.: Thanks for your reply. I have to confess, I fail to see much plausibility in the "not enough time" argument. It takes, what, 2-3 minutes to quickly scan a research statement to see if the person looks like they're doing interesting work? (I've read some research statements in a minute or two, been bored to tears by some of them, enthralled by others!). At that rate, it might take at most a few hours to scan a few hundred research statements -- and search committees can significantly divide their labor.
I'm also not persuaded by the "bias" argument you present. Search committee members *could* of course be biased in favor of research statements by people from Leiteriffic departments -- but, quite frankly, this seems to me unlikely. I've read a number of research statements myself (as help in formulating my own), and while some research statements by people from top places have blown my mind, others have left me cold -- and the same is true for people from lower-ranked departments. I suppose it's possible that search committee members are so biased by pedigree that they can't even read research statements in an open-minded way, but I dunno...it still seems to me a better, more holistic way to judge candidates than a Pigden-esque publication formula.
Indeed, I just had someone email me about this privately. To paraphrase, this person wrote: "Any formula for cutting down candidates that could be carried out by a machine, without anyone reading so much as a *word* of a candidate's work, is likely to do far more harm than good." To which, again, I would add: especially a formula that is likely to systematically favor those lucky enough to get killer post-doc positions over adjuncts, VAPs, etc.
Posted by: Marcus Arvan | 04/16/2014 at 03:34 PM
From the post and the ensuing thread, there seem to me to be two different objections in play here.
The first objection is against using publication venue prestige as a proxy for research quality. Now, I think we can all agree that publication venue is a very imperfect indicator of research quality. But it is not clear to me what would be a better tradeoff between reliability and time efficiency.
To me, a research statement is not any more reliable and requires more time. I would have only the vaguest of ideas as to how to evaluate whether some proposed research program is viable or novel if I were to read the research statement of someone who does, say, highly technical philosophy of physics.
The second objection is against using a mechanical procedure, based on publication venues, to evaluate research quality. I don't actually think this is much different from what people do when they read CVs normally. So I didn't understand how this is supposed to privilege those who are more in a position to submit to "top-20" journals.
I take it Pigden's idea is not that all schools have to use his assigned weights. Indeed, different schools can have different assigned weights. Some schools might want to really differentiate between, say, The Philosophical Review and The Philosophical Quarterly, but other schools might want to, say, give everything "above" Analytic Philosophy a weight of 1. And indeed, the formula can be made more complicated. Perhaps the formula will only take the top 5 publications of the last 3 years, or give an exponentially decreasing weight to anything after top 3 publications.
What a good formula would be is, of course, up for debate, and relative to each school. But I don't see how this would be worse than any "holistic" look at CV and other materials. Yes, the cut will be highly coarse, but so would other first-round cut procedures.
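To illustrate the kind of variant I have in mind (purely a toy sketch; every number here is arbitrary):

```python
# Toy variant: count only the top 5 publications from the last 3 years,
# with exponentially decreasing weight after the top 3. Numbers arbitrary.
def variant_score(pubs, current_year, weights, decay=0.5):
    recent = [weights.get(p["venue"], 0.4) for p in pubs
              if current_year - p["year"] <= 3]
    top_five = sorted(recent, reverse=True)[:5]
    return sum(w if i < 3 else w * decay ** (i - 2)
               for i, w in enumerate(top_five))
```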
Posted by: Shen-yi Liao | 04/16/2014 at 05:48 PM
Shen-yi: Thanks for your comment. The problems I have with your argument are basically the ones Pierre raises.
First, *any* formula based on publication venue will be utterly insensitive to the fact that some stuff published in great venues is garbage, and some stuff published in not-so-great journals is great.
Second, I don't think it is good to simply read CVs and think someone is awesome/not awesome on the basis of what you read there. Sometimes I'll see a Phil Review publication on someone's CV and I'll think to myself, "This person must be awesome." Then I read the paper itself and see that I think Phil Review made a massive mistake publishing it.
In brief -- and Rachel has made this point many times -- I think we need to get away from the CV "pissing contest" approach to hiring. 10 publications in PQ does not a good philosopher make. It is what is *in* those publications that matters...and you actually have to look at the work to judge.
Posted by: Marcus Arvan | 04/16/2014 at 06:22 PM
Journal refereeing is not anonymous, and non-anonymous review disproportionately affects women negatively (we have data on this). This just further serves to disadvantage women and other (representational) minorities.
Also…getting a job is not about counting publications!
Posted by: Rachel | 04/16/2014 at 06:40 PM
Shen-yi: Actually we *could* imagine a better trade-off --- if we really want to assign numerical values to publications. Let me offer a sketchy idea. (I do not mean to endorse such a proposal: I simply *imagine* what it might look like.)
Today almost everything that has been published (at least in English) is referenced on PhilPapers. When we read something, we could give it a grade on a 1-10 scale, against criteria such as clarity/intelligibility, philosophical interest, strength of the argumentation, engagement with the relevant literature, etc. Once a paper reaches, say, 10 or 20 grades, the average is made public on PhilPapers: this would replicate the refereeing process on a larger scale. Such a grade could then be used on CVs, and since it would be available on PhilPapers, hiring committees could check the veracity of the grade indicated on a CV.
For example, John Doe publishes “Cool Idea” (arbitrary title) in NoName Journal. According to the “venue” evaluation, this is worth (say) 0.4. Now readers of John’s paper find it really great. It’s clear, cleverly argued, engaging, etc. As readers grade the paper, it reaches a 9.2 average through many evaluations (including people who otherwise don’t know John).
Contrast it with Jack Smith, who publishes “Nice Thought” in the ReallyGood Journal. That’s worth (say) 1.0 with the “venue” evaluation. Yet readers of Jack’s work find it boring. It’s intelligible, but what he says adds little, if anything, to the area, and the argument turns out to repeat previous arguments. So Jack’s paper reaches an average of 5.7, even though he got published in a higher-ranked journal.
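In code, the imagined mechanism would be something like this toy sketch (the threshold, the scale, and the individual grades are all invented for illustration):

```python
# Toy sketch of the imagined PhilPapers-style grading: a paper's average
# grade (on a 1-10 scale) is published only once enough grades exist.
def public_average(grades, min_grades=10):
    if len(grades) < min_grades:
        return None  # too few evaluations; no public average yet
    return sum(grades) / len(grades)

# John's NoName Journal paper vs. Jack's ReallyGood Journal paper:
john = public_average([9, 10, 9, 9, 10, 9, 9, 9, 10, 8])  # -> 9.2
jack = public_average([6, 5, 6, 5, 6, 6, 5, 6, 6, 6])     # -> 5.7
```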
Again, this is not intended to be an *actual* proposal. I just imagined it to show that, *if* we really want to use numerical values, we *could*, in principle, imagine/use better ones. I do not doubt that the imagined proposal would have its own shortcomings, were it to be more fully developed (e.g. being graded by all your friends or being harshly graded by a bunch of restless enemies).
Posted by: Pierre | 04/16/2014 at 07:37 PM
@ Marcus
"First, *any* formula based on publication venue will be utterly insensitive to the fact that some stuff published in great venues is garbage, and some stuff published in not-so-great journals is great."
Yes. We agree that publication venue is a very imperfect indicator of research quality. The question is whether there is anything else that would be as useful when applied to 300-900 applicants -- as uninformative as venue may be. (Remember we're talking about first-round cuts.)
"It is what is *in* those publications that matters...and you actually have to look at the work to judge."
I assume the proposal is not that search committees should read every paper by every applicant. But I'm not clear on what the best implementation would be of the idea that it's what's in those publications that matters (which, of course, we agree on).
@ Rachel
"Journal refereeing is not anonymous; it disproportionately affects women negatively (we have data on this). This just further serves to disadvantage women and other (representational) minorities."
I don't understand how Pigden's proposal *further* disadvantages underrepresented minorities. (Though, we agree, that it disadvantages them/us.) Again, many search committees already use publication venues as an imperfect proxy at the CV-reading stage, and many will have some weight in mind such that, on a first pass, they give more weight to a Philosophical Review pub than a Philosophical Quarterly pub. A mechanical procedure at least has the advantage of making sure they assign the same relative weights regardless of the applicant. (The closest analogy I can think of from teaching is the use of grading rubrics, which have been argued to be instruments to mitigate bias.)
"Also…getting a job is not about counting publications!"
We agree. But I take it Pigden was not describing how to get a job, but what he thinks a good procedure for making the first round cut would be. Furthermore, as described earlier, this doesn't need to involve some straight-up counting. There are many mathematical functions that are available to a mechanical procedure other than addition and multiplication.
@ Pierre
That would be interesting. I'm not convinced that it would be better, given the ways to game the system and the possibility that various biases can creep in through the non-anonymity. However, as you acknowledge, since such a system is not in place, it is not available to search committees right now.
The question, I take it, is whether a mechanical procedure utilizing publication venues would be better than the holistic judgments that people make right now, not whether it would be better than some other procedure utilizing information that is not yet available.
Posted by: Shen-yi Liao | 04/17/2014 at 01:35 AM
I have a dream that one day philosophers will be judged not by the prestige of the journal in which their work appears but by the content of their work.
Posted by: Moti Mizrahi | 04/17/2014 at 08:07 AM
Shen-yi: The review process at journals, including Phil Review, Phil Quarterly, etc., is not anonymous, and we have hard data showing that women are disadvantaged by non-anonymous review. So when SCs give weight to a publication in a "top" journal, they're engaging in adverse effect discrimination against women (we also have data for non-white people, too). The procedure is mechanical, but it's mechanical on biased, unfair data.
Garbage in, garbage out, as we say.
Pigden's procedure is explicitly a CV-comparing competition for publications. That's exactly what we *don't* want doing all our work for first-round cuts in job searches.
Posted by: Rachel | 04/17/2014 at 09:48 AM
Also, we have recent hard data that people are able, subconsciously, implicitly (they can't say *what* they're noticing) to judge the gender of the author about 66% of the time. This means that even if a journal review is fully anonymous (and few are, because referees google the papers, for example--very bad behaviour!), implicit biases are still operating that disadvantage women.
Any publication counting metric that doesn't account for this will perpetuate unfair practices that disadvantage women.
Posted by: Rachel | 04/17/2014 at 09:59 AM
I'll have to agree with Pierre and Moti here. I'm getting a bit tired of the 'x is in Phil Review so it must be awesome' bias. I have read terrible papers in Phil Review (Nous, etc.) and I have read very good papers from 'lower-ranked' journals.
And none of this is surprising given how the refereeing process works. You run into one cantankerous referee at Nous - you are out. You run into a lazy but generous referee (or a referee who happens to know who you are and who is favorably disposed towards you) - and you are likely to be in. None of this obviously tracks quality. (In my view journals that rely upon at least two referee reports are far more reliable in that respect.)
Posted by: Tom | 04/17/2014 at 10:38 AM
@ Rachel
"So when SCs give weight to a publication in a "top" journal, they're engaging in adverse effect discrimination against women (we also have data for non-white people, too)."
We agree. However, as noted above, what I don't quite get is how the mechanical procedure proposal *further* disadvantages underrepresented applicants -- compared to the current procedure where people still use CVs and publication venues for making the initial cut.
"Garbage in, garbage out, as we say."
We agree that there are a lot of systematic biases in play. It's one thing to note systematic biases in data. It's quite another to say that it's garbage. Are you saying that committees shouldn't read CVs in making the first cut because the information represented involves systematic biases?
(In some moods, I've favored a simple lottery system for jobs. But I've found very few people who agree with me.)
"Any publication counting metric that doesn't account for this will perpetuate unfair practices that disadvantage women."
Great idea. So perhaps the formula can be tweaked so that journals that practice triple-blind review can be given more weight than journals that have comparable reputation but do not. Or the formula can be tweaked so that underrepresented applicants receive additional weight for their scores. All of that is compatible with a mechanical procedure.
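Purely as an illustration of how such tweaks could be built into a formula (every multiplier here is an arbitrary placeholder, not a recommendation):

```python
# Sketch: boost the venue weight of journals that practice triple-blind
# review, and apply a corrective multiplier for underrepresented applicants.
def adjusted_weight(base_weight, triple_blind, blind_bonus=1.1):
    return base_weight * blind_bonus if triple_blind else base_weight

def adjusted_score(raw_score, underrepresented, correction=1.15):
    return raw_score * correction if underrepresented else raw_score
```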
---
The benefits of such a mechanical procedure are like those of grading rubrics. (1) It strongly encourages people to articulate criteria ahead of time -- and state them as explicitly as possible. It therefore discourages post hoc rationalization. (2) It strongly encourages that achievements of the same kind -- however that is to be spelled out -- are given the same weight, regardless of other information about the applicant (e.g. PhD institution).
---
By the way, I would really appreciate references to the studies. I tried to look for the particular figure you quoted (66%) in conjunction with your description, but could not find anything. Much thanks!!
Posted by: Shen-yi Liao | 04/17/2014 at 04:19 PM
Shen-yi: I don't quite understand. None of your replies (e.g. to Rachel) address the main worries I raised in my post, and which a number of commenters have also raised: namely, that (1) some articles in top journals (including Phil Review) are bad, (2) some articles in lower-ranked journals are great, and (3) people with professional privileges have greater time, resources, and mentorship enabling them to place articles in the top journals. Look, if I'd had *time* to send all my work to Phil Review, Nous, etc., I would have sent all my stuff there. But I didn't have that luxury. The two pieces I did publish in top-20 journals happened, not surprisingly, during a post-doc -- and yet I think some of the work I've published in lower journals since then is far better, and quite frankly, better than some of the stuff I've read in the journals you favor ranking highly. So, I say, what you advocate mostly privileges people in prestigious positions, and encourages search committees to overlook potentially great candidates on poor bases. I still haven't seen a good reply to this.
Posted by: Marcus Arvan | 04/17/2014 at 04:41 PM
@ Marcus
I don't have a response to that because I utterly agree. As I said in the first comment, "Now, I think we can all agree that publication venue is a very imperfect indicator of research quality." And as I said in the second comment, "We agree that publication venue is a very imperfect indicator of research quality."
However, what I want to emphasize are two things.
First, those worries are against a particular assignment of numerical values to publication venues (e.g. giving more weight to a Phil Review publication than a Phil Quarterly publication) and not against assigning numerical values to publications generally and using them in a mechanical procedure. As I've tried to stress throughout, there are numerous (indeed, infinitely many) ways of assigning values to publications based on their venues (e.g. one can assign the same value to everything from Phil Review to NoName-ButStillRefereed-AndMinimallyRespectable Journal) and computing them for candidates. Throughout, I have not advocated for any particular assignment of numerical values to publication venues.
Second, I don't know what the alternative is for making a first cut from 300-900 candidates. Even you said that people should read CVs. I don't know what it is that people do with the publication section of CVs except assigning them some rough weights in their head (e.g. giving more weight to a Phil Review publication than a Phil Quarterly publication), even if there are no explicit values given. At least a mechanical procedure would ensure that they're consistent in doing so.
Posted by: Shen-yi Liao | 04/18/2014 at 12:56 AM
I'm with Shen-yi on this, and I am from a non-fancy PhD program and don't like publication weighting taken to the extreme. Still, claims made here seem statistically in the wrong direction. That some good papers are not published in top journals is irrelevant if a heuristic is what is at stake. On average Phil Studies papers are better than Philosophia papers, Nous papers better than Synthese papers, and so on. Further, we all think this when evaluating CVs quickly. If someone is in my area I can make a more detailed assessment, but if they are not, then my experience is that very top journals are harder publication venues, and that lesser journals can have good papers, but this is the exception rather than the rule. So I think Pigden is just describing common practice, though I agree this should not be formalized practice, pace Pigden's suggestion. I am all for considered judgment of people's work - though I doubt people are very good at this either, to be honest.
Also, the claims about non-anonymity and bias at journals made in this thread seem overblown, but I am happy to retract this in the face of evidence of bias (like Shen-yi, I haven't been able to find such evidence yet, and I am happy to be pointed to it).
Posted by: Anonymish | 04/18/2014 at 12:15 PM
Here's my reply to Marcus, which I hope to publish on the other list (I have been having a lot of trouble accessing it).
To Marcus Arvan.
I agree that the prestige of the journal is a poor proxy for quality, but a proxy is required (since it is impossible to read every paper), and prestige-of-venue is the best that is readily available since it is *relatively* blind to class-influenced factors such as pedigree. My proposal does indeed mean that OTHER THINGS BEING EQUAL search committees would be less likely to hire candidates with publications in low-ranked journals than candidates with publications in high-ranked journals. But the qualification is important (which is why I put it in capitals). If indeed your publishing strategy is a good one (that of submitting your papers to low-ranked journals rather than high-ranked journals, thus maximizing the number of your publications), then you would come out about even with a rival (let’s call her Dr X) of the same academic age who had pursued the opposite strategy with equal success. You could expect to have more publications than she had, but hers would be multiplied by a larger fraction, bringing you in at roughly the same age-adjusted figure. That seems fair enough to me. Of course you would both do worse in competition with somebody of the same academic age who had lots of publications in high-ranked journals. But again that seems fair enough. Such a person would have accomplished a more demanding (and academically relevant) feat than you, and would have accomplished it more often than Dr X.
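To put purely illustrative numbers on it: suppose that over five years you place ten papers in journals weighted at 0.4, for 10 x 0.4 = 4.0, i.e. 0.8 per year, while Dr X places four papers in journals weighted at 1.0, for 4 x 1.0 = 4.0, likewise 0.8 per year. You come out even. A third candidate with, say, eight papers in 1.0-weighted journals over the same five years (8.0 / 5 = 1.6 per year) would outscore you both -- which is the intended result.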
Posted by: Charles Pigden | 04/24/2014 at 09:50 AM
Hi Charles: Thanks for weighing in. Here are my thoughts.
You write: "I agree that the prestige of the journal is a poor proxy for quality, but a proxy is required (since it is impossible to read every paper), and prestige-of-venue is the best that is readily available since it is *relatively* blind to class-influenced factors such as pedigree."
I reply: I agree that a proxy is needed. But I deny that prestige-of-venue is the best proxy. I think there is no evidence that it is *relatively* blind. Yes, journal reviewing is anonymized, but there are *so* many problems here. First, Google reviewing is rampant. Much of the time when I submit an anonymized paper for review, my website reports that someone looked at my webpage after googling the paper title. Even if I had *never* posted the paper itself, it is not hard to find out who wrote a given paper. Second, there is the problem of networking. I know some early-career people in the field who -- for one reason or another, without *any* publications (mainly because they come from a prestigious place) -- have been invited to give numerous colloquium talks at universities all over the place, and are basically friends with everyone. What this means, in practice, is that reviewers are *very* likely to know that that person wrote the paper they are reviewing, and, if they like the person, they may be more willing to recommend paper acceptance. Third, as I argue in my post, people in less-prestigious positions may not have the luxury to submit to top journals consistently.
In short, for all of these reasons, and others too, I think there is *no* good evidence that journal-venue is a good, fair proxy for quality.
Here, I think, is a far better proxy. Read the person's CV and research statement yourself -- and, if they look promising to *you*, read at least one of the person's papers. Does this open things up to bias? Sure -- but I think there are reasons to believe such a process would be *less* biased than journal proxy. It would enable open-minded committee members (i.e. people like me) who don't give a damn where a person came from or where they published to give each person a fair shot.
You write: "Of course you would both do worse in competition with somebody of the same academic age who had lots of publications in high-ranked journals. But again that seems fair enough. Such a person would have accomplished a more demanding (and academically relevant) feat than you, and would have accomplished it more often than Dr X."
I reply: I deny just about everything you write here except for the first sentence. I *don't* think it is fair enough. I do not think the person who has published a mediocre paper in Phil Review has accomplished *anything* more demanding or academically relevant than I have if I publish a stellar paper in the Philosophical Forum or Ethics and Global Politics (wink, wink).
There is nothing fair, more demanding, or academically relevant about favoring a candidate who publishes a mediocre paper in Awesome Journal -- especially if some of the reasons they got into that journal may be institutional prestige, friendly relations with others in the field, etc. Such a person may have had a lot of good fortune on their side *irrelevant* to the quality of their work. And a hiring-system which fails to adequately recognize that is severely deficient.
Posted by: Marcus Arvan | 04/24/2014 at 10:06 AM