**UPDATED with time-to-hire broken down by publication type (top-5, top-10, etc.).**
Apologies to readers for such a slow week around here. It's been a very busy week! I do, however, have some news to report. I have (A) analyzed the interviewing data I collected, and (B) collected and analyzed data for all of this year's hires to date as reported on philjobs. My main purpose in collecting data like this is to test the kind of anecdotal job-market advice one commonly hears, both online and offline. Advice on the job market is everywhere (e.g. "You need to publish in top-ranked journals", "The job market is just a crapshoot"), and yet the advice given is often contradictory and seemingly based on little more than the intuitions of the people giving it. I want to try to do better. There are some actual empirical facts, after all--facts about what kinds of programs people are from, how many publications they have, how many interviews they've gotten, etc.--and these facts might give us a better understanding of what actually helps, and what does not, on the market.
Before I present the results of the analyses, I want to emphasize a few things. First, the samples in each case were relatively small (33 people answered my survey, and 57 hires have been reported on philjobs). Second, the individuals who answered my job-market survey were self-selected and so may not be a representative sample of people on the market. As such, the analyses I present should be taken with a BIG grain of salt. In future months (and years), I hope to continue collecting job-market/interviewing data--I'll probably run another survey on Qualtrics in a few months--as well as hiring data from philjobs. All that being said, I think the two data sets together provide some independent corroboration of some trends, so I would like to report them. On with the show!
1. Results of The Philosophers' Cocoon Job-Market Survey
33 individuals filled out the Cocoon survey. The survey included the following items:
- Leiter-rank of PhD program
- Years since graduation
- Total jobs applied for
- Total # of interviews
- # of TT interviews
- # of non-TT interviews
- Job offers
- Total # of publications
- # of publications in "top 5 journals"
- # of publications in "top 10 journals"
- # of publications in "top 20 journals"
- # of publications in "non-top-20 journals"
- Years teaching
- Student evaluation average
I then ran two-tailed Pearson correlations, using the standard .05 significance threshold (a minimal sketch of this kind of analysis appears after the list below). My findings are as follows:
- No significant relationship between Leiter rank and interviews or job offers.
- A strong positive relationship (r=.501, p=.003) between years on the market and total # of interviews.
- A strong positive relationship (r=.508, p=.003) between years on the market and total # of TT interviews.
- No significant relationship between years on the market and actual job-offers.
- Strong positive relationships between total # of publications and total interviews (r=.549, p=.001), total # of TT interviews (r=.558, p=.001), and job offers (r=.344, p=.05).
- No significant relationships between top-5, top-10, or top-20 publications with interviews or offers.
- Strong positive relationships between total # of non-top-20 publications and total interviews (r=.521, p=.002) and total # of TT interviews (r=.547, p=.001), as well as a nearly significant relationship with job offers (r=.338, p=.059).
- No significant relationship between student-evaluation averages and any outcome.
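For readers who want to run this kind of analysis on their own numbers, here is a minimal sketch in Python using pandas and scipy. The file name and column names are hypothetical placeholders, not the actual survey fields:

```python
# Minimal sketch of the two-tailed Pearson correlation analysis described above.
# The file and column names below are hypothetical placeholders.
import pandas as pd
from scipy.stats import pearsonr

df = pd.read_csv("survey_responses.csv")

predictors = ["leiter_rank", "years_since_phd", "total_pubs", "non_top20_pubs"]
outcomes = ["total_interviews", "tt_interviews", "job_offers"]

for pred in predictors:
    for out in outcomes:
        r, p = pearsonr(df[pred], df[out])  # pearsonr is two-tailed by default
        flag = " *" if p < .05 else ""
        print(f"{pred} vs. {out}: r={r:.3f}, p={p:.3f}{flag}")
```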
2. Hiring-Data Analysis
I compiled the following data for all hires in this job-season (jobs that begin in 2015):
- PhD program Leiter rank
- Hire-type (TT, VAP, Lecturer, postdoc)
- Previous position-type
- Years since graduation
- Total publications
- Top-5 publications
- Top-10 publications
- Top-20 publications
- Non-top-20 publications
- Gender
Here is what I found:
- Top-10 publications correlated significantly with PhD program Leiter rank (r=.274, p=.039).
- Total publications correlated significantly with top-10 publications (r=.274, p=.039).
- The mean Leiter rank of all hires was in the 20-30 band. (I scored Leiter rank in bands of ten.)
- The mean time-since-graduation of all hires was 2.2 years, with a standard deviation of 2.12 years (a quick sanity check on this figure appears after this list).
- 35% of hires-to-date are direct from grad school
- 12% of hires graduated 1 year ago
- 10.5% of hires graduated 2 years ago
- 10.5% of hires graduated 3 years ago
- 14% of hires graduated 4 years ago
- 7% of hires graduated 5 years ago
- 10.5% of hires graduated 6 years ago
- Mean # of total publications for all hires is 5.5
- Mean # of top-5 publications for all hires is .07
- Mean # of top-10 publications for all hires is .22
- Mean # of top-20 publications for all hires is .21
- Mean # of non-top-20 publications for all hires is 4.9
- 22.8% of all hires came from Leiter top-5
- 19.3% of all hires came from Leiter 5-10
- 12.3% of all hires came from Leiter 10-20
- 8.8% of all hires came from Leiter 20-30
- 7% of all hires came from Leiter 30-40
- 7% of all hires came from Leiter 40-50
- 22.8% of all hires came from Leiter-unranked programs
- 64% of hires are men
- 36% of hires are women
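As a quick sanity check, the reported 2.2-year mean time-since-graduation can be recovered (up to rounding) from the cohort percentages listed above:

```python
# Weighted mean of years-since-graduation, computed from the reported
# cohort percentages (which sum to 99.5 rather than 100 due to rounding).
dist = {0: 35.0, 1: 12.0, 2: 10.5, 3: 10.5, 4: 14.0, 5: 7.0, 6: 10.5}
mean = sum(years * pct for years, pct in dist.items()) / sum(dist.values())
print(round(mean, 2))  # 2.2, matching the reported mean
```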
UPDATE:
(1) Mean time to hire for hires with 1+ top-5 publication: 4.75 years
(2) Mean time to hire for hires with 1+ top-10 publication: 2.6 years
(3) Mean time to hire for hires with 1+ top-20 publication: 2.5 years
(4) Mean time to hire for hires with 1+ non-top-20 publication: 2.5 years
(5) Mean time to hire for hires with only non-top-20 publications: 2.37 years
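These figures are simple conditional means. Here is a sketch of how they could be computed with pandas; the file and column names are, again, hypothetical placeholders:

```python
# Mean time-to-hire, conditional on publication profile.
# File and column names are hypothetical placeholders.
import pandas as pd

hires = pd.read_csv("philjobs_hires.csv")

for col in ["top5_pubs", "top10_pubs", "top20_pubs", "non_top20_pubs"]:
    mean_years = hires.loc[hires[col] >= 1, "years_since_phd"].mean()
    print(f"1+ {col}: {mean_years:.2f} years")

# "Only non-top-20": at least one non-top-20 pub and no ranked pubs at all.
only_low = (hires["non_top20_pubs"] >= 1) & (
    hires[["top5_pubs", "top10_pubs", "top20_pubs"]].sum(axis=1) == 0
)
print(f"only non-top-20: {hires.loc[only_low, 'years_since_phd'].mean():.2f} years")
```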
3. Discussion
Here are some thoughts I have when looking at the data and analyses:
- Leiter rank: Although the survey did not show any significant relationship between PhD program Leiter rank and interviews or hires, the hiring data suggest that there is some relationship--but a complex one. A high proportion of hires this year (42.1%) have come from the Leiter top-10. But a significant proportion of hires (22.8%) come from programs that are not Leiter-ranked at all, and another 21.1% come from programs ranked 10-30.
- Publications: Both sets of data seem to strongly support something that I have long suspected (on the basis of personal experience)--namely, that the common advice that you must publish in top-ranked journals to fare well on the market is false. Top-5, top-10, and top-20 publications did not relate significantly to job-market success in either data set, but total # of publications and total # of non-top-20 publications did. Both data sets thus suggest that in terms of getting a job, the really important thing is not where you publish but how much you publish.
- Time on the market: Both data sets suggest that staying on the market longer does not hurt you, and may even help you (at least in terms of interviews). Although 35% of this year's hires are direct from grad school, there does not appear to be a hiring preference across years 1-6 on the market, and my survey (despite its small sample) indicated that people who have been on the market longer may be getting more interviews (personal note: this was absolutely true in my own case).
I want to emphasize again that these results should be taken with a grain of salt. Only time--and a lot more data--will tell how well they hold up. They are, however, two independent data sets, and the results cohere well with my own personal experience on the market.
A priori, we would expect total number of publications to correlate strongly with time-since-graduation. And among job-seekers and recent hires, we would expect time-since-graduation to correlate strongly with total number of interviews. Thus, the correlation you report between total number of publications and total number of interviews doesn't tell us much about the behavior of hiring committees. The *lack* of correlation between number of high-prestige publications and total number of interviews does tell us something: it tells us that job-seekers with high-prestige publications get snapped up quickly.
Posted by: NT | 03/21/2015 at 08:29 PM
NT: Neither of those inferences is cogent. If people with top-ranked publications were getting "snapped up quickly", that would show up in the data: those types of publications would correlate with interviews and offers. But they don't. The data do not show those people getting snapped up. They show instead that people with lots of total publications and non-top-ranked publications are the ones being snapped up.
Posted by: Marcus Arvan | 03/21/2015 at 08:49 PM
Thanks for this, Marcus! This info is helpful for folks like myself trying to figure out where to send our work.
Posted by: Justin | 03/21/2015 at 11:08 PM
Marcus, I'm curious about the data. Did you have anyone submit a report on your list including a top-5 publication? I skimmed, but I didn't see any. I did see some speciality top-5s, but no general top-5s.
I know you also searched the philjobs data. How many top publications were in that set?
My guess is that lots of publications will get you snatched up (as you report), but also that top-5 publications will get you snatched up. I have a feeling an "in at Nous, revise-and-resubmit at Phil Review" slate will grab attention all over the place. But maybe I'm wrong? Did you see a statistically useful number of people with that sort of slate?
Posted by: Anon Grad Student | 03/22/2015 at 12:12 PM
Anon Grad Student: I treated both speciality top-5's and generalist top-5's as "top-5's." This seems to me a reasonable assumption. A publication in Ethics, for instance, is widely recognized as a "top-5"-type publication, even though Ethics is not a top-5 generalist journal.
When searching the philjobs data, I treated both forthcoming and already-published work as top-5 publications (since forthcoming articles are, for all intents and purposes, publications). In the entire philjobs hiring data set (57 hires), there were only a handful of top-5 publications. Almost no one had any! I didn't count revise-and-resubmits, but I do not recall seeing many of those at all--and I definitely would have noticed revise-and-resubmits at places like Nous, PPR, or Phil Review.
Posted by: Marcus Arvan | 03/22/2015 at 01:07 PM
Thanks for replying, Marcus. I shouldn’t have made bold conjectures about what your data ‘tell us’, since (unsurprising confession) I haven’t done my own analysis. I should have stuck to the methodological point. Consider two job-seekers, A and B. A spends five years on the market; she gets ten interviews and, eventually, an offer. B spends six months on the market; she gets four interviews and an offer. If I have correctly understood your measures of job market success, you rank candidates who look like A above candidates who look like B. (A and B are tied for offers, but A got loads more interviews.) But this is wrong: A found a job relatively easily, while B struggled for years.
Posted by: NT | 03/22/2015 at 01:11 PM
Oops! Last sentence should read "B found a job relatively easily, while A struggled for years".
Posted by: NT | 03/22/2015 at 01:16 PM
Marcus,
Can you provide some of the data? Specifically, how many people on the market this year: (1) just got their PhD, (2) are on the market for a second year (counting from the PhD), (3) are on the market for a third year (again counting from the PhD)?
Assuming that the number of new people entering the market is roughly equal from year to year, we would expect fewer and fewer people on the market in each cohort as we move back from the present new cohort. If the data do not show this, then there is reason to suspect that we have a biased sample. (I am sorry this is said in such a clumsy way.) I hope people get my point.
Posted by: a concern | 03/22/2015 at 01:27 PM
Hi NT: Thanks for your reply. Fortunately, we can use the data to see who is getting hired earlier and who is getting hired later (that is, who has been on the market longer than whom).
Here are the facts:
(1) Mean time to hire for hires with 1+ top-5 publication: 4.75 years
(2) Mean time to hire for hires with 1+ top-10 publication: 2.6 years
(3) Mean time to hire for hires with 1+ top-20 publication: 2.5 years
(4) Mean time to hire for hires with 1+ non-top-20 publication: 2.5 years
(5) Mean time to hire for hires with *only* non-top-20 publications: 2.37 years
As you can see, there is NO advantage in time-on-the-market for people with higher-ranked publications.
On the contrary, the people who took the *longest* to get hired were the (very few) people with top-5 publications, and the people who spent the shortest time on the market were people with only *non*-top-20 publications!
Posted by: Marcus Arvan | 03/22/2015 at 01:49 PM
a concern: Good question.
The Cocoon sample is indeed biased (strongly) towards relatively new candidates on the market. However, as I note below, it does *not* follow that it is a "biased sample." There are many reasons to think that the population being sampled--the entire job market--is biased toward people 0-3 years out, due to job-market attrition (i.e. people giving up!), in which case a *representative* sample should have the same "bias."
The Cocoon sample consisted of:
17 individuals=ABD
7 individuals=graduated 1 year ago
4 individuals=graduated 2 years ago
2 individuals=graduated 3 years ago
1 individual=graduated 5 years ago
1 individual=graduated 6 years ago
1 individual=graduated 7 years ago
The hiring data, on the other hand, do not tell us who is on the market--but they do give us the proportions of hires:
(1) 35% of hires-to-date are direct from grad school
(2) 12% of hires graduated 1 year ago
(3) 10.5% of hires graduated 2 years ago
(4) 10.5% of hires graduated 3 years ago
(5) 14% of hires graduated 4 years ago
(6) 7% of hires graduated 5 years ago
(7) 10.5% of hires graduated 6 years ago
I hope to obtain a larger Cocoon sample in the future.
Notice, however, that even a larger sample will probably *naturally*/accurately be biased towards candidates a year or two out--as, if past discussions on the Smoker are any indication, many people appear to give up after a couple of years on the market.
Notice, further, that if this is the case--if job-market attrition means that most candidates leave the market within a few years--then if we were to normalize the hiring data to reflect this, it would turn out that the longer a person is on the market, the *higher* their chance of being hired. Allow me to explain.
Suppose there are, say, 100 hires. Then suppose, as the data say, 35% of all hires (so, 35 individuals) are straight out of grad school. Now suppose, however, that the lion's share of people on the market (say, 900 candidates) are still in grad school. In that case, although 35% of hires are direct out of grad school, the probability that any particular grad-school candidate will be hired is only about 4% (35/900).
Now suppose, as the data say, roughly 10% of all hires (so, 10 out of 100 hires) are 6 years out on the market. But suppose that, because of market attrition, there are only 30 candidates still on the market after six years. In that case, although only 10% of hires are people who are six years out, the probability that any person in this cohort is hired is 33% (10/30).
In other words, job-market attrition rates are needed to know just how well *any* cohort (ABD, 1 year out, 2 years out) is doing on the market. More data on this is needed, however.
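Here is the toy calculation above in code form (the cohort pool sizes are, to repeat, purely hypothetical--only the hire counts come from the data):

```python
# Toy illustration: the per-candidate probability of being hired depends on
# cohort size, not just on each cohort's share of hires.
# Pool sizes are purely hypothetical.
cohorts = {
    "straight out of grad school": {"hires": 35, "pool": 900},
    "six years out":               {"hires": 10, "pool": 30},
}
for name, c in cohorts.items():
    print(f"{name}: {c['hires'] / c['pool']:.0%} chance of being hired")
# straight out of grad school: 4%
# six years out: 33%
```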
Posted by: Marcus Arvan | 03/22/2015 at 02:14 PM
Marcus,
The data you have collected is too small a sample. But the trend in your data is the trend we would expect, for two reasons.
After all, people leave the market in two ways: they are hired out of it (into a job), and they drop out of it (leaving the profession).
Thus there should be a trend something like this:
1st yr on market 100 people
2nd yr on market 70 people
3rd year on market 50 people
4th year on market 30 people
5th year ... 12 people
You get the idea
Posted by: a concern | 03/22/2015 at 02:22 PM
a concern: Entirely agreed. Much more data needs to be collected. But, although small, the two data-sets presented here are independent and both point to similar (and surprising) trends.
Posted by: Marcus Arvan | 03/22/2015 at 02:31 PM
Thanks for the further info, but there's still a problem: you're counting number of publications at the *end* of the job-search period. Here are two possible explanations of the mean-time-to-hire data you report. (1) Candidates with high-prestige publications are less attractive to hiring committees. (2) Senior philosophers (measured by years-since-graduation) are more likely to have high-prestige publications than junior philosophers. Of course, (1) and (2) could both be true--but (1) strikes me as pretty implausible. I bet you would find that high-prestige publications at the *start* of a job search are strongly associated with short time-to-hire.
Posted by: NT | 03/22/2015 at 02:31 PM
Very cool, Marcus -- thanks for all this!
Could the lack of relationship with higher-ranked journals be a statistical power issue?
Eric
Posted by: Eric Schwitzgebel | 03/22/2015 at 02:36 PM
(1) Mean time to hire for hires with 1+ top-5 publication: 4.75 years
(5) Mean time to hire for hires with *only* non-top-20 publications: 2.37
These results really are incredible, given that in my experience, everyone really does assume that a publication in a top 5 journal is a golden ticket. (Anyone who has ever seen all the praise on Facebook for those who have managed to secure such publications knows what I'm talking about. Not that I think such praise is a bad thing!).
Anyone have a guess as to what explains this? Do those who land top-5 publications tend to rest too much on those laurels, to the neglect of the rest of their dossier?
Posted by: Eugene | 03/22/2015 at 03:28 PM
Eugene: I don't think it is incredible given how most universities work, and the incentives involved in hiring. Although I will explain in more detail in a future post, let me give the short story.
It is natural to think that a top-5 publication will be a "golden ticket" for two reasons:
(1) Most of us got our PhD's at research universities, where research awesomeness is the #1 priority.
(2) We assume that departments want to hire "the best candidate."
Here is why both of these assumptions are bad. Most universities (mine, for instance) are *not* R1 schools--and the fact that they are not sets up strong incentives NOT to hire the best researcher.
First, departments can wait over a decade to receive a single new tenure-stream line. Tenure-stream lines are *incredibly* hard to come by at most schools, particularly in humanities departments.
Second, if a department hires someone who then jumps ship for an R1 at the first opportunity, the department may lose that tenure-stream line altogether (many times, a department does not get the line "back").
As such, there are *very* strong incentives at many schools to hire someone who will not leave. But what do top-5 or top-10 publications signal? They signal that the person will probably, at some point in the future, have (A) the desire to leave for a more prestigious program, and (B) the means to do it (someone who has published in Phil Review, Nous, etc. is likely to do it again...and be attractive to R1 schools).
In other words, for everything except R1 schools (which are few and far between), a top-notch publication record = "bad fit." Incentive-wise, departments want to hire someone who will (A) get tenure and (B) not leave. And what's the best indication of both? Answer: someone with a "good enough", but not *too* great, publication record.
Posted by: Marcus Arvan | 03/22/2015 at 03:43 PM
Hi Eric: Thanks for your comment.
It's possible, but on the whole it looks very unlikely, particularly when it comes to pubs and job offers. If it were a statistical power issue, you would expect results that "come somewhere close" to statistical significance. But, by and large, this isn't the case with top pubs and interviews or offers.
Here, for instance, are the correlation coefficients and p-values for publication type and job offers:
(1) Top-5 publications & job offers: r=-.020, p=.911
[Note for those who don't know statistics: this is about as far from a significant relationship as you can get. A correlation coefficient (r) of 0 is "no relationship at all," and the same goes for a p-value approaching 1.]
(2) Top-10 publications & job offers: r=.096, p=.602 (also nowhere near any sort of statistical relation)
(3) Top-20 publications & job offers: r=-.060, p=.774 (also *nowhere* close).
In contrast,
(4) Total publications & job offers: r=.344, p=.050 (significant relationship of moderate strength)
(5) Non-top-20 publications & job offers: r=.338, p=.059 (close to significant but not quite there).
In other words, the only result that looks like it could be due to (lack of) statistical power is (5) not quite reaching a level of significance.
When it comes to interviews, on the other hand, one (but only one) null result came *somewhat* close to significance:
(6) Top-10 publications and interviews: r=.289, p=.109
Although not statistically significant (and nowhere close to as strong as the relationships observed with total pubs and interviews, or non-top-pubs and interviews--both of which had insanely high correlation-coefficients upwards of .5!), it's possible that this one could turn out significant with a larger sample.
Finally, however, there is nothing at all close to a statistically significant relationship between top-5 pubs and interviews (r=.148, p=.412) or top-20 pubs and interviews (r=.160, p=.382).
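For anyone who wants to put rough numbers on the power question, here is a back-of-the-envelope sketch using the Fisher z-approximation for a two-tailed correlation test at the .05 level with n=33. It is an approximation only, not a formal power analysis:

```python
# Approximate power to detect a true correlation r with n observations,
# using the Fisher z-transformation (two-tailed test).
from math import atanh, sqrt
from scipy.stats import norm

def corr_power(r, n, alpha=0.05):
    z = atanh(r) * sqrt(n - 3)        # standardized effect size
    z_crit = norm.ppf(1 - alpha / 2)  # critical value (1.96 at alpha=.05)
    return norm.sf(z_crit - z) + norm.cdf(-z_crit - z)

print(corr_power(0.289, 33))  # ~0.37: modest power for mid-sized effects
print(corr_power(0.5, 33))    # ~0.85: large effects remain detectable
```

So a larger sample could indeed pull result (6) over the line, but low power would not explain the near-zero correlations above.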
Posted by: Marcus Arvan | 03/22/2015 at 04:17 PM
NT: I don't quite follow.
First, the data I have collected is from the start of this hiring season. This shows whether or not someone has a top-publication at the time of their hire.
Second, the chances of someone being hired and *then* getting a top-ranked publication (only after getting hired) is very small.
As such, the publications that people report at the time of their hire generally reflect the publications the person had heading onto the market--which is precisely what we're looking to measure.
Posted by: Marcus Arvan | 03/22/2015 at 04:32 PM
Possible case: Joe has been on the job market for five years, but has received no offers. He has, however, been (slowly) writing an excellent paper. Joe eventually completes his paper, and gets it accepted by Nous. He's then hired by the next department he applies to. Joe responds to your survey, reporting one top-five publication and a 5+ year job search. You crunch the numbers, and conclude that there is "no advantage in time-on-the-market for people with higher-ranked publications". My point is that one shouldn't treat Joe's failure to secure a job *before* his top-5 publication as evidence that hiring committees don't like top-5 publications. Moreover, if all we know about candidate X is that he's been on the market for 5 years and he has one top-5 publication, it is quite likely that the publication is recent. After all, it takes time to mature as a philosopher.
Posted by: NT | 03/22/2015 at 05:17 PM
NT: The single case you describe (Joe being hired the moment he gets a top-publication) is coherent. But the data will reflect the number of Joes out there, and whether there is a "Joe effect."
If there were "Joes" getting hired the moment they got a top publication, that would show up in the data: the Joes in the data set would produce a significant relationship between top publications and job offers/hires. But this is precisely what we don't see. There is no observed "Joe effect."
Posted by: Marcus Arvan | 03/22/2015 at 05:29 PM
Two continuing worries:
1) If there are only a handful of people with top-5 publications in the data set, then it seems to me that the responsible thing to do is to prescind from offering any conclusion about top-5 publications. It certainly does not seem to be the case that, "As you can see, there is NO advantage in time-on-the-market for people with higher-ranked publications." I can't see that, not in the data you've collected. Get more robust data; until then, stick to the warranted conclusion that "There is not enough data this year with respect to top-5 publications to evaluate their relevance."
2) I think that Ethics is the speciality exception to the norm. It is nearly a top-5 journal on its own, without needing to appeal to some speciality ranking. My guess is that the vast majority of top-5 speciality journals would not fit anywhere near the Nous, PPR, Phil Review category. And my further guess is that the top-5 data is therefore marred, since the vast majority of the "top-5" publications in the set are probably these outsider journals, not true "top 5"s.
Posted by: Anon Grad Student | 03/22/2015 at 07:13 PM
Anon Grad Student: Those are both fair points. There were actually very few top-5 specialist publications in the data set, and I don't think there were enough to seriously mar the data. Let me re-code them separately and report back!
In any case, when I present the final data at the end of the job season, I will make sure to keep the two categories separate.
Posted by: Marcus Arvan | 03/22/2015 at 07:42 PM
Thanks for your patient responses. I understand that one can in principle check for an association between high-prestige publications and success in this year's job market. I fear I may have misinterpreted your data: I thought 'total #' of interviews/offers meant total # since beginning job search, but perhaps you (and your respondents) meant total # *this year*. If that is what you (and they) meant, then I agree that your data count against the hypothesis that high-prestige -- say, top-20 -- publications are a big advantage on the job market. (The mean-time-to-hire figures, though, do *not* count against that hypothesis.)
Posted by: NT | 03/22/2015 at 07:51 PM
NT: Thanks for your reply. Yes, respondents were merely reporting total # of interviews this year.
Posted by: Marcus Arvan | 03/22/2015 at 07:55 PM
Marcus: you seem to conflate 'years on the market' with 'time from PhD to (this year's) reported hire'. But do you have data telling you that someone who this year got a job 4 years out from their PhD was in fact applying every year? Lots of people get 2 or 3 year research postdocs or VAPs or lectureships; they may not apply for jobs in the first or second year of such posts. Likewise, some land a TT position, stay in it for 5 years, then land a job this year and enter your data; you can't assume this person was on the market each of those 5 years. Relatedly: almost a third (31.5%) of your data set was hired 4-6 years from their PhDs… but what percentage of those were already in a TT job (and thus moving 'laterally')?
And regarding your dismissal of the 'Joe effect': do you have anyone in your data set who has 1+ top-5 publication who both: did not get a TT offer this year AND is not presently in a TT position? If you have lots of those in your data set, it would be grounds for dismissing the Joe effect; but if you have none, you obviously can't dismiss it.
Posted by: anon postdoc | 03/23/2015 at 05:54 AM
Hi Marcus,
This is the caution I'd draw about saying that people with top-five publications showed a greater time to hire: Suppose, as seems plausible, that people are much more likely to publish in top-five journals when they've been out of grad school for at least three years. Then most of the people with top-five publications in your sample will have a greater time to hire--simply because, if they had a shorter time to hire, they would have been hired before publishing their paper in the top-five journal.
To spell this out with an example: Joe and Jane both submit a paper to a top-five journal their second year out of grad school. It is accepted for publication their third year out of grad school. Jane gets a job offer her second year out of grad school; Joe gets one his third year out, when his publication is already on his CV.
Jane will get coded as someone with two years to hire, with no top-five publication; Joe will get coded as someone with three years to hire and a top-five publication. But it's not because the top-five publication caused Joe to get hired later.
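A tiny simulation makes the worry concrete (all numbers are invented for illustration):

```python
# Toy simulation: committees here genuinely value top-5 pubs (they raise the
# per-year hire probability), yet hires *with* a top-5 pub still show a longer
# mean time-to-hire, simply because such pubs tend to arrive late.
import random

random.seed(0)
times_with, times_without = [], []
for _ in range(10_000):
    pub_year = random.randint(3, 6)         # top-5 pubs tend to come late
    for year in range(1, 11):
        has_pub = year >= pub_year
        p_hire = 0.35 if has_pub else 0.15  # the pub genuinely helps
        if random.random() < p_hire:
            (times_with if has_pub else times_without).append(year)
            break

print(sum(times_with) / len(times_with))        # roughly 5-6 years
print(sum(times_without) / len(times_without))  # roughly 2 years
```

Even though the top-five publication *raises* the hire probability in this toy model, the top-five group still shows the longer mean time-to-hire.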
Posted by: Matt Weiner | 03/23/2015 at 10:15 AM