

A priori, we would expect total number of publications to correlate strongly with time-since-graduation. And among job-seekers and recent hires, we would expect time-since-graduation to correlate strongly with total number of interviews. Thus, the correlation you report between total number of publications and total number of interviews doesn't tell us much about the behavior of hiring committees. The *lack* of correlation between number of high-prestige publications and total number of interviews does tell us something: it tells us that job-seekers with high-prestige publications get snapped up quickly.

Marcus Arvan

NT: Neither of those inferences is cogent. If people with top-ranked publications were being "snatched up quickly," that would show up in the data: those types of publications would correlate with interviews and offers. But they don't. The data do not show those people getting snatched up. They show instead that people with lots of total publications and non-top-ranked publications are being snatched up.


Thanks for this, Marcus! This info is helpful for folks like myself trying to figure out where to send our work.

Anon Grad Student

Marcus, I'm curious about the data. Did you have anyone submit a report on your list including a top-5 publication? I skimmed, but I didn't see any. I did see some speciality top-5s, but no general top-5s.

I know you also searched the philjobs data. How many top publications were in that set?

My guess is that lots of publications will get you snatched up (as you report), but also that top-5 publications will get you snatched up. I have a feeling an "In at Nous, revise-and-resubmit at Phil Review" will grab attention all over the place. But maybe I'm wrong? Did you see a statistically useful number of people with that sort of slate?

Marcus Arvan

Anon Grad Student: I treated both speciality top-5's and generalist top-5's as "top-5's." This seems to me a reasonable assumption. A publication in "Ethics", for instance, is widely recognized as a "top-5"-type publication, even though Ethics is not a "top-5 generalist journal."

When searching the philjobs data, I treated both forthcoming and already-published work as top-5 publications (since forthcoming articles are, for all intents and purposes, publications). In the entire philjobs hiring data-set (57 hires), there were only a handful of top-5 publications. Almost no one had any! I didn't count revise-and-resubmits, but I do not recall seeing many of those at all--and I definitely would have noticed revise-and-resubmits at places like Nous, PPR, Phil Review, or whatever.


Thanks for replying, Marcus. I shouldn’t have made bold conjectures about what your data ‘tell us’, since (unsurprising confession) I haven’t done my own analysis. I should have stuck to the methodological point. Consider two job-seekers, A and B. A spends five years on the market; she gets ten interviews and, eventually, an offer. B spends six months on the market; she gets four interviews and an offer. If I have correctly understood your measures of job market success, you rank candidates who look like A above candidates who look like B. (A and B are tied for offers, but A got loads more interviews.) But this is wrong: A found a job relatively easily, while B struggled for years.


Oops! Last sentence should read "B found a job relatively easily, while A struggled for years".

a concern

Can you provide some of the data? Specifically, how many people on the market this year: (1) just got their PhD, (2) are on the market for a second year (counting from the PhD), (3) are on the market for a third year (since PhD)?
Assuming that the number of new people entering the market is roughly equal from year to year, we would expect fewer and fewer people on the market in each category as we move away from the present new cohort. If the data do not show this, then there is reason to suspect that we have a biased sample. (I am sorry this is said in such a clumsy way.) I hope people get my point.

Marcus Arvan

Hi NT: Thanks for your reply. Fortunately, we can use the data to see who is getting hired earlier, and who is getting hired later (that is, who has been on the market longer than who).

Here are the facts:

(1) Mean time to hire for hires with 1+ top-5 publication: 4.75 years

(2) Mean time to hire for hires with 1+ top-10 publication: 2.6 years

(3) Mean time to hire for hires with 1+ top-20 publication: 2.5 years

(4) Mean time to hire for hires with 1+ non-top-20 publication: 2.5 years

(5) Mean time to hire for hires with *only* non-top-20 publications: 2.37 years

As you can see, there is NO advantage in time-on-the-market for people with higher-ranked publications.

On the contrary, the people who took the *longest* to get hired were the (very few) people with top-5 publications, and the people who spent the shortest time on the market were people with only *non*-top-20 publications!

Marcus Arvan

a concern: Good question.

The Cocoon sample is indeed biased (strongly) towards relatively new candidates on the market. However, as I note below, it does *not* follow that it is a "biased sample." There are many reasons to think that the population being sampled--the entire job market--is biased toward people 0-3 years out, due to job-market attrition (i.e. people giving up!), in which case a *representative* sample should have the same "bias."

The Cocoon sample consisted of:

17 individuals=ABD
7 individuals=graduated 1 year ago
4 individuals=graduated 2 years ago
2 individuals=graduated 3 years ago
1 individual=graduated 5 years ago
1 individual=graduated 6 years ago
1 individual=graduated 7 years ago

The hiring data, on the other hand, do not tell us who is on the market--but they do give us proportions of hires:

(1) 35% of hires-to-date are direct from grad school
(2) 12% of hires graduated 1 year ago
(3) 10.5% of hires graduated 2 years ago
(4) 10.5% of hires graduated 3 years ago
(5) 14% of hires graduated 4 years ago
(6) 7% of hires graduated 5 years ago
(7) 10.5% of hires graduated 6 years ago

I hope to obtain a larger Cocoon sample in the future.

Notice, however, that even a larger sample will probably *naturally*/accurately be biased towards candidates a year or two out--as (if past discussions on the Smoker are any indication) many people appear to give up after a couple of years on the market.

Notice, further, that if this is the case--if job-market attrition entails that most candidates leave the market within a few years--then if we were to normalize the hiring data to reflect this, it would turn out that the longer a person stays on the market, the *higher* their chance of being hired. Allow me to explain.

Suppose there are, say, 100 hires. Then suppose, as the data say, 35% of all hires (so, 35 individuals) are straight out of grad school. Now suppose, however, that the lion's share of people on the market (say, 900 candidates) are in grad school. In that case, although 35% of hires are direct out of grad school, the probability that any particular individual will be hired out of grad school is 4%.

Now suppose, as the data say, 10% of all hires (so, 10 out of 100 hires) are 6 years out on the market. But, because of market attrition, there are only 30 candidates still out on the market after six years. In that case, although only 10% of hires are people who are six years out, the probability that any person in this cohort is a hire is 33%.

In other words, job-market attrition rates are needed to know just how well *any* cohort (ABD, 1 year out, 2 years out) is doing on the market. More data on this is needed, however.
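The arithmetic in the two scenarios above can be sketched as follows. Note that the cohort sizes here are the hypothetical ones from the example (900 grad-school candidates, 30 six-years-out candidates), not observed data:

```python
# Hypothetical illustration of Marcus's point: the share of *hires* a
# cohort accounts for is not the same as an individual's *chance* of
# being hired from that cohort. Numbers below are assumed, not real data.
hires_abd, candidates_abd = 35, 900   # 35 of 100 hires, 900 candidates
hires_6yr, candidates_6yr = 10, 30    # 10 of 100 hires, 30 candidates

p_hire_abd = hires_abd / candidates_abd   # probability of being hired ABD
p_hire_6yr = hires_6yr / candidates_6yr   # probability of being hired 6 yrs out

print(round(p_hire_abd, 3))   # ~0.039, i.e. roughly 4%
print(round(p_hire_6yr, 3))   # ~0.333, i.e. roughly 33%
```

So even though the six-years-out cohort supplies far fewer hires, each individual in that (much smaller) cohort would face far better odds.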

a concern

The data you have collected is too small a sample. But that is the trend--the trend in your data--that we would expect, for two reasons.

After all, people leave the market in two ways: they are hired out of it (into a job), and they drop out of it (leaving the profession).

Thus there should be a trend something like this:
1st yr on market 100 people
2nd yr on market 70 people
3rd year on market 50 people
4th year on market 30 people
5th year ... 12 people

You get the idea

Marcus Arvan

a concern: Entirely agreed. Much more data needs to be collected. But, although small, the two data-sets presented here are independent and both point to similar (and surprising) trends.


Thanks for the further info, but there's still a problem: you're counting number of publications at the *end* of the job-search period. Here are two possible explanations of the mean-time-to-hire data you report. (1) Candidates with high-prestige publications are less attractive to hiring committees. (2) Senior philosophers (measured by years-since-graduation) are more likely to have high-prestige publications than junior philosophers. Of course, (1) and (2) could both be true - but (1) strikes me as pretty implausible. I bet you would find that high-prestige publications at the *start* of a job search are strongly associated with short time-to-hire.

Eric Schwitzgebel

Very cool, Marcus -- thanks for all this!

Could the lack of relationship with higher-ranked journals be a statistical power issue?



(1) Mean time to hire for hires with 1+ top-5 publication: 4.75 years

(5) Mean time to hire for hires with *only* non-top-20 publications: 2.37 years

These results really are incredible, given that in my experience, everyone really does assume that a publication in a top 5 journal is a golden ticket. (Anyone who has ever seen all the praise on Facebook for those who have managed to secure such publications knows what I'm talking about. Not that I think such praise is a bad thing!).

Anyone have a guess as to what explains this? Do those who land top-5 publications tend to rest too much on such laurels, to the neglect of the rest of their dossier?

Marcus Arvan

Eugene: I don't think it is incredible given how most universities work, and the incentives involved in hiring. Although I will explain in more detail in a future post, let me give the short story.

It is natural to think that a top-5 publication will be a "golden ticket" for two reasons:

(1) Most of us got our PhDs at research universities, where research awesomeness is the #1 priority.

(2) We assume that departments want to hire "the best candidate."

Here is why both of these assumptions are bad. Most universities (mine, for instance) are *not* R1 schools--and the fact that they are not sets up strong incentives NOT to hire the best researcher.

First, departments can wait over a decade to receive a single new tenure-stream line. Tenure-stream lines are *incredibly* hard to come by at most schools, particularly in humanities departments.

Second, if a department hires someone who then jumps ship for an R1 at the first opportunity, the department may lose that tenure-stream line altogether (many times, a department does not get the line "back").

As such, there are *very* strong incentives at many schools to hire someone who will not leave. But what do top-5 or top-10 publications signal? They signal that the person will probably, at some point in the future, have (A) the desire to leave for a more prestigious program, and (B) the means to do it (viz. someone who has published in Phil Review, Nous, etc. is likely to do it again...and be attractive to R1 schools).

In other words, for everything except R1 schools (which are few and far between), a top-notch publication record = "bad fit." Incentive-wise, departments want to hire someone who will (A) get tenure, and (B) not leave. And what's the best indication of both? Answer: someone with a "good enough", but far from great, publication record.

Marcus Arvan

Hi Eric: Thanks for your comment.

It's possible, but on the whole it looks very unlikely, particularly when it comes to pubs and job offers. If it were a statistical power issue, you would expect results that "come somewhere close" to statistical significance. But, by and large, this isn't the case with top pubs and interviews or offers.

Here, for instance, are the correlation coefficients and p-values for publication type and job offers:

(1) Top-5 publications & job offers: r=-.020, p=.911

[Note: for those who don't know statistics, this is literally as far away from a significant relationship as you can possibly go. A correlation coefficient (r) of 0 is "no relationship at all", and the same goes for a p-value approaching 1].

(2) Top-10 publications & job offers: r=.096, p=.602 (also nowhere near any sort of statistical relation)

(3) Top-20 publications & job offers: r=-.060, p=.774 (also *nowhere* close).

In contrast,

(4) Total publications & job offers: r=.344, p=.050 (significant relationship of moderate strength)

(5) Non-top-publications & job offers: r=.338, p=.058 (close to significant but not quite there).

In other words, the only result that looks like it could be due to (lack of) statistical power is (5) not quite reaching a level of significance.

When it comes to interviews, on the other hand, one (but only one) null result came *somewhat* close to significance:

(6) Top-10 publications and interviews: r=.289, p=.109

Although not statistically significant (and nowhere close to as strong as the relationships observed with total pubs and interviews, or non-top-pubs and interviews--both of which had insanely high correlation-coefficients upwards of .5!), it's possible that this one could turn out significant with a larger sample.

Finally, however, there is nothing at all close to a statistically significant relationship between top-5 pubs and interviews (r=.148, p=.412), or top-20 pubs and interviews (r=.160, p=.382).
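For readers unfamiliar with how a correlation coefficient like those above is produced, here is a minimal pure-Python sketch of the Pearson r formula, applied to made-up illustrative numbers (six hypothetical candidates, not the survey data):

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    # Covariance term (numerator) and standard-deviation terms (denominator)
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    sd_x = math.sqrt(sum((x - mean_x) ** 2 for x in xs))
    sd_y = math.sqrt(sum((y - mean_y) ** 2 for y in ys))
    return cov / (sd_x * sd_y)

# Made-up example: total publication counts vs. job offers for six
# hypothetical candidates (purely illustrative numbers).
pubs   = [0, 1, 2, 3, 5, 8]
offers = [0, 0, 1, 1, 2, 2]
print(round(pearson_r(pubs, offers), 3))  # 0.917
```

An r near 0 (as with the top-5, top-10, and top-20 results above) means the two quantities barely move together at all; an r near 1 or -1 means they move together almost perfectly. The p-values Marcus reports additionally ask how likely an r of that size would be by chance alone in a sample this small.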

Marcus Arvan

NT: I don't quite follow.

First, the data I have collected is from the start of this hiring season. This shows whether or not someone has a top-publication at the time of their hire.

Second, the chances of someone being hired and *then* getting a top-ranked publication (only after getting hired) is very small.

As such, the publications people report at the time of their hire generally reflect the publications the person had heading onto the market--which is precisely what we're looking to measure.


Possible case: Joe has been on the job market for five years, but has received no offers. He has, however, been (slowly) writing an excellent paper. Joe eventually completes his paper, and gets it accepted by Nous. He's then hired by the next department he applies to. Joe responds to your survey, reporting one top-five publication and a 5+ year job search. You crunch the numbers, and conclude that there is "no advantage in time-on-the-market for people with higher-ranked publications". My point is that one shouldn't treat Joe's failure to secure a job *before* his top-5 publication as evidence that hiring committees don't like top-5 publications. Moreover, if all we know about candidate X is that he's been on the market for 5 years and he has one top-5 publication, it is quite likely that the publication is recent. After all, it takes time to mature as a philosopher.

Marcus Arvan

NT: The single case you describe (Joe being hired the moment he gets a top-publication) is coherent. But the data will reflect the number of Joes out there, and whether there is a "Joe effect."

If there were "Joes" getting hired the moment they got a top publication, then it would show up in the data. The Joes in the data-set would produce a significant relationship between top publications and job offers/hires. But this is precisely what we don't see. There is no observed "Joe effect."

Anon Grad Student

Two continuing worries:

1) If there are only a handful of people with top-5 publications in the data set, then it seems to me that the responsible thing to do is to prescind from offering any conclusion about top-5 publications. It certainly does not seem to be the case that, "As you can see, there is NO advantage in time-on-the-market for people with higher-ranked publications." I can't see that, not in the data you've collected. Get more robust data; until then, stick to the warranted conclusion that "There is not enough data this year with respect to top-5 publications to evaluate their relevance."

2) I think that Ethics is the speciality exception to the norm. It is nearly a top-5 journal on its own, without needing to appeal to some speciality ranking. My guess is that the vast majority of top-5 speciality journals would not fit anywhere near the Nous, PPR, Phil Review category. And my further guess is that the top-5 data is therefore marred, since the vast majority of the "top-5" publications in the set are probably these outsider journals, not true "top 5"s.

Marcus Arvan

Anon Grad Student: Those are both fair points. There were actually very few top-5 specialist publications in the data set, and I don't think there were enough to seriously mar the data. Let me re-code them separately and report back!

In any case, when I present the final data at the end of the job season, I will make sure to keep the two categories separate.


Thanks for your patient responses. I understand that one can in principle check for an association between high-prestige publications and success in this year's job market. I fear I may have misinterpreted your data: I thought 'total #' of interviews/offers meant total # since beginning job search, but perhaps you (and your respondents) meant total # *this year*. If that is what you (and they) meant, then I agree that your data count against the hypothesis that high-prestige -- say, top-20 -- publications are a big advantage on the job market. (The mean-time-to-hire figures, though, do *not* count against that hypothesis.)

Marcus Arvan

NT: Thanks for your reply. Yes, respondents were merely reporting total # of interviews this year.

anon postdoc

Marcus: you seem to conflate 'years on the market' with 'time from PhD to (this year's) reported hire'. But do you have data telling you that someone who this year got a job 4 years out from their PhD was in fact applying every year? Lots of people get 2- or 3-year research postdocs or VAPs or lectureships; they may not apply for jobs in the first or second year of such posts. Likewise, some land a TT position, stay in it for 5 years, then land a job this year and enter your data; you can't assume this person was on the market each of those 5 years. Relatedly: almost a third (31.5%) of your data set was hired 4-6 years from their PhDs… but what percentage of those were already in a TT job (and thus moving 'laterally')?

And regarding your dismissal of the 'Joe effect': do you have anyone in your data set who has 1+ top-5 publication who both: did not get a TT offer this year AND is not presently in a TT position? If you have lots of those in your data set, it would be grounds for dismissing the Joe effect; but if you have none, you obviously can't dismiss it.

Matt Weiner

Hi Marcus,
This is the caution I'd draw about saying that people with top-five publications showed a greater time to hire: Suppose, as seems plausible, that people are much more likely to publish in top-five journals when they've been out of grad school for at least three years. Then most of the people with top-five publications in your sample will have a greater time to hire--simply because, if they had a shorter time to hire, they would have been hired before publishing their paper in the top-five journal.

To spell this out with an example: Joe and Jane both submit a paper to a top-five journal their second year out of grad school. It is accepted for publication their third year out of grad school. Jane gets a job offer her second year out of grad school; Joe gets one his third year out, when his publication is already on his CV.

Jane will get coded as someone with two years to hire, with no top-five publication; Joe will get coded as someone with three years to hire and a top-five publication. But it's not because the top-five publication caused Joe to get hired later.
