

I cannot find the exact graph at Weisberg's page that you display. Can you help me here - which one is it? Also, I think it would be revealing to see the best fit line through the data. I suspect the value of R-squared is telling.
Further, it looks like you have cut off the lower values on the graph. This is a bit misleading (unless you tell us why).


I'm not seeing that graph in the link. What are the lower ranked programs that have good placements? Also, that graph seems to only go from PGR score 2 to 4. Anyway, I guess I'd like some more information to make sense of this.

Marcus Arvan

Brad & Pendaran: Apparently it is a new graph that Weisberg just put together the other day. He linked to his previous posts (the ones I linked to) for more background information. I don't think Weisberg cut off the lower end of the graph (though I'm not sure), as PGR scores go from 1-5 and he does report data points between 1 and 2. But I will definitely keep an eye out on his page and social media for updates. I agree it would be nice to have a best-fit line - though it seems fairly easy to eyeball what that line (roughly) is likely to be.


On eyeballing the best-fit line: in some of the graphs on Weisberg's site, there are a bunch of dots at 0 on the y-axis.

Jonathan Weisberg

Thanks Marcus for sharing this, and thanks Brad and Pendaran for the comments.

I only shared this plot on twitter/fb; it's not on my website. I wanted to get some feedback and work on it some more before doing a more careful and full write up on my blog.

You can find an updated version with a trendline, and visualization of department "size" (# graduates), here: https://twitter.com/jweisber/status/1134257184936812544

The axes are truncated to maximize the viewing area; so e.g. if the y-axis starts at 0.1 that's because there weren't any data points (U.S. PhD programs) with a TT placement rate lower than 10% that were also ranked in the 2006 PGR. Modulo my coding/calculation errors, of course.

I think the R^2 was about 0.25 on this plot, iirc. Last night I was looking at things from the perspective of individual PhDs, as opposed to departments, and I think at that level PGR score may become quite clearly a very weak predictor of TT placement. But I need to work through all this more carefully. More soon I hope.
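[For readers unsure what the R^2 figure refers to, here is a minimal, hypothetical sketch with invented data — not Weisberg's actual code or the APDA dataset — of fitting a trendline to department-level points and computing R^2:]

```python
# Hypothetical sketch (invented data, not the APDA dataset): fit a
# least-squares trendline to PGR rating vs. TT placement rate and
# compute R^2 = 1 - SS_res / SS_tot.
import numpy as np

rng = np.random.default_rng(0)

# Fabricated example points: PGR rating (x), TT placement rate (y).
pgr = rng.uniform(1.5, 4.5, size=60)
placement = np.clip(0.15 * pgr + rng.normal(0.1, 0.15, size=60), 0, 1)

# Least-squares line: slope and intercept.
slope, intercept = np.polyfit(pgr, placement, 1)
predicted = slope * pgr + intercept

ss_res = np.sum((placement - predicted) ** 2)   # residual sum of squares
ss_tot = np.sum((placement - placement.mean()) ** 2)  # total sum of squares
r_squared = 1 - ss_res / ss_tot
print(round(r_squared, 2))
```

[An R^2 around 0.25 would mean the trendline accounts for roughly a quarter of the variance in departmental placement rates.]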


Aren't there are a lot of PhD programs that aren't ranked in the PGR? What rank are they in this system? Rank 1? Or are they excluded? Forgive me if this has been answered elsewhere.

Jonathan Weisberg

That's right, many programs are not included here because they weren't ranked in the 2006 PGR.

Jonathan Weisberg

Here are some follow-up plots from a more individual perspective. Instead of using PGR ratings to predict departmental TT placement rates, these use them to predict an individual's chances of a TT placement.


Sam Duncan

So I may very well be wrong about this (math's never been my best subject), and if I am then I'm sure someone will set me right, but... doesn't this whole method basically ignore the criticisms that have been made of the PGR? And doesn't it change the subject of discussion in a way that makes it inaccurate to really call it a placement ranking (or at least to do so without some serious qualification)?

As I understand how this works, jobs at schools without PhD programs will count for much less than jobs at schools with PhD programs, since schools without PhD programs are never linked back to when we sample a faculty member. And jobs at PhD-granting schools that don't tend to place their own students at other PhD-granting schools will themselves count for less, though the measure isn't as biased against them as it is against schools without PhD programs entirely. If that's so, then two points jump out:

1. It seems like it just provides mathematical cover for the (really dubious) value judgments baked into the PGR, i.e. that teaching-focused jobs are practically worthless, research-focused jobs are better, and jobs at Leiterrific research schools are worth more than jobs that might be exactly the same except at less Leiterrific places.

2. It's not at all a ranking of TT placement, since it weighs different tenure-track jobs radically differently. A program with mediocre overall job placement, but which places the students who do get jobs at big-name PhD-granting schools, could be ranked much higher on this metric than a school that gets a much higher proportion of its graduates TT jobs. But if we're talking about which school is simply better at placing students in TT jobs, then it's clearly the latter and not the former (never mind that not all desirable full-time jobs are even tenure track, or that not all TT jobs are even all that desirable).

Wouldn't it be more accurate to describe this as simply a measure of "how central [PhD programs] are in the hiring network of philosophy PhD programs," instead of describing it as a measure of placement full stop?

Marcus Arvan

"So I may very well be wrong about this (math's never been my best subject) and if I am then I'm sure someone will set me right, but.... Doesn't this whole method basically ignore the criticisms that have been made of the PGR and doesn't it change the subject of discussion in a way that makes it inaccurate to really call it a placement ranking (or at least to do so without some serious qualification)?"

Sam: I'm embarrassed to say I hadn't noticed this (I've been incredibly busy the past few days), but offhand you seem to me exactly right. It seems to me the results should probably be heavily qualified not as TT placement rankings *simpliciter*, but as TT rankings for jobs in highly ranked PhD programs.

To that extent, I still think it's an important set of results to be aware of - but it should be qualified (and probably presented alongside TT placement rankings simpliciter), as in the ADPA report - see here: https://philosopherscocoon.typepad.com/blog/2017/10/the-adpa-report-job-placement-multiple-job-markets.html

Jonathan Weisberg

All TT jobs are counted the same here, whether they're at PhD-granting programs or not. The question is whether the PGR rating of the program you graduate from helps you get a TT job, and if so by how much.

One way of thinking about it: answering this descriptive question helps resolve the value question. The value of the PGR isn't being presupposed, but tested.

Sam Duncan


I didn't intend this as a criticism of your post. I think this is very revealing: Even if one puts a thumb very heavily on the scales in a way that benefits programs that do well in the PGR and hurts those that don't, some of the ones that rank well in the PGR still don't do that well and some schools that don't rank well do surprisingly well. That is a pretty telling criticism of the PGR!
But I do want to be clear that there's a thumb on the scale. And so my issue is more with the original labeling of the study and chart as "Placement" and "TT placement," rather than something more accurate, if less eye-grabbing, like "Centrality in the Hiring Network of PhD Programs."


"Wouldn't it be more accurate to describe this as simply a measure of "how central [PhD programs] are in the hiring network of philosophy PhD programs" instead of describing it as a measure of placement full stop?"

If we're talking about the PageRank method then yes this seems correct.

But the attached graph doesn't mention PageRank...

But I confess that I feel confused. haha!

Sam Duncan

So the graph here doesn’t rely on the page rank method you describe on the placement page? But the “placement” rankings there do? Is the graph then based on the APDA data with no processing? This is all extremely confusing.

Jonathan Weisberg

There is no use of PageRank in any of these plots. These use simple counts of TT placements. The PageRank stuff was a completely separate analysis I blogged about last year. I don't know why it's being confused with these plots, except that it's linked above as "background".

Marcus Arvan

Jonathan: ah, that clears things up. That's what I first thought when I saw the plot on social media - it was the link to the old work that threw me (and others, it seems) for a loop. Thanks for clearing things up, and sorry for the confusion!

(And thanks for all of the work you do on this stuff. It's a very good service for prospective grad students, job-seekers, and the profession more broadly).


I recall that some of the programs with the highest placement records are unranked - so it is pretty hard to compare Leiter rank with placement record when 50-plus out of, like, 130 (?) total schools are missing. I don't know the exact numbers, but the point is that not including unranked programs is significant.

Another thing to keep in mind is how many people don't graduate from a program, and the total number of students. If a school has only a small number of students, it is just harder to gauge its overall placement record, because an idiosyncratic thing happening to one student will have a huge effect.

Still, I think there is something useful in this information, if people are careful to keep everything in mind (including that the placement data is incomplete). We are really just at the start of keeping these sorts of records, and the start is typically messy.
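[The small-cohort point above can be illustrated with a quick, hypothetical calculation (the figures are invented, not APDA data): the uncertainty of an estimated placement rate shrinks roughly with the square root of the number of graduates.]

```python
# Hypothetical illustration of small-sample noise in placement rates:
# the binomial standard error of an estimated rate is
# sqrt(p * (1 - p) / n). Figures are invented, not APDA data.
import math

true_rate = 0.5  # assume half of a program's graduates land TT jobs

for n in (5, 20, 80):
    se = math.sqrt(true_rate * (1 - true_rate) / n)
    # SE roughly halves each time the cohort size quadruples.
    print(f"n={n:3d} graduates: rate {true_rate:.0%} +/- {se:.0%} (1 SE)")
```

[With only five graduates, a single idiosyncratic outcome moves the estimated rate by twenty percentage points; with eighty, the estimate is far more stable.]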

Sam Duncan

That does clear things up. Sorry about the criticism then, but to be honest I dug into what I thought was the methodology before looking at the graph, and the PageRank method is so flawed as a way of measuring placement that I didn't look too closely at the graph afterward. I do think we ought to be skeptical of any data that's been heavily processed. Yes, sometimes processing is needed for data to tell us anything, but the raw APDA data seems pretty informative in and of itself, and a much better source of information for those applying to graduate school than the Leiter rankings.

Sam Duncan

Also, just to echo Amanda's point, I do think we really need to start pushing programs to keep and publish completion statistics. Not only is the placement data much less informative than it could be without them -- if you really want to know your chance of getting a job, you have to know your chances of finishing too -- but I can imagine that judging programs on placement rates without completion rates in the picture could create perverse incentives, like cutting or pushing out any student thought unlikely to get a job. At the very least, it would push programs to start taking seriously the mental health and support issues Marcus has blogged about here in other posts.

Jonathan Weisberg

Thanks to a criticism from Brian Weatherson, I've posted a modified version of the line graph (TT-placement-rate-by-PGR-of-PhD):


As Brian pointed out, the data on post-2014 graduates probably isn't "ripe" enough yet; too many people are still in postdocs and will eventually find TT posts. Restricting ourselves to the riper 2012-14 graduates changes the picture significantly. There's a pretty strong correlation then between the PGR rating of one's PhD and the chance of landing a TT job. Valid criticisms of the PGR notwithstanding, this is information I think students should be aware of.

Aside to Sam: PageRank isn't a measure of placement, it's a measure of network centrality---hiring network centrality, in this case. Many students don't (or at least shouldn't) care about what it measures. But some do, and for them the relation to PGR can be valuable information.

Fwiw, I cared about that kind of thing when I was a student, though I now think that was a mistake. I thought I wanted a job at a central PhD program, but now having one, I think I should have aimed for a SLAC job instead. But many of my peers are very happy with jobs like mine. Folks differ.


I am glad you made the chart, but I really think it is hard to make confident statements in this regard without including unranked programs (but... including *every* program might also be problematic; more on that later).

1. As APDA data has shown, some unranked programs have very high TT placement rates. I can think of two that place almost 100% of their students.

2. There are a lot of unranked programs, over 50, which is comparable to the total number of schools included in the survey.

3. We have talked about the following theory for a while at this blog: students from mid-ranked schools often have the hardest time on the market, because they try really hard for research jobs that are out of their reach, and then perceived flight risk makes it hard for them to get teaching jobs. If there is any truth to this, and if the unranked programs (which are not included) place well, then this will really skew the results.

4. Lower-ranked programs often have fewer students, and the fewer students a program has (or the fewer data points included in any mathematical analysis), the harder it is to get accurate results. This doesn't speak against low-ranked programs; it just means it is harder to say anything about them with confidence.

All that said, the problem with including *every* program is some only graduate a student once every three years or so. So I think unranked programs should be included if they meet some type of minimum bar of yearly graduating students.

I don't have a grudge against Leiter or the Leiter rankings... I just think we should be hesitant to make strong claims based on this data. I went to a pretty well-ranked program and things turned out well for me, but fewer than half of our graduates got TT jobs. I do think it is very clear that placement at PhD programs correlates with Leiter rank, and as you said, for some people that is the kind of job they want and it is good for them to know this. Also, from what I can tell, most of the very top programs (top 5-10) place almost all of their students, once time for postdocs is accounted for.

Tim O'Keefe

"There's a pretty strong correlation then between the PGR rating of one's PhD and the chance of landing a TT job. Valid criticisms of the PGR notwithstanding, this is information I think students should be aware of."

Yes, I agree that it's good to have this information. But is there any reason to use the PGR as a proxy for placement, when deciding which programs to apply to and what offer to accept, when we can use the placement data itself? The only one I can think of offhand is the postdoc issue, but there are ways around that, as you showed when constructing your new graph.

Jonathan Weisberg

Amanda: thanks for these points. I'll see what I can figure out about unranked programs' placement.

Tim: no reason to use PGR *instead of* departmental record, but there is reason to use it *in addition*. Individual departments often don't graduate many students, so their placement data can be noisy. By the time enough people have graduated from program X to bring the noise down, the factors driving their placement may have changed.

Aggregating (say) all PGR 3.0 graduates together helps ameliorate that problem. The cost is, it gives up relevant information that may distinguish (say) one PGR 3.0 program from another.

So my recommendation would be to look at both factors. If they clash, try to find out why before making a decision.
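[The aggregation idea can be pictured with a toy sketch (invented numbers, not the APDA dataset): pooling graduates from all programs that share a PGR rating gives each rating bin a larger n than any single department's record.]

```python
# Toy sketch of pooling TT placement counts by PGR rating.
# All figures are invented for illustration, not APDA data.
from collections import defaultdict

# (pgr_rating, graduates, tt_placements) for hypothetical departments.
departments = [
    (3.0, 8, 4), (3.0, 5, 2), (3.0, 12, 7),  # three PGR-3.0 programs
    (2.0, 6, 2), (2.0, 4, 1),                # two PGR-2.0 programs
]

grads = defaultdict(int)
placed = defaultdict(int)
for rating, n, tt in departments:
    grads[rating] += n    # pool graduates across same-rated programs
    placed[rating] += tt  # pool TT placements likewise

for rating in sorted(grads):
    rate = placed[rating] / grads[rating]
    print(f"PGR {rating}: {placed[rating]}/{grads[rating]} TT = {rate:.0%}")
```

[Each pooled bin rests on 10-25 graduates rather than 4-12, at the cost of blurring differences between individual programs with the same rating.]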
