
less-recent grad

FYI, it actually takes longer than 10 minutes, if you do as it asks (e.g., listing every journal you've published in, every conference you've presented at, every grant you've received, and so on and so forth, all in separate entries).


I do not see how this approach to gathering data can be expected to yield a representative sample. I cannot imagine many people with no publications bothering - it is just one more way to feel bad about themselves and their career prospects. So we will end up with an inflated sense of how many papers people on the market have.


I agree with concerned. Unless those doing the study can explain why they are confident that the sample will be representative, and that in doing this they can answer worries about a selection bias toward those who are more confident in their abilities and happy with their accomplishments, then this will result in highly misleading (and potentially harmful) data.

John Turri

I applaud these researchers for putting in the hard work to acquire empirical evidence relevant to answering an important set of issues. I think that the concerns about selection bias are reasonable and worth keeping in mind, and I encourage the researchers to keep the concern in mind when interpreting their findings. However, it should also be kept in mind that these concerns remain *entirely speculative* until the data are in and are available for examination. The charge that their data will be "highly misleading" and "potentially harmful" is clearly unjustified at this point. Researchers don't have to address entirely speculative worries about the quality of their data in advance of data collection. And even if it turns out that their data are unrepresentative due to low participation from people with minimal qualifications, we could still learn interesting things from properly analyzing the data and cautiously interpreting the results.


John Turri
I do not think the worries that Amanda and I raise are so unreasonable. When scientists design a study, they are supposed to anticipate what biases might arise and how they might fail to gather a representative sample of the data they are seeking.
There is no independent check after the fact to determine if the data that are gathered are biased or partial. So your suggested strategy is not helpful here.


Hello John Turri,

I did *not* just claim, flat out, that the data would be highly misleading and potentially harmful. I claimed that IF the researchers are unable to answer worries about selection bias, THEN that would follow. This allows for the possibility that the researchers are able to answer this, in which case the worrisome consequences wouldn't follow. I also agree that it is possible that they *can* answer this; I just have not seen, from what has been presented so far, that the project is designed in a way that ensures this. Of course, it is possible there are aspects of the project design of which I am unaware. In fact, one hope was that my comment would open the conversation so that the researchers could explain why they believe their project controls for the fairly obvious possibility of selection bias.

And I am not sure what you mean by, "researchers don't have to address entirely speculative worries about the quality of their data in advance of data collection." Well, of course, they don't have to. But neither do others have to assume that everything about the project is well designed. It is okay to raise questions about aspects of publicly announced research, especially when the researchers are seeking public participation. By answering this reasonable worry, the researchers might gain more participants.

Next, I am not sure why my worry is being labeled "highly speculative." You yourself begin your comment by saying that these very concerns are reasonable, and that they are something the researchers should keep in mind. If so, I would hope the concerns get at something more than a randomly speculative worry, because it seems a bit much that researchers keep in mind all randomly speculative worries. Rather, they should keep in mind worries that are....reasonable. I suppose if you mean by "highly speculative" that there is no empirical data backing up my question, that is, of course, true. But in that case all sorts of speculative worries are reasonable. And ideally researchers should present data in a way that is convincing and wards off reasonable worries. Not all worries, of course, just reasonable ones.

I agree that if the data collected have a selection bias, this does not therefore mean the data are useless. It is indeed possible that there is some useful info we might get from them. But this is not relevant to my comment. It is possible to be worried about misleading data even when not all of the data is misleading, and even when some of the data is helpful. My worries were specifically about using the data to make general claims about job market trends, which is what the researchers suggested is the central aim of their project. (They said it is to get a picture of "job market applicants," and continued to explain this in a way that suggested they wanted to arrive at conclusions about typical or average candidates, not some select sub-set of candidates.)

a philosopher

Re: this debate between John Turri and others:

I think we can safely assume that if the only collection procedure is an open call for volunteers online, then the selection will *not* be random or representative. If these sorts of open calls yielded good samples, then all sorts of online polling we know to be junk would yield valid results.

Whether or not the specific "highly speculative" biases identified by Amanda and others are in play, it does not sound safe to assume that there are *no* selection pressures biasing the sample collected from this open call. It would actually strike me as rather amazing if an open call for responses yielded a random sample.

I also don't think the explanation given of the project contained enough information about how the data will be used for us to determine whether this is a problem. But Charles Lassiter does say that they are attempting a "systematic" collection of information, so it's fair to ask what makes this systematic.


I don’t know anything about the resources the individuals conducting this research have available to them, so I don’t direct this comment at them. But as a graduate student at a public institution who feels they should be making at least 120% of their current salary, being offered the mere chance to win a gift card, for an amount of money that I could earn with two hours of work at In N Out Burger, as the price of any unit of my time stings a little. This is a comment about the economy surrounding graduate students, and I think it belongs in this thread because maybe if it’s repeatedly pointed out, things might improve a little.


FWIW, I filled out the survey. I have *a lot* of conference presentations (like, a lot a lot), and a fair few pubs. It probably took me 10-15 minutes to fill out. It didn't seem especially onerous, especially compared to the interminable HR forms you have to fill out to apply to most jobs.

(For the race question, though, it's a bit strange to only have "American Indian/Native Alaskan" as the First Nations option, since there may well be other North American indigenous or Métis people on the market who aren't best captured by that category or that terminology.)

John Turri

I agreed that the concern is worth keeping in mind, not unreasonable. Yes, scientists should keep sampling bias in mind. But no one bears an obligation ahead of time to argue against speculative predictions, such as that expressed by your comment (e.g. "So we will end up with an inflated sense of how many papers people on the market have"). And you're wrong about there being no way to check a sample for bias after the fact. For example, suppose, contrary to your worry, that *only* people with no publications completed the survey. But, as the job market season winds down, it turns out that many successful applicants nevertheless have publications. It would follow that the sample was biased in a very specific way.

John Turri

I deny the conditional, "If the researchers are unable to answer worries about selection bias, then that would follow." Which worries they are or aren't able to preemptively argue against is strictly separate from what their actual data, after being collected, can reasonably be interpreted as supporting.

I fully support anyone's right to raise even highly speculative concerns. However, in my experience, philosophers have a tendency to (greatly) overestimate the force of such objections. That's what I'm seeing here on this thread and in online conversations elsewhere about this research. To claim that the researchers might be causing harm is a good example of this.

What evidence is there that the researchers can increase participation by responding to worries about their sample?

Yes, by "highly speculative" I meant that it was an open empirical issue with no actual evidence that it's true. That doesn't prevent it from being reasonable and worth bearing in mind, of course, as I acknowledged to begin with.

John Turri

A philosopher,
The sample definitely won't be either random or fully representative. That is a foregone conclusion which places limitations on reasonable interpretations of the results. But it's still worth doing.

Shane Wilkins

@ Michel,

The race categories we've used are the ones the US Census Bureau uses. We aimed to stick with those categories because doing so opens up the possibility of comparing our data to official government data.

@ concerned, a philosopher

We're aware of the limitations of the sampling methodology we're using. Elsewhere, I responded to a similar question by pointing out that all of the other sampling methodologies we considered have similar limitations.

@ #gradlife

We don't have any funding for this project. Please take the offer of a ticket to the gift-card lottery purely as an expression of our thanks for being willing to donate a little of your time to help us gather some data.

Sorry if I missed anybody else!
