Daily Nous recently ran a post on whether journals favor affiliated authors. In the comments section, Neil Levy wrote:
Note that there is a minimally pernicious explanation that may account for some bias: the bias may be toward topics or approaches (which it is an editor’s prerogative to favor) rather than people or institutions.
This got me thinking about something else I've worried about for a while: self-selection effects in the peer-review system. Here's my worry. From my experience as a reviewer, the selection of peer-reviewers tends to work something like this:
- A paper is submitted on topic X.
- The editors of the journal then seek out reviewers who specialize in topic X.
- The editors thus rely on judgments of people steeped in topic X to judge new papers on X.
So, for example, in my case, I receive a good number of requests to review papers in the following areas: nonideal justice, transformative experience, experimental philosophy, and free will--all areas I've published in. I am almost never asked to review a paper outside of these areas. Why? Because I don't specialize in the areas in question. But is this a good rationale? It might seem like it: specialists are, it would seem, the people best qualified to evaluate papers in their area. Alas, I think there's an underappreciated danger here: the danger of self-selection bias. Allow me to explain. If the reviewers for a subfield are drawn exclusively from people who have already bought into that subfield's framing assumptions, then papers challenging those assumptions may be systematically filtered out--and skeptics, for their part, may never enter the reviewer pool at all.
Does this kind of self-selection happen in philosophy? Does it happen a lot? I don't know of any good way to measure it, but for my part I have noticed something a bit peculiar. Sometimes I read papers in niche areas broadly within my AOS (moral and political philosophy), but which I don't specialize in, and find myself a bit flabbergasted by the kinds of things that have come to be accepted as premises in the area. I think to myself, "That premise doesn't seem true or plausible to me at all"--and yet paper after paper in the area treats the claim as, if not totally uncontroversial, then fairly safe to appeal to. When this happens, I wonder what is going on, and can't help but suspect that the literature in question may have self-selected for people who find the premise plausible or true, whereas people like me, who don't find the relevant premises true or plausible, may simply steer clear of the debate or get filtered out of the literature by a process that selects reviewers sympathetic to the dominant position.
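The filtering dynamic I'm describing can be made vivid with a toy simulation--every number in it is invented purely for illustration, not an empirical claim. Suppose reviewers mostly approve papers whose stance on a contested premise matches their own view, and suppose the specialists who self-selected into a subfield all accept the premise, while only half of the wider field does. Then insider-only reviewing skews the published literature heavily toward premise-relying papers, and adding a single outsider shrinks the skew:

```python
import random

random.seed(0)

def review(reviewer_accepts_premise, paper_relies_on_premise):
    """One reviewer's verdict. Reviewers mostly approve papers whose
    stance on the contested premise matches their own view."""
    if reviewer_accepts_premise == paper_relies_on_premise:
        return random.random() < 0.8  # sympathetic reviewer
    return random.random() < 0.2      # unsympathetic reviewer

def simulate(n_papers=10_000, insider_only=True):
    """Count accepted papers that rely on vs. challenge the premise."""
    accepted_relying = accepted_challenging = 0
    for _ in range(n_papers):
        paper_relies = random.random() < 0.5  # half of submissions challenge the premise
        if insider_only:
            # Specialists who self-selected into the subfield all accept the premise.
            reviewers = [True, True]
        else:
            # One insider plus one outsider from the wider field, where
            # (by stipulation) only half find the premise plausible.
            reviewers = [True, random.random() < 0.5]
        if all(review(r, paper_relies) for r in reviewers):
            if paper_relies:
                accepted_relying += 1
            else:
                accepted_challenging += 1
    return accepted_relying, accepted_challenging

r_in, c_in = simulate(insider_only=True)
r_out, c_out = simulate(insider_only=False)
print(f"insider-only reviewing:   {r_in} relying vs {c_in} challenging accepted")
print(f"with one outside reviewer: {r_out} relying vs {c_out} challenging accepted")
```

Under these made-up parameters, insider-only reviewing publishes roughly sixteen premise-relying papers for every premise-challenging one; adding one outside reviewer cuts the ratio to roughly four to one. The point is not the particular numbers, but that the skew arises purely from who does the reviewing, even when submissions are evenly split.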
Insofar as this is something that can occur--and, given the lack of methodologies controlling for it, I'm not sure how we can be confident that it doesn't--it is something we should want to avoid. In the sciences, there are clear methodological means of avoiding self-selection effects in published results: experimental design, statistical analysis, and replication. Further, in one area of scientific inquiry--biomedical research--a standard way of controlling for bias is the randomized controlled trial, in which one group (the "experimental group") receives the intervention being studied and another (the "control group") receives a placebo or no treatment. Because self-selection effects are possible in philosophy--and, if they affect what gets published, potentially pernicious, steering a literature in the direction of a select group's intuitions--there seems to me a pretty good case for adopting editorial practices to avoid them.
What might editorial practices like this look like? In the movie World War Z, viewers are told that after its history of calamities, Israel developed the "10th Man Rule":
After several disasters that NO ONE thought could happen, the Council decided that if a vote was unanimous against a possible outcome, one member would act as if it was ABSOLUTELY going to happen, and trying to prevent it. This way, if they have a crisis, one man is prepared for it, and assumes directorship of the council for the duration of the crisis.
Although this rule is obviously too strong for peer review, the basic idea behind it--make sure decision-making isn't left entirely to insiders--seems an important one to approximate. How might journal editorial practice approximate it? Here's one possibility: adopt a policy ensuring that at least one reviewer of a given paper is an outsider to the debate--a person who doesn't specialize in the area the paper is in. While this 'outside reviewer' might not be well placed to evaluate certain parts of a paper (say, whether it engages adequately with the literature in the area, or whether its formal methodology is correct), they could be given a special task: evaluating whether the premises the paper rests on are as plausible as people steeped in the literature seem to think. If the outside reviewer thinks not, they could play the special role of either advocating rejection or demanding a better defense of the premises being appealed to.
Of course, one obvious concern here is that a premise in a given literature may have become "safe" to appeal to because it was defended at length earlier in that literature. Still, I don't think this is a reason to exclude outsiders from the peer-review process. If philosophical history is any indication, we should always be on guard against an overly deferential attitude toward earlier arguments. And indeed, this appeal to the success of earlier arguments only pushes the problem back a step. Why? Well, if my concerns above are right, earlier arguments in a given literature may have become accepted in part because of past self-selection effects--in which case it would make sense to include outsiders now, as a control, to determine whether those past arguments are as convincing as their proponents in the current literature take them to be.
I'm curious to hear what others think. Are my worries misguided? Is my proposal a good idea? A bad idea? Why?