Comments


Amanda

This would be great. Please let's make it happen somehow! My only fear is that, because philosophers are overly critical people, some will find a few things wrong with this proposed change, and so it will never happen, and we will be stuck with the same old system everyone thinks is horrible.

Postdoc

I have two suggestions.

(1) I think it would help if there were discipline-wide standards: something like a grading rubric for journal refereeing. The rubric would set out the dimensions along which a paper is to be rated, the rating scale (e.g., 1-5), and what each rating means for each dimension. Room could be left for comments explaining each rating. On top of that could sit journal-specific standards for interpreting the scores; some journals might weigh originality more heavily than rigor, for example. Other sorts of standards could be layered on as well, e.g., criteria that everyone agrees obviously disqualify a paper. Explicit bars for R&R and acceptance might be given too, along with procedures for evaluating a resubmission.
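
To make the rubric idea concrete, here is a minimal sketch (in Python) of how a shared rubric with journal-specific weights might be encoded; the dimensions, scale, and all numbers are invented for illustration:

# A hypothetical shared rubric: common dimensions and a 1-5 scale,
# with journal-specific weights layered on top.
RUBRIC_DIMENSIONS = ["originality", "rigor", "clarity", "significance"]

def weighted_score(ratings, weights):
    # Combine per-dimension ratings (1-5) using a journal's own weights.
    total_weight = sum(weights.values())
    return sum(ratings[d] * weights[d] for d in RUBRIC_DIMENSIONS) / total_weight

# A journal that weighs originality more heavily than rigor:
journal_weights = {"originality": 3, "rigor": 1, "clarity": 1, "significance": 2}
referee_ratings = {"originality": 4, "rigor": 3, "clarity": 5, "significance": 3}
print(round(weighted_score(referee_ratings, journal_weights), 2))  # -> 3.71

The point is only that the dimensions and scale would be shared across the discipline, while each journal supplies its own weights and thresholds.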

In any case, it's hard to imagine that *some* sort of explicit, discipline-wide standards wouldn't help. Referees would know what they're supposed to be looking for. Authors would know what to aim for. Consistency would improve at least a little. The act of filling out a form might itself help take the edge off people's over-aggressive tendencies.

I find it a bit hard to imagine that no one has suggested this before, but I haven't heard it. You might worry that there are no agreed-upon disciplinary standards; if so, that's all the more reason for us as a profession to get to the collective job of articulating them. Other disciplines, such as those in the physical sciences, have such standards, at least implicitly. Perhaps philosophy isn't the sort of thing that can have such objective standards, but if it can't, what's the point of peer review? Besides, we try to write up and enforce such standards for our students and apply them when grading papers.

(2) Alternatively (or perhaps in conjunction), one could ask reviewers to reflect on their own reviews. Here is what I mean. For literally any philosophy paper (even the best), pretty much any professional philosopher will be able to raise a number of objections. So it's a given that for any journal submission, a referee will be able to write up a report raising a number of objections and problems. It's therefore not terribly informative for a referee to find objections and problems. What is informative is some reflection on whether and why those problems are disqualifying for publication. So asking referees not only to articulate problems, but also to explain why each problem is disqualifying for publication, would be informative. After all, I think we've all had the experience of one person raising an objection but immediately waving it off as not forceful, while the next person interprets it as fatal. This might even be a way to prompt the reflection: ask the referee to imagine they have to convince a colleague, who sees the objection but doesn't think it's significant, that it is in fact a problem.

Even without explicit standards, asking referees to go through this reflective exercise may temper aggressiveness and get people to consider their own reasons more carefully.

Pendaran

The peer review system won't be improved any time soon, but as a purely theoretical exercise I like Marcus's proposal. I too have proposed that we use a central system in comments on this blog; my idea was to have submission fees. But the principle 'treat others as they treat you' is a great one: authors should get exactly the quality of reports that they offer their peers. Love it! However, the objection will be that this system isn't truth-centric. Whether you're a crappy referee probably has little to do with the quality of your own work, so this idea may hinder the publication of good ideas just because their authors lack refereeing skills. Every system will have flaws. My fee-based system has the obvious flaw of adding a big expense to publication. The positive is that it gives the unemployed/underemployed a way to make money refereeing papers. We could have a staggered fee system based on income, like taxes.
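
An income-staggered fee schedule of the sort Pendaran mentions could be as simple as a few brackets; a toy sketch, with all bracket boundaries and amounts invented:

# Hypothetical income-staggered submission fees (all numbers invented).
FEE_BRACKETS = [
    (0, 0),               # no income: free
    (30_000, 25),         # up to $30k/year: $25
    (70_000, 50),         # up to $70k/year: $50
    (float("inf"), 100),  # above that: $100
]

def submission_fee(annual_income):
    for ceiling, fee in FEE_BRACKETS:
        if annual_income <= ceiling:
            return fee

print(submission_fee(45_000))  # -> 50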

Sebastian Lutz

Marcus wrote: "Because there's a fairly clear conflict of interest in letting authors take part, perhaps this could be restricted to editors".

I'm not sure what you mean by this. Do you just mean that authors who receive a recommended rejection might view the reviewer worse? I think that could be avoided by grouping reviewer evaluations by recommendation (i.e., all evaluations by authors of papers where you recommended rejection go into one group, all evaluations where you recommended R&R into another, etc.)

I want to point out one possibility that this opens up: If authors get to evaluate their reviewers, these evaluations could be (and should be) made accessible to the reviewers. I would love to hear what authors thought of my reviews, what was helpful, what was unclear, what was too harsh, etc. At the moment, the best we have is that we can (too seldom!) see the other review(s). I think this is great, but author feedback would be so much more helpful.

Marcus Arvan

Hi Sebastian: Thanks for chiming in!

I find what you say very persuasive. Part of the reason I initially suggested authors *and* editors might evaluate reviewers is that I thought there might be some value in author input. I was just worried about the conflict-of-interest issue (authors giving reviewers bad marks simply for recommending rejection). But you're right: there may be fair ways to control for that! And I agree, learning what authors thought of my reviews might help me become a better reviewer.

David Bourget

I like the proposal, and I think it's quite doable from a technical point of view. It's an ingenious and nicely flexible variant on the general idea that reviewers get what they put in.

My only concern is this. I suspect that the main impediment to an overall well-working peer review system is a fundamental imbalance between the number of submissions and the time available from sufficiently qualified referees. To a large extent (though not exclusively), that's where the slowness of the review process comes from. According to some journal editors I've been talking with, there's been a big surge in submissions from more junior members of the profession in recent years. This is a challenge for them (I gather) because they want papers reviewed by more senior people they trust. So you have this huge demand on the time of a relatively small number of people. If we get more demanding on these people (asking for more detailed reviews), and/or put them in a situation where their ability to have their own papers reviewed fairly and promptly is on the line when they agree to review, we might disincentivize them from taking on reviews, which (one might argue) would lead to worse reviewing.

To put things differently, if every submitter to journal X were an acceptable referee for journal X, then journal X could in theory balance supply and demand for quality reviews through the right incentive system (in the crudest form: you get reviewed rapidly/with comments/etc. only if you review rapidly/with comments/etc.). However, that's not the case. For whatever reason, a number of submitters to journal X are not acceptable reviewers for X. This creates an imbalance that requires the "qualified reviewers" to put in more than they get out. For this reason, I think the key to a better peer-review system is greater incentives for "qualified reviewers" (so that supply goes up). Accountability and improved standards are highly desirable, but they might be counterproductive if their effect is to make reviewing less desirable for the people who are most heavily solicited as reviewers. Increasing the pool of "qualified reviewers" (for example, by helping editorial teams discover and "vet" would-be reviewers) would also help.
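
In its crudest form, such an incentive system is just a credit ledger; here is a toy sketch, with the names and credit amounts invented:

from collections import defaultdict

# "You get what you put in": completed reviews earn credits,
# and credits buy priority handling for your own submissions.
credits = defaultdict(int)

def complete_review(reviewer, on_time, with_comments):
    credits[reviewer] += 1 + on_time + with_comments  # better reviews earn more

def submission_priority(author):
    # Spend one credit for fast-track handling, if the author has any.
    if credits[author] > 0:
        credits[author] -= 1
        return "fast-track"
    return "standard queue"

complete_review("alice", on_time=True, with_comments=True)  # alice earns 3 credits
print(submission_priority("alice"))  # -> fast-track
print(submission_priority("bob"))    # -> standard queue

The imbalance described above shows up here directly: submitters who can't serve as reviewers never earn credits, so the qualified reviewers must put in more than they get out.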

I'm not sure how to incentivize reviewers who are already reviewing a lot more papers than they need to be reviewed.

Marcus Arvan

Hi David: Thanks for weighing in - I'm glad you like the proposal!

I also appreciate the main concern you raise. It's the same one David Velleman emphasized over in the Leiter thread (and in his Daily Nous post a while back). I will need to think a bit about possible ways to address it.

But here's one quick idea off the top of my head. What if journals like the ones you mention--really good journals that want senior people to vet stuff--adopted a *two-stage* review process? Here's what I mean. Suppose, in Stage 1, the editors recruit not-so-senior/prestigious people as reviewers. If those (much easier to secure) reviewers like the paper well enough for the editors to want to move forward further, then the process could move onto Stage 2: having the paper be reviewed by senior reviewers.

There's an obvious downside to this proposal. To get accepted, a manuscript would have to go through an additional stage of peer-review--thus lengthening the overall time those manuscripts might spend at the journal in question. However, I wonder if this might be creatively addressed in some way--maybe something like the following: what if the reviewers-selected at Stage 1 (junior reviewers) were *merely* tasked with reading the paper quickly (with, say, a 2-week deadline) and offering a thumbs-up/thumbs-down judgment on whether the manuscript should head to senior reviewers? If a paper had to get two thumbs-up from junior reviewers to proceed to Stage 2, the senior reviewers would presumably get far fewer manuscripts to review.

Anyway, just a thought. Curious what you think! In any case, I want to keep thinking about this problem, as it's clear (both from your comment and what Velleman and other editors have said) that it's one big problem journals are facing (though, again, not the only one!).

David Bourget

Marcus: I've also been thinking about this sort of tiered process. It would have the advantage of reducing the imbalance between supply and demand for reviews by offloading some reviewing work onto the more junior members of the profession, who could more easily be incentivized to do good reviewing work through some kind of "you get what you put in" system.

I really don't think the additional stage would result in longer review times. I think two main things slow down reviews: it takes time for editors to find reviewers (they get a lot of "no"s), and referees who do accept often sit on papers forever because they aren't really incentivized to be quick: almost anything else takes priority over reviewing papers. This system could demand quick reviews from both junior and senior contributors by offering far fewer reviewing opportunities to senior contributors. I think it would really speed things up overall, despite multiplying the reviews.

The main worry I have with this overall scheme is that whatever limitations junior members have as referees are not obviously limited to false positives (accepting papers that should be rejected). If junior reviewers also produce more false negatives (rejecting papers that should go forward), a tiered system would result in more undeserved rejections.

It's an interesting empirical question whether junior reviewers really are less reliable than senior reviewers in practice. I don't know. Bearing in mind that the relevant junior/senior distinction isn't age-based but accomplishment-based, it seems natural to expect that senior reviewers are on average at least a little more skilled and have beneficial experience and perspective when it comes to assessing new work. At the same time, however, they are so over-solicited that any advantage they have might be outweighed by the deleterious effects of time pressure in the current system. I can also imagine a case being made that junior people have advantages on average. For example, maybe they are on average more open-minded, less committed, and less likely to be subjected to criticism in the reviewed paper--all factors that might conceivably contribute to avoiding false negatives.

Because the reliability of junior referees is not well established, a system like this would have to give the first tier a fairly minimal role at first, as you envision. That would reduce the usefulness of the first tier. The key question is whether confidence in the first tier can become high enough that sufficient decision power can be offloaded to it to justify the whole setup. For example, if the first tier cannot be significantly more discriminating than a desk reviewer spending two minutes on each paper, it isn't justified.

I'm tempted to conduct an experiment to ascertain how the reviews of junior reviewers compare to those of senior reviewers. Solid data on this could really help in designing a concrete system and in promoting it to editorial teams. Maybe this has already been done. Pointers, anyone?
