A while ago I was asked by a journal of high standing to referee a paper on a topic of which I had little knowledge. I immediately pointed this out, but the editorial assistant soon afterwards replied that it would be fine as they valued my opinion nevertheless. I was uncomfortable. The journal in question had, during the course of my career, rejected every one of my submissions, now numbering almost a dozen. Their position seemed to be that while they had consistently deemed me unworthy of publication, I was nevertheless worthy of passing judgement on what was good enough for them to publish: outside my own topic too. If I really knew what they wanted, I would no doubt have produced it myself by now. I turned down the assignment.
There is a big intellectual flaw in the peer-review system. It is inherently conservative. Suppose an editor succeeds in securing an ideal referee, eminent in the field and working on the same subject area. Such a reviewer could well have a vested interest in protecting a particular view or theory. Paradigm-changing work is unlikely to go down well with those who support an existing paradigm. Many papers in my own field work within an existing set of shared assumptions, offering only small additions or footnotes, which produce a dull read.
Let us see, with each published paper, the date of submission, the date of acceptance, and the names of the referees. Let authors know in advance the average decision time. Let us know, if asked to revise and resubmit, whether the paper will go back to the same referees or to different ones.
Personally, I think that Mumford's observations on the problems with peer review are spot-on. However, I don't think his positive proposals would do much at all to improve the situation. How would posting dates of submission and acceptance, names of referees, etc., correct for any of the problems Mumford cites in his article? Although I hesitate to get on my hobby-horse yet again, I can't help but think that there is already another system out there that does correct for most, if not all, of our current system's problems. Here, again, is how the physicists do it. They:
- Post their working papers on a preprint database called the arXiv.
- Read each other's papers, give feedback, and talk about those papers widely, and publicly, on physics messageboards.
- The feedback they give each other on the arXiv enables them to help each other improve their papers (which is, like, totally awesome) and, in some cases, disprove and weed out bad papers.
- The papers that start getting talked about a lot get a "stamp of approval" by the profession before they head out for official peer review.
- The official peer review process itself is something of a formality. The papers that have generated a lot of discussion on prominent messageboards (and have not been disproved) are given the green light by reviewers, and those that haven't are scrutinized more carefully or rejected.
Now, this alternative system isn't without potential problems. It is, strictly speaking, possible for bad work to get a lot of attention on messageboards -- though, for what it is worth, when this happens in physics it usually comes out on the messageboards *why* the work is bad, and the bad stuff doesn't make it into print. Still, it seems to me that, all things considered, the physicists' approach is far superior. It:
- Takes peer-review mostly out of the hands of unaccountable reviewers.
- Places it instead in the publicly accountable hands of the profession-at-large (as people explain, publicly, on the net, why such-and-such paper is SUPER GOOD or SUPER NOT-GOOD), while also
- Providing forums and incentives for people to help one another improve their papers (which helps everyone).
All of this seems totally awesome to me. Why don't we philosophers do it?