
07/02/2018

Comments


non-Leiterific grad student

As a grad student at an unranked program, I don't want to see publication in philosophy go the way of ArXiv. If prestige bias is a problem for professional philosophy--as some on this blog have suggested--then people in my position *really* need anonymous peer review for a fair shot (a) at getting published in prominent venues, and (b) at compensating for their lack of prestige on the job market, which often requires (a). I agree that peer review doesn't *ensure* anonymity, but surely it makes anonymity more likely--especially if you make sensible personal decisions (e.g., omitting from your website titles of works under review, asking colleagues not to cite your work prior to publication, etc.).

As for the suggestion that "work by outsiders is often discussed" by the math/science community, that may partly be a function of broader agreement about what constitutes an "important result" in the field. If you're a grad student at a non-prestigious math program, but you're the first person to solve a well-known problem, then sure, your paper will drum up interest and excitement. But aside from work done in formal epistemology, say, I don't think philosophy has anything comparable. All the more reason not to reinvent the wheel for peer review--especially when, for all its problems, it's one of the few equalizers for people with my background.

Marcus Arvan

non-L grad student: I appreciate the concern. But I respectfully disagree.

I don't think we need 'anonymized' peer-review for people like you to have a fair shot.

First, as I mentioned here, I think 'anonymized' review is impossible to enforce--and I think it arguably benefits well-connected people from highly-ranked departments.

Here is what I suspect happens all too often. Well-connected people from good departments do what they can to make their work well-known. They present it at all kinds of conferences, share it with others online (in some cases openly!), and make connections with others in the profession. Whether or not there is any intent here to compromise 'anonymized' review, the reality--I think--is that it plausibly *does* compromise anonymized review, advantaging well-placed people in the profession. And again, 'anonymized' review can be compromised in so many other ways (e.g. by you presenting at conferences, Google reviewing, etc.).

Second, I think 'anonymized' review places the entire process--and 'outsiders'--too much in the hands of established people in the profession. So, for example, suppose you want to publish a paper on grounding or whatever. If you send a paper to a good journal, which reviewer(s) are they likely to select? Answer: people who have published on that topic--people who have 'skin in the game' in terms of which arguments they prefer, etc. It's a serious conflict of interest, one I've struggled to avoid as a reviewer.

In contrast, an open, ArXiv-like system makes *everything* transparent. It is, I think, *far* more equalizing than our current system. Here's how. Suppose Big Shot Philosopher uploads a new paper to the ArXiv. Then suppose it's not a very good paper. At that point you and others have the option to engage it critically. Whether Big Shot's paper is published will not be left to the whims of a couple of 'anonymized' reviewers (who may be able to guess who the author is). No, whether the paper is any good will be likely to come out *publicly*. That's an equalizer.

By a similar token, if you--a person from a lower-ranked program--put a really good paper out there, there may be people like *me* who draw positive attention to it, advocate for it, etc. You, as an author, would also be able to defend it against misinterpretations, update it in light of feedback, and so on. These too are *equalizers*. Indeed, I suspect there are a lot of people out there just like me: people who are looking for good philosophy *wherever* it comes from. Making the review process more public would only enable people like me--people in the profession at large--to draw attention to what *they* think is good philosophy: not just two referees!

Finally, I have to take exception to the idea that there are clearer standards for what counts as good work in math and physics. Although I often hear philosophers say things like this, I think it is just a preconception, and one that does those disciplines a great disservice. If you google a little bit, you'll find there are immense controversies--both in math and physics--about what good research programs and methodologies are...nearly as many as in philosophy. My wife works in a STEM field, and I see first-hand just how much her field is like ours--just how many basic methodological disagreements there are, and how vastly referees disagree over whether a manuscript is publishable.

We really need to set that preconception aside--as, I think, we should our preconceptions here more generally.

Like you, I want a more equalized profession. But, I ask you: how good is our current system at doing that? Not very good, it seems (see e.g. https://philpapers.org/rec/DEDPBA ). So why not try something different, something that other fields have found works, and works well?

non-Leiterific grad student

Thanks for the detailed response, Marcus. I'll respond in order:

1. Yes, anonymity is impossible to guarantee. But it seems to me that the peer review process makes anonymity *far* more likely than it otherwise would be--at least for those people who (like me) are mostly invisible in the profession. Yes, there's a decent chance that elite folks who submit their material to Phil Review aren't truly anonymous. But I don't see how that undermines the anonymity of my submission to, say, Synthese.

2. Yes, people who have "skin in the game" are somewhat more likely to review papers. But not everyone who has skin in the game is well-established. I've published a few papers (one in a so-called "top ten" journal), and I've reviewed around 10, several of which fall comfortably within my area of research. As the demand for reviewers continues to increase, people like me will have a say in what's in and what's out--not just the elite folks. I think this is a positive development. (Others will doubtless disagree!)

3. ArXiv may be transparent. But prestige is also transparent, and that hasn't mitigated its biasing effect. Say I post a paper to the philosophy equivalent of ArXiv. A fancy pants philosopher (who, let's just say, happens to owe my mentor a favor) tweets in praise of my work. Suddenly, people start reading it. It might even be a good paper, and by the time it's submitted to a journal, its reputation precedes it. Now, what about all those equally good papers that *don't* have the endorsement of a fancy pants? Will they be engaged with on the same level? Probably not. Unless someone just *happens* upon the paper--and, let's be honest, there's already too much to read--the less connected researcher remains disadvantaged, and the more connected researcher reaps the rewards. This scenario is perhaps a bit fanciful--if only I were so well-connected!--but I don't think much hangs on the details. I'm just having trouble seeing how an ArXiv-like system would be an equalizer for the discipline, instead of just reinforcing biases already in place. (That said, the situation might be different for non-"core" philosophers, who face considerable barriers to publication in mainstream analytic journals.)

4. I'm pretty far from the STEM world, so I should obviously defer to you on this. Sorry if I'm perpetuating an unhelpful myth. I wouldn't mind clarification on this bit, though: what do you mean by setting aside "our preconceptions here more generally"? Are you referring to preconceptions about the sciences or preconceptions about the peer review process?

Thanks!

Marcus Arvan

Thanks, NL grad student, for your detailed response as well.

Look, I think your concerns (and responses to mine) are entirely fair. Alas, I also doubt we can settle these matters one way or the other a priori. We'll never know for sure which review process is better (viz. at equalizing the profession) unless and until we try something different.

Still, my sense from what I know about math and physics is that (A) the system generally appears to work well, (B) it appears to me to equalize things, or at least not exacerbate prestige bias (which surely exists in all academic fields), and (C) it seems to substantially address a *lot* of other problems I enumerated in my post (e.g. correcting scholarly errors prior to review, weeding out bad papers prior to review, improving papers prior to review, helping authors establish priority and have their work engaged with prior to review--all of which seem to me to substantially *help* authors, reviewers, and editors).

In other words, I think our main differences right now are these.

First, you're worried about sacrificing certain 'equalizing' features of anonymized review. In contrast, I am skeptical that those features of anonymized review work as advertised--and in any case I don't think we know with any real certainty that an ArXiv model would be worse/less-equalizing than our current model, especially given the success of the ArXiv model in other disciplines.

Second, these matters aside--which I don't think we can settle a priori--I think there is a good case to be made that the ArXiv model would substantially ameliorate if not solve a wide variety of clear problems with the current process, including:

(i) overburdened journals,
(ii) with editors struggling to find reviewers,
(iii) and long journal wait times,
(iv) all plausibly caused in part by authors submitting too many "unready" or unpublishable papers,
(v) with scholarly errors that referees have to catch, or if they don't, errors that make their way into actual publications,
(vi) authors having to wait for months or even years in a state of anxiety, keeping their work a secret so as to 'not compromise' anonymized review,
(vii) all of which interferes with the timely dissemination and discussion of philosophical ideas, and
(viii) can place scholars at risk of being "scooped."

In short, while I think your worries are fair ones, it is far from clear to me that they are decisive--whereas it seems much clearer to me that an ArXiv-like system would solve a ton of problems that just about *everyone* has issues with!

Marcus Arvan

I realized I forgot to address your question about my point on preconceptions. I meant it in the broadest sense: that we should be very wary of preconceptions in general—about peer review, the sciences, and the profession more generally.

As I mentioned in my “Midcareer Reflections” post on conventional wisdom, my experience is that quite a lot of conventional wisdom/preconceptions in the profession—including things that many people seem to just take for granted—are wildly inaccurate. I think we should be especially wary of preconceptions that uphold the status quo, particularly when there are clear problems with the status quo (as I think is the case here).

non-Leiterific grad student

Thanks for your response, Marcus. I agree that we're unlikely to settle the matter a priori. I wonder whether PhilPapers might be a good place to do a test run for the ArXiv model (perhaps in connection with your response journal idea?)

I also agree that the ArXiv model could make significant headway on the problems you list, though I don't have a clear enough sense of the details to be confident that it would decrease journal submissions and/or improve the quality of those submissions. (Would journals desk reject on the basis of PhilPapers engagement [or lack thereof]? Would the model replace anonymous peer review, or merely supplement it? Would engagement prior to anonymous peer review carry any official weight at the review stage?) I'm not suggesting you need to answer these questions now. But I think the devil would be in the details, especially since it would mark a pretty radical departure from the usual way of doing things in the humanities.

Also, I think the benefits of the arXiv model would have to be overwhelmingly great to justify dispensing with anonymous peer review. Our current system is far from perfect, but--for what it's worth--I *love* the fact that, for every paper I've reviewed, I haven't known the identity of the author. I'm free to make an independent assessment on the basis of the quality of the argument and prose alone; I'm relatively safe from implicit sexual/racial bias; and I don't have to second guess myself on the basis of the author's status. (I can imagine thinking to myself: "I don't think this paper is very good. But Mr. Fancy Pants wrote it, so maybe it's really good and I'm just not competent to review it!") All that's to say: I do think anonymity is a huge benefit of the current model. It's not a decisive mark in favor of the status quo, but it's got to weigh heavily.

Remco Heesen

With apologies for the plug, this seems like a good time to mention that my co-author Liam Kofi Bright and I are working on a paper-length version of this argument, titled "Is Peer Review a Good Idea?". While we call it "abolishing peer review", our proposal is exactly to move to an ArXiv model along the lines Marcus suggests here.

The current draft is on my website. (We should probably post this to PhilPapers and/or philsci-archive to be consistent!)
https://remcoheesen.files.wordpress.com/2015/03/is-peer-review-a-good-idea.pdf

Marcus, in one of the comments above you say about the ArXiv model that "the system generally appears to work well". Are you aware of any research to back this up? We have not been able to find any work that tries to evaluate the ArXiv model on the basis of empirical data.

Skef

There has been some back-and-forth discussion in this thread (and previous ones) over whether norms of physics and math papers are meaningfully different from those in philosophy. Marcus argues that they aren't, or that there is little reason to think they are.

I wonder, though, whether *within* sub-areas of those fields there is much overt hostility to particular styles or approaches. There almost certainly is some of that *between* sub-areas (in any field), but I also have a STEM-y background and it's my impression that "I don't even consider this to be (proper) math/physics" (as opposed to "I don't think this is right") happens, but is not all that common.

However, it's pretty clear that *many* contemporary philosophers consider *most* contemporary philosophy, even in their own areas, to be ... let's just say "misguided". How would the arXiv model address that?

And -- related but not quite the same question -- how would it address the "arms race"? Journal editors claim to be overwhelmed by content. Is the idea that an archive model would remove that incentive, and there would be many fewer papers produced? If so, why would that be, specifically? And if the rates don't change, who is going to read papers in the archive to give them a chance in the first place? I can see the outlines of an argument for efficiency through shared effort, but I can also see how an archive wouldn't carry any institutional obligation to read papers prior to journal publication, which doesn't fix anything. A new paper is added to the archive. Who reads it? Why?

Or is the idea supposed to be that everyone would now feel obligated, or new professional norms would instantiate the obligation, to have read everything in the archive possibly relevant to a new paper, published or not? Isn't that ... really unlikely? As of now people don't even feel obligated to read papers in the less prominent journals. And what is the institutional "hinge point" after the switch to an archive? Your paper achieves status (however that happens) and then the journal rejects it because it doesn't engage with enough of the as-yet-unpublished literature? How do the journal's reviewers even know that?

Pendaran

I used to think our peer review system was seriously flawed, but now I think the fundamental problem is with the job market. The system is flooded with submissions, because there are way more PhDs than jobs. I've posted here before about how the market is probably worse than in '08. I'm far from perfect, but my 12 publications (most top 20) couldn't secure me a decent job. If that's my situation, it must be horrible for (mostly) everyone, so people are swamping journals with submissions. As long as the job market is this bad, I suspect we can't really have good peer review. Whatever system is proposed is going to be swamped or prohibitively expensive.

The first step that must be taken is to cut the supply of PhDs by half. This means that some kind of central authority must be established to regulate PhD programs. As I have said before on this blog, it should be mandatory that placement rates be posted. Programs with bad placement rates should be forced to limit admissions. Programs need to be open about the state of the job market, and older professors need to be educated about it. Students should be deterred from pursuing a PhD in general. Possibly programs should be responsible for their graduates, forced or highly incentivized to offer them jobs until they can secure jobs elsewhere. As long as programs are pumping out PhDs with little regard for their employment prospects, the system cannot be fixed. We're basically collapsing our own profession through PhD hyper-inflation, destroying our salaries and job prospects. (A corollary is that once the supply of PhDs is controlled, those with jobs will have more power, not being so easily replaced. They can then work to limit adjunctification and short-term contracts.)

What is this central authority and how can it regulate PhD programs? The central authority could be the APA or it could be something new. Probably only those with power and authority in the profession could start a new body. The authority would regulate programs by offering accreditation. Accredited programs will be required to meet placement standards and whatever else we want. Of course, this only works if enough universities join the accreditation system. But that's a collective action problem. If enough philosophers would get together and work together, the accreditation process could be made to work. Once a critical mass is reached, students will have many accredited programs to choose from. Non-accredited programs will thus feel the pressure to join the process. This would be helped along if Leiter etc. joined the accreditation movement and warned students not to attend non-accredited programs, or maybe deleted them from his rankings.

Other professions use a similar method to control the quality of their programs. There is no reason why the method couldn't also be used to control the supply of PhDs through mandatory placement rates and so on. It's up to us really whether we want to save the profession. But first enough have to agree and then they have to act. If we could get control over our job market, the publication crisis would fix itself. Maybe this blog would be a good place to start recruiting people for this idea, or maybe there needs to be a new blog.

What happens to us if we don't act? I think the obvious answer is that everything will get worse for everyone. The oversupply of PhDs will further drive down salaries, and further increase short-term contracts, adjuncts, and fast food academia. Academics are going to lose all power over the university, their departments, and their futures to the administration. The swamped journal system is going to drive down citation rates, delay or stop the publication of significant work, and become increasingly slow and unusable. Those who give up due to the horrendous system and go on to other things will speak badly of academia, vote accordingly, and tell their children not to attend university. The issues compound. We're on a dangerous path and the future is bleak. Let's do something about it.

Pendaran

Induction suggests that if things continue, the future is bleak not only for philosophy but also for the humanities more generally.

https://www.insidehighered.com/news/2017/08/28/more-humanities-phds-are-awarded-job-openings-are-disappearing

Marcus Arvan

Thanks so much for the lively discussion, everyone!

PENDARAN: If you'd like to write a guest post on this, please feel free to email me. However, I'd like this thread to stay on the topic of the OP. In brief, though, I'm sympathetic with your general concerns--and think something should be done--but am wary of giving a bureaucratic entity the power to close down programs. I think it would be better for the APA (or whatever) to have binding guidelines for programs (viz. transparency about completion and job-placement rates): guidelines that, if violated, could lead to something like an official censuring of a program (rather than closure). In general, though, I think people should be free to make their own informed decisions, and programs should not be shut down by a powerful entity just because some people think they shouldn't exist.

SKEF: As I see it, the ArXiv model would address the arms race like this. As any referee will probably tell you, a lot of papers we have to review are *clearly* unpublishable, and probably should have never been sent out for review. Why were they sent out? I'll give you my answer as an author. Over the years, I have probably sent out at least 10 papers that I learned *through* the peer-review process were either unpublishable or far from ready to be under review. In brief, I submitted them because I had neither the time nor resources to get a lot of feedback on them from other people. If we had an ArXiv system like physics--where papers posted to online repositories are openly discussed--I probably would have learned the papers to be garbage much more quickly...and I probably would have never sent them out. I suspect a lot of people are like this: they send papers to journals to *find out* whether they are publishable (in large part because having to keep papers out of the public eye makes it hard to get feedback). So, then, suppose there are 100 philosophers like me, each with 10 bad papers the ArXiv system might prevent them from sending to journals. That's *1000* papers subtracted from the reviewing pool. Now suppose there are 1000 philosophers like me (not an implausible number, I think, given the global scope of the discipline). That's *10,000* papers not sent out for review!

As for the notion that most people think most papers are "misguided", I'm not sure about that. I suspect that open public discussion of papers would lead to excellent open debates about *whether* a given paper is misguided, and I suspect many papers would have a lot of proponents and critics alike. And I suspect this would be a good thing. The current system leaves it up to *two* referees to decide whether a paper is misguided.
If there were a lot of animated discussion on an ArXiv about *whether* a paper is misguided, I suspect that kind of controversy about a paper would be a good thing (viz. eventual publication). For, I tend to think that really good papers are *not* ones that everyone likes, but rather papers that ruffle feathers and stoke controversy. In this regard, I think the ArXiv would probably be a whole lot less philosophically conservative than traditional peer-review, which requires multiple referees to *agree* a paper is good before acceptance. Finally, for the ArXiv system to work, it would take a change of norms. However, I think the norms would change naturally. For instance, suppose I read a paper by so-and-so, and I find that the paper has scholarly deficiencies--say, not citing recent papers directly relevant to its argument. I would want to draw that to the author's attention. That in turn could lead the author to check out those papers--particularly, if other people raised similar concerns. In other words, I suspect the more open peer-review is, the more people will *push* for better scholarly norms (viz. reading and engaging new works posted to the ArXiv). This appears to me to be what happened in physics. No one in that field can ignore a new paper that is relevant to their research, because if they do someone else will say, "Um, you do realize that someone just posted a paper on that, right?"

REMCO: very cool. I look forward to reading it! For what it is worth, I am a bit skeptical of abolishing peer-review altogether. I worry that might turn philosophy into nothing more than an online popularity contest. What I think is nice about the math/physics system is that it *combines* the crowd-sourced nature of open discussion with traditional peer-review at journals. Both parts of their process seem to place a healthy kind of "check and balance" on the other.

NON-L GRAD STUDENT: As I mentioned above, on my rendering the ArXiv model would *not* replace peer-review. It would supplement it, as in math and physics. In those fields, there is open public discussion of ArXiv papers where papers are debated and judged, but then the papers also have to be judged by referees at journals. I think this is ideal, as each side of the process provides a kind of "check and balance" on the other. On the one hand, public discussion can clear up interpretive issues *for* referees, as well as give them a better idea whether their judgments of a paper's merits are accurate (on a few occasions, I have learned what other referees thought of a given paper, and in a few cases their comments led me to reevaluate some of my thoughts about it!). In this regard, public discussion would I think improve the review process at journals--by helping referees better evaluate papers. On the other hand, I think it is good for a formal peer-review system to still exist, as people need formal publications for tenure and promotion. As for whether the model can be expected to have benefits immense enough to justify such an extreme change to the field, I suspect this will depend on just how bad one thinks the current system is. You seem to not have big problems with the current system. However, I know many people--including myself, Velleman, and others--who think our current system is unsustainable and deeply problematic (in ways, in my view, that needlessly and systematically interfere with scholarly discussion, place undue anxiety on authors, and so on). In other words, I think the problems with our current system are so overwhelming that we could hardly do worse--and I think the success of the ArXiv system in math and physics indicates that the benefits of that system are considerable: far more than considerable enough to justify a dramatic change.

Amanda

While I agree there is a problem with the oversupply of PhDs, a HUGE problem, it is simply impossible to create a central authority out of thin air to regulate things (putting aside whether this central authority is a good idea). There is no way for it to have any actual authority, given the structure of universities, which are their own independent entities, not subject to any authority other than the government (in the case of public universities). Now one might argue you can try to create this entity based purely on reputation, such that universities not approved by such an entity will lose reputation power. But again, I have very little faith this can just be created by will. There would be great controversy over the authority of any such body, just like there is controversy over the APA. Controversy would prevent this "central authority" from having any power.

I do think peer review is broken (because many "blind review" publications are not blind). I am honestly not sure how the ArXiv model would work. I could see it evening out the playing field, and I could also see it having prestige problems. But it seems worth trying something different. I also think it would be nice to see other possible suggestions out there. If we are going to completely revamp how the discipline works, we should consider a variety of alternatives. And no - I don't have any other ideas yet lol.

Elizabeth Hannon

Thanks for the interesting post! Here are a few worries though...

Granted, anonymous peer review isn’t always perfectly anonymous, but it’s far less hopeless than the discussion here implies.

—BJPS referees regularly recuse themselves from papers where they think they know the author(s) of the paper (we explicitly ask them to do this). And, equally important, these referees are frequently mistaken about the identity of the author(s). We have a ‘grand reveal’ at our editorial meetings, where the identities of the authors of accepted papers are revealed. Very occasionally one or other of us has guessed correctly, but that’s a very small minority of cases.

—The current system also allows authors some control: if an author is happy to opt out of anonymous review, they can post widely; if not, they can play it more cautiously. Some people have better reasons than others to prefer anonymity.

All that said, the ArXiv system needn't depend on us foregoing anonymity: papers posted in an ArXiv equivalent could have the author’s identity masked (obviously pitfalls include fake reviews of one’s own work, but presumably there are tech solutions to this).

Assuming reducing the workload of editors and referees is an important goal of the proposal, I’m not clear how it would do so, and it may even lead to increases.

If the thought is that a significant number of papers will be abandoned at the ArXiv stage, I'm not sure this is a safe assumption. Established people may well be happy for their papers to only appear in something like ArXiv, just so long as their ideas are discussed (though this is a possibility already, certainly for ‘big names’, yet many still submit to journals). Early career people, on the other hand, will undoubtedly feel the pressure to send as many things out as possible, even if publication is unlikely.

As mentioned in a comment above, people would be acting as informal peer-reviewers at the ArXiv stage, as well as formal reviewers via journals, and this seems like an overall increase in the work being done.

From an editorial perspective, it would be great if more papers arrived better polished, but this would likely amount to nothing more than fewer desk rejections (thus more work for editors in securing referees). These referees will likely come up with new comments for authors to address, beyond what was said in the ArXiv system (thus further work for authors). We see this whenever an original set of referees is unavailable to look at a revised paper—an entirely new set of objections arises. And while this further scrutiny might improve the quality of the final paper (although too many cooks etc.), I take it this isn’t particularly the point of this proposal.


Skef

"If we had an ArXiv system like physics--where papers posted to online repositories are openly discussed--I probably would have learned the papers to be garbage much more quickly...and I probably would have never sent them out. I suspect a lot of people are like this: they send papers to journals to *find out* whether they are publishable (in large part because having to keep papers out of the public eye makes it hard to get feedback). So, then, suppose there are 100 philosophers like me, each with 10 bad papers the ArXiv system might prevent them from sending to journals. That's *1000* papers subtracted from the reviewing pool."

But this assumes what I'm asking about. Why think a paper posted to an archive of unpublished philosophy papers would tend to be read at all, ever? There is already philpapers and (especially) academia.edu. Have you read a substantial number of unpublished papers on those sites? (I've *glanced* at a few, but maybe read one in detail.) Why would people be reading these papers if they are (for example) written by people no one has heard of?

Also, why would *uncoordinated* reviews of all of these papers wind up being more efficient than *coordinated* reviews? Journal reviewers feel they are overworked now, but in this new world they're going to grab swaths of unpublished papers to check over, not knowing who else is doing so, because it's virtuous?

"As any referee will probably tell you, a lot of papers we have to review are *clearly* unpublishable, and probably should have never been sent out for review."

One hears this a lot, and I don't know how people are still saying it given the direction things have gone. A paper that is "*clearly* unpublishable" could be rejected with a note saying "this is clearly unpublishable because X and Y, among other reasons I don't have time to list." But many journals are now desk-rejecting a majority of submissions. I asked about desk rejections in an earlier thread on this site and the general response was "oh, well, who knows what that means?" If the problem is work of clearly low quality, why can't journals just say that? What does "clearly" mean when used in this sense?

It's now quite possible to send a paper to many journals and hear back "Sorry, we're not going to send this out for review, but we encourage you to submit it elsewhere" from all of them.

Marcus Arvan

Skef: actually I do check out most of the new papers posted on philpapers in my areas of research. Every morning when I wake up, I go to philpapers, click on new papers added, go down the list, and bookmark/download papers in my areas.

The problem with places like Academia.edu--besides the facts that you need a membership to read things, they try to monetize everything, and their user interface is terrible, especially on phones--is that they don’t have easily available places to have a good open discussion of people’s papers. If philpapers set up and promoted a good system for doing so, and more people pushed for changes to disciplinary norms—namely, that one *should* post paper drafts on philpapers, promote open discussion of one’s drafts, and comment on other people’s drafts—then I think things would change. Indeed, I have a bit of a hard time understanding your skepticism, as this is exactly what the physicists do: they constantly keep up with new papers posted to the ArXiv. Sure, you don’t have to give *every* paper a careful read, but for all that it is pretty easy to skim them and read in detail the ones you think are the most important. If the physicists can do it, why can’t we? (And please, don’t say: physics papers are easier to read and evaluate than philosophy papers. My response is: try reading some physics papers on the ArXiv. Most of them are incredibly dense and complicated!)

Anyway, the reason that uncoordinated reviews would be more efficient is simple. One has more *information* to go on as a reviewer. In physics, where people follow online discussions of new papers, it is easy for a third party to the conversation to learn a lot from other people’s reactions to it. Other readers can clear up interpretive issues, raise and evaluate potential concerns, etc. All of this uncoordinated activity makes it far easier for one to do one’s job as a reviewer: it is a crowd-sourcing way to help *you* better understand and evaluate the paper as a formal reviewer. This isn’t to say that you should defer to the crowd. You have every right as a formal reviewer to form and compose your own evaluation of it. It is just that open discussion is likely to help you do a better job as a reviewer, and indeed, do so more efficiently. I follow many physics blogs where new ArXiv papers are discussed at length. The discussions are *incredibly* helpful for people—specialists and non-specialists alike—to form informed opinions of the paper prior to submission for review at journals.

I’m not exactly sure what your concern about desk-rejections is. To be sure, I have argued before that they seem to me vastly overused in philosophy, as I know of more than a few really good and influential papers that were desk-rejected multiple times. Desk-rejections should in my view only be used for papers that *clearly* fall below a minimal standard. The real problem in my experience is not simply that journals overuse desk-rejections (which I think they do). The deeper problem is that editors are not particularly well-equipped to desk-reject the ones they *should*. In my experience, the kinds of papers that make it past desk-rejection that shouldn’t (the ones I said are “clearly” unpublishable) tend to be ones whose projects are based on devastating scholarly mistakes, such as predicating their argument on the claim that “no one has argued X” when, on the contrary, numerous people have already done so. These papers tend to be a waste of time: as a reviewer, you have to patiently explain how their paper ignores vital arguments in the literature, etc. The problem is that editors aren’t necessarily in a good position to spot these failures, as the editor may not specialize in the narrow literature in question.

All of this would, I believe, be prevented by an ArXiv system. In physics, when a paper receives devastating objections in public discussion, the paper just languishes on the ArXiv in perpetuity. The author can get a pretty good picture that their paper is bad, and choose not to send it to journals. I can say with little doubt that if our system and norms were like those in physics, there are at least 10 papers I never would have sent out—or, if I had, they would have been a much better use of referees’ and editors’ time. I wish that were not true, but in our current system it is. I had no better option available given our system and norms: given how hard it is to get good feedback and given the urgency I had to deal with (viz. the job market and tenure), I had to send stuff out that I didn’t know was good or bad. That’s bad for the discipline, and I suspect a lot of people do it for the same reasons. An ArXiv system and norms would change this. It would help authors discover far more quickly which of their papers are and are not worth sending to journals, and help them substantially improve their papers that *are* promising. This would not only make things far easier on reviewers and editors. It would probably cut down on one of the single most time- and energy-consuming parts of the review process: revise and resubmits. At many journals, papers that are eventually published went through 2, 3, or even 4 R&R’s. This is a spectacular waste of journal resources, including available and willing referees. An ArXiv system and open discussion would better enable authors to revise papers to address potential concerns *before* submitting them for formal review.

Marcus Arvan

To give you all a better idea of how things work in math and physics, here are some examples.

Go visit Garrett Lisi's ArXiv paper "An Exceptionally Simple Theory of Everything." https://arxiv.org/abs/0711.0770

If you go to the bar on the far right of the screen, you will see '45 blog links.' You click on those links, which take you to blogs that discuss the paper openly.

People in those professions routinely visit these kinds of blogs, where new ArXiv papers are dissected in depth prior to journal review.

See e.g.

http://resonaances.blogspot.com/
http://www.science20.com/quantum_diaries_survivor

People have complained that philosophy blogs are dying. Why are they dying?

Here's one answer: people don't actually discuss *philosophy* on them. Most philosophy blogs today are either general profession/news blogs (e.g. Leiter and DN), special interest blogs (e.g. the Cocoon, Feminist Philosophers), mostly informal philosophical discussion forums (Phil Percs)--and at those blogs where research is discussed, it's usually to discuss books or papers people have already published (BORING!). It's *far* more exciting to discuss and debate work in progress--which is exactly what happens at math and physics blogs, thanks to their ArXiv-based peer-review model and norms.

A long time ago, I tried to start a series on this blog where people would share unpublished drafts. No one took me up on it. Why? Probably for the same reason *I* don't post unpublished drafts here or on philpapers: I am afraid of "violating anonymized review" and ticking off reviewers and editors. Seriously. That's why I don't post stuff. I think about doing it sometimes, but then I worry: "What if reviewers/editors think I am violating norms for preserving anonymized review?"

None of these things disincentivizing open discussion of unpublished drafts exist in math or physics. Consequently, in those fields people post and discuss ArXiv papers left and right. New drafts are then promoted and discussed in all kinds of forums. It is wonderful. I routinely visit physics blogs for a reason: it's awesome to hear from others about good new ArXiv papers, which I can then learn about in the comments sections and then check out myself on the ArXiv (with the added benefit of having some idea of what I will read before I read it!).

If only philosophy did something similar. My hope is that by drawing attention to how awesome all of this is in those other fields, we might get a better picture of why we should do it too. On top of all of the benefits for peer review, the ArXiv system helps those fields in terms of public relations (another thing philosophy has issues with). When new ArXiv papers are discussed on blogs, they are usually discussed in ways that specialists *and* intelligent casual visitors can understand. It helps physics have more of a public footprint. Given how many philosophy and humanities departments are being shut down, making philosophical discussion more accessible and lively--as in the physics community--would, I think, only be an added benefit.

Skef

"As I see it, the ArXiv model would address the arms race like this. As any referee will probably tell you, a lot of papers we have to review are *clearly* unpublishable, and probably should have never been sent out for review. "

"I’m not exactly sure what your concern about desk-rejections is. To be sure, I have argued before that they seem to me vastly overused in philosophy, as I know of more than a few really good and influential papers that were desk-rejected multiple times. Desk-rejections should in my view only be used for papers that *clearly* fall below a minimal standard. The real problem in my experience is not simply that journals overuse desk-rejections (which I think they do). The deeper problem is that editors are not particularly well-equipped to desk-reject the ones they *should*. "

If the main current problem were really a flood of clearly unpublishable papers, they could be rejected with a note to that effect. My "concern about desk rejections" is that their prevalence indicates that the problem is not really a flood of clearly unpublishable papers. Journal editors cite drastically low quality, but their actions aren't consistent with that explanation.

What really seems to be happening is that while many poor quality papers are sent to journals, the "flood" consists of papers that are not only not clearly unpublishable, but not necessarily unpublishable at all, but there are insufficient resources to sort out which are best. So each (one hopes) gets a skim, most are desk rejected on the basis of that skim (and a guess about its relative quality), and a few get an actual reviewer. Basically, there are more papers than anyone considered qualified to judge is willing to read.

Anyway, I won't pursue this further; I'll just note something that seems to have gone unmentioned in past discussions on this subject: arXiv is often presented (explicitly or implicitly) as having solved a publishing problem in math and physics that could be adopted in philosophy, and it is claimed that there are no significant differences between the fields to raise doubts about that.

But when you read histories of particular eras in physics, whether in the 20s or the 50s or the 70s, it appears that things already worked roughly the way they do now. Physicists would mail copies of papers around, or present them to different audiences, and the ones that eventually became well-known would wind up being published. The archives are just a modern improvement (and perhaps a democratization) of what was already common practice. So it would help if those people who think there are no relevant differences could try to explain this longstanding actual difference.

Marcus Arvan

Skef: I agree with your point about desk-rejections. Indeed, in a way, I think we are mostly talking past one another.

While you say the problem isn't with the "flood" of papers, but rather journal resources, I think these things are connected. Desk-rejections appear to be overused because journals lack the *resources* to evaluate them (even publishable ones) properly. Why? Because they are flooded with too many papers! Velleman basically made this point openly in his post on the "publication emergency." My point then is that there are reasons to think the ArXiv model would help stem that flood, thus freeing up resources for reviewing papers *better* (both publishable and unpublishable ones, the latter of which I think there would be far fewer of).

I also just happen to think that, in addition, desk-rejections are *under*-used in the cases where they actually should be used. So really, the problem with desk-rejections is two-fold: they are overused in *general* as a time-saving method (our point of agreement), but underused in cases where they actually should be used (my point).

Anyway, I do know the history of physics pretty well. I think you are overselling the differences a bit. After all, *philosophers* have long had a practice of sharing papers too! Consider Parfit's "On What Matters." He evidently shared drafts of it widely for years under the working title "Climbing the Mountain." And he is far from alone. Yet here is the problem...

The problem (as I see it) isn't that philosophers never share papers (or that physicists didn't have stronger norms to this effect decades ago). The problem is that our current model and norms *favor* the well-placed in these regards. Vulnerable, early-career people have to worry about whether posting their papers online will "compromise anonymized review" or leave them open to someone else poaching their ideas. More established people, on the other hand, don't have to worry about these things. Parfit was able to share his manuscripts widely because he didn't have to worry about "compromising anonymized review." By a similar token, I've seen particularly well-placed people in the profession announce on facebook that they have a new paper draft on X, stating the title openly for all to see. This is not equality or fairness. It is "anonymized" review giving rise to an unintended consequence: differential incentives that disadvantage the professionally vulnerable compared to the professionally established.

The problem with our current system is that while well-placed people can use these advantages, more vulnerable members of the profession are strongly incentivized to keep their drafts under wraps. As I indicated, this is what *I* do. I would love to share paper drafts openly. Yet I don't do it. Why? Because I fear "compromising anonymized review." It would be great if, as in physics, people like me didn't have to face this dilemma--while, all the while, well-placed people can share their work openly.

This, I think, is what the physicists and mathematicians figured out a while ago: that far from being "fair", traditional anonymized review plausibly advantages the well-placed (who don't have to worry about compromising it) and disadvantages the more vulnerable members of the profession. On its face, "anonymized" review *seems* like it should be fair. But, like others, I increasingly doubt whether it is--not to mention the fact that I think it makes the entire process vastly more inefficient, contributing to the "publication emergency" that Velleman and other editors have problems with. On the flip side, I think the ArXiv model is *more* fair, efficient, and equalizing--as it makes the entire process more transparent for everyone, as well as the norms and incentives for posting papers online the same for everyone.

So, yes, physics may have had these practices and norms decades ago. But what of it? Maybe it just goes to show that they were ahead of the game, and we need to catch up!

Marcus Arvan

Hi Elizabeth: Thanks for sharing your experience, as well as your worries.

I find your idea of an ArXiv that preserves anonymity intriguing. However, part of what I think is so great about the non-anonymized ArXiv model in math and physics is that it supports better scholarship and exchange of ideas. For example, when papers are posted on the ArXiv in physics, people are expected to cite them--even if they are not published in a journal yet. If you read physics papers, many of the citations will be to unpublished ArXiv articles. In other words, the discipline essentially *counts* them as published the moment they are posted to the ArXiv.

I think this is good for a variety of reasons. First, it ensures that authors get credit for their ideas. Second, it clears up so-called priority disputes. Third, it results in quicker dissemination of ideas--as people can cite and discuss articles immediately. Maybe all of these things could still occur with an anonymized Philosophy Archive (would people just be expected to cite article titles, not knowing who the author is?). But anyway, I'm not sure.

I am also curious about what you said here: "[Our] current system also allows authors some control: if an author is happy to opt out of anonymous review, they can post widely; if not, they can play it more cautiously. Some people have better reasons than others to prefer anonymity."

As I explained in my responses to Skef, this is actually a big part of what I think is problematic about our current system. People who are well-placed in the profession can post article drafts publicly--as there is little risk (and plausibly benefits) for them to have people to know who they are (not just the possibility of a referee failing to recuse themselves, but also opportunities to get public feedback, establish "priority" with respect to a given set of ideas, etc.). On the reverse, people who are in more vulnerable positions (i.e. job-marketeers, untenured faculty, etc.) are incentivized to keep their work a secret.

On this note, now that I have tenure I've considered possibly posting paper drafts online (indeed, here on the Cocoon) for many of these reasons: I could use feedback, would love to get new work out quickly, etc. But this is precisely my worry: that I would be using my tenured status as an advantage that other people don't have--and which there is far less of an issue with in the math/physics model, where everyone is incentivized to follow the same norms, citation practices, etc.

Remco Heesen

A few quick follow-up comments:

1. The proposal Liam and I defend in the paper I advertised above is exactly an ArXiv model combined with peer review. So while we *call* our proposal "abolishing peer review" that's just a terminological difference. (Contrary to what you say, Marcus, it's not clear to us whether it's crucial to keep peer review once we have moved to the ArXiv model, but that's because we think there is currently no evidence that could settle this question.)

2. Speaking of evidence, it would be great to have more systematic evidence that the system as used in math and physics works well. Your examples of blogs discussing ArXiv articles are suggestive, but they are consistent with most articles receiving no attention. This is not to criticize the proposal, which as said I agree with, but it would be great to have more systematic data on engagement.

3. I don't understand your worry about putting your own preprints online. You say "What if reviewers/editors think I am violating norms for preserving anonymized review?" but I put all my papers on my website and/or a preprint server before submitting them, and I have never had a reviewer or editor indicate a problem with this. Do you know of any cases where this has happened?

Marcus Arvan

Remco: Thanks for the follow up.

1. Cool! Now I *really* can't wait to read the paper. ;)

2. I agree. It would be great to have systematic data.

3. I don't know of cases where this has happened. However, I tend to be risk-averse when it comes to stuff like that. It's good to hear that you haven't had any problems with reviewers or editors there. While my worry is partly about whether uploading drafts to a preprint archive might tick off reviewers or editors, my bigger concern is with doing so in a way that openly promotes discussion of paper drafts (e.g. by posting links to new drafts here). While blogging about new papers is fairly common in physics, I worry that openly posting papers on a well-trafficked blog might rub people the wrong way. But, if I were to post papers online, that would be the way I would want to do it, as it would probably be a far better way of getting what I could really use (feedback) than simply posting something to, say, philpapers.

recent grad

I'm hesitant to move to the model you propose, for two reasons:

1) The criteria for a good paper in science and math are much clearer (and agreed upon) than in philosophy.

2) Prestige bias would be even more prevalent than in the current system. In the current system, not all prestigious authors' identities are known, even though some are. In addition, most unprestigious authors' identities are *not* known. One way to combat prestige bias would be to allow for anonymous or initially anonymous posting of papers, but I just don't trust that enough philosophers would read anonymous work that has had literally no known hurdles to jump.

FWIW, I'm more a fan of a collective awards/penalties system. Some examples:

-Create a default waiting period after submission (say, two weeks) during which a paper is not looked at by anyone. Then use an algorithm for referee ratings and let those with good ratings jump the queue, and add waiting time for those with bad ratings (say, another two weeks). Those with no referee experience would remain at two weeks.

-If an author posts their paper to their website during the review period, then either the paper is returned without review or they must wait another year before they can submit work to the journal.

These are just examples of ideas, but I think implementing them would be much less difficult than implementing the science model.

Marcus Arvan

recent grad: Thanks for weighing in. I really like your idea about a referee rating system and how that might work. However, as noted above, I'm skeptical about both (1) and (2).

I often hear philosophers say (1): that standards for a good paper are "clearer" in math and science. What I don't often see is any clear evidence given for this claim.

I read physics papers and blogs, and see that physicists routinely disagree over basic methods (not to mention the experimental and statistical complexities that underlie physics experiments, such as background modeling--which is a very complex and fraught affair necessary for taking accurate measurements). I also read the history of science. Many groundbreaking science papers were initially considered rubbish--in large part because scientists disagreed about the relevant methods. I'm also married to someone who works in empirical psychology. I've read her papers' peer-review reports, seen how much reviewers disagree over methodological issues, and discussed methodological controversies with her--as she often tells me "people have fundamentally different views over X as a statistical method or experimental set-up", and so on. I also hear that things are similar in math--that, oftentimes, specialists disagree quite a bit over *whether* a new proof is sound, in large part because new proofs can use new and unexpected methods.

My own personal sense is that (1) is something of a philosophers' fantasy. We tend to think paper evaluations are "clearer" in other fields...because we don't actually work in those fields or appreciate their complexities. If we did, we would appreciate (or so my experience has been) that things aren't that different in other fields.

As to (2), that's quite an assertion! Prestige bias *would* be greater in an ArXiv system? What basis is there for asserting that so confidently? It may of course *seem* that way from the armchair. But why should we trust armchair speculation? I've given armchair arguments to the contrary--for why an ArXiv model would plausibly *reduce* prestige bias by putting philosophy in public domain where many people (such as myself) could draw attention to non-prestigious authors whose work we admire. I've also noted how the kinds of warnings raised about prestige bias don't seem to have transpired in areas where the ArXiv model has been tried--as in math and physics, where the work of low-prestige scholars and even outsiders appears to be often discussed.

I think it is entirely fair to raise *concerns* about prestige bias. But I really do think philosophers should be wary of making confident pronouncements about things like (1) and (2).

recent grad

Hi Marcus,

Thanks for your reply. I take your point re: 1, though I think one potential difference is that recognizing good philosophy, for many, fits into the general category of "I know it when I see it". Whereas for normal science, one can lay out ahead of time specific criteria for what qualifies as a good paper.

You're right that my prediction is speculative. I don't think it's baseless, however. Prestige bias works only if identities are known. By moving from peer review to the ArXiv model, we would be moving from a model where the identities of some prestigious people are known and the identities of few unprestigious people are known, to a model in which all identities are known. There would of course be exceptions--I myself already seek out the drafts of two professors who both work at "directional state" universities. But exceptions are consistent with a general trend. Still, you're right that it's speculation.

Marcus Arvan

recent grad: Thanks for clarifying. I still have doubts.

You write: "I think one potential difference is that recognizing good philosophy, for many, fits into the general category of "I know it when I see it". Whereas for normal science, one can lay out ahead of time specific criteria for what qualifies as a good paper."

I think this is another philosophers' fantasy. My sense--both from reading physics and talking with my spouse--is that, no, science does not involve laying out ahead of time specific criteria for what qualifies as a good paper, at least not any more than in philosophy. In my spouse's field, each paper's methodological and statistical set-ups and assumptions are unique--and it is often *not* clear (at all) whether the assumptions made are appropriate. Different reviewers often give very different evaluations of which methods should be used and why--just as in philosophy. Further, each psychologist has their own complex views about experimental set-ups, statistical analyses, etc.--and there are often deep and persisting disagreements, where authors argue in their papers for adopting one approach over another, and different reviewers advocate for very different approaches.

All of this, it seems to me, is exactly what we do with philosophy papers. We all have our own assumptions about which methodologies are good, authors try to convince us they are using good methodologies (which may not be the ones we favor), and so on. And my sense is that this is broadly what happens in physics too. Go to a physics blog and visit the comments section. Or read papers in theoretical particle physics. Or read the history of science (https://www.amazon.com/Great-Physicists-Leading-Galileo-Hawking/dp/0195173244 ) There is almost always vast disagreement about methods and what constitutes a "good" paper in scientific fields.

You write: "Prestige bias works only if identities are known."

I have expressed several lines of skepticism about this above--arguing, to the contrary, that prestige bias is plausibly augmented by "anonymized review" relative to the ArXiv model.

First, journal referees--particularly referees at top journals--are often well-established, prestigious people in the field. This means that the main decisionmakers of whether your paper will be published are likely to be individuals with *prestige*, with all of the institutional and/or psychological incentives that accrue to their positions. To take one example, here's a true story: a number of years ago, I had a referee leave their name in the "properties" tab of their PDF review of one of my papers. They advocated rejecting my paper. Why? Their main rationale was that my project was misguided. Why? Their sole justification was to cite their own previously published work arguing for a different view (work that I was arguing was mistaken in my own article). No joke. Now, this is obviously just one case--but it shows how a form of prestige bias can infiltrate "anonymized review." Insofar as "anonymized review" concentrates decisionmaking in the hands of a very small number of people--often, though not always, from prestigious backgrounds--the process plausibly becomes biased toward the philosophical preferences and perspectives of that small subset of individuals: individuals whose judgments could in subtle ways be influenced by their position of prestige.

Second, as I argued above, "anonymized review" plausibly augments prestige bias due to the differential incentives it gives rise to. Take a look around you. What do well-placed people in the profession do under our system of "anonymized review"? They share papers privately, announce paper drafts on facebook, invite each other to special conferences and colloquia, etc. This not only gives rise to the possibility that peer-review may be compromised (in ways that might advantage prestigious authors); it also gives prestigious, well-placed people far better opportunities for public feedback to improve their work, not to mention the opportunity to establish "priority" for new philosophical arguments, views, etc.

None of this is to impute nefarious or even problematic motives to people or question their integrity. It is simply to point out that "anonymized review" plausibly cannot control or prevent many unintended forms of prestige-bias, whereas (or so I have argued) all things considered a totally open, public system of peer-review might counteract prestige bias better.

Skef

To expand a bit on what I was alluding to earlier, I wouldn't claim that the main/important difference between math and physics and philosophy is an agreement on standards. Instead I would claim that the time and effort it takes for an individual mathematician or physicist to arrive at their *own view* about the overall quality of a *typical* paper is much lower than it is in philosophy.

I read some psychology papers for research purposes and would say the same about those, for different reasons: most psych papers follow a sort of prototype so that you can read the abstract and front-matter and then you can concentrate on specific areas of interest. If you're worried about that statistical approach, for example, it is usually easy to find the relevant section quickly, and the discussion tends to be easily comparable to other papers.

Philosophy papers aren't generally like this. You often have to read the entirety of one, sometimes multiple times (perhaps skipping the conclusion), to form an opinion.

I'm frankly surprised that this is controversial. All of the people who routinely work with both philosophy and science papers that I've talked to about this (or who have given advice on the subject in classroom contexts) have noted these differences.

Marcus Arvan

Skef: I think science papers *seem* easy to evaluate, but that it is in many cases an illusion generated by the fact that we have little or no background in the complicated methodological and statistical questions and controversies in the relevant fields (which are often *very* complex and nuanced). I think it is all too easy for a philosopher or layperson to read a psych paper and think “Oh, that makes sense”, when in reality there are all kinds of serious problems.

This recently happened to me. I was asked to review a paper for a journal on the border of X-phi and psychology. I accepted because I felt well qualified enough given my background. I thought the paper was pretty good, and only had a few minor methodological quibbles and some more serious philosophical ones. Anyway, after sending in my review, the journal forwarded the author and referees all of the other referees’ reviews. The other two reviewers gave the author (and me implicitly) a total beat down, raising a lot of really complex (and apparently fatal) methodological and statistical issues.

It just goes to show: when you don’t really have broad and deep training in a given area, you can fool yourself into thinking you understand stuff that you really don’t. I think this happens far more than people realize. Finally, it’s also worth bearing in mind that the psych papers you probably read are ones that actually made it through peer review (and likely an R&R or two), so a lot of complex issues probably got straightened out there too.

As for the time and ease of evaluating papers, all I can say is that my experience is different. Many philosophy papers today have roughly the same structure. They usually involve a few capitalized theses, a vignette or two, or maybe a thought experiment, clarifying theses, and so on. In most cases, one can form a first judgment relatively quickly. Then you need to go back and read more carefully. I think exactly the same is true in the sciences, particularly if you as a reader have sufficient background to appreciate all of the complexities that laypeople or outsiders are likely to miss.

Skef

"I think science papers *seem* easy to evaluate, but that it is in many cases an illusion generated by the fact that we have little or no background in the complicated methodological and statistical questions and controversies in the relevant fields (which are often *very* complex and nuanced). I think it is all too easy for a philosopher or layperson to read a psych paper and think “Oh, that makes sense”, when in reality there are all kinds of serious problems."

The general impression of my friend who recently got a doctorate in psychology is that all of that is more a matter of individual hang-ups than careful evaluation. There's very little of that in practice, and there was even less before the replication crisis became a thing.

What exactly am I assuming that you have contrary data about? I'm not saying that the papers are being evaluated without being read. I'm claiming that an individual psychology paper reviewer can read the methods section (for example) and arrive at their view on it relatively quickly. (Or they can happen to not be aware of those issues, as in your case, which is fine for some reviewers.)

Since you keep bringing up your wife, it's not like I'm only peering over the fence at this stuff. I have an MS in computer science (I even have an unread publication in the field!). I went to Caltech. So forth, so on.

recent grad

Hi Marcus,

I'm having some difficulty understanding your recent response to Skef. I don't think anyone is denying that science is complex and difficult, so the fact that you overlooked the complexities involved in the paper you reviewed seems orthogonal. Furthermore, the fact that both of the other referees took the author to task in the way you describe seems to support the point that you've been pushing back against: that there are relatively clear methodological criteria in science which make the ArXiv model ideally suited for science and not philosophy. After all, it seems the paper failed to meet those criteria and the referees noticed.

Marcus Arvan

Skef: I'm not sure we're going to be able to settle these matters one way or the other.

My general experience--though I could be wrong--is that things just really aren't that different in philosophy. When I read philosophy papers, they generally seem to me to have an overall structure (e.g. a thesis, a vignette, clarifying theses, etc.) at least as simple as that of science papers, and to be at least as easy to evaluate (especially since in philosophy we mostly don't have to worry about things like data analyses or complex statistics). Normally, when I read a philosophy paper, I'm able to form an initial opinion *really* quickly--and then I go back and read it carefully. I suspect this is true of most philosophers who have been around the block--and my experience is that evaluating papers in the sciences is similar. Anyway, we are mostly just trading differences in anecdotes at this point. I just wanted to push back at an assumption I often see, which I think lacks good evidence. I did not mean to prove the converse.

recent grad: The illustration I gave there was intended to be orthogonal to my main point. The example was merely supposed to show how we, as outsiders, can underestimate complexities that insiders appreciate, giving us a false sense of confidence about the differences between different fields. The fact that both reviewers agreed with each other in that one case is no reason to doubt my general point: that philosophers may tend to overestimate the differences between professions. After all, I've had *philosophy* papers where basically all of the other referees agreed with my assessment! That was the only point I was trying to use the example to illustrate. My more general point--that standards seem to me hardly more clear in the sciences than in philosophy--is a different one: one based in part on my reading of science history, reading my spouse's referee reports, and so on.
