In my past two entries in this series--as well as in other past discussions of peer-review--I have tended to focus on how editorial practices and reviewer behavior might be improved: how to ensure quicker and more reliable turnaround times, as well as better reviewer comments. However, it increasingly appears to me that this focuses on only one part of the problem.
As David Velleman noted here--and others have noted as well (see below)--another central reason why peer-review has the problems it does is the behavior of authors. Journals are overrun with far too many submissions, many of which seem--to editors and reviewers--as though they should never have been submitted in the first place. This bogs down the entire system, not only making reviewers more difficult to find but also increasing their workload (as they have more papers to review). This in turn may be one significant cause of 'poor reviewer behavior', both in terms of long turnaround times and in the quality of reports. Indeed, there seems to be significant anecdotal evidence for this.
First, respondents to our poll on peer-review generally agreed with the idea that too many half-baked papers are sent out for review:
Michael Magoulias said...
A viewpoint from a publisher: This is ultimately a practical matter that reflects several factors: the number of submissions a journal gets, the size and frequency of its issues, and most importantly, the size of its editorial team. One major philosophy journal I worked with in the past had a single editor who was tasked with accepting roughly 25-30 papers a year for a quarterly journal. The typical volume of submissions in a given year was 500. It's simply not possible for one person to write substantive comments on that number of papers. The obvious solution is to expand the editorial team, but that is not always as easy as it sounds, especially if one is dealing with a journal that is not affiliated with a society, and is therefore lacking a pool of specialist talent already committed to the organization sponsoring the publication.
Another commenter wrote:
I have received about 30 papers to referee in my life, out of which only about 15 were, at least in my view, worthy of submission (I recommended 5). The rest were just not ready for submission to any professional journal and should have been weeded out early on by the editors (though I understand that that's harder than it sounds). Providing comments on those would take forever--there were too many things wrong or lacking. Journals are not there to teach authors how to work in their field, even if publishing is a learning experience.
This broadly coheres with my own experience, both as an author and as a reviewer. As an author, I have to admit (with a genuine sense of regret and embarrassment) that there have been multiple occasions on which I submitted papers that, in retrospect, were not ready to be sent out. On the other hand, as a reviewer I have encountered a fair number of similar papers: papers that, in my judgment, were nowhere near ready to be placed under review.
Now, of course, as someone who was once a desperate job-candidate (I spent seven years on the market), and then someone who had to publish to get tenure, there is an obvious sense in which I understand why this happens. Job-candidates and tenure-track faculty exist in a "publish or perish" culture and incentive structure that makes it appear rational to them to submit as many papers as they possibly can--something which, en masse, leads to journals (and reviewers) being overwhelmed with too many unready submissions.
If we recognize that this is a significant problem, the question remains what might be done about it. One possibility, which Velleman suggested, is that "philosophy journals should adopt a policy of refusing to publish work by graduate students." However, like many who responded to Velleman's suggestion, I worry that this is a proverbial case of the cure being worse than the disease. Publishing is, in a very real (albeit imperfect) sense, "the great equalizer": it enables graduate students from lower-ranked and unranked programs to demonstrate their philosophical abilities so that they can better compete on the market with people from more highly-ranked programs. Prohibiting graduate students from publishing would be unfair, giving graduate students at highly-ranked programs an even greater leg-up than they might already have.
An alternative possibility might be to disincentivize authors from submitting papers not fit for review. How might this be done? Here, I think, is an intriguing possibility. In my last two posts, I developed a proposal whereby reviewers might get an Overall Reviewer Score based on a number of factors: how quickly they respond to review requests, how many requests they accept, how quickly they get their reviews in, and the quality of their review reports (which might be evaluated by editor and author surveys). My proposal was that this might incentivize better reviewer behavior if a person's Reviewer Score affected the kinds of reviewers they are paired with as authors (viz. good reviewers having their submitted papers evaluated by good reviewers, and bad reviewers having their papers reviewed by bad reviewers). Notice, however, that this is only one side of things. What if, in addition, authors were assigned an Overall Author Score based on reviewer and editor surveys of whether their papers were minimally fit to be placed under review? If one's Author Score were used, along with one's Reviewer Score, to match one's newly submitted papers with reviewers, then being a bad author would be disincentivized as well. If authors want their papers to be reviewed quickly and by good reviewers (as most of us presumably do, including job-candidates and tenure-candidates), then in this system one would have to be both a good author and a good reviewer.
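To make the mechanics concrete, here is a minimal sketch in Python of how a submission system might compute such scores and use them to match papers with reviewers. Everything in it--the equal weights, the 0-1 scales, the names--is a hypothetical illustration of my own, not a worked-out design:

```python
# Hypothetical sketch only: all weights, scales, and names are assumptions.
from dataclasses import dataclass

@dataclass
class ReviewerRecord:
    response_speed: float   # 0-1: how promptly they answer review requests
    acceptance_rate: float  # 0-1: share of review requests accepted
    turnaround: float       # 0-1: how quickly accepted reviews come in
    report_quality: float   # 0-1: averaged from editor and author surveys

def reviewer_score(r: ReviewerRecord) -> float:
    """Overall Reviewer Score: a simple (assumed) equal-weight average."""
    return (r.response_speed + r.acceptance_rate +
            r.turnaround + r.report_quality) / 4

def author_score(readiness_ratings: list) -> float:
    """Overall Author Score: mean of 0-1 reviewer/editor survey ratings of
    whether the author's past submissions were minimally fit for review."""
    if not readiness_ratings:
        return 0.5  # assumed neutral default for first-time authors
    return sum(readiness_ratings) / len(readiness_ratings)

def matching_priority(author_s: float, reviewer_s: float) -> float:
    """Combined score a journal might use when pairing a new submission
    with referees: people who are good authors *and* good reviewers get
    routed to the best-rated available referees."""
    return (author_s + reviewer_s) / 2
```

On this sketch, a journal would sort its available referees by their Reviewer Scores and assign the best of them to the submissions with the highest combined priority--which is just the matching idea described above, made explicit.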
Might a system like this disincentivize authors from submitting too many plainly-unready papers for review--thus lessening the "Publication Emergency" Velleman discusses, easing things up for reviewers and editors, and in turn helping them do a better, more efficient job? I'm not sure--but given how dissatisfied everyone seems to be with our current system (authors, editors, and reviewers alike), I think something like it may be worth considering!
Hi Marcus,
Thanks for another thoughtful post. I don't mean to derail things, but as much as I don't like Velleman's proposal, I wonder whether the often-cited response to it is pressured by your own view of the market.
You write:
"Publishing is, in a very real (albeit imperfect) sense, "the great equalizer": it enables graduate students from lower-ranked and unranked programs to demonstrate their philosophical abilities so that they can better compete on the market with people from more highly-ranked programs. Prohibiting graduate students from publishing would be unfair, giving graduate students at highly-ranked programs an even greater leg-up than they might already have."
But isn't this a view of the market that you have argued against, suggesting even that the belief in the power of writing to equalize things ends up hurting candidates who cannot compete for teaching or research positions?
There are other reasons to resist Velleman's suggestion (or so many of us think), but I was curious how forceful you took the above concern to be given your view of market dynamics.
Posted by: Peter | 01/08/2019 at 01:04 PM
Hi Peter: Thanks for your comment! You're quite right--in fact, I had similar thoughts while composing the post. So let me clarify.
As I have noted in a number of previous posts, it appears to be very, very difficult for candidates from lower-ranked and unranked programs to publish their way into R1 jobs. So, one might wonder, is it really true that grad students publishing is "the great equalizer"? Here are two reasons to think that it still is.
First, although it appears to be very, very difficult to publish one's way into R1 jobs (from a low-ranked program), it is not impossible! The data suggest it sometimes happens, just pretty rarely. Here, then, is what I think the problem is with Velleman's proposal: if it were implemented, grad students from low- and unranked programs would be at an even GREATER disadvantage for R1 jobs than they already are (since they wouldn't even be able to prove themselves through publishing). In other words, when it comes to R1 jobs, publishing in journals may not be "the great equalizer." However, it is at least SOMETHING one can do to try to compete (in rare cases, successfully) for an R1 job. Under Velleman's proposal, even that possibility would be summarily taken off the table.
Second, there is the separate issue of publishing one's way into non-R1/teaching-focused jobs--the kinds of jobs that candidates from lower- and unranked programs appear more competitive for. Is publishing "the great equalizer" there? In one sense, it actually appears to be so. A few years ago, I collected informal data on new hires--and it turned out that one of the best predictors of being hired into any job at all (R1 or teaching school) was total number of publications...not the number of publications in high-ranking journals, but the total number of publications simpliciter. Having served on search committees, this makes sense to me. If one has to choose between two great teachers, one of whom has more total publications than the other, the one with more publications is (at a school like mine) more likely to get tenure and promotion...which is precisely what search committees at schools like mine are looking for (someone who will succeed).
In sum, publishing may not be "the great equalizer." Still, for many different kinds of jobs, it appears to be SOMETHING of an "equalizer"--and so a good peer-review system should incentivize authors to submit papers that are good work worthy of publication and disincentivize them from submitting papers that waste everyone's time (including the author's!) by clogging up the system.
Does this make sense?
Posted by: Marcus Arvan | 01/08/2019 at 01:31 PM
"But isn't this a view of the market that you have argued against, suggesting even that the belief in the power of writing to equalize things ends up hurting candidates who cannot compete for teaching or research positions?"
And this is to view the job market in the _states_ as the only job market. A collection of good publications is your ticket to a good job everywhere else in the world.
Posted by: Gene | 01/08/2019 at 02:10 PM
Hi Gene,
I have heard that the conventional wisdom (that publishing is the great equalizer) is true in markets outside of the US (or perhaps outside of the US and Canada). I am curious: has anyone done any analysis of other markets to collect data about this?
Posted by: Peter Furlong | 01/08/2019 at 03:03 PM
Hi Marcus,
Thanks for your response. I am still curious about it, though. You say that publications matter in hiring even at schools like yours, and I take it that this is meant to show that it still acts as an equalizer in this way. But I wonder, would schools like yours need publications to equalize candidates with different pedigrees? Do Leiterific candidates already have a leg up at schools like yours, or are they (all things being equal) at a disadvantage because they are seen as a flight risk? Here is why this matters: it might turn out that publications do not help equalize students with different pedigrees even at teaching schools; instead, it might be that at such schools, the quantity of publications helps some candidates outshine others of the same pedigree. If so, then once again publications are not equalizers.
This is a genuine question (I don't mean to be needlessly argumentative), and even if it turns out that publications are not quite equalizers in this way, it is still super helpful for people to learn about what effect they do have in hiring practices at different schools.
Moreover, this still leaves your point that publications can sometimes act as an equalizer for rare candidates with low pedigree who find positions in R1 schools. This is a legitimate point. Just this morning, in fact, I was reading an article by an excellent early career philosopher who graduated from an unranked school and now has a position at a top 20 program. I assume that this sort of thing wouldn't happen much without publications in grad school.
Posted by: Peter Furlong | 01/08/2019 at 03:18 PM
Hi Peter: thanks for the follow-up. I guess I’m thinking of the notion of an “equalizer” differently in the two cases.
In the case of seeking R1 jobs, my sense is that top-flight publications may be the best (only?) way for candidates from lower or unranked programs to compete with graduates from top-ranked programs (the data, as well as the anecdote you report, seem to me to suggest this). Top-ranked publications may not be *much* of an “equalizer” for these jobs, but they are at least something—and Velleman’s proposal would rule even that out as a possibility.
When it comes to jobs at teaching schools, I am thinking of publications as an “equalizer” differently: as more of an individual-level equalizer that enables an individual candidate to show that they are just a bit more accomplished than any other similarly competitive candidate. Here’s what I mean.
To speculate, I suspect that graduates from top-ranked programs may indeed be at a disadvantage for jobs at (some) teaching schools--because they may appear to committees to be a “bad fit” for the type of institution. Consequently, the most competitive candidates for teaching jobs should probably come from lower or unranked schools. And what do you know? The APDA report suggests this is indeed the case: that some unranked programs have the highest overall placement rates. Now suppose, however, that a search committee is choosing between two candidates from those programs—both of whom look to be good teachers and good fits for the institution, but one of whom has a higher quantity of publications. My sense is that, all things being equal (though they are often not), a committee at a school like mine will tend to favor the person with more publications. It is this sense in which I think that publications are “equalizers” for jobs at teaching schools: they give otherwise similar candidates an opportunity to show that they are just a bit more accomplished than the next person—enabling them to stand out for the job in a crowded field.
Does this make sense? When I talk about “publications as equalizers” for R1 jobs, I mean it in one sense: the sense of publications giving grads from lower-ranked programs a (mildly) more equal opportunity to compete with grads from high-ranked programs. When I talk about “publications as equalizers” for teaching jobs, I mean it in a different sense: as affording otherwise similar candidates (including candidates from similarly ranked programs) a more equal opportunity to stand out as the most accomplished for the job. Does this way of clarifying things help?
Posted by: Marcus Arvan | 01/08/2019 at 04:27 PM
It seems to me that incentive structures like this would only work if the authors submitting "half-baked" papers are well aware that they are submitting "half-baked" papers. But do we have good reason to think that those authors are aware of what they're doing?
For example, in your original post, Marcus, you say that with hindsight you think that you have submitted papers that weren't ready (and I think lots of people could say something similar, including me). But that's not the same as in-the-moment awareness of the unreadiness of the paper.
Posted by: Anon | 01/08/2019 at 10:15 PM
I think there already are incentives to submit only material which is ready for publication. My own experience has been that since I started making a real effort to polish and work on papers longer, and primarily to produce material which seemed really worth publishing (i.e. not just epicycles to debates etc.), my papers have been desk-rejected less at top 5 journals, and I have been getting far more interviews for permanent positions. This is despite the fact that the journals where I have ended up publishing have often been the same ones that published some of my earlier (in my opinion weaker) papers. The journals you publish in do matter to hiring committees. But, in my experience, the quality of the work you publish often seems to matter just as much. Having a bunch of papers which have slipped by peer review is not that helpful on the job market.
I think the author score suggestion is not a good one (although I like the reviewer equivalent): much of the time, people don't realize they are submitting bad material. And sometimes they are doing so at pretty low and desperate points in their lives, when they are not thinking clearly and are not capable of properly assessing the quality of their own work. A poor author score potentially makes it even harder for somebody who may be a genuinely talented philosopher (in better circumstances) to redeem themselves.
Posted by: Andy | 01/09/2019 at 05:11 AM
Marcus,
That makes sense. I guess I just wasn't seeing the second sense as a sort of equalizing. Whatever we call it, it is a good thing, though.
Your positive proposal is an interesting one. Here is a (mild) worry: might ambitious papers, or ones defending unpopular ideas, be more likely to be thought unready for publication, and so lead to bad author scores? In other words, do you think the sort of structure you suggest might provide further incentive for playing it safe in what and how we write?
Posted by: Peter Furlong | 01/09/2019 at 07:34 AM
On this proposal, would someone’s author score be publicly viewable?
If so, this might be a mistake, since it would become a quick and somewhat unreliable way of assessing an author’s research ability. I say unreliable, since it would track an author’s savvy at judging their papers as minimally publishable (which could be significantly influenced by pedigree and network), rather than the quality of an author’s best work.
In other words, it could maybe become the research equivalent of teaching evaluations, something most people know to be fairly unreliable, but still turn to since it is quick and easy.
Posted by: Asst Prof | 01/09/2019 at 08:08 AM
Asst Prof: No, the way I am thinking of it, Author and Reviewer scores would be private.
More specifically, the scores would only be viewable by the Author/Reviewer themselves, as well as by the managing editor tasked with commissioning reviews--and only then after the paper has passed an area editor's desk-rejection stage (so that Author Scores could not unfairly bias desk-rejections). I would also think that Author Scores should be withheld from decision-making editors later in the process (when reviewer reports are received), so that an author's past score could not influence the final editorial decision.
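To illustrate those rules in a more mechanical form, here is a rough sketch (the role and stage names are my own hypothetical labels, not any actual system's):

```python
# Hypothetical access-control sketch; role/stage names are illustrative only.
def can_view_score(role: str, stage: str) -> bool:
    """Scores are visible only to the scored person themselves, and to the
    managing editor commissioning reviews once desk rejection has passed."""
    if role == "scored_person":
        return True
    if role == "managing_editor" and stage == "commissioning_reviews":
        return True
    # Area editors at the desk-rejection stage and decision-making editors
    # at the final-decision stage never see Author or Reviewer Scores.
    return False
```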
Posted by: Marcus Arvan | 01/09/2019 at 08:41 AM
Andy: I appreciate your concerns. But let me push back just a bit.
You say you think there are already incentives not to submit work that is unfit for review. However, the only incentive you mention is your having learned, in your own case, that polishing papers leads to better results. The problem is that many people might not learn that lesson--or, if they do, only after many years of submitting papers that are unready. And indeed, I suspect this is what often happens. In fact, I am one example.
Earlier in my career, I used to send out material that no one but *me* had ever read. Why? Because I needed publications, and I figured writing up and sending stuff out quickly was the best bet, given my resources. Although it ended up working out in the end (I published a lot and got a permanent job), I also think it probably wasted a lot of my time, as well as the time of editors and reviewers who had to read work that was unready.
On that note, your second main point is that many people--particularly desperate people who may not be thinking clearly--may not recognize that they are sending in work that is unready. Indeed, this was the position I was once in. But notice: the primary reason that I did not know I was sending in unready work was that I did not *bother* to have other people read and give me feedback on it before sending it in. The point here is simple: if you don't incentivize people to take active steps to solicit *other* people's judgments of whether work is ready (e.g. mentors, outside readers, etc.), then you will not incentivize behaviors on the part of authors that will place them in a better position to reliably judge whether their work *is* ready. Conversely, if you do incentivize authors to be more careful--by giving them an Author Score--then chances are they will take more steps to find out (by sending their work to others) whether their work is fit for review.
Which brings me to my final point: your point about desperation, and how an Author Score might make the lives of vulnerable people in the profession all the more precarious (if they receive bad Author Scores). Notice that this concern only follows if people who fit that description *receive* bad Author Scores. However, the entire point of the proposal is to incentivize them to do the opposite: to take greater steps earlier in their careers to ensure that they submit work that is ready, and hence receive good Author Scores. Finally, as your own anecdote attests, this may actually be best all around for people like that. In seeking to get good Author Scores, such authors may become more liable to submit work that is ready--work that in turn is more likely to get published in good venues (as it has in your case!).
All of this would be good for authors in precarious positions, not bad: it would save them a lot of time and heartache (from many months of otherwise-avoidable rejections, as well as mean-spirited referee comments), and likely lead them to publish more successfully. Or so it seems to me! (Indeed, in retrospect, *I* probably would have been better off this way, as it might have saved me a lot of time and improved my work by giving me greater incentive to get feedback prior to submitting things.)
Posted by: Marcus Arvan | 01/09/2019 at 08:54 AM
What if authors who get desk-rejected were barred from submitting anything else to that journal for a set period of time?
I suspect that people would be more wary of sending something possibly half-baked to a journal if desk-rejection also closed off that journal as a publication venue for other works in progress.
I would understand being wary of the power that this would give journal editors, but in my opinion journal editors already wield substantial power over the profession, such that giving them this power in addition wouldn't change much. Journal editors might even become more cautious with desk rejections, if they were aware that desk rejection carried with it an additional penalty to the author.
Posted by: anon | 01/09/2019 at 10:43 AM
"The point here is simple: if you don't incentivize people taking active steps to solicit *other* people's judgments of whether work is ready (e.g. mentors, outside readers, etc.), then you will not incentivize behaviors on the part of authors that will place them in a better position to reliably judge whether their work *is* ready. Conversely, if you do incentivize authors being more careful--by them having an Author score--then chances are they will take more steps to find out (by sending their work to others) whether their work is fit for review."
I take issue with the idea that authors would be required to internally peer review their work before sending it out, for otherwise they are at risk of a poor author score and effectively being punished for it. Our system already drastically favors those with prestige and connections. This proposal makes the problem even worse by effectively punishing people for not having the connections to internally review their work. I for one know very few people who would be willing to provide worthwhile feedback in a timely manner. I bet this is true for many people. We rely on the peer review process for feedback.
My solution to this problem is simple. Editors should read all the submissions they receive and desk-reject far more of them. I know many general philosophy journals desk-reject almost nothing, sending out papers for review that are of poor quality. It wouldn't be that hard for an editor to read a paper and determine whether it's well enough referenced and written to be ready for peer review. Maybe non-experts in the subject of the submitted paper would have a particularly hard time determining its merit, but it's not that hard to get a feel for whether it meets a high enough standard for review: does it have proper sections, paragraphs that aren't pages long, sentences that are readable, a good number of references, and original arguments and material that seems to make sense/be comprehensible?
Posted by: pendaran | 01/09/2019 at 11:11 AM
Marcus:
On the first point I think we are mainly in agreement. I had meant to say in my first post that although there is an incentive, too few seem to really know about it. I guess there is a semantic point about whether or not something can count as an incentive if it is not known, but that is not really important. The fact is that if more people knew this, then they would (or should) be motivated to be more responsible.
On the second point, I share Pendaran's concerns. Many do not have the requisite networks, and for others who do have networks of people willing to read papers, the people giving feedback are often in a similar predicament (i.e. others still working out how to write good publishable papers).
But I would add to his comments that although peer commentary can be useful in many ways, it is not always the best gauge of whether a paper is publishable. People tend to raise substantive philosophical points, potential new avenues of interest, requests for clarification, etc. But in my experience, it is rare for a colleague to say "this paper has potential but right now it is far from being publishable". There tends to be an assumption that if the paper is pretty rough, then the author will know that. Moreover, people are often just too nice in their feedback. It is easy to see why this would be the case, especially with early-career people on the job market. Nobody wants to further crush somebody who is going through what most of us on the job market are going through. Your system involves people being further punished for wrongs they are not knowingly committing. It digs them into a hole that, it seems to me, would be pretty hard to dig themselves out of.
The way I see it, authors are already punished for submitting sub-par material: they get strings of desk rejections which take forever (or they wait six months for irate and unhelpful reviewer comments) and thus have huge amounts of crucial time wasted. There are, as I think we agree, good reasons for people to polish material more before sending it out. The problem, as I see it, is that too few people are aware of these potential incentives.
Posted by: Andy | 01/09/2019 at 02:23 PM
Hi pendaran: your concern seems to me the best one to have. In fact, I faced the reality you mention.
Early in my career (when I first got a non-TT job at my present university), I had little to no access to feedback. I worked in a three-person department of very busy people, none of whom worked in my areas. I also didn't have connections outside my university, for a variety of reasons. Anyway, this *did* put me in a difficult position--one where I felt forced to simply send papers to journals without much (or any) outside feedback.
For these reasons, I appreciate your concern: that an Author Score might put already disadvantaged authors (isolated early-career scholars) at an even greater disadvantage. However, let me explain why I'm not yet convinced.
One thing that 'social engineering' (that is, intentionally changing social practices) does is change *incentives*. And different incentives tend to lead to different behaviors. Bearing this in mind, consider what might happen if the Author Score system were put in place. Everyone--particularly grad students and other early-career people--would be incentivized to want a higher Author Score. They would also know that in order to do that, they probably need to get substantial feedback on their work before submitting.
What do you think this would lead to? Here's what I suspect it would lead to: a greater emphasis--both in graduate programs *and* among early-career philosophers--on developing "peer-feedback groups." In other words, it would incentivize people to do exactly what I DIDN'T do as a grad student or early-career scholar: get to know more people I could trade papers with, and so on.
In brief, I am optimistic that if the Author Score system I am proposing were set up, the incentives it puts in place would motivate people to solve the very problem you're presenting: developing better peer-networks, both within and outside of their grad programs. Indeed, I suspect websites and online groups might even get set up by people looking to expand their "feedback groups."
Finally, I suspect that the system of incentives would not only solve the problem you're mentioning: it might even have incredibly positive consequences, mitigating the disadvantages people have who are not in fancy grad programs. How? By leading people who are currently isolated in the profession to develop GROUPS to help each other!
And in fact something like this is what led me to start this blog, for instance. I was sick of being isolated in the discipline, and hoped to get help and feedback--so I started a blog. I think the Author Score system would incentivize early-career people to do more to create connections that give them better opportunities in the profession. And I think that would be a good thing, a real equalizer to some extent.
Posted by: Marcus Arvan | 01/09/2019 at 02:30 PM
I want to register a minor worry.
Pendaran describes minimally publishable papers as follows: "does it have proper sections, paragraphs that aren't pages long, sentences that are readable, a good number of references, and original arguments and material that seems to make sense/be comprehensible?"
This is one sensible (and very minimal) standard, but there are others (which are slightly more demanding): does it avoid really obvious objections any specialist could think of? does it miss any seminal references that are relevant and that any specialist should know? is the paper of interest and worth publishing?
An author score will be fair only if we have some agreed-upon standard for what counts as "minimally publishable" and there's at least something approximating consensus on which papers count as meeting it.
I worry that without such standards, an author score will turn pernicious fast.
Posted by: Postdoc | 01/09/2019 at 03:32 PM
"What do you think this would lead to? Here's what I suspect it would lead to: a greater emphasis--both in graduate programs *and* among early-career philosophers--in developing "peer-feedback groups." In other words, it would incentivize people to do exactly what I DIDN'T do as a grad student or early-career scholar: get to know more people I could trade papers with, and so on.
In brief, I am optimistic that if the Author Score system I am proposing were set up, the incentives it puts in place would motivate people to solve the very problem you're presenting: developing better peer-networks, both within and outside of their grad programs. Indeed, I suspect websites and online groups might even get set up by people looking to expand their 'feedback groups.'"
No doubt it would motivate people to solve the problem I'm presenting. However, your ability to solve this problem depends again on your prestige and connections. The great thing about the current system is that you can use the peer review process itself to get feedback, so you don't have to depend entirely on your personal connections. I think this is a great aspect of the current system and wouldn't want to change it. It allows anyone, no matter their status in the profession, to have a realistic chance of publishing their ideas.
Posted by: pendaran | 01/09/2019 at 05:59 PM
Hi Pendaran: maybe - but I guess I'm more optimistic, both about (1) the ability of people without connections and prestige to *make* connections (including connections with other otherwise-isolated people) in a system that incentivizes it, and (2) such a system thereby reducing the advantages that people from prestigious backgrounds have relative to the status quo.
Here's an analogy (from the music business): In the olden days, bands had to get signed to a major label in order to get exposure and listeners. Small bands were at an extreme disadvantage. They had to scrounge up their own money to get gigs, put out records, and so on--and, after all that effort, if they were lucky, they might get an exploitative record deal from some label or other. It was a terrible system. Then the internet came to be, and streaming services like Soundcloud, Pandora, and Spotify came into existence--as well as places like iTunes. Can major acts exploit these services to their own advantage--given their major labels and prestige? *Absolutely.* But, I will tell you this: the new system has dramatically empowered smaller artists to get heard--by giving them platforms to share their work, recruit listeners and so on.
In a nutshell, I am optimistic that a peer-review system that is more public, more transparent, and which has better incentives for authors and reviewers would have similar effects in academic publishing. Nothing will ever eliminate the advantages of prestige and connections. However, systems that give otherwise-isolated authors/artists platforms to share their work--and which provide them incentives to make their own groups and connections--may help reduce those advantages. That is what happened in music...and I see no reason why it couldn't happen in academia--indeed, I see some reasons to think it might!
Posted by: Marcus Arvan | 01/09/2019 at 06:29 PM
If we could make an online peer commentary system and could get experts in the various fields to agree to participate, then that would be a great way to increase the quality of submitted articles while making philosophy more inclusive.
However, I don't see how the author score system alone will be enough to accomplish all of this, in part because the people in need of comments are least able to give them. So, we have to rely on people who don't benefit much from the system to comment.
Posted by: pendaran | 01/09/2019 at 08:04 PM
Postdoc: Thanks for chiming in. I (mostly) agree! Although I think consensus is too high a standard (consensus is infeasible on most things), I think the system should set some pre-established standard for what counts as minimally acceptable. Perhaps a standard that a plurality of journal editors support--or perhaps, better yet, a standard that a clear majority in the profession might support in some kind of well-carried-out poll.
Posted by: Marcus Arvan | 01/10/2019 at 06:24 PM
Pendaran: I don't think the system I'm proposing would eliminate the ability of authors to get good feedback through peer-review. The entire point of the broader proposal is to incentivize reviewers to write better and more timely reviews. So authors would get that benefit, as *well* as incentives to create "feedback groups" to improve their papers prior to submission. Also, I think it's false that one needs feedback from "experts", at least if this is understood as Super-Famous people. Sure, someone at NYU might benefit tremendously from getting feedback from Kit Fine. My system wouldn't get everyone access to Kit Fine. It would, however, incentivize metaphysicians with PhDs--including otherwise isolated junior people--to create feedback groups of their own. That's not as good as having Kit Fine, but it's about 1000% better than what isolated early-career people have right now, which (as you mention) is basically no feedback at all except from journal reviewers.
Posted by: Marcus Arvan | 01/10/2019 at 06:28 PM
I like your reviewer system, but I'm hesitant about the author system. It strikes me that it might hurt the most vulnerable in the field, young scholars who are trying to make it and don't have good connections for feedback.
As far as what counts as an expert, I don't think we need to go to the level of famous. I've published 10 articles on the ontology of color. That makes me an expert as far as I'm concerned. But I'm certainly not famous!
However, in general, the more of an expert you are, the less you need feedback. When I was a PhD student I required a lot of feedback on my work from my advisor and tried to get feedback from other colleagues.
These days I don't really bother with feedback past whatever the reviewers say, and honestly most of the time I ignore the majority of what they say too. The reviewers usually know less about the subject than I do.
I know that sounds arrogant, but that's just my experience.
I guess if I were to write a paper in a new subject area, though, I might find an online peer commentary system useful. But in general I think the point stands: the more able you are to give feedback, the less you personally need it.
So what we need is an incentive for those who don't much need feedback themselves to contribute to the system.
This aside, if editors are going to be required to judge submitted work based on whether it is minimally good enough for submission, why not just desk reject?
Posted by: pendaran | 01/11/2019 at 09:17 AM