Comments


Amanda

There is a lot in this post.

A few thoughts:

1. I agree the peer review process is broken in some way and favors boring, non-innovative, safe papers. I have had decent success with publishing, but I think my better papers are the ones that have not been published. When I read journals, 75% of the papers are boring.

2. As much as everyone agrees with me about the problems of peer review, nearly EVERYONE disagrees when I offer my solution. I think referees must be motivated by something other than good will, so we either need to pay them, give them free books, or make refereeing a big part of promotion. We can pay them by charging for submissions, and yes, it is easy to find a way to do this that puts little or no burden on those in adjunct, grad student, or financially insecure positions. Those are the people who currently suffer the most from the problems in peer review, so overall it would help them a lot, not hurt them. I was in a very rough financial situation for many years, and I have no doubt that this change, even in my situation, would have been welcome. Suppose a grad student submits 10 papers a year at $5 each. That is a cost the vast majority could afford with no problem, and we could even make submission free for grad students and adjuncts if that would make everyone happy. When you look at how other disciplines run conferences, it is amazing how cheap it is to be a philosopher.

3. I am curious whether you are applying for jobs worldwide...?

4. I disagree that quantity and prestige of publications is all that matters. In the US, I would say what matters most is the prestige of one's PhD school and connections. Many people from highly Leiter-ranked programs, or with connections, get hired with few, if any, publications in good journals. This is what matters for research schools, anyway. For teaching schools, what matters is teaching experience and competence in publishing.

I might say more later; that is it for now.

Muhammad

I'm never going to not post this when this kind of discussion comes up. While it's not a silver bullet that will solve everything, I think it would especially help with the need to overproduce in order to get hired and tenured, as well as with the unbelievable number of submissions that journals get:

http://dailynous.com/2015/12/31/a-modest-proposal-slow-philosophy-jennifer-whiting/

Muhammad

Also, I just read Amanda's comment and completely agree with point number 2:

http://digressionsnimpressions.typepad.com/digressionsimpressions/2016/10/if-the-system-is-broken-blame-the-referees.html?cid=6a00e54ee247e3883401bb09451a65970d#comment-6a00e54ee247e3883401bb09451a65970d

D

Amanda:
1. I am happy you agree with 1.

2. I have also played with the thought of paying editors and referees, and/or of fees for submitting, and am still fairly positive about it, but only enough to think we need to brainstorm about the consequences. I am a little bit scared it would only become one more way for publishers to get more money. But it would at least acknowledge (if only symbolically) that refereeing is valuable.

3. Yes, I apply for jobs worldwide, but I have a family, so how far afield I have applied has depended on our family situation over the course of the last 16 years.

4. And, yes, I suspect you are right in thinking the US is different, although I think that in the UK a high-ranked American PhD school also matters a great deal. But am I right in thinking that this is only relevant for junior positions? Again, I agree that for teaching schools what matters is experience, but they also tend to favour the people already in place.

Finally, the article generalises and exaggerates for provocative purposes, but I still think there is a grain of truth in it (perhaps more ;) )

Amanda

So actually, from what I know, in the UK the prestige of one's PhD institution matters a lot less. There are a number of accomplished philosophers with degrees from low Leiter-ranked schools who went to the UK because their publishing records could get them a job there, whereas in the US their unprestigious PhD institution kept them out.

As for its being relevant only for junior positions, I don't think so. I mean, it is a bit more complex. For a senior position at a research school you must be a major player in your field. But the way prestige bias works, it is unlikely that you will be a major player unless you went to a prestigious PhD program. This is because what your current institution happens to be plays a major role in whether you are acknowledged as a major player (despite 'blind' peer review... hmmm), and whether you are at a currently prestigious place is closely connected to your PhD institution. So, yeah, turtles all the way down.

All this said, I will repeat that of the measures we have for judging philosophical ability, publishing is one of the better ones, in spite of how broken it is. Guess we know what that says about the other measures.

I want to be fresh

Amanda -- I like the idea of incentivizing referees, but I wonder: is it necessary? There are a number of high-profile journals that seem to somehow provide good comments in a timely fashion. I'm thinking of AJP, Philosophy of Science, BJPS and, to a lesser extent, Phil Studies (add your own to the list, or disagree if your experience has been vastly different from mine). Incentivization may help, but I do wonder how these journals are able to identify sensible referees and provide reports in such a timely fashion!

Amanda

Well, unless there is a way to get all journals to work like the few good ones, it is necessary. I am sure some journals are better than others. But to be honest, what I have mostly noticed is that some journals are faster than others. This, admittedly, is a big deal. Personally, I haven't noticed a difference in the quality of referee comments from journal to journal. No matter the journal, I have gotten the impression (at times) that referees are rushed and not reading my paper carefully. And given the lack of any incentive to read it carefully, this is not surprising. If I am honest, there are times I myself have rushed through a paper, and I am someone with unusually high guilt motivation about these types of things.

Pendaran Roberts

I have to agree with Amanda that we need to pay referees. I know people who never referee because they see no benefit to them. They see the dire job market, and all the work and stress it causes, as releasing them from supererogatory tasks. I have to say I agree. The current peer review system was designed during, and for, a very different world. It doesn't function in a world where adjuncts comprise such a large percentage of the labor force, short-term contracts abound, and many never find permanent work, or find it only after half a decade or more. However, I caution that the submission fees that would therefore be required should be proportional to the salary of the submitter. Also, I caution that referee pay must be high enough to actually motivate good work, probably around $200 per report.

JR

Some thoughts.

It is very difficult to determine the quality of one's papers/ideas/arguments. Some might think that a paper X is genius; others might think it is obviously wrong and bad philosophy. And some others might even think it is not philosophy at all... It is almost impossible to evaluate quality when hiring applicants, because philosophers disagree so much. One way this could be done, I assume, is to see where (in how prestigious a journal) one's papers have appeared. The better the journal, the better the quality of the paper (of course this is not obviously true, but this is how many people think it is).

There are problems in the referee system, but those problems mainly occur at top general philosophy journals. I have always received good feedback from specialized journals (whether a rejection or an acceptance). And I believe those journals are more willing to publish different kinds of work. Sometimes editors of top journals don't even seem to be aware of who reviews the submissions, and they seem to accept anyone for review who is willing to do it. For example, I have reviewed for a top-10 general journal even though I do not have a PhD and at the time had only one publication, in a specialised journal.

I think, in general, we should appreciate reviewing more (when it is done properly). Right now, at least in principle, one could be a successful academic without reviewing even one paper. That is because no one demands that we review. No one is fired for reviewing too rarely (or for writing bad review reports!), and nobody fails to get a job because they "have not reviewed enough". Reviewing is seen as supererogatory although it should be obligatory.

I don't think referees should be paid, though. But reviewing should be understood as a merit in hiring (for example, let's use the Publons website more).

75% of papers might be boring, but so what? Okay, if people who write boring papers are hired instead of people who write interesting papers, that might be a problem. But nowadays philosophers seem to think we are publishing too much because "I don't have time to read it all!" This is the wrong way to see it. For example, if one is criticizing some paper, the criticism might be worth publishing even if no one is ever going to read it. Publishing research is inherently valuable, and if I am able to show mistakes in someone else's argument, then of course I should do it; that way I expand the knowledge in the world. Recently David Velleman raised concerns that people are publishing too much and too early. I believe it would be very difficult to see these sorts of comments in other disciplines. Philosophers seem to be the only ones who think that (A) we should only publish something that makes a major contribution, and (B) every philosopher should have time to read every philosophy paper and understand it.

One problem with the discipline is that there is no clear measure for evaluating candidates. For example, it seems that at research schools in the US, hiring decisions are based on the prestige of the PhD school and connections (as Amanda said). Teaching schools base their decisions on teaching experience. In the Nordic countries it seems that hiring is based more on the mere quantity of one's work. And there is yet another option, which I guess is also used: the impact of one's work, i.e., how many citations you have received.

So "why don't you apply to somewhere else in the world" might not be a good advice because the criterion for hiring might be very different elsewhere.

Referee

Scientists have been complaining about the vast research literature since the 1960s. This is not a problem unique to philosophy, nor a new one. I referee a lot. One key problem is that philosophers are sending papers out to journals when they are nowhere near ready. Also, I was shocked to learn that early career people are sending out 10 papers a year. This shows very bad judgment. One way to cut the rejection rates of journals in half is if we all sent out 1/2 the number of papers.

Tom

I am entirely *not* on board with the "Most Papers Are Boring" bit.

I *am* entirely on board with the "Most Work is Deeply Specialized and Only of Interest to Specialists" bit. But I think it's obvious on reflection that that's as it should be.

Marcus Arvan

Referee: You write, "Also, I was shocked to learn that early career people are sending out 10 papers a year. This shows very bad judgment."

Bad judgment with respect to what? Bad philosophical judgment perhaps - but not, in my experience, bad strategic judgment. For what it is worth, early in my career when I was struggling to publish, I was told by two *very* successful and well-respected people in the field that their secret was (and I quote) "always having ten papers under review." They both gave me the same advice independently. I followed it. I got a job. Then I got tenure.

Look, I would be willing to agree with you that there is an obvious sense in which this is non-ideal and regrettable. We have, for all intents and purposes, entered something analogous to a publishing arms-race. But unfortunately, that's where we are - and one can hardly blame people desperate for jobs for playing the game that exists. It may be the wrong game--and I am all for changing the game if we can--but for now it's the game we have.

NK

JR says: "The better the journal, the better the quality of the paper." But they also say: "It is very difficult to determine the quality of one's papers/ideas/arguments. Some might think that a paper X is genius; others might think it is obviously wrong and bad philosophy. And some others might even think it is not philosophy at all..." Isn't there some tension here?

More generally, what exactly is the evidence for that first claim?

Also even if there's truth to it, I, for one, worry that this kind of view about the relation between journal reputation and paper quality is, to a large extent, self-confirming, in that papers in journals with good reputations are more likely to be thought to be good, and (this even more so) papers in journals with poor reputations are more likely to be thought to be bad (when anyone even deigns to read them). (I think something similar is probably true of the relation between PhD program reputation and student "ability.")

Pendaran Roberts

‘Also, I was shocked to learn that early career people are sending out 10 papers a year. This shows very bad judgment. One way to cut the rejection rates of journals in half is if we all sent out 1/2 the number of papers.‘

I have to agree with Marcus. It’s the unfortunate game we have to play, given that there are nowhere near enough jobs for the graduating PhDs.

Also, many of us get little or no help from our departments. I was mainly on my own. So, referee reports were the only way of getting expert feedback.

Valdi Ingthorsson

Interestingly, a paper was published today in the journal "Ethical Theory and Moral Practice" presenting an argument about the flaws of the current practice of peer review that seems to agree very well with my argument above.

It is open access: https://link.springer.com/article/10.1007%2Fs10677-018-9891-9

The author writes: "Recent years have borne witness to a mainstreaming within scientific publications that is unprecedented in history since the emergence of scientific journals. This is especially true of philosophical journals and even more so of ethics journals." And also: "Reviews, of course, have an important filtering function in the exclusion of poor articles. But they also relentlessly screen out much of what would be more than worth publishing, I’m afraid just many of the most innovative and original contributions. And they let pass a lot of conventional papers that lack substantive value. It is this kind of review that I would like to discuss."

All the best, Valdi

Marcus Arvan

I think NK makes some important points, and that they might actually point a way toward (at least part of) a solution to the problem(s) the profession faces here (viz. the publishing arms-race, overwhelmed journals, etc.).

It is, I think, no secret that our profession seems to attach great value to two things: (1) publication venue, and (2) quantity (i.e. research output). As Valdi's case illustrates (and many other sources of information seem to me to broadly corroborate), people in our profession are routinely evaluated on these grounds--not only in hiring, tenure, and promotion, but also in terms of prestige in the discipline. I cannot count the number of times I've heard remarks to the effect of, "Oh, s/he's published in *Mind*" or "X is a crap philosopher. Look at the journals they've published in", or "X only has two publications, whereas Y has 14!"

Here's the thing, though. First, just about everyone recognizes that highly-ranked journals sometimes publish bad work and low-ranking journals sometimes publish good or great work. Second, just about everyone recognizes that publishing more is not necessarily better than publishing less. Third, there seems to me to be a *better way* to measure the worth of someone's research than either venue or quantity: namely, the kind of *impact* their work makes.

Look at Valdi's Google Scholar page. Valdi may not have published a ton, but his work is plainly making a substantial and increasing impact in his field. Of course, citations alone need not be indications of merit (some bad articles are cited a lot despite being widely recognized *as* poorly argued). But if you combine citations with how one's work is received (i.e. whether others defend one's work, develop it further, etc.), then that would seem to be the best measure of someone's merit as a researcher.

Indeed, it seems to me *far* better than measuring research merit by venue or quantity. A while back, Kieran Healy and Nick Bloom did a study of citations in four top-ranked journals (JPhil, Phil Review, Mind and Nous): http://philosopherscocoon.typepad.com/blog/2015/02/healey-and-blooms-citation-data.html

Of articles published in those journals from 1993-2013, Healy and Bloom found that "almost a fifth of them are never cited at all, and just over half of them are cited five times or fewer."

Under the standard way of "ranking" people as researchers (viz. venue and quantity), someone who publishes three articles in Mind and one in Nous would be taken to be a fairly spectacular researcher (a "star") even if their papers were hardly ever cited or engaged with in any way. Contrast someone with that research profile to Valdi, who has a publication in Metaphysica with 18 citations, one in Dialectica with 10, a book with 9, and so on. Despite having published in "worse" venues, Valdi has clearly made a greater impact in the discipline than our hypothetical star.

Suppose, then, we moved away from evaluating researchers on the basis of venue or quantity of publications and instead moved toward evaluating them on the basis of *positive impact* (citations, level of engagement in the literature, and how well their work is received). That, it seems to me, would incentivize people to (A) send fewer things to journals (focusing on quality rather than quantity), (B) send more original and ambitious work to journals (given that such work seems more likely to stimulate further discussion), and (C) lead people to be hired and evaluated not primarily on the basis of what two reviewers at a journal say, but how a person's work is received by specialists in their area and/or philosophers in the discipline at large.

All of these seem to me to be good things--moving us away from the publishing "arms race" as it exists today, incentivizing more exciting work, and leading people like Valdi (who have made a real impact) more likely to be hired.

Marcus Arvan

Actually, let me amend that last comment a bit.

It doesn't take much digging in the history of philosophy or science to notice that particularly innovative work is often initially met with a mixed or hostile reception, and conversely, that some work that is received well in the short-term is forgotten long-term. Kant's First Critique received a mixed reception for several years; Darwin's Origin and Einstein's relativity were savaged by some critics; and so on. On the other hand, during their time, Filmer (who is now forgotten) received notoriety for defending the divine right of kings; Thomas Reid was wildly influential as a moral theorist (not so much anymore); etc.

As such, although I think it might help (in many ways) to move toward a standard prioritizing 'impact', there are potential traps to beware of here as well. All things being equal, I think it is probably better for work to be well-received than poorly received. Still, it also seems to me better, at least all things being equal, for work to be controversial than not engaged with.

Peter

Marcus: I worry about looking toward impact. Impact in general, and citations in particular, can depend in large part not only, or even primarily, on the quality of work, but on already being known in the field. Famous people get cited because they are famous. People cite the work of those they know. This would become even worse if citation counts came to matter more. We could even have unstated (or in some cases stated) agreements between philosophers to cite each other every chance they get.

One problem with blind reviewing is that it is not completely blind. But at least it is sometimes blind. Citation is not.

Let's look at this another way. Suppose we have an early career philosopher from a low or unranked school. In the current system, publishing in Mind will help this person be taken seriously as a philosopher (even if the lack of pedigree still hurts). In the system you propose, people could still justify ignoring this philosopher until the paper in Mind gets some number of citations. Will this paper get many citations? Well, all things being equal, it seems to me that the paper would get much more attention if the philosopher comes from a highly ranked school and has many connections. This need not even be nefarious--I suspect people in general cite the work of people they know. I think this sort of switch would give even more power to networking.

I also wonder about your claim that this will inspire people to "send more original and ambitious work to journals (given that such work seems more likely to stimulate further discussion)." This might be right, but I am not so sure. Small papers written to make small moves in discussions with many participants may well be a safer bet for ensuring at least some level of engagement. They will not light the world on fire, but at least they have an audience. Truly novel work might not have any built-in audience, and so it may very well be ignored.

Of course, maybe I am wrong--I would be curious to hear what you think of this problem.

Amanda

I would stand by my claim that 3/4 of philosophy papers in top journals are not just narrow and specialized, but boring, and boring even to specialists. The problem I have with papers that are highly specialized within my own specialty is, oddly, that they are often three-quarters literature review of the very literature I already know especially well (so boring), followed by a tiny point at the end.

And of course it matters if papers are boring. I am not buying that it "adds something" to publish some correction of some argument. Tight arguments are not all that matters. It also matters that the arguments are on worthwhile topics. If it is not immediately obvious to you that arguments about some topics are more worthwhile than arguments about others - well, then our values are so far apart I am not sure we can come together.

NK

I share Peter's concerns about using impact as a measure of quality.

Also, I'd love it if there were a workable solution to this problem, but I'm inclined to think that, at bottom, the problem is just that we're competing with one another (for jobs, for grants, for prestige, etc.): because this competition requires rankings, and because we can't rank one another directly by quality, we need some kind of proxy for quality; but any proxy is going to create perverse incentives; and so we end up with the kinds of problems at issue here. Changing the structure of the competition, by focusing on impact rather than publication venue and/or number of published papers, just doesn't seem to address the fundamental issue.

It's also pretty clear to me that citation *is* self-reinforcing: once you're cited once, your chances of being cited again increase; and the more you're cited, the more likely you are to be cited more.

Looking to the kind of engagement you're getting makes some difference, but who's really going to check that? Wouldn't you have to read all the work in which the person was cited? Wouldn't that take way too long?

Marcus Arvan

Peter & NK: Thanks for your comments. I get your worries. They are worth taking seriously. Here are three general thoughts in reply.

(1) No approach to "measuring research" is going to be perfect. Everything can be "gamed" in one way or another. People already game the "anonymized" peer-review system (e.g. reviewers googling paper titles, people posting unpublished papers publicly, etc.). The question is whether focusing on impact would be better, on the whole, than focusing on venue and quantity.

(2) On that note, my sense is that people already preferentially cite their friends and people in positions of privilege (i.e. famous people) - so I'm not sure that evaluating research on the basis of impact would make things substantially worse (maybe it would - I'm just not sure). However, for several reasons, I think it might make things better.

First, people at small schools may not have the time, resources, or incentives to send stuff to top journals. I can say more about this if you like, but briefly: the incentives to send stuff to top journals are much heavier for people at R1's (they need it for tenure and promotion; people at smaller schools do not, and given the costs involved with waiting on top journals, people at smaller schools may avoid them. I mostly do, though not entirely). This is one reason why I think the status quo--focusing on venue--heavily tilts things *away* from people in smaller places and toward people at R1's.

Second, although people do cite their friends, etc., my sense is that people are surprisingly willing to cite and engage with work by "small names" they genuinely find interesting. Take Valdi. He *hasn't* published in Mind, or Nous, or PPR, or JPhil. He's not a tenure-track person, or at a ranked program. Yet, despite all this, he has made a sizable impact. And Valdi's not alone. I could give you a lot of "small names" who have published in "bad journals" whose work people engage with.

Third, I want to emphasize that I am not just suggesting we focus on citations. Citations are one measure of impact, but far from the only one. There's a big difference between just being cited and actually having your work engaged with (i.e. discussed at length). Here again, I could give examples.

Long story short, although no measure of "research quality" is perfect--and sure, people probably would (and probably already do) seek to game citations--I still can't help but wonder whether impact, broadly construed, is a better way to go.

(3) Maybe I'm totally wrong. :) In that case, here's another alternative. Perhaps instead of changing how we evaluate research, people should instead attach more prestige to journals (such as the Journal of the APA) that specifically aim to publish bold, risky, unconventional work. I guess this was sort of Valdi's suggestion, and for what it's worth, I'm all for it.

Valdi Ingthorsson

I think a lot of good things have been said in this thread. Of course there are differences of opinion, because we all have slightly different experiences of the process, but also because any suggestion of a 'one-size-fits-all' system is inevitably going to disappoint somebody. But of course, in order to have some chance of designing the 'one-size-fits-most' system, we need to openly vent all our crazy ideas along with our frustrations. Indeed, in this thread some are discussing what should be done if nothing changes, and others discuss what should change. Nobody is quite happy with the system, but some think we are complaining about the wrong aspect of it.

At this point I will have to say that I think Marcus has a point in suggesting that impact should be given weight. But I also share NK's worries about it, so it will have to be done in the right way. Perhaps impact should be a part of the equation, but not by sheer quantity (and I don't think that is what Marcus is suggesting). For instance, I have been a co-editor of a volume of papers that features a rare collection of superstars. That volume has 21 citations (I am actually a bit disappointed by the quantity, but they are all quite 'heavy' citations), but those are not really citations that reflect my abilities as a researcher. I shouldn't take too much credit for them, though obviously I try to as long as I am hunting for jobs. They reflect more my luck in being around the famous for a while. But still, having a little look at whether and where somebody has been cited (and also how they are cited; they could be cited for expressing stupid views) can give some idea of whether one's research is appreciated by those doing similar research. So quality in the assessment of impact is the key. Indeed, I am sure Marcus didn't spend too much time doing that excellent and nuanced analysis of my citations with a quick look at Google Scholar (and if you haven't signed up for Google Scholar, do it today).

I will add one further reflection. In the UK, when they perform their regular assessment of the excellence of the various departments, approximately every 6-8 years (the next one is in 2020 and is called the REF), officially the only criterion is quality. Each member of staff submits up to 4 pieces of research from a period of 6-8 years (early career people submit only 1) to be evaluated for quality (the average number of submissions is less than 3 per person). On the surface it looks like the ideal quality assessment; quantity, citations, and impact are not supposed to be involved. Indeed, for 3* and 4* publications (the highest ratings), the criteria are only about quality: "Quality that is world-leading in terms of originality, significance and rigour". However, when it comes to 2* publications there is the phrase "recognised internationally" in the statement "Quality that is recognised internationally in terms of originality, significance and rigour". A publication in a high-ranking journal is today taken to be an objective criterion for 'recognised internationally'. So no paper published in a high-ranking journal will ever get a lower rating than 2*, no matter how bad it is. In the end, ranking sneaks its way into everything.

anonymous TT

An anecdote in support of Valdi's point.

My best article criticizes the received opinions of all the big shots in my field on one very relevant topic. (My co-author and I have a different solution to the puzzle - perhaps our solution is wrong, but no referee actually brought any argument against our proposal.)
We sent the paper out to the best journals in our subfield and it was systematically rejected (even though no referee found any flaw in our proposal; somebody was bold enough to write "I don't think this is true" with no argument to support the claim). It ended up being accepted by a French journal whose editor liked the paper and (we believe) trusted us enough not to send the paper for review to one particular scholar whom we knew to be hostile to our work.
I am not marketing myself on social media, and I think our article will soon be forgotten. But I still think it's my best thing (I can't speak for my co-author, of course).
I am submitting this anonymously because I am still on the tenure track. Once I become famous (which I won't), I will submit a longer Stanley-like post to the Cocoon with all the details.
In the meantime, I agree with Valdi's main idea. The peer review system kills originality if there are no correctives (like, in our case, an editor who really believed in the paper and steered the review towards a positive outcome).
Sometimes it's not the reviewers' fault: we just have no time to publish and referee other scholars' work.
The first step towards a solution is to demand better working conditions: we should all be working less.

Stuck, PhD

I think that making reviewer names public upon publication, as Frontiers journals do, or at least making them available to the author at the end of the review process, would be cheaper and more effective than paying referees. I won't get into what I perceive some of the problems of paying referees to be, but I do think there are some obvious reasons to make reviewer names known to authors at the end of the process. Here are just two of them.

First, I have received some downright nasty, egotistical, and (most important to me) unhelpful reviews that I don't think I'd have gotten if reviewers knew they'd be named. Second, I've faced an average review time of about 6 months, including one case in which, after 14 months, I was told that the reviewer was not responding to the (top-10) journal editor's e-mails. I think accountability would motivate good behavior here more than money would.

Peter

I think that making reviewers' names public would be a bad idea. If we publish the names of those who gave positive reviews along with the published article, then reviewers will become even more hesitant to take a chance on ambitious work. If the work is not well received, the reviewers may be shamed (this is especially true if the ambitious and risky work is on a controversial topic). It would be safer for reviewers to accept safe, boring work. And if authors found out the names of those who rejected their work, younger and less well-established reviewers could become afraid of rejecting papers out of fear of backlash. The thought might be that if they reject the work of someone famous, they might face backlash that could hurt their career chances. I suspect this would make title-googling even worse than it is now.

We clearly have a problem with nasty and sloppy refereeing, but I think the solution must be found elsewhere. One place to look first is journal editors. If they gave more guidance, required comments suitable for discussion between peers (that is, respectful language), and refused to use referees with a track record of sloppy reviews, the situation might improve. Of course, since editors are not experts in every subfield, they might not always be able to tell when a review is sloppy, but surely they can sometimes tell, and they can use this information to inform future referee requests. Even better would be to use area sub-editors (as some journals have), who could make determinations about whether reviews were sloppy or not.

I should note that by "sloppy" I do not mean to discuss whether the written comments are helpful to the author, but instead whether the review demonstrates a careful reading of the work. In many cases careful reviews will also be helpful for revisions, but this is not always the case.

Pendaran Roberts

I have enjoyed this discussion. However, I wonder whether we shouldn’t be aiming to address the more fundamental problem: way too many PhDs for way too few jobs. The current peer review system might work fine if we adjusted the demands on it by decreasing competition for jobs. I will here assume we can’t do much to increase the number of jobs and focus on decreasing PhDs awarded.

Only two simple steps are required:

1. Departments should be required to have accurate placement rate pages, checked by a third party.

2. Departments should not be allowed to admit many more PhDs than they can place in permanent jobs, checked by a third party.

The third party should probably be the APA. They should have an accreditation program, and the punishment for failing to meet 1 and 2 should be a loss of accreditation. Only accredited PhDs should be worth anything on the job market. We should just band together and make it impossible to hire a non-accredited PhD. This is more or less what happens in psychology and other disciplines.

Consequences:

1. Probably a quarter of PhD programs would be shut down.

2. Programs not shut down would be reduced in size.

3. The number of PhDs awarded per year would be cut in half at least.

4. So competition would plummet, and the publishing war would stop. With the publishing war stopped, the peer review system could function. Referees would be less stressed by their own obligations (due to the dire job market) and so able to properly review articles in a timely fashion.

5. Also, with a decrease in supply, adjunct positions would disappear, fixed-term contracts too, and salaries would go way up. We'd altogether be treated much better, as we should be. This is pretty basic economics!

Is something like what I’m suggesting practical to implement? I don’t know. However, it’s not obviously less practical to do than anything else suggested.

Objections

1. What about trust fund kids who just want to study philosophy? A: They can study it at a non-accredited program.

2. What about departments that rely on slave, umm I mean student, labor? A: They would have to stop taking advantage of students.

3. What you suggest would just concentrate even more power in a handful of elite programs. A: Not necessarily. Some top programs don't do better at placement than some less prestigious programs. Some top programs currently have poor placement and hide this fact. Of course, probably more lower-ranked programs would be shut down. But there just aren't enough jobs for all these programs!

Amanda

That's an interesting idea, Pendaran. One of the better ones I've heard. The main snag, I think, is getting the accreditation to mean something. The APA doesn't have universal respect, to put it mildly. And my guess is that if this happened overnight, no one would care whether a PhD program was accredited by the APA.

I want to be fresh

I think this is all correct – the review system we have in place needs to change. As a proponent of incremental change, my first thought is to identify those journals that *are* performing (for the most part) well and learn from them. What is it about these journals that allows for timely and not-awful reports? How do editors of successful journals approach their work? I know of at least one high-profile journal that was a total mess and completely cleaned up its act within 18 months due to a changing of the guard (I’m sure there are lots of other journals people can think of that have gone through a similar transformation). Incentivization or curbing the number of PhD students entering the discipline may be part of the solution, but we should also just try to understand how a journal like AJP or PhilSci or PhilStudies is able to do what it does – handle a huge number of submissions in a (for the most part) thoughtful, considerate and timely fashion.

(Once again, this list may be way off. Perhaps readers have routinely had negative interactions with these journals. I've had negative interactions (in the sense that my paper was rejected) many times with these journals, but even in those cases I thought my paper was handled for the most part with a fair amount of care and attention. I’m completely willing to accept that there just are no ‘exemplars’ to learn from. But on the off-chance that there are such journals, we should perhaps try to better understand their success.)

anonymous TT

I am not sympathetic to Pendaran's proposal (even though I am sympathetic to his concerns). In general, I don't like top-down bureaucratic solutions. That's why I think the most effective reaction would be an awareness among faculty members that we all have to work less. (Also, look at the number of faculty members with kids: they are a small minority in the new generation, because they don't think they'll have time to be parents.)
Alas, people seem to have given up hope of obtaining better working conditions, but there is no reason why we shouldn't organize ourselves to obtain them.

Amanda

I am also skeptical of bureaucratic top-down solutions. Very skeptical. On the other hand, I am very sure that an "awareness among faculty that we need to work less" is never going to happen. Well, there is this awareness, but it means nothing without some measure of enforcement.

Also, I am not sure where you are getting the idea that fewer faculty members have kids - that certainly isn't my experience. And after tenure, how much you work is up to you. I am sure all of us know tenured faculty members who work less than a part-time job. I know them, anyway.

Pendaran Roberts

The weekend is due to a top-down solution.

'In 1929, the Amalgamated Clothing Workers of America Union was the first union to demand a five-day workweek and receive it. After that, the rest of the United States slowly followed, but it was not until 1940, when a provision of the 1938 Fair Labor Standards Act mandating a maximum 40-hour workweek went into effect, that the two-day weekend was adopted nationwide.'

https://en.wikipedia.org/wiki/Workweek_and_weekend

anonymous TT

Lots of things are granted by state law as a result of the action of unions etc.
So let's put it differently: it is obvious that some sort of rule or guideline has to be introduced and enforced, but nobody will ever introduce it if there is no pressure to do so from the bottom, i.e. from unions etc.
Unfortunately, unions don't care anymore about these issues, since we are threatened in a number of ways.
And I don't know where you work, Amanda, but where I work (I mean, in the area where there are several philosophy departments) it is much rarer for junior philosophy faculty members to have kids than for junior faculty from other departments.
