The following is a post submitted by Rögnvaldur ('Valdi') Ingthorsson (Lund University):
Academic Frustrations: Publications and Jobs
—Valdi Ingthorsson
I recently came across an old post by Jason Stanley in The Philosopher’s Cocoon about the frustration of getting papers through the peer-review process (Stanley on Peer-Review). The post put its finger on a couple of things that have frustrated me for a while. First, why is it so difficult to get philosophically interesting papers published in high-ranking journals? To my mind, these journals seem to be mostly filled with papers that don’t do much more than recirculate received views, albeit in a competent way. Second, why do quantity and ranking still rule over quality in the hiring process? I have come to think the two are closely related. As you can tell, I am looking at this from the perspective of someone who struggles to get published and to get a job. I very much expect that journal editors and tenured faculty on hiring committees will not recognise themselves in the description. But I think it is healthy for them to find out how the whole thing looks from the other side (even if it is all biased and exaggerated, a possibility I cannot rule out).
It is important for the credibility of this piece to establish firmly that my difficulties in getting published are not due to my bumbling incompetence as a writer. My publications figure in seven SEP entries, my recent book on McTaggart’s Paradox was nominated for the APA Sanders Book Prize, and I have been awarded research grants to the combined sum of approximately $550,000. My book was actually described by a reviewer as “superbly written, exhibiting exceptional clarity, concision, and flow” (J. Mozersky, Notre Dame Philosophical Reviews). Indeed, the only consistent feedback I receive about my applications for jobs and funding is that my research is highly original and interesting. And I have received more than just a handful of quite detailed evaluations (we’re talking 50+), because in Scandinavia applicants for university positions and research grants must be assessed by external reviewers (sometimes as many as 3), and their verdicts are by law public documentation. However, despite praise for originality and quality, I am still consistently ranked low on the list of candidates because, the argument goes, I just haven’t published enough, and not in the highest-ranking journals.
As far as I can see, focusing on quality over quantity is the worst thing you can do as a young academic, despite everyone’s claim that it is quality that matters. My cynical advice to young scholars must be: cancel (for a while) your ambitions for true philosophical excellence and write some competent rehash of mainstream views (with some minute tweaks) for publication in the highest-ranking journals. Basically, write some stuff everyone can understand on first reading and that therefore requires no effort on the part of referees. I have heard plenty of people state out loud that it’s not their job as referees to make an effort to understand the papers they review; it is the author’s job to make it understandable from the first syllable. We are seeing here something comparable to the application of the ‘7-second rule’ of pitching: if your presentation doesn’t capture the critic in 7 seconds, they stop listening. Is philosophy dumbing down to that level? In a 7-second world, posterity would never have heard of Kant.
Don’t get me wrong. It is meant to be difficult to get papers published, and I think we need the peer-review system. However, the system should weed out the incompetent and the uninteresting, not also the interesting and innovative in favour of the merely competent and bland. My worry is that this is what the system actually does, and I am not alone in it. Jason reports in his post that his best pieces meet with the most resistance, and Tom Cochrane reveals in another recent post in The Philosopher’s Cocoon that his best paper went through 20 rejections. John Heil, in the inaugural editorial of the Journal of the American Philosophical Association, says that one of the main reasons for initiating the journal is the concern that the peer-review process, in its current shape, “can stifle unconventional papers, papers that take chances, papers that go out on a limb: interesting papers”. Now, others argue that this is all a good thing, because for each rejection the papers improve. That may be true, but don’t forget that it was because they were interesting in the first place that they were rejected.
Going back to Jason’s post: he ultimately offers advice for frustrated people like myself: don’t get hung up on peer-reviewed publications, just get your papers ‘out there’ any which way you can. In his view, “whether your papers are accepted to leading peer reviewed journals doesn’t much matter, even for tenure, as long as they come out somewhere”. In his experience, papers somehow “take on a life of their own” and find a way to their audience no matter where they are published.
Jason is right to say we shouldn’t underestimate other options, and perhaps he is right to think that peer-review publications are on the whole overrated, but I think he is wrong to think that publications in high-ranking journals really don’t matter that much for getting a job. I venture to say that Jason—who has had a straight run from a PhD from MIT, with Robert Stalnaker as advisor, to Oxford, Cornell, Michigan, Rutgers, and now Yale—lacks the perspective of the struggling young scholar. Getting tenure once you are in the living room is something altogether different from trying to get a foot in the front door. And this is not to say Jason has had it easy or didn’t deserve it; all I am saying is that he has not been in the position of applying for jobs in Philosophy while working as a temporary lecturer in Food and Nutrition at an unranked university.
Indeed, Marcus Arvan comments that Jason’s story is perhaps best understood as one more indication that the peer-review system needs tweaking; that we should not just accept it for the troubled beast that it is and find other ways to get our papers out there (well, we can do that anyway). And, to repeat, my concern is not with the fact that it is difficult to get published, but with the fact that the system seems to make it extra hard to publish a certain kind of paper: those that argue for novel and interesting perspectives that challenge the mainstream.
So, as you may have guessed, I really struggle to get published. The record is 13 rejections for a paper that nevertheless (unbeknownst to me) became a central subject of a PhD thesis from Helsinki University in 2016, even before it came out in print. To put it in perspective: the thesis contrasts my take on truth with Crispin Wright’s alethic pluralism, and the author argues in favour of my take on the issue. The author found the draft of my paper on my Academia website, where it had been available since 2008, which actually proves Jason’s point that papers can take on a life of their own regardless of how they ‘come out’.
Also, like Jason’s, some of my best papers have not found their way into any journal, not even after 10 years in the reviewing process. The difference is that Jason finds it comforting to have a couple of papers up his sleeve in case of a dry spell, while for me they are potentially what stands between me and my first permanent job (after 16 years in academia).
My experience is that it is quantity and journal ranking that ultimately decide the outcome of job applications, because those criteria rule supreme at the stage where it really matters, namely the shortlisting stage. Obviously, the rank of your grad program will matter too, independently of the content of your thesis. And it does so in a way that basically excludes the possibility that someone with a few really good papers (good in content, not in where they are published) could ever best someone who only outdid them in quantity (except, of course, if some celebrity philosopher declared this someone a genius). I draw on my experience from more than 250 job applications in the last 15 years, so this is more than mere anecdotal evidence. I’ve estimated that I have spent a whole year of my career applying for jobs and grants. The grant writing has paid off, but that seems to have no implications for my job-seeking. Which is weird, considering that success in grant acquisition is always mentioned as an important criterion.
Outside of Scandinavia, I have yet to come across a hiring committee that reads more than a writing sample during the shortlisting process (typically capped at 10,000 words), and in the vast majority of cases they read nothing at all. Ergo, quality only matters in the choice between those who have already been selected as quantitatively supreme on the basis of their CV.
As I said earlier, Scandinavian universities are required to appoint external reviewers to judge applicants. The reviewers are required to become acquainted with the applicants’ research (often capped at the 10 best works), and their verdicts are by law public documentation. In some countries it is only required that reviewers read the research of the shortlisted group (this is the case in Iceland and Norway), in which case shortlisting is performed on a purely quantitative basis. Whenever I have received the verdict of such reviews, referees consistently stress the originality of my research, but without exception place me towards the lower end of the short list because I have too few publications and none in the highest-ranking journals. The only time quality beats quantity, in Scandinavia too, is in comparisons between quantitative equals.
I find it ironic (in a desperately depressing way) that while the philosophical community always reacts with horror and shock whenever anybody mentions the introduction of quantitative measures anywhere, the entire tenured faculty across the globe seems to me to be selected mainly on the basis of quantitative measures and rankings. Whatever qualitative assessment enters the process, it enters at such a late stage that it only serves to distinguish between those who excel quantitatively. In other words, when tenured faculty preach quality over quantity, they are perhaps flattering themselves a bit too much. And of course this doesn’t mean that they aren’t doing excellent research; it just means that it is unlikely they got their jobs by outdoing everyone else on purely qualitative grounds (and don’t come back at me citing the exceptions).
Nothing of what I have said so far excludes other ways of beating the competition, ways that have nothing to do with your publications, such as nepotism and corruption. Let's not go down that sordid road.
The point is that if you are lucky enough to hit the ‘fast track’ early, then by all means focus on quality and just get your papers out there in any way you can, but you might have to consider another strategy if you are still looking to get a foot in the door. This latter group should not, and cannot afford to, be happy with the peer-review process as it works today.
But what can be done about the peer-review system? Let it be clear that (i) I am in favour of it, and if anything think referees should be more selective; (ii) I understand perfectly well that journals are swamped with an increasing number of submissions; and (iii) I know the refereeing side of the system, so I am not looking at this merely from the perspective of someone who doesn’t get published. I referee for 14 journals, of which half count as high-ranking, and I review book manuscripts for major publishers. I definitely referee at least as many submissions as I submit (and resubmit) myself. During this academic year alone I have refereed 7 papers and one monograph.
I am in fact in two minds about whether anything can or should be done to the system. Maybe it is the best of all possible systems. Indeed, I suspect the main problem isn’t the peer-review system, but the philosophical community. Take for instance the increasing volume of submissions. One major cause of this increase is arguably the focus on quantity and journal ranking in the admissions and hiring process at all stages; everything today hinges on publications, and even getting into a PhD program may depend on how many publications you have. It used to be about getting the chapters of your thesis published after you finish, but now the aim seems to be to publish before you apply for a PhD program. Who administers this process? It is tenured members of the philosophical community. If what you wrote really mattered, rather than how much you wrote or where you published it, there would immediately be more focus on quality among writers, and the number of submissions would drop. But that presupposes that admissions and hiring committees start to rely on their own assessment rather than on quantity and rankings. I suspect it is naïve to think that will ever happen, because everyone seems obsessed with quantity and ranking. I am still going to complain about it.
Obviously, any complaint about the peer-review system is automatically a complaint about the philosophical community that operates the system. Referees and editors are all part of the philosophical community. WE do it all, and if anything is to change, WE need to look into our own bosoms. There is a need for a serious discussion in the philosophical community about the role of a referee, editor, etc. Clearly there is no consensus about it. Many seem to see it as a chore and a bore (perhaps to the same degree they see teaching as a chore and a bore). For my part, I regard refereeing as an integral part of being an academic, and doing it well is doing your duty to guarantee the quality of the outputs of our profession. It also serves to return a favour to all those referees who have done a good job refereeing my papers. Referees have given me invaluable advice and feedback, in addition to some grief. My role as referee is also a part of making the work easier for journal editors around the globe, who in my experience are as a rule people of high ideals and ambition. Finally, I find refereeing enormously rewarding. Being forced to really dig into a paper and pick it apart nearly always results in some illuminating thought. Often, of course, the reward is simply in seeing how much better the paper could be if such and such changes were made. And I am happy to contribute that, because I wouldn’t want my effort to go to waste. I do, however, also try to convey the message that I am just doing my best as a colleague; I am not taking the role of a superior authority. The authors will have to make the call on whether my judgement is valuable or not, but I am forced to take a stand on whether to recommend publication or rejection; that is the purpose of refereeing papers.
Having now put a lot of blame on the referees (i.e. us), I still think journals can very easily improve the refereeing process by making an effort to improve the instructions to referees. Provide clear criteria for how to judge the papers, and perhaps even provide a set of standardised sentences from which referees can pick and mix a verdict. There are papers that can be constructively rejected simply with formulations such as: ‘The paper is very poorly written, with clear gaps in the argumentation’. ‘There are obvious problems with references’. ‘The author does not engage with well-known works relevant to the subject’. ‘The paper is competently written but does not contribute anything new to the issue’. Providing the option to pick standardised formulations does not prevent anyone from writing their own well-reasoned judgement if they want to. But most journals provide no guidance to referees, or practically none at all. And none of those I know about really tell you how the editors make their calls. I think it would be enormously beneficial if referees got feedback on the consequences of their verdicts.
Streamlining the referee process is easiest for the category of papers that are obviously flawed and/or add nothing new, and arguably that is the largest category. The most difficult papers are the ones trying to say something interesting and new, which are therefore also, as a rule, more difficult to penetrate. Again, it would make a difference if referees were instructed to look out for such papers, urged to make an extra effort with them, and to opt to a greater extent for ‘revise and resubmit’ rather than ‘reject’ if the paper is interesting but problematic. But that is an editorial decision that depends on the will of the editors to deal with an increased number of R&Rs, and of course on whether they really want to promote interesting papers or rather cater to the mainstream.
Finally, I am entirely unsympathetic to desk rejections without comment, or worse, with an absurd rationale for not providing comments. The best one yet is: “We cannot provide comments on all rejected papers. We focus rather on arriving at a well-informed judgment without undue delay”. How well-informed is a judgement that is difficult to write down? In my own case, the effort of judging a paper is equal to the effort of formulating reasons for acceptance/rejection. Hence I can write those reasons down and submit them with my review protocol on the webpage. I can’t see that the situation can be different for an editor who decides in favour of a desk rejection without sending the paper to referees. Either they form a reason in their minds—and it can be about the fit of the paper to the goals of the journal rather than the quality of the paper—or they have no reason for rejecting it other than “I don’t like the look of this”. But in what world is that a well-informed judgement? Desk rejection is the tool of people who think they can “feel” the quality of papers. I don’t think I have that ability, and I doubt others really have it. I have too many times found out that papers I once thought were brilliant are actually rubbish, and vice versa.
If a journal cannot process its submissions properly because there are too many, then return a portion of them to sender with the honest declaration that they have been arbitrarily rejected because the journal cannot handle the volume of submissions at this time. Or, if only a cursory scrutiny was possible, then say that this is what you did. I have been rejected with the explanation that there simply were too many submissions at the time, or that submissions were closed until such and such a date due to the backlog. Rejections like that don’t leave you wondering, and you don’t waste any effort other than finding another journal to submit to.
To my mind, journals that allow their editors and/or referees to give a verdict without formulating a reason for the verdict have decided not to participate in the peer-review system. I will not send papers to such journals nor will I referee for them.
So, what does all this amount to in the end? Perhaps only to a plea to the philosophical community to make quality and innovation the main selection criteria when refereeing and hiring, and to be more willing to engage in an R&R procedure with the papers that are ‘interesting’. At the same time, I think we need to be more critical of the merely competent. How many competent papers arguing the same point do we need?
There is a lot in this post.
A few thoughts:
1. I agree the peer review process is broken in some way, and favors boring, non-innovative, safe papers. I have had decent success with publishing, but I think my better papers are not published. When I read journals, 75% of the papers are boring.
2. As much as everyone agrees with me about the problems of peer review, nearly EVERYONE disagrees when I offer my solution. I think referees must be motivated by something other than good will, so we either need to pay them, give them free books, or make refereeing a big part of promotion. We can pay them by charging for submissions, and yes, it is easy to find a way to do this that puts no burden or the smallest of burdens on those in adjunct, grad student, or financially insecure positions. Those are the people, currently, who suffer the most from the problems in peer review, so it would overall help them a lot, not hurt them. I was in a very rough financial situation for many years, and I have no doubt that this change, even in my situation, would have been welcome. Suppose a grad student submits 10 papers a year at $5 each. That is a cost the vast majority could afford with no problem, and we could even make it free for grad students/adjuncts if that would make everyone happy. When you look at how other disciplines run conferences, it is amazing how cheap it is to be a philosopher.
3. I am curious if you are applying for jobs worldwide...?
4. I disagree that quantity and prestige of publications is all that matters. In the US, I would say what matters most is prestige of PhD school and connections. Many people with high Leiter-rank prestige or connections get hired with few, if any, publications in good journals. This is what matters for research schools, anyway. For teaching schools, what matters is teaching experience and competency in publishing.
I might say more later; that is it for now.
Posted by: Amanda | 04/18/2018 at 03:12 PM
I'm never going to not post this when this kind of discussion comes up. While it's not a silver bullet that will solve everything, I think it would especially help with the need for overproduction to get hired and tenured, as well as with the unbelievable number of submissions that journals get:
http://dailynous.com/2015/12/31/a-modest-proposal-slow-philosophy-jennifer-whiting/
Posted by: Muhammad | 04/18/2018 at 04:07 PM
Also, I just read Amanda's post and completely agree with point number 2:
http://digressionsnimpressions.typepad.com/digressionsimpressions/2016/10/if-the-system-is-broken-blame-the-referees.html?cid=6a00e54ee247e3883401bb09451a65970d#comment-6a00e54ee247e3883401bb09451a65970d
Posted by: Muhammad | 04/18/2018 at 04:10 PM
Amanda.
1. I am happy you agree with 1.
2. I have also played with the thought of payments to editors and referees, and/or of fees for submitting, and am still fairly positive about it, but only enough to think we need to brainstorm about the consequences. I am a little bit scared it would only become one more way for publishers to get more money. But it would at least acknowledge (if only symbolically) that refereeing is valuable.
3. Yes, I apply for jobs worldwide, but I have a family, so how far afield I have applied has depended on our family situation over the course of the last 16 years.
4. And, yes, I suspect you are right in thinking the US is different, although I think that in the UK an American high-ranked PhD school also matters a great deal. But am I right in thinking that this is only relevant for junior positions? Again, I agree that for teaching schools what matters is experience, but they also tend to favour the people already in place.
Finally, the article generalises and exaggerates for provocative purposes, but I still think there is a grain of truth in it (perhaps more ;) )
Posted by: D | 04/18/2018 at 05:33 PM
So actually, from what I know, in the UK prestige of PhD institution matters a lot less. There are a number of accomplished philosophers with degrees from low Leiter-ranked schools who went to the UK because their publishing records could get them a job there, whereas in the US their unprestigious PhD institution kept them out.
As far as its being relevant only for junior positions, I don't think so. I mean, it is a bit more complex. For a senior position at a research school you must be a major player in your field. But the way prestige bias works, it is unlikely that you will be a major player unless you went to a prestigious PhD program. This is because what your current institution happens to be has a major role in whether you are acknowledged as a major player (despite 'blind' peer review...hmmm), and whether you are at a currently prestigious place is closely connected to PhD institution. So, yeah, turtles all the way down.
All this said, I will repeat that of the measures we have for judging philosophical ability, publishing is one of the better ones, in spite of how broken it is. Guess we know what that says about the other measures.
Posted by: Amanda | 04/19/2018 at 01:05 AM
Amanda -- I like the idea of incentivizing referees, but I wonder: is it necessary? There are a number of high-profile journals that seem to somehow provide good comments in a timely fashion. I'm thinking of AJP, Philosophy of Science, BJPS and, to a lesser extent, Phil Studies (add your own to the list or disagree if your experience has been vastly different than mine). Incentivization may help, but I do wonder how these journals are able to identify sensible referees and provide reports in such a timely fashion!
Posted by: I want to be fresh | 04/19/2018 at 03:10 AM
Well, unless there is a way to get all journals to work like the few good ones, it is necessary. I am sure some journals are better than others. But tbh, what I have mostly noticed is that some journals are faster than others. This, admittedly, is a big deal. Personally, I haven't noticed a difference in the quality of referee comments from journal to journal. No matter the journal, I have gotten the impression (at times) that referees are rushed and not reading my paper carefully. And given the lack of any incentive to read it carefully, this is not surprising. If I am honest, there are times I myself have rushed through a paper, and I am someone with unusually high guilt motivation about these types of things.
Posted by: Amanda | 04/19/2018 at 07:55 AM
I have to agree with Amanda that we need to pay referees. I know people who never referee, because they see no benefit to them. They see the dire job market, and all the work and stress that causes, as absolving them of supererogatory tasks. I have to say I agree. The current peer review system was designed during and for a very different world. It doesn’t function in a world with adjuncts comprising such a large percentage of the labor force, short-term contracts galore, and many never finding permanent work, or finding it only after half a decade or more. However, I caution that the submission fees that would therefore be required should be proportional to the salary of the submitter. Also, I caution that referee pay must be high enough to actually motivate good work, probably around 200 dollars per report.
Posted by: Pendaran Roberts | 04/19/2018 at 08:03 AM
Some thoughts.
It is very difficult to determine the quality of one's papers/ideas/arguments. Some might think that a paper X is genius, others might think it is obviously wrong and bad philosophy. And some others might even think it is not philosophy at all... It is almost impossible to evaluate quality when hiring applicants, because philosophers disagree so much. One way, I assume, this could be done is to see where (in how prestigious a journal) one's papers have appeared. The better the journal, the better the quality of the paper (of course this is not obviously true, but this is how many people think it is).
There are problems in the referee system, but those problems mainly occur in top general philosophy journals. I have always received good feedback from specialized journals (whether rejection or acceptance). And I believe those journals are more willing to publish different kinds of work. Sometimes editors of top journals don't even seem to be aware of who reviews the submissions, and they seem to accept anyone for review who is willing to do it. For example, I have reviewed for a top-10 general journal even though I do not have a PhD, and at the time I had only 1 publication in a specialised journal.
I think, in general, we should appreciate reviewing more (when it is done properly). Now, at least in principle, one could be a successful academic without reviewing even one paper. That is because no one demands that we review. No one is fired for reviewing too rarely (or for writing bad review reports!) and nobody fails to get a job because "one has not reviewed enough". Reviewing is seen as supererogatory although it should be obligatory.
I don't think referees should be paid, though. But reviewing should be understood as a merit in hiring (for example, let's use the Publons website more).
75% of the papers might be boring, but so what? Okay, if people who write boring papers are hired instead of people writing interesting papers, it might be a problem. But nowadays philosophers seem to think we are publishing too much because "I don't have time to read it all!" This is the wrong way to see it. For example, if one is criticizing some paper, it might be worth publishing even if no one is ever going to read it. Publishing research is inherently valuable, and if I am able to show mistakes in someone else's argument then of course I should do it; that way I expand the knowledge in the world. Recently David Velleman raised concerns that people are publishing too much and too early. I believe it would be very difficult to see these sorts of comments in other disciplines. Philosophers seem to be the only ones who think A) that we should only publish something that makes a major contribution, and B) that every philosopher should have time to read every philosophy paper and understand them all.
One problem with the discipline is that there is no clear measure for evaluating candidates. For example, it seems that research schools in the U.S. base hiring decisions on the prestige of PhD school and connections (as Amanda said). Teaching schools base their decisions on teaching experience. In the Nordic countries, it seems that hiring is based more on the mere quantity of one's work. And yet there is another option that, I guess, is also used: the impact of one's work = how many citations you have gotten.
So "why don't you apply somewhere else in the world" might not be good advice, because the criteria for hiring might be very different elsewhere.
Posted by: JR | 04/19/2018 at 08:19 AM
Scientists have been complaining about the vast research literature since the 1960s. This is not a problem unique to philosophy, nor a new problem. I referee a lot. One key problem is that philosophers are sending papers out to journals when they are nowhere near ready. Also, I was shocked to learn that early career people are sending out 10 papers a year. This shows very bad judgment. One way to cut the rejection rates of journals in half would be for us all to send out half the number of papers.
Posted by: Referee | 04/19/2018 at 08:57 AM
I am entirely *not* on board with the "Most Papers Are Boring" bit.
I *am* entirely on board with the "Most Work is Deeply Specialized and Only of Interest to Specialists" bit. But I think it's obvious on reflection that that's as it should be.
Posted by: Tom | 04/19/2018 at 09:12 AM
Referee: You write, "Also, I was shocked to learn that early career people are sending out 10 papers a year. This shows very bad judgment."
Bad judgment with respect to what? Bad philosophical judgment perhaps - but not, in my experience, bad strategic judgment. For what it is worth, early in my career when I was struggling to publish, I was told by two *very* successful and well-respected people in the field that their secret was (and I quote) "always having ten papers under review." They both gave me the same advice independently. I followed it. I got a job. Then I got tenure.
Look, I would be willing to agree with you that there is an obvious sense in which this is non-ideal and regrettable. We have, for all intents and purposes, entered something analogous to a publishing arms-race. But unfortunately, that's where we are - and one can hardly blame people desperate for jobs for playing the game that exists. It may be the wrong game--and I am all for changing the game if we can--but for now it's the game we have.
Posted by: Marcus Arvan | 04/19/2018 at 10:21 AM
JR says: "The better the journal, the better the quality of the paper." But they also say: "It is very difficult to determine the quality of one's papers/ideas/arguments. Some might think that a paper X is genius, others might think it is obviously wrong and bad philosophy. And some others might even think it is not philosophy at all..." Isn't there some tension here?
More generally, what exactly is the evidence for that first claim?
Also, even if there's truth to it, I, for one, worry that this kind of view about the relation between journal reputation and paper quality is, to a large extent, self-confirming, in that papers in journals with good reputations are more likely to be thought to be good, and (this even more so) papers in journals with poor reputations are more likely to be thought to be bad (when anyone even deigns to read them). (I think something similar is probably true of the relation between PhD program reputation and student "ability.")
Posted by: NK | 04/19/2018 at 10:28 AM
‘Also, I was shocked to learn that early career people are sending out 10 papers a year. This shows very bad judgment. One way to cut the rejection rates of journals in half is if we all sent out 1/2 the number of papers.’
I have to agree with Marcus. It’s the unfortunate game we have to play, given that there are nowhere near enough jobs for the graduating PhDs.
Also, many of us get little or no help from our departments. I was mainly on my own. So, referee reports were the only way of getting expert feedback.
Posted by: Pendaran Roberts | 04/19/2018 at 11:07 AM
Interestingly, a paper was published today in the journal "Ethical Theory and Moral Practice", presenting an argument about the flaws of the current practice of peer review that seems to agree very well with my argument above.
It is open access: https://link.springer.com/article/10.1007%2Fs10677-018-9891-9
The author writes: "Recent years have borne witness to a mainstreaming within scientific publications that is unprecedented in history since the emergence of scientific journals. This is especially true of philosophical journals and even more so of ethics journals." And also: "Reviews, of course, have an important filtering function in the exclusion of poor articles. But they also relentlessly screen out much of what would be more than worth publishing, I’m afraid just many of the most innovative and original contributions. And they let pass a lot of conventional papers that lack substantive value. It is this kind of review that I would like to discuss."
All the best, Valdi
Posted by: Valdi Ingthorsson | 04/19/2018 at 11:36 AM
I think NK makes some important points, and that they might actually point a way toward (at least part of) a solution to the problem(s) the profession faces here (viz. the publishing arms-race, overwhelmed journals, etc.).
It is, I think, no secret that our profession seems to attach great value to two things: (1) publication venue, and (2) quantity (i.e. research output). As Valdi's case illustrates (and many other sources of information seem to me to broadly corroborate), people in our profession are routinely evaluated on these grounds--not only in hiring and tenure and promotion, but also in terms of prestige in the discipline. I cannot count the number of times I've heard remarks to the effect of, "Oh, s/he's published in *Mind*" or "X is a crap philosopher. Look at the journals they've published in", or "X only has two publications, whereas Y has 14!"
Here's the thing, though. First, just about everyone recognizes that highly-ranked journals sometimes publish bad work and low-ranking journals sometimes publish good or great work. Second, just about everyone recognizes that publishing more is not necessarily better than publishing less. Third, there seems to me to be a *better way* to measure the worth of someone's research than either venue or quantity: namely, the kind of *impact* their work makes.
Look at Valdi's Google Scholar page. Valdi may not have published a ton, but his work is plainly making a substantial and increasing impact in his field. Of course, citations alone need not be indications of merit (some bad articles are cited a lot despite being widely recognized *as* poorly argued). But if you combine citations with how one's work is received (i.e. whether others defend one's work, develop it further, etc.), then that would seem to be the best measure of someone's merit as a researcher.
Indeed, it seems to me *far* better than measuring research merit by venue or quantity. A while back, Kieran Healy and Nick Bloom did a study of citations in four top-ranked journals (JPhil, Phil Review, Mind and Nous): http://philosopherscocoon.typepad.com/blog/2015/02/healey-and-blooms-citation-data.html
Of articles published in those journals from 1993-2013, Healy and Bloom found that "almost a fifth of them are never cited at all, and just over half of them are cited five times or fewer."
Under the standard way of "ranking" people as researchers (viz. venue and quantity), someone who publishes three articles in Mind and one in Nous would be taken to be a fairly spectacular researcher (a "star") even if their papers were hardly ever cited or engaged with in any way. Contrast someone with that research profile to Valdi, who has a publication in Metaphysica with 18 citations, one in Dialectica with 10, a book with 9, and so on. Despite having published in "worse" venues, Valdi has clearly made a greater impact in the discipline than our hypothetical star.
Suppose, then, we moved away from evaluating researchers on the basis of venue or quantity of publications and instead moved toward evaluating them on the basis of *positive impact* (citations, level of engagement in the literature, and how well their work is received). That, it seems to me, would incentivize people to (A) send fewer things to journals (focusing on quality rather than quantity), (B) send more original and ambitious work to journals (given that such work seems more likely to stimulate further discussion), and (C) lead people to be hired and evaluated not primarily on the basis of what two reviewers at a journal say, but how a person's work is received by specialists in their area and/or philosophers in the discipline at large.
All of these seem to me to be good things--moving us away from the publishing "arms race" as it exists today, incentivizing more exciting work, and leading people like Valdi (who have made a real impact) more likely to be hired.
Posted by: Marcus Arvan | 04/19/2018 at 12:00 PM
Actually, let me amend that last comment a bit.
It doesn't take much digging in the history of philosophy or science to notice that particularly innovative work is often initially met with a mixed or hostile reception, and conversely, that some work that is received well in the short-term is forgotten long-term. Kant's First Critique received a mixed reception for several years; Darwin's Origin and Einstein's relativity were savaged by some critics; and so on. On the other hand, during their time, Filmer (who is now forgotten) received notoriety for defending the divine right of kings; Thomas Reid was wildly influential as a moral theorist (not so much anymore); etc.
As such, although I think it might help (in many ways) to move toward a standard prioritizing 'impact', there are potential traps to beware of here as well. All things being equal, I think it is probably better for work to be well-received than poorly received. Still, it also seems to me better, at least all things being equal, for work to be controversial than not engaged with.
Posted by: Marcus Arvan | 04/19/2018 at 12:35 PM
Marcus-I worry about looking toward impact. Impact in general, and citations in particular, can depend in large part not only or even primarily on the quality of work but on already being known in the field. Famous people get cited because they are famous. People cite the work of those they know. This would become even worse if citation counts came to matter more. We could even have unstated (or in some cases stated) agreements between philosophers to cite each other every chance they have.
One problem with blind reviewing is that it is not completely blind. But at least it is sometimes blind. Citation is not.
Let's look at this another way. Suppose we have an early career philosopher from a low or unranked school. In the current system, publishing in Mind will help this person be taken seriously as a philosopher (even if the lack of pedigree still hurts). In the system you propose, people could still justify ignoring this philosopher until the paper in Mind gets some number of citations. Will this paper get many citations? Well, all things being equal, it seems to me that the paper would get much more attention if the philosopher comes from a highly ranked school and has many connections. This need not even be nefarious--I suspect people in general cite the work of people they know. I think this sort of switch would give even more power to networking.
I also wonder about your claim that this will inspire people to "send more original and ambitious work to journals (given that such work seems more likely to stimulate further discussion)." This might be right, but I am not so sure. Small papers written to make small moves in discussions with many participants may well be a safer bet to ensure at least some level of engagement. They will not light the world on fire, but at least they have an audience. Truly novel work might not have any built-in audience, and so it may very well be ignored.
Of course, maybe I am wrong--I would be curious to hear what you think of this problem.
Posted by: Peter | 04/19/2018 at 12:48 PM
I would stand by my claim that 3/4 of philosophy papers in top journals are not just narrow and specialized, but boring, and boring even to those specialists. The problem I have with papers that are highly specialized within my own specialty is, oddly, that they are often 3/4 literature review of that very literature, which I already know especially well (so boring), before making a tiny point at the end.
And of course it matters if papers are boring. I am not buying that making some correction to some argument always "adds something". Tight arguments are not all that matters. It also matters that the arguments are on worthwhile topics. If it is not immediately obvious to you that arguments about some topics are more worthwhile than arguments about others - well, then our values are so far apart I am not sure we can come together.
Posted by: Amanda | 04/19/2018 at 01:16 PM
I share Peter's concerns about using impact as a measure of quality.
Also, I'd love it if there were a workable solution to this problem, but I'm inclined to think that, at bottom, the problem is just that we're competing with one another (for jobs, for grants, for prestige, etc.): because this competition requires rankings, and because we can't rank one another directly by quality, we need some kind of proxy for quality; but any proxy is going to create perverse incentives; and so we end up with the kinds of problems at issue here. Changing the structure of the competition, by focusing on impact rather than publication venue and/or number of published papers, just doesn't seem to address the fundamental issue.
It's also pretty clear to me that citation *is* self-reinforcing: once you're cited once, your chances of being cited again increase; and the more you're cited, the more likely you are to be cited more.
Looking to the kind of engagement you're getting makes some difference, but who's really going to check that? Wouldn't you have to read all the work in which the person was cited? Wouldn't that take way too long?
Posted by: NK | 04/19/2018 at 02:13 PM
Peter & NK: Thanks for your comments. I get your worries. They are worth taking seriously. Here are three general thoughts in reply.
(1) No approach to "measuring research" is going to be perfect. Everything can be "gamed" in one way or another. People game the "anonymized" peer-review system (see e.g. referees googling paper titles, people posting unpublished papers publicly, etc.). The question is whether focusing on impact will be better, on the whole, than focusing on venue and quantity.
(2) On that note, my sense is that people already preferentially cite their friends and people in positions of privilege (i.e. famous people) - so I'm not sure that evaluating research on the basis of impact would make things substantially worse (maybe it would - I'm just not sure). However, for several reasons, I think it might make things better.
First, people at small schools may not have the time, resources, or incentives to send stuff to top journals. I can say more about this if you like, but I can say this: the incentives to send stuff to top journals are much heavier for people at R1's (they need it for tenure & promotion; people at smaller schools do not, and given the costs involved with waiting on top journals, people at smaller schools may avoid them. I mostly do, though not entirely). This is one reason why I think the status quo--focusing on venue--heavily tilts things *away* from people in smaller places toward people in R1's.
Second, although people do cite their friends, etc., my sense is that people are surprisingly willing to cite and engage with work by "small names" they genuinely find interesting. Take Valdi. He *hasn't* published in Mind, or Nous, or PPR, or JPhil. He's not a tenure-track person, or at a ranked program. Yet he, despite all this, has made a sizable impact. Valdi's not alone. I could give you a lot of "small names" who have published in "bad journals" whose work people engage with.
Third, I want to emphasize that I am not just suggesting we focus on citations. Citations are one measure of impact, but far from the only one. There's a big difference between just being cited and actually having your work engaged with (i.e. discussed at length). Here again, I could give examples.
Long story short, although no measure of "research quality" is perfect--and sure, people probably would (and probably already do) seek to game citations--I still can't help but wonder whether impact, broadly construed, is a better way to go.
(3) Maybe I'm totally wrong. :) In that case, here's another alternative. Perhaps instead of changing how we evaluate research, people should instead attach more prestige to journals (such as the Journal of the APA) that specifically aim to publish bold, risky, unconventional work. I guess this was sort of Valdi's suggestion, and for what it's worth, I'm all for it.
Posted by: Marcus Arvan | 04/19/2018 at 07:39 PM
I think a lot of good things have been said in this thread. Of course there are differences of opinion, because we all have slightly different experiences of the process, but also because any suggestion of a 'one-size-fits-all' system is inevitably going to disappoint somebody. But of course, in order to have some chance of designing the 'one-size-fits-most' system, we need to openly vent all our crazy ideas along with our frustrations. Indeed, in this thread some are discussing what should be done if nothing changes, and others discuss what should change. Nobody is quite happy with the system, but some think we are complaining about the wrong aspect of it.
At this point I will have to say that I think Marcus has a point in suggesting that impact should be given weight. But I also share NK's worries about it, so it will have to be done in the right way. So, perhaps impact should be a part of the equation, but not by sheer quantity (and I don't think that is what Marcus is suggesting). For instance, I have been a co-editor of a volume of papers that features a rare collection of superstars. That volume has 21 citations (I am actually a bit disappointed by the quantity, but they are all quite 'heavy' citations), but those are not really citations that reflect my abilities as a researcher. I shouldn't take too much credit for them, but obviously I try to as long as I am hunting for jobs. They reflect more my luck in being around the famous for a while. But still, a little look at whether and where somebody has been cited (and also how they are cited; they could be cited for expressing stupid views) can give some idea of whether one's research is appreciated by those doing similar research. So, quality in the assessment of impact is the key. Indeed, I am sure Marcus didn't spend too much time doing that excellent and nuanced analysis of my citations with a quick look at Google Scholar (and if you haven't signed up for Google Scholar, do it today).
I will add one further reflection. In the UK, when they perform their regular assessments of the excellence of the various departments, approximately every 6-8 years (the next one is in 2020 and is called the REF), officially the only criterion is quality. Each member of staff submits up to 4 pieces of research from a period of 6-8 years (early career people submit only 1) to be evaluated by quality (the average number of submissions is less than 3 per person). On the surface it looks like the ideal quality assessment; quantity, citations, and impact are not supposed to be involved. Indeed, for 3* and 4* publications (the highest ratings), the criteria are only about quality: "Quality that is world-leading in terms of originality, significance and rigour". However, when it comes to 2* publications there is the phrase "recognised internationally" in the statement "Quality that is recognised internationally in terms of originality, significance and rigour". A publication in a high-ranking journal is today taken to be an objective criterion for 'recognised internationally'. So no paper published in a high-ranking journal will ever get a lower rating than 2*, no matter how bad it is. In the end, ranking sneaks its way into everything.
Posted by: Valdi Ingthorsson | 04/19/2018 at 07:41 PM
Anecdotal episode in support of Valdi's point.
My best article criticizes the received opinions of all the big shots in my field about one very relevant topic. (My co-author and I have a different solution to the puzzle - perhaps our solution is wrong, but no referee actually brought any argument against our proposal).
We sent the paper out to the best journals in our subfield and it was systematically rejected (even though no referee found any flaw in our proposal; somebody was bold enough to write "I don't think this is true" with no argument to support the claim). It ended up being accepted in a French journal whose editor liked the paper and (we believe) trusted us not to send the paper for review to one particular scholar who we knew was hostile to our work.
I am not marketing myself on social media and I think our article will soon be forgotten. But I still think it's my best thing (I can't speak for my co-author, of course).
I am submitting this anonymously because I am still on the tenure track. Once I become famous (which I won't), I will submit a longer Stanley-like post to the Cocoon with all the details.
In the meantime, I agree with Valdi's main idea. The peer review system kills originality if there are no correctives (like, in our case, an editor who really believed in the paper and steered the review towards a positive outcome).
Sometimes it's not the reviewers' fault: we just have no time to both publish and referee other scholars' work.
The first step towards a solution is to demand better working conditions: we should all be working less.
Posted by: anonymous TT | 04/20/2018 at 04:41 AM
I think that making reviewer names public upon publication, as Frontiers journals do, or at least available to the author at the end of the review process, would be cheaper and more effective than paying referees. I won't get into what I perceive some of the problems of paying referees to be, but I do think there are some obvious reasons to make reviewer names known to authors at the end of the process. Here are just two of them.
First, I have received some downright nasty, egotistical, and (most important to me) not-helpful reviews that I don't think I'd have gotten if reviewers knew they'd be named. Second, I've faced an average review time of about 6 months, including one review where, after 14 months, I was told that the reviewer was not responding to the (top-10) journal editor's e-mails. I think accountability would motivate good behavior here more than money.
Posted by: Stuck, PhD | 04/20/2018 at 06:44 AM
I think that making reviewers' names public would be a bad idea. If we publish the names of those who gave positive reviews along with the published article, then reviewers will become even more hesitant to take a chance on ambitious work. If the work is not well received, the reviewers may be shamed (this is especially true if the ambitious and risky work is on a controversial topic). It would be safer for reviewers to accept safe, boring work. If authors found out the names of those who rejected their work, younger and less well-established reviewers could become afraid of rejecting papers out of fear of backlash. The thought might be that if they reject the work of someone famous, they might face backlash that could hurt their career chances. I suspect this would make title-googling even worse than it is now.
We clearly have a problem with nasty and sloppy refereeing, but I think the solution to this must be found elsewhere. One place to look first is journal editors. If they gave more guidance, required comments suitable for discussion between peers (that is, respectful language), and refused to use referees with a track record of sloppy reviews, the situation might become better. Of course, since editors are not experts in every subfield, they might not always be able to tell when a review is sloppy, but surely they can sometimes tell and use this information to inform future referee requests. Even better would be if area sub-editors were used (as some journals have), and these editors could make determinations on whether reviews were sloppy or not.
I should note that by "sloppy" I do not mean to discuss whether the written comments are helpful to the author, but instead whether the report demonstrates a careful reading of the work. In many cases careful reviews will also be helpful for revisions, but this is not always the case.
Posted by: Peter | 04/20/2018 at 09:34 AM
I have enjoyed this discussion. However, I wonder whether we shouldn’t be aiming to address the more fundamental problem: way too many PhDs for way too few jobs. The current peer review system might work fine if we adjusted the demands on it by decreasing competition for jobs. I will here assume we can’t do much to increase the number of jobs and focus on decreasing PhDs awarded.
Only two simple steps are required:
1. Departments should be required to have accurate placement rate pages, checked by a third party.
2. Departments should not be allowed to admit many more PhDs than they can place in permanent jobs, checked by a third party.
The third party should probably be the APA. They should have an accreditation program, and the punishment for failing to meet 1 and 2 should be a loss of accreditation. Only accredited PhDs should be worth anything on the job market. We should just band together and make it impossible to hire a non-accredited PhD. This is more or less what happens in psychology and other disciplines.
Consequences:
1. Probably a quarter of PhD programs would be shut down.
2. Programs not shut down would be reduced in size.
3. The number of PhDs awarded per year would be cut in half at least.
4. So, competition would plummet, and the publishing war would stop. With the publishing war stopped, the peer review system could function. Referees would be less stressed by their own obligations (due to the dire job market) and so able to properly review articles in a timely fashion.
5. Also, with a decrease in supply, adjunct positions would disappear, fixed-term contracts too, and salaries would go way up. We’d altogether be treated much better, as we should be. This is pretty basic economics!
Is something like what I’m suggesting practical to implement? I don’t know. However, it’s not obviously less practical to do than anything else suggested.
Objections
1. What about trust fund kids who just want to study philosophy? A: They can study it at a non-accredited program.
2. What about departments that rely on slave, umm I mean student, labor? A: They would have to stop taking advantage of students.
3. What you suggest would just concentrate even more power in a handful of elite programs. A: Not necessarily. Some top programs don’t do better at placement than some less prestigious programs. Some top programs have poor placement and currently hide this fact. Of course, probably more lower-ranked programs would be shut down. But there just aren’t enough jobs for all these programs!
Posted by: Pendaran Roberts | 04/20/2018 at 10:37 AM
That's an interesting idea, Pendaran. One of the better ones I've heard. The main snag, I think, is getting the accreditation to mean something. The APA doesn't have universal respect, to put it mildly. And my guess is that if this happened overnight, no one would care whether a PhD program was accredited by the APA.
Posted by: Amanda | 04/20/2018 at 05:27 PM
I think this is all correct – the review system we have in place needs to change. As a proponent of incremental change, my first thought is to identify those journals that *are* performing (for the most part) well and learn from them. What is it about these journals that allows for timely and not-awful reports? How do editors of successful journals approach their work? I know of at least one high-profile journal that was a total mess and completely cleaned up its act in a matter of 18 months due to a changing of the guard (I’m sure there are lots of other journals people can think of that have gone through a similar transformation). Incentivization or curbing the number of PhD students entering the discipline may be part of the solution, but we should also just try to understand how a journal like AJP or PhilSci or PhilStudies is able to do what they do – handle a huge number of submissions in a (for the most part) thoughtful, considerate and timely fashion.
(Once again, this list may be way off. Perhaps readers have routinely had negative interactions with these journals. I've had negative interactions (in the sense that my paper was rejected) many times with these journals, but even in these cases I thought my paper was handled for the most part with a fair amount of care and attention. I’m completely willing to accept that there just are no ‘exemplars’ to learn from. But on the off chance that there are such journals, we should perhaps try to better understand their success.)
Posted by: I want to be fresh | 04/20/2018 at 07:03 PM
I am not sympathetic to Pendaran's proposal (even though I am sympathetic to his concerns). In general, I don't like top-down bureaucratic solutions. That's why I think the most effective reaction would be an awareness among faculty members that we all have to work less. (Also, look at the number of faculty members with kids: they are a small minority in the new generation, because they don't think they'll have time to be parents.)
Alas, people seem to have given up the hope of obtaining better working conditions, but there is no reason why we shouldn't organize ourselves to obtain them.
Posted by: anonymous TT | 04/21/2018 at 04:23 AM
I am also skeptical of bureaucratic top-down solutions. Very skeptical. On the other hand, I am very sure that an "awareness among faculty that we need to work less" is never going to happen. Well, there is this awareness, but it means nothing without some measure of enforcement.
Also, not sure where you are getting this idea that fewer faculty members have kids - that certainly isn't my experience. And after tenure, how much you work is up to you. I am sure all of us know tenured faculty members who work less than a part-time job. I know them, anyway.
Posted by: Amanda | 04/21/2018 at 09:20 AM
The weekend is due to a top-down solution.
'In 1929, the Amalgamated Clothing Workers of America Union was the first union to demand a five-day workweek and receive it. After that, the rest of the United States slowly followed, but it was not until 1940, when a provision of the 1938 Fair Labor Standards Act mandating a maximum 40-hour workweek went into effect, that the two-day weekend was adopted nationwide.'
https://en.wikipedia.org/wiki/Workweek_and_weekend
Posted by: Pendaran Roberts | 04/23/2018 at 05:46 AM
Lots of things are granted by state law as a result of the action of unions etc.
So let's put it differently: it is obvious that some sort of rule or guideline has to be introduced and enforced, but nobody will ever introduce it if there is no pressure to do so from the bottom, i.e. from unions etc.
Unfortunately, unions don't care anymore about these issues, since we are threatened in a number of ways.
And I don't know where you work, Amanda, but where I work (I mean: in the area where there are several philosophy depts.) it is much rarer for junior philosophy faculty members to have kids than for junior faculty from other depts.
Posted by: anonymous TT | 04/25/2018 at 11:57 PM