I am happy to announce that my article, "A Unified Explanation of Quantum Phenomena? The Case for the Peer-to-Peer Simulation Hypothesis as an Interdisciplinary Research Program", is now forthcoming in The Philosophical Forum. Here is the paper's abstract:
In my 2013 article, “A New Theory of Free Will”, I argued that several serious hypotheses in philosophy and modern physics jointly entail that our reality is structurally identical to a peer-to-peer (P2P) networked computer simulation. The present paper outlines how quantum phenomena emerge naturally from the computational structure of a P2P simulation. §1 explains the P2P Hypothesis. §2 then sketches how the structure of any P2P simulation realizes quantum superposition and wave-function collapse (§2.1.), quantum indeterminacy (§2.2.), wave-particle duality (§2.3.), and quantum entanglement (§2.4.). Finally, §3 argues that although this is by no means a philosophical proof that our reality is a P2P simulation, it provides ample reasons to investigate the hypothesis further using the methods of computer science, physics, philosophy, and mathematics.
Kripke's discovery of a posteriori necessity is often hailed as one of the great discoveries of 20th Century Analytic Philosophy. I think it was an important discovery--just not what some seem to have thought it to be. Allow me to explain.
Recently, I have been arguing--along with people like Avner Baz and Mark Balaguer--that it is important not to confuse conceptual analysis with metaphysics. The traditional story of Kripke's discovery, if I have it right (and I may not), is that Kripke made a metaphysical discovery: that he discovered really interesting modal metaphysical facts (e.g. water is necessarily H2O) that we come to grasp through empirical discovery (i.e. water's molecular structure).
I want to suggest that this is not quite right.
Rather, I want to say, first, that Kripke only discovered (1) a couple of ordinary empirical facts which, when combined with (2) a simple a priori law (the law of identity), result in "a posteriori necessities." As such, I want to say that when looked at carefully, "a posteriori necessities" aren't very metaphysically interesting. They are nothing more than a couple of ordinary empirical facts combined with an ordinary a priori law of logic.
Second, I want to suggest that what Kripke found is akin to what Quine found with the analytic/synthetic distinction: namely, that just as there is no clean line between the analytic/synthetic, there is no clean line between the a priori and a posteriori.
Allow me to now explain both points.
Some commenters over at the Leiter thread on Searle's interview with Tim Crane have expressed some puzzlement over Searle's claim that he thinks philosophy of language should stick really close to the "psychological reality" of language usage and processing. What, some have asked, does he mean? Doesn't philosophy of language already do that (insofar as it is based on our judgments of what propositions, what names refer to, etc.)? Well, I don't know what Searle has in mind, but let me hazard a guess.
What, exactly, are "a posteriori necessary truths" supposed to be? Here is an example: "Hesperus is Phosphorus." Here is another: "Water is H2O." Why are these supposed to be necessary truths? Well, 'Hesperus' and 'Phosphorus' both refer to Venus, and it is necessary that Venus=Venus (since identity is necessary). Similarly, 'water' and 'H2O' both refer to the same molecular substance, H2O, and H2O=H2O. Okay, then, why are they supposed to be "a posteriori"? Answer: because we had to learn empirically that 'water' refers to that particular substance. We couldn't deduce it from the meaning of 'water.'
Okay then, so it looks as if Kripke's thought experiments have shown something very interesting. Whereas people had previously thought that necessary truths had to be analytic (e.g. "2+2=4" is necessarily true analytically, in virtue of its concepts), Kripke showed that some necessary truths are not analytic (again, we could not analytically deduce that water was H2O prior to studying the world). But now this looks metaphysically very interesting! We seem to have necessary truths about the world--modal metaphysical facts!
Or do we? Let's think about how a scientist might explain Kripke's discovery. That is to say, if we are looking at the empirical world--at brains, at water, how brains think, etc.--what is the best explanation of Kripke's discovery?
Here, I submit, is the answer.
First, Kripke discovered an ordinary empirical fact about language usage: namely, that we use the terms 'water' and 'H2O' as rigid designators (that is, to refer to the same self-identical stuff) across all possible worlds. Again, this is just an empirical fact. It is what we do. It is how, if Kripke is right, we modally use names.
Second, Kripke drew attention to an ordinary empirical fact about that stuff (the stuff we use 'water' and 'H2O' to refer to): namely, that its physical nature--its molecular structure--can be only known through empirical means (perception, microscopes, chemistry, etc.).
Third, and most obviously, Kripke invoked an ordinary logical/a priori modal fact: that the stuff 'water' and 'H2O' refer to across possible worlds--WATER!--is self-identical, and therefore necessarily self-identical (for Kripke, "water is H2O" is semantically equivalent to H2O=H2O, and A=A a priori entails necessarily A=A).
Putting all this together, we see the following. Kripke's discovery of "a posteriori necessities" was nothing other than a discovery of: (1) an empirical fact about language usage (that we use 'water' and 'H2O' as rigid designators across possible worlds), (2) an empirical fact about the world (that the molecular structure of the stuff those terms refer to can only be known empirically), and (3) an a priori law of logic (the law of identity).
If this is right, then several things follow.
First, although Kripke discovered something philosophically interesting (two empirical facts plus the law of identity), he didn't discover anything metaphysically interesting. Again, the facts here are interesting! It is interesting that we use names as rigid designators. It is also interesting that we have to learn what water is by perception and chemistry. Finally, identity is an interesting logical notion. But none of these are metaphysical issues. They are simply two empirical points combined with a law of logic.
Second, this would seem to support Moti Mizrahi's view about essentialism in his paper, "Essentialism: Metaphysical or Psychological?". On the explanation of "a posteriori necessities" that I have just given, a posteriori necessities don't get their modal properties from the world outside of how we use language (plus, again, the law of self-identity). They get their modal properties from how we use language (and the law of self-identity)--which is to say, metaphysical "essences" aren't metaphysically in things (above and beyond self-identity), but rather constructed by us through rigid-designator-ish language usage.
Finally, on my analysis, "a posteriori necessities" are not fully a posteriori. Their modal properties are composed of two interacting things: (1) an a posteriori fact (that we use names as rigid designators), plus (2) an a priori logical fact (all things are self-identical as a law of logic). What Kripke showed, in other words--like Quine's demolition of the analytic/synthetic distinction--is that there isn't any clean a priori/a posteriori distinction. "A posteriori necessities" aren't purely a posteriori. They have an a priori element (the law of identity). What does this mean? In some sense, it seems to me, it means that there is no clear division between what is "a priori" and what is "a posteriori." Many things that we want to call "a posteriori" truths--e.g. "Water is H2O"--are partly a priori and partly a posteriori, thus in some sense "blurring the line" between which propositions are a priori and which are a posteriori. A better way to put it, perhaps, is that on the analysis I have just given, the proposition <Water is H2O> is necessarily true partly because of an a posteriori feature it has (we use names as rigid designators) and partly because of an a priori feature of it (the law of identity)--which is just to say that it is not an "a posteriori necessity": it is an a priori/a posteriori necessity, having both elements at once.
Anyway, maybe all of this is old hat. I don't follow the philosophy of language literature very closely, but this was stuff that bugged me in graduate school, and it just occurred to me again, so I figured I'd share it.
In his interview with Tim Crane, John Searle suggests that grad students are sometimes "bullied" into working on topics that don't really interest them. Searle then advocates the following:
Well, my advice would be to take questions that genuinely worry you. Take questions that really keep you awake at nights, and work on them with passion…. We bully the graduate students into thinking that they have to accept our conception of what is a legitimate philosophical problem, so very few of them come with their own philosophical problems. They get an inventory of problems that they get from their professors. My bet would be to follow your own passion. That would be my advice. That’s what I did.
Over at Daily Nous, Justin Weinberg writes:
I can’t figure out what to make of this. I don’t think that the main reason “so very few” students come up with their own philosophical problems is that their professors have been bullying them into accepting their conception of what counts as a legitimate philosophical problem. Two other explanations come to mind. One is that their professors have a pretty good idea of what counts as a legitimate philosophical problem and so it is no surprise that some of their students would want to take up these problems, too. And since it seems that there are many extant philosophical questions on which more work could be done, it is not clear that this is a problem. The other explanation is that coming up with new philosophical problems is not easy. So it is not surprising that fewer students do so.
I think there's some real truth to Weinberg's alternative explanations: (1) that professors sometimes have a better idea of what a good philosophical problem is, and (2) coming up with new philosophical problems is super-hard! I myself was deterred from pursuing a dissertation on an idea that excited me, but which my committee members thought a bad idea--and now, in retrospect, I am thankful. Once I finally came up with a good idea--one that satisfied them and me--they gave it the go-ahead!
Be that as it may, I think there is a lot of pressure in the discipline these days to work within established problems rather than create new ones. As many people have noted online recently, journal editors--particularly editors at "top journals"--appear to have a tendency to be very conservative. Consider, after all, this striking statement from no less than the editor of the Journal of Philosophy:
Barry also asked whether I feel that submissions today are safe and a little boring. Well, let me tell you that a very distinguished philosopher...has been contemplating stepping down from the board because, as he puts it, the submissions are getting tedious and merely moving counters around...I was speaking to both Tim Scanlon and Sam Scheffler...and they were lamenting the fact that the recent papers in journals were too much in the business of commenting on each other's work, and not enough of stepping back from this clubby discussion, and presenting the issues on their own terms as matters of importance to consider quite apart from the latest move by someone in one's professional circle of philosophy. It's hard not to feel sometimes sympathy with that reaction too...
Fortunately, I think things may be changing for the better. Not only have I read more and more articles recently that do go out on a limb; there is also the Journal of the American Philosophical Association's editorial policy, which explicitly states that its aim will be to publish papers that "go out on a limb...that start trends rather than merely adding epicycles to going trends." Hopefully, if this trend continues, graduate students will have more incentive to work on projects they are truly passionate about. I will say that I think it makes all the difference in the world. For several years, I lost all my passion for philosophy--as I found myself "forced" to work on problems that I had little interest in simply for the sake of advancing my career. It was only when I said to myself, "To hell with it!", and began to trust myself as a philosopher--in part as a result of learning lessons from my mentors about how to formulate good philosophical questions!--that I truly began to love philosophy again. And love it I do. :)
Richard Brown and Pete Mandik have posted a fun short discussion of Zombies over at SpaceTimeMind entitled, "Zombie Fight!". One of the things that I find most fascinating about the discussion--and which I had never heard of before--is that some people apparently (Richard, as well as William Bechtel) report having no imaginative capacity for visual imagery. When asked, Richard literally says that he has no idea what people are talking about when they say to "visualize" something. He says that he experiences himself as entertaining words and propositions, but that he has no clue "what it is like" to visually imagine something. And apparently Bechtel said similar things. According to Richard, he knows what it is like to imagine auditory experiences--he can hold a harmony in his mind--but he can only understand visual imagination by extrapolation. Fascinating!
I was reading through Ruth Millikan's amazing Dewey Lecture and was struck by her description of how she got into philosophy, particularly her early courses as an undergraduate and graduate student. Millikan describes taking an amazing array of courses: a course by Kuhn on scientific revolutions, a course on Kant's first and second critiques, a course on ordinary language philosophy with Stanley Cavell (who does a fascinating and bewildering blend of analytic philosophy, continental philosophy, and literature), an entire year on Wittgenstein with Wilfrid Sellars, a course on "existence" covering Sartre and Kierkegaard, etc.
Three things struck me about her discussion here.
First, it struck me just how different--and more diverse--her courses were than the typical grad student curriculum today. My experience is that grad school courses today--at least in analytic departments--tend to have, say, a history requirement (take some ancient, some modern, etc.), a logic requirement, 20th Century Analytic Philosophy (Frege, Russell, Kaplan, Quine, etc.), and then a ton of courses on cutting-edge metaphysics, epistemology, etc. It struck me, in other words, just how much narrower grad school offerings seem to be nowadays.
Second, it struck me that I was initially drawn to philosophy as an undergrad precisely because, as an undergrad, I was offered such a diverse array of fascinating courses. My very first philosophy course was a summer-school course at Stanford I took as a high-school junior with Taylor Carman called "Philosophy and Literature." We read some philosophy--Nietzsche, etc.--but for the most part we read literature. We read "The Myth of Sisyphus" and Camus's The Stranger (both of which informed a discussion of the absurd), Voltaire's Candide (which informed a discussion of the problem of evil), Italo Calvino's The Baron in the Trees, and a bunch of other stuff I can't remember. It was awesome. I was totally "bitten" by philosophy. I wrote my first philosophy paper ever on the Problem of Evil, and have never forgotten it or lost interest in the problem. Philosophy seemed so relevant to human existence--to the "big questions", about value and meaning, that we face as human beings.
When I got to Tufts University, I knew I wanted to take philosophy, but I knew nothing about the profession. I asked to be placed into an intro to philosophy course and was placed in an honors intro course of six students with Dan Dennett. What a way to start an undergrad experience! We did Descartes, Hume, Wittgenstein, and some other people I can't remember, and Dan talked about robots and everything else he was interested in. It was awesome. I had five term papers to write, and Dan let us rewrite them all as many times as we wished--giving us pages of comments every time. Again, I was "bitten."
I then took a bewildering array of courses. Logic with Mark Richard--which was the coolest and most-difficult-by-far course I ever took. Groups of students pulled all-nighters several nights in a row together to solve his impossible problems. I once did a hundred step proof that apparently could be done in 40 steps. I was proud of myself nonetheless. I've always liked to make a mess of things. :) Anyway, one day Mark had set up an elaborate toy farm on a desk in the front of the room. We all wondered what in the world it was there for. Then, at the end of the day, he put an incredibly difficult problem on the board for our final homework (there were no tests in his class, only 5 insanely difficult homeworks). Then, he pointed to the board and said, "Okay now, who is...willing to BET THE FARM that they can solve this?" The whole class erupted in laughter. It was completely awesome. He set up the toy farm just for that. :)
Anyway, I then took a course with Stephen White on phenomenology and existentialism, and we studied Husserl, Heidegger, Sartre, and Merleau-Ponty. White showed up 5 minutes late for class every day, and responded to every question by stroking his beard, saying, "Hmm...", pausing for what seemed like an eternity...and then by asking a question in reply. I don't recall him ever giving an answer to anything. It was totally frustrating and totally awesome all at once. He was 100% committed to getting us to think about the problems for ourselves, like a psychoanalyst eliciting the unconscious by just sitting there. In any case, it was fascinating. I was sure that Heidegger's Being and Time was either the most brilliant thing ever written or complete bullshit, or both.
My grad school courses had their allure too. My first grad school course (at Syracuse, before I moved to Arizona) was a team-taught course on vagueness taught by John Hawthorne and the mercurial Jose Benardete, who is to this day the most hysterically funny person I have ever met (there is even a Facebook page dedicated to recounting "Benardete stories"). We studied many-valued logics and, of course, Tim Williamson's book, which at that time had just come out (I recall my initial reaction to the book being, "The negative arguments are great, but I can't believe someone would spend so much time and energy trying to defend such a patently false positive view"; something I still wonder about!). It was a great course. John would just show up and talk off the top of his head for three hours, and Jose was...well, Jose! I then took a Modality course with Ted Sider where we did David Lewis' On the Plurality of Worlds, as well as an epistemology course with Bill Alston where he savaged coherentism and sang philosophy songs he had written to us in his ridiculous voice. It, too, was awesome.
As awesome as some of my grad school courses were, however, I quickly soured on a lot of it. Gone, it seemed, were the "deep questions." The questions I was interested in as a real flesh and blood person--questions about love, forgiveness, evil, literature, etc.--were almost nowhere to be seen. I had the experience that Eric Campbell recounts here:
I think the idea that much contemporary philosophy has lost sight of the questions is exactly right. At least it has lost sight of the questions that tend to draw interesting, passionate and talented people to philosophy. Phil language isn't my specialty, but I was very excited to take a course on it in grad school, only to be bored to death and deeply disappointed by what contemporary phil language amounts to (at least as it was taught to me). I think the kind of mappings Searle talks about are an excellent example of how objectivity, rigor and difficulty squeeze out insight and general interestingness. This seems to be very widely the case as far as I can tell.
That's why I've been writing so much on my concern that contemporary analytic philosophy has been misled by language. I want philosophy to be less about tables and chairs, counterfactuals and indicatives, queer moral facts and modal semantics, and more about the things that made me fall in love with philosophy in the first place. I understand, of course, that my loves aren't everyone's--that some people wake up in the morning itching to work on formal topics. My worry, however, is that formal topics have crowded out the stuff that I (and others, it seems) care about. Recently, whenever I've, say, picked up a copy of Mind, it's been full of equations and formal modeling that I have no interest in. It wasn't always like this. When Ryle was in charge of Mind, its priorities were very different. In any case, I long for a thousand flowers to bloom again--for serious philosophy to become more engaged again with art, literature, science, and flesh-and-blood life: love, forgiveness, hope, and yes, faith (not merely religious faith, by the way, but faith in humanity, etc.).
Anyway, what drew you to philosophy? Does it still draw you? And, does professional philosophy still have it, whatever it is?
For those of you who might not be Rawls scholars, Rawls argued that his principles of domestic justice might be satisfied by two types of regimes: (1) liberal socialism, and (2) property-owning democracy. While the former regime-type has been long discussed, the latter regime-type has been the recipient of increased interest lately, thanks to the work of Samuel Freeman and others.
Anyway, Kevin Vallier has just posted a great new article to philpapers, "A moral and economic critique of the new property-owning democrats: on behalf of a Rawlsian welfare state" (forthcoming in Phil Studies). I think Vallier's argument is pretty devastating. It is also one that invokes a point that I invoked in the comments-section of this post: namely, that moral and political philosophy need to make, and be evaluated partly on the basis of, empirical predictions related to human psychology and behavior. Vallier is up to precisely this, arguing that the Rawlsian case for property-owning democracy is based on (1) pure, unadulterated speculation about human psychology and behavior, that (2) neither Rawls nor his followers have provided any evidence for, and which (3) contradicts a lot of what we do know about human psychology and behavior. As Vallier writes,
I have a number of general concerns about...[difference principle] arguments [for property-owning democracy]. First, they are... based on psychological claims about the bases of self-respect that seem highly speculative...Perhaps in some cases employees, say, would respect themselves more and receive more respect from others if they were partial owners of their workplace. But then again, maybe not. Plenty of people have a healthy sense of self-respect apart from their jobs, say based on the other social roles they play and the relationships they have throughout the course of their lives...Maximizing the participation rights and social bases of self-respect available to the least advantaged is a messy empirical matter, so it’s not at all clear what the different principle requires in this case. Given that these arguments amount to little more than hand-waiving, it is hard to justify giving the state the authority to monitor capital stocks to a degree sufﬁcient to realize and protect individual and collective capital rights...The general worry I have about difference principle arguments for POD is that their success depends on empirical claims that Rawlsians have to my knowledge never defended. (pp. 17-18)
Vallier, I believe, is plainly right. Political philosophers have no business speculating on which principles of justice should govern society--or what a just, well-functioning society's institutions should be like--in the absence of a deep and broadly well-informed understanding of human psychology and behavior (note: such speculation is not innocuous. Rousseauian speculation about human nature arguably contributed to the French Revolution's "Reign of Terror", and Marx's speculations led to communism, mass famine, mass murder, etc.). Doing empirically informed political philosophy takes a ton of work, and indeed, education in psychology, sociology, and politics. It requires far more than abstract philosophical reflection on normative notions of freedom and equality or Rawlsian original positions. It requires situating those normative philosophical arguments within background knowledge from those other areas. A mature political philosophy must, in other words, be deeply interdisciplinary--far more so than political philosophy has often been (though, I should add, significant parts of political philosophy have long been informed by empirical stuff!). Anyway, I congratulate Kevin (a former grad school colleague) on writing such a great article. Although I disagree profoundly with just about everything he has to say on public reason and religion, I think his empirically-informed approach to political philosophy is important and, hopefully, the wave of the future!
Given that I'll be entering the job market for the first time this fall, I have been reading and discussing different approaches to having success. Today I read an article that struck me as a bad approach to achieving such success. As I was taking a break from dissertation writing (constructive procrastination?), I came across a "Negative CV" (see here). A negative CV is apparently a list of your failures related to your field rather than your successes, as is the case with a positive CV. Here is a quote from Aidan Horner that captures what it is:
"Before I start, I should probably note that I’m not doing badly at present. I had a successful PhD, and am in my third year of a five year post-doc. I have several publications, and was even lucky enough to win a prize for my PhD work. It would be dishonest to claim I’m not doing reasonably well, but I certainly know individuals with ‘stronger’ CVs – prestigious fellowships, publications in ‘big’ journals etc. My point in opening up my CV is more to show the extent of rejection that has gone with the successes I have had. This might offer hope to PhD students, suggesting that rejections don’t spell the end of their career, or it could provoke anxiety, wondering how they could put up with so much rejection (or even that they've been at the recieving end of a lot more rejection). Regardless, I hope that the information is useful for some. Whether or not potential future employees will regard it as ‘useful’ is another matter, but one I will have to cope with when the time comes".
I immediately thought a few things. First, a negative CV would be much longer than this if Aidan were a philosopher. Would others agree? With only 14 failures out of 21 or so attempts, it seems that a 33% success rate would be record-breaking in Philosophy (admittedly I am appealing to 5 years' worth of anecdotes with my "record-breaking" claim). Second, I don't see how such a CV could be helpful (overall) for the person making it. I suppose it could show that one was determined and dedicated to the discipline (in a striking out 50 times but getting up to bat again sort of way).
It's safe to say that I won't be making a negative CV any time soon, not one that I share publicly anyway. How do others feel about this? Outside of the hope it gives to others who will no doubt strike out in their careers, is there any value in making one? I'm thinking that it would do more harm than good.
Enjoy the rest of your weekend, folks. If any cocooners are in town for the annual meeting of the Society for Applied Philosophy (June 27-29), do be in touch for some coffee (or beer), as I will be in attendance. I have yet to meet any of you in person and look forward to it.
Given that I'm finishing up a book manuscript to send off for review, I've been reflecting on "Anonymous Book Referee"'s story in the comments section here recounting a several-year saga getting a book published (a rejection after a 9 month review process, 3 years from finishing the book to getting it accepted, 4 years to getting it actually published, etc.). To anyone who has tried publishing journal articles, let alone a book, the basic story Anon tells is a familiar one. It is a saga of sending off a manuscript, waiting 9 months to a year for a decision (rejection), rinse, and repeat. Fortunately for Anon, s/he ultimately found a home for the manuscript (i.e. 3 referees and an editor who liked it!).
The thought of going through a similar experience with my own manuscript is not a pleasant one to entertain--but I recognize the probabilities favor it. Rejection-rates being what they are, it is more likely than not that I will have to shop the book around to more than one press before (hopefully!) finding a home. Who knows? Maybe I'll get lucky--but again, the probabilities clearly suggest otherwise. And, I have to say, at least psychologically, the prospect of waiting several months at a time -- one press at a time -- for an editorial decision (and referee comments) just seems awful. It's bad enough waiting to hear that long for a journal article. But an 8 chapter, 100,000 word book? After putting so much time into something of such magnitude, the prospect of going through Anon Book Referee's "waiting saga" just seems really, really awful! Anyway, maybe that's just the price we pay as academics. Nobody ever said publishing a book is easy. That being said...
Anon Book Referee's story got me thinking about the very strange publishing/review rules we operate under--rules which make the publishing process far more drawn out and difficult than it might otherwise be. As with philosophy journals, or so Anon Book Referee conveyed to me in the comments section here, one is expected to have one's manuscript under review only at one place at a time. Thus, if one submits one's manuscript to a press and it takes 9 months to hear back, you basically have to "sit" on the manuscript all that time--time during which other people are publishing articles, books, etc., that may in certain ways "steal your book's thunder."
Again, this is bad enough in the case of journal articles. Some top journals are well-known for having 1-2 year turnaround times on decisions--something that early career scholars in particular (people like you, I, and typical readers of this blog) cannot always risk wasting time on. But, my feeling is, as bad as it is in the case of journals, it is far worse in the case of books. Here I have what I think is a good book on my hands...and yet I may (if I'm lucky!) have to "sit" on it for several years until it finds a home. Again, this just seems awful.
Does it have to be this way? Why does our discipline have such strange norms (e.g. the norm of only submitting to one journal/book press at a time)? Two initial points I want to make are (1) these kinds of rules literally do not exist in any non-academic area, and (2) there are academic disciplines that do not have these rules that seemingly function far better precisely as a result of not having them. Let me explain.
The idea of "sending a manuscript" or product to one place at a time is--as far as I can tell--utterly unique to (some areas of) academia (i.e. ours). So, for instance, I used to write screenplays and was a semi-professional musician. There was no expectation in either of these areas to, say, send your screenplay or demo to one agent, production company, or music label at a time. Such a norm would seem utterly bizarre to people in these (and really, just about any) lines of work. Quite the contrary, these--indeed, almost all occupations--are driven by competition. Music labels, for instance, face pressure to sign the up-and-coming artist before someone else does. Similarly, production companies seek to lock down a good screenplay before competitors do. This makes the entire review process very efficient. In contrast, if they knew that they could just sit on a screenplay or music demo for, say, 9-12 months before the artist could even send their product to anyone else, how efficient do you think they would be in arriving at decisions?
Now, some might say, such a model works for non-academic industry--but surely it's unrealistic for academic publishing. This, however, is where I get very confused (if someone can set me straight here, I'd be very appreciative!). First, some academic disciplines--law, for instance--do not have the norm of "one journal at a time." Authors can submit manuscripts to multiple journals, and turnaround times are fast! Second, I see no reason why it couldn't work, and work very well.
One obvious worry is that if people can send their stuff to multiple journals at once, it would inundate reviewers with a ton more manuscripts. I actually doubt this. Sending a manuscript to, say, every journal at once (or even a significant number of them) would be a disastrously stupid thing for a person to do. When I send a paper to a journal, I usually recognize that there is some significant, non-zero chance that it sucks or needs major changes...and so part of what I'm doing when I'm sending it out for review is "testing the waters" with reviewers (seeing how reviewers respond to the manuscript as-is). I suspect that just about everyone else does this too. Thus, even if there were no rule against sending manuscripts to multiple places at once, I suspect that most people might send any given manuscript to, like, 2 or 3 journals at a time. This would, obviously, significantly increase the number of manuscripts in circulation, requiring more reviewers/etc., but I see no reason why this alone is a reason not to do away with the "one at a time" rule. For, in my own experience as an author and reviewer, (1) reviewing papers does not take a ton of time, and (2) a significant number of reviewers do not seem to put much effort into things as is! (We have all by now, I think, gotten that one-paragraph-long referee comment advocating rejection...after 9 months! At least we wouldn't have to wait so long for bad reviews if we could submit to more places at once.)
Further, it is a mistake to focus only--or even primarily--on the potential drawbacks of a practice without considering the potential benefits, and whether the benefits might outweigh the drawbacks. For what, I ask, would plausibly be the result of a rule allowing one to send an article/book manuscript to multiple places at once? Well, I think one obvious thing is that it would stand to make the entire process more efficient: publishers would have incentive to require reviewers to get back to them far more quickly, etc.
One final thing, though, is that we need to not only think about the costs and benefits of different policies, but who specifically bears the various costs and benefits within different schemes. As I've explained before (and alluded to above), it seems to me that the status quo (e.g. the rule to submit to only one place at a time) benefits mostly those in a position of privilege (publishing companies, reviewers, etc) to the detriment of those in far more vulnerable positions (early-career scholars who do not have the time to wait for a 9-18 month review-time, etc.).
Anyway, maybe the status quo (submit one place at a time) is somehow optimal. Maybe. However, it is difficult for me--at least offhand--to believe this. Yes, it is likely true: a more permissive rule--of letting authors send stuff to more than one place at once--might have serious costs. But the status quo has serious costs too, costs imposed disproportionately (I believe) on more vulnerable members of the profession--people who are just starting out, or from a marginalized group, etc.
Yes, allowing authors to submit to multiple places would likely have costs--but 12 month turnaround times at journals and some book publishers don't? "Surely", I think to myself, "there has to be a better way than that!" But maybe there isn't. Who knows? But I don't think I'm the only one who has these concerns. I have a number of Facebook friends who have bemoaned their experience with attempting to publish books. Maybe we all need more patience. But maybe academic publishing needs less patience--with reviewers, with rules on submitting to only one place at once, etc. :) I leave it to you to discuss!
A while back, Nina Strohminger gave a legendarily brutal review of Colin McGinn's 2011 book, The Meaning of Disgust. One of the many things Strohminger criticizes McGinn for is his near-complete lack of engagement with the empirical literature on disgust, bypassing the theory of disgust most widely accepted by those who have done empirical work on it -- the view that disgust functions to help us avoid contaminants and disease. (p. 1)
Anyway, as luck would have it, now that Strohminger's review has been published, I came across an article by someone else in a top philosophy journal -- Wendell O'Brien's "Boredom" (Analysis, 2014) -- that I think is guilty of the same kind of error. Allow me to explain.
In "Boredom", O'Brien attempts to derive metaphysical conclusions about boredom from conceptual analysis of "boredom." He writes, "My chief aim here is to try to gain some understanding of what boredom essentially is -- to take a stab at an analysis of the basic concept of boredom..." (p. 236-7; my emphases) Notice what O'Brien is explicitly asserting here. He is asserting that he thinks he can derive boredom's essence from analysis of the concept of "boredom". I think this is plainly fallacious, and that O'Brien, like McGinn, has been misled by language. The concept of "boredom" is one thing. The physical/metaphysical phenomena the concept picks out are another. Allow me to explain.
O'Brien begins by claiming that although "the multidisciplinary literature" on boredom mentions different types of boredom, his analysis will subsume all of the different types under the concept of "boredom." I don't think anyone in the empirical literature would deny this, as different types of boredom are all recognized to be just that. Anyway, let us take a closer look at what O'Brien does in his article. The first thing to notice is that none of the multidisciplinary literature that O'Brien mentions includes any empirical psychological studies or reviews of empirical findings on boredom. O'Brien simply cites one article from a sociology journal (Aho 2007), a New York Times Sunday Book Review (Schuessler 2010), a book on the literary history of boredom (Spacks 1995), a book on the "lively history" of boredom (Toohey 2011), and finally, a philosophical book on boredom (Svendsen 2004).
O'Brien's ignoring the psychological literature on boredom is surprising for many reasons -- not the least of which is the fact that boredom has been very widely studied in empirical psychology, is known to have three quite different types, and is known to be one of the single biggest predictors of depression and a vast array of psychological, physical, educational, and social problems. But we'll come back to this in a moment.
O'Brien's article is devoted to defending the following analysis of the concept of boredom (p. 237):
(1) a mental state of
(2) weariness, and
(3) restlessness, and
(4) lack of interest in something to which one is subjected,
(5) which is unpleasant or undesirable,
(6) in which the weariness and restlessness are causally related to the lack of interest.
Interestingly, despite the fact that O'Brien only purports to analyze the concept of boredom, he infers from his analysis of the concept that boredom itself (the phenomenon(-a) the concept picks out) "has no grand metaphysical significance." This is strange and, I believe, fallacious. Although the concept of boredom may not have any grand metaphysical significance, the phenomena it refers to very well might (for more on what is problematic about inferring metaphysical conclusions from analysis of concepts, see this new paper by Avner Baz). Or again, consider the following analogy, which I take from Mark Balaguer's paper, "The Metaphysical Irrelevance of the Compatibilism Debate (And, More Generally, of Conceptual Analysis)". Suppose two philosophers are debating whether Pluto is a "planet." Suppose they have a hard time coming to an agreement, and they then say, "Well, I guess there is nothing metaphysically significant about planets -- for it doesn't matter how we categorize it!" This is silly. There is nothing semantically significant about what we call Pluto...but Pluto, for all that, has metaphysical significance. It is a massive object in our solar system with all kinds of metaphysically significant properties (gravity, mass, etc.).
By the same token, even if the concept of "boredom" has no intrinsic significance, there is plenty of empirical evidence suggesting that boredom -- the phenomena the concept (vaguely) picks out -- may indeed have great metaphysical and/or moral significance. Boredom, as I mentioned earlier, is known empirically to be one of the most destructive mental states a person can suffer from. It predicts a vast array of negative psychological, social, physical, and educational outcomes -- and can be experienced by those who suffer it as having profound metaphysical significance. As O'Brien himself points out, many writers, artists, and philosophers have experienced it in just this way:
Several literary and philosophical writers have felt that the phenomenon of boredom shows us something important and deep about our human condition in the world. Two of these are Pascal and Schopenhauer. Pascal regards boredom or ennui as a sense of our own helplessness, an infinite void within ourselves, and our dependency on God, the only thing that can fill that void (1958: 14–61). Schopenhauer in one place defines boredom as the sensation of the worthlessness of existence (1973: 54). Such definitions of boredom seem to me to be sensationalistic and farfetched.
I see no reason why analysis of the concept of boredom provides us with any reason to doubt any of the metaphysical claims that people like Pascal and Schopenhauer make about boredom. All O'Brien argues is that none of these metaphysical implications are contained in the concept of boredom. But, again, so what? It is fallacious to argue from the lack of significance of a concept to the lack of significance of the phenomena the concept picks out.
Or so say I. Am I right? Wrong? As always, I'm happy to listen (and, of course, argue!;)
Daily Nous has posted a link to a short piece by Brian Frances, "Why I Think Research in Non-Applied, Non-Interdisciplinary, Non-Historical Philosophy is Worthwhile", in which Frances responds to (broadly) the kinds of worries that Unger, Searle, I and others have raised about analytic philosophy recently: namely, its focus on abstract, conceptual "puzzles." Again, (and I can't stress this enough!) I do not think all--or even most--philosophy is guilty of falling into the errors I have been pressing. That being said, I think the worries are worth worrying about--both philosophically, but also sociologically (insofar as the problems professional philosophers prioritize arguably have effects on who is/is not attracted to or included in the discipline).
Anyway, I read Frances' piece with interest, and imagine that many readers are sympathetic with it. In brief, Frances argues that philosophy's "abstract, non-applied, non-interdisciplinary, non-historical" puzzles are genuine puzzles, and thus, that time spent on them is well spent. Ultimately, Frances does three things: (1) he attempts to motivate abstract problems (e.g. skepticism and material composition) by reference to sets of intuitively plausible but jointly inconsistent statements; (2) he contends that critics of philosophy's focus on such puzzles have failed to make a cogent case; and (3) he claims that philosophy is mostly--and properly--about "PAINTS: problems, arguments, ideas, notions, theories, and solutions."
I remain unpersuaded on all three points.
First, on (1), Frances attempts to motivate the problems of skepticism and material composition by reference to sets of inconsistent statements that "seem intuitive." Following Moti Mizrahi, I do not think philosophy should be based on intuition mongering. When it comes to many philosophical problems--zombies, composition, whatever--some people have the relevant intuitions, others don't. I, for one, don't have any clear intuitions about mereology, and following Chalmers, I don't find myself at all moved by skepticism: I think the world we perceive around us is a world, whatever a world may be (in which case we plainly have knowledge of it, and the skeptical problem in epistemology does not so much as arise). A mature philosophy, in my view, should not be based on intuitions but on data from the world. One reason I think this comes from this article by M.B. Willard and this one by Gian-Carlo Rota (and actually, this one from Willard as well). Philosophy not tethered to the world--philosophy based on intuition alone--suffers from a profound underdetermination problem similar to (but more serious than) the one in philosophy of science. According to the underdetermination problem, there are always, in principle, an infinite number of theories consistent with observation. In science, we have pragmatic grounds for resolving the underdetermination problem (e.g. simpler theories have been more accurate in the past, etc.). But, in the case of abstract philosophical arguments, there appear to be no such grounds. Different people have different intuitions, and a vast array of disparate theories--mereological nihilism, compositionalism, etc.--can always be rendered consistent with "the data" (whatever one's favored intuitions are).
Second, I don't think it's true that critics of philosophy's focus on abstract puzzles have failed to make a cogent case. Quine made a case: that philosophy is based on the supposition that words and concepts have "meanings", when in fact they don't. Wittgenstein made a case: that ordinary everyday concepts are fundamentally vague, possessing many possible uses, and that philosophy that aims at clarifying or settling "the meaning" of these fundamentally vague/multi-use concepts fundamentally misunderstands language. (Important note: Wittgenstein was on board with the idea that philosophy should be as clear as possible in the Tractatus, but he came to believe in his mature philosophy that this was all a mistake--that philosophy must operate within vague language, understanding it as a natural phenomenon (language as use), and leave it as it is.) Unger, apparently, will also be making a case. And others have made the case too. Rota, a mathematician (not a philosopher), argues in his Synthese paper--just as I have argued previously--that analytic philosophy has been based in part on a false analogy with mathematics. Mathematics aims at clarity, Rota says, because its concepts are clear. It does not aim to make unclear things clear. It starts with clear ideas and aims to derive mathematical truths from them. But philosophy aims to make vague concepts clear--which, Rota says (and I agree), is simply to alter them, attempting to impose clarity where there is none (and which cannot be done in any non-arbitrary way, in my view, without tethering concepts to the empirical world).
Finally, however, I really want to focus on Frances' third claim: (3) that philosophy is mostly--and properly--about "PAINTS: problems, arguments, ideas, notions, theories, and solutions." My main worry about this is that Frances--in line with the title of his article--seems to take "problems, arguments, ideas, notions, theories, and solutions" as fundamentally perspective- and context-independent, as though problems, arguments, etc. are just "out there" as Platonic Forms or whatever, waiting to be discovered. But this, I think, is just wrong. What one sees as a problem, or argument, idea, plausible theory, or solution depends in large part on perspective. Indeed, I was really struck by the fact that "insight", "perspective", and "understanding" were not on Frances' list of what philosophy "mostly is." A person who suffers injustice faces a moral (and possibly existential) problem that someone who has never faced injustice does not face: the problem of how to respond to oppression. Such a person--the oppressed--may not see mereology as a problem, or, if they do, may see it as one not worth spending much time on. This is not to say that mereology isn't a problem. What it is to say, however, is that problems are not just "out there" waiting to be discovered by people in philosophy rooms. Many philosophical problems are experienced, and it takes perspective to pose, theorize about, and solve them. The true psychopath, for instance, sees no reason at all to consider human suffering a problem. They are emotionally and cognitively insensitive to it as a philosophical problem. Similarly, when I make a mistake and desire forgiveness--or when I wonder if I should forgive those who wrong me--that strikes me as a problem. Does it strike everyone that way? No! The vengeful person thinks the solution is obvious: forgiveness is a waste of time, and revenge feels good.
What counts as a philosophical problem, worthy idea, theory, or solution depends, in large part, on perspective; on who one is, and how one is situated in the world. We ignore this, I believe, at great (philosophical) peril.
Thomas Nadelhoffer over at Flickers of Freedom (where I will be guest-blogging in August!) drew my attention to Peter Unger's 2002 paper in PPR, "Free Will and Scientificalism." Whatever you might think of Unger's recent interview, this is a cool paper, one that I think sits very well--at a very broad level--with the new theory of reality I defend in, "A New Theory of Free Will."
In essence, Unger suggests that physicalism ("Scientifical Metaphysics") may be incomplete--that the physical world of quantities/relations may well be built out of qualities (something I think is absolutely correct)--and that true, libertarian free will may well be the result of us fundamentally being infinitely nested qualities (i.e. conscious, self-determined freedom "all the way down"), something that I think fits very well with my Peer-to-Peer (P2P) Simulation Hypothesis: the hypothesis that quantum mechanics, the mind-body problem, the problem of free will, etc., are all the result of the world fundamentally being a massive, infinitely nested functional equivalent of a peer-to-peer simulation where consciousness itself is a basic "hardware" component.
Anyway, whatever you might think of Unger and his interview, it's an interesting paper!
I came across this interesting blog post by Alex Dunn, "By Definition, Philosophy is for White Men", which takes up a very different aspect of Peter Unger's now-notorious 3 quarks daily interview: namely, Unger's treating everything that has any clear connection to the empirical world as "not philosophy." Basically, Unger says that Bertrand Russell wasn't really doing philosophy when he was arguing for peace, that Tim Maudlin is doing "adulterated" philosophy in doing empirically-based philosophy of physics, etc. For Unger, philosophy proper (we might call it) is "what analytic philosophers do" (something which he considers empty).
Now, although Unger thinks analytic philosophy is empty, Dunn says, Unger is, whether he recognizes it or not, playing into a perniciously exclusionary conception of what counts "as philosophy." (I would also add, along with others, that Unger displays a problematic and exclusionary obsession with deeming "who's smart", but I digress). Anyway, what's the problem, according to Dunn? The problem, very roughly, is this: for Unger, philosophy is basically inquiry that has nothing to do with the concrete world--the world in which people suffer from war, poverty, injustice, discrimination, etc.--the implication being that it is a discipline by and for white men to talk about meaningless things rather than things that matter to non-whites, non-men, etc.
"But wait a minute", you say, "doesn't philosophy investigate those things? Hasn't there been a whole lot of theorizing about justice, injustice, morality, discrimination, etc.?" Well, of course there has, but let us take a deeper look.
I was at a conference several months ago having a discussion with a very famous (and nice!) senior philosopher who, out of the blue, said the following to me (I paraphrase):
I can't help but be struck by the fact that the areas of analytic philosophy that carry the greatest prestige -- analytic metaphysics, epistemology, philosophical logic, metaethics, etc. -- are almost to a "t" fields that are about as far as one can get from being relevant to everyday life. Even in ethics, the most prestigious area--metaethics--is concerned with entirely abstract questions about "moral facts."
He then sort of went through different sub-fields of philosophy, and suggested that the more relevant to real life (particularly injustice!) the sub-field is, the less prestige the area seems to have. The most prestigious areas, he suggested, were something like the following:
The next most prestigious areas, he implied, were probably:
Next, the least prestigious areas include:
Further, he said, it's really no wonder our field is dominated by white men. Our discipline has basically taken the areas of philosophy that are as far removed as possible from the daily experiences of injustice, exclusion, etc., that non-white/non-males experience and given those areas the greatest prestige, whereas the areas that speak most directly to the interests and experience of non-white/non-males are given far less prestige, and often derided publicly and privately.
Finally, I would add, if we look at areas that deal with justice, etc., the most prestigious figures--Rawls, Nozick, etc.--have tended to be those who focus on ideal theory (or describing perfectly just social-political systems), all but ignoring nonideal theory (the area of social-political philosophy that deals with injustice).
Are all of these trends simply accidents, or are they, more perniciously, reflections of a discipline that favors the interests of a dominant majority? I leave it to you to think about and discuss!
Peter Unger's 3 quarks daily interview about his forthcoming book, "Empty Ideas: A Critique of Analytic Philosophy", has generated an immense amount of interest around the philosophy blogosphere--and for a number of reasons (if you don't know the reasons...read the interview). ;) Anyway, I don't want to get hung up on what Unger says. I want to think more about the general thrust of his forthcoming critique of analytic philosophy.
Now, obviously, I haven't read Unger's book. It's pretty clear from the interview and book description, however, what Unger means to argue. He contends that analytic philosophy has almost entirely been dominated by "concretely empty ideas", where such an idea is one that does not make a difference to how things are with concrete reality (this is his implied definition from the book overview here).
There is, I think, a simpler way to put this: analytic philosophy has, by and large, not tended to make any predictive empirical claims about the world around us. So, for instance, whether we are "nihilists" about physical objects (viz. "there are no tables, only particles-arranged-table-wise") or compositionalists (viz. "there are tables, and they are composed by particles!"), neither view makes empirical predictions about the world (viz. there is no empirical experiment we could run to verify/falsify either view). I think this is right. Even brain scans, I believe, would simply reveal that our concepts of "table" and "composition" are indeterminate between these two conceptual schemes.
But now if this is all Unger is saying--if all he is saying is that analytic philosophy does not make empirical predictions--then, I imagine some philosophers will say (as some have implied here), what's so bad about that? Did we ever think philosophy made empirical predictions to begin with? Isn't that science's domain, not philosophy's? Indeed, some over at the Leiter thread have said, essentially, "Look, I'm happy if philosophy doesn't deal with what Unger calls 'concretely substantial ideas', ones that have concrete, empirical implications. For I still think philosophy that doesn't make concrete contributions to science is valuable!" Further, I've seen many on social media say, "Unger's view presupposes a perversely scientistic, reductionistic theory of reality."
I believe this kind of response to Unger is wrong--and indeed, that it is no good as a defense of analytic philosophy. Allow me to explain why.
Richard Brown and Pete Mandik posted a fun discussion of the relationship between philosophy and science, hilariously entitled "The Unger Games", over at SpaceTimeMind. One of the things Brown and Mandik touch upon is why philosophers aren't taken very seriously by scientists or the broader public--and they do so by analogy with the history of psychology. Prior to the "scientization" of psychology, psychology was a joke. There was Freud speculating on penis-envy, Maslow speculating on hierarchies of needs, Humanists speculating on "self-realization", and so on. It was, we now know, mostly a bunch of largely baseless speculation. Science, gods-bless-it, gave psychology respectability by...you know, actually making determinate predictions about the concrete world around us.
Now, before I go any further, let me be clear about one thing. I do not endorse any kind of simple, reductionist physicalism. As I argued in "A New Theory of Free Will", I think--along with people like Chalmers, etc.--that physical science (as we currently understand it) cannot be the full and correct story of the world. Physics, in brief, deals only with quantities and relations between things (electrons do such-and-such), and there are some things--consciousness, the passage of time, etc.--that, as qualities, cannot be neatly incorporated into a scientific worldview. I not only think that physicalism (as traditionally understood) is probably false; I think it has to be false in order to explain quantum mechanics properly. Indeed, although I only briefly mention this in my paper, I actually think it is a necessary truth that all worlds are dualistic--that to be a world at all is to be comprised of "software" and "hardware", which are two fundamentally different types of things.
But I digress. I just wanted to be clear that I do not subscribe to scientism (viz. "physics is all there is--leave it to the scientists!"). I think philosophy has a real role to play in human inquiry, viz. the parts of reality that science cannot get at.
Anyway, what I agree with Unger on--and what I want to make the case for--is this: just as psychology was "mere speculation" prior to actually making predictions, the same is true of philosophy. As I argued in "Misled by language?", I think that philosophy that does not make any predictions at all is mere concept manipulation--manipulation of fundamentally indeterminate concepts that, by virtue of conceptual indeterminacy/vagueness, cannot in principle provide answers to the conceptual questions analytic philosophers ask. Allow me to explain why.
I believe analytic philosophy tacitly presupposes a flat-footed Williamsonian epistemicism about vagueness that is empirically false. For those of you who don't know Williamson's theory, Williamson thinks there is some unknown fact of the matter about precisely how many hairs make a person bald (or balding), and about precisely how many grains of sand comprise a heap (viz. N grains is not a heap, N+1 is a heap). Now, almost no one who works on vagueness (besides Williamson) endorses this crazy view--and for good reason: it is almost certainly empirically false. There is nothing in the world that could possibly make Williamsonian epistemicism about vagueness true. Our brains don't code a specific number of hairs for "bald", or a specific number of grains for a "heap", or a precise concept of "material constitution", etc.--nor do our behaviors entail sharp boundaries.
So, flat-footed epistemicism about vagueness is false. But, I believe, one of the most central practices of analytic philosophy--conceptual analysis, thought-experiments to "clarify" such-and-such--presupposes that epistemicism is true. People who debate whether tables are "composed" of their particles, or whether there are no tables (just particles arranged "table-wise") are assuming--just like the epistemicist does about baldness (N hairs is bald, N+1 is not)--that there is some answer...and that we just might find it if we look (and argue) carefully, clearly, and rigorously enough. Indeed, just about all conceptual questions in analytic philosophy can be seen to trade on vagueness. "What exactly is baldness? How many hairs are bald/not bald?", is the same sort of question as, "What, exactly, are tables? Are they composed by their particles? Or, are particles just arranged table-wise?", or, "Does free will require actions to be reason-responsive, or not?" These types of "deep" philosophical questions are, I think, precisely what Wittgenstein thought: us being bewitched by vague language. Our language is irreducibly vague. We "want" answers to what exactly a table is, or what exactly composition is, etc., but there is no determinate answer...any more than there is a determinate answer as to how many hairs make a person bald or a heap of sand a heap.
So, I say (or at least worry), one of the central practices of analytic philosophy--conceptual analysis--is predicated on a false conception of vagueness. We are seeking determinate answers where, conceptually, there are none. Hence, interminable philosophical debates--debates that never resolve themselves because there is nothing in our concepts that can determinately resolve the issue in question one way (e.g. tables are constituted by particles) or the other (e.g. there are only particles arranged table-wise).
Another way to put this is as follows: in the absence of determinate predictions, philosophy is little more than an exercise in battling argumentatively over how to interpret vague concepts for which there is no determinately correct interpretation (i.e. no truth at all). But now if this is right, then Unger is right. In the absence of predictions, philosophy is not about any determinate truths at all--for it is only concrete predictions that give determinacy to our concepts, latching them onto determinate phenomena in the world (electrons are determinate, so are protons, so are neurons, etc.; that's why there are determinate answers in science).
Return now to psychology. Prior to its scientific turn--prior to psychologists' insistence on making verifiable predictions--there were a vast array of entrenched "camps": Psychoanalysts, Behaviorists, Humanists, etc., all defending their own views of psychology by way of "intuitions", "impressions", and "arguments." It is really striking how similar the situation was to philosophy today. It is only by tethering itself to the natural world that psychology began to make demonstrable progress.
The same insistence on tethering theory to predictions, in fact, spurred the greatest scientific innovations of all time: Einstein's theories of relativity and quantum mechanics. Prior to Einstein, everyone had assumed--yep, you guessed it, philosophically--that space and time had to be absolute (the background against which all things in reality play out). Einstein essentially said, "You know what? What if we actually take seriously the phenomena, and follow them where they lead, come what may?" When he followed the phenomena, he found that the constancy of the speed of light in every reference-frame entails the relativity of space and time. Etc.
Indeed, philosophical speculations on the basis of concepts alone do not have a very good record. Aristotelians believed everything in the heavens must move in circles, since circles are "perfect"...yet we discovered through empirical inquiry that they don't. Phlogiston theorists speculated that fire and heat must be caused by a substance, "phlogiston"...except we discovered through empirical inquiry that there is no such thing.
I am increasingly coming to believe that philosophy must follow suit. Analytic philosophy deals with concepts...but concepts are indeterminate, and determinate only insofar as they make predictions. "Continental philosophy" deals with...well, consult your favorite definition (I have no horse in this race). I'm not sure we should be doing analytic or continental philosophy anymore--at least not as traditionally conceived. The above reflections suggest that we should be doing something very specific: Natural Philosophy, engaging with scientists, but also with those things (e.g. consciousness, time's passage, normativity) that science cannot get at directly or completely, but which can and do entail empirical predictions. We should be philosopher-scientists, and leave a priori speculation to religious sages.
One final point: in arguing for Natural Philosophy--in insisting that philosophy needs to make verifiable predictions to be fruitful--I am not assuming any form of "verificationism" about meaning, etc. For my claim has not been that conceptual debates in philosophy are meaningless. I am happy to admit (contrary to verificationism) that they are meaningful. My claim, rather, is that without making definite predictions, conceptual debates have no determinate answers. Predictions, and the world those predictions are about, lend determinacy to questions. Without them, there isn't any.
But, who knows?...maybe I'm wrong about all this. It wouldn't be the first time. I'm happy to listen! :)
Routledge publishing let me know that they have over 170 philosophy books available for free viewing during the month of June (click here):
Please allow me to introduce myself, I am Hattie, the Marketing Assistant for Routledge Philosophy books. Here at Routledge, we have recently digitized over 15,000 of our older titles that were previously only available in print, and to celebrate this we have launched a campaign that makes 6,000 of them freely available to view during the month of June.
The ‘Century of Knowledge’ campaign includes titles that span a century of research across the social sciences and humanities, and includes over 170 of our Philosophy books.
I normally don't advertise stuff, but these are free to view, and there are a lot of good titles!
Jakob is a truly interdisciplinary researcher, who works at the intersection of philosophy, neuroscience and psychology. He was educated in Aarhus and St Andrews and holds a PhD from the Australian National University as well as a Dr. Phil from Aarhus University. His research deals with the traditional mind-body debate as well as with more interdisciplinary topics in philosophy of cognitive neuroscience and philosophical psychopathology. Far from doing his research from the comfort of his armchair alone, Jakob is involved in a number of experimental, interdisciplinary research projects with neuroscientists and psychiatrists and has even built up a laboratory where he and his team conduct experiments using neuroscience and psychology methods to address philosophical issues, and vice versa.
In his recent book “The predictive mind”, Jakob discusses the theory that the brain is essentially a hypothesis-testing mechanism, that is, that it is constantly engaged in attempts to minimise the error of its predictions about the sensory input it receives from the world. He applies this principle to a range of phenomena, including consciousness, attention, emotion, mental illness and introspection and argues that it can account for the multifaceted character of our conscious experience and provide an account of its relation to attention and action.
Jakob has also published numerous articles on topics including consciousness and attention, delusions, bodily self-awareness, mind-brain identity, and social cognition; and he co-edited a volume on reduction, explanation and causation and edited a special issue of Synthese on functional integration and the mind.
I do hope everyone checks it out. Jakob is doing precisely the kind of empirically relevant theorizing that (I believe) philosophy could use more of!
Hilary Putnam has posted a really interesting analysis of what he thinks is wrong with some "ubiquitous" interpretations of Quine's "Two Dogmas of Empiricism". According to Putnam, the standard story is that Quine's paper defends (1) meaning holism, and (2) confirmation holism. According to Putnam, this is to totally misunderstand what Quine is doing--and the cause of the error is trying to fit Quine into traditional conceptions of meaning and epistemology (i.e. there must be "a meaning" for any given linguistic term). According to Putnam, this is precisely what Quine means to deny. He means to deny that there is any determinate meaning to expressions in natural language. As Putnam writes:
And similarly with the notion of “meaning”: one of the main claims of “Two Dogmas” (and of Word and Object and subsequent publications) is that there are no acceptable [to Quine] identity-conditions for “meanings”. Yes, there are “translation manuals” (Word and Object), and the purpose of Word and Object is to show how communication (speaking with members of one’s community as well as translation of alien languages) is possible without positing such entities as “meanings”, indeed, without going beyond Fred Skinner’s behaviorist account of “verbal behavior”. But there are no “meanings”, neither of single utterances nor of whole theories. In short, Quine was already practicing “naturalized epistemology” (and language theory) long before he wrote “Epistemology Naturalized”.
This is exactly what I was going on about in my post, "Misled by language?". Analytic philosophy has been driven by conceptual analysis...which presupposes that concepts have determinate meaning to analyze. But they don't, and Quine knew it, just as Wittgenstein did. We use words and concepts in a myriad of ways, none of which are uniquely "correct." And, Quine says--and I agree--it is the job of empirical science to explain to us what our messy concepts are, how they are indeterminate, etc. The job of the philosopher is, therefore, not to muck around with conceptual analysis, settling what "free will" is, or "moral responsibility", etc.--for these conceptual questions can, in principle, never be settled (we simply have a wide array of different conceptions of "free will", "moral responsibility", etc.). The task of the philosopher and scientist is not to seek artificial levels of clarity that cannot be achieved (witness the kinds of interminable debates that dominate analytic philosophy, with separate camps defending their own favored concept). The task of the philosopher and scientist is rather to cooperate, within the limits of natural language, to make clear those things that can be made clear (are there electrons? Do brains obey the laws of physics?) and to set those things that cannot be made clear aside.
This, in my view, requires a vast rethinking of the methods of philosophy. It requires doing philosophy in ways that work with science to make verifiable predictions--predictions that can then serve to settle philosophical and empirical debates, as opposed to conceptual analysis, which tends to result in little more than ongoing debates that never resolve themselves (and really, how could they resolve themselves if they make no empirical predictions?). It requires, in other words, a return to natural philosophy. Now, of course, some might wonder, if philosophy must make predictions, isn't it just science? To which I answer: no. Philosophy can, and should, go hand-in-hand with science, as philosophy can direct science, and science can direct philosophy (see e.g. here and here). Or so say I (again). :)
Finally, however, just to clarify, some kinds of conceptual analysis may well be useful in natural philosophy. We might think of philosophy along the lines of "cartography", teasing out different conceptions of "free will", "moral responsibility", etc., and leaving it to empirical science to determine which are the most empirically useful, and fruitful, conceptions to work with...just as Josh Shepherd and James Justus contend!
While we're on the subject, there's an interview with Peter Unger on his forthcoming book, Empty Ideas: A Critique of Analytic Philosophy over at 3 quarks daily. I'm not as pessimistic about philosophy as Unger is. I think we can do valuable metaphysics, epistemology, moral philosophy...as long as we take care not to fall into fruitless conceptual debates over fundamentally indeterminate concepts (something that, alas, I think happens too often). However, I also agree with Unger that in order to avoid doing this, philosophy needs to be more empirically informed--indeed, as much as possible. For again, in my view, it is largely empirical inquiry that gives our concepts determinacy, latching onto real (or what Unger calls "concrete") features of the world. Anyway, Unger's interview is a hoot.
In related news, if you haven't read any of her work, M.B. Willard's following two articles in Phil Studies are a hoot, too:
(2014). "Against Simplicity".
They're an absolute blast to read, and Willard has--how shall I put it--a real gift for turning a phrase.
I've been thinking more and more lately about a worry about analytic philosophy that traces back at least to Wittgenstein, and which is enjoying a resurgence (see e.g. Millikan's Dewey Lecture, Avner Baz' recent paper which I commented on here, and Balaguer's paper on compatibilism and conceptual analysis, which I commented on here). The worry is simply this: analytic philosophy is, by and large, predicated on a systematic misunderstanding and misuse of language.
Analytic philosophy, broadly speaking, is dominated by conceptual analysis. I do not mean to say that this is all analytic philosophy is (I take myself to be doing analytic philosophy here, for instance, though I am not analyzing concepts). The point is simply that, in large part, analytic philosophy has been the practice of philosophers aiming to rigorously tease out -- through thought-experiments, definitions, etc. -- our concepts of free will, justice, morality, etc. But this is not all. In engaging in conceptual analysis, analytic philosophers take themselves to be doing a second thing: namely, getting at the referents of the terms, i.e. what free will is, what justice is, etc.
I increasingly think -- and so do Millikan, Baz, and Balaguer -- that this approach to philosophy is doubly wrong. First, it is based on a misunderstanding of language. I think Wittgenstein and Millikan were both right to suggest that our words (and concepts) have no determinate meaning. Rather, we use words and concepts in fundamentally, irreducibly messy ways -- ways that fluctuate from moment to moment, and from speaker/thinker to speaker/thinker. A simpler way to put this is that our concepts -- of "free will", "justice", etc. -- are all, in a certain way, defective. There is no determinate meaning to the terms "free will", etc., and thus philosophical investigation into what "free will" is will be likely to lead, well, almost everywhere. At times, we use "free will" to refer (vaguely) to "reason-responsiveness", or to "actual choices", or whatever -- but there is no fact of the matter which of these is really free will. Similarly, as Balaguer points out in another paper, there is no fact of the matter whether Millianism, or Fregeanism, or whatever about the meaning of proper names is right. All of these positions are right -- which is just to say none of them are uniquely right. We can, and do, use proper names in a myriad of ways. The idea that there is some fact of the matter about what "free will" picks out, or what names mean, etc., fundamentally misunderstands natural language.
And there is an even deeper problem: all of it is hollow semantics anyway. Allow me to explain. In his paper on compatibilism and conceptual analysis, Balaguer gives the following example. Two psychologists, or linguists, or whatever are trying to figure out what a "planet" is. They then debate to no end whether Pluto is a planet. They engage in philosophical arguments, thought-experiments, etc. They debate the philosophical implications of both sides of the debate (what follows if Pluto is a planet? What follows if it is not?). Here, Balaguer says, is something obvious: they are not doing astronomy. Indeed, they are not really doing anything other than semantics. And notice: there may not be a fact of the matter of what "planet" refers to, and it does not even matter. What matters is not what the concept refers to (what is a planet?), but rather the stuff in the world beyond the concepts (i.e. how does that thing -- Pluto -- behave? what is its composition? etc.).
Turn now to free will. I do not think there is an answer as to what free will is. I think it is entirely indeterminate what the term "free will" picks out, and that any decision we make for how to settle the referent of the term will be just that: a decision (much as we may decide to call Pluto a planet or not). Not only that: I do not think any of this matters one bit. What ultimately matters are the phenomena. Are our actions "reason-responsive"? Are our brains governed by physical laws? Or, are they not? These are the questions that matter, and they are the only questions that promise determinate answers. All other questions -- about what "free will" is -- are just language games: games that we should call on account of fog. There is simply no way through the fog. Our language is fundamentally, irreducibly foggy, and it is not language that matters anyway.
You have only three options: One, you listen to this episode of your own free will. Two, you listen to this episode as a matter of pure chance, with neither cause nor reason. Three, you were predetermined since the big bang to listen to this episode. One way or another, you're going to hear philosopher Gregg Caruso join Pete Mandik as they gang up on Richard Brown, who intermittently operates under the illusion that he has libertarian free will.
Needless to say, I agree with almost everything Richard has to say in defense of libertarianism -- but anyway, it's a fun episode!
I've been deep in the weeds/up to my eyeballs revising my book manuscript the past several weeks. Although things are going pretty well -- with the usual ups and downs, frustrations and solutions, major and minor revisions, etc. -- I've been wondering about how best to proceed in terms of (hopefully) getting the manuscript published. And so I was hoping to solicit tips/advice from you, the Cocoon's readers, especially those of you who have published books, about how to go about things.
The questions I have fall into broadly two related categories:
Given that I am asking these questions from a particular practical position -- namely, my position! :) -- I suppose it may help to convey the position I'm in. Briefly: I drafted a very early version of about 1/2 of a book manuscript last summer. I was then approached by an academic press last winter, put together a complete book proposal, met with the acquisitions editor to discuss the proposal, and finally, was invited to submit a full manuscript for review. Initially, I agreed to an overly optimistic deadline for getting the manuscript in, and then requested (and was granted) an extension to get the manuscript in by the end of this June. In early Spring, I drafted the rest of the manuscript. Then, during the past two months or so, I have been revising the manuscript from beginning to end, with major revisions along the way. Over the next few weeks, my plan is to work myself to the bone to whip the manuscript into shape so that I can submit it by my end-of-June deadline. Although I could of course use a lot more time to get feedback and revise -- manuscripts are never perfect (though I am feeling rather rushed!) -- I feel good about the manuscript as a whole, and about the major arguments, and think the ms. will be worth reviewers' time.
My questions then are these: should I approach a few other presses with my book proposal to see if they are interested, too? Should I approach lots of presses en masse? Or, should I just submit the ms. to this one press (the one that requested it) and otherwise give the manuscript time to "sit", and perhaps send out chapters for feedback while the ms. is under review at the first press? I realize these are broad questions, and good answers probably depend a lot on context (i.e. on how good the manuscript is in its current form), but I could really use some impressions from those in the know! Any constructive tips/advice would be much appreciated! :)
I recently came across a study purporting to provide evidence against the hypothesis that our preferences exist prior to action (viz. revealed preference theory) and in favor of the alternative hypothesis that we construct our preferences in the very process of deliberating to decisions. Does anyone know of this study or of any like it? I could really use a reference or two. Thanks!
I can't wait to read this (thanks to David Killoren for the pointer!). I've long been a fan of Unger's, and I will be curious to see how the book is received. For those too lazy to click links ;), here is the book's self-summary:
Peter Unger's provocative new book poses a serious challenge to contemporary analytic philosophy, arguing that to its detriment it focuses the predominance of its energy on "empty ideas."
In the mid-twentieth century, philosophers generally agreed that, by contrast with science, philosophy should offer no substantial thoughts about the general nature of concrete reality. Leading philosophers were concerned with little more than the semantics of ordinary words. For example: Our word "perceives" differs from our word "believes" in that the first word is used more strictly than the second. While someone may be correct in saying "I believe there's a table before me" whether or not there is a table before her, she will be correct in saying "I perceive there's a table before me" only if there is a table there. Though just a parochial idea, whether or not it is correct does make a difference to how things are with concrete reality. In Unger's terms, it is a concretely substantial idea. Alongside each such parochial substantial idea, there is an analytic or conceptual thought, as with the thought that someone may believe there is a table before her whether or not there is one, but she will perceive there is a table before her only if there is a table there. Empty of import as to how things are with concrete reality, those thoughts are what Unger calls concretely empty ideas.
It is widely assumed that, since about 1970, things have changed thanks to the advent of such thoughts as the content externalism championed by Hilary Putnam and Donald Davidson, various essentialist thoughts offered by Saul Kripke, and so on. Against that assumption, Unger argues that, with hardly any exceptions aside from David Lewis's theory of a plurality of concrete worlds, all of these recent offerings are concretely empty ideas. Except when offering parochial ideas, Peter Unger maintains that mainstream philosophy still offers hardly anything beyond concretely empty ideas.
Every semester, after I receive my student evaluations, I update my teaching portfolio. The thing is, it is now getting really long (it's 60 pages now). Now, I suppose there's nothing wrong with having a long, comprehensive portfolio on one's website -- but what about job applications? What is an optimal length? And how long is too long?
Well, as they say, here goes nothing! I've uploaded a draft of the introduction to my book manuscript, Rightness as Fairness, to dropbox for those who have emailed me indicating an interest in reading and commenting on it. Truth be told, I've struggled with the introduction a lot -- with hitting the right tone, giving enough detail but not too much, etc. Writing a good book introduction, I've found, is really tough. On the one hand, the aim of a good introduction is to excite readers/reviewers about the project -- something that, all things being equal, calls for brevity. At the same time, I want to give readers a fairly detailed picture of the book's arguments, particularly its arguments early on (as they are the ones that really motivate the whole book). Have I gone into too much detail? Am I raising issues that might sound "alarm bells" with reviewers that I might not want to before actually giving the arguments in the chapters themselves? Also, have I hit the right tone? It's an ambitious project, and I've tried not to overstate what I'm doing -- but have I gotten it right?
In any case, I could really use some feedback! Thanks again to those who have emailed me with interest, and thanks in advance to anyone and everyone who comments. If there is anyone else who would still like to get in on it, feel free to email me at email@example.com for a link to the dropbox folder. :)
I received my first invitation to be a referee for a book manuscript today -- an invitation I provisionally intend to accept. This is actually a fortuitous development, given that I am finishing up a book manuscript of my own to send off for review. For it occurred to me, after receiving the invitation, that I don't really know what book refereeing standards/norms are! I've never reviewed a book manuscript, after all, nor have I been the recipient of such a review. And, while I have a good idea of what reviewing standards/norms are for journals, I don't have any real sense whether they are the same/similar for book manuscripts. Quite the contrary, I have at least some reason to believe that reviewing standards may be different for books (in comment 10 on this thread, for instance, David Chalmers suggests that acceptance rates are much higher for books than for articles -- something on the order of 50% acceptance, compared to under 10% for good journals). Accordingly, getting a better idea of referee norms/standards for book manuscripts would really help me out, both in terms of doing my job properly once I accept the assignment and in terms of my own book (viz. evaluating when it is "good enough" to send off).
So, then, does anyone with experience in these matters have any relevant advice? Here are some very broad questions I have:
Obviously, these are really broad questions -- but any and all informed answers to them (or any other relevant questions) would help me, and I expect other members of the Cocoon community, immeasurably!
As I mentioned a few days ago, I have an end-of-this-month deadline to revise and send off a book manuscript to a publisher, and could really use some constructive feedback (I had some people read some very early chapter drafts last year, but haven't had outside readers recently). Because the Cocoon is, among other things, a place to share, discuss, and improve each other's work, I would like to try sharing chapter drafts with any interested members of the community for constructive feedback. Should the book ever see the light of day, I will by all means thank everyone who helps out in its Acknowledgements section!
Anyway, because it is still a draft -- it is rough in places, and I am not entirely prepared to share it with the world at large -- my plan is to post chapter drafts in my personal dropbox.com account every several days for the rest of the month, along with brief posts on each chapter here describing elements of the chapter that I feel I could use some help on (though of course commenters should feel free to provide feedback on other stuff too!). If you are interested in taking part, please just email me at firstname.lastname@example.org, and provided you don't seem sketchy ;), I'll give you the login info! There are no formal or informal requirements to take part besides these: (A) please do not share the manuscript or any of its ideas with anyone, and (B) if you do provide feedback, please try to be helpful. There are, as I'm sure we all know, problems with every philosophy paper and book. My aim is to write the best darn book I can, warts and all -- so, if you take part and find what you believe to be problems, all I ask is that you try in a positive way to help me solve them (as opposed to providing "definitive refutations" of my arguments :).
Okay, then, now that I've asked for your help, I suppose I should give potentially interested parties some idea of what the book is about. At least offhand, I think the title and table of contents should suffice. The book is tentatively entitled Rightness as Fairness: A New Moral and Political Theory. Its table of contents is tentatively as follows:
Chapter 1. Moral Philosophy: Extraordinary Claims, Unextraordinary Evidence
Chapter 2. A Simple, Intuitive Normative Argument for Reformulating the Categorical Imperative
Chapter 3. A Simple, Intuitive Motivational Argument for Obeying the Reformulated Categorical Imperative
Chapter 4. From the Reformulated Categorical Imperative to a Compelling Moral Psychology
Chapter 5. From the Reformulated Categorical Imperative to the Unity of Several Other Reformulations
Chapter 6. From the Unity of the Reformulations to a Moral Original Position
Chapter 7. From the Moral Original Position to Four Principles of Rightness as Fairness
Chapter 8. From Rightness as Fairness to a New Political Theory: Libertarian Egalitarianism as an Ideal, and a Sketch of a Nonideal Theory of Justice
Fair warning: some of the chapters are very long (though I think I've written them in an accessible way). Many thanks in advance to anyone and everyone who emails me and takes part. I will be incredibly thankful for whatever help you see fit to provide. Provided enough people email me to take part, I will begin posting chapter drafts in the next couple of days!
Given that the job market starts a few months from now, and many followers of this blog will be on it looking for jobs, I thought it might be a good time to initiate some early job-market discussions, beginning with the question of what you should be doing right now, this summer, if you are going on the job market.
On this issue, I'll mostly defer to Mark Alfano's post from a couple of years ago. Here is what Mark wrote:
Obviously, this is a lot to do (which is why I am mentioning it now!). Additionally, the last part, "amass a small fortune", seems to be becoming less important, thanks to (mostly) free online applications, as well as Skype interviewing (APA interviewing seems to be dying a long but welcome death!). Finally, of course, some of these things are a bit different for people who have been on the market a few times. One important thing I would note -- especially because I have heard many people say it is important -- is for people who have been out a few years to get good reference letters from people who were not on your dissertation committee or from your grad program. "And how do you do that?", you ask. Well, it's a bit tricky. The easiest way to do it is just to get to know people and, if they seem to like your work, ask them! And if you don't know people? One thing I've heard some people say is that it can't hurt to send out "feelers" to people whose work you admire -- and perhaps have met personally, if only briefly -- to see if they might mind reading some of your work (then, if all goes well, you might consider asking for a rec!).
In any case, does anyone have any questions or comments -- either for me or for the community at large -- on any of this stuff? Now's a great time to ask!
Richard Brown and Pete Mandik have posted SpaceTimeMind's newest episode, "The Extended Mind (with Lara Beaty)." Here's the capsule summary:
Vygotskian developmental psychologist Lara Beaty joins philosopher-scientists Richard Brown and Pete Mandik to tackle questions such as: Is the mind bigger than the brain? Does conceptual thought and even consciousness require the use of language or other sorts of social interaction? Which is morally preferable: making animals smarter or making humans stupider? Would it be totally cool to eat somebody who volunteered for it?
The following papers were either published or uploaded to philpapers by Cocoon contributors during the month of May:
Trevor Hedberg (forthcoming). "Epistemic Supererogation and Its Implications", Synthese.
Moti Mizrahi (forthcoming). "Essentialism: Metaphysical or Psychological?", Croatian Journal of Philosophy.
Congrats! Anyone I missed? Let me know and I'll post them below!
I've been thinking a little bit lately about learning how to publish. I distinctly remember, prior to publishing my first piece, thinking, "I'll never publish." It seemed like such a daunting task. To that point, I had received plenty of rejections and only a couple of revise-and-resubmits, neither of which I revised successfully. Then, after I finally got my first publication, I remember worrying that I'd never publish again. Several years later, however, publishing seems so much easier. What changed?
If you're having trouble publishing, I have to recommend Thom Brooks', "Publishing Advice for Graduate Students." Although, obviously, it's targeted at grad students, I found it to be a great resource -- one that really provides a lot of unexpected insights into what it takes to publish things. In what follows, I'm going to briefly comment on what I agree and disagree with Brooks on (mostly the former!), and then say a few additional things about how to frame papers.
Brooks begins his piece by advocating book reviews as an introduction to publishing. This is really one of the only things I disagree with Brooks on. Book reviews are a lot of trouble to write (you have to read an entire book!), and what it takes to publish them is really nothing like what it takes to publish an article (you don't have to convince reviewers that you have something valuable to say, etc.). If you want to learn how to publish, you need to learn how to write work that will pass peer-review, and book reviews don't do that.
What, then, do I think is a better way to proceed? After discussing book reviews, Brooks talks about "replies" -- 2,000-word or so responses to articles/arguments published elsewhere. I entirely agree with Brooks on the value of replies. My first two publications were replies, and they were a valuable experience in a number of ways. First, replies don't take much time to write. They don't take months of hard work to put together. All you have to do is find a problem with an argument someone has published, write up your counter-argument, and send it off. Second, and this is just my impression (but also Brooks' as well), peer-review standards are probably not quite as high with replies as with full-length articles (either that, or it's just the case that good replies are easier to write!). Third, replies are a great way to gain confidence -- chances are that if you write some, you'll land one as a publication sooner or later. Finally, they are a great opportunity to get a feel for what reviewers and editors are looking for -- the thing that, in my experience, is the most important thing to get in order to publish effectively (more on this momentarily). Basically, the only down-side of replies is that journals typically only accept replies to their own articles. Although a few journals accept replies to articles in other journals, most do not -- so, if you send your reply to the journal that published the original piece and it doesn't get accepted, you may be out of luck. Still, I don't think this should deter you -- for again, replies don't take much time, and if you churn a bunch out, chances are something will land!
One of the most striking things about the rest of Brooks' piece is how much he focuses on communicating with your audience. Although it goes without saying that your job -- if you want to publish -- is to convince reviewers to recommend accepting your work, in my experience it is easy to underestimate just how central getting a feel for reviewers is if you want to publish successfully. Prior to publishing a bunch of stuff, I thought (as I expect many people who have not published do) that your primary job is to write a good argument. Well, yes and no. Although having a good argument is important, it is neither necessary nor sufficient for publishing in a good journal. Yes, you read that right: neither necessary nor sufficient. Don't believe me? Read some journals. Of this I have little doubt: you will come across some articles with bad arguments. Which raises the question: how did they get there? The answer, of course, is simple: what all of the articles you read in journals have in common -- the good ones and the bad -- is that their authors were able to convince reviewers to accept them. Now, of course, you don't want to publish bad work. But this still raises the question: what does it take to convince a reviewer to accept your work?
Here are some general impressions I've gotten over the years. Although they are just my impressions, I also think they fit well with what empirical psychology teaches us. My most general feeling is that you need to get reviewers on your side from the outset. Empirical psychology strongly indicates that once we form a first impression -- of a person, a piece of work, etc. -- we tend to then search for and focus on evidence that confirms that first impression. It is, as they say, hard to change minds after minds are made up. So, then, how do you make the right first impression with reviewers? Here's my feeling: you need to frame the paper in such a way that (A) it's clear that you are doing something important, and (B) your paper stands to inform further research. You need to excite the reviewer. You want their first thoughts to be, "Yes, this is important work!", and "Yes, this work will almost certainly generate discussion and further work!" Although this might sound obvious, in my experience you would be surprised how rarely authors prioritize it (indeed, before I got a feel for it, I didn't prioritize it myself!). All too often, papers go as follows: "Philosopher X has defended argument Y. This paper will show that argument Y has a problem, and then show that Z is the case." This may sound great to you, the paper's author, but notice what you haven't done: you haven't explained to a reviewer why they should care.
You can't assume that your reviewer cares about Philosopher X or argument Y, even if X and Y are well-known. Indeed, when it comes to some philosophers or topics, reviewers may well be hostile. They may think to themselves, "I despise Philosopher X's argument. I think it wasn't worth publishing to begin with", or even, "Y is a common argument in the literature, but I think it is a waste of time." Now, you might be thinking to yourself, "That really stinks. I shouldn't have to convince reviewers of that" -- but too bad, you do. There are some philosophers who work on free will, for instance, who hate compatibilism. If you're writing on compatibilism and one of these people is a reviewer, you have your work cut out for you. You have to convince them that, however much they might hate compatibilism, your paper still has enough value to be worth publishing.
This, then -- besides writing a good argument -- has to be your first priority: you need to convince your reviewers to give a damn! How do you do it? This part is a bit tougher, but, I propose, if you at least have it as a priority, you have taken an important first step. You'll write a more urgent introduction that doesn't just say, "This paper shows that argument Y is messed up"; you'll write an introduction that says, "This paper shows that argument Y is messed up and here is why you, Ms./Mr. Reviewer, should care." One good way to do this, in my experience, is to draw attention to how the paper stands to inform future research. In science papers, this is standard practice: you not only have to show that you made a new discovery, but also show how the discovery can be expected to open up fruitful avenues of new research. In my experience, if you want to get reviewers on your side in a philosophy paper, the same thing is true. Reviewers are likely to be far more sympathetic with what you are doing if you give them a distinct picture of why your thesis is likely to generate a lot of future discussion and interesting work, either by yourself or by others.
In short, publishing is about a lot more than having a good argument. You need to frame your paper in a way that is likely to get your reviewer excited on page 1 and to maintain that sense of excitement until the end -- and you do this not simply by saying what your paper will do, but by saying why what your paper will do is important and worth accepting.