

Comments


Pendaran Roberts

"...that since there is an infinite number of possible ways that moral behavior can benefit a person, and an infinite number of ways it can backfire, the total expected utility of moral behavior in problem-cases is zero as well (as illustrated in Table 3.2)?..." (pp. 100-1)

Are you saying infinity minus infinity is zero?

Why zero and not 1 or 2 or 100 or...

infinity - infinity = 0
add 10000 to both sides
10000 + infinity - infinity = 10000
10000 + infinity = infinity
infinity - infinity = 10000
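Put formally, the reductio runs as follows (a sketch in extended-real notation; any finite constant works in place of 10000):

    % Assume infinity - infinity = 0 and derive a contradiction.
    \[
      \infty - \infty = 0
      \;\Rightarrow\; 10000 + (\infty - \infty) = 10000
      \;\Rightarrow\; (10000 + \infty) - \infty = 10000
      \;\Rightarrow\; \infty - \infty = 10000,
    \]
    % since 10000 + infinity = infinity. The same steps yield any finite
    % value k, so infinity - infinity has no single consistent value: it
    % is an indeterminate form, not 0.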


Pendaran Roberts

I see now that this is perhaps an issue the reviewer raises, although it's hard to follow everything, not having read the book.

Just curious what your response is...

Marcus Arvan

Hi Pendaran: Thank you for the very good question. I have a detailed response and will write it up as soon as possible. As I have 6 hours of teaching today and 2 hours of office hours, it may take a bit for me to respond--but I plan to post it later today. Thanks again for the good question!

Pendaran Roberts

Hey Marcus,

No need to spend that kind of time responding to my comment unless you have other reasons for doing so, e.g. defending your book from the reviewer, etc.

Best,

Pendaran

Marcus Arvan

Pendaran: Thanks, but I would very much like to respond. You raise good questions there, and they are indeed questions raised by the reviewer. The passages I quote above from Rightness as Fairness in response allude to how (I think) the book addresses the worry, and my hope is that readers will investigate further. However, because this is one of the trickiest parts of the book, there are further things I can say here in the comments section to clarify its argument, as well as some additional things I would like to say on the issue that are not contained in the book.

Marcus Arvan

Hi Pendaran: Sorry for the delay in responding. It’s a busy time of the semester, and I wanted to compose a careful response. I also apologize in advance for how wide-ranging this response is, but your question raised in my mind a number of points I would like to discuss.

Although as I have said I think books should mostly speak for themselves, I also think it is important to have intellectual integrity, admit errors, and learn from them when they are made. Because no work is perfect—and even good books have a serious error or two (or sometimes more!)—I am happy to admit my errors. However, I also want to clarify the extent of them, as in this case I think I made an error in the book that does not affect the project in the manner that Dees’ review suggests.

First things first: Dees is correct that I messed up the math on infinity here. The actual result in the case he discusses is indeed indeterminate, not zero. I regret that I made that mistake, and while there is a story behind it, I will not bore you or anyone with it. The more relevant point, I think--hinted at obliquely in Dees’ review when he says, “In this case, I think, nothing important hangs on the mistake”, and addressed more explicitly in the passages from the book that I quote above—is that the math on that side of things plays no crucial role in the book’s master argument. Allow me to explain.

The way I set up the problem of possible future selves in Chapter 2, the problem is defined by an individual’s dominant interest (in specific isolated cases) of (1) knowing their future interests, (2) ordering their future interests with their interests in the present, and (3) advancing both sets of weighted interests for certain (with no possibility of error). Although one may raise skeptical concerns about whether everyone sometimes has this kind of dominant interest, I argue at length that all of us do from time to time, and that the claim that we do coheres very well with rapidly emerging empirical evidence on how people actually deliberate in moral cases. See http://philosopherscocoon.typepad.com/blog/2017/02/empirical-underpinnings-of-moral-judgment-and-motivation.html.

I will come back to this set-up of the problem shortly, as the book's master argument turns on it. First, however, a quick digression. Dees asserts that the problem I present in Chapter 2 violates one of my seven principles of theory-selection: the principle of 'Firm Foundations.' He never says how it supposedly violates that principle; he simply asserts it—and so I have quoted passages from the text in the original post to illustrate and clarify my argument that the problem does not violate that principle. I mention these points because I believe they highlight one of the book's main innovations/"selling points."

Unlike many meta-ethical and first-order normative ethical theories today, which make few or (more often) no empirical predictions—such as non-naturalist theories (e.g. Parfit and many others)—Rightness as Fairness makes a variety of distinct empirical predictions. One of the main claims in Chapter 1 of the book is that just like psychology went off the epistemic rails when its “leading theories” (Freudianism, etc.) failed to make any determinate predictions, so too have meta-ethics and normative ethics in philosophy. We have been working with deeply flawed methodologies (reflective equilibrium, testing cases by intuitions) which have been repeatedly shown to be flawed throughout the history of human inquiry. I argue in Chapter 1 that in order to reliably distinguish truth from what merely “seems true” or what we might merely want to be true, philosophy needs to adopt the scientific method.

Anyway, here’s the rub. I argue that we have pretty Firm Foundations for believing we have the motivations Chapter 2 ascribes to us. Whether I am right about this is a scientific matter. If my claims are verified, I’m right; and if they are disconfirmed, I’m wrong. I’m happy to accept that, for again, that is just what (I argue) any theory should do: it should be testable by reference to virtually universal observable facts. If more emerging science supports Rightness as Fairness, we will have all the more reason to take it seriously. If the emerging science conflicts with it, we will have reason to doubt the theory. I am happy with that, and think again that predictions like these are necessary to ensure that our theories are dealing with facts, not just preconceptions.

Okay, now that we have all that out of the way, let us return to the math. Although the review does not mention this, I actually provide my main instrumental argument for the Categorical-Instrumental Imperative before I give any of the math. The argument (call it ‘The Master Argument’) is simply this:

1. Instrumental rationality requires adopting the best means for achieving one’s ends. (definition)

2. In problem-of-possible-future-selves cases one’s dominant end is the conjunction of (a)-(c):
a. Knowing one’s future interests before the future happens,
b. Ordering those interests with one’s present ones (in a way acceptable to both selves),
c. Satisfying those weighted interests for certain.

3. There is one and only one way to satisfy the dominant end described in (2): the Categorical-Instrumental Imperative.

4. Thus (from 1-3), instrumental rationality requires acting on the Categorical-Instrumental Imperative in problem-cases.
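For those who like to see the logical skeleton laid bare, the argument is a simple instance of universal instantiation and modus ponens. Here is a minimal sketch in Lean (my reconstruction for this comment only; the predicate names are hypothetical and do not appear in the book):

    -- Minimal sketch of the Master Argument's logical form.
    -- `RatRequires a`  : instrumental rationality requires doing `a`.
    -- `OnlyWay a`      : `a` is the one and only way to satisfy the
    --                    dominant end (a)-(c) in problem-cases.
    theorem master_argument
        {Action : Type}
        (RatRequires OnlyWay : Action → Prop)
        -- Premises 1-2: rationality requires the best means to one's
        -- dominant end, hence whatever is the only way to satisfy it.
        (p12 : ∀ a, OnlyWay a → RatRequires a)
        -- Premise 3: the CI-imperative is the only such way.
        (ci : Action)
        (p3 : OnlyWay ci) :
        -- Conclusion 4: rationality requires acting on the CI-imperative.
        RatRequires ci :=
      p12 ci p3

The philosophical work, of course, is all in defending premises 2 and 3; the inference itself is trivially valid.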

Notice that none of this relies on the mathematics of infinity. Rather, the Master Argument holds that the instrumental rationality of acting on the Categorical-Instrumental Imperative follows *directly* from the dominant interests that give rise to the problem of possible future selves. For the interests that motivate the problem are a *dominant* interest on the agent’s part (due to worrying about the future) to avoid *any* possibility of error, ensuring that one’s interests in the present and the future will both be satisfied for sure (which again, I argue, can only be ensured by a diachronic contract across time based on the CI-imperative). So described, in problem-cases one has *no* interest (or, at least, much lesser interests) in weighing the utility of the Categorical-Instrumental Imperative against alternatives. I needn’t have appealed to the mathematics of infinity to do this. I could make the same point with finite possibilities, or by simply utilizing ordinary probabilities. In problem-of-possible-future-selves cases one’s dominant interest is to *ensure* that the probability of satisfying one’s future interests is 1, and the CI-imperative is by definition the *only* possible way to satisfy that interest, since conformity to that principle (viz. its satisfaction-conditions) is a matter of one’s present and every possible future self cooperating to ensure that that probability equals 1.
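To make the probability-of-1 point concrete, here is a toy calculation (purely illustrative; the strategy names and payoff numbers are hypothetical, not from the book):

    # Toy model (illustrative only; payoffs and names are hypothetical).
    # The dominant interest is a GUARANTEE (probability = 1) that one's
    # present and future interests are satisfied, so expected utility is
    # not the deciding criterion.
    strategies = {
        "CI-imperative": (1.00, 10),     # certain by its satisfaction-conditions
        "gamble A":      (0.99, 1000),   # higher expected utility, not certain
        "gamble B":      (0.50, 10**6),  # huge expected utility, a coin flip
    }

    for name, (p_success, payoff) in strategies.items():
        expected_utility = p_success * payoff
        certain = (p_success == 1.0)
        print(f"{name}: EU = {expected_utility:,.0f}, certain: {certain}")

Only the strategy with success probability 1 satisfies the dominant interest as defined, even though both gambles have higher expected utility--which is why comparisons of expected utility simply drop out of the decision.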

Notice that the move I am making here is somewhat akin in structure to the move that Rawls (in)famously makes in formulating his original position and deriving his two principles of justice from it. In that case, Rawls designs the original position such that (in his view) the two principles of justice are the only two that can pop out of it (i.e. be rational to its parties). This, of course, has struck many of Rawls’ critics as question-begging (as building the derivation of the principles directly into the model). And indeed, in Chapter 7 of the book I argue that Rawls’ critics are right: his argument is question-begging. However, it is question-begging only because Rawls uses a flawed method of reflective equilibrium whereby the original position represents “our considered judgments” (when in reality the original position’s various assumptions are only the considered judgments of a certain type of liberal, whose considered judgments may well be erroneous). In contrast, I argue that in my case the argument is not question-begging, as people demonstrably (at least sometimes) have the very interests that give rise to the problem of possible future selves, and which (by definition) make conformity to the CI-imperative instrumentally rational.

This point recurs in the Infinity Argument that Dees takes me to task for. Although I do argue that an agent in a problem-of-possible-future-selves case could only expect zero utility if they violated the principle, I argue that this alternative is effectively *irrelevant* to the decider, because their motivational interest is for nothing less than the infinite sum the CI-imperative promises in those cases. Another way to put this is that given their interests, acting on the CI-imperative always promises a better overall outcome than its violation—in which case, even if Dees is right that the argument just comes down to the sum of “the value of the moral action”, it’s still the case that the value of the moral action (given the agent’s interests) necessarily outweighs the value of the immoral action (since again, their interest is in ensuring a probability of 1 that they satisfy their interests, nothing less).

[Quick side-note: I think Dees is just wrong that our number of possible futures is “surely finite.” They may be finite in our life as mortals—though I am not even sure that is true—but part of what the problem of possible future selves expresses is a motivation to avoid all possible error, including the kind of infinite punishment we might endure in Hell, if such a thing exists. Notice that this isn’t to presuppose that Hell does exist. It is merely to suppose that in some cases, we have motivations to avoid all possible error, including the mere possibility that we may be punished with hellfire for all of eternity. I also imply in the book (pp. 105, 112-5) that for all intents and purposes, it is experientially plausible to think of the negative utility we associate with irredeemable regrets for past actions we cannot undo (viz. past wrongs to loved ones that we can never undo and that torture us all the way to the grave) as akin to “infinite” disutility, simply because the level and persistence of regret in these cases can be so permanent and punishing. Although this might seem like an exaggeration, I believe we should always think of formal arguments regarding utility calculations as simplifying/formalizing qualitative facts about human life that are not well-captured in numbers. Even if the deep, persistent regret of terrible mistakes one can never take back or get over never literally equals infinite disutility, as far as an actual human being’s experience of it goes, it may as well be: insofar as it *permanently* ruins their life—a very real thing].

In any case, in retrospect, perhaps I should have left out the Infinity Argument (though more on why I did not shortly). In some respects, I think I got myself in unnecessary trouble here in much the same way that Rawls got himself into trouble with his (in)famous “maximin” argument. And, as a quick side-note that I will return to below, I sort of *knew* that I was probably getting myself into unnecessary trouble by putting it into the book. But we will come back to that. First, a few points on Rawls’ maximin argument. This is one of Rawls’ most notorious failures. Rawls argues that three conditions make maximin rational, and all three conditions are exemplified by the original position (Rawls 1999, pp. 132-3). Yet as innumerable critics have argued (and rightly so), the original position doesn't seem to satisfy those conditions. Consequently, Rawls' maximin argument is widely thought to be a failure. At the same time, Rawls’ project doesn't really hang on it. Rawls is clear that the argument was intended as a mere “heuristic”, and (in my view) gives his real (and better) arguments for his principles of justice later in A Theory of Justice (in section 29, when he gives his strains-of-commitment, stability, and self-respect arguments). My aim with the Infinity Argument was similar. I first offer the Master Argument for the CI-imperative (mentioned at the outset of this comment), and then use the Infinity Argument primarily as a kind of heuristic to “make its complexities more intuitive” (p. 93). The point, in any case, is that even if the Infinity Argument fails (and I am still not sure), I don’t think the book turns on it.

Why then did I include the Infinity Argument at all, if I suspected it might be messed up? Allow me to share a brief autobiographical story about why I chose to include it, and then make a broader point about how I see philosophy, think it should be done, and what I think a good book should do.

Here’s the autobiographical story: I almost didn’t include the Infinity Argument in the book at all. I wasn’t sure whether the book really needed the argument, and it was one of the very last things I wrote in the book just before the deadline for submitting the final manuscript. I included the argument for two reasons: (1) I thought it was interesting, and (2) I thought readers might want something more formal than the Master Argument. I was very much aware that I might be getting myself into unnecessary troubles with it (and suspected it might have some errors), but decided to roll with it anyway. Why?

The answer to this question is that unlike some philosophers, my dominant aim is not to avoid error. In contemporary analytic philosophy, “rigor” often seems to be considered the sine qua non of inquiry—as a highest value to be prized above all else. I have always found this incredibly puzzling. Just about all of my favorite works in philosophy—the ones I like to read and think about, and the ones we tend to teach in our classes—are the ones whose authors’ ambitions far exceeded their grasp. In my view, philosophy is more often pushed forward by interesting mistakes than it is by sound arguments. So, I think we should pursue both: sound arguments, while risking interesting mistakes.

There may be those out there who prefer careful books that avoid error. I am not among them. While of course one should strive for rigor--and I gave it my all in the book, and still feel good about its overall argument—I would much rather write an ambitious book that makes a few colossal errors than a middling/modest book that makes none (though again, I don't think the Infinity Argument's mathematical error impugns the book's Master Argument, and perhaps not even the Infinity Argument properly understood). Anyway, I expect there may be readers out there who do not share my philosophical priorities--and that's fine. We need not all have the same ones. However, I do think it is worth discussing philosophical priorities (including why we have the ones we have), and these are mine and why I have them.

I included the Infinity Argument in the book not because I was sure it was successful, but because I thought it was interesting and provided an intuitively compelling argument in defense of some of Kant’s most famous (and cryptic) remarks about the infinite value of moral action pursued for its own sake (some passages of which I have quoted in the post above). That, ultimately, is why I included the argument: because I thought (think?) it makes some progress on a fascinating problem that—like many philosophers sympathetic with Kant—I have been trying to make sense of all of my adult life: the “infinite” value of a truly good will, and how, on my Categorical-Instrumental Imperative, this value consists in it being a set of instrumental norms that *all* of one’s possible selves have grounds to uphold for their own sake.

Of course, the argument may be wrong, and if so I am happy to live with it (it was a choice I made as an author). At the end of the day, my only hopes—to the extent that I have any for the book—are that it has some value, people will find it interesting, and it will be judged on what it actually says. I wrote the book because I care about morality and want to know what the truth about it is. I hope (and still believe) that the book is onto something, but if it is not, then I will be all the better off for learning why. And that, for me, is what philosophy is all about: giving the best arguments one can, learning from one’s mistakes, and hopefully getting closer to some truth, wisdom, or goodness in the process.

On that note, I’ve probably said more than enough, and will leave it to the book to speak for itself.

Pendaran Roberts

Thank you for the thorough response. This is not my area and I haven't read the book, although it looks interesting.

I just asked about the infinity thing, because I have seen others say that infinity - infinity = 0, including in an Analysis article once.

I never thought this could be correct, given the argument I stated.

I was just curious if you had a response.

I'll look over your response more thoroughly tomorrow.

Don't let the negative review get to you! I'm sure it's an excellent book.

Amanda

Hi Marcus,

Thanks for responding to the reviewer. I have heard you mention a couple of times that a book should "speak for itself." I disagree with that sentiment. There are a few problems with letting a book speak for itself:

(1) Many people read book reviews to gauge whether a book or one of its chapters is worth reading. If the review is uncharitable or misrepresents the book, some folks may never even read it and hence for them it cannot speak for itself.

(2) Most philosophers I know do not read entire books. Rather they read the sections or chapters that happen to be relevant or interesting to their personal philosophical interests. Hence a section of a review might say something false about a book, and an individual who reads one chapter but not the other would get the wrong impression. Given we all have limited time, it is a bit unrealistic to expect everyone to read the whole book.

(3) Some people involved in the periphery of a literature will only know about the book insofar as they hear things from others.

(4) Even when one reads the entire book, it is unlikely the points made will come across as clearly to the reader as to the author. A point the reviewer misunderstood might also be a point most readers misunderstood. Hence it is very helpful for authors to clarify these issues.

Pendaran Roberts

"One of the main claims in Chapter 1 of the book is that just like psychology went off the epistemic rails when its “leading theories” (Freudianism, etc.) failed to make any determinate predictions, so too have meta-ethics and normative ethics in philosophy. We have been working with deeply flawed methodologies (reflective equilibrium, testing cases by intuitions) which have been repeatedly shown to be flawed throughout the history of human inquiry. I argue in Chapter 1 that in order to reliably distinguish truth from what merely “seems true” or what we might merely want to be true, philosophy needs to adopt the scientific method."

I work in analytic and experimental philosophy, and have published quite a few experimental philosophy articles. So, I agree in so far as I think that empirical inquiry is relevant to philosophy. Philosophers often make empirical claims in their arguments about what's common sense and/or intuitive. Also sometimes philosophers make claims about some set of beliefs being core beliefs about x. In these cases, empirical inquiry is relevant to philosophy. In my area, perception and secondary qualities, I think experimental methods have a lot to offer.

However, empirical inquiry being relevant to philosophy isn't the same as saying that philosophy should just be another science, which is what I take your statement 'philosophy needs to adopt the scientific method' to amount to. Are you saying that moral philosophy should be subsumed under moral psychology?

I don't understand how we can get an ought from an is... Any answer that attempted to explain that we can would seem to have to be philosophical, not empirical. So, I don't think philosophy can just be another science.

More generally, my point is that there is more to the world than what can be discovered empirically. The famous examples are mathematics and logic. However, I'd say (somewhat controversially) that metaphysical truths also fall into this category. They too are synthetic a priori, and I think there are real questions in this realm that can only be answered with the help of philosophical methods, i.e. thought experiments, intuitions, and all that mess. ;)

Marcus Arvan

Hi Pendaran: Thanks for continuing the conversation, and for the very good questions!

Because the book addresses most of the issues you raise, I will respond by adopting a variation on the approach I took in the original post. In this case, I will juxtapose your queries against short passages from the book, and then follow those passages with some additional commentary. Sound okay?

You write: “However, empirical inquiry being relevant to philosophy isn't the same as saying that philosophy should just be another science, which is what I take your statement 'philosophy needs to adopt the scientific method' to amount to. Are you saying that moral philosophy should be subsumed under moral psychology?”

What Rightness as Fairness says:

“Section 1 of this chapter argues that moral philosophy currently lacks any method for reliably distinguishing what is true about morality from what merely ‘seems true’ to some investigators but not to others. Section 2 then argues that the following seven principles of theory selection adapted from the hard sciences are the best method available for reliably accomplishing this, and thus, for comparing moral theories…” (p. 9)

“Moral philosophy clearly cannot be based on precisely the same methods as the physical sciences. The sciences test descriptive hypotheses – about gravity, cell growth, and so on – against measurable observations. Moral philosophy cannot, however, be tested against predictions of how the world behaves – for moral philosophy is not concerned with describing the world, but with what ought to be: with how people ought to behave. Sciences, in a word, are descriptive, moral philosophy normative. Yet although moral philosophy deals with a different kind of phenomena than the sciences, the sciences utilize several reliable methods for distinguishing truth from ‘seeming truth’ that can, and should, be extended to moral philosophy.” (pp. 13-4)

Some additional follow-up commentary: I was speaking a bit roughly in my initial response to you when I wrote that “philosophy needs to adopt the scientific method.” What I meant by this—and what I argue in the book—is that philosophy should aim—as far as possible—to conform to the same epistemic standards as the hard sciences (standards which I argue are encapsulated in seven principles of theory-selection). In the book, I argue that, in the domain of moral philosophy, a simple instrumental theory of normative rationality and empirical facts about moral psychology best satisfy those principles. Thus, moral philosophy should be based on (A) one philosophical foundation (which satisfies the seven principles), and (B) scientific facts about human moral cognition and motivation. Accordingly, to a *large* extent, yes, I think moral philosophy should be subsumed under moral psychology—in the sense that moral philosophy should “answer” to it (i.e. be based on scientific moral psychology as far as possible). Yet, as you can see, this still leaves room for philosophy—for what I do in the book is tease out the normative implications of instrumental rationality and facts of moral psychology, arguing that (when combined) they entail a very attractive *philosophical* (and empirical!) theory of ethics: one that is simultaneously descriptively compelling and normatively compelling. Part of my overall philosophical outlook today—one that I defend at length in the book—is that we should *not* see philosophy as discontinuous with science, but instead see philosophy in its once-traditional role as a kind of “handmaiden” of science, developing philosophical theories on the *basis* of our best science at the time, whatever it is. For, on the one hand, as we all know, scientists are not always the best at figuring out the philosophical implications of their theories; and, on the other, philosophers can reorient science by reconceptualizing phenomena (e.g. Einstein’s theory of relativity was above all a *philosophical* theory, one that turned science on its head by adopting a different philosophical approach to thinking about observed facts about space and time. Scientists before Einstein “knew” all the facts he did, but they all tried to fit them into a Newtonian paradigm. He was the first to say, “Hm…what if, as a philosophical matter, we assume the strange facts, *don’t* assume Newtonianism, and work backwards?” Voila: relativity! A philosophical insight, with earth-shattering scientific implications).

Long story short, G.E.M. Anscombe famously argued six decades ago in “Modern Moral Philosophy” (1958) that “it is not profitable for us at present to do moral philosophy; that should be laid aside at any rate until we have an adequate philosophy of psychology, in which we are conspicuously lacking.” (p. 1) My argument in Rightness as Fairness is that Anscombe was absolutely right about this, and it is high time we learn the lesson.

You write: “I don't understand how we can get an ought from an is... Any answer that attempted to explain that we can would seem to have to be philosophical, not empirical. So, I don't think philosophy can just be another science.”

What Rightness as Fairness says:

“Recently, some philosophers have argued that all normativity in some sense must be a ‘queer,’ primitive part of reality. I believe this to be a mistake, and that our concept of instrumental normativity can be used to reduce instrumental normativity to non-normative facts in a compelling fashion (I am, as such, proposing a ‘Humean reduction’ of the normative to non-normative). Here is how. Consider what a person playing tennis is asking for when they say ‘Why should I swing the racket that way?’ According to the instrumental conception of normativity, all they are asking for is an explanation of how swinging the racket in a certain way is an optimal instrument for achieving their motivational interests. If you show them this – by, for instance, showing them that it enables them to hit the ball more accurately – they will say, ‘Oh, now I see why I ought to swing that way.’ There is a simpler way to put this. Our instrumental concept of normative rationality contains certain satisfaction conditions. That is, we say any sentence, ‘X ought to do ɸ,’ is true in an instrumental sense when and only when ɸ is, at a purely descriptive, factual level, the optimal means for X to achieve their motivational goals. In other words, our instrumental concept of normativity identifies prudential normativity with certain purely natural, non-normative facts about the world: relationships between motivational interests and instruments for satisfying them. Instrumentalism, as such, enables us to avoid introducing primitive normative properties in our ontology. It bridges the famous ‘is/ought-gap,’ which holds that no ‘ought’ can ever be validly inferred from an ‘is.’ On the semantic analysis just presented – according to which the satisfaction conditions for sentences involving the instrumental ‘ought’ concept identify ‘oughts’ with purely natural facts – to say that someone instrumentally ought to do ɸ just is to say that ɸ is (descriptively) the best means for them to achieve their interests. While some theorists may raise objections to this sort of reductive proposal – arguing that it ‘eliminates’ genuine normativity altogether, positing nothing more than descriptive facts about optimal means for achieving goals – I have two replies to this concern. First, although I do not have room to defend the above reductive semantics in detail, others have done so, arguing persuasively in my view that such a reduction does not eliminate normativity but rather reduces it to natural facts (as, on such a reduction, there are true propositions of the form ‘X ought to ɸ’; it is just that the truthmakers for those propositions are natural facts about motivations and means)…” (pp. 28-9)

Additional commentary: In short, I think the so-called naturalistic fallacy (“you cannot derive an ought from an is”) is nonsense predicated upon a bad philosophy of language. ‘Ought’ is a word/concept. Like all words/concepts, it has certain satisfaction-conditions—conditions where we treat the concept as satisfied (i.e. true) and conditions where we treat it as not-satisfied (i.e. false). Satisfaction-conditions, as such, are purely descriptive facts about human psychology: they are facts about *conditions* under which we, as a matter of psychology, apply a word/concept to a referent. So, we can reduce reference to psychology. That’s step 1 of the argument. Step 2 is that the concept ‘ought’--which may have several different interpretations—on at least one virtually universal interpretation (i.e. the instrumental interpretation) is understood by human beings as having certain satisfaction-conditions: we say sentences involving ‘ought’ are true (instrumentally interpreted) when, and only when, certain descriptive facts about agents’ ends and optimal means to satisfying those ends exist in the world. But these facts too are purely descriptive facts about the world. So, the semantics of ‘ought’—and, by extension, truths about what people ‘ought’ to do—can be entirely reduced to naturalistic/descriptive facts. Does this “eliminate” normativity? I argue no. There are *facts* about what we ought to do, and these are normative facts, since our concept is normative (viz. “the best means for achieving a person’s ends”—an evaluative notion that we also have satisfaction-conditions for). Normativity is not eliminated on this account: it is *reduced* to descriptive facts about semantics and human psychology, including how we psychologically *evaluate* things. So, the naturalistic fallacy is a *fallacy*. You can derive an ought from an is. All you need to do is understand semantics properly.
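If it helps, the shape of the reduction can even be modeled computationally. Here is a minimal sketch (my own illustration for this comment; the function and the utility numbers are hypothetical, not from the book):

    # Toy semantics (illustrative sketch): the truth of an instrumental
    # 'ought' claim is computed from purely descriptive facts, namely how
    # well each available means serves the agent's motivational goals.
    def instrumentally_ought(action, means_utilities):
        """'X ought to do action' (instrumental sense) is true iff action
        is the optimal means for X to achieve X's motivational goals."""
        return action == max(means_utilities, key=means_utilities.get)

    # The tennis example from the quoted passage, with made-up numbers
    # for how accurately each swing hits the ball:
    swings = {"flat swing": 0.4, "topspin swing": 0.9}
    print(instrumentally_ought("topspin swing", swings))  # True
    print(instrumentally_ought("flat swing", swings))     # False

Notice that nothing normative appears in the evaluation itself: the truthmaker is just a descriptive fact about which means best serves the agent’s goals, which is the reduction’s whole point.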

One final note here: although I think this argument is sound, my project in Rightness as Fairness does not depend upon it.

You write: “More generally, my point is that there is more to the world than what can be discovered empirically. The famous examples are mathematics and logic. However, I'd say (somewhat controversially) that metaphysical truths also fall into this category. They too are synthetic a priori, and I think there are real questions in this realm that can only be answered with the help of philosophical methods, i.e. thought experiments, intuitions, and all that mess.”

My reply: I agree about math and logic, but am unconvinced about metaphysics. On the whole, I am provisionally on the side of those (like Unger, Wittgenstein, etc.) who think analytic metaphysics is fundamentally epistemically problematic because there is no place where “the rubber hits the road” (in Ray Monk’s Wittgenstein biography, Wittgenstein is reported to have said that in metaphysics there is “no data”). I agree—which is why analytic metaphysical debates turn into thousand-year stalemates with entrenched positions and no clear method for resolving them. See e.g. https://philpapers.org/rec/WILGCO. Because of this, I think metaphysics should be naturalized, again as a “handmaiden to science” sort of in the manner I attempt to do here (https://philpapers.org/rec/ARVANT-2). At the same time, I am not entirely sure about this, and enjoy metaphysical debates—so, when I am in other moods, I am inclined to say that philosophy should be based on and answer to science as far as *possible* (as I argue in moral philosophy), but also, that when philosophy *cannot* answer to science (viz. logic, math, analytic metaphysics) we can keep on keeping on, but have to realize that what we are doing has serious epistemic limitations.

Pendaran Roberts

I found all this very interesting, and you are making me want to read your book. Although I am very skeptical that your project can succeed, I do find it interesting.

Marcus Arvan

Thanks Pendaran, that is very nice to hear and a kind thing to say! Given that the number of philosophical theories that have actually "succeeded" is close to zero, I would be more than content if it just turned out that I wrote something interesting and worth reading. ;)

Pendaran Roberts

You're welcome. It does legitimately sound like a very interesting book.

I hope you get some good reviews too and that it's read!

Marcus Arvan

Hi Amanda: I think you make a pretty strong case! I guess I was thinking about it in a longer-term, macro way. In many (most?) areas of human life--art, literature, philosophy, etc.--there tend to be two types of works that get attention: (1) flashes in the pan that make a big splash initially (perhaps because of sociological features such as popularity, short-term fads, etc.) but don't stand the test of time and eventually fade from view; and (2) works that get dismissed initially but end up catching on later (when audiences become more receptive). My intuition (and I confess that it is only that) is that which of these categories a work will fall into is likely to be a function of the actual quality of the work itself, not anything its author says about it. That is what I meant by the thought that work can, should, and usually does speak for itself: that, in the longer run, only audiences can decide--though of course I may be totally off on that too. :) I have no idea whether my book falls into any such category, or some other category (though, like any author, I hope that it falls into a good category of some sort). Anyway, that is what I meant by the claim you were responding to. That being said, I now agree with you that, at least in the shorter term, it may indeed be helpful for authors to clarify their works if they have the opportunity. It has, for instance, been enjoyable to chat with Pendaran here, and I hope some of my remarks make the book's ideas clearer for actual or potential readers.

In any case, thanks again for pressing me on this. As I explained last week (when I asked this community whether authors should respond to reviews), these are issues that I have felt very uncertain about! :)

Amanda

Interesting, Marcus. I do think sometimes what you said happens. But I guess I am agnostic or maybe cynical on what I would consider the big picture. It seems at least possible, perhaps plausible, that there are a few excellent works of the past, comparable to the greats like Plato and Kant, etc., that due to the chance of history were neither noticed nor appreciated, and drifted into the historic mist, never to be loved as they ought to have been. (This seems especially likely for works written by women, or heretics, etc.) I sure hope I'm wrong about this though!

And yes, I am glad you are responding if only for short term reasons. Because as I see the short term, we will all be dead in the long term, so I have a selfish preference for the here and now :)

Scott Forschler

I'm about 1/3 through the book now, and have looked over some of your comments above. While a few of the reviewer's remarks seem a bit unfair, I am finding myself strongly agreeing with the criticism around the middle of your post above which says that you are using a very controversial set of assumptions about instrumental reasoning, and hence violating your own "firm foundations" principle.

Actually I'm not even quite sure what you mean by the latter, despite having read through this part of the book. The definition on p15 is that we should prefer "theories based on common human observation(s)...taken to be obvious, incontrovertible fact..." But what does *this* mean--and why should we accept it? I don't expect a good theory of quantum mechanics to be based on obvious, commonsense facts; to the contrary. Of course we may have separate reasons for thinking that *morality* should be based on commonsense, but this does not hold for theories generally. Furthermore, isn't it also important that the theories be based on facts *related* to what the theory is about, and obviously so? E.g., if my theory of morality (or QM) is that the more purple there is in the world, the better (or that quarks are purple), then while "the presence/absence of purple" in a given region is an obvious, commonsense fact, that doesn't make my theory a good one. Indeed, it is quite obvious that it is not related to either topic. (This may sound /crazy/, but it is actually close to something sometimes proposed for morality: that, e.g., since the commands of God are *utterly* beyond our control, they are objective, giving morality the objectivity we intuitively think it has; but of course, this is a cheap and irrelevant kind of objectivity).

Now, if you say it is obvious that we have interests, and reasons to satisfy them--well, sure, with some qualifications which needn't trouble us here. But it is far from obvious that satisfying them is what morality is about. So if "firm foundations" requires only the former, it's a cheap and misguided principle. If it requires the latter--again, an obvious *relationship* between the "foundation" and what the theory is about--then it is actually quite obvious that instrumentalism is a very bad & inadequate foundation for morality.

Now of course you try to show later on that because of some radical uncertainty about our future interests, we have instrumental reasons to be fair to all persons and their interests in order to be fair to those possible future interests of ours. But I am puzzled at many points here. Suppose I (A) find myself to have an unfair advantage over another person (B) today, and take this option. True, tomorrow B (or C) might have such a position over me, and my interests will be slighted if they take that option. But how does my serving B's interests fairly today *count as* treating my interests of tomorrow fairly? For the latter is not identical to B's interests today, just analogous to them, or of the same type. Now, I think there is a good argument for fair treatment here: if I hurt B today, I am implicitly approving of behaviors of this type generally, and hence of my own harm tomorrow on the part of a similarly-situated person. But this is a Kantian/Harean/golden rule-style universalizability argument, not an instrumentalist one. My treating B fairly today neither counts as nor causes my being treated fairly tomorrow.

You often say that we don't want to just make it probable that our future interests are satisfied or at least treated fairly, but to *know* that they will be. But I see no sense in which you have shown or could show that we can *know* this. Indeed, it seems radically unknowable; your arguments for radical uncertainty about our future interests make it impossible for any reasoning, let alone instrumental reasoning (and certainly no form of moral reasoning) to deliver this contented knowledge.

In general, it seems that you're trying to get morality out of instrumentalism by radically changing what "instrumental rationality" means. Not only is this confusing, it would actually lead to very counter-intuitive results. You face a dilemma here. If we base our "instrumental" reasoning on probabilities of what our interests are likely to be and how to satisfy them, then this reasoning will be biased to ourselves and against others in an immoral fashion, unless we bring in a universalizability criterion of the sort you are trying to avoid. But if we toss out probabilities and try to be totally "safe" by acting in ways that are fair to our future interests *no matter what these are*, without any consideration of what the probabilities of one set of interests versus another are, then we get disastrous results. And you seem to want to do this, as you stress that merely the possibility, not probability, of various changes in your interests is enough to generate an obligation to be fair to those possibilities. Setting aside the other problems I raised above, consider that if we do that for *ourselves* then we must do it for all others: I must treat each person I interact with as if /their/ future interests are radically indeterminate, and be fair to all possibilities. Whatever the full range of logically possible interests of all persons could be--and I don't know how we could begin to understand that in any meaningful way--it is surely not identical with the *likely* future interests of people we actually interact with. If we attempt to satisfy the former, we are likely not to do very well at satisfying the latter.

In short, by abjuring probability-based reasoning for yourself, you do it for all others as well, and are tossing out the baby with the bathwater, going from the frying pan into the fire. The path to morality, I think, is to somehow *include* the *actual* interests of other persons in your reasoning as in some way on a par with *your* actual interests, not to inflate your "interests" to include all possible ones and try to be fair to this entire universe of hypothetical interests, which may have very little to do with the interests of actual people. It is *possible* that you, or everyone, will tomorrow take an interest in counting blades of grass. And if some of them do, perhaps we should not get in their way, or even help them in some cases. But it hardly makes sense to plan *now* for this possibility, or treat it on a par with an interest in world peace or curing cancer. And it is unclear how you can privilege interests like the latter once you abandon reasoning based on the probabilities of people actually having various interests.

Scott Forschler

A briefer comment, based on ch 3 & 4: at crucial points you seem to rely on the satisfaction of being "fair" to your future self & others, regardless of how any of your other interests are satisfied, hence attaining in at least one area of life the "certainty" of "knowing" that you satisfied some of your interests. But this seems to rely, surreptitiously, upon some prior privileging of our interests in moral fairness. I can also *know* that I satisfied some of my immediate interests by, say, eating that ice cream cone or stealing that money *now* (I may not be happy with it later, but I will sure be happy with it in the next minute!--and you can't take *that* phase of satisfaction away). On p120 you rely, as you seem to in many other places, merely upon sentimentalism, more precisely the possible sentimentalism that I *might* take an interest in other people's lives (knowing that I sometimes do already, and others do at various times). But by the same token, I know that I might, sometimes have, and others sometimes have, become a religious or political fanatic, or a psychopath. Why should I then not equally satisfy these possible interests? Again, I don't see how expanding the realm of possible interests infinitely, and abjuring all probabilities, can lead to the results you (and I) want to reach; you seem to be equivocating on which ones you will point to and which ones you brush under the rug. We need principled ways of distinguishing good from bad interests. E.g., Bentham did so via principles like fecundity; helping psychopaths is less fecund for satisfying interests generally than helping charitable people, given the world as we find it. But this requires calculating probabilities, not ignoring them.

This focus on certainty reminded me at times of similar moves on the parts of Stoics, as well as Gewirth and to some extent Kant, insofar as they also focused their ethics on things we could be *certain* of, avoiding uncertainties. It is curious that you didn't seem to address these parallels, which might have helped clarify the basis of your argument, and how you thought it compared to or improved upon these otherwise similar ones.

Marcus Arvan

Hi Scott: Thanks for critically engaging with the book, and for the good queries! I'll type up a detailed reply ASAP, but it might be a day or two as I'm swamped with grading and want to write up careful (rather than hasty) replies to your questions and concerns.

Marcus Arvan

Hi Scott: Thanks again for your challenging comments! Because I'm still swamped with grading (it's finals week here), I'll have to respond to your concerns in several parts.

Let me begin with your first concern: "I'm not even quite sure what you mean by [the principle of Firm Foundations], despite having read through this part of the book. The definition on p15 is that we should prefer "theories based on common human observation(s)...taken to be obvious, incontrovertible fact..." But what does *this* mean--and why should we accept it? I don't expect a good theory of quantum mechanics to be based on obvious, commonsense facts; to the contrary. Of course we may have separate reasons for thinking that *morality* should be based on commonsense, but this does not hold for theories generally."

My reply: Let me begin with a quotation from p. 14 (emphases added),

"Let us begin by thinking about what distinguishes epistemically respectable sciences from ‘pseudoscience.’ As we saw earlier, modern science insists, above all else, that theories be based on common observation: on observational facts that virtually everyone recognizes as such. Physics and
chemistry are founded on common observation of ordinary objects and substances. We *all* see tables, chairs, people, water, and air – and modern
physics and chemistry make predictions about how these things behave. It is not merely this or that investigator who can remove a small piece of
skin from a person, put it under a microscope, and test modern biology’s hypotheses about how skin cells function. *Anyone* can look through a microscope and observe whether the predictions the theory makes are
correct. Similarly, it is not merely the physicist who can observe clocks flown at immense speeds to test whether Einstein’s predictions about the relativity of time are correct. *Anyone* who looks at such a clock can see whether it has slowed down relative to clocks on Earth, as Einstein’s theories predict."

Here, in other words, is the basic idea of Firm Foundations. Before modern science, investigators in many fields used roughly the epistemic standards we use in philosophy today--speculating on how things "seem" to them, when it is not at *all* obvious to other investigators that it is *true* that things are really that way. There are two problems here: (A) We don't accept this as a truth-apt standard in everyday life, and (B) Every single science in history has made demonstrable progress toward truth only *after* insisting on a much more stringent standard--the standard that observations in support of theories be tied to things that *every* one of us recognizes as uncontroversially true (which is simply what Firm Foundations says). Allow me to explain.

Setting aside Cartesian skeptical worries (which I suggest are inappropriate on p. 14, at least for the sake of productive theorizing), virtually everyone recognizes there are chairs in the world. In contrast, when "ghost hunters" say things like, "It seems to me there is a ghost in the house", the rest of us chuckle. Why? Because it doesn't seem at all obvious to many of the rest of us that it is true.

Now turn to science. As I detail several times in the book (pp. 1-2, 10-13), there have been countless times when the *sciences* have made the same kind of mistake--using epistemic standards common in philosophy today (i.e. basing arguments and theories on how things "seem" to inquirers, despite the fact that those very things *don't* seem true to many others). It seemed to Thales that since everything changes, and water is the most changeable substance around us, everything must be water. It seemed to Heraclitus that because things are constantly "kindled" and snuffed out, everything must be fire. In the 20th Century, it seemed to Freudians that everything must stem from unconscious motivations...because of how things seemed to Freud. You get the picture. We now know how, time and again--in every field in which it has ever been used--this method has *failed* to be truth-apt. And, for a very good reason: truth and good evidence are *not* matters of how things "seem" to this or that person, but of what we can all *demonstrate* to each other on the basis of things virtually *universally* recognized to be true/facts.

It is this principle--that we must always base theories on facts that are virtually *universally* recognized by human beings--that, or so I argue, explains the astonishing success of modern science. Literally every time a field of inquiry has changed to obey this principle--whether it has been physics, or biology, or psychology--the field moved from rampant speculation to real progress. We now have rocketships, cell phones, and the computer I'm writing on right now to thank for it.

On that note, let me turn to your point about quantum mechanics. You write, "I don't expect a good theory of quantum mechanics to be based on obvious, commonsense facts; to the contrary."

This is plainly false. The sciences of quantum mechanics, relativity, organic biology, etc., are *all* predicated upon the standard of common observation stated by Firm Foundations. When it comes to quantum mechanics, for instance, although the theory itself is complicated, *anybody* can set up the double-slit experiment and see the tell-tale signs of wave interference (https://s3.amazonaws.com/liberty-uploads/wp-content/uploads/sites/1546/2015/11/young2a.jpg). It doesn't just "seem" to some theorists or other as though, when the test is set up, there are many lines on the other side of the two slits. Those lines are publicly *demonstrable* to anybody and everybody who has two eyes. The theory of QM has been formulated on the basis of public observations like these--that everyone can attest to (no one who can see would deny there are multiple lines projected in the double-slit experiments)--and then makes predictions about other things we will *all* see (e.g. in particle-collider tests). For instance, anyone who runs the LHC can verify now that the Higgs boson has a mass of 125 GeV.

More broadly, all of modern physics--quantum mechanics included--is based on virtually universal observations, whether of tables, chairs, the behavior of water and gases, light observed through telescopes, and so on. The key thing is that in *every* case, modern science insists that the facts the theory in question is based upon--and tested by reference to (in making its predictions)--are not things that "seem" one way to some people but another way to others. They insist, e.g., that the theories be confirmed or disconfirmed on the basis of virtually universally recognized facts (again: we can all look at the resonances of the LHC and *see* the peak for the Higgs mass at 125 GeV). See e.g. https://www.youtube.com/watch?v=CBrsWPCp_rs .

You write: "Furthermore, isn't it also important that the theories be based on facts *related* to what the theory is about, and obviously so? E.g., if my theory of morality (or QM) is that more purple there is in the world, the better (or that quarks are purple), then while "the presence/absence of purple" in a given reason is an obvious, commonsense fact, that doesn't make my theory a good one. Indeed, it is quite obvious that it is not related to either topic. (This may sound /crazy/, but it is actually close to something sometimes proposed for morality: that, e.g., since the commands of God are *utterly* beyond our control, they are objective, giving morality the objectivity we intuitively think it has; but of course, this is a cheap and irrelevant kind of objectivity)."

My reply: Two points.

(1) The moral theory you have described here (based on the proposition "the more purple there is in the world, the better") straightforwardly violates Firm Foundations--as this is not a proposition recognized by virtually all human observers as obviously true.

(2) The history of science amply demonstrates that it is not epistemically sound to decide, *prior* to obeying the standard of Firm Foundations (i.e. the epistemic standard of common observation), which facts are appropriately "related" to a given theory's subject matter. Indeed, many a sound scientific theory in history (relativity, organic biology, quantum mechanics, etc.) has been greeted with unwarranted skepticism because it didn't match up with what people thought "relevant" at the time. For instance, when Einstein first came up with relativity, the theory was actually quite widely ridiculed (I can scrounge up some wonderful quotes for you!) for not being "relevant" to "objective" space and time. People accused Einstein of making what Ryle famously called a "category mistake." They said things like, "Sure, clocks may keep time differently in different reference frames--but that's not *relevant* to the question of what objective time is."

The lesson we should learn from cases like this is that sound inquiry *begins* with observational facts everyone can see, *determining* what is "relevant" to a theory on that basis--not what "seems" relevant.

Anyway, thanks again for your comments. I realize this has only addressed the first small part of your two comments. I will respond to the other parts of your comments ASAP, and would be happy to continue discussion over the part I've just replied to.

Thanks again!

Scott Forschler

Your clarification and emphasis on what you said prior to p14 make it clear that FF doesn’t mean theories should rest on things that are obvious and commonsensical (as the definition by itself might suggest), but simply on objective facts—some of which might take considerable instrumentation, effort, patience, and both long-developed technical skill and further theory-laden observations. As long as, once you do that, the results/observations are repeatable by and between persons.

OK…but that’s rather trivial. No moral theory taken seriously by contemporary philosophers rejects the idea that we need objective arguments to support it. Even error theorists, quasi-realists, and emotivists think you need *that*, although they might disagree on whether the true theory delivers objective norms or just a set of meta-ethical facts.

In any case, if this is all you mean by it, then it seems you had no ground for so quickly dismissing constructivism, Kantianism, and so many other theories on the grounds that there is “controversy” over them. There is controversy over your “grounds” for morality, and indeed over any egocentrically-based moral theory (e.g., contractarianism). So if “controversy” is sufficient to make the foundations not firm, your theory is not firm. OTOH, it is possible that you have mistakenly dismissed a good theory, because you failed to bring to bear the patience, theoretical understanding, etc. needed to fully understand it—just as some people dismiss QM because they haven't made the effort to understand it.

Indeed, you reveal a quite different view of FF when you say my “purple is good” proposal violates it, “as this is not a proposition recognized by virtually all human observers as obviously true.” This fits perfectly with your view that FF requires a theory to be non-controversial. But this is radically different from the standard that the theory be based on objective facts of some kind or other (possibly very obscure and hard-to-see facts). ["Purple is good" may violate that too, but you can't dismiss it *merely* because it is not immediately obvious to all persons who consider it!] So can you see why I might have been confused about what you meant by FF? You explain it one way when you say it fits QM; you apply it in a very different way when you attack alternative theories and propose your own standards. It seems to me that you equivocate between them. And I think that is not merely my idiosyncratic “seeming” but an objective fact, one visible to anyone reading these two distinct passages in the reply you just wrote. :-)

Marcus Arvan

Hi Scott: Thanks for your reply. I don't think I'm equivocating in that way, and will write a reply in the morning explaining why I think I'm not. I'll then address your other concerns in comments to follow!

Marcus Arvan

Hi Scott: Thanks again for pressing on these issues. I think you raise good questions here, and wish I could have devoted much more time in the book to addressing them. Word-limits being what they were (I had a hard 105K word limit imposed by the publisher), I had to make difficult decisions on what I could address.

While I still think I made a pretty strong case on these issues in the book as it is--and critical resistance is of course to be expected to any work in philosophy--the critical reaction you and some readers have had here so far suggests I could have done a better job. Still, I guess that is in part what critical discussion of a published work is for--so let me try to address your concerns a bit here.

You write: "Your clarification and emphasis on what you said prior to p14 make it clear that FF doesn’t mean theories should rest on things that are obvious and commonsensical (as the definition by itself might suggest), but simply on objective facts—some of which might take considerable instrumentation, effort, patience, and both long-developed technical skill and further theory-laden observations. As long as once you do that, the results/observations are repeatable by and between persons.

OK…but that’s rather trivial. No moral theory taken seriously by contemporary philosophers rejects the idea that we need objective arguments to support it. Even error theorists, quasi-realists, and emotivists think you need *that*, although they might disagree on whether the true theory delivers objective norms or just a set of meta-ethical facts."

My reply: No, I don't think it's trivial at all. The very point of Firm Foundations is to provide an analysis of *what* we should treat as objective facts. The point of the principle is that unless the observation in question is accepted as such by *virtually all* human observers (viz. we all see tables and chairs, anyone looking at a readout of the LHC can see resonances, etc.), it should not be accepted as a fact for the purposes of theory construction. This is why I call it a principle of 'Firm Foundations.' It does not permit basing theories on what some people 'think' the facts are, when others disagree. It requires basing theories on things we can *all* agree to be facts.

The next point is that this is precisely what I argue in the book existing moral and political theories don't do. Error-theorists, quasi-realists, Kantians, etc., all base their theories on what their investigators 'think' the facts are, despite the fact that not everyone agrees that they are facts at all. It is in this sense that I'm arguing moral and political philosophy should do better. My claim then is that there are two things relevant to moral and political theory that *do* satisfy Firm Foundations: (A) instrumentalism, and (B) empirically verifiable facts about human psychology. Instrumentalism satisfies the principle because literally *everyone*--children, adults, psychopaths, etc.--engages in instrumental reasoning, recognizing that there are true and false instrumental claims (viz. no competent speaker would say it is true that if you want to lose weight, you should eat an entire Thanksgiving meal!). Empirical psychological results then satisfy Firm Foundations because...well, they are empirical psychological results!
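Schematically--and this is just my shorthand here, not a formula from the book--the instrumental claims everyone traffics in have roughly the form:

\[
\big(\mathrm{Desires}(S, E) \wedge \mathrm{EffectiveMeans}(A, E)\big) \;\rightarrow\; \mathrm{InstrumentalReason}(S, A)
\]

That is, if agent $S$ desires end $E$ and action $A$ is an effective means to $E$, then $S$ has an instrumental reason to do $A$. Whether $A$ really is an effective means to $E$ is an ordinary, checkable matter of fact--which is why instrumental claims can be recognized as true or false by virtually all observers.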

This, though, brings us to the real heart of your concern, which is that my theory doesn't satisfy the principle, either. So, let's turn to that...

You write: "There is controversy over your “grounds” for morality, and indeed over any egocentrically-based moral theory (e.g., contractarianism). So if “controversy” is sufficient to make the foundations not firm, your theory is not firm."

My reply: Whether there is real controversy over my 'grounds' for morality remains to be seen (more shortly). Further, as I note above in one of my responses to Pendaran, I take it to be one of the biggest virtues of my theory that it makes empirical predictions--making the theory amenable to confirmation and falsification. Allow me to explain each point in detail.

First, on the issue of controversy over my grounds for morality. My grounds for morality are simply this: that all of us *sometimes* worry about our future, such that we *sometimes* want to know our future interests (i.e. whether we are going to regret a given decision) before the decision is even made. That's it: that's the Firm Foundation I give for morality--as I argue that the rest of the theory follows from it.

Here's a question: is it really controversial whether we all worry about the future in this way at least *sometimes*? Is there any person on the face of the Earth (who is not a psychopath) who hasn't worried that they might regret an action in the future, and wished they could know it in advance so that they could avoid it? I don't think so--which is why I give example after example in Chapter 2 in the book to illustrate just how common this experience is.

I will say this though: I could have done better than this (better than just providing numerous examples)--and I fully intend to! As I mentioned above, what I take to be one of the nicest things about my theory is that it is open to empirical confirmation and disconfirmation. For instance, this coming summer, I plan to run a study examining just how common these experiences are, comparing them to the kinds of 'facts' that other moral and political theorists have given for their theories. Although I'm fairly confident that the results will come out on my side (more on this below), if they don't--well, then, they don't...and I will have to admit the theory is wrong.

This brings me to a more general point. When I've discussed these issues with my own students before, I've had to admit that the grounds I offer are on the 'edge' of Firm Foundations. I've tried to make the strongest possible case for the foundations I provide (as satisfying Firm Foundations)--but those foundations are, admittedly, on the 'edge' of what we currently know empirically (more below). Notice, though, that this is what a lot of good theories in history have done.

I don't mean to compare myself to any of the following figures or theories. Instead, I mention them because I'm an intellectual history buff and they are obvious, famous cases. Consider relativity. When it was first introduced, many theorists doubted its foundations (as it was based on the assumption that light speed is the same in all reference frames, which appeared to be the case but was not absolutely certain). A lot of people initially doubted the theory as a result. Fortunately, though, the theory made *predictions* about many things (e.g. about Mercury's perihelion, etc.) that were eventually confirmed--which is why everyone now believes the theory. Similarly, consider evolution. When Darwin first introduced it, many people were skeptical of the foundations for it--among them, Darwin's observations of adaptive differences between creatures in different environments. Fortunately, his theory too made predictions--and the ever-increasing confirmation of its predictions is why we now believe the theory.

Let us now return, then, to Rightness as Fairness. Are the foundations I provide *currently* unassailable foundations? Perhaps not. Although I still think it's pretty clear--from everyday life--that everyone cares about the future in the relevant sense to generate the 'problem of possible future selves' that I argue Rightness as Fairness is a solution to, this is nevertheless an *empirical* claim that can be tested. Consequently, the real proof will be in the pudding. And this again, I believe, is a real virtue of the theory. Unlike many moral theories (such as Parfit's non-naturalism, or Kantianism given its transcendental roots), which make few or no empirical predictions, my theory makes a lot of them. And, here's the thing, the predictions the theory makes are looking promising so far. Allow me to explain.

Here, to simplify a bit, is what Rightness as Fairness empirically predicts--that morality emerges from:

(1) Fear/anxiety about the future.
(2) Resulting from bad bets in our past.
(3) That make us want to *know* our future interests before the future occurs.
(4) Thereby making us extremely risk-averse in *some* cases.
(5) In a manner that leads us in these cases to care about the perspectives of others.
(6) So we don't experience sadness/regret from making bad 'gambles.'

I give some empirical evidence in the book that supports this picture. However, even more importantly, as I have mentioned on this blog before (http://philosopherscocoon.typepad.com/blog/2017/02/empirical-underpinnings-of-moral-judgment-and-motivation.html), a lot of empirical results have come out since the book was published that lend even greater credence to the account.

Among other things, it has been found across a wide variety of moral tasks that moral judgment and motivation/sensitivity involve:
(1) Fear/anxiety
(2) Gambling avoidance
(3) Concern for one's future
(4) Concern for others' perspectives
(5) Concern for one's future and others' perspectives involve the *same* neural pathways and are inseparable (stimulating future-concern stimulates other-concern, inhibiting other-concern inhibits future-concern).
(6) Stimulating concern for one's own future leads to more *fair* treatment of others.

These are just a few things that have already been found empirically, and they systematically cohere with Rightness as Fairness' account. Indeed, while the facts here are still emerging, I do not know another theory that predicts all of these results in the way that Rightness as Fairness does.

So, long story short, while the account I provide may be on the 'edge' of Firm Foundations, I think:

(1) It's currently on *firmer* foundations than other theories (since it bases morality on concerns for the future that are incredibly common in human life, experienced by basically everyone).

(2) As with all good theories that make predictions, only time will tell whether it really does have Firm Foundations. If its predictions are verified, its foundations should be accepted; if not, then not. And I'm happy with that. It's what I think a good theory should do: use the best evidence at hand to take some risks, make some predictions, and let the cards fall where they may! :)

Scott Forschler

Thanks, Marcus. I think that what you've just written is a very nice summary of the book, and helps me put it all into focus (I have finished it since my first post, BTW, though I have to admit that I was initially confused about some points, which your last message helped me with a great deal).

I am still unconvinced that FF, as you intend to use it, is anything new. You say:

"anyone looking at a readout of the LHC can see resonances, etc.)" Well, not easily or right away of course. The judgement "there goes a resonance (or electron, etc." is only an empirical and obvious one to people with a great deal of training and theoretical understanding. Now the theory with which such observation-statements are laden is itself backed up by a large number of other empirical statements, and no doubt when you get down to the bottom they can all be ultimately reduced to things that, taken individually, "anyone" can see. But again, this hardly differs from the claims of standard moral theories. Even stark raving intuitionists (God help us!) claim that if you *really* looked at your own intuitions, you would see that their own claims about them (both normative and meta-ethical) are true, that everyone shares them; and if they don't seem true to you, it's because you're looking at them in the wrong way, your judgments are clouded by egoism, false theories, etc. There are books full of such explanations. Now: these explanations might not be adequate; you and I will doubtless disagree with many of them. But the devil's in the details. You can't reject them just by saying "your claims about intuition (or the judgments of ideal observers, rational contractors, the constituents of rational agency, etc.) aren't ones that everyone immediately agrees with." Because you concede that this isn't true for sound physical theories, and isn't required by FF when you specify this more precisely. I disagree with your suggestion that most other theories don't already completely agree with FF under their own descriptions; they may fail to live up to it, but that's quite another thing.

"[FF] does not permit basing theories on what some people 'think' the facts are, when others disagree. It requires basing theories on things we can *all* agree to be facts." But here again you equivocate between whether the crucial question is that others *actually* disagree, or whether they *can* [which I think is the word that really needs emphasis in the second clause] all agree (of and course, do so /rationally/--not through coercion, etc.) Perhaps this is merely verbal carelessness I should not hold you to; and yet the former really does seem crucial in your quick dismissal of opposing views.

"Error-theorists, quasi-realists, Kantians, etc., all base their theories on what their investigators 'think' the facts are, despite the fact that not everyone agrees that they are facts at all." Of course they do; and so do you. There is disagreement over whether a desire for X gives anyone a reason to pursue X; X might be genocide, for instance. Now, your initial principle qualifies that as an "instrumental reason," and when so qualified I will actually agree (I am deeply annoyed by the all-too-common tendency to simply argue about whether or not such-and-such a fact gives an agent a "reason" to act, as if there was only one kind of reason, an assumption which leads to enormous confusion). But then instrumental reasons alone aren't terribly interesting, and almost [again, excepting a subset of philosophers I strongly disagree with] everyone agrees that instrumental reasons by themselves do not ground morality. Again, the question is not whether anyone disagrees on the conclusion, argument, or even the data at first glance; rather, does anyone have a good explanation for why we should accept these after considerable reflection and deeper understanding has been achieved.

But I've largely said all this before. If anyone else is reading this besides the two of us, I would love to hear their input. Even if they haven't read the book, I think that you [Marcus] have made your main argument very clear in the last two messages [and in far less than 105K words!], as I think I have with my misgivings, so a third-party reflection on both would be useful.

Your latest post also makes it very clear--and again, impressively succinctly--how you think that psychological predictions can confirm your theory. I have three (probably predictable) misgivings about that.

First, psychological correlations, even re-use of neural pathways, may not by themselves confirm that two concepts are logically linked; there might be overlap which is not complete, and the subtle differences between them might be crucial. Recall, e.g., the studies showing that people who find spare change in a phone booth are more generous towards others; this can tell us a lot about the connection between emotions and moral behavior, but probably tells us very little about what moral principles we should follow.

Second, additional psychological findings support alternative theories as well or better; e.g., subtle reminders of moral rules, or of the existence and potential judgments of other people (e.g., painted eyes on a wall near a coffee pot whose users are asked to contribute a quarter when they take a cup), encourage moral behavior. Such findings, at least on their face, give better support to various other theories which give pride of place to the (possibly imagined) judgments of *others* on your behavior (ideal observer theory, Darwallian 2nd-personal demands, or my favorite: reflective agency theories) than to your theory, which privileges your own judgments and concerns about your own instrumental success.

Third, it is far from clear that the evidence you point to is best explained by your theory; it often seems to fit alternative theories as well or better. E.g., heightened concern about one's future might simply distance one from one's immediate interests or impulses, encouraging more reflective thinking in general, and hence free the mind from the shackles of the former so that it can consider the interests of others via a universalization principle. Indeed, since it is also possible that I will one day not care about some of the people I currently care about, such radical uncertainty should lead to more immoral behavior if, as you suggest, the mere possibility of having such changed interests gives us reason to respect them, setting aside all questions of their probability; if it doesn't, this would seem to directly undermine rather than support your theory.

But again, I've made this last point before; can anyone else weigh in with an opinion here, based on what Marcus and I have said so far?

Marcus Arvan

Scott: Thanks for continuing the conversation. I'll respond more fully tomorrow, but a quick side-note that might interest you.

You write: "it is far from clear that the evidence you point to is best explained by your theory, and often seems to fit alternative theories as well or better; e.g., heightened concern about one's future might simply distance oneself from your immediate interests or impulses, encouraging more reflective thinking in general, and hence free the mind from the shackles of the former so that it can consider the interests of others via a universalization principle."

Here's the side-note: I actually wrote the first version of this book--defending the same account of Rightness as Fairness--based on a Kantian analysis of agency/universalization of the sort you allude to here (and which I know from your work you are more sympathetic to). I abandoned this type of Kantian approach due to the methodological issues we've been debating here--but, for what it's worth, I still think the theory is defensible on those more standard Kantian grounds.

Anyway, perhaps what I *really* should have done--and might have done if my publisher had let me write a 500-page book like 'A Theory of Justice'!--is argue that *multiple* methodologies lead to the same place (i.e. Rightness as Fairness). I suspect readers such as yourself might have been more amenable to the book had I adopted that multi-pronged approach.

That being said, I still believe that, given the space constraints I had, the instrumental approach I ended up using is best, both methodologically and in terms of fit with emerging empirical results. This was a conscious risk I took as an author. I could have gone with the traditional Kantian/agentic/universalization approach--but it's not (A) the approach that I think is methodologically best (for reasons we have been debating), nor (B) the approach that I think fits best with the emerging empirical data (though we've been debating that too).

Anyway, I will of course have to live with the risks I took here. But, for all that, I remain optimistic that as the science continues to emerge, it will cohere with my account better than with others--just as I remain optimistic that the methodological approach I advocate in Chapter 1 will resonate in time (as I think similar approaches have in most other fields that transitioned from purely speculative to making genuine, demonstrable progress--i.e. physics, biology, psychology, etc.). I think the way all of these fields transformed shows that Firm Foundations is not trivial, and I am hopeful we will find the same is true in moral philosophy.

On that note, what *would* you say if empirical results increasingly converged on *precisely* the picture I paint in Rightness as Fairness--the picture that (1) instrumental rationality is dominant in human normative reasoning and motivation (see below), (2) we experience the normative force of moral norms as a result of wanting to know the future (viz. the problem of possible future selves), and (3) that problem leads us to want to justify our actions to all of our possible future selves, etc.? If that's what the science of moral cognition and motivation shows, would you still not think the theory better confirmed relative to its competitors? If not, why not?

[Another side-note: to see just how dominant instrumental reasoning and motivation appear to be in human behavior, see Daniel Batson's "What's Wrong with Morality?: A Social-Psychological Perspective", https://global.oup.com/academic/product/whats-wrong-with-morality-9780199355570?cc=ca&lang=en& . Basically, he argues that empirical results indicate that non-instrumental reasoning plays little to no role in what we are actually motivated to do.]

Marcus Arvan

I also still have the old 'Kant' version of the book manuscript, btw. Though you're probably sick of reading my tripe at this point (I sincerely appreciate you taking the time to read the book!), I would always be happy to share it with you.

Like I said, I used to be a dyed-in-the-wool Kantian myself--so, if you have any interest in what a more standard Kantian gloss on Rightness as Fairness would look like, I'd be happy to forward it! :)

[I also provide a very brief--albeit underdeveloped--initial sketch of that Kantian picture in my old unpublished paper, "Unifying the Categorical Imperative, and Beyond", which is a longer version of my 2012 article "Unifying the Categorical Imperative". I would always be happy to forward that to you as well, if you'd like a briefer overview of the project's initial Kantian roots.]

Scott Forschler

Marcus, I have a doubtless much earlier version of your "Unifying the CI" paper, perhaps from ROME in Boulder many years ago...I have very strong logical reasons for thinking that the second formula uses a different kind of universalizability test than the other two (this is explained in my most recent Phil Studies article, which is not yet in a numbered issue but is on their website waiting to be assigned to one). So it will be pretty hard to convince me that they can be unified without radically changing them!

As you stated your three psychological hypotheses just now, they actually don't go very far. I already know that instrumental reasoning is very important and looms large in each of our lives...but "dominant"? Well, it depends on what you mean by that. It is pervasive, certainly; but so is universalizable reasoning. Indeed, the two interpenetrate; we try to make our instrumental reasoning universalizable, and often try to bend what counts as universalizable so it fits our instrumental interests (via rationalization & hypocrisy; the tribute vice pays to virtue). So in practice we too often alter each to accommodate the other, though they are distinguishable with effort.

But given this obvious fact, I don't know what hypothesis you are proposing. That we would discover that all our "universalizable" reasoning is prudential-instrumental in disguise, or just isn't even happening in the first place? As well ask me what I would conclude if scientists "showed" that the sun was blue, and always had been. I know what it means for the sun to be blue, but to make this consistent with life so far...well, then I don't know what you mean anymore.

"(2) we experience the normative force of moral norms as a result of wanting to know the future (viz. the problem of possible future selves)" Well, again I don't know why you think "wanting to know the future" is the same as, or the heart of, the problem of future selves. I would rather say that the problem of future selves, viewed instrumentally, is the problem of assigning probabilities to various future interests and maximizing expected utility thereby. I'm puzzled why you think this would be dominated by the desire to "know" my future interest. I mean, sure, that would be nice. It would be nice to have the sun and the moon, too. But I don't, so I settle for probabilities, and think that all prudentially rationally people do the same. Trying to focus only on what we can "know" forecloses very rational calculations of probabilities...which won't get you to morality unless you apply universalizability, making your behavior jusifiable to anyone precisely because it is what you would approve of anyone doing. Put another way, what we do "know" is certain probabilities...so either you'll take those into account, or you mean something different from saying we should focus only on what we can "know" in the context of instrumental reasoning than I do. In any case, I previously suggested an alternative explanation for why drawing one's attention away from one's current, immediate impulses might lead the mind to universalizability constraints.

If you are imagining some much stronger hypothesis, where you could show, somehow, that it leads to morality without going through universalization, then again I am baffled. What if scientists found that, a la Wittgenstein, some of our heads were just full of unorganized sawdust? Or that moral behavior was correlated with thinking of the color purple? Given my reasons for thinking that consciousness and morality are logically connected with quite different things, I simply wouldn't know what to say, and would have to suspect some missing empirical facts here.

"(3) that problem leads us to want to justify our actions to all of our possible future selves, etc.?" That doesn't help either, since, again, justifying our behavior to all our future selves is merely a subset of what is really required--justifying it to all possible agents--which in turn is constituted by justifying it to/for *yourself* given a universalizability constraint (justifying X = approving of X for any agent). But if you don't use univerasalization, only prudential-instrumental critera, then justification *only* to your possible future selves would presumably either have some bias towards yourself, or would explode into indeterminacy or another form of immorality. For my future self might become a fascist, or take significant interest in the species of Martian persimmons which will evolve in their new habitats after terraforming. If I have to justify my current behavior to *those*, and all other, future selves by doing *something* to instrumentally satisfy each of their interests, and can't weight this by either the probability of my taking on such interests, or some prior principles for determining which of *those* choices are justifiable, I will either end up completely paralyzed, or I will end up supporting a range of interests very different from the actual interests of the 7 billion people actually inhabiting my known universe, which only have a small subset of these infinitely-possible ones which your theory suggests I must take into account. So again, if you're suggesting that morality could be "empirically" shown to come from *that* kind of instrumental reasoning, I don't know what you mean. It either wouldn't be morality, or there's something missing in the data or explanation.

Scott Forschler

BTW, just to be clear: while I say that moral principles must be justifiable to all possible agents, that does not mean that they have to instrumentally accommodate the desires of all such agents. That's why my view doesn't run into the problem that I see in your derivation. I believe that the kinds of principles which are so justifiable are precisely those requiring each agent to equally respect the interests of all other agents /in her world/. For, plausibly, this is what each agent would approve of any given actual agent doing--for then such agents would respect *her* actual interests as much as their own. And since this will always be true for any possible agent in any possible world, such principles are justifiable (= are the unique governing standard for principles which the agent can rationally approve of any agent adopting) for any possible agent. But I frankly see no logical mechanism by which your argument can get to a plausible morality, since you focus on instrumental satisfaction of all the possible interests of all your possible future selves. That's either indeterminate, or monstrously different from the actual interests of people in the actual world, unless I'm missing something very crucial in your logic at this point.
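Put schematically (my own shorthand--this notation is in neither your book nor my article): the universalizability standard I am defending is something like

\[
\mathrm{Just}(P) \;\equiv\; \forall a \in \mathcal{A}\;\, \mathrm{Approves}\big(a,\ \text{every agent's acting on } P\big),
\]

where $\mathcal{A}$ is the set of all possible agents; whereas the instrumental standard I read you as using is something like

\[
\mathrm{Just}^{*}(x) \;\equiv\; \forall s \in \mathcal{S}\;\, \mathrm{Accommodates}\big(x,\ \mathrm{interests}(s)\big),
\]

where $\mathcal{S}$ is the set of one's possible future selves. The first quantifies over rational approval of a *principle*; the second over instrumental satisfaction of *interests*--and it is in the second, I am suggesting, that the bias/indeterminacy dilemma gets its grip.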

Marcus Arvan

Scott: Thanks for continuing the conversation. Unfortunately, because it's finals week and I have a ton of grading to do, I don't have time to write much right now. Can we perhaps continue this conversation in detail next week?

A few quick thoughts though:

(1) Surprisingly, your approach of justifying moral principles to all possible agents *is* the approach I defended in my 'Kantian' version of Rightness as Fairness (and really, the approach I defend in the final manuscript--see below). In the 'Kant' version, I argue that the method you're advocating generates the same Four Principles of Fairness that I defend in the instrumentalist final version. Further, as I'll note now, there is a clear sense in which this is what I am still arguing (on instrumental grounds) in the final, published version of the manuscript. I'm arguing that the 'problem of possible future selves' requires you to justify your actions to *every* possible future self you could be, where this is identical to the perspective of every possible agent (viz. the Moral Original Position).

(2) You think my method can't get us to a 'plausible morality.' But notice: at the end of the day, I'm claiming to give precisely the kind of justification you say we should try to give. What, after all, does my Categorical-Instrumental Imperative actually say? Answer: it says (A) that we must be able to justify our actions to *every* possible future self we could be, where (B) I then argue that the set of our possible future selves is *identical* to the set of all possible agents (since there are an infinite number of possible futures, each corresponding to every different agent's perspectives and interests). Accordingly, if my substantive arguments are correct, our two views (yours and mine) collapse into one another! My argument is that the Four Principles of Fairness emerge from a process of justifying one's actions to all possible agents. So, you and I have the same goal (of justifying actions to all agents). It is just that I argue--on methodological grounds--that the best way to think about this very notion is instrumental, and that substantively, the process leads to the principles of Rightness as Fairness.

(3) Finally, and most importantly, it is a fundamental part of my project (as I make clear in the manuscript on multiple occasions) that I think we should *not* attach substantial weight to commonsense assumptions about what "morality" must be to be "morality." Physicists and philosophers prior to relativity *insisted* that space and time must be absolute. Einstein's response was: sorry, that's not what the data shows. Philosophy and science done well should not be in the business of reifying our preconceptions about what morality is 'supposed' to be. We should follow the data where it leads--even if it leads to places we don't want to accept. This is what good science (viz. Copernicus, Galileo, Einstein, quantum mechanics, etc.) has always done: show that our preconceptions are dead wrong. We should be open to it in the moral domain too--or so I argue in the book.

Scott Forschler

"you and I have the same goal (of justifying actions to all agents). It is just that I argue--on methodological grounds--that the best way to think about this very notion is instrumental."

I think that's right. But I'm still really puzzled as to why you think an instrumental justification is justifiable. The double dilemma I've now mentioned twice remains: that approach either (A) biases towards your actual interests if we use probability, or is (B1) indeterminate or (B2) radically indiscriminate in what "interests" (mostly held by no actual persons, and often radically opposed to them) are thereby promoted. You reject probability here, which should lead you to B1 or B2; you don't endorse either, I think, but I don't see how you can escape this choice. This is the real nub of my objection to your argument; all the concern over FF and the other steps preceding this move really only matter insofar as they shed any light (which frankly they haven't so far, for me) on why you take the later steps that you do. Help me out here--again, I've read the book, but I really don't understand how you get from "plan instrumentally to satisfy all possible interests of all possible people (because I might become or come to care about them)" to "respect the actual interests of actual people, and treat them fairly." If that, or something like it, is what you do, how do you avoid B1 or B2 along the way? I see you trying or wanting to do this; I don't see how you do so, and fear equivocation at one or more steps which I can't quite grasp.

Rejecting probability, and rejecting universalizability in favor of purely prudential-instrumental reasoning, both strike me as radical violations of your FF, since we palpably do use both in our moral reasoning, and surely most people will agree when the question is framed properly. Indeed, there's a strong analogy between your moves here and one of the most plausible-appearing extant theories for deriving morality from instrumental reasoning alone, namely Gauthier's contractarianism. He begins by implausibly identifying morality and prudence, then argues implausibly that it is impossible to be a rational knave, and so the prudentially rational person will act morally, pretty much in the way prescribed by universalizable principles. Two mistakes to get back to where you could get without any mistakes, IMO. Which is telling. Different details than yours, but similar in telling us that you can get blood from a turnip, as long as by "turnip" we understand something very different from what most people think turnips are; when you compound this by insisting that you're only using "firm foundations" about what everyone understands turnips to be, I am baffled by both moves (picking up the "turnip", and squeezing "blood" out of it). I'm not trying to be mean here, you understand; just trying to show by analogy how the project strikes me, in hopes that you can see the logic of my objection more clearly.

"we should *not* attach substantial weight to commonsense assumptions about what "morality" must be to be "morality.""

Well, it depends what you mean; I certainly am no Gertian or intuitionist, uncritically accepting common sense. When, e.g., Benatar argues for antinatalism, I think this is worth paying attention to; it's not automatically wrong just because it feels wrong. Both unfamiliar decisions/contexts, and powerful instinctual drives, can bias our moral judgments against sound reasoning. But if you came up with a theory that said purple alone is good, or that we should whack people in the head for fun, I would suspect a mistake somewhere. If you come up instead with a theory that said we should instrumentally satisfy all possible interests, and then said that in practice this means respect the actual interests of actual people around you--and I *think* you are saying something like that--then I suspect a double mistake. Not in the conclusion, which is quite reasonable, but in the logic of both steps, which seem to first go in what is clearly the wrong direction and then take it back in some way I don't understand in order to reach something that's actually fairly sensible. Now if instead you just took the first step, and actually thought we should respect all possible interests period, even if that steamrolls over many of the actual interests of actual people (just because I *might* one day want to commit genocide, or study Martian persimmons, and so need to plan ahead to facilitate these amongst so many other possibilities, hence doing less than I otherwise could to hold doors open for people or cure cancer, which shouldn't be favored on the merely contingent grounds that this would benefit the actual interests of living people!), then just like "purple is good," I would suspect a serious mistake. Morality could vary a bit from common sense, but not *that* much. :-) Now if I didn't have a logical argument for the approvability of principles that respect all actual interests equally from my own actual point of view, then I would admit my rejection was a little weak; but since I've got that too, I'm not so worried.

Marcus Arvan

Got it - sorry I haven't gotten to the dilemma! Like I said, I've been super swamped with grading, so I've sort of been replying quickly to what I can. After I get all my grading done this week, I'll type up a reply to your concerns early next week (probably by Monday).

My apologies for not being able to continue the conversation until then--but I have five classes I need to get grading done for by the weekend!

Marcus Arvan

Hi Scott: Sorry again for taking so long to respond! I think you raise very good questions, and here are my thoughts.

You write: "I already know that instrumental reasoning is very important and looms large in each of our lives...but 'dominant'? Well, it depends on what you mean by that. It is pervasive, certainly; but so is universalizable reasoning. Indeed, the two interpenetrate; we try to make our instrumental reasoning universalizable, and often try to bend what counts as universalizable so it fits our instrumental interests (via rationalization & hypocrisy; the tribute vice pays to virtue)."

My reply: When I say that instrumental reasoning is dominant, I mean it is *motivationally* dominant. Batson and others have basically found that non-instrumental reasoning plays little to no role in what we are motivated to actually do--that "moral motivation" mostly doesn't exist.

Now, you might say, "That's fine. Moral normativity and moral motivation are two very different things."

However, this is precisely the kind of divide--between the normative and motivational--that I argue we should want to deny, both for methodological and practical reasons. Methodologically, my point in the book is that we can make an *epistemically* better case for morality on purely instrumental grounds (since instrumental rationality satisfies Firm Foundations, etc.). My further point is that an instrumental approach has practical advantages as well--as it is more likely to give people *motivational* reasons to behave morally. This is no small thing. One of the more embarrassing things about moral philosophy, in my view, is its inability to actually motivate people (professional ethicists have been found to behave no better than laypeople). The best kind of moral philosophy--I argue--should unify the motivational and the normative, bringing them together. As Marx once said, the aim of philosophy should not just be to describe the world, but to change it.

My claim is that insofar as something like 90% of human motivation is instrumental (Batson found in a variety of studies that when self-interest and moral principles conflict, people choose self-interest about 90% of the time), an instrumentalist moral philosophy is not only methodologically preferable (for reasons already discussed), but also the most *realistic* theory for making a better world (insofar as it engages most directly with what people are actually motivated to do). [Neil Sinhababu makes a similar argument in his new book Humean Nature--that approaches to moral philosophy not based on desire effectively divorce moral reasoning from motivation in a way we should find problematic.] Finally, in basing everything on actual motivation, my theory makes clear predictions about how to improve moral behavior (viz. stimulate people to care about their possible future selves, etc.)--predictions that I still do not think other theories make (though I know you disagree here). Anyways, moving on...

You write: "But if you don't use univerasalization, only prudential-instrumental critera, then justification *only* to your possible future selves would presumably either have some bias towards yourself, or would explode into indeterminacy or another form of immorality. For my future self might become a fascist, or take significant interest in the species of Martian persimmons which will evolve in their new habitats after terraforming. If I have to justify my current behavior to *those*, and all other, future selves by doing *something* to instrumentally satisfy each of their interests, and can't weight this by either the probability of my taking on such interests, or some prior principles for determining which of *those* choices are justifiable, I will either end up completely paralyzed, or I will end up supporting a range of interests very different from the actual interests of the 7 billion people actually inhabiting my known universe, which only have a small subset of these infinitely-possible ones which your theory suggests I must take into account."

My reply: I'm a bit baffled by your suggestion that I "don't use universalization." The very point of Chs 3-5 is that (A) the problem of possible future selves makes acting on the Categorical-Instrumental Imperative instrumentally rational, and (B) the Categorical-Instrumental Imperative is a universalization principle whose satisfaction-conditions are given by a Moral Original Position containing *all* agents (which, again, is what you seem to think universalization requires--justifying one's actions to all possible agents). So, I'm not too sure what the concern there is.

That being said, I do understand the dilemma you present. Indeed, it is one of the features of the book that has long troubled me, and which I am currently writing a few papers on!

First, I do think my picture does indeed entail a great deal of moral indeterminacy, and for roughly the reasons you give--that insofar as we cannot actually negotiate with merely possible selves, we nearly always *risk* being unfair to them. However, I intend to argue that this is actually a feature rather than a bug of the theory (there are some fascinating parts of the Bible, for instance, which suggest that "nobody knows the right thing to do" the vast majority of the time--which I plan to argue is basically right, and which Rightness as Fairness makes good sense of!). Basically, I want to turn your concern here on its head, arguing that existing moral theories don't make morality indeterminate enough--because it *is* indeed possible to screw over future selves, given that we cannot actually negotiate with them (I suspect you're skeptical, and if so, fair enough. But this is the fun part of philosophy, right? Part of what I hope my book does is open up a variety of new lines of inquiry--including this one: the extent of moral indeterminacy. Anyway, I'll be curious to see what you think if/when I end up publishing this stuff. It's currently in early draft form).

Second, on that note, I hope to argue in detail in future work that although my account entails immense indeterminacy, negotiation with actual people is the most *likely* way to ensure that one doesn't screw over possible future selves. Although this does introduce probability, I intend to argue that it does so only at a higher-order epistemic level: recognizing that from a first-level ethical perspective most of our decisions really are morally indeterminate, while holding that we have *better epistemic grounds* for thinking that negotiating with actual selves is likely to lead to actions within the scope of the moral indeterminacy (viz. negotiation with all possible selves) rather than outside its scope. In this regard, the view I aim to defend here is akin to the difference between actual and expected utility forms of utilitarianism (where actual utility defines what is right and expected utility defines our *best estimation* of what is right). On the account I want to defend, our choices are morally indeterminate--since negotiation with all possible selves can go innumerable ways--yet negotiation with actual selves is the best way to ensure that we *approximate* the negotiated compromises that all of our possible selves would arrive at were we actually able to negotiate with them.
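To make the analogy explicit (a rough schematic only--this notation appears nowhere in the book): for the utilitarian,

\[
\mathrm{Right}(a) \;\equiv\; a \in \arg\max_{a'} U_{\mathrm{actual}}(a'), \qquad \mathrm{BestEstimate}(a) \;\equiv\; a \in \arg\max_{a'} \sum_{o} p(o \mid a')\, u(o).
\]

My suggestion is parallel: negotiation with *all possible* selves plays the role of $U_{\mathrm{actual}}$--it fixes what is right, but is inaccessible--while negotiation with *actual* people plays the role of the expected-utility estimate: our best epistemic proxy for it.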

Of course, this might not work--but it is what I am thinking about these days. In other words, I think your concerns here are excellent, and precisely the right ones to have--but I'm optimistic I can meet them! :)

Scott Forschler

I was certainly imprecise earlier; in saying you eschew universalization what I meant is that you don't grant it a fundamental role in moral reasoning; you purport to derive it from prudential reasoning regarding one's indeterminate future selves. So in the end you certainly do use it, but it feels like a rabbit coming out of a hat (or blood from a turnip), rather than something that follows logically or is earned by the argument.

I'm not familiar with Batson, but don't contest the findings; doubtless we are very selfish creatures when push comes to shove. And yet there's the 10% of the time when we aren't; and of course if the rest of the time we appear to act morally through various meta-stratagems of aligning prudence with morality, including constructing/supporting institutions which help do so, etc., this must take some trouble which requires some motivation as well. The question is, where does *this* motivation, however small or large it looms in our lives, come from? I have trouble seeing how this could arise from the arguments you're making, or how focus on my indefinite possible future selves would increase the motivation. Now if it does, wonderful--let's put it in the water, so to speak. But even if it did, I'm not convinced that you've presented the mechanism by which it would do so.

You say you'd like to show that "negotiation with actual people is the most *likely* way to ensure that one doesn't screw over possible future selves". This is precisely what I find logically implausible given that our future selves are *not* utterly indeterminate, and aren't equivalent to actual people around us anyway. Perhaps pretending that they are, in both cases, would do so. But I'd rather have a theory which didn't involve pretence, which seems unstable to me (and not what you really think you're using anyway). But since you concede that this is indeed an area that needs work, I think we have to leave it there for now.
