I was at the annual Central States Philosophical Association conference this weekend (a really great conference, I have to say!), and attended a very interesting talk by Sandy Goldberg on whether we should hold controversial philosophical beliefs. I'm not going to summarize the talk, as I don't know whether Sandy would appreciate that. :) But I would like to discuss the issue in general terms, as it's something that's rattled around in my head for a number of years, and I suspect it may have rattled around in many readers' heads as well!
Just the other day at Daily Nous, there was a discussion about telling students our philosophical beliefs. We've discussed this issue here before too! In short, some people think it's okay (e.g., it's honest, and it makes the classroom more exciting), whereas other people don't think it's so okay (e.g., it corrupts the ideal of dispassionate, fair inquiry, and plays into faculty-student power relations). I don't want to rehearse these concerns here. But of course there's another reason why it might not be a good idea to share one's beliefs in the classroom: perhaps one shouldn't have beliefs about controversial philosophical issues at all. Allow me to give some reasons in favor of this idea.
Consider any controversial philosophical issue you like--say, the metaphysics of causation. On any such issue, there will be a number of mainstream philosophical views with strong arguments in their favor. Moreover, whatever view one has of the arguments, one has epistemic peers in the discipline--really, really smart people who have thought through the arguments just like you have--who hold very different views than you do. Further, as our own Helen De Cruz notes here at NewAPPS, it's well-known that people's beliefs tend to bias them. In particular, people tend to be notoriously biased in favor of whatever beliefs they actually hold at any given time. Because of these facts, one might think, none of us are epistemically entitled to hold controversial philosophical beliefs. We should remain agnostic about controversial philosophical issues because (a) the arguments on controversial issues are disputed among our epistemic peers, and (b) holding beliefs on uncertain grounds tends to bias one in epistemically problematic ways.
In addition to these epistemic worries--that none of us are justified in believing controversial philosophical hypotheses--there are some natural moral worries to have about believing controversial things without clear evidence. When we believe things, after all, we tend to espouse them. When it comes to moral or political issues, for instance, a person who really believes controversial philosophical ideas (e.g. about libertarianism, egalitarianism, etc.) is likely to convey those beliefs not just to other philosophers, but to other people (e.g. at cocktail parties, public debates, etc.). The worry, in other words, is this: there's something wrong with believing controversial philosophical hypotheses because doing so is apt to lead others to believe the same things (on epistemically uncertain grounds!). To put it another way: how is it okay to espouse beliefs that you're not epistemically entitled to believe, and that your audience wouldn't be epistemically entitled to believe either?
I've put these worries abstractly, but I take it they are probably familiar enough to most of us, at least implicitly. I ask myself, "Should I really believe in dualism given that (A) I used to be a physicalist, (B) many really super-intelligent philosophers see the arguments differently, etc.? Should I really believe my views about moral theory--the views I defend in the book I just finished--given that (A) the arguments surely aren't proofs (few philosophical proofs exist anywhere!), (B) readers will be apt to find flaws in the arguments, see things differently, etc.?" When I put things this way, it seems intuitive to me that I shouldn't believe. And yet...when I consider why I hold the views I do, I hold them because, in my best judgment, the best arguments support them. In which case it seems intuitively like I should believe my views (it would be epistemically and morally disingenuous not to!). What is a morally and epistemically responsible philosopher to do?
I've come across some philosophers who think we should be agnostic--that one should never truly have beliefs about controversial philosophical issues. I want to suggest, however, that there are moral and epistemic costs to philosophical agnosticism.

First, it seems to me dishonest. If one defends something in print, one should believe what one defends. Indeed, I want to say, one should not only believe it. One should live one's beliefs. Anything less, again, seems to me to be dishonest. For example, I recently drafted a book defending a new moral theory. Suppose I wrote the book but was totally agnostic about what it defends. Then suppose a reader asked me, "Do you actually believe the theory you defend? Do you live it?" It seems to me I would be a monster if I answered either of these questions, "no." Of course I think it's good and right to admit some uncertainty. To say, "I know the theory is right" would be hubris. But, for all that, it seems to me, one should at least believe the views one defends.

Second, I think there are epistemic hazards to agnosticism--especially in moral and political philosophy. For, in my experience, one way to test a view is to believe, and live, it. Socrates lived his views, and his example illustrates both the compelling points of his philosophy (he meets his death with stoicism) and its weaker points (he is woefully obtuse to his wife's suffering). Similarly, I think Nietzsche's life is pretty telling vis-à-vis his philosophy. Etc. Philosophy should not be an agnostic shell-game. It should be something we test against lived experience...and one can only truly live a philosophy by believing it. Of course, this isn't to say that we should live every philosophy, or that every philosophy has to be lived to be tested (I certainly wouldn't want anyone to live Evola's philosophy, for instance). It is simply to say that some theories in some areas cannot be tested without someone living them.

Finally, it seems to me that so much of value to individual lives is lost without philosophical belief. Philosophy for me, for instance, really has been a wonderful, wild journey. Many of my philosophical views have changed dramatically over the years, and many of the changes have profoundly affected my life. Again, this isn't to say that I'm sure of my beliefs on controversial issues. Far from it! But given the effects that our philosophical beliefs can have on our lives, aren't these good reasons to have them (at least provided we don't harm others on their basis)?
"Moreover, whatever view one has of the arguments, one has epistemic peers in the discipline--really, really smart people who have thought through the arguments just like you have--who hold very different views than you do."
Hi Marcus,
I think this is controversial. There's an argument I've been toying with for some time that I'd offer in response. Let's distinguish two notions of peer:
* Peer 1: x is a peer 1 wrt p iff x has all the same relevant evidence as you and all the same squishy stuff (i.e., just as conscientious, responsible, informed, interested, etc.).
* Peer 2: x is a peer 2 wrt p iff x has all the same squishy stuff wrt p.
I take it that having the same squishy stuff doesn't ensure evidential duplication. If we read your comment as concerning peer 1s, I think that it's false. If we read it as concerning peer 2s, I think that it's true but that it won't support any troubling skeptical consequences.
On the first reading:
P1: Whatever view you have on the topic, you have a peer 1 who disagrees with you.
P2: If you have a peer 1 who disagrees with you about p, you ought to suspend judgment.
C: You ought to suspend judgment on the relevant topic.
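Just to make the shape of the argument explicit: the inference itself is nothing more than instantiation plus modus ponens, so all the skeptical force sits in the premises. Here is a minimal Lean sketch of that shape (my own illustration; `Peer1Disagrees` and `OughtSuspend` are hypothetical placeholder predicates, not anything from the original argument):

```lean
-- A minimal sketch of the argument's logical form (an illustration,
-- not part of the original discussion). `Topic`, `Peer1Disagrees`,
-- and `OughtSuspend` are placeholders for the informal notions above.
variable (Topic : Type) (causation : Topic)
variable (Peer1Disagrees OughtSuspend : Topic → Prop)

example
    (P1 : ∀ t, Peer1Disagrees t)                   -- P1: every topic has a disagreeing peer 1
    (P2 : ∀ t, Peer1Disagrees t → OughtSuspend t)  -- P2: peer-1 disagreement mandates suspension
    : OughtSuspend causation :=                    -- C: suspend judgment (e.g., on causation)
  P2 causation (P1 causation)                      -- instantiation + modus ponens
```

The sketch only confirms validity; whether to accept the conclusion turns entirely on whether P1 and P2 are true, which is where I take aim below.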
I take P1 to be false because I take it that your evidence includes all and only what you know. If E=K, P1 is really just this:
P1*: Whatever view you have on the topic, you have someone who disagrees with you who knows as much as you about the topic.
On that reading, neither of you knows p. Now, I agree that if you don't know p, you ought to suspend judgment on p, but why should we grant the first premise that says that you don't have knowledge in the relevant range of cases? That should be the conclusion of a skeptical argument, not its starting point.
On the second reading:
On that reading, P1 is probably true, but why should we think that if someone has the same squishy stuff as you AND has less relevant evidence than you, you ought to suspend judgment when you discover that this responsible but ignorant person disagrees with you? If you've got knowledge, use it! Put it to work! If you put it to work, it should justify beliefs that others cannot justifiably hold because they don't have the right evidence. If they had the right evidence, they'd be on your side! If I'm allowed to spot myself the possibility of knowledge in philosophy, I'll do it, and I'll argue that the possibility of sameness of squishy stuff isn't enough to show that we ought to suspend judgment on controversial matters. It's a bit weird, isn't it, to say that someone ought to suspend judgment on something that they know to be true just because some careful but ignorant person with less evidence disagrees with them.
I take it that the standard way to try to defuse my way of trying to defuse the argument is to attack E=K. I think E=K is actually in much better shape than people think, but I appreciate that it's controversial. If it's false, I suppose it's false because it's either too stingy or too liberal. If we weaken E=K by saying that K is too demanding, that doesn't undermine the point that I'm trying to make. If anything, this more liberal conception of evidence allows us to say that there's a wide range of cases in which peer 1s will have superior evidence that their colleagues don't have. If you tighten up the possession requirements for evidence so that knowledge isn't sufficient, you end up with the rather silly view that there's something wrong with treating known truths as reasons for believing things.
That could all be cleaned up, but I think that on one notion of peer, anyone who thinks that philosophical knowledge is possible should think that disagreeing peers are hard to find in the relevant range of cases. On the other notion of peer, peers are prevalent but they aren't terribly threatening.
Posted by: Clayton | 10/13/2014 at 05:25 PM
Clayton: I'm actually pretty sympathetic to that line of thought, and it was my initial response. I tend to believe my philosophical views only when I think I have strong evidence (e.g. new arguments) that dissenters have either overlooked or misunderstood--in other words, when I think I have good evidence that they don't have.
But of course here's a problem--one that I think arises especially if E=K. E=K is sort of an externalist view. From the *inside*, two people might hold different views and each think they have the same evidence/knowledge, but only one of them actually does (the other is mistaken). The problem then is this: how is one supposed to know from the inside whether one is the person with the evidence/knowledge or the person without it?
The problem here is: how does one know whether one is an epistemic peer in the relevant sense (e.g. sense 1) or not? It looks like the only way to assess one's standing is to pound the table and say, "Dammit, I have evidence/knowledge that the others don't have. I'm right, they're wrong. And here are the arguments that show it"...when all the while one's opponents think exactly the same thing in their own case! :)
Posted by: Marcus Arvan | 10/13/2014 at 06:16 PM