Thanks again to my hosts here at the Cocoon for this great opportunity. Today I’d like to continue the conversation about “The Problem of Contingency for Religious Belief.” The problem, as you’ll recall from my first post, is that our religious beliefs have been shaped to a disturbing degree by factors that are completely on the wrong side of the question, factors like when and where we were born, who our parents were, which peer group we admired most, etc. (See my first post for some statements of the argument from John Hick and Philip Kitcher.)
In my own case, there’s a lot of Catholicism on the Cuban side of my family. But I was raised an anti-Catholic Lutheran. I once wondered why. My grandmother gently told me that, when my mother was young and church-ready, a Lutheran church was nearer to their house than any Catholic church. I felt dizzy. My closely-held Lutheran beliefs were ultimately the result of a real estate decision and a love of convenience. Things easily might have gone otherwise. Had my grandparents bought a different house, I might have been raised an anti-Lutheran Catholic. How could I sensibly hold onto my Lutheran beliefs in light of that information? (I later converted to Catholicism for Lutheran reasons, thereby restoring balance to the universe.)
But did learning of the contingency of my religious beliefs rationally require lowering my confidence in those beliefs? That’s the skeptic’s song: because of the shady way in which they were formed, religious beliefs do not rise to the level of knowledge even if they’re true.
It’s a tempting thought, and I’ve felt its allure. But, ultimately, I think this skeptical argument should be resisted. The problem is getting clear on exactly which epistemic principle our religious beliefs allegedly violate. Today we’ll explore some candidate virtues that are often taken to be necessary for knowledge and which our religious beliefs may plausibly lack: sensitivity, safety, and non-accidentality.
Suppose Smith believes truly that p (e.g. it’s 70˚F in here) on the basis of some method (e.g. checking her thermometer). To say that Smith’s belief is sensitive is to say that, had p been false, Smith would not have believed via this method that p. To say that Smith’s belief is safe is to say that, were Smith to believe that p via this method, p would be true. (Or, alternatively, that not easily would Smith have believed falsely via that method.) To say that Smith’s belief is non-accidentally true is to say that, even if it’s an accident that it’s 70˚F in here (we intended it to be cooler, say), and even if Smith has that thermometer by accident (she stumbled upon it, say), and even if Smith is alive to consider the question by accident (an assassination plot just failed, say), there’s no accidentality “between the subject and the fact,” as Peter Unger would put it: it is not at all accidental that Smith is right about its being the case that p.
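For those who like these conditions in symbols, here is a rough rendering. (The notation is my own gloss, using “□→” for the subjunctive conditional and “B(p)” for Smith’s believing that p via the method in question; nothing below hangs on it.)

Sensitivity: ¬p □→ ¬B(p). Had p been false, Smith would not have believed, via this method, that p.

Safety: B(p) □→ p. Were Smith to believe, via this method, that p, p would be true. Put another way, in the nearby possible worlds where Smith believes that p via this method, p is true; not easily would she have believed falsely via it.

Non-accidentality resists this sort of compact formula: it is the requirement that, whatever accidents lie elsewhere in the story, there be no accident in the connection between Smith’s belief and the fact that p.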
So maybe the problem of contingency for religious belief, in argument form, proceeds like this. Each of us starts by noticing the historical contingency of his or her religious beliefs:
1. If I had been born and raised elsewhere, elsewhen, and formed religious beliefs using the same method I actually used, then, by my own lights, I easily might have believed falsely.
And perhaps this contingency is meant to convince us that our religious beliefs lack an important epistemic virtue:
2. Therefore, my religious beliefs were not formed sensitively, or safely, or non-accidentally.
And perhaps we’re meant to take at least one of those virtues as necessary for knowledge, concluding:
3. Therefore, my religious beliefs don’t count as genuine knowledge.
It’s a formidable argument! Savor it for a moment. Take it out for coffee. Get a crush on it.
Unfortunately, I don't think it's right for you. There are three tiny problems with it: both inferences are invalid, and the skeptic who wields it either self-defeats or targets very few religious believers. (Other than that, it's solid gold.)
First, the self-defeat-or-narrow-scope problem. Even religious skeptics like John Hick and Philip Kitcher have beliefs on religious topics. Hick was a self-described “pluralist,” while I presume Philip Kitcher has mainly “negative” beliefs on religious topics: there’s no God, there’s no afterlife, etc. But these views are not and have not historically been popular: Hick and Kitcher easily might have held different views, had their biographies differed. So premise (1) seems as true for Hick and Kitcher as it does for Pope Francis. But then the skeptic who wields this argument will himself fall into its grinding maw; it’s self-defeating.
The skeptic can find a way out, but it comes at a price. In their statements of the skeptical argument, Hick and Kitcher are careful to specify that the argument targets only religious beliefs that have been instilled into one since childhood (Hick), or that one has received through early teaching and socialization (Kitcher). That’s the questionable method mentioned in premise (1), they might say, and they arrived at their beliefs via a superior method: rational reflection. They may thereby carve a loophole in premise (1)—claiming that (1) is false for them and their superior method—and thereby escape the problem of self-defeat.
But they escape self-defeat at the cost of severely narrowing the scope of the skeptical argument. After all, not all religious believers hold their views as a result of “early teaching and socialization.” Very many of them have rationally reflected on their religious beliefs. If rational reflection exempts Hick and Kitcher from premise (1), it also exempts these very many religious believers. So the skeptical argument has less bite; it casts a smaller net than the skeptic may have hoped. It has no grip on anyone reading this post, or on anyone who has ever reflected on anything remotely like this post. That’s a cost. And so the skeptic has a serious self-defeat-or-narrow-scope problem.
There’s a second problem with this argument: the inference from (1) to (2) is invalid. Here’s a case to show why. Suppose that the infamous Evil Epistemologist has poisoned the world’s water supplies with a drug that radically impairs human cognitive faculties. Once exposed to the drug, all of one’s faculties become completely unreliable, unsafe, insensitive, accidentally-right-if-right-at-all, etc. However, a benevolent nurse used his only dose of antidote to immunize you in the maternity ward. Your faculties are therefore safe from the poison and function normally as you mature into adulthood, producing beliefs sensitively, safely, and non-accidentally. (While it may be an accident that your faculties were preserved by that nurse, given that they were there will be no toxic accidentality “between you and the facts,” as Unger would put it.)
But: it is true that, had you been born in a different time and a different place, and used the same faculties and methods you actually used, you easily might have believed things that would be, by your own lights, false. So we have here a counterexample to the general form of the inference that is meant to carry us from (1) to (2). The fact that something might have happened in the past which would have rendered my faculties unsafe/insensitive/accidental does not entail that my faculties are actually unsafe/insensitive/accidental when I use them. And the same may go with the methods by which I formed my religious beliefs.
Finally, a third problem for this argument: the inference from (2) to (3) is invalid since not-a-one of those virtues is required for knowledge. Let’s start with sensitivity: it’s not required for knowledge, and here’s a case to show why (from Comesaña 2005, but originally due to Sosa 2000 who credits Vogel 1987):
GARBAGE CHUTE: I throw a trash bag down the garbage chute of my condo. Some moments later I believe, and know, that the trash bag is in the basement. However, the closest possible world where my belief is false is plausibly one where, unbeknownst to me, the bag is stuck somewhere in the chute, and I still believe that it is in the basement.
In GARBAGE CHUTE, my belief that the trash bag is in the building’s basement is not sensitive, yet it counts as knowledge. (This is a classic case that many accept, but I think it requires tweaking. Other cases work well, however.)
Next, here’s a general recipe for whipping up counterexamples to the alleged safety condition on knowledge: first, pick the most virtuous belief-forming method you can imagine, and have a subject form a belief via that method. In the original counterexample (Bogardus 2014), called “Atomic Clock,” a subject named Smith formed a belief about the time on the basis of the world’s most accurate clock. Second, add a twist of fate: put the method in danger of malfunctioning, but let the danger remain purely counterfactual. In the original example, Smith’s clock was atomic, and it was imperiled by a nearby radioactive isotope. The isotope was due to decay at any moment, and were it to decay it would stop the clock (or even just slow it down significantly), rendering it unreliable.
Now, since that danger remains purely counterfactual—since the clock could have malfunctioned but in fact remained the world’s most accurate clock; since things could have gone less well epistemically but didn’t—it’s quite tempting to allow that Smith knows the time on the basis of the clock. And yet, one might think, her belief in this scenario is not formed safely, for there are many nearby possible worlds in which she forms a false belief on the basis of that clock, worlds in which the isotope has decayed and the clock has stopped or slowed. It’s false that, were Smith to believe via that clock, her belief would be true. Very easily could she have believed falsely via that clock. And so Atomic Clock seems to be a counterexample to the alleged safety condition on knowledge.
Finally, non-accidentality is not required for knowledge. One can know that p even if there is a troubling accident “between the subject and the fact.” I present a lengthy argument for the conclusion in the paper (having to do with swamp-people). A briefer version of the argument might borrow John Hawthorne’s (2002) swampwatch, a particle-for-particle duplicate of the world’s most accurate wristwatch “created by a fortuitous coming together of molecules.” Swampwatch reports the time but, given its birth from chaos, its reports are not aimed at the truth or anything else. And so, if a subject were to believe truly that p on the basis of swampwatch’s reports, it would be an accident—in a familiar and legitimate sense of the word “accident”—that the subject is right about p’s being the case. And yet, Hawthorne says and you may well agree, when a subject “uses his swampwatch to inform him about the time... we are intuitively ready to say that [she] knows what time it is.” If you share Hawthorne’s intuition, as I do, then we have here a case in which S knows that p and yet it is accidental that S is right about p’s being the case. In which case, contrary to Unger, knowledge can be accidental.
But if knowledge doesn’t require sensitivity, safety, or non-accidentality, then the inference from (2) to (3) is invalid. We have three reasons, then, to reject this version of the problem of contingency for religious belief: the self-defeat-or-narrow-scope problem, (1) to (2) is invalid, and (2) to (3) is invalid.
(In the paper, I consider one final version of the argument, an “argument from symmetry.” But in the interest of space, I’ll omit it here.)
We have attempted to lay out the problem of contingency for religious belief as forcefully as possible, and the argument fails badly. The contingency of our religious beliefs does not show that they were formed insensitively, unsafely, or accidentally; and even if it did, none of these is required for knowledge. And the skeptic who handles this argument either self-defeats or targets only unreflective religious believers. It may well be, then, that there simply is no problem of contingency for religious belief. Or, if there is, that it needn’t worry many people.
But what do you think, Cocooners?
Hi Tomas, forgive the enormous comment. Please take its size as a compliment. :)
1. The scope-narrowing retreat to rational reflection is an interesting move. The first thing to note about it is that the question of how many religious believers believe on the basis of 'rational reflection' is both open and empirical.
Moreover, surely this cannot be just any old rational reflection. It has to (somehow) justify some of the more important bits of the religious system, in particular, those that distinguish it from other possible belief-systems that the believer might have adopted (Christianity vs. Hinduism, vs. Judaism, etc.).
So, the real question is this: how many believers have subjected these "definitive" beliefs to rational reflection by seeing if they can be justified a priori? This number, I suspect, will be much smaller than you want it to be. And there is a final debunking threat: famously, rational reflection can feel like it is isolated from irrelevant influence when in fact it is not. I may confidently arrive at an affirmation to the conclusion of the ontological argument, all the while not realizing that it is the semi-conscious fear of disappointing my religious mother that motivates me to do so.
2. It is instructive to think some more about the 'evil epistemologist' case from the perspective of the child herself. She grows up and notes that everyone around her seems to have false/unjustified beliefs. She goes through the records and finds the Evil Epistemologist's plans. She thinks: "They've all been poisoned!" This thought is followed by the more troubling: "Or, perhaps *I* was the one who was poisoned, and the reason these people seem so bizarre to me is that my faculties are useless!" How does she decide between these options in a non-question begging way?
It's a tough problem, but that's not the point I want to make. The issue I have with thought experiments of the sort you've given here is that we are invited to adopt a third-personal perspective on them which allows us to simply *assume* that her faculties are fine. But this is precisely what we do not have in real life (if we did, there would *be* no problem of the sort we are discussing). All we (and our hero) have is the knowledge that other people are very different and that if we'd been born elsewhere, we'd be like them. From her perspective, her beliefs look anything but safe, sensitive, etc., and their safety and sensitivity is an issue she must decide using the very faculties in question. The bare logical possibility that they are safe is of little comfort or relevance.
3. Finally, I have more general worries about the epistemology stuff, worries that revolve around the idea of conceptual analysis. Famously, there is no widely accepted account of the extra Gettier-condition on knowledge. Some of us think this means we have to look harder. I think that this means that an analysis of this slippery concept is not forthcoming. As such, I think that there will be counterexamples (often bizarre) to any Gettier-condition. Does this mean that such conditions are useless? No. They might yet provide us with crucial information about how we are ordinarily prepared to deploy this concept. If religious beliefs do not meet the conditions, then perhaps they are in trouble, even if the conditions are not conceptually necessary in this extremely demanding (and possibly useless) sense.
So, while I might be persuaded that this argument fails, I do not think that it can possibly fail "badly", not for these reasons anyway.
Posted by: Nick Smyth | 09/29/2014 at 03:41 PM
Hi Nick. Thanks for the thoughtful comments! Here are a few replies that came to mind.
You said:
>> The scope-narrowing retreat to rational reflection is an interesting move. The first thing to note about it is that the question of how many religious believers believe on the basis of 'rational reflection' is both open and empirical. Moreover, surely this cannot be just any old rational reflection. It has to (somehow) justify some of the more important bits of the religious system, in particular, those that distinguish it from other possible belief-systems that the believer might have adopted (Christianity vs. Hinduism, vs. Judaism, etc.). So, the real question is this: how many believers have subjected these "definitive" beliefs to rational reflection by seeing if they can be justified a priori? This number, I suspect, will be much smaller than you want it to be.>>
That's an interesting thought. All the skeptic needs (to avoid the argument turning back on her) is to deny premise (1) by pointing to some method by which she plausibly arrived at her beliefs on religious topics, a method which plausibly wouldn't easily have gone awry had she been born and raised elsewhere, elsewhen. A low-level conception of “rational reflection” is sufficient to do this. But, as we seem to agree, this low-level conception will plausibly allow many religious believers to escape from premise (1) in the same way. That’s the “narrow scope” horn of the dilemma.
So you suggest that the skeptic employ a much higher standard of rational reflection, a standard that will allow herself to escape from premise (1) but will shut the door behind her on very many not-so-reflective religious believers: they’ll still be in trouble.
Can I get your input on a few concerns about this route you suggest? As you say, by raising the standard of rational reflection the skeptic may thereby imperil many not-so-reflective religious believers. But she’ll also imperil many not-so-reflective religious skeptics who fail to, as you put it, “justify some of the more important bits of the [non-]religious system” they’ve adopted. If that’s right, I guess the skeptic using this argument can just decide whether sacrificing some of her ilk is worth the chance to soak all those religious believers (it’s a free country; the skeptic can run the argument as she likes). But it’s worth pointing out: on this road, many religious skeptics will be targeted. So "the problem of contingency" is not at all unique to religious believers.
Also, widening the net cast by premise (1) in this way will catch not only more religious skeptics, but also more of the super-rationally-reflective religious skeptic’s own beliefs on other topics, beliefs of hers that don’t measure up to this demanding standard of “rational reflection.” I mean, for example, many mundane beliefs that one normally isn’t super skeptical about, ranging from whether anthropogenic climate change is occurring to what to have for dinner. So that’s the second concern: on this road you suggest, the demands of skepticism go way up.
So far up they’re implausibly high. That’s a third concern. It starts looking like this revised premise (1)—read so as to target any method below super-high levels of rational reflection—will even less plausibly entail premise (2). You know that you shouldn’t eat ______ (fill in the blank with an unhealthy but popular foodstuff), but you haven’t reflected super hard on this belief. You were one of the lucky ones who received good testimony on nutrition. But this revised premise (1) targets that belief of yours: it says premise (1) is true of you since that belief isn’t the product of super rational reflection. And the argument then insists that premise (2) is true: your belief that you shouldn’t eat ____(KFC?)____ was formed unsafely, insensitively, accidentally, etc. That inference looks even less plausible now that we’ve strengthened our reading of premise (1). So that’s a third worry for this route you suggest: counterexamples to the move from (1) to (2) multiply. (What do you think of these three worries?)
You also said:
>>And there is a final debunking threat: famously, rational reflection can feel like it is isolated from irrelevant influence when in fact it is not. I may confidently arrive at an affirmation to the conclusion of the ontological argument, all the while not realizing that it is the semi-conscious fear of disappointing my religious mother that motivates me to do so.>>
That sounds true. But that’s not a problem for me in particular, right? It sounds like a problem for Hick and Kitcher, who want to use this argument but also want to avoid self-defeat by insisting their beliefs are reflective. “Not so fast!” your ‘final debunking threat’ seems to say. It says that to them, the religious skeptics trying to avoid self-defeat with an appeal to rational reflection. Not to me, the guy who didn’t introduce this reflective/unreflective distinction to save himself.
You also said:
>>The issue I have with thought experiments of the sort you've given here is that we are invited to adopt a third-personal perspective on them which allows us to simply *assume* that her faculties are fine. But this is precisely what we do not have in real life (if we did, there would *be* no problem of the sort we are discussing). All we (and our hero) have is the knowledge that other people are very different and that if we'd been born elsewhere, we'd be like them. From her perspective, her beliefs look anything but safe, sensitive, etc., and their safety and sensitivity is an issue she must decide using the very faculties in question. The bare logical possibility that they are safe is of little comfort or relevance.>>
The Evil Epistemologist thought experiment was meant to show only that premise (1) doesn’t entail premise (2). I think we can know—here, now, in our current condition—that it succeeds as a counterexample to the move from (1) to (2). But I also acknowledge that you’re right: it might be very difficult for the subject in the scenario (or for any of us) to know that her beliefs are really safe/sensitive/non-accidental. That’s a different question though, right? A belief of ours might be formed safely even if it’s difficult for us to know that it was formed safely. And we can know the move from (1) to (2) is fallacious even if we’re unsure whether some of our beliefs were formed safely.
In closing, you said:
>>If religious beliefs do not meet the conditions, then perhaps they are in trouble, even if the conditions are not conceptually necessary in this extremely demanding (and possibly useless) sense. So, while I might be persuaded that this argument fails, I do not think that it can possibly fail "badly", not for these reasons anyway.>>
Your thought seems to be this: safety, sensitivity, and non-accidentality are not strictly necessary for knowledge. But, still, these conditions guide how we normally deploy our concept of knowledge, you say. And if religious beliefs fail to measure up to how we normally deploy our concept of knowledge, then…something bad follows. “Perhaps they are in trouble,” you say. But the trouble isn’t that they’re not knowledge; you admit that these conditions aren’t necessary for knowledge. So what’s the trouble supposed to be, then?
Thanks again for the thoughts! And say hello for me to your new friend Chad Marxen. :-)
Posted by: Tomas Bogardus | 09/29/2014 at 06:36 PM
Thanks, Tomas, for the interesting posts! Arguments of the sort you are criticizing are of course very common, and not just with regard to religious beliefs, so it is really good to have a thorough discussion of them.
But here is a worry I had. I find it hard to believe that there is nothing in the area of sensitivity/safety/non-accidentality as a condition on knowledge. I say this not (or not only) because of Gettier-type cases, but just because it is hard to see what the point of the concept of knowledge would be otherwise.
It seems reasonable to assume that in asserting that, e.g., Mary has knowledge in some subject-matter we imply that Mary's views about this subject matter can be trusted, or relied upon. She would make a good informant or teacher about this subject matter.
We obviously have an interest in identifying good informants, so on this account it is readily understandable why we have the concept of knowledge that we do.
So someone who conducts carefully controlled experiments about, e.g., the effects of various drugs can be said to know whether they are effective and safe or not, while someone who flips a coin cannot. The first person can be trusted as an informant on the topic, the other not. And there is a story we can tell about why this is so.
All of this is very vague of course, but it makes me think that it is probably overkill to react to far-fetched counterexamples by denying any kind of non-accidentality condition.
Would you disagree with this?
Posted by: Markos Valaris | 09/29/2014 at 10:13 PM
Hi Tomas,
1. I am very happy to allow that knee-jerk or relatively unreflective atheists can be caught in the more powerful skeptical net. This seems like just the right result to me (and I may be one of the ensnared). If you realize that your atheism is only due to your having been born into an atheist family, and if you've only done relatively superficial "tidying-up" of certain peripheral beliefs, then yes, coming to understand all of this means you have (temporarily) lost justification.
2. Thanks for pushing me to be clearer on this. One way to make this complaint more precise is to note that there is a more defensible analogue of your (2), "(2a): Therefore, I have -good reason to suspect- that my religious beliefs were not formed sensitively, or safely, or non-accidentally." (2a) more closely reflects the situation of an actual believer, who can only weigh various reasons for and against, and who does not have independent access to the fact: "I was the inoculated one". (2a) seems to survive the counterexample, and it could be used to generate a more modest (yet still potentially effective) skeptical conclusion.
3. I am willing to say of actual persons that they (currently) lack knowledge if their beliefs are unsafe and accidental. I deny that my saying this requires me to be in possession of logically necessary conditions. I assume you're familiar with the general push against old-fashioned conceptual analysis? Ryle and P.Strawson are big names here; see the former's "conceptual explication" and the latter's "connective analysis". Elder Wittgenstein is an obvious tributary. Each denies that necessary conditions are of any serious interest with respect to our central philosophical concepts. We can still apply them in the absence of such conditions. I'd love to elaborate, but the overall view is too complex for a blog comment!
Posted by: Nick Smyth | 09/30/2014 at 07:31 PM
I think my skepticism holds up against all of these arguments.
First, "Even religious skeptics like John Hick and Philip Kitcher have beliefs on religious topics."
I think I can safely deny that I have any beliefs on religious topics. I've genuinely never spent any time thinking about the nature of supernatural beings; I don't accept that topics like morality are religious. I disbelieve in square circles and in gods; the religious person would like to claim that one of my disbeliefs is religious and the other not; but that is simply their delusion.
The other way around that would be to say that beliefs about real things and beliefs about unreal things are qualitatively different. It's a harder argument, but I think it might get us to the same place in the end: as a skeptic, I do not have to concede that I have religious beliefs.
Second, "The fact that something might have happened in the past which would have rendered my faculties unsafe/insensitive/accidental does not entail that my faculties are actually unsafe/insensitive/accidental when I use them."
This entirely misses the point of the contingency argument. Certainly, the fact that "something might have happened" does not entail unsafe beliefs. But the contingency argument is not about something which "might have" happened. It is about the fact that contingently, something *did* happen. It's not the fact that I "might have" formed Christian beliefs through social transmission which makes them suspect; it's the fact that I *did* in fact form my beliefs through social transmission. The existence of many religions in the world is taken to be evidence of a fact about how religious views reproduce themselves, not evidence about a possibility.
I wouldn't accept any of the arguments about safety/accidentality either. They just seem to be attempts to exploit the fact that language use is often fuzzy to make logical hay. The garbage chute, for example: it is perfectly normal in English to say "I know where the rubbish is" while at the same time being open to the possibility that the rubbish is not in fact in that place because of a low-likelihood event (getting stuck in the chute). That's not a logic thing, that's a communication thing.
Posted by: Phil H | 10/01/2014 at 01:31 AM
Thanks for the interesting and stimulating post!
I wonder, though:
1) the skeptic has always upheld a self-defeating thesis and nonetheless this (obvious) conclusion has never canceled skepticism from the philosophical discourse. I guess that skeptics would keep on saying that "this is the best one can get at, and still way better than the opponents' position".
2) I am not sure that the fact that religious beliefs are not liable to the sensitivity (etc.) objection will convince skeptics, who will rather think that this is only a hair-splitting argument which obscures the main point (the irrationality of religious belief, which tragically resembles beliefs in the tooth fairy and the like). I am also surprised that no one raised what I thought was the obvious answer by believers, namely: "It is not by chance that I was born in… with parents… etc. This is, rather, part of God's providential design" (which may include our critical re-thinking of the initial status quo).
Thanks again for the interesting discussion! I look forward to your next post!
Posted by: Elisa Freschi | 10/03/2014 at 09:13 AM
Hi Markos,
Thanks for your feedback. :-) Here are a few comments I had in reply.
You said:
>>I find it hard to believe that there is nothing in the area of sensitivity/safety/non-accidentality as a condition on knowledge. I say this not (or not only) because of Gettier-type cases, but just because it is hard to see what the point of the concept of knowledge would be otherwise… All of this is very vague of course, but it makes me think that it is probably overkill to react to far-fetched counterexamples by denying any kind of non-accidentality condition.>>
I do share that “anti-luck” intuition when doing epistemology. It seems pretty obvious that knowledge can’t be “lucky” or “accidental” in some important sense. (Then again, it seemed pretty obvious to me that JTB was sufficient for knowledge before reading Gettier!) But it’s very difficult to say just what that sense is. We all agree there can be luck in the neighborhood of knowledge; a knower might be there by luck (a boulder just missed him), and a knower might be lucky to have certain evidence (he stumbled upon the murder weapon), and the proposition known might be true by luck. None of that luck precludes knowledge. So just what is the problematic type of luck, the type that precludes knowledge? It’s hard to say.
And when we do try to say more about what this type of luck is (is it un-safety? Or insensitivity? Or…?), I’ll be interested in two questions: is this type of luck really incompatible with knowledge? (It looks like safety and sensitivity are both compatible with knowledge.) And, secondly, does this sort of luck plague religious belief? (It looks like un-safety and insensitivity don’t.)
Can you say more about what this type of luck is that’s incompatible with knowledge? If so, we could check on those two further questions. :-)
Posted by: Tomas Bogardus | 10/06/2014 at 11:07 PM
Hi Phil H,
Thank you for your comments. A few thoughts occurred to me while I read them.
You said:
>>I think I can safely deny that I have any beliefs on religious topics. I've genuinely never spent any time thinking about the nature of supernatural beings; I don't accept that topics like morality are religious. I disbelieve in square circles and in gods>>
Well, it sounds like you *do* have some beliefs on religious topics. You say you disbelieve in gods and in the same breath you say you disbelieve in square circles. So it sounds like you have two beliefs on religious topics right there: there is no God, and the concept of God is incoherent (like the concept of square circles). You also say you don’t accept that topics like morality are religious; I’m thinking that, further, you accept that they are not. Well, that’s a religious topic on which you have a belief. (And, if I’m reading you right, I can’t see how you’ve never thought about the nature of supernatural beings. It seems like you’ve thought enough about supernatural beings to conclude that there are none, that there couldn’t be any just as there couldn’t be any square circles, and that morality has no special connection to any supernatural being.) That’s an abundance of beliefs on religious topics!
>>as a skeptic, I do not have to concede that I have religious beliefs.>>
Alright. But the self-defeat argument has bite so long as you have beliefs on religious *topics*. And it sounds like you do have very many of those. So I’m thinking that if you try to run a “problem of contingency” argument for religious belief, the self-defeat objection will, like a trusty bloodhound, catch up with you. :-/
>>It's not the fact that I "might have" formed Christian beliefs through social transmission which makes them suspect; it's the fact that I *did* in fact form my beliefs through social transmission.>>
OK, I grant that many people get their religious beliefs through “social transmission.” But why does that make them suspect? That’s what I’m interested in (after all, I get a LOT of beliefs through social transmission and so do you, e.g. beliefs about mathematics, science, etc., and those aren’t suspect, right?). I thought the problem with “religious teaching and socialization,” as Kitcher put it, was that had one been born elsewhere, elsewhen, one easily might have had different religious beliefs. And then one wonders whether that counterfactual leads to any skeptical conclusions, as I wondered in my paper. Maybe you think the skeptical argument proceeds differently? If so, how should it proceed? What are the steps, exactly?
>>I wouldn't accept any of the arguments about safety/accidentality either. They just seem to be attempts to exploit the fact that language use is often fuzzy to make logical hay.>>
On the one hand, you say your “skepticism holds up against all of [my] arguments.” On the other hand, you seem a little cagey about just how the skeptical argument goes. So perhaps the best way to proceed would be for you to lay out the skeptical argument, step by step, as clearly as possible, so that we might evaluate whether it’s a good argument. :-)
Posted by: Tomas Bogardus | 10/06/2014 at 11:27 PM
Hi Elisa,
Thanks for your thoughts! Here are a few replies:
You said:
>>the skeptic has always upheld a self-defeating thesis and nonetheless this (obvious) conclusion has never canceled skepticism from the philosophical discourse.>>
I’m skeptical that every skeptical position is self-defeating. Take a garden-variety external-world skepticism, for example. Where’s the self-defeat in doubting there’s an external world? Or where’s the self-defeat in doubting, as I do, that every skeptical position is self-defeating? I don’t see any. :-/
>>I guess that skeptics would keep on saying that "this is the best one can get at, and still way better than the opponents' position".>>
If a skeptic’s position is genuinely self-defeating, I can’t see how their opponent’s position could be any worse. Either the skeptic self-refutes, in which case his position must be false, or the skeptic self-undermines, in which case one can’t sensibly accept the skeptic’s position. How could an opponent’s position ever be *worse* than that? That’s sort of philosophical rock bottom.
>>I am not sure that the fact that religious beliefs are not liable to the sensitivity (etc.) objection will convince skeptics, who will rather think that this is only a hair-splitting argument which obscures the main point (the irrationality of religious belief, which tragically resembles beliefs in the tooth fairy and the like).>>
A skeptic may reply like that, as you predict. But I’ve been a party to many philosophical debates—an embarrassing number with strangers on the internet—and in my experience responses like “well that’s point-missing hairsplitting” (or, worse, “that’s just semantics!”) are the closest one will ever get to winning a philosophical argument. So if the response ends there, I’ll take it for what it is: success. But if the response were coupled with a substantive reply—for example, a detailed explanation of how the skeptical argument *really* goes, in a way which doesn’t “split hairs” or “obscure the main point”—then I’d have to think more about it. Without the substantive bit, this “hair-splitting” reply seems more like a pout.
>>I am also surprised that no one raised what I thought was the obvious answer by believers, namely: "It is not by chance that I was born in… with parents… etc. This is, rather, part of God's providential design">>
Interesting thought! I wonder why that never crossed my mind. Maybe I thought it would be more satisfying—to the skeptic at least—to refute the skeptical arguments without adverting to Providence. And I suppose this Providence line might raise some embarrassing problem-of-evil type questions: why didn’t God providentially include this sort of fortuitous upbringing in everyone’s life plan? I wouldn’t say that makes the problem of evil *much* worse. But we shouldn’t pick at our wounds, you know? :-)
Posted by: Tomas Bogardus | 10/06/2014 at 11:51 PM
I'm really just shooting from the hip here, but how about this.
Suppose the concept of knowledge is not the sort of concept that has necessary and sufficient conditions associated with it (perhaps only boring concepts like 'bachelor' do). The way the concept works is that we have some core paradigm cases of knowledge saliently available to us, as well as some cases of non-knowledge. Then we judge new cases on the basis of similarity to those paradigm cases.
Suppose all paradigm cases of knowledge involve one non-accidentality condition or another. Then there won't be any *one* non-accidentality condition that every case of knowledge has to meet, but rather a variety of such conditions, which might be more or less met by different cases of knowledge.
So, do you know the time by looking at an atomic clock under counterfactual threat? Well this is like a paradigm case of knowledge because there is, intuitively, a non-accidental (by design of the clock) causal connection between belief and the truth.
Do you know the time by looking at the swamp-watch? Well, this case is like a paradigm case of knowledge, because the belief is reliable/safe. (Even if the watch is only accidentally tracking the time correctly, it is surely now nomologically bound to track the time correctly.)
Moreover, on this account you could explain why we might be tempted to apply the concept in cases where no such condition is met: perhaps such cases look like cases where the condition *is* met, at least when we are not reflective or attend to some of their features rather than others! In many such cases, if the relevant features are brought to our attention we would withdraw the ascription of knowledge (Gettier cases). In others, we might remain ambivalent. These could be borderline cases of knowledge.
What do you think?
Posted by: Markos Valaris | 10/07/2014 at 08:56 PM
Hi, Tomas, thank you for those comments.
I kind of think you're having some cake and eating it! Here's you to me: "the self-defeat objection will, like a trusty bloodhound, catch up with you"
And here's you to Elisa: "I’m skeptical that every skeptical position is self-defeating." So I'm not sure you're arguing a single coherent position here - but I'm not sure I am, either, so that's not really an issue.
"...you seem a little cagey about just how the skeptical argument goes. So perhaps the best way to proceed would be for you to lay out the skeptical argument..."
Yeah, fair point. I wasn't really being cagey, it's just that my argument doesn't differ significantly from the versions you cited above.
There is an interesting problem with laying out skeptical arguments, though: often they are not much more than simple denials. The skeptic claims to say less than the argument she denies, and that saying less requires less argumentation. For example, you claim that I have religious beliefs. I claim that I do not, and I think that I don't really have to defend that claim. The substantive claim is that I have religious beliefs; the "default" is to think that I do not (so I say). This idea of a default which does not require defending is kinda central to skepticism.
This ties in with another point you raise. "I get a LOT of beliefs through social transmission and so do you, e.g. beliefs about mathematics, science, etc., and those aren't suspect, right?" Of course they're suspect! The skeptical position is that I don't believe any of those things until a lot of extra work is done to substantiate them. If I believe that genetic information is encoded on DNA because my dad said so, that's deeply suspect, but that is not why I hold that belief. I hold it because I have reasonable faith that the social institutions which provided me with this fact do so for valid reasons.
Now, I don't have reasonable faith that the social institutions which tell me that there is such a thing as a distinct religious sphere do so for good reasons. That concept has been transmitted to me, just like the idea of DNA. But I choose to disbelieve it because I do not trust the institutions. I put the concept of religion in the same class as all concepts which have not been transmitted to me: unsupported. What I'm saying is that the fact of social transmission is epistemically uninteresting, and a bad place to start an argument. The whole approach of looking at the beliefs I have been exposed to and treating them all as equally worthy of argument for or against would introduce a systematic bias in my thinking.
So there you go, I hope that constitutes a good excuse not to give the argument you asked for! :)
Posted by: Phil H | 10/08/2014 at 11:19 PM