Thanks again to my hosts here at the Cocoon for this great opportunity. Today I’d like to continue the conversation about “The Problem of Contingency for Religious Belief.” The problem, as you’ll recall from my first post, is that our religious beliefs have been shaped to a disturbing degree by factors that are completely on the wrong side of the question, factors like when and where we were born, who our parents were, which peer group we admired most, etc. (See my first post for some statements of the argument from John Hick and Philip Kitcher.)
In my own case, there’s a lot of Catholicism on the Cuban side of my family. But I was raised an anti-Catholic Lutheran. I once wondered why. My grandmother gently told me that, when my mother was young and church-ready, a Lutheran church was nearer to their house than any Catholic church. I felt dizzy. My closely-held Lutheran beliefs were ultimately the result of a real estate decision and a love of convenience. Things easily might have gone otherwise. Had my grandparents bought a different house, I might have been raised an anti-Lutheran Catholic. How could I sensibly hold onto my Lutheran beliefs in light of that information? (I later converted to Catholicism for Lutheran reasons, thereby restoring balance to the universe.)
But did learning of the contingency of my religious beliefs rationally require lowering my confidence in those beliefs? That’s the skeptic’s song: because of the shady way in which they were formed, religious beliefs do not rise to the level of knowledge even if they’re true.
It’s a tempting thought, and I’ve felt its allure. But, ultimately, I think this skeptical argument should be resisted. The problem is getting clear on exactly which epistemic principle our religious beliefs allegedly violate. Today we’ll explore some candidate virtues that are often taken to be necessary for knowledge and which our religious beliefs may plausibly lack: sensitivity, safety, and non-accidentality.
Suppose Smith believes truly that p (e.g. it’s 70˚F in here) on the basis of some method (e.g. checking her thermometer). To say that Smith’s belief is sensitive is to say that, had p been false, Smith would not have believed via this method that p. To say that Smith’s belief is safe is to say that, were Smith to believe that p via this method, p would be true. (Or, alternatively, that not easily would Smith have believed falsely via that method.) To say that Smith’s belief is non-accidentally true is to say that, even if it’s an accident that it’s 70˚F in here (we intended it to be cooler, say), and even if Smith has that thermometer by accident (she stumbled upon it, say), and even if Smith is alive to consider the question by accident (an assassination plot just failed, say), there’s no accidentality “between the subject and the fact,” as Peter Unger would put it: it is not at all accidental that Smith is right about its being the case that p.
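For readers who like their conditions compact, the first two virtues can be stated in the standard counterfactual notation of the Lewis–Stalnaker semantics. (The symbolization is my gloss, not anything from the original statements; I write “A □→ B” for “if A were the case, B would be the case,” and B_M(p) for “the subject believes p via method M.”)

```latex
% Sensitivity and safety as counterfactual conditionals.
% Requires amssymb for \Box; "\Box\!\!\rightarrow" approximates
% the counterfactual arrow without extra packages.
\documentclass{article}
\usepackage{amsmath, amssymb}
\newcommand{\cf}{\mathrel{\Box\!\!\rightarrow}} % counterfactual "would" conditional
\begin{document}
\begin{align*}
\textbf{Sensitivity:} &\quad \neg p \cf \neg B_M(p)
  && \text{(had $p$ been false, $S$ would not have believed $p$ via $M$)}\\
\textbf{Safety:} &\quad B_M(p) \cf p
  && \text{(were $S$ to believe $p$ via $M$, $p$ would be true)}
\end{align*}
\end{document}
```

Note that the two conditionals are not contrapositives of each other, since counterfactuals do not contrapose; that is why sensitivity and safety can come apart in the cases below.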
So maybe the problem of contingency for religious belief, in argument form, proceeds like this. Each of us starts by noticing the historical contingency of his or her religious beliefs:
1. If I had been born and raised elsewhere, or elsewhen, and formed religious beliefs using the same method I actually used, then, by my own lights, I easily might have believed falsely.
And perhaps this contingency is meant to convince us that our religious beliefs lack an important epistemic virtue:
2. Therefore, my religious beliefs were not formed sensitively, or safely, or non-accidentally.
And perhaps we’re meant to take at least one of those virtues as necessary for knowledge, concluding:
3. Therefore, my religious beliefs don’t count as genuine knowledge.
It’s a formidable argument! Savor it for a moment. Take it out for coffee. Get a crush on it.
Unfortunately, I don't think it's right for you. There are three huge problems with it: both inferences are invalid, and the skeptic who wields it either self-defeats or targets very few religious believers. (Other than that, it's solid gold.)
First, the self-defeat-or-narrow-scope problem. Even religious skeptics like John Hick and Philip Kitcher have beliefs on religious topics. Hick was a self-described “pluralist,” while I presume Philip Kitcher has mainly “negative” beliefs on religious topics: there’s no God, there’s no afterlife, etc. But these views are not and have not historically been popular: Hick and Kitcher easily might have held different views, had their biographies differed. So premise (1) seems as true for Hick and Kitcher as it does for Pope Francis. But then the skeptic who wields this argument will himself fall into its grinding maw; it’s self-defeating.
The skeptic can find a way out, but it comes at a price. In their statements of the skeptical argument, Hick and Kitcher are careful to specify that the argument targets only religious beliefs that have been instilled into one since childhood (Hick), or that one has received through early teaching and socialization (Kitcher). That’s the questionable method mentioned in premise (1), they might say, and they arrived at their beliefs via a superior method: rational reflection. They may thereby carve a loophole in premise (1)—claiming that (1) is false for them and their superior method—and thereby escape the problem of self-defeat.
But they escape self-defeat at the cost of severely narrowing the scope of the skeptical argument. After all, not all religious believers hold their views as a result of “early teaching and socialization.” Very many of them have rationally reflected on their religious beliefs. If rational reflection exempts Hick and Kitcher from premise (1), it also exempts these very many religious believers. So the skeptical argument has less bite; it casts a smaller net than the skeptic may have hoped. It has no grip on anyone reading this post, or on anyone who has ever reflected on anything remotely like this post. That’s a cost. And so the skeptic has a serious self-defeat-or-narrow-scope problem.
There’s a second problem with this argument: the inference from (1) to (2) is invalid. Here’s a case to show why. Suppose that the infamous Evil Epistemologist has poisoned the world’s water supplies with a drug that radically impairs human cognitive faculties. Once exposed to the drug, all of one’s faculties become completely unreliable, unsafe, insensitive, accidentally-right-if-right-at-all, etc. However, a benevolent nurse used his only dose of antidote to immunize you in the maternity ward. Your faculties are therefore safe from the poison and function normally as you mature into adulthood, producing beliefs sensitively, safely, and non-accidentally. (While it may be an accident that your faculties were preserved by that nurse, given that they were there will be no toxic accidentality “between you and the facts,” as Unger would put it.)
But: it is true that, had you been born in a different time and a different place, and used the same faculties and methods you actually used, you easily might have believed things that would be, by your own lights, false. So we have here a counterexample to the general form of the inference that is meant to carry us from (1) to (2). The fact that something might have happened in the past which would have rendered my faculties unsafe/insensitive/accidental does not entail that my faculties are actually unsafe/insensitive/accidental when I use them. And the same may go with the methods by which I formed my religious beliefs.
Finally, a third problem for this argument: the inference from (2) to (3) is invalid since not-a-one of those virtues is required for knowledge. Let’s start with sensitivity: it’s not required for knowledge, and here’s a case to show why (from Comesaña 2005, but originally due to Sosa 2000 who credits Vogel 1987):
GARBAGE CHUTE: I throw a trash bag down the garbage chute of my condo. Some moments later I believe, and know, that the trash bag is in the basement. However, the closest possible world where my belief is false is plausibly one where, unbeknownst to me, the bag is stuck somewhere in the chute, and I still believe that it is in the basement.
In GARBAGE CHUTE, my belief that the trash bag is in the building’s basement is not sensitive, yet it counts as knowledge. (This is a classic case that many accept, but I think it requires tweaking. Other cases work well, however.)
Next, here’s a general recipe for whipping up counterexamples to the alleged safety condition on knowledge: first, pick the most virtuous belief-forming method you can imagine, and have a subject form a belief via that method. In the original counterexample (Bogardus 2014), called “Atomic Clock,” a subject named Smith formed a belief about the time on the basis of the world’s most accurate clock. Second, add a twist of fate: put the method in danger of malfunctioning, but let the danger remain purely counterfactual. In the original example, Smith’s clock was atomic, and it was imperiled by a nearby radioactive isotope. The isotope was due to decay at any moment, and were it to decay it would stop the clock (or even just slow it down significantly), rendering it unreliable.
Now, since that danger remains purely counterfactual—since the clock could have malfunctioned but in fact remained the world’s most accurate clock; since things could have gone less well epistemically but didn’t—it’s quite tempting to allow that Smith knows the time on the basis of the clock. And yet, one might think, her belief in this scenario is not formed safely, for there are many nearby possible worlds in which she forms a false belief on the basis of that clock, worlds in which the isotope has decayed and the clock has stopped or slowed. It’s false that, were Smith to believe via that clock, her belief would be true. Very easily could she have believed falsely via that clock. And so Atomic Clock seems to be a counterexample to the alleged safety condition on knowledge.
Finally, non-accidentality is not required for knowledge. One can know that p even if there is a troubling accident “between the subject and the fact.” I present a lengthy argument for the conclusion in the paper (having to do with swamp-people). A briefer version of the argument might borrow John Hawthorne’s (2002) swampwatch, a particle-for-particle duplicate of the world’s most accurate wristwatch “created by a fortuitous coming together of molecules.” Swampwatch reports the time but, given its birth from chaos, its reports are not aimed at the truth or anything else. And so, if a subject were to believe truly that p on the basis of swampwatch’s reports, it would be an accident—in a familiar and legitimate sense of the word “accident”—that the subject is right about p’s being the case. And yet, Hawthorne says and you may well agree, when a subject “uses his swampwatch to inform him about the time... we are intuitively ready to say that [she] knows what time it is.” If you share Hawthorne’s intuition, as I do, then we have here a case in which S knows that p and yet it is accidental that S is right about p’s being the case. In which case, contrary to Unger, knowledge can be accidental.
But if knowledge doesn’t require sensitivity, safety, or non-accidentality, then the inference from (2) to (3) is invalid. We have three reasons, then, to reject this version of the problem of contingency for religious belief: the self-defeat-or-narrow-scope problem, the invalidity of the inference from (1) to (2), and the invalidity of the inference from (2) to (3).
(In the paper, I consider one final version of the argument, an “argument from symmetry.” But in the interest of space, I’ll omit it here.)
We have attempted to lay out the problem of contingency for religious belief as forcefully as possible, and the argument fails badly. The contingency of our religious beliefs does not show that they were formed insensitively, unsafely, or accidentally; and even if it did, none of these is required for knowledge. And the skeptic who handles this argument either self-defeats or targets only unreflective religious believers. It may well be, then, that there simply is no problem of contingency for religious belief. Or, if there is, that it needn’t worry many people.
But what do you think, Cocooners?