Religion as Belief: A Realist Theory
A Review of Religion as Make-Believe: A Theory of Belief, Imagination, and Group Identity
Joseph Sommer
A fellow was stuck on his rooftop in a flood. He was praying to God for help. Soon a man in a rowboat came by and shouted to the fellow on the roof, “Jump in, I can save you.” The stranded fellow shouted back, “No, it’s OK, I’m praying to God and he is going to save me.” So the rowboat went on. Then a motorboat came by. The fellow in the motorboat shouted, “Jump in, I can save you.” To this the stranded man said, “No thanks, I’m praying to God and he is going to save me. I have faith.” So the motorboat went on. Then a helicopter came by and the pilot shouted down, “Grab this rope and I will lift you to safety.” To this the stranded man again replied, “No thanks, I’m praying to God and he is going to save me. I have faith.” So the helicopter reluctantly flew away. Soon the water rose above the rooftop and the man drowned. He went to Heaven, where he finally got his chance to discuss this whole situation with God: “I had faith in you but you didn’t save me, you let me drown. I don’t understand why!” To this God replied,
“I sent you a rowboat and a motorboat and a helicopter, what more did you expect?”

“You have done well, my son, in not confusing those physical saviors with my miracles. I was merely testing whether you factually believed in me. If you had accepted help from the boats or the helicopter, you would have revealed that your religious credence in miracles only governs your behavior in Church, while your factual beliefs determine your behavior in a crisis.”

– Neil Van Leeuwen, probably
1. Introduction
I should hate this book.
Not ‘should’ in a moral sense; I’m pretty sure I’m opposed to hating books. But usually, when I read a book that almost agrees with me, I get irrationally angry. This is probably a “narcissism of small differences” thing, where views that are totally off are kind of boring, but views close to my own exist in an uncanny valley of “how did you get this close and not just agree with me?!”
But even though Van Leeuwen’s views on belief are pretty similar to mine – though still different enough for 11,000 words of disagreement (Yeah. You’ve been warned.) – the anger never materialized. I really liked this book. Van Leeuwen does so many things right and, even when I think he goes wrong, his arguments are so novel, compelling, and well-written that it’s hard to be angry. That said, it isn't so novel, compelling, and well-written that I’m convinced. And given the length of this thing, I think a short introduction is only decent of me, so I’ll jump right in.
Just a quick disclaimer: I wrote this very quickly because I was, strictly speaking, supposed to be working on other projects. I can’t promise that I’ve adequately summarized all of Van Leeuwen’s positions. For best results, read the book, which is worth your time anyway.
2. What are beliefs?
Van Leeuwen’s thesis is that most religious (and political, tribal, etc.) beliefs are not truly beliefs but are better understood as a kind of imagination. To understand Van Leeuwen’s notion of different types of belief, it helps to be familiar with the notion of a “propositional attitude”. Philosophers often define belief as a particular type of attitude (the believing type) toward a proposition, which is kind of like a sentence. So, my belief that I will write this review comes down to my taking a believing attitude toward the proposition “I will write this review.” Importantly, there are other types of attitudes I can take toward this same proposition: I can hope that I write this review or doubt whether I will write this review, etc.
Van Leeuwen argues that there is a particular kind of attitude – call it a credence – which is commonly called belief but is actually a qualitatively different attitude. Critically, because we’re talking about attitudes, not propositions, whether someone “believes” a given proposition or “creeds” it has nothing to do with the proposition’s content. In other words, I can believe that Elvis is still alive or creed that he is, and you can’t tell from the proposition “Elvis is still alive” which attitude I have. In a move I really like, Van Leeuwen allows that there is some heuristic value to the proposition – i.e., in the case of Elvis, this is probably a creed, but to find out for sure, you need to look at how the attitude works, not what it is about.
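If a concrete picture helps, here is a toy sketch in code (entirely my illustration – neither Van Leeuwen nor the philosophical literature formalizes it this way). The point is just that the attitude and the content are separate slots, so the content alone can’t tell you which attitude someone holds:

```python
from dataclasses import dataclass
from enum import Enum

class Attitude(Enum):
    BELIEVE = "believe"  # factual belief
    CREED = "creed"      # Van Leeuwen's religious credence
    HOPE = "hope"
    DOUBT = "doubt"

@dataclass(frozen=True)
class PropositionalAttitude:
    attitude: Attitude
    content: str  # the proposition

# Same proposition, two different attitudes: you cannot read the
# attitude off the content.
a = PropositionalAttitude(Attitude.BELIEVE, "Elvis is still alive")
b = PropositionalAttitude(Attitude.CREED, "Elvis is still alive")
assert a.content == b.content and a.attitude != b.attitude
```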
Well then, how does the credence attitude differ from the belief attitude? To begin to answer this, Van Leeuwen goes back to Hume, who asked how the mind distinguishes between merely entertaining a thought and believing it. Because Hume was an Associationist, he only had two options for explaining mental states: sense data or associations. Hume suggested that beliefs are more “vivid” than mere thoughts, a solution widely regarded as a failure. As Van Leeuwen observes, many beliefs aren’t vivid at all, and some imaginings can seem terrifyingly real. But if Hume was wrong, how do we distinguish belief from mere imagination?
Van Leeuwen’s solution is to lay out four principles that define (factual, i.e., standard) belief:
1. Involuntariness: You can’t help believing your (factual) beliefs
2. No compartmentalization: Factual beliefs always guide actions
3. Cognitive governance: Factual beliefs guide inferences all the time (including while you are imagining something)
4. Evidential vulnerability: Factual beliefs are sensitive to evidence[1]
To briefly explain these four principles, involuntariness means that you cannot choose what to believe. This is an intuitive idea that is at the core of my own theory of belief. The gist is that it seems impossible to volitionally believe that you’ve won the lottery. And this makes sense, because if you could do this, you would probably form lots of comforting beliefs instead of going to work or cooking dinner, which would have detrimental consequences for you, evolutionarily speaking.
No compartmentalization means that no matter what you are doing, your factual beliefs guide your behavior. Even if you’re imagining that you are currently in outer space, when you get up from your chair, you are going to stand up, not try to push off and float away. Van Leeuwen cites evidence that even young children compartmentalize their pretense while playing (e.g., if a child pretends a brick is a cake, they won’t try to eat the brick when they get hungry).
Cognitive governance is a similar principle applied to inferences instead of behavior. To borrow an example from the book, if you imagine lightning hitting a tree, you next imagine the tree catching fire. This inference is drawn from factual beliefs about lightning.
Finally, evidential vulnerability means that your beliefs should be extinguished if you encounter evidence against them. Like involuntariness, this makes sense: for beliefs to guide useful actions, they need to get updated in the face of evidence. If I believe my car has gas and it doesn’t, I need the fuel gauge to cause my belief to be updated or I’m going to have a bad time.
I quite like these four principles and what I like even more is Van Leeuwen’s subsequent insistence that factual beliefs exist. This may seem uncontroversial, but there are plenty of philosophers who disagree. One nice observation here is that psychologists implicitly rely on these principles when deceiving research participants. Researchers often provide participants with false information, say, that coffee is good vs. bad for their health. They don’t just ask people to imagine the information was true, because imagination isn’t belief. “If you want someone to factually believe something, your two options are informing them or deceiving them—neither of which brings about a voluntary or chosen change” (p. 37).
Van Leeuwen also argues that we have many rational beliefs, which regularly guide our behavior. “[C]onsider, to pick an item at random, the beliefs you have about automated teller machines (ATMs). You probably believe that ATMs have buttons, ATMs have screens, ATMs operate on electricity, ATMs store money, ATMs take bank cards, ATMs only give money if you enter your pin, ATMs charge fees when they’re not from your bank, ATMs distribute bills and not change, ATMs in other countries distribute the currencies of those countries, your bank charges a fee when you use an ATM in another country, and so on” (pp. 54-55).
All of this is great stuff, and I completely agree. Where I get off the boat is the next step, which argues that religious beliefs violate the four properties, making them a different mental attitude. Specifically, they are closer to pretending than to belief. Van Leeuwen suggests that pretend play is characterized by a two-map cognitive structure, where people simultaneously represent the real world with one map (factual beliefs) and represent the objects of imagination on a second map.
Both maps guide behavior during pretending. For example, if I am pretending that an action figure can talk, I need to speak for it, because I know (on map one) that the toy can’t speak even if (on map two) it supposedly can. Of course, everyone from young children to adults is perfectly capable of this kind of pretense, and even children do not confuse the two maps. They also continually track things on the first map, even while pretending, which allows them to take appropriate actions (e.g., not running off a roof just because they are pretending they can fly) and to accurately track when to stop pretending and revert to only one map.
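For readers who think in code, the two-map structure behaves a lot like a lookup chain: a disposable pretend layer shadows the factual map, and anything not explicitly pretended falls through to reality. This is my analogy, not the book’s formalism, but it captures the reverting point nicely:

```python
from collections import ChainMap

factual = {"toy can talk": False, "humans can fly": False}
pretend = {"toy can talk": True}  # map two, adopted voluntarily

# While pretending, lookups consult the pretend layer first and fall
# through to factual beliefs for everything else.
playing = ChainMap(pretend, factual)
assert playing["toy can talk"] is True     # map two governs the game
assert playing["humans can fly"] is False  # map one still governs

# Stopping the pretense just discards the overlay; the base map was
# never modified, so there is nothing to revert.
assert factual["toy can talk"] is False
```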
Notice that pretending violates all four principles defining factual belief. Pretending is voluntary – you choose when and what to pretend. As discussed above, factual beliefs continue to guide actions while you are pretending. Inferences (e.g., lightning causing fire) during pretense are still drawn from factual beliefs while the opposite is not true: if you imagine cutting your hand, you don’t form a factual belief that you need a band-aid. Importantly, we need imaginings to be parasitic on factual beliefs and not vice versa, because it's our factual beliefs that tell us that we are pretending, what we are currently imagining, and when to stop. If I decided to pretend for 5 minutes that I was a millionaire and this changed my factual beliefs, I wouldn’t be able to revert to my earlier state. Imagining is also not vulnerable to counterevidence. Van Leeuwen believes that religious beliefs also violate these four principles and are therefore more similar to imaginings than to factual beliefs.[2] This is obviously a pretty big claim, but before we look at Van Leeuwen’s evidence, I think it will be helpful to put my theory on the table.
3. An extremely (though less extreme than you might like) abridged version of my theory of belief
For present purposes, my theory of belief can be divided into three parts. The first concerns the difficulty of forming true beliefs, the second is an account of the psychological processes involved in belief, and the third deals with the structure of evidence in the world. With these three pieces, I think we can tell a story which is roughly as plausible (I would say more plausible, but your mileage may vary) as Van Leeuwen’s, and which does not appeal to a novel mental attitude. The first two subsections have been published in extensive form elsewhere (Sommer, Musolino, & Hemmer, 2022, 2023a, b) and the third is a paper in progress, in which I hope to present much more detail than I do here.
3a. The difficulty of belief
Many people seem to think that forming true beliefs is easy. My work argues that nothing could be further from the truth. To keep this as short as possible, I’ll just note that, on topics like politics and religion, there is no possible way to read (or even locate) all the relevant evidence, to say nothing of integrating it or reasoning about it appropriately. This means that when people fail to form accurate beliefs, this isn’t a deep mystery that needs explaining. As Popper (1960) beautifully articulates, ignorance is the default state; it is knowledge that needs explaining. Later in the book, Van Leeuwen has an excellent discussion of the problem of explaining how people can form irrational beliefs (e.g., religious beliefs; see section 6a, below). My answer is that factually incorrect beliefs don’t require much explanation; it is accurate beliefs that are hard to come by.
So far, this is pretty intuitive – though it’s common enough to see people complaining that it’s impossible to understand how Democrats/Republicans/Christians/etc. believe what they claim. I don’t find this hard to understand at all – you just aren’t familiar with the “evidence” that they see. Where my theory gets a bit less intuitive is when it comes to maintaining consistency among your beliefs. It turns out that being fully consistent is computationally intractable (read: physically impossible). We cannot check all our beliefs for consistency because inconsistency can lurk in any subset of our beliefs, and there are far too many subsets to check. This results in a combinatorial explosion that cannot be surmounted in a human lifespan’s worth of computation. As in the case of false beliefs, this leaves inconsistency as the default: we should expect to find some of our beliefs inconsistent (as we often do), and this doesn’t need special explanation.
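To put rough numbers on that combinatorial explosion – a back-of-the-envelope sketch, with a belief count I am assuming purely for illustration, not a figure from our paper:

```python
from math import comb, log10

n = 10_000  # assumed, conservatively, as one person's belief count

print(f"pairs:   {comb(n, 2):,}")  # ~5 x 10^7: checkable in principle
print(f"triples: {comb(n, 3):,}")  # ~1.7 x 10^11: already impractical
# Inconsistency can hide in any subset, and there are 2**n subsets:
print(f"all subsets: ~10^{int(n * log10(2))}")  # ~10^3010
```

No lifetime of computation touches 10^3010 checks, which is the sense in which full consistency is physically impossible.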
My colleagues and I suggest in our paper that two things help us achieve relatively good consistency despite this intractability. The first is that the physical world is itself consistent. That means that our beliefs about physical objects will tend to be consistent (although sometimes we will fail, as when we move some object and later forget we did). The second is accessibility in memory: our memory is very good at surfacing relevant information, so we will often (but not always!) recognize relevant information in the moment and update our beliefs accordingly.
This view has implications for violations of the cognitive governance principle. First, we should expect better cognitive governance for simple beliefs about physical objects than for complex ideological beliefs, because the former are often automatically consistent, while the latter may not be. Second, people should be better able to recognize relevant beliefs about physical objects because these objects are likely to present better matches to memory. For example, if I want to walk to my door and there’s a chair in the way, I will immediately recognize a problem with my plan to walk in a straight line and adjust course. In contrast, if I’m stuck in a complicated moral dilemma, I may not realize that my beliefs about utilitarianism or some verses in Leviticus are relevant to this dilemma. Therefore, we shouldn’t be surprised if people don’t realize the relevance of abstract beliefs in the heat of the moment.
Violations of the no compartmentalization principle can be explained by similar considerations regarding behavior based on beliefs. We cannot assume that people’s behavior will obviously follow from their beliefs, because they may fail to recognize which behavior is entailed by their beliefs. For example, my behavior in a moral dilemma may not follow my utilitarian beliefs, because I may fail to conceive of my situation as a moral dilemma. Similarly, if religious people fail to act religiously, this doesn’t imply that their beliefs aren’t “factual beliefs.”
While we’re on the topic of behavior not obviously following from beliefs, research on the “belief-behavior gap” (cf. Fishbein & Ajzen, 1977) is clear: you cannot predict specific behaviors from general beliefs. That is, general belief in God does not commit a believer to any behavior at any given time. The real world is much too messy for simple predictions of behavior. For example, Van Leeuwen observes that anti-abortion activists generally don’t support government subsidies for single mothers, even though this would reduce abortions. This would be an incisive point, except I have approximately zero confidence that pro-lifers are aware of this relationship between subsidies and abortions, and every reason to believe they would be extremely skeptical of the results of government welfare programs. Of course, this might just be groupish credences all the way down, with pro-life inconsistencies supported by Republican policy irrationalities. Personally (and, to be fair, Van Leeuwen agrees on p. 204), I find it more plausible that these people are misinformed by the giant media infrastructure devoted to feeding them confirmatory information (as ours does to us, at least to some extent).
3b. Psychological processing
Like Van Leeuwen, my theory of belief (described in much more detail in Sommer, Musolino, & Hemmer, 2023b) gives involuntariness an important role. However, my view is more nuanced. While I agree that we can’t believe at will and that evidence impacts beliefs automatically, I don’t think things are as simple as “get evidence against your belief, automatically change your mind,” a view that Van Leeuwen appears to advocate throughout the book. The more rigorous forms of this argument can be found in places like the Duhem-Quine thesis and the work of Imre Lakatos, but the simple version is that not all evidence is reliable. It is a bad idea to update on everything you hear, because some people may be lying or incorrect. As an example, suppose someone shows you a complicated-looking mathematical proof which ostensibly disproves quantum mechanics. I imagine you don’t think the correct thing to do here is to decide that quantum physicists are all wrong. Crucially, as I’ll discuss in 3c, this is true even though you may have literally no actual evidence for believing in quantum mechanics yourself. Instead, you just believe that the physicists have good reasons, and you trust them.
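Here’s a toy Bayesian rendering of that point (my numbers, purely illustrative): how much a report should move you depends on how likely the source is to make it whether or not the claim is true. A source you suspect would assert the claim either way should barely budge you:

```python
def posterior(prior, p_report_if_true, p_report_if_false):
    """P(hypothesis | source asserts it), by Bayes' rule."""
    hit = p_report_if_true * prior
    return hit / (hit + p_report_if_false * (1 - prior))

# A source that reliably tracks the truth: the report is strong evidence.
print(posterior(0.5, 0.9, 0.1))  # 0.90
# A source you suspect of asserting this regardless: barely moves you.
print(posterior(0.5, 0.9, 0.8))  # ~0.53
```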
On my theory, while evidence that you accept as such is involuntarily incorporated into beliefs, you may receive some evidence that you suspect is flawed. In this case, you actually do have some voluntary leeway. You can deliberately search for evidence that bolsters your belief or that debunks the evidence you think is problematic. In other words, some belief processes, like evidence search (confirmation bias) and motivated reasoning, are voluntary. So, to me, it is not immensely problematic if people try to believe something, as long as “try” means “look for evidence that they actually find subjectively compelling.” Happily, Van Leeuwen agrees with me about “indirect methods” of convincing yourself to believe something, noting that this is how self-deception works (and not via spooky unconscious processes).
The important point, for our purposes, is that some violations of involuntariness and evidential vulnerability are possible. People may have good reasons to try to bolster their beliefs and may hold beliefs for nebulous reasons (compare your belief in quantum physics or global warming). These beliefs may 1) be doubted and yet still held, because people assume there is good evidence they aren’t aware of, and 2) be deliberately bolstered (compare looking for papers supporting climate change because you genuinely believe they exist).
3c. Locality of Evidence
The final part of my theory relates to beliefs held for vague reasons. I hope to have a preprint up with more details on this soon, but the gist is that beliefs differ in the “locality” of the evidence supporting them. By locality, I mean how much of the evidence is immediately available. Perceptual beliefs are extremely local – most of the information bearing on these beliefs is immediately at hand from, e.g., your eyes. However, even perceptual beliefs aren’t perfectly local. You can fall prey to illusions like the Müller-Lyer illusion, and bringing in less local evidence by grabbing a ruler can revise your belief that your percept is accurate. You could also learn that you are dreaming or hallucinating, etc. In general, though, evidence about physical objects in your immediate vicinity is very local, while evidence about religious beliefs and political ideologies is extremely non-local. Take your beliefs about quantum physics or whether raising taxes is bad for the economy. Here, the relevant evidence is buried in complicated academic sources that you haven’t read or in academic minds that you don’t have access to.
What are the consequences of believing something based on non-local evidence? Well, take my belief that climate change is real. I know almost nothing about climate change, and I lack the expertise to evaluate climate science. Therefore, even if someone gave me really strong evidence – say a bunch of scientific papers – that climate change was false, I probably wouldn’t change my mind. However, this isn’t because the belief is evidentially invulnerable, but because I have good reasons to trust the consensus of experts in this domain (or rather, my belief in that consensus; I haven’t personally polled them). I may doubt my belief (apparently violating evidential vulnerability) and I certainly won’t be able to justify it, and I may even go digging in the literature to try to debunk the studies I was given (a violation of involuntariness). But none of this indicates that my belief isn’t a “factual” belief, and it arguably isn’t even irrational.
To sum up, on my theory, we should expect some failures of principles 2, 3, and 4. Principle 1 is iffy and depends on what you mean by voluntary belief. Outright choice is forbidden, but “working” to form a belief is OK, if working means seeking out evidence. We should see failures of no compartmentalization because of inconsistencies and lack of accessibility. For the same reason, we should expect some failures of cognitive governance. And we shouldn’t be surprised at invulnerability to evidence, given that people may doubt that any given piece of counterevidence is authoritative and/or they may be aware of the existence of non-local evidence, but not what it is. If I know there’s an expert who agrees with me, your refuting me doesn’t matter; you have to refute them – even if your arguments make me doubt.
4. Evidence that religious credences don’t behave like factual beliefs
Van Leeuwen summarizes evidence that credences fail to exhibit each of the four principles of belief. In this section, I’ll briefly review (a large amount of) his evidence, and, where appropriate, I’ll criticize his interpretation and suggest that my theory provides a better account.
4a. Evidence against involuntariness
Van Leeuwen focuses on the Vineyard evangelical movement, documented in Luhrmann (2012). Vineyarders actively work to hear God speaking to them, e.g., by cultivating auditory imagery using sensory deprivation. Additionally, when they do hear voices, these are not always attributed to God, and it seems to be something of a decision whether a particular voice is God’s. This decision is at least partially based on evidence, like whether the voice is saying Godly things, whether it is surprising, or whether it gives them a feeling of peace. Nonetheless, at least one of Luhrmann’s interviewees reports a choice of whether to believe the voice is God’s.
All of this is ostensibly evidence against involuntariness, but I’m not convinced. First, hearing voices is a percept, not a belief, so actively cultivating voices is not obviously choosing to believe, any more than actively pointing my eyes at the scientific literature is choosing a belief in science. Second, as Van Leeuwen notes, there simply isn’t unequivocal evidence available to determine whether a given voice is God’s. The situation appears to be: a Vineyarder hears a surprising voice out of the blue which provides a sense of inner peace and offers Godly instructions related to their recent prayers. Nonetheless, the Vineyarder remains skeptical, thinking it might just be a hallucination. Given this ambiguous evidence, they “decide” to believe it was God. I will grant that this is a decision, but it is hardly a paradigm of voluntary belief. Instead, there is a ton of evidence pointing in one direction (note: Vineyarders already believe God speaks to them), and they give it one final push. Van Leeuwen contrasts this “voluntary” belief with Levy and Mandelbaum’s (2014) observation that “you cannot directly decide to believe that today is Wednesday.” The belief in this example has no evidence supporting it at all, making it dramatically different from deciding that the voice that you (a) heard, which (b) sounded like God and (c) made you feel like God was speaking to you, was in fact God. Third, there could be more evidence in Luhrmann’s book, but Van Leeuwen only cites the one Vineyarder (two separate times) as saying that there is a voluntary choice.[3] This isn’t very compelling to me.
Next, Van Leeuwen describes the fact that many people who convert do so because they want the benefits of joining a close community, not based on evidence. I think this is probably true, and I will even grant that these people often become true believers – though there is no attempt to establish this in the book. The big question would be: upon joining a new religion for instrumental reasons, do converts immediately come to believe everything the community does? My assumption would be that they don’t, because they aren’t pretending, and they need to accrue evidence over time. However, let’s grant that they eventually come to believe in the religion, and that they don’t receive explicit evidence from sermons or whatever (which isn’t remotely obvious to me). My explanation appeals to evidence from consensus: if all your friends and neighbors believe something, that is actually good evidence that the belief is true. To see this, ignore religion and think about any other belief. If everyone you know thinks that strawberries are healthy, or that the corner store is a rip-off, would that not be good evidence? A plausible story for post-conversion belief is that exposure to a group of people whose opinions you respect and who all share the same belief just is good evidence for that belief.
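The consensus point admits the same toy Bayesian treatment (again my numbers, with the caveat baked into the comment): even if each individual endorsement is weak evidence, endorsements compound, so a whole community sharing a belief can rationally move you a long way:

```python
def posterior_from_endorsements(prior, lr_per_person, k):
    """Odds-form Bayes: k endorsements, each with likelihood ratio lr.
    Treats endorsers as independent, which real communities are not,
    so read this as an upper bound on the pull of consensus."""
    odds = (prior / (1 - prior)) * lr_per_person ** k
    return odds / (1 + odds)

# Ten mildly informative endorsements (likelihood ratio 1.5 apiece)
# take you from skepticism to fairly confident belief.
print(posterior_from_endorsements(0.10, 1.5, 10))  # ~0.87
```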
Van Leeuwen also discusses a view that accepting Jesus is a choice, but as I’m not familiar with this sort of Christianity, I’ll concede, with some skepticism – especially considering Van Leeuwen’s discussion of the doubt these people often experience – that this may be voluntary belief.
Finally, there is some additional evidence against involuntariness which I find very implausible and will only treat briefly. Van Leeuwen observes that religious beliefs are often creative. For example, one Chinese emperor decided, apparently out of the blue, to construct terra-cotta soldiers to be buried with him in death. The argument is that the beliefs guiding these actions must have been invented voluntarily, as they were novel and there could be no evidence to support them. I find this pretty unlikely. My guess would be that the emperor probably had subjective “good” reasons to construct this belief or to accept it once it was on the table, but I will refrain from speculating wildly outside of my area of knowledge. Similarly, syncretic religions sometimes combine elements of two religions, which appears voluntary. Again, I don’t know much about this, but I believe that this often occurs when both religions can be understood by adherents to be true simultaneously – i.e., there are multiple gods, so we can share (see Wright, 2010). Finally, in addition to people converting voluntarily, there are many cases of incentivized conversions, where people will adopt a new religion for love or by force. Van Leeuwen allows that these people may simply be faking their new religion, but claims that some of them may be real. I agree that some of them may be real, but this is hardly evidence for voluntarism and – at least in my opinion – is better explained by exposure to evidence in the form I describe above.
4b. Evidence against no compartmentalization
Compartmentalization refers to beliefs governing your behavior in only a restricted domain, e.g., while pretending. Van Leeuwen observes (rightly) that we don’t normally think about our factual beliefs, but with them. When I enter my address into a GPS, I don’t think about the fact that I live at that address, I just automatically use that address to guide my behavior. In contrast, Christians talk about belief a lot. More importantly, they complain about how hard it is to keep God on their mind and about how they fail to have faith or fail to act according to God’s will. Moreover, research has found that there are behavioral consequences of reminding people about God. For example, Christians sign up for fewer pornographic subscriptions on Sundays (though their overall subscription rates don’t differ from non-Christians’, so it is just a “Sunday effect”) and Muslims are more cooperative while the call to prayer is being played (Edelman, 2009; Duhaime, 2015). This suggests that religious beliefs only guide behavior in very circumscribed situations. In contrast, factual beliefs are supposed to guide behavior all the time, not only when you have a religious reminder.
My problem with this stems from my view of consistency (see section 3a). To take a non-religious example, we often regret not doing “the right thing” or comporting with our principles after the fact. Why does this happen? I don’t think it’s that we don’t “really” believe in our principles, but that we failed to realize in the moment that one of our principles was at stake! Van Leeuwen is somewhat right that this rarely happens for factual beliefs about our immediate physical surroundings, but it’s not hard to realize the relevance of your immediate physical surroundings – they’re right in front of you! It is therefore not shocking to me that religious people sometimes fail to realize that a religious precept is relevant. The reminder to ask yourself “what would Jesus do” is not because religious beliefs aren’t real, but because in the heat of the moment, we may not realize we are about to do something un-Jesus-like.
Recall that one of the ways I think we compensate for consistency being impossible is via accessibility in memory. Well, playing a call to prayer or going to church should raise the accessibility of religious thoughts, thereby making people more likely to reason and act based on these beliefs. The reason you don’t always need this special accessibility is that when you are acting based on immediately available physical information, the relevant facts are all accessible. But this isn’t always true even for “factual” domains. To take a trivial example, why do we write grocery lists? It isn’t because we only pretended to need milk in the context of our kitchen, as shown by this credence failing to govern our behavior in the supermarket! Instead, we just don’t always think of every relevant fact (like that we need milk), and we need reminders.
For similar reasons, I don’t think regretting or complaining about ungodly behavior shows a lack of cognitive governance by religious beliefs. Instead, it may just show that it’s easier to react based on a salient emotion of jealousy than to realize you are currently coveting your neighbor’s goods, in violation of the 10th commandment.
Next, Van Leeuwen observes that religious people often expend lots of unsuccessful effort trying to act on their religious credences. Again, I don’t think this shows a lack of governance, but that people experience failures of willpower. The fact that I fail on my diet doesn’t show that I don’t really believe I need to lose weight.
Finally, he notes that religious rituals are often designed to bolster beliefs, and beliefs that need bolstering are unlikely to be factual. I like this point, assuming that we can reliably say that this is what rituals do (I don’t know that we can, but let’s grant it). My response again appeals to locality: it is plausible to me that people do doubt their religious beliefs because they don’t have very good explicit evidence for these beliefs. They are relying on tradition being a valid source of wisdom, on their religious leaders being smarter and more educated than they are, and/or on community consensus. Under these conditions, bolstering beliefs may be desirable, and while we can quibble about whether getting everyone together to perform a ritual qualifies as rational evidence, I think there’s a decent case that it does. Again, look at consensus: if everyone I know – all of them smart, capable, reasonable people who lead successful lives – believes that this ritual is legitimate and is willing to devote their time to seeing it through, that could very well be evidence that it is. (Note that I don’t mean the ritual actually is effective, just that someone might rationally take this as evidence that it is.)
Put another way, Van Leeuwen writes, “Factual beliefs (with contents like dogs have teeth…) do not need strengthening; we just rely on the world’s being as they describe. If they are strengthened in any sense, it is by evidence rather than ritual.” I think this is a tempting argument, but let’s consider a belief with less locality than dogs having teeth. Suppose I believe in climate change, but have no explicit evidence to justify this belief. I hear about a protest in my city and decide to attend. Some thousands of people turn out to the protest and there are sermons from prominent climate scientists, though they don’t cite specific sources or data. I come away much more convinced that climate change is real. Is this a ritual irrationally bolstering my belief? Or is it a (quasi-)rational update of a belief that was due for some strengthening?
4c. Evidence against cognitive governance
The main claim in this section is that “various things religious people do reveal that at some level they represent the world as being merely a natural world, even if they say otherwise. In other words, their factual beliefs do most of the governance of their reasoning and behavior, not their religious credences” (p. 69).
First, some Vineyarders (and other religious people) use a “double coding” of supernatural phenomena, where they represent both the religious as well as the physical explanations. So, someone might report that a house collapsed because of witchcraft, but, when pressed, they are aware that the house was destabilized by termites. The implication is that they maintain two maps. Again, I don’t know too much about this, especially in terms of cross-cultural analyses of religion, so I’m prepared to bite the bullet, except for one thing. Pascal Boyer, who does have the relevant experience here, brings up this exact point and comes to the opposite conclusion (my termite/witchcraft example is taken from his 2001 book and originates with Evans-Pritchard). While people are aware of the mundane causes of events, they don’t view the supernatural as an epiphenomenal gloss on top of those events. Instead, the witchcraft explanation answers an entirely different question: ‘why did the house collapse now, with those people inside?’ In short, the two representations appear to do different sorts of work, at least some of the time. However, I’m prepared to concede this point if other cultures truly keep two sets of books.
Another source of evidence against cognitive governance is that people don’t pray for things that are certain not to happen naturally, preferring to pray for things that might happen anyway. So, people might pray for someone to recover from cancer, but not to regrow a limb. This suggests that their “true” set of books recognizes reality and governs their prayer. They don’t pray for outright miracles because they “know” that these events will not happen. But I have serious doubts about this explanation.
I’m no expert, but I’m aware of multiple religious admonitions against praying for miracles. Additionally, there may be an assumption that asking for a stronger miracle requires more righteousness on the part of the asker (you don’t get something for nothing), so asking for extreme miracles is presumptuous. In other words, what is driving behavior here may still be religious beliefs, just not the most simplistic strawman version of these beliefs. There is also an extremely straightforward cultural-evolutionary story about these praying guidelines, to wit: those who demanded miracles didn’t get them and had to adjust their (religious) beliefs. Or worse, those who asked for extreme miracles in the context of battle just died out entirely. In short, I accept the point that religious people are aware that asking for open miracles is unlikely to work, but I see no evidence that this is because their factual beliefs are seeping through their credences.
The same is true of religious people who pray for health, but who still go to the doctor. Again, religious people have explicit beliefs about doing their due diligence. The phrase “God helps those who help themselves” and the joke at the top of this essay exemplify this. Now, one could argue that all these beliefs are simply religious people trying to resolve dissonance with their factual beliefs or trying to invent apologia to cover their failings, and that could be true. I just think we should consider taking them at their word about their beliefs and assume that the coherent story they’re telling may actually explain their behavior.
Before ending this subsection, I’ll note that some of Van Leeuwen’s examples of failed cognitive governance are a little weird. In one case, Van Leeuwen recounts an incident of a holy man looking at his watch while speaking in tongues. This is presented as an obvious instance of failing to be governed by the religious belief (or action?), but lacking knowledge of the specific beliefs around speaking in tongues, I don’t know how I’m supposed to conclude that this is obviously problematic. Similarly, he reports a rainmaker finding it absurd to be asked to conduct a rainmaking ceremony during a dry season. Again, we are supposed to take this as obvious evidence that the rainmaker doesn’t “really” believe that the ritual works, but absent information about their theology, I don’t know how this follows. If the rainmaker’s holy scripture says “thou shalt only perform this ritual in the rainy season,” this reaction is perfectly natural. I’m not saying this evidence is worthless, I just don’t know what to make of it one way or the other.
4d. Evidence against evidential vulnerability
Here, Van Leeuwen argues that while factual beliefs definitionally respond to evidence, religious beliefs rarely get revised given counterevidence. I agree with Van Leeuwen that these beliefs are not evidentially vulnerable, but I don’t think that’s because they aren’t beliefs, but because of non-local evidence. If you don’t have access to the relevant reasons for your belief, but you believe they exist, you may not change your beliefs given new evidence (again, consider being presented with high quality evidence against climate change or vaccine safety). Importantly, the evidence you believe is out there may not even exist! This leads to an unfortunate failure of common knowledge, where everyone is looking at everyone else to prop up their belief, but no one actually has any evidence.
In light of this, the fact that people sometimes doubt their religious faith isn’t very compelling. Nor is the fact that people sometimes say belief is not about evidence, but about faith. In fact, this is almost perfectly understandable if they are responding to evidence (e.g., consensus, authority) that is not readily verbalizable.
Another point Van Leeuwen raises is that many people who leave the faith do this for social reasons or because they are let down by hypocrisy or bad behavior by community members. Again, I don’t dispute this, but I think it can be explained on my view – if you are relying on others to embody your evidence and those others reveal themselves to be hypocritical, that is actually the main source of your evidence gone.
One claim I will provisionally push back on is the idea that doomsday cults get more certain of their beliefs after failed predictions of an apocalypse. This is presented as evidence against evidential vulnerability. My knowledge on this is confined to Festinger et al. (1956), and there could be more recent data, but as far as I know, this claim is almost entirely wrong. Very briefly: 1) most cult members do in fact update their beliefs. This includes the famous Seekers cult that Festinger studied – only a few members kept their faith. 2) Crucially, the beliefs they get more certain of are different beliefs – they do in fact update. For example, the Seekers did not get more certain of an apocalypse, and they definitely didn’t believe one happened. Given the failed predictions, they revised their beliefs to say that the aliens (don’t ask) canceled the apocalypse in recognition of their faithfulness. 3) This is an active process of reasoning to find explanations for the evidence that they absolutely recognize and update on. In short, their beliefs are absolutely vulnerable, just not in a naïve falsificationist sense (for additional discussion, see Sommer, Musolino, & Hemmer, 2023b).
4e. Believing vs. thinking
One final point on evidence – in chapter 5, Van Leeuwen presents results from experiments showing that laypeople treat the word “belief” differently than researchers/philosophers. They tend to reserve “belief” for religious beliefs and use “think” or “thought” for factual beliefs. This implies that laypeople are implicitly aware of the distinction between two types of beliefs. I haven’t looked into these studies in detail and Van Leeuwen accounted for my immediate alternative hypotheses, so I’m just going to concede that this is evidence against me (and it makes me doubt, and I can’t respond to it, but I’m sticking with my beliefs – not because they aren’t evidentially vulnerable, but because I have other evidence, and I suspect an alternative explanation is available, even if I don’t have one).
5. Signaling, credences, and functionalism
5a. Why Credences?
If Van Leeuwen is right, why would we evolve a whole separate type of belief (even if it is parasitic on pretense, which probably evolved for learning through play)? Van Leeuwen’s answer is that credences are used for signaling that you are a member of a group, rather than for their truth value. Importantly, credences may be better signals if they are false. This is why you wouldn’t signal your Christianity by yelling about murder being wrong, but by yelling about Jesus. Almost all non-Christians think murder is wrong, so it isn’t a good signal. Similarly, you will send stronger signals if your credences don’t get updated based on evidence (otherwise you might lose your faith), so credences shouldn’t be evidentially vulnerable either. However, because credences are often false, it’s important that they be compartmentalized and not govern your cognition, lest you take inappropriate actions when it matters. You also need to be able to join a group voluntarily and adopt their signaling beliefs. So, a signaling account explains why credences do not fulfill any of the four criteria of belief. In general, I agree that beliefs do serve as signals and that false beliefs would tend to be good signals, but I don’t think things are this simple.
First, take the claim that beliefs send stronger signals of group identity if they’re false. I don’t think the key to a good signal is truth or falsity, but exclusive access to evidence. Scientists can signal because they have access to esoteric knowledge. Many cults throughout history taught members secret rites and forbidden knowledge. Sure, a lot of this stuff was false, but that isn’t what made it a good signal. One way to see this is to look at groupish divides on factual matters. Take climate change again. Presumably one side of this debate is (more) accurate. But it’s not obvious to me that either side is more successful at being a group or at anything else, simply because their beliefs are false. Again, what matters is differential information, not truth or falsity. Now, believing things that are false does open a massive source of “differential” information, because you’re no longer constrained by reality, but beliefs are often so difficult to get right that there’s no need to look for information this outlandish.
Second, if religious credences really are voluntary, this would seem to make them incredibly bad signals, because anyone can adopt them. In fact, there’s a much nicer signaling story to be told on my account: if beliefs really are always based on evidence, but it’s often evidence that you can only get by consistently associating with a group, then truly holding a groupish belief would be a great signal. On Van Leeuwen’s account, this would be evidence that you decided to engage in this group’s game of pretend. On mine, it’s that you accepted them as trustworthy on a level outside of your control, such that your beliefs were involuntarily updated. Ironically, elsewhere in the book (p. 159), Van Leeuwen tells a very similar story when it comes to behaviors. In coming to habitually learn a set of cultural behaviors, people burn their bridges behind them, in that their behavior comes to be partially involuntary, making it hard to move to a new group. So here, involuntariness makes for an even better signal. (For the record, Van Leeuwen doesn’t think voluntariness increases signal quality; it’s just useful because you need to be able to join groups for arbitrary reasons. I only bring up this argument because elsewhere [e.g., p. 164], he is happy to suggest that anything that makes for a good signal should be expected to evolve for that purpose.)
5b. A problem with pretense
I want to open this section with a big concession. For a while, I’ve been skeptical of self-deception (and Van Leeuwen is too! [p. 234]), because it seems to involve double bookkeeping. For you to deceive yourself, some part of your (unconscious) mind must detect that a belief is false, keep that information to itself, and somehow encourage the rest of your mind(?) to believe it anyway. Even worse, if beliefs are important for guiding actions, you need something like a two-map structure to avoid acting on the beliefs you subconsciously know are false. This whole story seems incredibly unlikely to me. However, Van Leeuwen is absolutely right that we do engage in double bookkeeping when we pretend! This is a brilliant move and shifts me a bit in his direction. That said, I really don’t see how credences can be a form of pretense.
My main problem is that we seem to need some distinction between pretending and religious belief. They cannot be exactly identical. While we do keep two maps while pretending, we are explicitly conscious of this fact. This is almost certainly because we need to maintain the second map in working memory, which is accessible to consciousness. But do religious people know they are pretending? It doesn’t seem like they do.
Now, Van Leeuwen is sort of aware of this criticism – in discussing the Vineyard holy man who looked at his watch while speaking in tongues, Van Leeuwen writes, “people pretend all the time without consciously thematizing it as such.” I completely agree; this is a consequence of our lack of consistency – we cannot represent everything we do under all possible descriptions. But I don’t think anyone pretends without being able to acknowledge that they are. Most religious people just don’t seem to be pretending at all. Now, you could argue that they are pretending even harder, i.e., pretending to not be pretending, but beyond the fact that this is unlikely, am I supposed to believe that every formerly religious atheist wouldn’t be shouting from the rooftops that everyone in their religion is just pretending?
In fairness, Luhrmann observes that many Vineyarders appear to explicitly treat their religious practice as pretend, but this only emphasizes the extent to which most religious people do nothing of the sort. For that matter, I (and, I assume, you) do nothing of the sort! Presumably, Van Leeuwen doesn’t mean to imply that any particular group of people only has factual beliefs. But this means that some of my beliefs are more like imaginings. If this is true, they are definitely unlike imaginings in the following way: I have no idea which ones those are or how to stop imagining them. If they are subconscious imaginings, that appears to be a totally novel form of imagining.
Van Leeuwen suggests some answers to this question of the difference between pretending and creeding, but they are frustratingly vague. He writes, “religious credence is imagining plus group identity and sacred value” (p. 16), but what is this supposed to mean? I think he might mean that credences are emotionally infused or backed by sacred values, which may distract people from the second map, but this is speculation on my part and it’s not at all clear that this would work. Later (p. 149), he suggests that what distinguishes credences from imagining is their functional role in marking group identities, but again, it’s not clear how this role solves the awareness problem.
One final question about pretense: I understand that Christians complaining about doubt is not perfectly compatible with evidential vulnerability (because doubt should extinguish belief), but how is it compatible with pretense? On Van Leeuwen’s account, these beliefs are credences. They should be immune to doubt. Are they pretending to doubt? Is there some kind of vacillating or slippage between the two maps? Maybe, but there is no explanation of how this is supposed to work. It seems simpler to imagine that they are just struggling with insufficient evidence for one-map beliefs. (Note: I only noticed this during proofreading, after reading the book and writing this whole piece – consistency checking is hard!)
5c. Functionalist confusions
Functional accounts of belief date back at least to the 1960s (e.g., Katz, 1960) and attempt to explain beliefs based on the benefits they offer. For example, beliefs might bolster our self-esteem or send signals to in-group members. One difficulty these accounts face is that effects occur after their causes. Suppose I do not currently believe X is true, but it would make me happy if I did believe it. You can’t appeal to this happiness in explaining how I come to believe X, because we need a mechanism that moves me now, before I’m happy. There are at least two broad solutions to this problem. The first is to propose voluntary belief or (extreme forms of) self-deception, where some part of my mind detects that belief X would make me happy, and arranges that I come to hold it. I think this view is attractive because we often do act to achieve future goals. I go get a drink not because I’ve already quenched my thirst, but because I predict that I will, if I get a drink.
The second solution to the problem of causes preceding effects may be a remnant of Behaviorism. According to Behaviorists, behaviors work like evolutionary natural selection: they are generated at random and are either rewarded or punished. This scheme allows us to explain behaviors by their effects. The randomly generated behaviors (or beliefs) that persist are those that have beneficial effects.
I don’t think we can voluntarily form beliefs and I think our belief formation is much more intelligent than trying on beliefs at random (we also form plenty of beliefs that cause us pain). So, neither of these solutions work for me. And that’s fine, because on my view, we don’t form beliefs based on their effects, but based on their causes, and their causes are singular, and that singular cause is evidence.
But Van Leeuwen’s story seems to be a weird mix of both explanations. Religious credences are voluntarily chosen for their benefits. However, they also serve as signals, which is almost paradigmatically an unconscious mechanism. This gets even more convoluted because Van Leeuwen sometimes combines the two explanations, suggesting that behavior is driven by an intentional desire to achieve the signaling benefits. For instance, in discussing religious extremists who act on their credences, Van Leeuwen writes, “For the extremist to be aware that his refusal to go to the hospital is a strong signal of group loyalty, he must also be aware that his refusal constitutes some sort of risk or cost” (p. 167). The problem is that there is no need for this person to be aware that they are sending a strong signal (or, for that matter, that refusal is costly) – signals work via the unconscious evolutionary mechanism. But if that’s true, we need a causal story about why they acted on this belief and we can’t appeal to voluntarism, because there is no goal for them to voluntarily achieve.
Indeed, elsewhere, Van Leeuwen allows that group members may be unconscious of the signaling, or unaware of the benefits a given credence/ritual has for the group. This is fine, but again, there must be some explanation for what drives behavior on the conscious level. This is what you’re committed to if you take a voluntary, as opposed to evolutionary, view of functional belief. For example, buying an expensive engagement ring is a costly signal of commitment to a relationship. This might make the relationship more likely to endure. However, most people probably aren’t aware of this signaling explanation. Crucially, there does need to be some story about why a given person actually bought a ring. It may appeal to custom or wanting to make their partner happy, but there needs to be something. If religious people are unconscious of the benefits, how are they supposed to voluntarily choose them?
Moreover, how does this voluntary process get off the ground? Even if we accept that people can voluntarily adopt credences to gain benefits, why does the first person adopt a new credence? It can’t be for the benefits, because those haven’t been established yet. Compare the ring case: the first person who spent a lot of money to signal commitment instantly achieves this, and we can account for their behavior on the conscious level if we assume they wanted to give a large gift. But credences only give benefits to a group, which means you need some reason for a bunch of people to get together and somehow decide to pretend together.
Now all of this can be addressed. Van Leeuwen discusses the creativity of novel religious beliefs and notes that, in many cases, early religious groups have a core group of followers who elaborate on a novel religious belief and work out the relevant mythology (p. 105). The story would run as follows. Some prophet claims to have had a vision, they (somehow) convince a core group of followers to join in their imagination based on this vision, and maybe they convince their kids or family members by conformity pressure. Further converts can choose to believe based on benefits or even awareness of signaling power. This gives us an origin and a proximate/conscious mechanism before the benefits have accrued. But compare the alternative: someone has a vision which seems compelling to people who know them. They take the prophet at their word, or maybe they observe their new pious behavior and personality shift and take that as evidence. In turn, many people believing in the prophet provides additional evidence that there is something real here, even if there isn’t, so more people convert on the “evidence.” I think this is at least as plausible as the book’s account.
5d. An abundance of theoretical caution
There’s another set of issues that I almost feel bad talking about, because they stem from Van Leeuwen being a careful thinker who acknowledges when evidence disagrees with him and who doesn’t want to make sweeping generalizations beyond the evidence. For example, Van Leeuwen allows that some religious groups really do have factual religious beliefs (see fn 2), and only claims that some or most religious groups possess credences instead. Unfortunately, this theoretical caution leads to some claims with questionable falsifiability.
Because some groups may hold factual religious beliefs, no group behavior violates the theory. Additionally, any individual’s religious behavior that appears belief-like can be explained as “extremism,” or taking credences as factual (no reason is offered for why this might happen). Moreover, private religious behavior – which is problematic on this theory because why would you practice religion when no group members are watching? – is explained as the development of “habitus,” or a cultural set of behaviors. So even those who appear to be sincerely acting on religious views may only be habitually conforming to their group practices (p. 158) or even explicitly practicing (p. 167) to develop that habitus. Or they may simply enjoy performing religious actions (p. 167). Of course, this may be true, but it explains away any behavior that would otherwise conflict with Van Leeuwen’s theory.
Beyond behavior that doesn’t conform to the theory, religious faith gained or lost on the basis of evidence is often explained away. So, the subset of Vineyarders who cite factual arguments is said to also be dissatisfied with the social group, which is treated as the real cause of their loss of faith (p. 96). Similarly, Van Leeuwen suggests that we find it impolite to argue with people’s credences because they aren’t factual beliefs, but he acknowledges that people sometimes do argue about faith. Such arguments are written off as the “messiness” of human nature (p. 143). And Van Leeuwen concedes (p. 26) that sometimes people’s beliefs about, say, climate change do update with evidence, which reveals that they were really factual beliefs all along. This is presented as a benefit: Van Leeuwen’s account has the “expressive power” to deal both with beliefs that update given evidence and with beliefs that don’t. Unfortunately, the amount of expressive power here seems to be infinite, capable of explaining any possible result. In contrast, my one-map theory explains behavior in terms of the evidence that is consciously accessed and belief in terms of the evidence that is believed. When people violate strict versions of my theory, e.g., by failing to behave based on their beliefs, it should be simple enough to demonstrate that those beliefs failed to come to mind, leaving them unable to drive behavior.
6. Wrapping up
I think Van Leeuwen and I agree about most of the substantive facts. Our disagreement revolves around two questions:
1. How do we explain deviations from the 4 principles of belief?
2. Is the pretense/credence account plausible?
My answer to the first question is that beliefs are much more complicated than most people give them credit for. Sure, it’s easy enough to form the belief that there’s some broken glass in front of you and behave accordingly. But it’s almost infinitely harder to form accurate beliefs about complex topics like climate change, and it can be very difficult to act on those beliefs in the heat of the moment. Let’s imagine a U.S. citizen who strongly leans Democrat and believes in climate change but who has no specific knowledge of the science and hasn’t read anything on the topic. They sometimes debate with a family member who denies that climate change is real, and some arguments make them feel a flicker of doubt, but they don’t change their mind. Sometimes they indulge in a little confirmation bias, bolstering their beliefs by reading information that supports them. Despite their strong belief in climate change, they sometimes fail to act in accordance with those beliefs. For example, when they bought a car recently, they really wanted a large vehicle for safety reasons and agonized over whether to pay less for a sedan or spring for an SUV. In the end, they bought the larger SUV, and only later realized that this wasn’t a green decision. They wish they had realized it in the moment, but they were distracted by price and safety considerations.
In short, lapses in evidential vulnerability and cognitive governance, along with compartmentalization, are not obviously incompatible with beliefs. To be sure, these are – at least to some extent – failures of perfect rationality, but I think the difficulty of belief means that we are doomed to fall short of such perfection.
Regarding the second question, I don’t find the pretense account very plausible. While there do appear to be some people who talk in terms of voluntary choice and pretense when discussing their religious beliefs, this doesn’t seem to characterize most religious people. What voluntarism there is appears to be people volunteering to join a group, not volunteering to hold a belief. This is further supported by the fact that – as far as I know – almost no religious people admit that they are pretending or spontaneously decide to quit. Van Leeuwen may be right that some cultures have two-map structures, and I’m as disinclined as he is to make sweeping claims. So, where he thinks most cultures have two maps, I lean toward thinking most cultures are one-map believers. Beyond voluntarism, there appears to be enough evidential vulnerability (or at least sensitivity) in religious apologetics and in appeals to miracles and religious experiences – to say nothing of softer evidence like group consensus and trust in authorities – to satisfy me that evidence is plausibly responsible for religious beliefs. I also think failures of cognitive governance, and the extent of compartmentalization, are overstated. Millions of people devotedly practice their religion multiple times a day, even when no one is watching. What failures there are seem better accounted for by religious thoughts being inaccessible in the heat of the moment than by sudden shifts into or out of a state of pretending.
6a. The puzzle of religious rationality
Van Leeuwen ends the book with a brilliant chapter on “the puzzle of religious rationality.” His question is, how can rational humans believe in religion? I love everything about this chapter, from his organization of views in the literature, to his arguments against them, to his solution fitting beautifully with his emphasis on the attitude part of propositional attitudes. Of the views presented, my position is most closely aligned with Neil Levy’s (e.g., 2021), which proposes that people (rationally) trust authorities. I agree with this point, but emphasize the effects of non-local evidence. Let’s see how this emphasis deals with Van Leeuwen’s objections to Levy’s view.
Van Leeuwen’s first argument is that Levy’s view diminishes the rationality of people’s information consumption and doesn’t fit with evidence that people learn critically from authorities rather than simply accepting all information. And people are aware that religious beliefs are contested – and they, themselves, contest the beliefs of all other religions – so they should be critical. I think non-local evidence addresses both arguments. I don’t think it’s too harsh a judgment of people’s information consumption to say that they can’t examine every piece of apologia ever written to come to a reasoned conclusion about religion. I also think that, given the difficulty of the problem, deferring judgment is unavoidable. To reuse an example, I am aware that there are debates about climate change, but I am not equipped to evaluate these technical debates, so I will stubbornly defer to my experts. Moreover, the evidence Van Leeuwen cites shows that children are skeptical of sources who haven’t been accurate in the past. But we are talking about parents and church leaders here. I’m not saying these sources are perfectly reliable (God knows, I’m tempted to invert my dad’s advice half the time), but surely it’s not crazy to defer to your parents or to judge that they are credible sources.
The second objection to Levy’s view asks why beliefs based on authority are so resistant to change. Again, my explanation appeals to non-locality. Because I often don’t know why these authorities hold their beliefs (especially when one such authority is God, who famously works in mysterious ways), I may hold on to a debunked belief because I don’t even realize it’s been debunked. Instead, I assume there is some other reason to be discovered, even if there isn’t. In fact, Van Leeuwen tells a story along these lines in the book (p. 124), where a doubter learns that her Archbishop shares the same doubts and decides that if the Archbishop can maintain faith with those doubts, she can too.
Van Leeuwen’s final argument is that deference to authority can’t explain why people work so hard to form and maintain religious beliefs. Non-locality may solve this too. I think deference to authorities is a kind of degenerate evidence. It is evidence of evidence. Deference can work, but I imagine it can cause people some serious angst that they don’t have a single good (explicit) reason for their beliefs. This is especially true if they are being challenged. Under these circumstances, I expect people to work to resolve their doubts.
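To make “evidence of evidence” concrete, here is a minimal Bayesian sketch – my gloss, not a formalization Van Leeuwen or Levy offers. When an authority asserts a proposition $H$, the deferrer updates on the assertion $A$ itself rather than on whatever first-order evidence $E$ the authority presumably has:
\[
P(H \mid A) = \frac{P(A \mid H)\,P(H)}{P(A \mid H)\,P(H) + P(A \mid \neg H)\,P(\neg H)}
\]
The update is only as strong as the authority’s reliability ratio $P(A \mid H)/P(A \mid \neg H)$, and because $E$ never enters the calculation, a later debunking of $E$ leaves $P(H \mid A)$ untouched unless the deferrer actually learns of the debunking. On this sketch, both the resistance to revision and the angst fall out naturally: the deferrer holds a belief whose explicit support bottoms out in someone else’s say-so.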
In sum, I think a small tweak to Levy’s view explains how people form religious beliefs. In Van Leeuwen’s organization of theoretical views, Levy’s solution adjusts the rationality of beliefs, while Van Leeuwen’s own credence solution adjusts the attitude people hold (from belief to credence). I am sympathetic to Van Leeuwen’s concern about adjusting rationality, or, as he nicely puts it, sailing “between the Scylla and Charybdis of positing too much or too little rationality.” In this vein, I would modify the label for this solution a little: I don’t intend to adjust people’s rationality to account for weird beliefs; I want to adjust our estimate of the difficulty of the problem.
7. Conclusion
I don’t like how critical this review turned out, but criticism was probably the only way to highlight the differences between my view and the book’s. But beyond our differences, I was delighted to see how much Van Leeuwen and I agree on. I love his arguments for the rationality of factual beliefs and his insistence that we have many such rational beliefs. I (almost) completely agree about belief being involuntary and about this point being extremely important. The same goes for evidential vulnerability and for his idea of “captur[ing] the rationality of factual beliefs by focusing on their extinction conditions: what makes mental states go away.” I also have to include the quote, attributed to Alison Gopnik, that “one way to tell that someone doesn’t believe something is if they say they ‘believe’ it” (p. 130). I have had similar thoughts, and I really like this idea. The book also presented a wide array of evidence that really did make me doubt a few times. In fact, I would go so far as to say that I went into the book expecting to think the whole theory was wrong (this was probably unfair of me), but I now consider it a viable possibility.
Again, I want to stress that I found the book incredibly stimulating, well-written, and insightful. The theory encompasses so much territory and yet remains extremely tight. Beyond the theory, there are so many nice little points scattered throughout the book, like the idea that factual beliefs aren’t the only cognitive attitudes that people collect evidence to support – we also do this for hypotheses or suppositions. At the same time, beliefs differ from hypotheses because you can’t hang on to a belief without evidence, but you can maintain a hypothesis in the absence of any support. And I haven’t even mentioned Van Leeuwen’s accounts of group identity, group belief, or sacred values, but all three were excellent. In particular, I cannot praise the sacred-value connection highly enough. I absolutely love links between beliefs and preferences, and tying credences to violations of the axioms of rational choice and other economic principles of utility maximization is inspired. No, I won’t clarify this further. If you made it this far, you’re clearly up for some reading. If you haven’t already, go read the book.
[1] This is extremely nitpicky, but I don’t think this works as a solution to Hume’s problem. As I understand it, Hume’s concern wasn’t about the functional role of beliefs vs. thoughts. In other words, he wasn’t asking how we can differentiate beliefs and imaginings in terms of their effects, but how the mind distinguishes these mental states. When I believe something, I just know that it’s a belief and not a thought. I don’t infer this from the fact that it’s involuntary or that it shows cognitive governance. This is why Hume had to propose vividness as a criterion. He was gesturing at the feeling of belief, not its causes or effects.
[2] Note that Van Leeuwen distinguishes between particular and general one-map vs. two-map theories. A particular theory says that a certain group’s religious beliefs are two-map credences or one-map beliefs. General theories make this claim about all religious groups. Thus, it might be the case that only some groups have two-map credences. Van Leeuwen is not claiming that all religious beliefs are credences, but his thesis is that many and probably most people have two-map structures.
[3] Unfortunately, there is something of a recurring theme of relying on anecdotal reports. For example, Van Leeuwen writes “I once asked a then-recent convert to Protestant Christianity why he adopted his “beliefs.” His answer: “I wanted that as part of my life.” That suggests he chose general Christian “beliefs” for their effect in his life: he could just as well not have chosen those “beliefs” had he not wanted those effects. I think his outlook is representative.”
References:
Duhaime, E. (2015). Is the call to prayer a call to cooperate? A field experiment on the impact of religious salience on prosocial behavior. Judgment and Decision Making, 10(6), 593–596.
Edelman, B. (2009). Markets: Red light states: Who buys online adult entertainment? Journal of Economic Perspectives, 23(1), 209–220.
Festinger, L., Riecken, H. W., & Schachter, S. (1956). When prophecy fails. University of Minnesota Press.
Fishbein, M., & Ajzen, I. (1977). Belief, attitude, intention, and behavior: An introduction to theory and research.
Katz, D. (1960). The functional approach to the study of attitudes. Public Opinion Quarterly, 24(2), 163–204.
Levy, N. (2021). Bad beliefs: Why they happen to good people. Oxford University Press.
Levy, N., & Mandelbaum, E. (2014). The powers that bind: Doxastic voluntarism and epistemic obligation. In R. Vitz & J. Matheson (Eds.), The ethics of belief: Individual and social (pp. 12–33). Oxford University Press.
Luhrmann, T. M. (2012). When God talks back: Understanding the American evangelical relationship with God. Knopf.
Popper, K. R. (1960). On the sources of knowledge and ignorance. In Conjectures and refutations.
Sommer, J., Musolino, J., & Hemmer, P. (2022). Toward a cognitive science of belief. In J. Musolino, J. Sommer, & P. Hemmer (Eds.), The cognitive science of belief: A multidisciplinary approach. Cambridge University Press.
Sommer, J., Musolino, J., & Hemmer, P. (2023a). A hobgoblin of large minds: Troubles with consistency in belief. Wiley Interdisciplinary Reviews: Cognitive Science, 14(4), e1639.
Sommer, J., Musolino, J., & Hemmer, P. (2023b). Updating, evidence evaluation, and operator availability: A theoretical framework for understanding belief. Psychological Review.
Wright, R. (2010). The evolution of God: The origins of our beliefs. Hachette UK.