I manned an Ask an Atheist table regularly when I was in college. We had a lot of good discussions at that table, and the ones I most enjoyed tended to be the ones that focused on a particular point. Someone would say they didn’t believe in evolution, and I would go over the genetic evidence for it (which was and is my favorite); someone would argue that without religion you can’t have morals, and I would explain why their morals had no more objective basis than ours; etc.
Every now and then, though, there would be one of those discussions where every time I made a few points against one of their arguments, the religious person would change the subject. The whole argument would be a game of religious apologist bingo, jumping from “The Bible says God exists” to “You can’t prove God doesn’t exist” to “But you can’t have morality without God” to “But you can’t explain the universe without God” to Pascal’s Wager, etc., etc. No matter how many arguments for a god failed to stand up to scrutiny, the person would just jump to another one.
The strange thing about this pattern, which you see in all kinds of debates all the time, is that people don’t seem to recognize it. When five arguments have been knocked down, the person jumps immediately to the sixth argument, apparently without pausing to consider the possibility that maybe there is a reason the first five “foolproof” arguments have failed: that maybe the problem is that the premise they’re defending is wrong, rather than that they just haven’t found the one argument that really will be foolproof.
To my mind, the reality in the argument with the Christian at the Ask an Atheist table is that the first five arguments failed for the same reason the next five arguments are bound to fail: because the premise they’re defending is false. Gods don’t exist.
Why doesn’t this occur to more people in these arguments? Why, after the umpteenth argument in a row has failed, don’t people think to ask themselves if maybe the problem is with their premise? Why do they keep searching for reasons why they are correct instead of asking if they are correct?
Well, why did I keep searching for a physical problem with my back when I started experiencing chronic back pain? Why did I keep searching for a problem with me all those times I experienced the emotional pain resulting from depression?
I know the answers to those questions: I kept searching for a physical problem with my back because physical pain begs for a physical explanation. Physical pain instinctively prompts you to ask “How am I damaged?” By the same token, I kept searching for a problem with me when I felt bad about myself because that kind of emotional pain prompts the question “What is wrong with me?”
I think people keep searching for reasons why they are correct instead of asking if they are correct because cognitive bias prompts the question “Why am I correct?”, not “Am I correct?”, in the same way that pain prompts the question “How am I damaged?”, not “Am I damaged?”
Our instinct is to trust our lived experience. Our instinct is to trust what we perceive — to trust that when we feel hurt, it is because we are hurt, that when we feel bad it is because of something bad, and that when we feel conviction it is because something is true.
I spent years uselessly searching for a problem with my back to explain my back pain, and years uselessly searching for problems with myself to explain why I felt bad about myself. And everywhere you look, all the time, people in arguments spend tremendous amounts of time asking “Why am I correct?” in response to their bias without even realizing that that is what they are doing. To me, these all look like different versions of the same story, and I think that the same type of skeptical thinking can be applied in each case.
What would it mean to think of bias as an analog of pain? It would mean thinking of bias as an experience produced by my brain: it takes in all of the available contextual data I have about an idea and turns it into a conscious experience of conviction about that idea. In the same sense that my brain will take in an enormous amount of subconscious context before creating a conscious pain experience, I imagine that it takes in context in a similar way before producing a conscious experience of conviction. And in the same sense that I can skeptically evaluate an experience of pain and make a conscious decision about whether or not it indicates damage, I can skeptically evaluate an experience of bias and make a conscious decision about whether or not it accurately suggests the validity of an idea.
Let’s do one more analogy:
Imagine that you have a motion-activated security system in your house. If someone tries to break in, it goes off, and you call the police. However, if you have a dog, sometimes your dog might set it off, too. You don’t want your reaction to the alarm sounding to automatically be “Someone is breaking in”; you want your reaction to be “The alarm is going off”, so that you can then ask “Is it sufficiently likely that someone is breaking in that I should call the police, or did I forget to close the gate to keep the dog upstairs?” The alarm going off doesn’t mean “burglar”, it means “alarm”, and it’s up to you to decide whether or not that alarm is a sign that your house is being broken into.
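(A tangent for the mathematically inclined: that question, “Is it sufficiently likely that someone is breaking in that I should call the police?”, is exactly the kind of thing Bayes’ theorem formalizes. Here’s a minimal sketch in Python; every number in it is invented purely for illustration, not a claim about real alarms or burglary rates.)

```python
# A minimal sketch of the alarm-vs-burglar question as Bayes' theorem.
# All of these numbers are made up for illustration.

p_burglar = 0.001                 # prior: a break-in on any given night is rare
p_alarm_given_burglar = 0.95      # the alarm almost always catches a real break-in
p_alarm_given_no_burglar = 0.05   # ...but the dog trips it now and then

# Total probability of hearing the alarm at all:
# P(alarm) = P(alarm|burglar)P(burglar) + P(alarm|no burglar)P(no burglar)
p_alarm = (p_alarm_given_burglar * p_burglar
           + p_alarm_given_no_burglar * (1 - p_burglar))

# What the alarm actually tells you: P(burglar | alarm)
p_burglar_given_alarm = p_alarm_given_burglar * p_burglar / p_alarm

print(f"P(burglar | alarm) = {p_burglar_given_alarm:.3f}")  # ~0.019
```

With these made-up numbers, even a fairly sensitive alarm plus a dog that trips the sensor one night in twenty means the alarm, by itself, points to “burglar” less than 2% of the time. The alarm means “alarm”; the rest depends on everything else you know.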
In the same way that it is better to think “alarm” instead of jumping to the conclusion of “burglar”, I endeavor to think “pain” instead of “physical damage”, “emotional pain” instead of “something wrong with me”, and “I feel conviction about this idea” instead of “this idea is true”.
It is incredibly valuable to identify the experiences, the “alarms”, that cause us to act in certain ways or beg us to make certain assumptions. I wasn’t aware that it made sense to question my experience of pain, physical or emotional, until I realized that pain was a constructed experience that might not be accurate, and therefore deserved to be examined critically. By the same token, we are not able to question our biases unless we are able to separate ourselves from them, look at them, and critically question the conscious experience of them.
If you don’t realize that you are experiencing a product of your fallible subconscious brain when you experience, say, a conviction that a particular deity must exist, then you may end up searching for ways to justify it instead of asking if it is justified. You cannot effectively question the bias “alarm” unless you are aware of it. Similarly, if I didn’t understand that pain is a product of my fallible subconscious brain, then I would still be fruitlessly searching for ways to treat my low back. If I didn’t realize the same thing about emotional pain, then I might still be wasting time trying to find things to fix about myself in response to it.
I want to take a moment here to emphasize that when I say we should question these experiences, I don’t mean we should dismiss them. Sometimes the alarm going off means there is a burglar in your house. Sometimes your ankle hurts because you sprained your ankle. Sometimes you feel bad about yourself because you really shouldn’t have taken that candy from that baby, or that job at Fox News. Sometimes you feel conviction that George W. Bush was a terrible president because he was, in fact, a really terrible president. The point of all of this is not that you should instinctively doubt your experiences, but that you should practice instinctively spotting them and holding them out in front of you for examination.
In almost exactly the same way that I skeptically examine my experiences of physical or emotional pain, when I experience a conviction that a particular truth claim is correct, I try to figure out why my brain thinks that conviction is appropriate (“Why is the ‘conviction alarm’ going off?”). If I can’t recall the experiences or context that led to the conviction, but I have confidence in it, then, usually, instead of sifting around for justifications, I will simply say that it is my strong impression that a particular truth claim is correct. On the other hand, if I decide I don’t have confidence in the conviction, then I have an opportunity to reevaluate my position.
In light of this process, I have become incredibly fond of the phrase “It has been my impression that [assertion].” In discussions, this is my way of saying “I have a bias about [assertion] that I trust, but I can’t remember the specifics of how I formed that bias at the moment.” We so often think of bias as a negative thing, but for the most part I think of it as a useful form of data compression. Deciding to trust a bias isn’t necessarily bad, but it is a decision that should be made consciously instead of automatically wherever possible.
Be skeptical of your brain. See the process. See the “alarms” that prompt you to jump to certain conclusions before jumping to those conclusions. In the same way that I have to routinely ask “Why do I think I am physically damaged?” instead of “How am I physically damaged?” in response to pain, never ask “Why am I correct?” without first asking “Why do I think I am correct?”.

[Image: The phrase “Deity X is real.” being put under a magnifying glass, revealing the words “I am having an experience of conviction about an idea. This conviction may or may not indicate that this idea is correct. Do I have any other info to suggest that this conviction is or isn’t the result of this idea being correct?”]