I think that developing biases is really about data compression. You have a limited amount of space to store information, so rather than save every single detail of why you think X is awesome and Y is terrible, you add points to a single awesome/terrible scale in your brain every time new details come up for X or Y. As a result, you have more brainspace for other things, and the same basic information in compact form. It’s efficient, but it makes backtracing to the origin of your biases more difficult.
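To make the compression idea concrete, here is a minimal sketch of the two strategies: keep every piece of evidence, or keep only a running score. The class names and structure are hypothetical illustrations of the model, not anything from cognitive science.

```python
class DetailedStore:
    """Keeps every piece of evidence: expensive, but fully traceable."""
    def __init__(self):
        self.events = []  # each event: (description, score)

    def observe(self, description, score):
        self.events.append((description, score))

    def opinion(self):
        # The overall judgment is the sum of all the evidence.
        return sum(score for _, score in self.events)

    def why(self):
        # The origins can still be reconstructed.
        return [description for description, _ in self.events]


class CompressedStore:
    """Keeps only a running score: cheap, but the origins are lost."""
    def __init__(self):
        self.score = 0

    def observe(self, description, score):
        self.score += score  # the description is discarded (lossy)

    def opinion(self):
        return self.score

    def why(self):
        return []  # nothing to backtrace: the supporting data is gone
```

Feed both stores the same observations and they arrive at the same opinion, but only the expensive one can explain where that opinion came from.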
This idea has implications for the frustrating arguments I have with religious people so often. If you spend a decent portion of your time arguing with religious people, or watching or reading such arguments, you (or at least I) tend to wonder why your arguments don’t have more of an impact. They’re rarely able to provide anything remotely substantial to counter the arguments against religion, yet this doesn’t seem to concern them as much as it seems like it should. I find myself at times confronted with people who will say something like, “I think you make some interesting points,” when they’ve been unable to counter any of a number of points that, to me, appear to demonstrate that their religion is both logically impossible and evidentially baseless. It’s as though I heard that someone I thought was innocent was seen leaving the scene of a crime, caught on video committing the crime, and left fingerprint and DNA evidence at the scene, and my response was, “Well, those are some fascinating points, but I still think he’s innocent.”
Conversations like this seem a lot more reasonable given the data compression model of biases:
Bias is about compressing data. Just as with data compression in computers, there is often significant data loss associated with mental data compression. The majority of the original data that contributed to a particular bias is lost with time. The information that contributed to the generation of the bias isn’t necessarily useful—if the bias itself is the important thing, the information on why the bias exists is just taking up space. If you need to escape from a predator, you need data on two things: you need to know that you should escape, and you need to know how to escape. If knowing why you need to escape is using up memory that you could be using on how to escape, then knowing why is worse than useless—it’s making it more challenging for you to actually escape.
If the above is true, then a strong bias held for a long period of time is, ideally, representative of the accumulation of an enormous amount of supporting data. However, a person may not have access to that supporting data—it may have been lost in compression. This creates a situation where it would be potentially adaptive for a person to trust that their biases are based on an enormous amount of real data even if they cannot produce any valid data to back them up. That is why you might find yourself unable to convince someone of your position even when facts and logic seem to be uniformly on your side.
When you’re arguing with a religious person and they keep throwing books and websites and blogs at you and telling you to read them, they’re acting on an assumption about how bias works: because they have a strong bias, there must at some point have been solid data that led to its creation. If they cannot produce that data, it must be a failure to locate it, rather than evidence that the data never existed in the first place.
Food for thought.