Odds are, someone at some point has quoted Einstein’s definition of insanity to you: “Doing the same thing over and over again and expecting different results.” I love this quote for several reasons, the top two being that there is no evidence Einstein ever said it and it is not what insanity actually is. Yet somehow, by people saying it over and over in the hope that it is true, it has become true in our conventional wisdom. Isn’t that the kind of paradox that is supposed to rip a hole in space-time and make the universe eat itself?
The dictionary definition of insanity is being so deranged or unsound of mind as to be divorced from reality, and thus from responsibility. Which nicely demonstrates the root of the problem with our current public discourse – it is Einstein Nutters versus Webster Loons.
Someone needs to impose some Nurse Ratched-level tough love on the world, so here I am, with math.
Back in January, I applied Bayesian reasoning (probabilistic thinking) to relationships, in order to get a better understanding of how illogical I often am in matters of the heart. It was fun! And I started to notice something quite interesting about the math, something that has been increasingly relevant as the clash between Science and Faith has escalated.
To recap: Bayesian reasoning is a process that involves estimating the likelihood of things, then reassessing that likelihood with each new piece of information. In short – and I know this is a dirty word – Bayesian thinkers evolve their ideas over time, getting ever closer to understanding.
If this sounds familiar, it is because probabilistic thinking is how we learn. Hey, that thing on the stove is shiny and pretty – I bet it will feel good too. Ouch. Nope. That did not feel pretty. Maybe pretty things don’t always feel good. Hey, that tiger over there is really beautiful. Maybe this time… and so on. Eventually, we get a feel for the odds (or die).
The formula representing this process – Bayes’ Theorem – is simply a mathematical expression of logic at work. It centers on three variables: our original level of certainty about something (x), the probability of this new info (Ouch) if that something is true (y), and the probability of this new info if that something is not true (z). That’s it!
When we reassess in the face of new information, we simply multiply our original level of certainty (x) by the probability of the new info if our theory is true (y), then divide that by all the possibilities – our original certainty (x) times the probability of the info if the theory is true (y), PLUS our original uncertainty (1-x) times the probability of the info if the theory is false (z). In math, that reads: (xy) / [(xy) + (1-x)z]
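For the code-inclined, the whole thing fits in a few lines of Python. This is just my own minimal sketch of the formula above (the name “posterior” is my label, not anything official):

    def posterior(x, y, z):
        # x = original certainty the theory is true
        # y = probability of the new info if the theory is true
        # z = probability of the new info if the theory is false
        return (x * y) / (x * y + (1 - x) * z)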
That’s the worst of it, I promise. What I find most interesting is how the impact of new information changes the more confident we are at the outset. Let me demonstrate with an example involving something totally uncontroversial right now: birth control.
Take two people, one who is super confident that I am a good little girl who keeps her knees closed (we’ll call him “My Dad”), and another who is willing to bet the farm that I am a total slut (“Rush Limbaugh”). They are both men, because involving a woman in a conversation about birth control would be ridiculous. Now, what happens to their respective outlooks when we introduce a new piece of information into their worlds: my use of birth control?
My Dad starts out with only 10% concern that I am a floozy (x=.1), while Rush is 90% sure I am sex crazed, since I am unmarried (x=.9). Variable y is the probability that I would use birth control if I am indeed a slut, which is clearly about 95% – who else would use birth control? Variable z is the probability that I would use it if I am not. Since women are either virgins or whores, that’s maybe 5%.
When we plug those probabilities into the formula, we see that My Dad, who is faced with contradictory information, skyrockets to a new 68% certainty that I am a daughter of questionable morals. As for Rush, he goes from 90% sure to 99% sure I am easy. Had we presented them with opposite information – like a purity ring on my finger – My Dad’s fears of parental failure would have dropped from 10% to .6%, and Rush would suddenly have to grapple with a mere 32% chance of my nymphomania.
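If you would rather check my arithmetic than take my word for it, here is that same sketch with the numbers above plugged in. The purity-ring cases simply swap y and z, since a ring is evidence pointing the opposite way – that is my reading of “opposite information,” and it reproduces the percentages I quoted:

    def posterior(x, y, z):
        # x = original certainty, y = P(info if theory true), z = P(info if theory false)
        return (x * y) / (x * y + (1 - x) * z)

    print(posterior(0.1, 0.95, 0.05))  # My Dad + birth control: ~0.68
    print(posterior(0.9, 0.95, 0.05))  # Rush + birth control:   ~0.99
    print(posterior(0.1, 0.05, 0.95))  # My Dad + purity ring:   ~0.006
    print(posterior(0.9, 0.05, 0.95))  # Rush + purity ring:     ~0.32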
Of course, my probabilities here are extreme, but the formula holds. The more confident we are in a theory at the outset, the more devastating contrary information becomes. As it should be! If we truly think an outcome is “inconceivable” and then it happens, we either have to admit that we were very likely wrong, or accept that the word “inconceivable” does not mean what we think it means.
But a funny thing happens when confidence becomes absolute certainty: new information loses all impact. When x=1 (we are 100% sure of something), the formula reduces to y/(y+0z), which equals 1 no matter what y and z are. When x=0 (“Inconceivable!”), the fraction becomes 0/(0+z), which is always zero.
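You can watch that trap snap shut in the same little sketch – make the evidence as lopsided as you like, and absolute certainty never budges:

    def posterior(x, y, z):
        return (x * y) / (x * y + (1 - x) * z)

    print(posterior(1.0, 0.01, 0.99))  # 100% sure going in: still 1.0, despite damning evidence
    print(posterior(0.0, 0.99, 0.01))  # "Inconceivable!" going in: still 0.0, despite overwhelming evidence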
In other words, there is no amount of evidence, experience, or new information that will change the mind of someone who has absolute certainty. Proving once and for all with math that there is no arguing with believers. (Or Beliebers – ugh.)
If you got this far, you are probably tired, because math is hard. Not in the sense that it requires a Y-chromosome (I’m looking at you, Larry Summers), but in the sense of hard work. Math is work; logic is work; being open-minded requires the effort of reassessment. Faith, on the other hand, is easy. Not real faith, as defined in the dictionary (“belief in something for which there is no proof”), but the Faith demonstrated too often these days: belief despite all evidence of any kind.
You want the kicker? Thomas Bayes, from whom Bayesian reasoning gets its name, was an 18th-century minister. I think it’s time for the universe to eat itself now.