I Was Told There Would Be No Math at This Debate (Statistics Part II)


Odds are, someone at some point has quoted Einstein’s definition of insanity to you: “Doing the same thing over and over again and expecting different results.” I love this quote for several reasons, the top two being that there is no evidence Einstein ever said it and it is not what insanity actually is. Yet somehow, by people saying it over and over in hope that it is true, it has become true in our conventional wisdom. Isn’t that the kind of paradox that is supposed to rip a hole in space-time and make the universe eat itself?

The dictionary definition of insanity is being deranged or unsound of mind enough to be divorced from reality and thus responsibility. Which nicely demonstrates the root of the problem with our current public discourse – it is Einstein Nutters versus Webster Loons.

Someone needs to impose some Nurse Ratched-level tough love on the world, so here I am, with math.

Back in January, I applied Bayesian reasoning (probabilistic thinking) to relationships, in order to get a better understanding of how illogical I often am in matters of the heart. It was fun! And I started to notice something quite interesting about the math, something that has been increasingly relevant as the clash between Science and Faith has escalated.

To recap: Bayesian reasoning is a process that involves estimating the likelihood of things, then reassessing that likelihood with each new piece of information. In short – and I know this is a dirty word – Bayesian thinkers evolve their ideas over time, getting ever closer to understanding.

If this sounds familiar, it is because probabilistic thinking is how we learn. Hey, that thing on the stove is shiny and pretty – I bet it will feel good too. Ouch. Nope. That did not feel pretty. Maybe pretty things don’t always feel good. Hey, that tiger over there is really beautiful. Maybe this time… and so on. Eventually, we get a feel for the odds (or die).

The formula representing this process – Bayes’ Theorem – is simply a mathematical expression of logic at work. It centers on three variables: our original level of certainty about something (x), the probability of this new info (Ouch) if that something is true (y), and the probability of this new info if that something is not true (z). That’s it!

When we reassess in the face of new information, we simply multiply our original level of certainty by the probability of the new info if our theory is true (xy), then divide that by all possibilities – our original certainty (x) times the probability of the info given truth (y) PLUS our original uncertainty (1-x) times the probability of the info if the theory is false (z). In math, that reads: (xy) / [(xy) + (1-x)z]
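For the code-inclined, that update rule fits in a few lines of Python. This is just a minimal sketch of the formula above; the function name is mine, and the variables x, y, z match the definitions in the text:

```python
def bayes_update(x, y, z):
    """Posterior probability that the theory is true.

    x: original level of certainty that the theory is true
    y: probability of the new info if the theory IS true
    z: probability of the new info if the theory is NOT true
    """
    return (x * y) / (x * y + (1 - x) * z)
```

Plug in any x, y, z between 0 and 1 and out comes your revised level of certainty.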

That’s the worst of it, I promise. What I find most interesting is how the impact of new information changes the more confident we are at the outset. Let me demonstrate with an example involving something totally uncontroversial right now: birth control.

Take two people, one who is super confident that I am a good little girl who keeps her knees closed (we’ll call him “My Dad”), and another who is willing to bet the farm that I am a total slut (“Rush Limbaugh”). They are both men, because involving a woman in a conversation about birth control would be ridiculous. Now, what happens to their respective outlooks when we introduce a new piece of information into their worlds: my use of birth control?

My Dad starts out with only 10% concern that I am a floozy (x=.1), while Rush is 90% sure I am sex crazed, since I am unmarried (x=.9). Variable y is the probability that I would use birth control if I am indeed a slut, which is clearly about 95% – who else would use birth control? Variable z is the probability that I would use it if I am not. Since women are either virgins or whores, that’s maybe 5%.

When we plug those probabilities into the formula, we see that My Dad, who is faced with contradictory information, skyrockets to a new 68% certainty that I am a daughter of questionable morals. As for Rush, he goes from 90% sure to 99% sure I am easy. Had we presented them with opposite information – like a purity ring on my finger – My Dad’s fears of parental failure would have dropped from 10% to .6%, and Rush would suddenly have to grapple with a mere 32% chance of my nymphomania.
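If you want to check my arithmetic, here is the whole paragraph as a quick Python sketch (same x, y, z as before; the function name is mine). Note that the purity ring is just the birth-control evidence with y and z swapped:

```python
def bayes_update(x, y, z):
    # x: prior certainty, y: P(evidence | true), z: P(evidence | false)
    return (x * y) / (x * y + (1 - x) * z)

# New info: birth control (y = P(evidence | slut), z = P(evidence | not))
print(round(bayes_update(0.1, 0.95, 0.05), 2))  # My Dad: 0.68
print(round(bayes_update(0.9, 0.95, 0.05), 2))  # Rush:   0.99

# Opposite info: purity ring (swap y and z)
print(round(bayes_update(0.1, 0.05, 0.95), 3))  # My Dad: 0.006
print(round(bayes_update(0.9, 0.05, 0.95), 2))  # Rush:   0.32
```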

Of course, my probabilities here are extreme, but the formula holds. The more confident we are in a theory at the outset, the more devastating contrary information becomes. As it should be! If we truly think an outcome is “inconceivable” and then it happens, we either have to admit that we were very likely wrong, or accept that the word “inconceivable” does not mean what we think it means.

But a funny thing happens when confidence becomes absolute certainty: new information loses all impact. When x=1 (we are 100% sure of something), the formula reduces to y/(y+0z), which equals 1 no matter what y and z are. When x=0 (“Inconceivable!”), the fraction becomes 0/(0+z), which is always zero.
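The certainty lockout is easy to verify numerically – a small sketch (function name mine) showing that no choice of y and z budges an absolute believer or an absolute denier:

```python
def bayes_update(x, y, z):
    # x: prior certainty, y: P(evidence | true), z: P(evidence | false)
    return (x * y) / (x * y + (1 - x) * z)

# No matter how damning or supportive the evidence...
for y, z in [(0.9, 0.1), (0.01, 0.99), (0.5, 0.5)]:
    assert bayes_update(1.0, y, z) == 1.0  # ...100% sure stays 100% sure
    assert bayes_update(0.0, y, z) == 0.0  # ...and "inconceivable" stays at zero
```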

In other words, there is no amount of evidence, experience, or new information that will change the mind of someone who has absolute certainty. Proving once and for all with math that there is no arguing with believers. (Or Beliebers – ugh.)

If you got this far, you are probably tired, because math is hard. Not in the sense that it requires a Y-chromosome (I’m looking at you, Larry Summers), but in the sense of hard work. Math is work; logic is work; being open minded requires the effort of reassessment. Faith, on the other hand, is easy. Not real faith, as defined in the dictionary (“belief in something for which there is no proof”), but the Faith demonstrated too often these days: belief despite all evidence of any kind.

You want the kicker? Thomas Bayes, from whom Bayesian reasoning gets its name, was an 18th-Century minister. I think it’s time for the universe to eat itself now.

Love, Damn Love, and Statistics


Okay, kids, let’s get down and nerdy for a little bit. Fair warning: there will be math in today’s session. I promise to make it fun and not scary (says the former captain of her high school math team), and I assure you there will be no test after. To every student past, present, and future who ever rolled his eyes to the heavens in math class and asked, “When am I ever going to use this in real life?” I answer with the eternal wisdom of Shania Twain: “From this moment on.”

I have been reading about Bayesian reasoning lately (in Nate Silver’s awesome book about prediction – and if that surprises you at all, I invite you to glance up at the title of this blog one more time), which is a school of probabilistic thinking employed by, among others, the most successful gamblers. According to Marvin Gaye (and confirmed by anyone who has ever been willing to eat at a Taco Bell), life is a gamble, so I naturally wondered how Bayesian reasoning might apply to areas more relevant to me than sports betting. Now, I consider myself a fairly logical and scientific individual – mostly because I am ridiculously logical and scientific – but what I came to realize about my approach to other humans kind of blew my mind.

A little background: Thomas Bayes was an 18th-century English minister who sought to resolve the paradox of a benevolent God and the existence of evil. See? I told you this would be fun. In very brief terms (my apologies to any theologians or philosophers out there), his answer revolved around the idea that the imperfections we see in the world are ours, not God's, because our knowledge is never complete. In other words, if we see too much evil in the world, it doesn’t mean that there isn’t overall good, but rather that we are not seeing the whole picture. I’ll save the larger debate about good versus evil for my next Lord of the Rings party, but what matters most is that Bayes introduced the concept that humans learn about the world through approximation rather than certainty – getting closer and closer to the truth with each new piece of the puzzle, but never knowing the absolute truth.

Bayes’s chief rival in those days was David Hume, a Scottish philosopher whom I’m going to give the benefit of the doubt and assume was drunk a lot, because he equated rational belief with certainty. Talk about depressing. Here’s a quick example to demonstrate the disparity: imagine you have moved to Los Angeles with no prior knowledge of its climate, history, or reputation, presumably because you have never seen a movie, read a book, watched TV, or met a Californian. This makes you either an alien or Amish, but I digress. Day 1: it is sunny. Aw, that’s nice. Day 2: sunny again. Cool. Perhaps this is a trend. Day 3: still more sun, and so on, and so on. The Bayesian thinker will grow more confident with each passing day that tomorrow’s weather is likely to be sunny – never fully reaching 100% certainty, mind you, but getting darn close. Even when, 300 days in, it suddenly rains (in case you haven’t heard, we’re experiencing an epic drought here in LA), the Bayesian will still be pretty sure the next day will be sunny. Those on Team Hume, on the other hand, reason that since we can’t be certain about tomorrow’s weather, it is equally rational to predict sun and rain. This sounds like a pretty high-stress way of life to me, and a recipe for an early ulcer. No wonder he drank.
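One classic way to formalize the sunny-day logic – my illustrative choice here, not anything specified in the anecdote – is Laplace's rule of succession: after s sunny days out of n, estimate the chance of sun tomorrow as (s+1)/(n+2). It starts at 50/50 with no data and climbs toward (but never reaches) certainty:

```python
def p_sun_tomorrow(sunny_days, total_days):
    # Laplace's rule of succession: (s + 1) / (n + 2).
    # Never exactly 0 or 1, so one rainy day can't break the model.
    return (sunny_days + 1) / (total_days + 2)

print(p_sun_tomorrow(0, 0))      # 0.5   -- day 1, no information yet
print(p_sun_tomorrow(3, 3))      # 0.8   -- three sunny days in
print(p_sun_tomorrow(300, 301))  # ~0.99 -- still confident after one rainy day
```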

Now we’re all caught up: Bayesian reasoning balances past knowledge with new information to make a probabilistic prediction about what is true, while those on Team Hume remain susceptible to the false positive – when the newest info is given disproportionate importance. I don’t know about you, but one of these seems like a far more productive way to interact with the world. (And if you think I mean the second way, then, well, you should run for Congress. You would do well there.) But when it comes to pursuing the opposite sex, or the same sex, or just sex in general, we tend to drop Bayes like a hot potato and make out with Hume every time.

First date went well? We’re in love! No word for the next two days? He hates me! Got asked out in a clear, direct way? Hooray, a grown up! Got cancelled on a few days later? What a flake; it’s over. In relationships, we tend to ignore the past entirely in favor of how we are feeling right now (it’s raining today and thus will never be sunny again), OR deny the probable with the excuse that we can’t know for certain (sure, he hasn’t called for three weeks, but maybe he was unexpectedly sent to space; YOU don’t know). Either way, there is going to be a lot of anxiety and crying over what is – to be all cold and scientific for a second – just one new piece of data.

To be Bayesian in life, we must consider not just the newest information, but also the weight of everything else we have learned up to this point. This is easier than it sounds, but brace yourself: here comes the math. In Bayes’s theorem, when new information comes in (an event occurs), we must consider three specific things before we can make a probabilistic guess at the truth. Let’s make our “new event” one to which we can all relate: he asked for your number (email, Twitter handle, whatever), and then didn’t call (text, write, tweet, you get the idea). According to an entire franchise, this means without question that he is just not that into you. But to really judge the truth of that, Bayes asks us to evaluate the following:

First is the probability that, if it IS true – he is NOT into you – he would ask for your number and then not call. This is variable Y. It seems weird that someone not into you would ask for your info, so our instinct might initially be to set this probability low. But then again, there is social convention to consider, as well as alcohol, the existence of sadists, and the fact that this is Los Angeles where people are always hedging their bets, plus there is the actual fact of his not calling…so let’s say there is a 75% chance of someone NOT into you still asking for your info but then not calling. Y=0.75

Next we have to consider the opposite – the probability of someone who IS into you asking for your number but then not calling. This is variable Z. As a female, I can come up with a million possible reasons for the lack of call: maybe he lost my info, or his phone, or maybe he’s scared, or hasn’t broken up with his current girlfriend yet, or maybe he works for the CIA… But I am going to let the rational part of my brain step in and acknowledge that, while possible, all of these taken together still amount to at most a 20% chance. Z=0.2

Finally, and most importantly, is what Bayesians refer to as the prior – the probability before the event, before knowing anything about this particular guy or situation, that any guy you meet would NOT be into you. This is variable X. This is also where self-esteem comes into play, so let’s start with a neutral 50%. X=0.5

Once you have assigned those probabilities, the math is pretty simple. The probability that he is, in fact, NOT into you is the fraction (XY) over [(XY) + Z(1-X)]. In plain English: the probability of ANY guy being not into you (X) times the probability that a guy who is not into you wouldn’t call (Y), divided by that same product (XY) plus the probability of any guy being INTO you (1-X) times the probability that a guy who IS into you wouldn’t call (Z). With our numbers, that is: (.5)(.75) / [(.5)(.75) + (.2)(.5)], which comes out to 0.79. So, yeah, there is an almost 80% chance he isn’t into you – but a far cry from the 100% chance that it feels like in the moment.
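Sketched in Python (function name mine, formula straight from the fraction above), the whole calculation is one line:

```python
def p_not_into_you(x, y, z):
    # x: prior P(any guy is NOT into you)
    # y: P(asks for number, then no call | NOT into you)
    # z: P(asks for number, then no call | IS into you)
    return (x * y) / (x * y + (1 - x) * z)

print(round(p_not_into_you(0.5, 0.75, 0.2), 2))  # 0.79
```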

What I love most about this, though, is how it shows, with math, the way our own personal outlook changes how we react to things (or should react to them). A person with very high self-esteem would probably have a low prior – say, a 10% chance that any random guy would NOT be into her. When X gets changed from 0.5 to 0.1, the lack of phone call results in only a 29.4% chance that he isn’t that into you. We become more willing to consider the event a false positive. But if we have a low opinion of ourselves – say, a prior of 90% (and if this is you, listen to some Katy Perry or go hug a Muppet or something, stat) – one missing phone call results in a 97% chance he isn’t into you. Devastating. So, if you find yourself reeling from every little dating hiccup, take a hard look in the mirror and re-evaluate your priors. Also, find a friend to tell you how awesome you are – and listen.
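Plugging different priors into the same formula (function name mine, numbers from the text) reproduces that swing from shrug to devastation:

```python
def p_not_into_you(x, y, z):
    # x: prior P(not into you), y: P(no call | not into), z: P(no call | into)
    return (x * y) / (x * y + (1 - x) * z)

for prior in (0.1, 0.5, 0.9):  # high self-esteem -> neutral -> low
    print(prior, round(p_not_into_you(prior, 0.75, 0.2), 3))
# 0.1 -> 0.294, 0.5 -> 0.789, 0.9 -> 0.971
```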

Besides protecting us from the imbalanced impact of a false positive, Bayesian reasoning also defends against being that sucker who believes Adam Sandler could actually be a secret agent, because the idea is that we re-assess our reality with each event. Instead of treating each time he doesn’t call as a new event to be reasoned and given the benefit of the doubt in isolation, we absorb them and allow them to affect our prior. One last time, let’s set our variables: we will keep Y at 75% and Z at 20%, but let’s go for a normal, healthy prior of 30% – a 30% chance any random person wouldn’t be into you. When he doesn’t call the first time, this calculates out to a 61.6% chance he isn’t into you. This becomes our new prior for this guy (rounding down to 60% for the sake of headaches). Now, when we go out and run into this guy again, and he is flirty and attentive again, and then doesn’t call or communicate again (you know who you are), we calculate the probability that he isn’t into us with an X-factor (not to be confused with an American Idol) of 60%. That results in an 85% likelihood of his disinterest. And if it happens a third time (again, you know who you are, and I am NOT amused), the prior is set at 85% and Bayes’s theorem calculates a 95.5% chance he is not that into you. Time to write the boy off, for sure!
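The whole three-strikes chain is just the same formula run in a loop, feeding each posterior back in as the next prior. A sketch (function name mine; I skip the between-step rounding done above for headache relief, so full precision lands the numbers a touch higher: about 0.62, 0.86, 0.96):

```python
def p_not_into_you(x, y, z):
    # x: prior P(not into you), y: P(no call | not into), z: P(no call | into)
    return (x * y) / (x * y + (1 - x) * z)

x = 0.30  # normal, healthy starting prior
for strike in range(1, 4):
    x = p_not_into_you(x, 0.75, 0.2)  # posterior becomes the next prior
    print(f"strike {strike}: {x:.3f}")
```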

Bayesian reasoning allows us to learn and grow from experience, rather than repeat the same mistakes by coming at the world from a place of willful ignorance. Every failed relationship has something to teach us about what we do or don’t want in the future, until ideally we know enough to get one right. That is exactly the idea behind Bayes’s probabilistic thinking – it is the path, through logic, to less and less wrongness. We can’t ever be 100% certain about what is in another person’s heart or mind. But if we are willing to apply a little patience and, yes, math, we can get to a level of confidence that allows us to trust the gamble and win big.