Posts tagged ‘Why We Reason’

Voting Time And Happy Birthday

First things first. Happy Birthday to Why We Reason! Believe it or not, it’s one year old. I want to thank everyone for the visits, tweets, Facebook shares, comments, emails, etc. Seriously: It’s been great.

Second, the wonderful website 3 Quarks Daily, which curates written content from around the web, is currently holding a best-of-the-web science writing contest. I have a piece nominated! Longtime readers of Why We Reason might remember it from October 2011. It’s titled “Does Pinker’s ‘Better Angels’ Undermine Religious Morality?” In brief, I look at Steven Pinker’s latest book in the context of religious morality.

If you have a second, please click on this link and vote for it!

Finally, as you may know, I’m now blogging full time at BigThink.com. My blog over there is called “Moments of Genius” and it’s about the psychology of creativity. Feel free to visit. Now that I’m blogging full time for BT I’ve essentially stopped writing for Why We Reason, instead using it as a medium to promote my stuff. As a result of this shift, I’ll soon be launching SamMcNerney.com, a website that will curate all of my writing, including old WWR stuff, BT pieces, ScientificAmerican.com articles, CreativityPost.com articles and current material. The website should be up and running soon.

Until then, I’ll see you elsewhere on the web.

Sam

Odysseus And The Science Of Willpower

If you paid attention in high school you might remember the story of Odysseus and the Sirens. After the Trojan War ended, Odysseus went on a protracted sea voyage back to Ithaca. At one point he realized that his ship would pass by the island of Sirenum scopuli, where the enchanting Sirens sang melodies to seduce the “weak” human mind. In his genius, Odysseus had all his men fill their ears with beeswax and tie him to the mast. It worked. They sailed on by and the rest is history.

Ever since Adam and Eve ate the apple, humans have had trouble with self-control. But if there is one lesson to learn from the psychological research it is that Odysseus had the best strategy: it’s the people who avoid tempting situations altogether – the ones who take a different street so they don’t have to walk past the ice cream shop – who exhibit the highest level of self-control.

This was originally demonstrated by the psychologist Walter Mischel, who is famous for creating what is known as the Marshmallow Experiment. Back in the 1960s, Mischel invited four-year-old children into a tiny room, containing a table and a chair, and offered them a deal: They could have one marshmallow now or they could wait a few minutes and have two. Most kids took up Mischel’s deal only to give in to their impulses 30 seconds later. But some did not, and Mischel’s important discovery is that the kids who waited didn’t have more willpower, they “simply found a way to keep themselves from thinking about the treat, directing their gaze away from the yummy marshmallow.” They didn’t tie themselves to the proverbial mast exactly, but they did find a way to distract themselves from the situation.

A similar experiment recently published by Wilhelm Hofmann, Roy Baumeister and Kathleen Vohs demonstrates complementary results. The researchers equipped a couple hundred participants with BlackBerrys for one week. Seven times a day the participants were beeped and asked to report whether they were experiencing a desire at that moment or had in the preceding 30 minutes. They also noted how strong the desire was, whether it caused internal conflict, whether they attempted to resist it, and how successful they were. Here’s the BPS Research Digest explaining the results:

The participants were experiencing a desire on about half the times they were beeped. Most often (28 per cent) this was hunger. Other common urges were related to: sleep (10 per cent), thirst (9 per cent), media use (8 per cent), social contact (7 per cent), sex (5 per cent), and coffee (3 per cent). About half of these desires were described as causing internal conflict, and an attempt was made to actively resist about 40 per cent of them. Desires that caused conflict were more likely to prompt an attempt at active self-constraint. Such resistance was often effective. In the absence of resistance, 70 per cent of desires were consummated; with resistance this fell to 17 per cent…

People who scored highly on a measure of trait self-control had just as many desires, but they were less likely to report experiencing internal conflict; their desires were generally weaker; and they attempted to resist them less often. These findings are revealing. It’s not that people with high self-control have saintly willpower, it seems. Rather, they seem to avoid putting themselves in situations in which they are exposed to problematic temptations. “The result is not a desire-free life,” the researchers said. “Au contraire, the result appears to be that they mainly have desires that they can satisfy.”

Like Mischel, the researchers found that everyone possesses inner demons; it is the people who are smart enough to avoid the situations that trigger those demons who command the highest levels of self-control.

The exciting part of the science of self-control and willpower is that it is now being understood at the neurological level. In 2004 the neuroimager Jonathan Cohen and the psychologist Samuel McClure teamed up with the economists David Laibson and George Loewenstein to study how the brain reacts to short-term and long-term rewards. They had participants lie in a scanner and choose between a small reward – five dollars – which they would receive in the near future, and a big reward – forty dollars – which they would receive several weeks later. How did the brain handle the two options? Did it handle them differently?

They found that choices that dangled the possibility of immediate gratification lit up the striatum and medial orbital cortex, regions associated with the automatic and emotional brain. All choices lit up the dorsolateral prefrontal cortex, a part of the frontal lobes associated with more rational and deliberate thinking. This finding isn’t too surprising. Ever since Phineas Gage’s freak accident, which sent a spike through his orbital and ventromedial cortex, largely destroying his frontal lobes, scientists have known that self-control is closely tied to the frontmost parts of the frontal lobe.

It was likely these parts of the brain that were most active when Odysseus gave his orders. It makes sense, then, that he was known by the epithet Odysseus the Cunning; he must have had large and fine-tuned frontal lobes! Although it is probably impossible to resist every delicious temptation out there, we can, like Odysseus, find ways to counter our weakness of will by avoiding certain situations altogether.

What Philosophers Got Right

Philosophers got a lot of things wrong. The world is not, it turns out, made of water; the problem of evil is still a problem; and scientists have yet to find anything resembling a Platonic essence. When it comes to understanding people and the world, the great metaphysicians missed the mark.

But there is one thing they got right.

Philosophers were masters of self-doubt. Socrates was famous for his self-criticisms; rarely was he satisfied with an answer, and never did he claim to have knowledge – a sure sign of ignorance if you asked him. Descartes began his meditations by identifying things that “can be called into doubt,” which turned out to be everything save his own existence. And then there is George Berkeley, who went as far as denying the existence of matter itself.

The rest of us are the opposite. We think we’re right most of the time; we are overconfident; we rarely challenge our opinions. Modern society, where sound bites are the norm and a decent intellectual conversation demands too much patience for most, isn’t helping. But it is, I think, our very nature that is the root of the problem.

For example, in one experiment recounted by Gary Marcus, subjects read either a report showing that good firefighting correlated with risk-taking ability or a report showing that bad firefighting did. Then each group was subdivided – some people reflected on what they read while others attempted difficult geometrical puzzles. The reports, of course, were bogus. And when the subjects were asked what they really thought of firefighting, it turned out that “people in the subgroups who got a chance to reflect (and create their own explanations) continued to believe whatever they had initially read.” Unlike philosophers, they started with a conclusion and then went looking for reasons to support it.

In another study, done by Ziva Kunda, participants were brought into a room and told that they would be playing a game. Before the game started, they watched someone else play – someone who, they were told, would later be either their teammate or their opponent. Kunda rigged the study: the other player was actually a confederate who played the game perfectly, answering every question correctly. Kunda found that participants lined up to play against the confederate were dismissive and tended to attribute his accuracy to luck, whereas participants lined up to play with him praised his “skills.” Both groups saw the same performance yet came to opposite conclusions. Clearly, we scrutinize much less when things go our way.

We are also overconfident about just about everything. As one author explains, “95 percent of professors report that they are above average teachers, 96 percent of college students say that they have above average social skills…[and] 19% of Americans say that they are in the top 10% of earners.” We humans take great pride in maintaining our nearly infallible positions at the center of our subjective universes, and this is easy to illustrate empirically. Psychological findings are often difficult to replicate, but not overconfidence, which is one of the most reliable findings in the lab.

How do we escape our self-inflicted epistemological traps?

Philosophy is about listening and reading carefully, logically analyzing arguments and being critical of your opinions and the opinions of others. It forces people to question presuppositions about the world that they normally take for granted, and it requires a certain sensitivity to nuance. People will only be able to break out of their epistemological self-delusion once they realize that a life without these habits is a mistake.

Philosophers got a lot of things wrong, but they understood better than anyone else that the unexamined life is not worth living.

Fooled By Randomness

In my last post I discussed the myth of the SI jinx. Here is a brief recap. Athletes and teams are usually on the cover of SI for extraordinary performances. Almost always, an extraordinary performance is followed by an ordinary performance. So the SI jinx is easily explained by athletes and teams regressing back to their normal performance level. Contrary to popular belief, therefore, Sports Illustrated is not actually causing an athlete or team to play worse. However, people have a difficult time understanding this, largely because of their inability to perceive and understand randomness. In this post, I want to build off of this point by illustrating just how bad we are with randomness. Let’s start with iPods.

When Apple first sold the iPod shuffle, users complained that it was not random enough. Though the engineers at Apple had programmed the iPod shuffle to be random, people were convinced that it was not. The problem was that “the randomness didn’t appear random, since some songs were occasionally repeated.” I took to the Apple blogosphere to see if this was true, and in the first Google hit I found the following two posts:

User 1: There are 2800 songs in my ipod, I found that the Shuffle Songs function is not random enough, it always picks up the songs which I had played in the last one or two days.

User 2: It is random, which is why it’s not paying attention to whether or not you’ve played the songs lately.

User 2 is right: the iPod shuffle is random, which makes it entirely possible for a song to be played two days in a row, or even twice in a row. User 1’s mistake is a common one: people perceive streaks and patterns as evidence that a sequence is not random, even though random sequences inherently contain streaks and patterns.
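
You can check this with a few lines of code. Below is a minimal sketch – the 2,800-song library comes from User 1’s post, but the 50-song listening session is my own assumption for illustration – showing how often a purely random picker repeats a song.

```python
import random

# Toy model of a "random" music player: each pick is uniform over the
# library (a fresh shuffle each session, in effect). Library size is
# from User 1's post; the 50-song session length is an assumption.
N_SONGS, SESSION, TRIALS = 2800, 50, 100_000
random.seed(42)

sessions_with_repeat = 0
for _ in range(TRIALS):
    picks = [random.randrange(N_SONGS) for _ in range(SESSION)]
    if len(set(picks)) < SESSION:  # some song came up twice
        sessions_with_repeat += 1

print(f"Sessions with a repeated song: {sessions_with_repeat / TRIALS:.1%}")
```

Roughly a third of the simulated sessions repeat a song – the birthday paradox at work. Genuine randomness produces exactly the déjà vu that User 1 mistook for non-randomness.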

Our tendency to misinterpret randomness is exemplified by the gambler’s fallacy, which describes our intuition’s habit of believing that the odds of something with a fixed probability are influenced by recent occurrences. For example, we think that the more times a coin lands on heads, the better its chances of landing on tails next. In reality, though, if a coin landed on heads one hundred times in a row, it would still have a 50/50 chance of landing on heads the 101st time.
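
If that claim feels wrong to you, it is easy to test. Here is a small sketch – the streak length of five is an arbitrary choice of mine – that scans a long random sequence for runs of heads and records what happens on the very next flip.

```python
import random

# Generate a long sequence of fair coin flips, find every run of five
# heads, and check how often the *next* flip is heads. The gambler's
# fallacy predicts well below 0.5; independence predicts exactly 0.5.
random.seed(0)
flips = [random.choice("HT") for _ in range(1_000_000)]

next_after_streak = [flips[i + 5] for i in range(len(flips) - 5)
                     if flips[i:i + 5] == ["H"] * 5]

p_heads = next_after_streak.count("H") / len(next_after_streak)
print(f"P(heads after five heads) = {p_heads:.3f}")  # prints ~0.500
```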

We make the same mistake when we watch sports. In 1985 Cornell psychologist Thomas Gilovich published a paper that “investigated the origin and the validity of common beliefs regarding the ‘hot hand’ and ‘streak shooting’ in the game of basketball.” His study was motivated by the belief, shared by fans, coaches, and players, that a player’s chance of hitting a shot is greater following a hit than following a miss. To see if basketball players actually “heat up,” Gilovich collected shooting stats from the Philadelphia 76ers’ 1980-81 season. He found that the chance a basketball player has of making a shot is actually unrelated to the outcome of his previous shot. In his words:

Contrary to the expectations expressed by our sample of fans, players were not more likely to make a shot after making their last one, two, or three shots than after missing their last one, two, or three shots. In fact, there was a slight tendency for players to shoot better after missing their last shot… the data flatly contradicts the notion that “success breeds success” in basketball and that hits tend to follow hits and misses tend to follow misses (1991, p. 12).

Gilovich’s conclusion comes as a surprise to most people. For some reason, our intuition tells us that a basketball player’s field goal percentage is influenced by his previous shots. This is why we want a player who is shooting well to continue to shoot, and vice versa.
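
To get a feel for why this result surprises us, you can run the same conditional comparison on a simulated shooter who, by construction, has no hot hand. This is only a sketch – the 46% field-goal percentage is a number I picked for illustration, not the 76ers’ actual figure.

```python
import random

# A shooter whose every shot is an independent 46% coin flip -- no
# "heat" by construction. We then compute FG% after a make vs. after
# a miss, the comparison at the heart of Gilovich's analysis.
random.seed(1)
P_HIT, N_SHOTS = 0.46, 1_000_000
shots = [random.random() < P_HIT for _ in range(N_SHOTS)]

after_hit = [nxt for prev, nxt in zip(shots, shots[1:]) if prev]
after_miss = [nxt for prev, nxt in zip(shots, shots[1:]) if not prev]

print(f"FG% after a make: {sum(after_hit) / len(after_hit):.3f}")
print(f"FG% after a miss: {sum(after_miss) / len(after_miss):.3f}")
```

Both numbers come out around .460, yet the simulated season is full of runs of makes and misses that look, to the eye, like hot and cold hands.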

Similar results have been found with baseball players and baseball teams. Michigan State University psychologist Gordon Wood analyzed the outcomes of all 1988 Major League Baseball games (26 teams, 160 games each) and found that the probability of an MLB team winning after a win, or losing after a loss, was fifty percent. Likewise, Indiana University statistician Christian Albright found the same with batters. He states that “The behavior of all players examined… does not differ significantly from what would be expected under a model of randomness.” Like the outcome of a basketball shot, the outcomes of an MLB game and an at-bat are unaffected by past performance.

None of these studies denies that streaks exist; they are saying that our intuition does a poor job of understanding and perceiving randomness – we mistakenly “see” patterns amongst randomness.

There are powers and perils to this cognitive bias. If you bet your life savings on a falsely perceived streak in the stock market, you could easily lose it all. Likewise in gambling: if you have gotten lucky on a slot machine, you will want to keep going, thinking that you have found a “hot” machine (in the end, of course, you will most likely have less than you started with). On the other hand, our tendency to see order amongst random-chance events is an incredibly useful survival technique. Think what it would be like if you perceived the world as a series of random events; imagine that headache. With this in mind (not the headache), it seems awfully useful that we can “see” patterns that aren’t actually there.

  • Thanks Nassim Taleb for the title of the post.
  • The iPod shuffle discussion can be found here.

The Myth of the SI Jinx

When the 2003 regular season ended, things were looking good for the Chicago Cubs. They had won the National League Central – edging out the Houston Astros by one game – and were lined up to play the Atlanta Braves in the first round of the playoffs. They also had two of the best pitchers in the game: Kerry Wood, who had recorded a career-high 266 strikeouts, and Mark Prior, who went 18-6 with a 2.43 ERA. Earlier in the season, Sports Illustrated dubbed them “Chicago Heat,” and they indeed appeared unbeatable. On top of that, the Cubs had a solid lineup that consisted of power hitters Sammy Sosa and Moises Alou, veteran speedster Kenny Lofton, and recently acquired young star Aramis Ramirez. Needless to say, Cubs fans were thinking that this was finally their year.

Then, just as the stars were aligning, everything collapsed. With a 3-0 lead in the 8th inning of the 6th game of the National League Championship Series versus the Florida Marlins, Mark Prior surrendered five straight runs (three earned) and was replaced by reliever Kyle Farnsworth, who gave up three more; in the blink of an eye, the Cubs went from a 3-0 lead to an 8-3 loss – Wrigley was speechless. Fortunately, it was a best-of-seven series and there was still one more game to play. Kerry Wood was starting, and he and Prior hadn’t lost back-to-back games the whole season. However, in typical Cubs fashion, Wood pitched poorly, the offense didn’t score enough runs, and they lost.

Next came the finger pointing, and it seemed like the actual performance of the Cubs was last on the list. At the top, there was the curse of the Billy Goat, Steve Bartman, and of course, the classic SI jinx. For those who don’t know, the SI jinx is an urban legend holding that a team or player’s performance is made worse by being on the cover of Sports Illustrated. Below is a clip from the ESPN show “Mike & Mike In The Morning,” which illustrates the SI jinx debate. Listen closely to Mike Greenberg (the one on the left with the blue collared shirt) explain why he thinks it is a real thing (begins around 1:00).

I hope that you are smart enough to realize that Greenberg’s pseudo-psychological story is completely wrong; the classic “I read a book” example just doesn’t cut it. But I want to tell you why he is wrong, and for that we need to rethink what it means for something to be random.

Leonard Mlodinow’s The Drunkard’s Walk tells the story of how psychologist Daniel Kahneman was first motivated to understand the cognitive biases behind people’s tendency to misunderstand randomness. According to Mlodinow, Kahneman, then a junior psychology professor at Hebrew University, was lecturing to a group of Israeli air force flight instructors on the importance of rewarding positive behavior and not punishing mistakes (a well-established finding in psychology). During the lecture, however, Kahneman was met with sharp objections. Many instructors believed that their praises were almost always followed by worse performances, and vice versa. But Kahneman knew that in any series of random events, an extraordinary event is most likely to be followed, due purely to chance, by a more ordinary one.

Any especially good or especially poor performance was… mostly a matter of luck. So if a pilot made an exceptionally good landing – one far above his normal level of performance – then the odds would be good that he would perform closer to his norm – that is, worse – the next day. And if his instructor had praised him, it would appear that the praise had done no good. But if a pilot made an exceptionally bad landing – running the plane off the end of the runway and into the vat of corn chowder in the base cafeteria – then the odds would be good that the next day he would perform closer to his norm – that is, better. And if his instructor had a habit of screaming “you clumsy ape” when a student performed poorly, it would appear that his criticism did some good. In this way an apparent pattern would emerge: student performs well, praise does no good; student performs poorly, instructor compares student to lower primate at high volume, student improves. The instructors in Kahneman’s class had concluded from such experiences that their screaming was a powerful educational tool. In reality it made no difference at all (2008, p. 7-9).

This explains why the instructors are wrong, and illustrates what psychologists call the regression fallacy. The regression fallacy refers to the tendency for people to “fail to recognize statistical regression when it occurs, and instead explain the observed phenomena with superfluous and often complicated causal theories.”

As you may have guessed, the SI jinx is a prime example of the regression fallacy. Though people like Mike Greenberg tend to think that SI actually causes a poor performance (i.e., “everyone talking about the curse gets in your head and you start squeezing the bat tighter”), the regression fallacy illustrates his mistake. The reality is that athletes are usually on the cover of SI for an extraordinary performance. And like a superb landing by the pilots, an extraordinary performance by an athlete is usually followed by an ordinary performance. So it is not that athletes do worse because they were on the cover of SI; it is that their performance regresses back to its norm. This also explains why Michael Jordan and Wayne Gretzky, two athletes who have been on the cover of SI many times, are not affected by “the jinx.” Relative to most athletes, their ordinary performances are extraordinary.
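
Regression to the mean is easy to demonstrate with a toy simulation. In the sketch below – all the numbers (500 athletes, skill and luck drawn from normal distributions) are assumptions chosen for illustration – each season’s performance is fixed skill plus random luck, and the “cover athlete” is simply the season’s top performer.

```python
import random

# Performance = fixed skill + one season's luck. The athlete with the
# best season-1 performance "makes the cover"; we then check how often
# that athlete declines in season 2. No jinx is built into the model.
random.seed(7)
N_ATHLETES, N_TRIALS = 500, 2_000

declines = 0
for _ in range(N_TRIALS):
    skills = [random.gauss(0, 1) for _ in range(N_ATHLETES)]
    season1 = [s + random.gauss(0, 1) for s in skills]
    season2 = [s + random.gauss(0, 1) for s in skills]
    cover = max(range(N_ATHLETES), key=season1.__getitem__)
    if season2[cover] < season1[cover]:
        declines += 1

print(f"Cover athletes who do worse the next season: {declines / N_TRIALS:.0%}")
```

The simulated “jinx” strikes nearly every time, purely because an extreme performance is mostly luck, and luck doesn’t carry over.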

Returning to the Cubs: Prior and Wood were never the same after the 2003 season. From 2004 to 2006, they went 30-32 and combined for an ERA over 4.00. Prior didn’t play in the majors after 2006, and Wood, who is still active, has seen only moderate success since. This suggests they were average pitchers with one great season, not great pitchers with a handful of average seasons. As they say, “You can have a lucky day, sure, but you can’t have a lucky career.”

So we can safely conclude that the SI jinx is in fact a myth. In addition, we can also understand why it exists in the first place – the regression fallacy. So the next time you see your favorite player or team on the cover of SI, relax, take a deep breath, and realize that SI isn’t actually causing anything.

Rethinking the Scientific Method

“Scientific discovery and scientific knowledge have been achieved only by those who have gone in pursuit of it without any practical purpose whatsoever in view.”

-Max Planck

On a warm June night in 1878, Russian chemist Constantin Fahlberg sat down for his evening meal. Unbeknownst to Constantin, his dinner would change history forever. As he began to eat a bread roll, he noticed something unusual: it was sweet. This was confusing, since he had never tasted bread so sweet before. How was this possible? It was only a matter of time before Constantin realized what was going on.

At the time, Constantin was working for the H.W. Perot Import Firm in a laboratory run by Ira Remsen, a professor of chemistry at Johns Hopkins University. An immigrant of German descent, Constantin had spent most of his life in Europe studying chemistry. He was fluent in many languages and had a clear passion for science. But the night of his fateful sweet-bread encounter would ensure him a place in future chemistry textbooks and a Wikipedia page, and prevent him from being forgotten as just another chemist.

After he tasted the bread, he ran to his lab where he spent the next several hours examining everything around his work station – beakers, vials, etc. – until he discovered the source of the sweetness. It turned out to be “an overboiled beaker in which o-sulfobenzoic acid had reacted with phosphorus (V) chloride and ammonia, producing benzoic sulfinide.” Constantin had previously synthesized benzoic sulfinide, but he certainly did not think to taste it – why would he? But serendipity set in that night, and the first artificial sweetener was born: Saccharin.

You now know benzoic sulfinide from those little pink “Sweet N’ Low” sugar packets, which can be found at your local diner, Denny’s, or IHOP. While Constantin’s story is remarkable, it is not as uncommon as you might think. Like many other scientific breakthroughs in history, Saccharin was discovered by accident. Take penicillin, for example. Alexander Fleming stumbled upon it in 1928 when he noticed mold growing on an old experiment as he was cleaning up his laboratory (perhaps more astonishing is the fact that it took Fleming over a decade to convince health officials of its potential). Viagra, the microwave, chocolate chip cookies – the list goes on and on. Some of the most beneficial discoveries have been mere happenstance.

If you have had any interest in science then perhaps you’ve heard this kind of story before: a scientist does y but gets x, and x turns out to be exceptionally useful. The history of accidental discoveries is a popular science topic and has been the subject of several books. However, have you ever thought about the implications of all this serendipity for the scientific method?

Think back to your middle school days when you were taught that all scientific inquiry begins with a prediction, which is then tested in an experiment, analyzed, and turned into a conclusion. Makes sense, right? But when we consider that most discoveries weren’t in the initial plans – Saccharin, for example – the scientific method seems to misunderstand how science works. A more accurate understanding recognizes that randomness and chance play a large role in scientific discovery. Put differently, within its logical and structured step-by-step procedure, the scientific method doesn’t leave room for the Saccharins and Viagras of history. This brings me to my first point: the fundamental problem with the scientific method is that it assumes you can predict the future. Let me explain.

Imagine that you are a cave man or woman living somewhere in Europe around 10,000 BC when your local chief tasks you with predicting the major scientific discoveries of the next several thousand years. Without a doubt, the invention of the wheel should be one of the first things to make the list. But here is the catch: the ability to predict the invention of the wheel presupposes knowledge of the wheel. In other words, if you could predict the wheel then you might as well build it right then and there, since you already know what it is. As Nassim Taleb says in The Black Swan, “if you know about the discovery you are about to make in the future, then you have already made it.”

This is the fallacy of the scientific method: it doesn’t realize that implicit in a prediction about the future, whether about the wheel or an artificial sweetener, is knowledge of the future.

My other problem with the scientific method relates to how we think of the past. When we learn about the scientific method we get the idea that discoveries were made because it was followed. This is a mistake, and it brings me to my second point: our explanations of how scientists made their findings are warped by our knowing that they actually happened. These explanations are problematic because they are what psychologists call post hoc explanations.

Post hoc explanations are descriptions of the past that are based on knowing what has already happened in the present. Everybody does this, and it is certainly not specific to science. In one study, several women were asked to choose their favorite pair of nylon stockings from a group of twelve. Then, after they had made their selections, researchers asked them to explain their choices. Among the explanations, texture, feel, and color were the most popular. However, all of the stockings were in fact identical. The women were being sincere – they truly believed that what they were saying made sense – but, like our textbook stories of scientific discovery, they simply made up reasons in the present for something that happened in the past – and that’s the problem.

The more plausible explanation is that we don’t know how or why a scientific discovery is made; we know that it was made, but that does not mean we know how it happened. Our stories are all after-the-fact constructions that make it seem as if something like the scientific method was followed. But the reality of science is that there is an extreme amount of chance and randomness that is involved, and because we have such a propensity to decorate the past with phony cause and effect stories, it never gets reported. 

So when we consider that many scientific discoveries are accidental, and that we have a strong tendency to make up reasons to explain how and why they happened, it seems that we are justified in calling B.S. on the scientific method. But the purpose here is not to put down your 8th grade science teacher; it is to convince you that the most important part of science is not necessarily to focus on an end goal and all the steps that precede it, but rather, as Max Planck would have suggested, to ignore the end goal and just start experimenting, testing, and most importantly, doing.

  • I took the nylon stocking experiment from Nassim Taleb’s The Black Swan (page 65).
  • The quote from the 3rd paragraph was taken from here. 
  • More on post-hoc explanations can be found here.

Why We Reason

Last year Hugo Mercier and Dan Sperber published a paper in Behavioral and Brain Sciences that was recently featured in the New York Times and Newsweek. It has since spurred a lot of discussion in the cognitive science blogosphere among psychologists and science writers alike. What’s all the hype about?

For thousands of years human rationality was seen as a means to truth, knowledge, and better decision making. However, Mercier and Sperber are saying something different: reason is meant to persuade others and win arguments.

Our hypothesis is that the function of reasoning is argumentative. It is to devise and evaluate arguments intended to persuade…. reasoning does exactly what can be expected of an argumentative device: Look for arguments that support a given conclusion, and, ceteris paribus, favor conclusions for which arguments can be found (2010).

Though Mercier and Sperber’s theory sounds radical, it is not entirely original. In the Western tradition, similar theories of human rationality date back at least to ancient Greece and the Sophists.

Akin to modern day lawyers, the Sophists believed that reason was a tool used for convincing others of certain opinions regardless of them being true or not. They were paid to teach young Athenians rhetoric so they could have, as Plato says in Gorgias, “the ability to use the spoken word to persuade the jurors in the courts, the members of the Council, the citizens attending the Assembly – in short, to win over any and every form of public meeting.”

So why is Mercier and Sperber’s paper seen as groundbreaking if its central idea is thousands of years old? Unlike the ancient Greeks, Mercier and Sperber have a heap of psychological data to support their claims. At the heart of this data is what psychologists call confirmation bias. As the name indicates, confirmation bias is the tendency for people to favor information that confirms their ideologies, regardless of whether it is true. It explains why Democrats would prefer to listen to Bill Clinton over Ronald Reagan, why proponents of gun control are not NRA members, and why individuals who are pro-choice only listen to or read sources that are also pro-choice. In addition, confirmation bias greatly distorts our self-perceptions; namely, it describes why “95% of professors report that they are above average teachers, 96% of college students say that they have above average social skills… [and why] 19% of Americans say that they are in the top 10% of earners.”

If we are to think of rationality as having to do with knowledge or truth, as Socrates, Plato, and Descartes did, confirmation bias is a huge problem. If rationality really were about discovering objective truths, then confirmation bias should have been weeded out by natural selection; imagine how smart we would be if we actually listened to opposing opinions and considered how they might be better than ours. Put differently, if the goal of reasoning were really to improve our decisions and beliefs and to find the truth, there would be no reason for confirmation bias to exist.

Under the Sophist paradigm, however, confirmation bias makes much more sense, as do similar cognitive hiccups such as hindsight bias, anchoring, representativeness, the narrative fallacy, and many more. These biases, which began to appear in the psychological literature of the 1960s, provide “evidence, [which] shows that reasoning often leads to epistemic distortions and poor decisions.” And it is from this point that Mercier and Sperber have built their ideas. Instead of thinking of faulty cognition as irrational, as many have, we can now see that these biases are tools that are actually helpful. In a word, with as many opinions as there are people, our argument-oriented reasoning does a fine job of making us seem credible and legitimate. In this light, thank God for confirmation bias!

Of course, there is a downside to confirmation bias and to our rationality being oriented towards winning arguments. It causes us to get so married to some ideas – WMDs in Iraq and Doomsday events, for example – that we end up hurting ourselves in the long run. But at the end of the day, I think it is a good thing that our reasoning is so self-confirming. Without confirmation bias, we wouldn’t have a sense of right or wrong, which seems to be a necessary component of good things like laws against murder.

Finally, if you were looking to Mercier and Sperber’s thesis to improve your reasoning, you would be missing the point. Inherent in their argument is the idea that our rationality will forever be self-confirming and unconcerned with truth and knowledge. And for better or for worse, this is something we all have to deal with.

Brains, Comedy, and Steve Martin

In my last post I discussed the neuroscience of music. I concluded that renowned musicians share one thing in common: they understand the importance of patterns, expectations, and prediction in music. I encourage you to read it if you have not already.

This post takes the ideas of the last – patterns, expectations, and prediction – and applies them to comedy. Comedy is made possible by creating and fulfilling expectations while considering the importance of delivery, context, and timing. Consider this joke, taken from a recent article on Discovermagazine.com.

A couple of New Jersey hunters are out in the woods when one of them falls to the ground. He doesn’t seem to be breathing; his eyes are rolled back in his head. The other guy whips out his cell phone and calls the emergency service. He gasps to the operator: “My friend is dead! What can I do?” The operator says: “Take it easy. I can help. First, let’s make sure he’s dead.” There is silence, then a shot is heard. The guy’s voice comes back on the line. He says, “OK, now what?”

Why is this funny? It starts by establishing a familiar pattern; in this case, the standard beginning-middle-punch line structure that shapes many jokes. Then it creates an expectation; implicit in the statement “First, let’s make sure he’s dead” is the expectation that the hunter is going to do something reasonable to see if his friend is dead. Finally, the comedy is delivered when the answer deviates from the expectation – we expected x, but we got y; i.e., we never thought the surviving hunter would shoot his friend just to make sure he was dead. Most importantly, the entire joke still maintains the beginning-middle-punch line pattern.

The best jokes have the most unexpected punch lines but maintain the pattern. Neuroscientist Vilayanur S. Ramachandran explains this in his 1998 book Phantoms in the Brain. 

Despite all their surface diversity, most jokes and funny incidents have the following logical structure: Typically you lead the listener along a garden path of expectation, slowly building up tension. At the very end, you introduce an unexpected twist that entails a complete reinterpretation of all the preceding data, and moreover, it’s critical that the new interpretation, though wholly unexpected, makes as much “sense” of the entire set of facts as did the originally “expected” interpretation (Ramachandran, p. 204).

From Richard Pryor to Chris Rock, comedians rely on what Ramachandran is describing. It is their ability to create and relieve tension, and deliver the unexpected while maintaining the pattern, that makes them so funny.

Steve Martin is one of my favorite comedians and someone who understands this well. If you are familiar with Martin’s standup you will know his unique style. Like Pryor and Rock, Martin did not change the medium per se; he simply altered the expectations that defined the medium. For example, here is an opening bit from one of Martin’s routines: “I’d like to open up with sort of a funny comedy bit. This has really been a big one for me… I’m sure most of you will recognize the title when I mention it; it’s the Nose on Microphone routine.” Martin would then lean in, place his nose on the microphone for a few seconds, step back, take a few bows, and move on to his next joke. The laugh “came not then, but only after they realized I had already moved on to the next bit.”

Martin’s anticlimactic style ended up defining his standup. But it did not come to him in the blink of an eye; rather, it was the product of years of trial and error. He describes this in his autobiography Born Standing Up:

With conventional joke telling, there’s a moment when the comedian delivers the punch line, and the audience knows it’s the punch line, and their response ranges from polite to uproarious… These notions stayed with me for months, until they formed an idea that revolutionized my comic direction: what if there were no punch lines… What if I created tension and never released it… Theoretically, it would have to come out sometime. But if I kept denying them the formality of a punch line, the audience would eventually pick their own place to laugh, essentially out of desperation. This type of laugh seemed stronger to me, as they would be laughing at something they chose, rather than being told exactly when to laugh… My goal was to make the audience laugh but leave them unable to describe what it was that had made them laugh (Martin, p. 111-113).

Note how similar Martin’s remarks are to Ramachandran’s. They are talking about the relationship between patterns and expectations, and understand that something which is funny denies the initial expectation and challenges the observer to understand the new pattern. Like a Pryor or Rock joke, Martin’s Microphone bit takes the observer down a familiar path to start, but leaves her at an unfamiliar destination. Yet, she is not entirely lost for she still exists in the context of the joke. In other words, she knows that she is supposed to laugh – and she does – but she doesn’t know why.

Below is a video that illustrates an extreme example of this. Here we see Kurt Braunohler and Kristen Schaal perform a bit at the 2008 Melbourne comedy festival. After a brief opening dialog, Braunohler and Schaal begin an impressive staging that seems to defy comedic logic. However, underneath all the repetition, Braunohler and Schaal remain committed to the same principles that Martin and all successful jokes are committed to.

This is funny for the same reason the New Jersey joke is funny – it introduces a pattern, creates an expectation, and breaks the expectation while keeping to the pattern. But the genius of Braunohler and Schaal is that they break the expectation by not breaking it. In other words, you don’t expect them to keep doing the “Kristin Schaal is a horse” dance, but they do, and that’s why it’s funny. Like Martin’s joke, the punch line is that there isn’t a punch line. Again, the audience is left in hysterics even though they couldn’t have reasonably said what was so funny. And this is one of the secrets of comedy – breaking an expectation in such an unexpected way that the audience can only respond by laughing.

Brains, Music, and The Bad Plus

A lot has been written about the neuroscience of music lately, including these two articles in the NYTimes, a blog post by Jonah Lehrer, books by Oliver Sacks and Daniel Levitin, and a paper in Nature Neuroscience. What are they all saying? To get a sense, meet The Bad Plus, a Minneapolis trio known for a unique brand of rock-infused avant-garde jazz. The Bad Plus have been around for just over a decade and have made a name for themselves by covering famous hits from the 80s and 90s – everything from Blondie’s ‘Heart of Glass’ to Nirvana’s ‘Smells Like Teen Spirit’ – as well as producing original material.

If you have ever listened to The Bad Plus you will know that their music can be a bit challenging. Instead of the standard verse-chorus, 4/4 time structure that constitutes most pop songs, Bad Plus songs are much more chaotic. Often switching from one unusual time signature to the next (5/16 to 3/4 to 10/8, for example), speeding up and slowing down the tempo, and rarely repeating previous motifs, their songs could be classified as lawless. However, beneath all the disorder, The Bad Plus nonetheless maintain a deep and steady structure that binds all of their songs together – and this is what makes them successful musicians.

It is their ability to combine what we expect with what we don’t expect that separates them from most. Sometimes this is frustrating, and other times it is confusing, but it is ultimately enjoyable. Why? It all goes back to patterns, expectations, and predictions – three things that brains love. When it comes to music, brains are focused on identifying patterns, forming expectations, and then predicting where the song will go based on the patterns and expectations they have identified and formed. Brains like it when they do this successfully and hate it when they don’t. This is one reason singers like Britney Spears and Justin Timberlake are so popular – their songs are structured by patterns and expectations that are very easy to predict. When we listen to ‘Baby One More Time’ or ‘SexyBack’ we know exactly what we are going to get.

However, groundbreaking musicians like The Bad Plus know that it is ultimately more enjoyable to hear a song violate an expectation than fulfill one – this is why many of their songs replace the expected with the unexpected. Unlike a pop song, a Bad Plus song challenges your brain to figure out the new pattern. Many times this is difficult, and this is probably why the average listener does not give The Bad Plus a chance. But if you give your brain enough time to figure out the new pattern, it will reward you. It’s like doing a difficult math problem: at first it sucks, but it feels great to figure it out, especially if you worked hard to get there.

All of this is explained by neuroscientist Daniel J. Levitin in his 2006 book This Is Your Brain On Music:

As music unfolds, the brain constantly updates its estimates of when new beats will occur, and takes satisfaction in matching a mental beat with a real-in-the-world one, and takes delight when a skillful musician violates that expectation in an interesting way – a sort of musical joke that we’re all in on. Music breathes, speeds up, and slows down just as the real world does, and our cerebellum finds pleasure in adjusting itself to stay synchronized (Levitin, p. 191).

One of my favorite Bad Plus songs, which exemplifies what Levitin is talking about, is a cover of the Academy Award-winning theme song, “Titles,” from the 1981 hit Chariots of Fire. Below is a video of The Bad Plus performing this song live, and there are three things that you should pay attention to. First, notice how the song begins with what you are familiar with – the Chariots of Fire theme. Second, listen at the 1:53 mark to how The Bad Plus deviate from what you are familiar with, and notice that you do not find this particularly appealing. Finally, if you were patient enough to listen through the entire middle section where the band seems to get buried in its own sound, pay close attention to what happens at 5:39. Amidst a smattering of bass notes and drum crashes, pianist Ethan Iverson slowly brings back the familiar Chariots of Fire motif. The song climaxes at 6:22 when all three members strike the same chord and deliver “Titles” as you know it.

If your brain is like most, it will love this moment. As Levitin explained, it is an instance where the brain rewards itself for being able to understand a pattern that had been previously violated – this is why Bad Plus songs are ultimately rewarding. They establish a pattern we know, they deviate from this pattern, and then they reward us by bringing the pattern back to its original state (for another good example of this listen to their cover of the Radiohead song “Karma Police”). I challenge you to listen to “Titles” several times – give your brain a chance to “get to know its patterns” so that it can successfully predict what comes next. Listened to enough times, I suspect that the 6:22 mark will eventually become extremely enjoyable.

How a musician understands and uses patterns, expectations, and prediction largely defines the quality of his or her music. Musicians who only deliver what is expected may be popular, but they certainly won’t go down in history among the best. It is people like Dylan, who took folk electric, or the Ramones, who introduced punk, or Jay-Z, Kanye West, and Girl Talk, who sampled other songs to create new ones, who will be remembered. Though it took audiences some time to get used to these artists, their music became celebrated once brains adapted to the new patterns, e.g., electric folk, punk, or sampling.

I look forward to even more literature that addresses the relationship between brains and music. If the last few years are indicative, we should see many more insights.
