
Posts tagged ‘Daniel Kahneman’

The Irrationality of Irrationality: The Paradox of Popular Psychology

Here’s my latest on ScientificAmerican.com 

In 1996, Lyle Brenner, Derek Koehler and Amos Tversky conducted a study involving students from San Jose State University and Stanford University. The researchers were interested in how people jump to conclusions based on limited information. Previous work by Tversky, Daniel Kahneman and other psychologists found that people are “radically insensitive to both the quantity and quality of information that gives rise to impressions and intuitions,” so the researchers knew, of course, that we humans don’t do a particularly good job of weighing the pros and cons. But to what degree? Just how bad are we at assessing all the facts?

To find out, Brenner and his team exposed the students to legal scenarios. In one, the plaintiff, a union organizer named Mr. Thompson, visits a drug store on a routine union visit. The store manager informs him that, according to the union’s contract with the drug store, union representatives cannot speak with employees on the floor. After a brief deliberation, the manager calls the police and Mr. Thompson is handcuffed for trespassing. The charges were later dropped, but Mr. Thompson is now suing the store for false arrest.

All participants received this background information. Some then heard from only one of the two sides’ lawyers: the lawyer for the union organizer framed the arrest as an attempt to intimidate, while the lawyer for the store argued that the conversation that took place in the store was disruptive. Another group of participants – essentially a mock jury – heard both sides.

The key part of the experiment was that the participants were fully aware of the setup; they knew whether they were hearing one side or the entire story. But this didn’t stop the subjects who heard one-sided evidence from being more confident and more biased in their judgments than those who heard both sides. That is, even when people knew they were hearing only one side of the story, they jumped to conclusions based on it.

The good news is that Brenner, Koehler and Tversky found that simply prompting participants to consider the other side’s story reduced their bias – instructions to consider the missing information were a manipulation in a later study – but it certainly did not eliminate it. Their study shows that people are not only willing to jump to conclusions after hearing only one side’s story, but that even when they are aware that additional information exists that might suggest a different conclusion, they are still surprisingly likely to do so. The scientists conclude on a somewhat pessimistic note: “People do not compensate sufficiently for missing information even when it is painfully obvious that the information available to them is incomplete.”

In Brenner’s study, participants were dealing with a limited universe of information – the facts of the case and of the two sides’ arguments. But in reality – especially in the Internet era – people have access to a limitless amount of information that they could consider. As a result, we rely on rules of thumb, or heuristics, to take in information and make decisions. These mental shortcuts are necessary because they lessen the cognitive load and help us organize the world – we would be overwhelmed if we were truly rational.

This is one of the reasons we humans love narratives; they summarize the important information in a form that’s familiar and easy to digest. It’s much easier to understand events in the world as instances of good versus evil, or as any one of the seven basic story types. As Daniel Kahneman explains, “[we] build the best possible story from the information available… and if it is a good story, [we] believe it.” The implication here is that it’s how good the story is, not necessarily its accuracy, that matters.

But narratives are also irrational because they sacrifice the whole story for the one side of it that conforms to our worldview. Relying on them often leads to inaccuracies and stereotypes. This is what the participants in Brenner’s study illustrate: people who take in narratives are often blinded to the whole story – rarely do we ask, “What more would I need to know before I can have a more informed and complete opinion?”

The last several years have seen many popular psychology books that touch on this line of research. There’s Ori and Rom Brafman’s Sway, Dan Ariely’s Predictably Irrational and, naturally, Daniel Kahneman’s Thinking, Fast and Slow. If you could sum up the popular literature on cognitive biases and our so-called irrationalities, it would go something like this: we require only a small amount of information, often a single factoid, to confidently form conclusions, generate new narratives, and take on worldviews that seem objective but are almost entirely subjective and inaccurate.

The shortcomings of our rationality have been thoroughly exposed to the lay audience. But there’s a peculiar inconsistency about this trend. People seem to absorb these books uncritically, ironically falling prey to some of the very biases they should be on the lookout for: incomplete information and seductive stories. That is, when people learn about how we irrationally jump to conclusions, they form new opinions about how the brain works from the little information they have just acquired. They jump to conclusions about how the brain jumps to conclusions, and they fit their newfound knowledge into a larger story that romantically and naively describes personal enlightenment.

Tyler Cowen made a similar point in a TED lecture a few months ago. He explained it this way:

There’s the Nudge book, the Sway book, the Blink book… [they are] all about the ways in which we screw up. And there are so many ways, but what I find interesting is that none of these books identify what, to me, is the single, central, most important way we screw up, and that is, we tell ourselves too many stories, or we are too easily seduced by stories. And why don’t these books tell us that? It’s because the books themselves are all about stories. The more of these books you read, you’re learning about some of your biases, but you’re making some of your other biases essentially worse. So the books themselves are part of your cognitive bias.

The crux of the problem, as Cowen points out, is that it’s nearly impossible to understand irrationalities without taking advantage of them. And, paradoxically, we rely on stories to understand why they can be harmful.

To be sure, there’s an important difference between the bias that comes from hearing one side of an argument and (most) narratives. A corrective like “consider the other side” is unlikely to work for narratives because it’s not always clear what the opposite would even be. So it’s useful to avoid jumping to conclusions not only by questioning narratives (after all, just about everything is plausibly a narrative, so avoiding them can be pretty overwhelming), but by exposing yourself to multiple narratives and trying to integrate them as well as you can.

In the beginning of the recently released book The Righteous Mind, social psychologist Jonathan Haidt explains how some books (his included) make a case for one thing (in Haidt’s case, morality) being the key to understanding everything. Haidt’s point is that you shouldn’t read his book and jump to overarching conclusions about human nature. Instead, he encourages readers to integrate his point of view (that morality is the most important thing to consider) with other perspectives. I think this is a good strategy for overcoming a narrow-minded view of human cognition.

It’s natural for us to reduce the complexity of our rationality into convenient bite-sized ideas. As the trader turned epistemologist Nassim Taleb says: “We humans, facing limits of knowledge, and things we do not observe, the unseen and the unknown, resolve the tension by squeezing life and the world into crisp commoditized ideas.” But readers of popular psychology books on rationality must recognize that there’s a lot they don’t know, and they must beware of how seductive stories are. The popular literature on cognitive biases is enlightening, but let’s not be irrational about irrationality; exposure to X is not knowledge and control of X. Reading about cognitive biases, after all, does not free anybody from their nasty epistemological pitfalls.

Moving forward, my suggestion is to remember the lesson from Brenner, Koehler and Tversky: they reduced conclusion jumping by getting people to consider the other information at their disposal. So let’s remember that the next book on rationality isn’t a tell-all – it’s merely another piece of the puzzle. This same approach could also help correct the problem of being too swayed by narratives – there are, after all, always multiple sides to a story.

Ultimately, we need to remember what philosophers get right. Listen and read carefully; logically analyze arguments; try to avoid jumping to conclusions; don’t rely on stories too much. The Greek playwright Euripides was right: Question everything, learn something, answer nothing.

The Illusion of Understanding Success

In December of 1993, J.K. Rowling was living in poverty, depressed, and at times, contemplating suicide. She resided in a small apartment in Edinburgh, Scotland with her only daughter. A recent divorce made her a single mom. Reflecting on the situation many years later, Rowling described herself as, “the biggest failure I knew.”

By 1995 she had finished the first manuscript of Harry Potter and the Philosopher’s Stone, a story about a young wizard she had begun writing years before. The Christopher Little Literary Agency, a small firm of literary agents based in Fulham, agreed to represent Rowling. The manuscript found its way to the chairman of Bloomsbury, who handed it down to his eight-year-old daughter Alice Newton. She read it and immediately demanded more; like so many children and adults after her, she was hooked. Scholastic Inc. bought the rights to Harry Potter in the United States in the spring of 1997 for $105,000. The rest is history.

Rowling’s story, which includes financial and emotional hardship followed by success and popularity, is the rags-to-riches narrative in a nutshell. It’s the story of an ordinary person, dismissed by the world, who emerges out of adversity onto the center stage. It’s the sword in the stone, it’s the ugly duckling; it’s a story that gets played out time and time again throughout history. Kafka captures it nicely in The Castle: “Though for the moment K. was wretched and looked down on, yet in an almost unimaginable and distant future he would excel everybody.”

The reality of Rowling’s story, however, is just that: it’s a story. It’s a sequence of facts strung together by an artificial narrative. It didn’t necessarily have to have a happy ending and it certainly was not predictable back in 1993. Rowling did not follow a predetermined path. Her life before Harry Potter was complex and convoluted, and, most importantly, luck played a significant role in her eventual success. These variables are always forgotten in hindsight.

Yet we humans, facing limits of knowledge, to paraphrase one author, resolve the myriad unknown events that defined Rowling’s life before Harry Potter by squeezing them into crisp commoditized ideas and packaging them to fit a heartwarming narrative. We have, in other words, a limited ability to look at sequences of facts without weaving an explanation into them.

The same problem occurs in science. It’s always the story of invention, the tale of discovery or the history of innovation. These narratives manifest themselves in the form of a quest: A scientist is stuck on a problem, he or she is surrounded by doubt, but after years of hard work an insight prevails that changes the world forever.

In The Seven Basic Plots, Christopher Booker summarizes The Quest, which sounds as much like Darwin on the Beagle, Magellan aboard the Trinidad or Marco Polo traveling across Asia as it does Frodo traversing Middle-earth. As Booker explains:

Far away, we learn, there is some priceless goal, worth any effort to achieve… From the moment the hero learns of this prize, the need to set out on the long hazardous journey to reach it becomes the most important thing to him in the world. Whatever perils and diversion lie in wait on the way, the story is shaped by that one overriding imperative; and the story remains unresolved until the objective has been finally, triumphantly secured.

Unfortunately, Frodo’s triumph at Mount Doom is more real than natural selection to some. Kahneman is right: “It is easier to construct a coherent story when you know little, when there are fewer pieces to fit into the puzzle.”

Our propensity to storytell is also fueled by the survivorship bias, which describes our tendency to believe that successful people possess some special property. For Steve Jobs it was his assertive leadership and vision, for Bob Dylan it was his poetry and willingness to challenge the norm, and for Rowling it was her creativity and imagination. But these attributes are post-hoc explanations; there are plenty of people of Dylan’s musical and lyrical caliber who will never match his success. Likewise, many creative geniuses of Rowling’s stature will never sell tens of millions of books. Luck, at the end of the day, might be the best explanation.

When trying to answer the question of what makes people successful, the best response might be that it’s impossible to know. Hard work, intelligence and good genes certainly play a role. But the reality of Rowling’s story is that it was highly unlikely. Twelve publishing houses rejected the book. In the years leading up to Harry Potter, any number of things could have prevented Scholastic from purchasing the rights to her book. If it weren’t for little Alice Newton, the book might never have seen the light of day.

The true test of an explanation, as Kahneman also says, is whether it would have made the event predictable in advance. No story of Rowling’s unlikely success will meet that test, because no story can include all the events that would have caused a different outcome. That being said, we will continue to explain Rowling’s story as if it were inevitable and predictable. We will always be obsessed with happy endings.

The takeaway is twofold: first, be suspicious of narratives, especially if they are charming; second, be humble about what you think it takes to be successful. There is good reason to believe that what you think is an illusion perpetuated by a narrative where everybody lives happily ever after.

The Irrationality Of Irrationality

Reason has fallen on hard times. After decades of research psychologists have spoken: we humans are led by our emotions, we rarely (if ever) decide optimally and we would be better off if we just went with our guts. Our moral deliberations and intuitions are mere post-hoc rationalizations; classical economic models are a joke; Hume was right, we are the slaves of our passions. We should give up and just let the emotional horse do all the work.

Maybe. But sometimes it seems like the other way around. For every book that explores the power of the unconscious, another explains how predictably irrational we are when we think without thinking; our intuitions deceive us and we are fooled by randomness, but sometimes it is better to trust our instincts. Indeed, if a Martian briefly compared the subtitles of the most popular psychology books of the last decade, he would quickly be confused. Reading the introductions wouldn’t help him either; keeping track of the number of straw men would be difficult for our celestial friend. So, he might ask, have humans always been torn over whether intelligence is deliberate or automatic?

When it comes to thinking things through or going with your gut there is a straightforward answer: It depends on the situation and the person. I would also add a few caveats. Expert intuition cannot be trusted in the absence of stable regularities in the environment, as Kahneman argues in his latest book, and it seems like everyone is equally irrational when it comes to economic decisions. Metacognition, in addition, is a good idea but seems impossible to consistently execute.

However, unlike our Martian friend, who tries hard to understand what our books say about our brains, the reason-intuition debate is largely irrelevant for us Earthlings. Yes, many have a sincere interest in understanding the brain better. But while the lay reader might improve his decision-making a tad and be able to explain the difference between the prefrontal cortex and the amygdala, the real reason millions have read these books is that they are very good.

The Gladwells, Haidts and Kahnemans of the world know how to captivate and entertain the reader because, like any great author, they prey on our propensity to be seduced by narratives. By using agents or systems to explain certain cognitive capacities, they make the brain much easier to understand. However, positioning the latest psychology or neuroscience findings in terms of a story with characters tends to encourage a naïve understanding of the so-called most complex entity in the known universe. The authors know this, of course. Kahneman repeatedly makes it clear that “system 1” and “system 2” are literary devices, not real parts of the brain. But I can’t help but wonder, as Tyler Cowen did, whether deploying these devices makes the books themselves part of our cognitive biases.

The brain is also easily persuaded by small amounts of information. If one could sum up judgment and decision-making research, it would go something like this: we only require a tiny piece of information to confidently form a conclusion and take on a new worldview. Kahneman’s acronym WYSIATI – what you see is all there is – captures this well. This is precisely what happens the moment readers finish the latest book on intuition or irrationality; they remember the sound bite and understand the brain only through it. Whereas the hypothetical Martian remains confused, the rest of us humans happily walk out of our local Barnes & Noble or, even worse, finish watching the latest TED talk with the deluded feeling that now we’ve “got it.”

Many times, to be sure, this process is a great thing. Reading and watching highbrow lectures is hugely beneficial intellectually speaking. But let’s not forget that exposure to X is not knowledge of X. The brain is messy; let’s embrace that view, not a subtitle.

The Brain as a Kluge: How We Experience and Remember

It’s impossible to think about the past clearly. When it comes to evaluating your life, your brain is easily tricked into thinking one thing or another: How satisfied are you with your life? How happy are you? How well-off are you? Well, it depends.

In one study, psychologists asked college students two questions: “How happy are you with your life in general?” and “How many dates did you have last month?” When the questions were asked in this order, the researchers found almost no correlation between the answers. However, reversing the order led the students to evaluate their lives in terms of their romantic lives: people who had been on a lot of dates rated themselves as much happier than those who had not. As brain scientist Gary Marcus explains, “this may not surprise you, but it ought to, because it highlights just how malleable our beliefs really are. Even our own internal sense of self can be influenced by what we happen to focus on at a given moment.”

Along similar lines, Norbert Schwarz and his colleagues demonstrated that good moods influence how people evaluate their lives. They asked subjects to complete a questionnaire on life satisfaction. Beforehand, however, Schwarz asked them to photocopy a sheet of paper. (This was the key part of the study.) For half of the subjects, Schwarz placed a dime on the photocopier. He and his colleagues found that, as one author puts it, “the minor lucky incident caused a marked improvement in subjects’ reported satisfaction with their life as a whole.”

Why are our life evaluations so easily swayed? Consider a study by Daniel Kahneman and Donald Redelmeier. They tracked colonoscopy patients to see if there was a difference between how much pain they experienced and how much pain they remembered experiencing. As Kahneman explains, “the experience of each patient varied considerably during the procedure, which lasted 8 minutes for patient A and 24 for patient B… [and] the general agreement [is] that patient B had the worse time.” Indeed, the minute-by-minute pain reports showed that during the procedure patient B suffered more than patient A. And yet patient A remembered the colonoscopy as much worse than patient B did. In other words, patient B suffered more but remembered the procedure more favorably.

The inconsistency is explained by the peak-end rule, which describes our tendency to evaluate experiences by their most intense moment and by how they end, and by duration neglect, which describes our insensitivity to how long an experience lasts. Patient A remembered the procedure as terrible, even though he suffered less, because it ended terribly, whereas patient B remembered it as milder, even though he suffered more, because his second half was much less intense.
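To see how the two biases fit together, here is a minimal sketch in Python. The pain scores are invented, not Redelmeier and Kahneman’s data, and the remembered score uses the common formulation of the peak-end rule (the average of the worst moment and the final moment), with duration ignored entirely:

```python
# Minimal sketch with made-up minute-by-minute pain scores (0-10), loosely
# modeled on the two patients: A's short procedure ends at its worst moment,
# B's longer procedure tails off gently.
patient_a = [1, 3, 5, 6, 7, 8, 8, 8]                       # 8 minutes, ends high
patient_b = [1, 4, 6, 8, 8, 7, 6, 5, 5, 4, 4, 3,
             3, 2, 2, 1, 1, 1, 1, 1, 1, 1, 1, 1]           # 24 minutes, ends low

def experienced_pain(scores):
    """Total pain endured over the whole procedure."""
    return sum(scores)

def remembered_pain(scores):
    """Peak-end rule: memory averages the worst moment and the final moment,
    ignoring how long the experience lasted (duration neglect)."""
    return (max(scores) + scores[-1]) / 2

for name, scores in [("A", patient_a), ("B", patient_b)]:
    print(f"Patient {name}: experienced {experienced_pain(scores)}, "
          f"remembered {remembered_pain(scores):.1f}")
# Patient B endures more total pain but remembers the procedure as milder.
```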

(The peak-end rule and duration neglect help explain why people tend to remember failed relationships or marriages only on bad terms – like a good movie with a bad ending, it’s just so hard to judge a relationship without thinking about how it ended. Perhaps the most dramatic example of these two biases is childbirth – extremely painful for most of its duration, but a joyful ending seems to dispel the pain from memory.)

Kahneman and Redelmeier’s ultimate point is that when it comes to understanding happiness, psychologists must distinguish between the “remembering self” and the “experiencing self.” The remembering self is the one that “keeps score”; it answers questions like “How satisfied are you with your life?” or “How is your health?” It is a storyteller, and its primary job is to tell the story of your life. The experiencing self answers questions like “How was the concert last night?” or “How was your birthday party?” It reports your current mood and how you are in the present. This distinction brings me back to my original question: Why are our life evaluations so easily swayed?

When it comes to assessing our lives, the remembering self and the experiencing self are not on the same page. The remembering self, for example, will report a satisfied life, but it is also influenced by the experiencing self, which is in turn swayed by everyday occurrences like, say, finding a dime on a photocopier or answering a friend who asks about your romantic life. Daniel Kahneman puts it this way: “Odd as it may seem, I am my remembering self, and the experiencing self, who does my living, is like a stranger to me.” In other words, our brain is a kluge – an ill-assorted collection of parts assembled to fulfill a particular purpose – and thinking about the past with an objective lens is not a top priority.


The Evil of Irrelevant Information: Anchoring & The Conjunction Fallacy

I want to think that people are rational consumers, but it’s hard to ignore the overwhelming evidence that says they’re not. You don’t even have to read the academic literature to realize this, just go to the grocery store! As you walk down the aisle and see a delicious bag of chips with “50 percent less calories,” ask yourself this: would you have bought it if it said “with 50 percent as many calories?” Or how about the medication over in the pharmaceutical section that works “99 percent of the time,” would you buy it if it was “ineffective 1 percent of the time”? In both cases the answer is probably not.

We succumb to these silly things because our brains are easily fooled by numerical manipulations. As psychologist Barry Schwartz explains, “when we see outdoor gas grills on the market for $8,000, it seems quite reasonable to buy one for $1,200. When a wristwatch that is no more accurate than the one you can buy for $50 sells for $20,000, it seems reasonable to buy one for $2,000.” Whether you like it or not, your decisions are easily swayed.

Let’s look at some more examples.

Imagine you’re at an auction bidding on a bottle of Côtes du Rhône, a bottle of Hermitage Jaboulet La Chapelle, a cordless keyboard and mouse, a design book and a one-pound box of Belgian chocolates. Before the auction starts, the auctioneer asks you to jot down the last two digits of your social security number, indicate whether you would be willing to pay this amount for any of the products, and then state the maximum amount you would be willing to bid for each product. When Dan Ariely, Drazen Prelec and George Loewenstein ran this auction with a group of MIT undergrads, they found that the social security numbers greatly influenced the students’ bids. In Ariely’s words:

The top 20 percent (in terms of the value of their s.s), for instance, had an average of $56 for the cordless keyboard; the bottom 20 percent bid an average of $16. In the end, we could see that students with social security numbers ending in the upper 20 percent placed bids that were 216 to 346 percent higher than those of the students with social security numbers ending in the lowest 20 percent.

Ariely’s experiment illustrates a cognitive bias known as anchoring: our inability to ignore irrelevant information and assess things at face value. The classic anchoring experiment comes from Daniel Kahneman and Amos Tversky. Two groups were asked whether the percentage of African countries in the United Nations was higher or lower than a given value: 10 percent for one group and 65 percent for the other. They found that “the median estimates of the percentage of African countries in the United Nations were 25 and 45 for groups that received 10 and 65, respectively, as starting points.” Put differently, those who received 10 percent as a starting point estimated the percentage of African countries in the UN to be 25, whereas those who received 65 percent estimated it to be 45. As one author puts it, “the brain isn’t good at disregarding facts, even when it knows those facts are useless.”

Along the same lines is the “conjunction fallacy,” which highlights our propensity to misunderstand probability. Here is a simple example. Which description of my friend Brent is more likely: 1) he is the CEO of Bank of America, or 2) he is the CEO of Bank of America and his annual salary is at least $1,000? Though your intuition strongly favors option two, option one is more likely because it has fewer contingencies. In other words, though it is very likely that he makes more than $1,000 a year as the CEO of Bank of America, the probability of option one is still higher. As UCLA psychologist Dean Buonomano says, “the probability of any event A and any other event B occurring together has to be less likely than (or equal to) the probability of event A by itself.”
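To make that rule concrete, here is a minimal sketch in Python with made-up base rates (the Brent example is hypothetical, so any numbers would do): however the probabilities are chosen, the joint event can never come out more likely than the single event.

```python
import random

# A = "Brent is the CEO of Bank of America"
# B = "his annual salary is at least $1,000"
# Whatever base rates we assume, "A and B" can never beat A alone.
random.seed(42)
trials = 1_000_000
count_a = count_a_and_b = 0

for _ in range(trials):
    a = random.random() < 0.01                     # assumed P(A)
    b = random.random() < (0.999 if a else 0.30)   # assumed P(B | A) and P(B | not A)
    if a:
        count_a += 1
        if b:
            count_a_and_b += 1

print(f"P(A)       ~ {count_a / trials:.4f}")
print(f"P(A and B) ~ {count_a_and_b / trials:.4f}")   # always <= P(A)
```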

The point I am driving at is that we are easily manipulated by irrelevant information. Why? There is a fairly simple explanation.

For most of human history our species survived in a simple world with no TV, internet, fast food, birth control pills, or economic meltdowns. There was just one thing – survival. That is what our psychologies evolved for. Unfortunately, there is a significant mismatch between the world our psychologies were built for and the world as it is today. Food illustrates this disconnect. In a hunter-gatherer society where food was scarce, it would have been smart to load up on as many fatty and salty foods as possible. Now it would be stupid, or at least bad for your health, to eat every day at your local McDonald’s, which relentlessly takes advantage of our primitive appetites.

Here’s the kicker: the same is true for anchoring and the conjunction fallacy. In hunter-gatherer societies, humans didn’t have to decide between differently priced gas grills or bags of chips, or figure out probabilities. They just had to understand how to get food, build shelter and live long enough to pass on their genes. Because of this, our poor judgment is “not a reflection of the fact that [our brains were] poorly designed, but… that [they were] designed for a time and place very different from the world we now inhabit,” as Buonomano says.

Unfortunately, this means that unless natural selection speeds up, we won’t be getting better any time soon.


My $10,000 Blog

I’ve always thought that my blog is good. You’d have to pay me a lot to shut it down. Just how much? Probably a few thousand dollars at least. Of course, it probably isn’t worth more than a few cents, but I’m only human, and it’s natural for me to overvalue items or services that I own. This tendency is referred to as the endowment effect, and the Wendy’s commercial sums it up nicely. The man on the left is more willing to give up a dollar than his equally valued Double Stacker because he owns the Double Stacker.

The endowment effect is well established in psychology and behavioral economics. It first appeared in the literature when Richard Thaler published “Toward a Positive Theory of Consumer Choice” in 1980, and its effects have been reproduced in a number of experiments.

In one, researchers divided Cornell undergrads into two groups, one that was given coffee cups and one that was given nothing. Then they asked the former group how much they would sell the cups for and the latter group how much they would pay to buy them (the cups sold for $6 at the Cornell bookstore). Their findings clearly illustrate the endowment effect: those with the cups were “unwilling to sell for less than $5.25,” while those without the cups were “unwilling to pay more than $2.25-$2.75.”

Another experiment, by Dan Ariely, Michael Norton, and Daniel Mochon, illustrates our tendency to overvalue items we are emotionally attached to – another version of the endowment effect. In it, they set up a booth at the Harvard University Student Center and offered students a chance to create origami frogs. Ariely, Norton, and Mochon wanted to see if the students who created origami frogs valued them more highly than the students who did not. To do this, they asked half of the students to construct origami frogs and estimate their value, and the other half to estimate the frogs’ value without constructing them. Their findings confirmed previous examinations of the endowment effect: the students who made the origami frogs valued them about 18 cents higher than the subjects who did not. (Ariely calls this “the IKEA effect,” after assembling an IKEA toy chest and noticing how much more he valued it than his family members did.)

Here is the interesting part of the endowment effect: for the most part, it has only been studied in contrast to neoclassical economic theory, which holds that consumers are rational actors. And because the endowment effect breaks an axiom of neoclassical theory – that the price of a good is objective – it has been labeled an irrational behavior.

But I think this is only half the story.

I was reading Richard Dawkins’ book The Greatest Show on Earth the other day when I came across an interesting passage. Dawkins was explaining that the reason we think babies are so cute is that early humans who evolved to find their babies cute were more likely to nurture them, care for them, play with them, and raise them to be healthy than those who did not. In other words, it is evolutionarily advantageous to think our babies are cute – one way (of many) our genes make sure they get passed on to the next generation.

I bring this up to say that we are endowed to find our children adorable in the same way that we are endowed to overvalue our possessions. That is, the endowment effect is a survival technique handed to us through natural selection – it exists because it is evolutionarily advantageous to overvalue possessions that are important for survival. Think about it: if you were a prehistoric person who possessed a tool that helped you hunt, make fires, and build shelters, wouldn’t it be wise to overvalue that possession? I am not the first person to make this argument, and I don’t think it is that controversial. But things do get contentious when you try to qualify the endowment effect as rational or not.

Whereas psychologists and economists see the endowment effect as irrational, evolutionary biology gives us a strong case for it being entirely rational; Ariely and his colleagues suggest that it is irrational relative to economic theory, but Dawkins’ logic suggests the opposite on evolutionary terms. I am afraid this debate might be one of words: clearly, the qualifications for rational behavior are not absolute, they are relative. As such, it is impossible to objectively say what is or isn’t rational. Perhaps this is an innocuous claim, but as anyone involved in psychology or behavioral economics will tell you, the qualifications of rational behavior are not so easily agreed upon.

At any rate, we should stop thinking about the endowment effect only in the context of reasoning, decision-making, or economics. As Dawkins’ example illustrates, it can also be understood in terms of evolutionary biology.


The Myth of the SI Jinx

When the 2003 regular season ended, things were looking good for the Chicago Cubs. They had won the National League Central – edging out the Houston Astros by one game – and were lined up to play the Atlanta Braves in the first round of the playoffs. They also had two of the best pitchers in the game: Kerry Wood, who had recorded a career-high 266 strikeouts, and Mark Prior, who went 18-6 with a 2.43 ERA. Earlier in the season, Sports Illustrated had dubbed them “Chicago Heat,” and they indeed appeared unbeatable. On top of that, the Cubs had a solid lineup that consisted of power hitters Sammy Sosa and Moises Alou, speedster veteran Kenny Lofton, and recently acquired young star Aramis Ramirez. Needless to say, Cubs fans were thinking that this was finally their year.

Then, just as the stars were aligning, everything collapsed. With a 3-0 lead in the 8th inning of Game 6 of the National League Championship Series against the Florida Marlins, Mark Prior surrendered five straight runs (three earned) and was replaced by reliever Kyle Farnsworth, who gave up three more; in the blink of an eye, the Cubs went from a 3-0 lead to an 8-3 loss – Wrigley was speechless. Fortunately, it was a best-of-seven series and there was still one more game to play. Kerry Wood was starting, and he and Prior hadn’t lost back-to-back games all season. However, in typical Cubs fashion, Wood pitched poorly, the offense didn’t score enough runs, and they lost.

Next came the finger pointing, and it seemed like the actual performance of the Cubs was last on the list. At the top there was the curse of the Billy Goat, Steve Bartman, and of course, the classic SI jinx. For those who don’t know, the SI jinx is an urban legend which holds that a team’s or player’s performance is made worse by appearing on the cover of Sports Illustrated. Below is a clip from the ESPN show “Mike & Mike in the Morning” that illustrates the SI jinx debate. Listen closely to Mike Greenberg (the one on the left in the blue collared shirt) explain why he thinks it is a real thing (begins around 1:00).

I hope you are smart enough to realize that Greenberg’s pseudo-psychological story is completely wrong; the classic “I read a book” example just doesn’t cut it. But I want to tell you why he is wrong, and for that we need to rethink what it means for something to be random.

Leonard Mlodinow’s The Drunkard’s Walk tells the story of how psychologist Daniel Kahneman was first motivated to understand the cognitive biases behind people’s tendency to misunderstand randomness. According to Mlodinow, Kahneman, then a junior psychology professor at Hebrew University, was lecturing to a group of Israeli air force flight instructors on the importance of rewarding positive behavior and not punishing mistakes (a well established finding in psychology). During the lecture, however, Kahneman was met with sharp objections. Many instructors believed that their praises were almost always followed by worse performances and vice versa. But Kahneman knew that in any series of random events, an extraordinary event is most likely to be followed, due purely to chance, by a more ordinary one.

Any especially good or especially poor performance was… mostly a matter of luck. So if a pilot made an exceptionally good landing – one far above his normal level of performance – then the odds would be good that he would perform closer to his norm – that is, worse – the next day. And if his instructor had praised him, it would appear that the praise had done no good. But if a pilot made an exceptionally bad landing – running the plane off the end of the runway and into the vat of corn chowder in the base cafeteria – then the odds would be good that the next day he would perform closer to his norm – that is, better. And if his instructor had a habit of screaming “you clumsy ape” when a student performed poorly, it would appear that his criticism did some good. In this way an apparent pattern would emerge: student performs well, praise does no good; student performs poorly, instructor compares student to lower primate at high volume, student improves. The instructors in Kahneman’s class had concluded from such experiences that their screaming was a powerful educational tool. In reality it made no difference at all (2008, p. 7-9).

This explains why the instructors are wrong, and illustrates what psychologists call the regression fallacy. The regression fallacy refers to the tendency for people to “fail to recognize statistical regression when it occurs, and instead explain the observed phenomena with superfluous and often complicated causal theories.”

As you may have guessed, the SI jinx is a prime example of the regression fallacy. Though people like Mike Greenberg tend to think that SI actually causes a poor performance (i.e., “everyone talking about the curse gets in your head and you start squeezing the bat tighter”), the regression fallacy reveals the mistake. The reality is that athletes usually land on the cover of SI for an extraordinary performance. And like a superb landing by one of the pilots, an extraordinary performance by an athlete is usually followed by an ordinary one. So it is not that athletes do worse because they were on the cover of SI; it is that their performance regresses back to its norm. This also explains why Michael Jordan and Wayne Gretzky, two athletes who have been on the cover of SI many times, are not affected by “the jinx.” Relative to most athletes, their ordinary performances are extraordinary.
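The effect is easy to reproduce with nothing but random numbers. Here is a minimal sketch in Python, using invented skill and luck values rather than real statistics, that picks the “cover athletes” by their best season and shows their next season looking worse purely by chance:

```python
import random

# Minimal sketch of regression to the mean with made-up numbers: each athlete
# has a fixed "true skill," and every season adds independent luck. Athletes
# chosen for the cover are selected for an unusually lucky season, so their
# next season tends to look worse -- no jinx required.
random.seed(0)
n = 10_000
skill = [random.gauss(0.0, 1.0) for _ in range(n)]
season_1 = [s + random.gauss(0.0, 1.0) for s in skill]
season_2 = [s + random.gauss(0.0, 1.0) for s in skill]

# "Cover athletes": the top 1 percent of season-1 performances.
cutoff = sorted(season_1)[int(0.99 * n)]
cover = [i for i in range(n) if season_1[i] >= cutoff]

avg_before = sum(season_1[i] for i in cover) / len(cover)
avg_after = sum(season_2[i] for i in cover) / len(cover)
print(f"Cover athletes, cover season: {avg_before:.2f}")
print(f"Cover athletes, next season:  {avg_after:.2f}")   # lower on average
```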

Returning to the Cubs. Prior and Wood were never the same after the 2003 season. From 2004 to 2006, they went 30-32 and combined for an ERA over 4.00. Prior didn’t play in the majors after 2006, and Wood, who is still active, has only seen moderate success since. This suggests they were average pitchers with one great season, not great pitchers with a handful of average seasons. As they say, “You can have a lucky day, sure, but you can’t have a lucky career.”

So we can safely conclude that the SI jinx is in fact a myth. In addition, we can also understand why it exists in the first place – the regression fallacy. So the next time you see your favorite player or team on the cover of SI, relax, take a deep breath, and realize that SI isn’t actually causing anything.
