Getting Mired in Trivial Choices: Why More Options Don't Mean More Importance

By now, our tendency to decide suboptimally is well documented. When it comes to buying toothpaste or a new pair of jeans, social science research has spoken: we're not only irrational – we're predictably irrational. What's more, too much choice is actually harmful to our well-being. When there is an option for everything, we suffer.

Psychologists term this the paradox of choice, and it describes how we become less satisfied the more choices there are. Think about shopping for jeans. The more there are, the more you expect to find a perfect fit. At the same time, the larger the array, the less likely you are to pick correctly. You walk out of the store less confident in your choice, worrying about the pairs that might have been better.

What's interesting about the paradox of choice is that it doesn't discriminate much. We struggle with important decisions like buying a new home, finding the right wife or husband, or picking a health care plan. This is understandable. But the little things stress us out as well. Finding the perfect toothpaste can't be that important, can it? The brain, in other words, doesn't do a good job of gauging what's at stake when we decide.

This poses a peculiar predicament for psychologists: why do our brains get so caught up in unimportant decisions? This brings me to a new paper (to be published in August) by Aner Sela and Jonah Berger. They ask: “Why do people get mired in seemingly trivial decisions? Why do we agonize over what toothbrush to buy, struggle with what sandwich to pick, and labor over which shade of white to paint the kitchen?”

Sela and Berger use the term “decision quicksand” to describe how we get sucked into unimportant decisions. Their key insight is that the brain conflates excess information with importance. This means that the more options there are, the more time and attention we give, even if we are just picking trivial items. Here are the scientists:

If people form inferences about decision importance from their own decision efforts, then not only might increased perceived importance lead people to spend more time deciding, but increased decision time might, in turn, validate and amplify these perceptions of importance, which might further increase deliberation time. Thus, one could imagine a recursive loop between deliberation time, difficulty, and perceived importance. Inferences from difficulty may not only impact immediate deliberation, but may kick off a quicksand cycle that leads people to spend more and more time on a decision that initially seemed rather unimportant. Quicksand sucks people in, but the worse it seems the more people struggle.

To demonstrate their point, Sela and Berger conducted a series of clever experiments. In one, they gave participants a selection of airline options. The scientists created two groups: participants in the high-difficulty condition were given the options in a small, low-contrast font; participants in the low-difficulty condition were given the same options in a larger, high-contrast font. The researchers found two things. The first, and less surprising, discovery was that participants in the high-difficulty condition spent more time deliberating over the options. The more interesting finding was that this extra effort led to perceptions of increased importance. Moreover, the researchers found that this effect was especially pronounced when participants were told that the choice of flights was actually unimportant. In a world where there is an option for everything, it's no wonder we stress over the little things.

The good news is that many companies are beginning to recognize the implications of this cognitive misfiring. Several years ago Procter & Gamble saw a 10 percent increase in sales when they reduced the number of Head and Shoulders variants from 26 to 15. They found similar results when they deployed the same strategy with Tide and Ivory soap. Likewise, The Golden Cat Corp. reported a 12 percent increase in sales when it eliminated 10 of its worst-selling kitty litters. Even Wal-Mart is weighing in. Back in 2010 the retail giant dropped two of its five lines of peanut butter, which resulted in an increase in sales. "Folks can get overwhelmed with too much variety," said Duncan MacNaughton, chief merchandising officer at Wal-Mart in Mississauga. "With too many choices, they actually don't buy."

These strategies couldn't have come soon enough. Over the last several years psychologists have documented the negative effects that come with choice overload. They use the term "decision fatigue" to describe this phenomenon. The problem is that deciding takes mental effort; it depletes willpower and encourages procrastination. When we are overwhelmed with choice we tend to be more irrational than normal. Here's John Tierney, New York Times writer and co-author of Willpower, with a brief synopsis of the idea:

There is a finite store of mental energy for exerting self-control. When people fended off the temptation to scarf down M&M’s or freshly baked chocolate-chip cookies, they were then less able to resist other temptations. When they forced themselves to remain stoic during a tearjerker movie, afterward they gave up more quickly on lab tasks requiring self-discipline, like working on a geometry puzzle or squeezing a hand-grip exerciser.

How do we remedy this first world problem? When it comes to important decisions, it’s probably a good thing to stress a little bit. But when there isn’t much on the line – what toothpaste to buy for example – remember that the stress you experience is likely a cognitive illusion. With this in mind, try to be less of a maximizer and more of a satisficer; trying to be optimal is nearly impossible – settle for what suffices.

 


Creativity & Childhood

Growing up has its benefits. As we age, our intellect sharpens and our willpower strengthens. We come to control our thoughts and desires; we identify goals and hone our skills.

However, growing up comes at a cost: we lose our natural desire to discover and invent; we become more self-conscious and less willing to fail. A study conducted between 1959 and 1964 involving 350 children found that around 4th grade our tendency to daydream and wonder declines sharply. In other words, Picasso was right: “Every child is an artist. The problem is how to remain an artist once we grow up.”

Age doesn't necessarily sap our creative juices – creative geniuses like Steve Jobs and Steven Spielberg somehow managed to maintain a sense of wonderment through their adult years – but when we make the leap from elementary school to middle school our worldview becomes more realistic and cynical. The question is: what did Jobs and Spielberg do differently? How do we maintain our naiveté?

A study conducted several years ago by Darya Zabelina and Michael Robinson of North Dakota State University gives us a simple remedy. The psychologists divided a large group of undergraduates into two groups. The first group was given the following prompt:

You are 7 years old. School is canceled, and you have the entire day to yourself. What would you do? Where would you go? Who would you see?

The second group was given the same prompt minus the first sentence. This means they didn't imagine themselves as seven-year-olds – they remained in their adult mindset.

Next, the psychologists asked their subjects to take ten minutes to write a response. Afterwards the subjects were given various tests of creativity, such as inventing alternative uses for an old tire or completing unfinished sketches, as well as other tasks from the Torrance test of creativity. Zabelina and Robinson found that "individuals [in] the mindset condition involving childlike thinking… exhibited higher levels of creative originality than did those in the control condition." This effect was especially pronounced among subjects who identified themselves as "introverts."

What happens to our innate creativity when we age? Zabelina and Robinson discuss a few reasons. The first is that regions of the frontal cortex – a part of the brain responsible for rule-based behavior – are not fully developed until our teenage years. This means that when we are young our thoughts are free-flowing and without inhibitions; curiosity, not logic and reason, guides our intellectual musings. The second is that current educational practices discourage creativity. As famed TED speaker Ken Robinson said: "the whole system of public education around the world is a protracted process of university entrance. And the consequence is that many highly talented, brilliant, creative people think they're not, because the thing they were good at at school wasn't valued, or was actually stigmatized."

No matter the reasons, the authors stress, adults can still tap into their more imaginative younger selves. The useful cognitive tools that come with adulthood tempt us to inhibit our imagination from wondering about the impossible, but as so many intellectuals and inventors have remarked throughout history, challenging what’s possible is a necessary starting point. As Jobs said, “the people who are crazy enough to think they can change the world, are the ones who do.”

To be sure, it’s often beneficial to approach life with an adult mindset – you probably don’t want to get too creative with your taxes – but when it comes to using your imagination, thinking of oneself as a child facilitates more original thinking.

Jonah Lehrer and the New Science of Creativity

The following is a repost of my latest article originally posted on ScientificAmerican.com. It is a review of Jonah Lehrer’s latest book, Imagine: How Creativity Works, which was released March 19th.

Bob Dylan was stuck. At the tail end of a grueling tour that took him across the United States and through England he told his manager that he was quitting music. He was physically drained – insomnia and drugs had taken their toll – and unsatisfied with his career. He was sick of performing “Blowin’ in the Wind” and answering the same questions from reporters. After finishing a series of shows at a sold out Royal Albert Hall in London, he escaped to a cabin in Woodstock, New York to rethink his creative direction.

What came next would change rock 'n' roll forever. As soon as Dylan settled into his new home he grabbed a pencil and started writing whatever came to his mind. Most of it was a mindless stream of consciousness. "I found myself writing this song, this story, this long piece of vomit, twenty pages long," he once told interviewers. The song he was writing started like any children's book – "Once Upon a Time" – but what emerged was a tour de force that left people like Bruce Springsteen and John Lennon in awe. A few months later, "Like a Rolling Stone" was released to critical acclaim.

Creativity in the 21st Century

Every creative journey begins with a problem. For Dylan it was the predictability and shallowness of his previous songs. He wasn't challenging his listeners enough; they were too comfortable. What Dylan really wanted to do was replace the expected with the unexpected. He wanted to push boundaries and avoid appealing to the norm; he wanted to reinvent himself.

For most of human history, the creative process has been associated with higher powers; it was about channeling the muses or harnessing one's inner Apollonian and Dionysian; it was otherworldly. Science had barely touched creativity. In fact, in the second half of the 20th century less than 1 percent of psychology papers investigated aspects of the creative process. This changed in the last decade. Creativity is now one of the most popular topics in cognitive science.

The latest installment is Jonah Lehrer's Imagine: How Creativity Works – released today. With grandiose style à la Proust Was a Neuroscientist and How We Decide, Lehrer tells stories of scientific invention and tales of artistic breakthroughs – including Dylan's – while weaving in findings from psychology and neuroscience. What emerges from his chronicles is a clearer picture of what happens in the brain when we are exercising – either successfully or unsuccessfully – our creative juices. The question is: what are the secrets to creativity?

How To Think

There's nothing fun about creativity. Breakthroughs are usually the tail end of frustration, sweat and repeated failure. Consider the story of the Swiffer. Back in the 1980s Procter and Gamble hired the design firm Continuum to study how people cleaned their floors. The team "visited people's homes and watched dozens of them engage in the tedious ritual of floor cleaning. [They] took detailed notes on the vacuuming of carpets and the sweeping of kitchens. When the notes weren't enough, they set up video cameras in living rooms." The leader of the team, Harry West, described the footage as the most boring stuff imaginable. After months of poring through the tapes he and his team knew as much about how people cleaned their floors as anybody else – very little.

But they stuck with it, and eventually landed on a key insight: people spent more time cleaning their mops than they did cleaning the floor. That's when they realized that a paper towel could be used as a disposable cleaning surface. Swiffer launched in the spring of 1999, and by the end of the year it had generated more than $500 million in sales.

Discovery and invention require relentless work and focus. But when we're searching for an insight, stepping back from a problem and relaxing is also vital; the unconscious mind needs time to mull it over before the insight happens – what Steven Berlin Johnson calls the "incubation period." This is the story of Arthur Fry, which Lehrer charmingly brings to life.

In 1974 Fry attended a seminar given by his 3M colleague Spencer Silver about a new adhesive. It was a weak paste, not even strong enough to hold two pieces of paper together. Fry tried to think of an application but eventually gave up.

Later in the year he found himself singing in his church's choir. He was frustrated with the makeshift bookmarks he fashioned to mark the pages in his hymnal; they either fell out or got caught in the seams. What he really needed was a glue strong enough that his bookmarks would stick to the page but weak enough that they wouldn't rip the paper when he removed them. That's when he had his moment of insight: why not use Silver's adhesive for the bookmark? He called it the Post-it Note.

Fry's story fits well with tales of insight throughout history. Henri Poincaré is famous for thinking up non-Euclidean geometry while boarding a bus, and then there's Newton's apple-induced revelation about the law of gravity. Lehrer delves into the relevant research to make sense of these stories at the neurological level. Fascinating studies from Mark Jung-Beeman, John Kounios and Joy Bhattacharya give us good reason to take Lehrer's advice: "Rather than relentlessly focusing, take a warm shower, or play some Ping-Pong, or walk on the beach."

When it comes to the creative process, then, it's important to balance repose with Red Bull. As Lehrer explains: "the insight process… is a delicate mental balancing act. At first, the brain lavishes the scarce resource of attention on a single problem. But, once the brain is sufficiently focused, the cortex needs to relax in order to seek out the more remote association in the right hemisphere, which will provide the insight."

Other People

The flip side of the creative process is other people. The world's great ideas are as much about our peers as they are about the individual who makes it into the textbooks. To explore how the people around us influence our ideas, Lehrer explains the research of Brian Uzzi, who, a few years ago, set out to answer this question: what determines the success of a Broadway musical?

With his colleague Jarrett Spiro, Uzzi thoroughly examined a data set that included 2,092 people who worked on 474 musicals from 1945 to 1989. They considered metrics such as reviews and financial success and controlled for talent and any economic or geographic advantages – big New York City musicals would otherwise skew the data. They found that productions failed for two reasons. The first was too much like-mindedness: "When the artists were so close that they all thought in similar ways… theatrical innovation [was crushed]." On the other hand, when "the artists didn't know one another, they struggled to work together and exchange ideas." Successful productions, in contrast, struck an even balance between novelty and familiarity among their members. This is why West Side Story was such a hit: it balanced new blood with industry veterans.

This is what the website InnoCentive.com teaches us. InnoCentive is a website where, as Matt Ridley would suggest, ideas go to have sex. The framework is simple: “seekers” go to the website to post their problems for “solvers.” The problems aren’t trivial but the rewards are lucrative. For example, the Sandler-Kenner Foundation is currently offering a $10,000 reward for anybody who can create “diagnostic tools for identification of adenocarcinoma and neuroendocrine pancreatic cancer at early stages of development.” Another company is offering $8,000 to anyone who can prevent ice formation inside packages of frozen foods.

What’s remarkable about InnoCentive is that it works. Karim Lakhani, a professor at Harvard Business School, conducted a study that found that about 40 percent of the difficult problems posted on InnoCentive were solved within 6 months. A handful of the problems were even solved within days. “Think, for a moment,” Lehrer says, “about how strange this is: a disparate network of strangers managed to solve challenges that Fortune 500 companies like Eli Lilly, Kraft Foods, SAP, Dow Chemical, and General Electric—companies with research budgets in the billions of dollars—had been unable to solve.”

The secret was outside thinking:

The problem solvers on InnoCentive were most effective when working at the margins of their fields. In other words, chemists didn’t solve chemistry problems, they solved molecular biology problems, just as molecular biologists solved chemistry problems. While these people were close enough to understand the challenges, they weren’t so close that their knowledge held them back and caused them to run into the same stumbling blocks as the corporate scientists.

This is the lesson from West Side Story: great ideas flourish under the right balance of minds. John Donne was right: no man is an island.

Conclusion

There are so many wonderful nuggets to take away from Imagine, and Lehrer does an excellent job of gathering stories from history to bring the relevant psychological research to life. The few stories and studies I’ve mentioned here are just the tip of the iceberg.

When I asked him what the takeaway of his book is (if there could be just one) he said:

The larger lesson is that creativity is a catchall term for a bundle of distinct processes. If you really want to solve the hardest problems you will need all these little hacks in order to solve these problems. This is why what the cognitive sciences are saying about creativity is so important.

He’s right. We think about creativity as being a distinct thing and as people being either creative or not, but the empirical research Lehrer discusses tells a different story. Creativity engages multiple cognitive processes that anybody can access.

This is why Dylan's story is so important: it's the story of a musician growing discontent with his creative direction, having a moment of insight, and working tirelessly to bring that insight and new sound to life, ultimately changing the norm. Dylan's genius isn't about a specific skill that nobody else possessed. It's about his ability to navigate the creative process by using the right parts of the brain at the right times.

Not everybody can be Dylan, but Imagine reminds us that as mysterious and magical as creativity seems, “for the first time in human history, it’s possible to learn how the imagination actually works.”

Why I’m Optimistic About The Future

The history of Earth is a rocky and lifeless story. The first signs of life emerged about a billion years after our planet's creation. They weren't much either; mostly single-celled organisms that resembled today's bacteria. Land animals emerged out of the oceans as recently as 500 million years ago, and the genus Homo came onto the scene a mere 2.5 million years ago. Complex life on Earth is the new kid on the block; natural selection spent most of its time keeping species the same, not changing them.

We humans are a different story. 200,000 years ago a few tens of thousands of us dotted the African plains. But then something happened. We spread across the globe, creating cities and villages along the way. Language evolved, and with it culture and societies. We began living longer and healthier lives, and our population skyrocketed as a result.

What's peculiar about the rise of humans is that, biologically speaking, nothing changed; the same genes that constituted our hunter-gatherer ancestors constitute us. But somewhere along the line a small change led to profound differences in our behavior within a short period of time. Whereas Homo erectus and the Neanderthals spent hundreds of thousands of years making the same tools over and over again, we were able to understand and organize the world better.

Whatever the genetic change was, we eventually gained the ability to learn from others. This was hugely important. Anthropologists call this cultural or social learning, and it not only describes our tendency to copy and imitate by watching others, it highlights our unique ability to recognize the best of a number of alternatives and attempt to improve on it. Many animals can learn, but only humans can learn and improve. As evolutionary biologist Mark Pagel explains, "even if there were a chimpanzee-Einstein, its ideas would almost certainly die with it, because others would be no more likely to copy it than a chimpanzee-dunce."

What's more is our ability to put ourselves inside the minds of others – what philosophers term a theory of mind. It helps us assign reason, purpose and intentionality to objects and people, which in turn allows us to understand things as being part of a bigger picture. Without a theory of mind we would probably still be using the same tools as we did 200,000 years ago.

In addition, theory of mind gives rise to emotions like empathy and sympathy, which give us the capacity to cooperate with people and groups outside of our kin. Virtually no other members of the subfamily Homininae (bonobos aside), including chimps, gorillas and orangutans, exhibit this type of behavior. To borrow a thought experiment from the anthropologist Sarah Hrdy, imagine if you were on a 747 filled with chimps and a baby started to cry. In Hrdy's words: "any one of us would be lucky to disembark with all their fingers and toes still attached, with the baby still breathing and unmaimed." Recent psychological research is confirming that our species' ability to cooperate is partially innate. As bleak as our current headlines are, it appears we humans are wired with at least a minimal ability to get along with each other and have a sense of justice. We're not perfect, but no chimp would donate to charity, and certainly no group of chimps could set up a charity.

This is important for many reasons. The most obvious is that economics is impossible without the means to cooperate with strangers. This is why, according to Matt Ridley, one of the key developments in our species' history took place when we "started to do something to and with each other that in effect began to build a collective intelligence… [we] started, for the very first time, to exchange things between unrelated, unmarried individuals; to share, swap, barter and trade." The effect of trade was specialization, which gave rise to innovation, which in turn improved technologies, and so on. Well before Smith hypothesized the invisible hand and Ricardo thought about how England and Portugal could efficiently trade wine, we had already begun to understand that communities were better off when their members honed their skills, pursued their self-interest and traded with other communities.

This is a simplified and incomplete story but you get the idea: humans flourished because they were able to learn from and cooperate with each other. It’s unclear what happened biologically, but the consequences were obviously vast.

What's interesting is that the cognitive mechanisms that allowed our species to prosper on the African savannah are the same ones responsible for globalization in the 21st century. However, in place of face-to-face interactions is communication over the web.

In a 2010 TED lecture Chris Anderson addressed this point by exploring how web video powers global innovation. He explained the following:

A while after Ted Talks started taking off we noticed that speakers were starting to spend a lot more time in preparation… [the previous speakers raised] the bar for the next generation of speakers… it’s not as if [speakers] ended their talks saying ‘step your game up,’ but they might as well have… you have these cycles of improvement apparently driven by people watching web videos.

Anderson terms this phenomenon "crowd accelerated innovation," and uses it to explain not just how TED Talks are improving in quality, but how everything is. He is making the same general point as Pagel and Ridley: humans learn and innovate by watching and stealing ideas from others. But what's unique about Anderson's point is that it describes how the Internet is facilitating this ability. And the exciting part is that people will learn and imitate even faster with YouTube, Wikipedia, Google Books and many more online services that focus on the distribution of content. As Anderson says, "this is the technology that is going to allow the rest of the world's talents to be shared digitally, thereby launching a whole new cycle of… innovation."

Whereas a famine could have easily wiped out the only community that knew how to harvest a certain crop, build a certain type of boat or make a certain type of tool – what anthropologists call random drift – the Internet not only preserves our collective knowledge, it makes it widely accessible, something the printing press wasn't able to achieve to the same degree. This is why I'm optimistic about the future: the Internet will only accelerate our ability and desire to improve upon the ideas of others.

TED lectures over the years give us plenty of concrete examples to be hopeful about: Hans Rosling illustrated the global rise in GDP and decrease in poverty over the last several decades; Steven Pinker demonstrated the drastic decline in violence; Ridley and Pagel spoke about the benefits of cultural and economic cooperation; and most recently, Peter Diamandis argued that we will be able to solve a lot of the problems that darken our vision of the future. And because all this research is coming to us via the web, the next round of ideas will be even better. More importantly, it will inspire a generation of young Internet users who are looking to change the world for the better.

The Illusion of Understanding Success

In December of 1993, J.K. Rowling was living in poverty, depressed and, at times, contemplating suicide. She resided in a small apartment in Edinburgh, Scotland, with her only daughter. A recent divorce had made her a single mom. Reflecting on the situation many years later, Rowling described herself as "the biggest failure I knew."

By 1995 she had finished the first manuscript of Harry Potter and the Philosopher's Stone, a story about a young wizard she had begun writing years before. The Christopher Little Literary Agency, a small firm of literary agents based in Fulham, agreed to represent Rowling. The manuscript found its way to the chairman of Bloomsbury, who handed it down to his eight-year-old daughter Alice Newton. She read it and immediately demanded more; like so many children and adults after her, she was hooked. Scholastic Inc. bought the rights to Harry Potter in the United States in the spring of 1997 for $105,000. The rest is history.

Rowling's story, which includes financial and emotional shortcomings followed by success and popularity, is the rags-to-riches narrative in a nutshell. It's the story of an ordinary person, dismissed by the world, who emerges out of adversity onto the center stage. It's the sword in the stone, it's the ugly duckling; it's a story that gets played out time and time again throughout history. Kafka captures it nicely in The Castle: "Though for the moment K. was wretched and looked down on, yet in an almost unimaginable and distant future he would excel everybody."

The reality of Rowling’s story, however, is just that: it’s a story. It’s a sequence of facts strung together by an artificial narrative. It didn’t necessarily have to have a happy ending and it certainly was not predictable back in 1993. Rowling did not follow a predetermined path. Her life before Harry Potter was complex and convoluted, and, most importantly, luck played a significant role in her eventual success. These variables are always forgotten in hindsight.

Yet we humans, facing limits of knowledge, to paraphrase one author, resolve the myriad unknown events that defined Rowling's life before Harry Potter by squeezing them into crisp, commoditized ideas and packaging them to fit a comforting narrative. We have, in other words, a limited ability to look at sequences of facts without weaving an explanation into them.

The same problem occurs in science. It’s always the story of invention, the tale of discovery or the history of innovation. These narratives manifest themselves in the form of a quest: A scientist is stuck on a problem, he or she is surrounded by doubt, but after years of hard work an insight prevails that changes the world forever.

In The Seven Basic Plots, Christopher Booker summarizes The Quest, which sounds as much like Darwin on the Beagle, Magellan aboard the Trinidad or Marco Polo traveling across Asia as it does Frodo traversing Middle-earth. As Booker explains:

Far away, we learn, there is some priceless goal, worth any effort to achieve… From the moment the hero learns of this prize, the need to set out on the long hazardous journey to reach it becomes the most important thing to him in the world. Whatever perils and diversion lie in wait on the way, the story is shaped by that one overriding imperative; and the story remains unresolved until the objective has been finally, triumphantly secured.

Unfortunately, Frodo’s triumph at Mount Doom is more real than natural selection to some. Kahneman is right: “It is easier to construct a coherent story when you know little, when there are fewer pieces to fit into the puzzle.”

Our propensity to tell stories is also fueled by the survivorship bias, which describes our tendency to believe that successful people possess some special property. For Steve Jobs it was his assertive leadership and vision, for Bob Dylan it was his poetry and willingness to challenge the norm, and for Rowling it was her creativity and imagination. But these attributes are post-hoc explanations; there are plenty of people of Dylan's musical and lyrical caliber who will never match his success. Likewise, many creative geniuses of Rowling's stature will never sell tens of millions of books. Luck, at the end of the day, might be the best explanation.

When trying to answer the question of what makes people successful, the best response might be that it's impossible to know. Indeed, hard work, intelligence and good genes certainly play a role. But the reality of Rowling's story is that it was highly unlikely. Twelve publishing houses rejected the book. In the years leading up to Harry Potter any number of things could have prevented Scholastic from purchasing the rights to her book. If it weren't for little Alice Newton, the book may never have seen the light of day.

The true test of an explanation, as Kahneman also says, is whether it would have made the event predictable in advance. No story of Rowling's unlikely success will meet that test, because no story can include all the events that would have caused a different outcome. That said, we will continue to explain Rowling's story as if it were inevitable and predictable. We will always be obsessed with happy endings.

The takeaway is twofold: first, be suspicious of narratives, especially if they are charming; second, be humble about what you think it takes to be successful. There is good reason to believe that what you think is an illusion perpetuated by a narrative in which everybody lives happily ever after.

Why The Future of Neuroscience Will Be Emotionless

In the Phaedrus, Plato likens the mind to a charioteer who commands two horses, one that is irrational and crazed and another that is noble and of good stock. The job of the charioteer is to control the horses and proceed towards enlightenment and truth.

Plato's allegory sparked an idea that persisted throughout the next several millennia in Western thought: emotion gets in the way of reason. This makes sense to us. When people act out of line, we call them irrational. No one was ever accused of being too reasonable. Around the 17th and 18th centuries, however, thinkers began to challenge this idea. David Hume turned the tables on Plato: reason, Hume said, is the slave of the passions. Psychological research of the last few decades not only confirms this view, some of it suggests that emotion is better at making decisions.

We know a lot more about how the brain works compared to the ancient Greeks, but a decade into the 21st century researchers are still debating which of Plato’s horses is in control, and which one we should listen to.

A couple of recent studies are shedding new light on this age-old discourse. The first comes from Michael Pham and his team at Columbia Business School. The researchers asked participants to make predictions about eight different outcomes, ranging from American Idol finalists to the winners of the 2008 Democratic primary to the winner of the BCS championship game. They also forecast the Dow Jones average.

Pham created two groups. He told the first group to go with their guts and the second to think it through. The results were telling. In the American Idol predictions, for example, the first group correctly picked the winner 41 percent of the time whereas the second group was correct only 24 percent of the time. The high-trust-in-feelings subjects even predicted the stock market better.

Pham and his team conclude the following:

Results from eight studies show that individuals who had higher trust in their feelings were better able to predict the outcome of a wide variety of future events than individuals who had lower trust in their feelings…. The fact that this phenomenon was observed in eight different studies and with a variety of prediction contexts suggests that this emotional oracle effect is a reliable and generalizable phenomenon. In addition, the fact that the phenomenon was observed both when people were experimentally induced to trust or not trust their feelings and when their chronic tendency to trust or not trust their feelings was simply measured suggests that the findings are not due to any peculiarity of the main manipulation.

Does this mean we should always trust our intuition? It depends. A recent study by Maarten Bos and his team identified an important nuance when it comes to trusting our feelings. They asked one hundred and fifty-six students to abstain from eating or drinking (water excepted) for three hours before the study. When they arrived, Bos divided his participants into two groups: one that consumed a sugary can of 7-Up and another that drank a sugar-free drink.

After waiting a few minutes to let the sugar reach the brain, the students assessed four cars and four jobs, each with 12 key aspects that made them more or less appealing (Bos designed the study so that an optimal choice was clear, which let him measure how well the students decided). Next, half of the subjects in each group spent four minutes thinking about the jobs and cars (the conscious thought condition), while the other half watched a wildlife film (to prevent them from consciously thinking about the jobs and cars).

Here’s the BPS Research Digest on the results:

For the participants with low sugar, their ratings were more astute if they were in the unconscious thought condition, distracted by the second nature film. By contrast, the participants who’d had the benefit of the sugar hit showed more astute ratings if they were in the conscious thought condition and had had the chance to think deliberately for four minutes. ‘We found that when we have enough energy, conscious deliberation enables us to make good decisions,’ the researchers said. ‘The unconscious on the other hand seems to operate fine with low energy.’

So go with your gut if your energy is low. Otherwise, listen to your rational horse.

Here's where things get difficult. By now the debate over the roles reason and emotion play in decision-making is well documented. Psychologists have written thousands of papers on the subject. It shows up in the popular literature as well. From Antonio Damasio's Descartes' Error to Daniel Kahneman's Thinking, Fast and Slow, the lay audience knows about both the power of thinking without thinking and our predictable irrationalities.

But what exactly is being debated? What do psychologists mean when they talk about emotion and reason? Joseph LeDoux, author of popular neuroscience books including The Emotional Brain and Synaptic Self, recently published a paper in the journal Neuron that flips the whole debate on its head. "There is little consensus about what emotion is and how it differs from other aspects of mind and behavior, in spite of discussion and debate that dates back to the earliest days in modern biology and psychology." Yes, what we call emotion roughly correlates with certain parts of the brain; it is usually associated with activity in the amygdala and other systems. But we might be playing a language game, and neuroscientists are reaching a point where an understanding of the brain requires more sophisticated language.

As LeDoux sees it, “If we don’t have an agreed-upon definition of emotion that allows us to say what emotion is… how can we study emotion in animals or humans, and how can we make comparisons between species?” The short answer, according to the NYU professor, is “we fake it.”

With this in mind LeDoux introduces a new term to replace emotion: survival circuits. Here’s how he explains it:

The survival circuit concept provides a conceptualization of an important set of phenomena that are often studied under the rubric of emotion—those phenomena that reflect circuits and functions that are conserved across mammals. Included are circuits responsible for defense, energy/nutrition management, fluid balance, thermoregulation, and procreation, among others. With this approach, key phenomena relevant to the topic of emotion can be accounted for without assuming that the phenomena in question are fundamentally the same or even similar to the phenomena people refer to when they use emotion words to characterize subjective emotional feelings (like feeling afraid, angry, or sad). This approach shifts the focus away from questions about whether emotions that humans consciously experience (feel) are also present in other mammals, and toward questions about the extent to which circuits and corresponding functions that are relevant to the field of emotion and that are present in other mammals are also present in humans. And by reassembling ideas about emotion, motivation, reinforcement, and arousal in the context of survival circuits, hypotheses emerge about how organisms negotiate behavioral interactions with the environment in process of dealing with challenges and opportunities in daily life.

Needless to say, LeDoux’s paper changes things. Because emotion is an unworkable term for science, neuroscientists and psychologists will have to understand the brain on new terms. And when it comes to the reason-emotion debate – which of Plato’s horses we should trust – they will have to rethink certain assumptions and claims. The difficult part is that we humans, by our very nature, cannot help but resort to folk psychology to explain the brain. We deploy terms like soul, intellect, reason, intuition and emotion but these words describe very little. Can we understand the brain even though our words may never suffice? The future of cognitive science might depend on it.


The Future Of Religion

Religious people – that is, people who say that religion is important in their lives – have, on average, higher subjective well-being. They find a greater sense of purpose or meaning, are connected to stronger social circles and live longer and healthier lives. Why, then, are so many dropping out of organized religion?

Last year a team of researchers led by Ed Diener tried to answer this question. They found that economically developed nations are much less likely to be religious. On the other hand, religion is widespread in countries with more difficult circumstances. “Thus,” the authors conclude, “it appears that the benefits of religion for social relationships and subjective well-being depend on the characteristics of the society.” People of developed nations are dropping out of organized religion, then, because they are finding meaning and wellness elsewhere.

The real paradox is America, where Nietzsche's anti-theistic proclamation went unheard. Eighty-three percent of Americans identify with a religious denomination, most say that religion is "very important" in their lives, and according to Sam Harris, 44 percent "of the American population is convinced that Jesus will return to judge the living and the dead sometime in the next fifty years." In fact, a recent study even showed that atheists are largely seen as untrustworthy compared to Christians and Muslims.

Why does the United States, one of the most economically developed countries in the world, deviate from the correlation between religion and wealth? One answer is that trends always contain outliers. As Nigel Barber explains in an article: "The connection between affluence and the decline of religious belief is as well-established as any such finding in the social sciences…. [and] no researcher ever expects every case to fit exactly on the line… If they did, something would be seriously wrong."

Whatever the reasons, a recent article by David Campbell and Robert Putnam suggests that Americans are catching up to their non-believing European counterparts. According to Campbell and Putnam, the number of “nones” – those who report no religious affiliation – has dramatically increased in the last two decades. “Historically,” Campbell and Putnam explain, “this category made up a constant 5-7 percent of the American population… in the early 1990s, however, just as the God gap widened in politics, the percentage of nones began to shoot up. By the mid-1990s, nones made up 12 percent of the population. By 2011, they were 19 percent. In demographic terms, this shift was huge.”

A study by Daniel Mochon, Michael Norton and Dan Ariely fits well with this observation. They discovered that, "while fervent believers benefit from their involvement, those with weaker beliefs are actually less happy than those who do not ascribe to any religion – atheists and agnostics." It's possible the "nones" Campbell and Putnam speak of are motivated to abandon their belief by a desire to be happier and less conflicted about their lives. This might be too speculative, but there are plenty of stories, especially in the wake of the New Atheist movement, of people who describe their change of faith as a dramatic improvement in their emotional life. In a recent interview with Sam Harris, for example, Tim Prowse, a United Methodist pastor for almost 20 years, described leaving his faith as a great relief. "The lie was over, I was free," he said, "…I'm healthier now than I've been in years and tomorrow looks bright."

What does this say about the future of atheism? Hitchens and others suggest that a standoff between believers and non-believers may be inevitable. "It's going to be a choice between civilization and religion," he says. However, grandiose predictions about the future of the human race are almost always off the mark, and it's likely that the decline in religion will remain slow and steady. It's important to keep in mind that this decline is a recent phenomenon. It wasn't until the 17th century, the so-called Age of Reason, that writers, thinkers and some politicians began to insist that societies are better off when they give their citizens the political right to communicate their ideas. This was a key intellectual development, and in the context of the history of civilization, a very recent one.

To be sure, radical ideologies will always exist; religion, Marx suggested, is the opiate of the people. But the trend towards empiricism, logic and reason is undeniable and unavoidable. Titles including God Is Not Great and The God Delusion are bestsellers for a reason. And if Prowse’s testimony as well as Campbell and Putnam’s data are indicative, there is a clear shift in the zeitgeist.

Do We Know What We Like?

 

People are notoriously bad at explaining their own preferences. In one study researchers asked several women to choose their favorite pair of nylon stockings from a group of twelve. After they made their selections the scientists asked them to explain their choices. The women mentioned things like texture, feel, and color. All of the stockings, however, were identical. The women manufactured reasons for their choices, believing that they had conscious access to their preferences.

In other words: “That voice in your head spewing out eloquent reasons to do this or do that doesn’t actually know what’s going on, and it’s not particularly adept at getting you nearer to reality. Instead, it only cares about finding reasons that sound good, even if the reasons are actually irrelevant or false. (Put another way, we’re not being rational – we’re rationalizing.)”

Our ignorance of our wants and desires is well established in psychology. Several years ago Timothy Wilson conducted one of the first studies to illustrate this. He asked female college students to pick their favorite poster from five options: a van Gogh, a Monet and three humorous cat posters. He divided them into two groups: the first (non-thinkers) was instructed to rate each poster on a scale from 1 to 9; the second (analyzers) answered questionnaires asking them to explain why they liked or disliked each of them. Finally, Wilson gave each subject her favorite poster to take home.

Wilson discovered that the preferences of the two groups were quite different. About 95 percent of the non-thinkers went with van Gogh or Monet. On the other hand, the analyzers went with the humorous cat poster about 50 percent of the time. The surprising results of the experiment showed themselves a few weeks later. In a series of follow-up interviews, Wilson found that the non-thinkers were much more satisfied with their posters. What explains this? One author says that, “the women who listened to their emotions ended up making much better decisions than the women who relied on their reasoning powers. The more people thought about which posters they wanted, the more misleading their thoughts became. Self-analysis resulted in less self-awareness.”

Wilson found similar results in an experiment involving jams. And other researchers, including Ap Dijksterhuis of Radboud University in the Netherlands, have also demonstrated that we know if we like something, but we don't know why, and the more time we spend deliberating, the worse off we are. Freud, then, was right: we're not even the masters of our own house.

Our tendency to make up reasons for our preferences is of particular importance for advertisers, who sometimes rely on focus groups. But if we don't know what we like, then how are ad agencies supposed to know what we like? The Mary Tyler Moore Show and Seinfeld, for example, are famous for testing terribly even though they went on to become two of the most popular shows in the history of TV. By the same token, many shows that tested well flopped. As Philip Graves, author of Consumer.ology, reminds us: "As long as we protect the illusion that we ourselves are primarily conscious agents, we pander to the belief that we can ask people what they think and trust what we hear in response. After all, we like to tell ourselves we know why we do what we do, so everyone else must be capable of doing the same, mustn't they?"

Stories of the failures of market research are not uncommon. Here’s one from Gladwell.com:

At the beginning of the ’80s, I was a product manager at General Electric, which at the time had a leading market share in the personal audio industry (radios, clock radios, cassette recorders, etc.). Sony had just introduced the Walkman, and we were trying to figure out how to react. Given the management structure of the day, we needed to prove the business case. Of course, we did focus groups!

Well, the groups we did were totally negative. This was after the Walkman had been on the scene for months, maybe a year. The groups we did felt that personal music would never take off. Would drivers have accidents? Would bicycle riders get hit by drivers?

If we listened to “typical” consumers, the whole concept was DOA.

This type of reaction is probably the reason that there is the feeling of a “technological determination” on the part of the electronics community. It leads to the feeling that you should NEVER listen to the consumer, and just go about introducing whatever CAN be produced.

At the time, we had a joke about Japanese (Sony/Panasonic/JVC) market research. "Just introduce something. If it sells, make more of it." It's one way of doing business. On the other hand, when I was hired by a Japanese company in the mid-80’s, I was asked how GE could get by with introducing such a limited number of models. Simple, I said, “We tested them before we introduced them.”

History tells which method has worked better.

One person who understood this was Steve Jobs. He never cared for market research or focus groups because, as he once said, "people don't know what they want until you show it to them." Instead, Jobs was a pseudo-Platonist about his products. He believed that there was an ideal music player, phone, tablet and computer, and he trusted customers to naturally recognize perfection when they saw it. When asked what market research went into the iPad, his New York Times obituary reports, Mr. Jobs replied: "None. It's not the consumers' job to know what they want."

I'm not the only one with an ancient Greek take on Jobs. Technology-theory contrarian Evgeny Morozov compared Jobs to Plato a few years back. He said:

The notion of essence as invoked by Jobs and Ive [the top Apple designer] is more interesting and significant—more intellectually ambitious—because it is linked to the ideal of purity. No matter how trivial the object, there is nothing trivial about the pursuit of perfection. On closer analysis, the testimonies of both Jobs and Ive suggest that they did see essences existing independently of the designer—a position that is hard for a modern secular mind to accept, because it is, if not religious, then, as I say, startlingly Platonic.

Does this mean all marketers should think Platonically? Not necessarily; Jobs, to be sure, was an outlier. But it does remind us that many times we don't know what we like.

Do Babies Know What’s Fair?

The blank slate – tabula rasa, as philosophers termed it – is one of the worst ideas in science. For a long time scientists avoided asserting that anything about human behavior was innate. If they did, someone would point to a quirky tribe from Papua New Guinea to argue that all behavior comes via experience. This attitude has changed in the last couple of decades. Books ranging from Steven Pinker's The Blank Slate to David Shenk's The Genius in All of Us give us a much more accurate picture of the nature-nurture debate. Now scientists know that the human brain is like a book: the first draft is written at birth and the rest is filled in during life. As NYU psychologist Gary Marcus explains, "Nature provides a first draft, which experience then revises… 'Built-in' does not mean unmalleable; it means 'organized in advance of experience.'"

Understanding human behavior in these terms is vital for moral psychologists. For thousands of years philosophers debated whether humans are inherently good or evil. But we now know that this is a false choice. As Marcus explains, everyone possesses a moral sense from birth that permits altruism, fairness and justice; the interaction between genes and environment influences how these qualities are drawn out.

Some of the most important work in moral development comes from Paul Bloom, Karen Wynn and Kiley Hamlin. In one of their experiments they used a three-dimensional display and puppets to act out helping and hindering scenarios for six- and ten-month-old infants. For example, a yellow square would help a circle up a hill; a red triangle would push it down. After the puppet show, Bloom, Wynn and Hamlin placed the helper and the hinderer on a tray and brought them to the children. They found that the infants overwhelmingly preferred the helpful puppet to the hindering one. In a New York Times article, Bloom concludes that "babies possess certain moral foundations — the capacity and willingness to judge the actions of others, some sense of justice, gut responses to altruism and nastiness. Regardless of how smart we are, if we didn't start with this basic apparatus, we would be nothing more than amoral agents, ruthlessly driven to pursue our self-interest."

This brings me to a brand new study by psychologists Stephanie Sloane, Renée Baillargeon and David Premack published in Psychological Science. There were two experiments, and babies watched live scenarios in each. In the first, 19-month-olds watched two giraffe puppets dance as an experimenter cheerfully presented the long-necked puppets with two toys. Here was the wrinkle: the experimenter either gave one toy to each giraffe or both to one of them. Sloane et al. then timed how long the babies gazed at the scene until they lost interest (longer looking times indicated that the babies thought something was wrong). They found that three-quarters of the infants looked longer when one giraffe got both toys.

In the second experiment, two women played with a pile of small toys until an experimenter said, "Wow! Look at all these toys. It's time to clean them up!" In one scenario both women got a reward even though one put all the toys away while the other kept playing. In the other scenario both women got a reward and both put the toys away. Similar to the results of the first experiment, the researchers found that 21-month-old infants gazed longer when the worker and the slacker were rewarded equally.

Here’s Sloane on the implications of the research:

We think children are born with a skeleton of general expectations about fairness and these principles and concepts get shaped in different ways depending on the culture and the environment they’re brought up in… helping children behave more morally may not be as hard as it would be if they didn’t have that skeleton of expectations.

Sloane's study and remarks complement other research. A study published last November by Kiley Hamlin (along with Karen Wynn) demonstrated that babies preferred puppets that mistreated the bad characters from scenarios similar to the ones created by Bloom and his colleagues. Hamlin concluded that babies "prefer it when people who commit or condone antisocial acts are mistreated." Moreover, last October a study by Marco Schmidt and Jessica Sommerville found that "the infants [expecting] an equal and fair distribution of food… were surprised to see one person given more crackers or milk than the other."

This small but significant body of research is giving us a better understanding of morality from the developmental point of view. It also reminds us that behavior is not simply nature versus nurture; it is about the interaction of genes and their environments. A better understanding of where our moral sense comes from and how it develops will hopefully help us draw out what Abraham Lincoln called our Better Angels.


What Conspiracy Theories Teach Us About Reason

Conspiracy theories are tempting. There is something especially charming about a forged moon landing or government-backed assassination. Christopher Hitchens called them “the exhaust fumes of democracy.” Maybe he’s right: cognitive biases, after all, feast on easy access to information and free speech.

Leon Festinger carried out the first empirical study of conspiracy theorists. In 1954 the Stanford social psychologist infiltrated a UFO cult that was convinced the world would end on December 20th. In his book When Prophecy Fails, Festinger recounts how, after midnight came and went, the leader of the cult, Marian Keech, explained to her members that she had received a message via automatic writing telling her that the God of Earth had decided to spare the planet from destruction. Relieved, the cult members continued to spread their doomsday ideology.

Festinger coined the term cognitive dissonance to describe the psychological consequences of disconfirmed expectations. It is a "state of tension that occurs whenever a person holds two cognitions that are psychologically inconsistent," as two authors describe it, and "the more committed we are to a belief, the harder it is to relinquish, even in the face of overwhelming contradictory evidence."

Smokers are another good example; they smoke even though they know it kills. And after unsuccessfully quitting, they tend to say that "smoking isn't that bad" or that "it's worth the risk." In a related example, doctors who performed placebo surgeries on patients with osteoarthritis of the knee "found that patients who had 'sham' arthroscopic surgery reported as much relief… as patients who actually underwent the procedure." Many patients continued to report dramatic improvement even after surgeons told them the truth.

A recent experiment by Michael J. Wood, Karen M. Douglas and Robbie M. Sutton reminds us that holding inconsistent beliefs is more the norm than the exception. The researchers found that "mutually incompatible conspiracy theories are positively correlated in endorsement." Many subjects, for example, believed both that Princess Diana faked her own death and that she was killed by a rogue cell of British Intelligence, or both that the death of Osama bin Laden was a cover-up and that he is still alive. The authors conclude that many participants showed "a willingness to consider and even endorse mutually contradictory accounts as long as they stand in opposition to the officially sanctioned narrative."

The pervasiveness of cognitive dissonance helps explain why it sometimes takes societies several generations to adopt new beliefs. People do not simply change their minds, especially when there is a lot on the line. It took several centuries for slavery to be universally banned (Mauritania was the last country to do so, in 1981). In the United States, civil rights movements for women and African-Americans lasted decades. Same-sex marriage probably won't be legal in all 50 states for several more years. Our propensity to hold onto cherished beliefs also pervades science. As Max Planck said, "A new scientific truth does not triumph by convincing its opponents and making them see the light, but rather because its opponents eventually die, and a new generation grows up that is familiar with it."

Are there ways to dilute the negative effects of cognitive dissonance? I'm afraid the Internet is part of the problem. Google makes it easy for us to find something that confirms a belief. But it is also part of the solution. History tells us that cooperation and empathy between individuals, institutions and governments increase as the exchange of information becomes easier. From the printing press to Uncle Tom's Cabin and through the present day (when social networks are the primary means of communication for so many), people tend to consider points of view other than their own the more they are exposed to other perspectives.

Steven Pinker captures this point well in his latest book: “As literacy and education and the intensity of public discourse increase, people are encouraged to think more abstractly and more universally. That will inevitably push in the direction of a reduction of violence. People will be tempted to rise above their parochial vantage points – that makes it harder to privilege one’s own interest over others.” It shouldn’t come as a surprise then, that the rise of published books and literacy rates preceded the Enlightenment, an era that was vital in the rise of human rights.

This brings me back to Hitchens's quote. Indeed, a byproduct of democracy is the tendency for some people to believe whatever they want, even in the face of overwhelming contradictory evidence. However, Pinker reminds us that democracy is helping to relieve our hardwired propensity to look only for what confirms our beliefs. That our confirmation biases are innate suggests that they will never disappear, but the capacity to reason, facilitated by the exchange of information, paints an optimistic future.
