
Posts tagged ‘Dean Buonomano’

Religion, Evolution & What The New Atheists Overlook

The lancet fluke (Dicrocoelium dendriticum) is a clever little parasite. To reproduce, it finds its way into the stomach of a sheep or cow by commandeering an ant’s brain. Once this happens, the ant exhibits strange behavior: it climbs up the nearest blade of grass until it falls, then climbs it again, and again. If the flukes are lucky, a grazing farm animal eats the grass along with the ant; a sure win for the flukes, but a sad and unfortunate loss for the six-legged insect.

Does anything like this happen with human beings? Daniel Dennett thinks so. At the beginning of his book Breaking the Spell, Dennett uses the fluke to suggest that religions survive because they influence their hosts (e.g., people) to do bad things for themselves (e.g., suicide bombing) but good things for the parasite (e.g., Islam). Implicit in Dennett’s example is the idea that religions are like viruses, and that people and societies are better off without them.

Dennett’s position is akin to that of the rest of the New Atheists: religion is a nasty and irrational byproduct of natural selection. This means that religious beliefs were not directly selected for by evolution any more than our noses evolved to keep our glasses from sliding off our faces. In the words of Pascal Boyer, “religious concepts and activities hijack our cognitive resources.” The question is: which cognitive resources did religion hijack?

Most cognitive scientists agree that the Hypersensitive Agency Detection Device (abbreviated HADD) played an important role. In brief, the HADD explains why we see faces in the clouds, but never clouds in faces. Neuroscientist Dean Buonomano puts it this way: “We are inherently comfortable assigning a mind to other entities. Whether the other entity is your brother, a cat, or a malfunctioning computer, we are not averse to engaging it in conversation.” This ability attributes will and intention to other people, animals and inanimate objects. The HADD produces a lot of false-positive errors (e.g., seeing the Virgin Mary in a piece of toast), and God might be one of them.

Another feature of the human mind that religion might have co-opted is a natural propensity toward a dualistic theory of mind. Dualism is our tendency to believe that people are made up of physical matter (e.g., lungs, DNA, and atoms) as well as an underlying and internal essence. Even the strictest materialist cannot escape this sentiment; we all feel that there is a “me” resting somewhere in our cortices. A belief in disembodied spirits could have given rise to beliefs in supernatural entities that exist independently of matter. Yale psychologist Paul Bloom is a proponent of this view and supports his conclusions with experimental evidence highlighted in his book Descartes’ Baby.

Although the byproduct hypothesis, as it is known, is incomplete, its various strands all point to the same logic: “a bit of mental machinery evolved because it conferred a real benefit, but the machinery sometimes misfires, producing accidental cognitive effects that make people prone to believing in gods.”

This is an important piece of the puzzle for the New Atheists. If religion is the offshoot of a diverse set of cognitive modules that evolved for a variety of problems, then religious beliefs are nothing more than a series of neural misfires that are “correctable” with secular Enlightenment thinking.

Not everyone agrees. The evolutionary biologists David Sloan Wilson and Edward O. Wilson propose that religiosity is a biological adaptation that created communities by instilling a “one for all, all for one” mentality in its members. This is important because it allowed group members to function as a superorganism, which in turn gave them an advantage on the African savannah. “An unshakable sense of unity among… warriors,” Buonomano says, “along with certainty that the spirits are on their side, and assured eternity, were as likely then, as they are now, to improve the chances of victory in battle.” The binding power of religion would also have helped communities form objective moral codes – do unto others as you would have others do unto you – and protected against free riders.

Jonathan Haidt is making a name for himself by advocating this point. In addition to the group-selection hypothesis, Haidt points to our species’ ability to experience moments of self-transcendence. The world’s religions, he believes, are successful because they found a way to facilitate such experiences. Here’s how he explained it in a recent TED talk:

If the human capacity for self-transcendence is an evolutionary adaptation, then the implications are profound. It suggests that religiosity may be a deep part of human nature. I don’t mean that we evolved to join gigantic organized religions — that kind of religion came along too recently. I mean that we evolved to see sacredness all around us and to join with others into teams that circle around sacred objects, people and ideas. This is why politics is so tribal. Politics is partly profane, it’s partly about self-interest. But politics is also about sacredness. It’s about joining with others to pursue moral ideals. It’s about the eternal struggle between good and evil, and we all believe we’re on the side of the good.

What’s interesting about Haidt’s angle is that it sheds an unflattering light on the Enlightenment and secular ideals that Western civilization was founded on. We exalt liberty, individualism and the right to pursue our self-interest. But are we ignoring our innate desire to be part of something greater? Are we denying our groupish mentalities? The modern world gives us fixes – think big football games or raves – but I think some atheists are deprived.

And this brings me back to the fluke and the New Atheists. If Haidt is right, and our religiosity was an evolutionary adaptation, then religious beliefs are a feature of, not a poison to, our cognition. The fluke, therefore, is not a parasite but an evolutionary blessing that facilitated the creation of communities and societies. This is not to deny all the bloodshed carried out on behalf of religion. But if religion is an adaptation and not a byproduct, then “we cannot expect people to abandon [it] so easily.”

The Importance of Forgetting: Why a Bad Memory is a Good Memory

I wish my memory was like a computer’s. I’ve lost car keys, a cellphone, a driver’s license and, on the eve of an overseas trip, a passport; wouldn’t things be easier if I could effortlessly organize millions of pieces of information and retrieve them with a mental Google search?

Alas, my memory – and yours – evolved according to different plans. Instead of neatly storing pieces of information into a neural bookshelf, memory organizes itself more like a web where experiences are stored contextually and in relation to one another. “An item is stored in relation to other items,” neuroscientist Dean Buonomano explains, “and its meaning is derived from the items to which it is associated.” This is why, for instance, thinking about “Africa,” “Animal,” “Stripes,” and “Black and White” automatically pops “Zebra” into your conscious mind. And as Proust famously illustrated in his lengthy classic Remembrance of Things Past, a single recognized combination of taste and smell can trigger an avalanche of memories.

At its extreme, our imperfect memory is sometimes the difference between life and death. About 6 percent of skydiving deaths are caused by forgetting to pull the ripcord, and scuba divers too often forget to check their oxygen gauge. It is also responsible for numerous false eyewitness accounts, including that of rape victim Jennifer Thompson, which landed the innocent Ronald Cotton in jail for 11 years. And our tendency to believe that so-called “flashbulb” memories are accurate is misplaced as well.

Indeed, our memory is far from perfect. But is this a bad thing? A recent Scientific American Mind article by Ingrid Wickelgren suggests that for all its shortcomings, memory is a fairly well-oiled cognitive capacity. She explains:

For most people, the concept of forgetting conjures up lost car keys, missed appointments and poor scores on exams. Worse, it augurs dementia. Psychologists traditionally shared this view, and most of them studied memory with an eye toward closing the cracks through which knowledge can slip… An early challenge to that downbeat view of forgetting emerged in 1970, when psychologist Robert A. Bjork, now at the University of California at Los Angeles, reported that instructions to forget some learned items could enhance memory for others. Forgetting is therefore not a sign of an inferior intellect—but quite the opposite. The purpose of forgetting, he wrote, is to prevent thoughts no longer needed from interfering with the handling of current information—akin to ridding your home of extraneous objects so that you can find what you need.

Memory is much more efficient in this light. Because 99 percent of our experiences are fairly uneventful and meaningless, the mind does a good job of holding onto only the important stuff while discarding the rest. Sure, there are obvious downsides to this, but there are upsides as well. Wickelgren goes on to explain the importance of forgetting:

In a study published in 2001 [Michael C.] Anderson and his student Collin Green… gave 32 college students what they called a think/no-think task. The students learned 40 word pairs such as ordeal-roach, with the first word serving as a cue for the second. Next they presented the cues and asked participants either to think about and say the word that went with it or to suppress (not think about) the associated word. Suppression seemed to work. The students even recalled fewer of the suppressed word associations than the “baseline” words—ones they learned but neither practiced nor inhibited…

[But] forgetting does not come easily to everyone…. This skill, or lack of it, has ripple effects on personality. If you cannot shake negative memories, for example, you might be easily sucked into a bad mood. Although the inability to forget does not cause depression, research shows that depressed patients have difficulty putting aside dark thoughts. In one experiment, published in 2003, psychologist Paula T. Hertel of Trinity University in San Antonio and Melissa Gerstle, now at the Texas Children’s Hospital and Baylor College of Medicine, found that depressed students recalled many more words they had practiced suppressing than other students did. The students who had the most trouble forgetting scored the highest on measures of rumination—which is the tendency to dwell on a concern—and the frequency of unwanted thoughts.

Those who do not suffer from depression, on the other hand, benefit from the brain’s natural tendency to remember the good and forget the bad. This cognitive advantage influences us to remember a spoiled camping trip as a “great time” with friends or a disastrous trip to Disney World as a “good bonding experience” for the family. It also seems to cause women to remember only the good part of childbirth – the end; a nice evolutionary quirk that influences them to continue reproducing. Perhaps most importantly, the brain’s automatic ability to forget the bad helps people get over most personal losses, emotional trauma and bad breakups. As Jim Carrey’s character Joel Barish illustrates in the movie Eternal Sunshine of the Spotless Mind, people will go a long way to forget certain experiences if they cannot do so naturally. With this in mind, I’m glad my memory is not like a computer’s.

If there is a bottom line, it is this: the purpose of human memory is not to store information but to organize it so that we can understand and predict the world. The downsides of this abound, but I say evolution did a pretty good job. Car keys, cell phones, licenses and passports weren’t very important over the last few million years when our ancestors were evolving, after all.


Stereotype Threat: Overcoming Stereotypes One Neuron at a Time

The SAT is experiencing an existential crisis. In 1990 the name changed from “Scholastic Aptitude Test” to “Scholastic Assessment Test” because it wasn’t clear whether the SAT actually measured intelligence. Then, a few years later in 1993, the name changed to “SAT I: Reasoning Test” to distinguish it from the “SAT II: Subject Tests.” In 2004 the Roman numerals were dropped, it became the “SAT Reasoning Test,” and a writing section was added. Now it doesn’t stand for anything; it’s simply the “SAT.” It isn’t clear what it measures either, and there are plenty of issues to debate, including possible cultural and socioeconomic biases. But one question that isn’t asked is: does the name of the test influence performance?

Surely, as one author says, “it doesn’t matter whether the test is described as a measure of IQ or a set of puzzles.” But a study done by Claude Steele a few years back illustrates otherwise. Steele, a professor of psychology at Stanford University who studies the effects of performance anxiety on standardized tests, gave Stanford sophomores questions from the Graduate Record Examination (GRE) and prompted them with one of two descriptions: that the test measured their innate intelligence or that it did not – he described it as just a preparatory drill to the latter group. He found, alarmingly, that white students outperformed black students when the test “measured innate intelligence,” but that all students performed virtually identically when the test was “just a drill.”

What’s going on here is termed stereotype threat, and it describes a “psychological state that people experience when they feel they are at risk of confirming a negative stereotype about a group to which they belong.” This means that black students performed worse when the questions measured “intelligence” because “worrying about confirming a negative stereotype uses up mental resources and triggers anxiety, which makes it harder to concentrate.” It does not apply only to racial stereotypes, either. Women, for example, perform worse on math tests when gender differences are highlighted. Likewise, they do worse in simulated driving tests that are designed to “study why men are better drivers than women.”

My favorite of these studies (though this one did not test stereotype threat directly) comes from Dutch researchers Ap Dijksterhuis and Ad van Knippenberg of the University of Nijmegen. They created two groups of participants and had them answer 42 difficult Trivial Pursuit questions. Here was the catch: one group was told to take five minutes to contemplate what it would be like to be a professor and write down everything that came to mind, while the other group was told to do the same for soccer hooligans. Subjects in the “professor” group answered correctly 55.6 percent of the time, while subjects in the “hooligans” group answered correctly only 42.6 percent of the time. What’s remarkable about this study is that priming the subjects not only activated associated traits (e.g., professor with smart and hooligan with dumb), it influenced behavior. That is, it didn’t make them more intelligent; it merely brought out the best in them.

So it turns out that it does matter what the SAT stands for and what it claims to test. An SAT that measures intelligence favors white students because it puts them in the “professor” mindset and disadvantages black students because it puts them in the “hooligan” mindset. None of this is intentional, of course, but that’s the larger problem.

All of these studies can be partially explained by how the brain stores information. While we tend to think of our memories like the memories of computers, where bits of information are neatly stored as time goes by, the reality is that human memory is far more associative. This means that whereas a computer simply shelves bits of information by date and name, humans store memories by date, name, place, smell, color and any other relevant bit of information that was present when the memory was stored. The result is a memory web, much like a spider web, where everything is associated (to different degrees) with everything else. Here’s a quick test to illustrate this point. Answer these questions out loud and quickly:

  1. What continent is Kenya in?
  2. What are the two opposing colors in the game of chess?
  3. Name any animal.

If you are like twenty percent of people, you just blurted out “zebra!” This is because your “zebra neurons” are associated with your “Africa neurons” and your “black and white neurons.” Moreover, when your Africa and black and white neurons are activated at the same time, your zebra neurons are much more excited than, say, your octopus neurons. This is why only one percent of people say zebra when they are asked to name an animal out of the blue. To be sure, neurons are not specific to objects or ideas (and there are different types of memory to be distinguished), but you get the point. Our memory is much more web-like and associative than the memory of a computer.
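To make the web metaphor a bit more concrete, here is a minimal sketch of spreading activation over a toy associative network. The concepts and association weights below are invented purely for illustration; they are not measurements of anyone’s actual semantic memory.

```python
# Toy spreading-activation model of an associative memory web.
# The association weights are made up for illustration only.

ASSOCIATIONS = {
    "africa":          {"zebra": 0.8, "lion": 0.9, "octopus": 0.0},
    "black and white": {"zebra": 0.9, "lion": 0.1, "octopus": 0.1},
    "animal":          {"zebra": 0.5, "lion": 0.5, "octopus": 0.5},
}

def activate(cues):
    """Sum the activation each cue spreads to every concept it is linked to."""
    totals = {}
    for cue in cues:
        for concept, weight in ASSOCIATIONS.get(cue, {}).items():
            totals[concept] = totals.get(concept, 0.0) + weight
    return sorted(totals.items(), key=lambda kv: kv[1], reverse=True)

print(activate(["africa", "black and white", "animal"]))
# "zebra" ends up with the highest total activation, ahead of "lion" and "octopus"
```

Activating the “Africa” and “black and white” cues together gives “zebra” the largest total activation, which is the toy version of why the quiz above so reliably produces that answer.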

This helps us understand stereotype threat. When Steele told his participants that his test was going to measure intelligence, the white students performed better because intelligence is more strongly associated with their race at the neuronal level. That is, “intelligence” primed them to perform better in the same way that Africa and chess primed you to say zebra. Of course, it is just a stereotype – no credible studies show that one race of people is more intelligent than another. But what’s concerning is that stereotype threat is difficult to reverse. Just as it is almost physically impossible not to think about zebras when you think about Africa, animals, and black and white, it is also difficult not to associate certain groups of people with certain traits and behaviors. As the neuroscientist Dean Buonomano explains, “the brain is well designed to form new links between concepts, but the converse is not true: there is no specific mechanism for ‘unlinking.’ My brain can adjust to the new turn of events by creating new links between Pluto and dwarf planet, Pluto and Kuiper Belt Object, or Pluto and not a planet. But the Pluto/planet link cannot be rapidly erased and will likely remain ingrained in my neural circuits for the rest of my life.”

Buonomano’s point also helps explain why it usually takes an entire generation for major societal and scientific shifts to take hold; the civil rights movement didn’t happen overnight, after all, and the Copernican revolution took even longer. But that is certainly not to say that it is fruitless to try to persuade older generations of what is better for society and more scientifically accurate. Just the opposite is true. It is only when large groups of people get together behind a single message that negative associations can be overturned for the better.

Hopefully, we can band together and get rid of the SAT altogether! ’Cause we all know how awful standardized tests are…

Ideas: What They Are & Where the Great Ones Come From

Here’s an easy quiz I stole from Dean Buonomano’s latest book Brain Bugs:

 Answer the first two questions below out loud, and then blurt out the first thing that pops into your mind in response to sentence 3:

1. What continent is Kenya in?

2. What are the two opposing colors in the game of Chess?

3. Name any animal

If you’re like 20% of people, you just blurted out “zebra.” Why? As Buonomano explains, zebras are intimately connected with Africa and with black and white because knowledge is stored in an associative manner. This means that when you learned about zebras, you also learned where they live and what color they are (Africa, and black and white), which caused the neurons associated with zebras to physically connect with the neurons associated with Africa and black and white.

Canadian psychologist Donald Hebb first developed a model to explain this. In his words:

When an axon of cell A is near enough to excite cell B and repeatedly or persistently takes part in firing it, some growth process or metabolic change takes place in one or both cells such that A’s efficiency, as one of the cells firing B, is increased.

Put simply, two neurons form synaptic connections when they are near each other and activate at the same time. In colloquial terms, “when cells fire together, they wire together.”
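As a rough sketch of Hebb’s principle, and not a model of real neurons, the snippet below uses the simplest possible version of the rule: a connection weight grows only when the two units it links are active at the same time. The unit names and learning rate are arbitrary choices for illustration.

```python
# Minimal Hebbian update: "cells that fire together wire together."
# A toy illustration of the principle, not a biophysical model.

def hebbian_update(weight, pre_active, post_active, learning_rate=0.1):
    """Strengthen the connection only when both units fire together."""
    return weight + learning_rate * pre_active * post_active

w = 0.0  # starting strength of the hypothetical zebra <-> Africa link
# Each time "zebra" and "Africa" are experienced together, both units fire (1)
# and the link grows; if either unit is silent (0), the weight is unchanged.
for _ in range(5):
    w = hebbian_update(w, pre_active=1, post_active=1)
print(w)  # roughly 0.5 after five co-activations
```

In this cartoon, repeated co-activation is what gradually wires zebra, Africa, and black and white together, which is why one cue so readily drags the others into mind.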

This general principle, that knowledge is associative, explains why you think about orange trees when you think of the fall, why you think about September 11th when you think about airplanes crashing into buildings, and why you think about water when you think about jet skiing. In each of these examples, one idea is activating an associated idea or ideas.

It also explains a lot of behavior. For example, I have written about Yale psychologist John Bargh, who demonstrated that reading words associated with oldness (Florida, gray, grandmother) causes people to walk slower, and that holding warm drinks causes people to assess others as friendly and trustworthy. Again, it is because our experiences with oldness correspond to slowness and our experiences with warmth correspond to affection that we react this way.

As we grow up, our understanding of the world becomes less and less flexible as our neural connections become more and more rigid. By the time we enter adulthood, our neurons have literally made up their mind about how the world works: beaches are associated with the ocean, dark clouds with rain, killing an innocent person with evil, and so on. Although this helps us navigate the world, there is a negative consequence to having everything so nicely categorized.

The more we come to know the world, the more difficult it is to think outside the box; it is easy to think about a concept or understand something in the way you learned it, but it is extraordinarily difficult to do the opposite. However, this is exactly what all great thinkers did: they physically broke rock-hard neural synapses, formed new ones, and understood the world better as a result. This is what makes an idea great – its ability to alter the most grounded neural connections.

How is this done?

I think there are two things necessary for a great idea. The first is a lot of time, focus, and diligence. As Steven Johnson, author of Where Good Ideas Come From, suggests, it is only when small observations are “incubated” over long periods of time that great ideas flourish. There are many cases of this: Darwin spent decades assiduously observing and taking notes on plants and animals before arriving at natural selection, and Newton and Einstein spent their entire careers formulating their groundbreaking theories. Pick your famous scientific or intellectual breakthrough, and you’ll find the same story. The second is other people. Rarely are novel ideas thought up in isolation. This is why the English coffee house was central to the Enlightenment; it was a place people could go to exchange ideas and create new ones.

The point is that Plato and Descartes had it backwards. Ideas aren’t isolated entities of non-material fluff; they are pockets of interconnected neurons that form with and from associated ideas. An idea, then, is something that best represents a vast but related collection of concepts. When we exchange ideas, we weigh and test this collection. And every once in a while, someone comes along and breaks this web, forcing us to rearrange our neurons. This is intellectual progress.

