
Political Empathy & Moral Matrices

It’s difficult to make objective predictions about our future selves. No matter how hard we try, we’re always influenced by the present. In one study, for example, researchers phoned people around the country and asked them how satisfied they were with their lives. They found that “when people who lived in cities that happened to be having nice weather that day imagined their lives, they reported that their lives were relatively happy; but when people who lived in cities that happened to be having bad weather that day imagined their lives, they reported that their lives were relatively unhappy.”

Similarly, a few years ago researchers went to a local gym and asked people who had just finished working out if food or water would be more important if they were lost in the woods. Like good social scientists, they asked the same question to people who were just about to work out. They found that 92 percent of the folks who just finished working out said that water would be more important; only 61 percent of people who were about to work out made the same prediction.

Physical states are difficult to transcend, and they often cause us to project our feelings onto everyone else. If I’m cold, you must be too. If I like the food, you should too. We are excellent self-projectors (or maybe that’s just me). Sometimes this uniquely human ability has more consequential downsides. And this brings me to a new study led by Ed O’Brien out of the University of Michigan, recently published in Psychological Science (via Maia Szalavitz).

The researchers braved the cold for the first experiment. They approached subjects at a bus stop in January (sometimes the temperature was as low as -14 degrees F) and asked them to read a short story about a hiker who was taking a break from campaigning when he got lost in the woods without adequate food, water, and clothing. For half of the subjects the lost hiker was a left-leaning, pro-gay-rights Democrat; the other half read about a right-wing Republican. Next, the researchers asked the subjects their political views and which feeling was most unpleasant for the stranded hiker – being thirsty, hungry, or cold. (For female participants, the hiker was described as female; for men, the hiker was male.) While these chilly interviews were being conducted, O’Brien and his team ran the same study in a cozy library. Did the two groups give different answers?

The first thing O’Brien found was consistent with the gym study: 94 percent of the people waiting for the bus said the cold was the most unpleasant feeling for the hiker, compared to only 57 percent of the library dwellers. Here’s where things got interesting: “If participants disagreed with the hiker’s politics… their own personal physical state had no bearing on their response: people chose the cold in equal numbers, regardless of where they were interviewed.” In other words, we don’t show as much empathy towards people who don’t share our political beliefs.

Their findings are disheartening given the current political climate in the United States. If we cannot empathize with someone who doesn’t share our political views, how are we supposed to engage in rational discourse with them? In order to work out our differences, it seems like we need to first recognize that we are the same deep down.

The larger problem is that compassion, empathy and moral sentiments towards other people binds and blinds. As one author says, “we all get sucked into tribal moral communities, circling around something sacred and then sharing post-hoc arguments about why we are so right and they are so wrong. We think the other side is blind to truth, reason, science, and common sense, but in fact everyone goes blind when talking about their sacred objects.”

How do we break out of our political matrices? Here’s one idea: let’s take the red pill and realize that we can’t all be right, while remembering that we all have something to contribute. This is what the Asian religions nailed on the head. Yin and Yang aren’t enemies; like night and day, both are necessary for the functioning of the world. Vishnu the preserver (who stands for conservative principles) and Shiva the destroyer (who stands for liberal principles), two of the high gods in Hinduism, cooperate to preserve the universe. It’s a cliché worth repeating: let’s work together to get along.


Is There Anything Wrong With Incest? Emotion, Reason and Altruism in Moral Psychology

Meet Julie and Mark, two siblings who are vacationing together in France. One night after dinner and a few bottles of wine, they decide to have sex. Julie is on the pill and Mark uses a condom so there is virtually no chance that Julie will become pregnant. They enjoy it very much but decide to never tell anyone or do it again. In the end, having sex brought them together and they are closer than ever.

Did Julie and Mark do anything wrong?

If incest isn’t your thing, your gut reaction is probably yes – what Julie and Mark did is wrong. But the point of Julie and Mark’s story, which was created by University of Virginia professor of social psychology Jonathan Haidt, is to illustrate how easy it is to feel that something is wrong and how difficult it is to justify why it is wrong. This is what happens when Haidt tells the Julie and Mark story to his undergrads. Some say that incest causes birth defects, or that Julie and Mark will cause pain and awkwardness to friends and family, but birth control and secrecy ensured that none of these problems will occur. Students who press the issue eventually run out of reasons and fall back on the notion that it is “just wrong.” Haidt’s point is that “the emotional brain generates the verdict. It determines what is right and what is wrong… The rational brain, on the other hand, explains the verdict. It provides reason, but those reasons all come after the fact.”

So the question is: when it comes to our moral sentiments and deliberations, what system is in charge, the rational one or the emotional one?

The reason-emotion debate runs throughout the field of moral psychology. On the one hand, cognitive science clearly shows that emotion is essential to our rationality; on the other, psychologists argue over whether reason really is the “slave of the passions,” as David Hume suggested. Haidt tends to take the latter position (and this is what the incest debate illustrates), but psychologists such as Paul Bloom and Steven Pinker believe that reason can persuade our emotions; this, they argue, is why we have moral progress.

Neuroscience is weighing in too. It demonstrates that we use different parts of the brain when we think deliberately versus when we go with our guts. As one author explains, “subjects who choose [rationally] rely on the regions of the brain known as the dorsolateral prefrontal cortex and the posterior parietal cortex, which are known to be important for deliberative reasoning. On the other hand, people who decide [with their guts] rely more on regions of the limbic cortex, which are more closely tied to emotion.”

So which system sets the agenda, the intuitive one or the rational one? Should I go with my gut as Gladwell advertises? Or would that lead me into predictably irrational mistakes as Ariely warns? Should I listen to my unconscious as Gerd Gigerenzer and Timothy Wilson suggest? Or, as the Invisible Gorilla folks advise, should I take note of how intuitions deceive us? And finally, will we ever know if anything is objectively wrong with incest?

Moral psychology is young, as are the relevant neuroscience and evolutionary psychology studies, so I hesitate to draw any conclusions here. So what about more general moral feelings? Are they nature, nurture, or somewhere in between? Thanks to several recent studies we now have some answers.

One experiment, which I briefly mentioned a couple of months ago, comes from Paul Bloom, Kiley Hamlin, and Karen Wynn. Bloom summarizes:

In one of our first studies of moral evaluation, we decided… to use… a three-dimensional display in which real geometrical objects, manipulated like puppets, acted out the helping/hindering situations: a yellow square would help the circle up the hill; a red triangle would push it down. After showing the babies the scene, the experimenter placed the helper and the hinderer on a tray and brought them to the child. In this instance, we opted to record… which character they reached for, on the theory that what a baby reaches for is a reliable indicator of what a baby wants. In the end, we found that 6- and 10-month-old infants overwhelmingly preferred the helpful individual to the hindering individual.

Does this mean that we are born with a moral code? No, but it does suggest that we have a sense of compassion and favor those who are altruistic from very early on.

Another experiment comes from Marco Schmidt and Jessica Sommerville. Schmidt and Sommerville showed 15-month-old babies two videos, one in which an experimenter distributes an equal share of crackers to two recipients and another in which the experimenter distributes an unequal share of crackers (she also did the same procedure with milk). Then, they measured how the babies looked at the crackers and milk while they were distributed. According to the “violation of expectancy” paradigm, babies pay more attention to something when it surprises them. This is exactly what they found: babies spent more time looking when one recipient got more food than the other.

What does this suggest? According to the researchers, “the infants expected an equal and fair distribution of food, and they were surprised to see one person given more crackers or milk than the other.” This doesn’t mean that the babies felt something was morally wrong, but it does mean that they noticed something wasn’t equal or fair.

Schmidt and Sommerville followed up the experiment with another. In the second, they offered the babies two toys, a LEGO block and a LEGO doll. They labeled whichever toy each baby chose as its preferred toy. Then an experimenter asked the baby if he could have the preferred toy. They found that about one-third of the babies gave away their preferred toy, another third gave away the toy that wasn’t preferred, and the last third didn’t share at all. They also found that 92 percent of the babies who shared their preferred toy spent considerably more time looking when the food was unequally distributed; 86 percent of babies who shared their less-preferred toy were more surprised when there was an equal distribution of food. In other words, the altruistic sharers (those who gave the preferred toys away) noticed more when the crackers and milk weren’t distributed equally, while the selfish sharers (those who gave the less-preferred toys away) showed the opposite.

Taken together, Bloom’s and Schmidt and Sommerville’s work suggests that our moral instincts form early on. But these two studies are just a tiny sampling. It is still difficult to say with certainty whether we are born with a moral instinct, and it is also difficult to say what such an instinct entails.

Back to incest.

To be sure, evolutionary psychology easily explains why we morally reject incest – obviously, reproducing with our siblings would be counterproductive – but many other topics, such as why we act altruistically, why we show compassion towards strangers, and why we give to charity, remain fairly mysterious. Fortunately, moral psychology is making great progress. It is an exciting new field and I look forward to more findings like the ones outlined here. In addition, I hope that one day in the near future psychologists will come to a consensus regarding the emotion-reason debate.

