Executives, like most other people, have long believed that human beings are interested only in advancing their material interests.
However, recent research in evolutionary biology, psychology, sociology, political science, and experimental economics suggests that people behave far less selfishly than most assume. Evolutionary biologists and psychologists have even found neural and, possibly, genetic evidence of a human predisposition to cooperate.
These findings suggest that instead of using controls or carrots and sticks to motivate people, companies should use systems that rely on engagement and a sense of common purpose.
Several levers can help executives build cooperative systems: encouraging communication, ensuring authentic framing, fostering empathy and solidarity, guaranteeing fairness and morality, using rewards and punishments that appeal to intrinsic motivations, relying on reputation and reciprocity, and ensuring flexibility.
Artwork: Geoffrey Cottenceau and Romain Rousset, Flamme, 2009
In 1976, evolutionary biologist Richard Dawkins wrote in The Selfish Gene, “If you wish, as I do, to build a society in which individuals cooperate generously and unselfishly towards a common good, you can expect little help from biological nature. Let us try to teach generosity and altruism, because we are born selfish.” By 2006, the tide had started to turn. Harvard University mathematical biologist Martin Nowak could declare, in an overview of the evolution of cooperation in Science magazine, “Perhaps the most remarkable aspect of evolution is its ability to generate cooperation in a competitive world. Thus, we might add ‘natural cooperation’ as a third fundamental principle of evolution beside mutation and natural selection.”
Why is this deep-rooted belief about human selfishness beginning to change? To some extent, the answer is specific to evolutionary biology. But similar ideas challenging the notion that people are born selfish have surfaced in several other fields, such as psychology, sociology, political science, and experimental economics. Together, these ideas are tracing a new intellectual arc in the disciplines concerned with human action and motivation.
Until the late 1980s, our understanding of what made people tick was marked by the rise of an ever more precisely defined model of self-interested rationality—the rational actor theory—which provided the basis for thinking about human behavior, institutions, and organizations. Assuming that we are uniformly rational and concerned only with advancing our material interests provided good enough predictions about our behavior—or so we thought—and convinced us that we are best off designing systems as though we are selfish creatures. Moreover, people who don’t cooperate can ruin things for everyone, so to save ourselves from freeloaders we built systems by assuming the worst of everyone.
Nowhere are the assumptions about the effective harnessing of self-interest, and their terrible consequences, expressed more clearly than in former Federal Reserve chairman Alan Greenspan’s 2008 testimony before Congress after the collapse of the banking and credit system. “Those of us who have looked to the self-interest of lending institutions to protect shareholders’ equity—myself, especially—are in a state of shocked disbelief,” Greenspan said. “I’ve been going for 40 years or more with very considerable evidence that it was working exceptionally well.”
The widespread conviction about the power of self-interest is based on two long-standing, partly erroneous, and opposing assumptions about getting people to cooperate. One of them inspired the philosopher Thomas Hobbes’s Leviathan in 1651: Humans are fundamentally and universally selfish, and governments must control them so that they don’t destroy one another in the shortsighted pursuit of self-interest. The second is Adam Smith’s alternative solution: the invisible hand. Smith’s 1776 book, The Wealth of Nations, argued that because humans are self-interested and their decision making is driven by the rational weighing of costs and benefits, their actions in a free market tend to serve the common good. Though their prescriptions are very different, both the Leviathan and the invisible hand have the same starting point: a belief in humankind’s selfishness.
Models of self-interested rationality increasingly came to be seen as universally correct and applicable across an ever-expanding range of human practices. Economics became the primary medium of expression. For example, Nobel laureate Gary Becker argued in 1968 that the calculus of criminals is best understood as a set of rational trade-offs between the benefits of crime and the costs of punishment, discounted by the probability of detection. Imposing harsher punishments and increasing police enforcement, people concluded, are the obvious ways to tackle crime. The same year, Garrett Hardin described the tragedy of the commons—the parable about farmers who shared a piece of land with no restrictions on the number of cattle each could graze on it. They kept letting more cattle graze on the commons until the grass was gone, leaving nothing for anyone. No one stopped grazing animals, Hardin argued, for fear of losing out to the other farmers, who would continue overexploiting the commons. The conclusion was that as self-interested actors, human beings will inevitably destroy shared resources unless those resources are subject either to regulation or to property rights.
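Becker’s logic can be captured in a single inequality. Below is a minimal Python sketch of that expected-cost reasoning; the function and the numbers are illustrative assumptions of mine, not values or notation from Becker’s paper.

```python
# A minimal sketch of Becker's expected-cost reasoning (an illustration,
# not Becker's own notation): a rational offender commits a crime only when
# the gain exceeds the punishment discounted by the odds of getting caught.

def commits_crime(gain: float, punishment: float, p_detection: float) -> bool:
    """Return True if the expected gain exceeds the expected cost."""
    return gain > p_detection * punishment

# Illustrative numbers: raising either the penalty or the enforcement
# level flips the decision, which is exactly the policy conclusion
# people drew from the model.
print(commits_crime(gain=1_000, punishment=10_000, p_detection=0.05))  # True
print(commits_crime(gain=1_000, punishment=10_000, p_detection=0.20))  # False
```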
Like biology, however, the discipline of economics has changed over the years. In 2009, Elinor Ostrom was awarded the Nobel Prize in economics for showing how commons can—and do—sustain themselves for centuries as well-functioning systems. The most striking example is in Spain, where thousands of farmers have been managing their access to water through self-regulated irrigation districts for more than five centuries. To take another example, 75% of U.S. cities with populations of more than 50,000 have successfully adopted some version of community policing, which reduces crime not by imposing harsher penalties but by humanizing the interactions of the police with local communities.
Overcoming our assumptions about self-interest is critical to diagnosing the risks that new business rivals pose. In 1999, two experts showed how Microsoft’s entry into the encyclopedia market with Encarta symbolized the transformation made possible by networked information economics. Here was a major player leveraging a powerful position, gained by early-mover advantages and network effects, to bundle a product and distribute it widely at a low cost. Britannica’s lumbering 32-volume, multi-thousand-dollar offering didn’t stand a chance. Ten years later, Britannica had been pushed to a different model—but not by Encarta. Microsoft stopped producing Encarta in 2009 because of competition from a business model that is inconceivable according to the belief in self-interested rationality: Wikipedia.
If you feel that Wikipedia—the seventh or eighth most trafficked website, with more than 300 million visitors a month—is unique, ask Zagat how user-generated Yelp has affected its market or Fodor’s what it thinks about TripAdvisor. The rise of open source software is an example of the same dynamic. For more than 15 years, companies have used open source Apache software for mission-critical web applications, with Microsoft’s server software trailing a distant second. Companies such as Google, Facebook, and Craigslist have also found ways to become profitable by engaging people. Our old models of human behavior did not—could not—predict that.
The way these organizations work flies in the face of the assumption that human beings are selfish creatures. For decades, economists, politicians, legislators, executives, and engineers have built systems and organizations around incentives, rewards, and punishments to get people to achieve public, corporate, and community goals. If you want employees to work harder, incorporate pay for performance and monitor their results more closely. If you want executives to do what’s right for shareholders, pay them in stock. If you want doctors to look after patients better, threaten them with malpractice suits.
Yet, all around us, we see people cooperating and collaborating, doing the right thing, behaving fairly, acting generously, caring about their group or team, and trying to behave like decent people who reciprocate kindness with kindness. The adoption of cooperative systems in many fields has been paralleled by a renewed interest in the mechanics of cooperation among researchers in the social and behavioral sciences. Through the work of many scientists, we have begun to see evidence across several disciplines that people are in fact more cooperative and selfless—or behave far less selfishly—than we have assumed. Perhaps humankind is not so inherently selfish after all.
Dozens of field studies have identified cooperative systems, many of which are more stable and effective than incentive-based ones. Evolutionary biologists and psychologists have found neural and possibly genetic evidence of a human predisposition to cooperate, which I shall describe below. After years of arguments to the contrary, there is growing evidence that evolution may favor people who cooperate and societies that include such individuals.
In fact, a distinct pattern has emerged. In experiments about cooperative behavior, a large minority of people—about 30%—behave as though they are selfish, as we commonly assume. However, 50% systematically and predictably behave cooperatively. Some of them cooperate conditionally; they treat kindness with kindness and meanness with meanness. Others cooperate unconditionally, even when it comes at a personal cost. (The remaining 20% are unpredictable, sometimes choosing to cooperate and other times refusing to do so.) In no society examined under controlled conditions have the majority of people consistently behaved selfishly.
That’s perhaps why using controls or carrots and sticks to motivate people isn’t effective. We need systems that rely on engagement, communication, and a sense of common purpose and identity. Most organizations would be better off helping us to engage and embrace our collaborative, generous sentiments than assuming that we are driven purely by self-interest. In fact, systems based on self-interest, such as material rewards and punishment, often lead to less productivity than an approach oriented toward our social motivations.
The challenge we face today is to build new models based on fresh assumptions about human behavior that can help us design better systems. This shift requires a more benevolent image of who we are as human beings. No, we are not all Mother Teresa; if we were, we wouldn’t have heard of her. However, a majority of human beings are more willing to be cooperative, trustworthy, and generous than the dominant model has permitted us to assume. If we recognize that, we can build efficient systems by relying on our better selves rather than optimizing for our worst. We can do better.
The Science of Cooperation
What would the world be like if some people consistently operated as self-interested rational actors while others did not? Take the experiments that Lee Ross and his colleagues conducted with American college students and Israeli fighter pilots. As we know, in prisoner’s dilemma games, the two players will both be better off if they cooperate, but neither can trust the other to do so. Game theory predicts that both players will choose not to cooperate instead of taking the risk of losing out by cooperating. Extensive experimental work, however, has shown that people actually cooperate more than the theory predicts.
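To make the game theorists’ prediction concrete, here is a minimal Python sketch of a one-shot prisoner’s dilemma. The payoff numbers are standard textbook illustrations, not values from Ross’s experiments; the point is that defection pays more whatever the other player does, even though mutual cooperation beats mutual defection.

```python
# A one-shot prisoner's dilemma with textbook payoffs (illustrative numbers,
# not from Ross's experiments). Each entry maps (my move, other's move)
# to (my payoff, other's payoff).

PAYOFF = {
    ("cooperate", "cooperate"): (3, 3),  # reward for mutual cooperation
    ("cooperate", "defect"):    (0, 5),  # sucker's payoff vs. temptation
    ("defect",    "cooperate"): (5, 0),
    ("defect",    "defect"):    (1, 1),  # punishment for mutual defection
}

# Whichever move the other player makes, defecting earns strictly more:
for other in ("cooperate", "defect"):
    coop_payoff = PAYOFF[("cooperate", other)][0]
    defect_payoff = PAYOFF[("defect", other)][0]
    print(f"other plays {other}: cooperate={coop_payoff}, defect={defect_payoff}")

# Output: cooperate=3 < defect=5, and cooperate=0 < defect=1.
# Defection is the dominant strategy, so theory predicts (1, 1),
# even though both players would prefer (3, 3).
```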
Ross and his collaborators told half the players in their experiments that they were playing the Community Game and the other half that they were playing the Wall Street Game. The two groups were identical in all other respects. Yet, in the Community Game group, 70% started out playing cooperatively and continued to do so throughout the experiment. In the Wall Street Game group, the proportions were reversed: 70% of the players didn’t cooperate with one another. Thirty percent started out playing cooperatively but stopped when the others didn’t respond.
This experiment illustrates a couple of points. One, we are not all the same. About 30% of players cooperated even in the Wall Street Game while another 30% acted with self-interested rationality even when told they were in the Community Game. Two, many of us are influenced by context. According to Ross, the framing of the games influenced 40% of the sample. The players who thought they were acting in a context that rewarded self-interest behaved in a manner consistent with that expectation; participants who felt they were in a situation that demanded a prosocial attitude conformed to that scenario. When Ross and his colleagues asked the subjects’ teachers or commanders to predict who would and wouldn’t cooperate, it turned out that the game’s framing forecast behavior better than the teachers and commanders could. It seemed that participants who were seen as self-interested could be induced to cooperate if the games they were playing were reframed.
Anyone designing a cooperative system—be it an organizational process, a legal regime, or a technical platform—and optimizing it for only 30% of the population leaves on the table massive amounts of human potential. Moreover, such systems have to rely on monitoring, rewards, and punishments; their efficiency is limited by information-gathering techniques. Systems that harness intrinsic motivations and self-directed cooperative behavior don’t need to limit themselves to knowledge of what people will do. Every participant becomes his or her own monitor, bringing insight and initiative to the task—whether or not someone is monitoring behavior.
What might account for human cooperation? The first generation of explanations in evolutionary biology began with the theory of kin selection, which predicts that human beings will incur costs only to save others who carry their genes, such as siblings and cousins. Evolutionary biologist J.B.S. Haldane put it in less than romantic terms: “I will jump into the river to save two brothers or eight cousins.” That explained the cooperative behavior in ant and bee colonies as well as in smaller family groups. From there, it was a small hop to accepting reciprocity between individuals not genetically related as an important source of cooperation: “I’ll scratch your back if you immediately scratch mine.”
However, these theories still could not explain field observations in the wild, such as those of coyotes and badgers in the National Elk Refuge in Wyoming. Scientists there observed that the two groups of animals collaborated to hunt ground squirrels. Coyotes, which are faster and have a larger range, would scout, and once they spotted a squirrel, they would signal to the badgers. The badgers, which are underground hunters and catch their prey by trapping it in dead-end tunnels, would then burrow and lie in wait. The squirrels were trapped between a hammer and an anvil: If they escaped the badgers by going above ground, the coyotes would catch them. If they evaded the coyotes by ducking below ground, the badgers would corner them. At the end of a hunt, only one or the other would eat the squirrel, but still the badgers and coyotes collaborated.
Over the years, researchers have developed models of indirect reciprocity, network reciprocity, and even group selection to explain observations of looser and more remote cooperation. The findings in biology meet human society directly in the work on gene-culture coevolution of anthropologists Peter Richerson and Robert Boyd. They have been gathering evidence for the proposition that cultural practices, too, are subject to evolutionary pressures and that human individuals and cultures evolve toward more-successful strategies.
Imagine two groups. In one, the practice of serving in the army is valued; in the other, it’s not. In the first group, people are willing to fight and risk their lives for their group or donate special skills such as weapon making or intelligence gathering. In the second, they aren’t. If these two groups go to war, the outcome will never be in doubt. And populations don’t have to wait until genetic changes disperse these traits; they can copy one another’s best practices if they seem to work better.
Boyd and Richerson argue that cultures evolve not only through the copying of practices but also through genetic changes; in other words, genes and cultures coevolve. Cultural practices can influence the genetic development of populations that adopt them, favoring genetic predispositions that benefit most from the cultural practice or make following it easier. The researchers’ most physiologically striking example is adult lactose tolerance, which is widespread among the descendants of European populations that historically drank fresh milk but rare among the descendants of populations that processed milk into yogurts and cheeses, which break down the lactose, before consuming it. Lactose tolerance is a genetic trait, but it can be attributed to a cultural practice—drinking milk rather than eating yogurt or cheese—that has existed for a very short time in evolutionary terms.
What might the genetic components of a cooperative culture look like? Political scientists such as James Fowler and his collaborators found that the decision to vote has a strong genetic component. In a 2008 paper in the American Political Science Review, they described their analysis of the voting behavior of 400 identical and nonidentical twins in the Los Angeles area. All the twins in the study were raised together, which meant that differences in early upbringing, socioeconomic status, and political affiliation couldn’t confound the results. The study found that identical twins were more likely to show the same behavior—either vote or not vote—than were nonidentical twins. Statistical analyses conducted by Fowler and his collaborators suggest that slightly more than 50% of the concordance in behavior was due to genetics.
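As a rough illustration of how such an estimate can be read off twin data, the classic Falconer method infers heritability from how much more alike identical twins are than fraternal twins. The sketch below is not Fowler’s actual model (his team fit a fuller variance decomposition), and the correlation values are hypothetical, chosen only to show the arithmetic.

```python
# Falconer's estimate of heritability from twin correlations (a rough
# illustration, not Fowler's actual statistical model). Identical (MZ) twins
# share ~100% of their genes, fraternal (DZ) twins ~50% on average, so
# doubling the gap in similarity approximates the genetic share of variance.

def falconer_heritability(r_mz: float, r_dz: float) -> float:
    """h^2 = 2 * (r_MZ - r_DZ)."""
    return 2 * (r_mz - r_dz)

r_mz = 0.71  # hypothetical turnout similarity of identical twins
r_dz = 0.45  # hypothetical turnout similarity of fraternal twins

print(f"estimated heritability: {falconer_heritability(r_mz, r_dz):.0%}")
# -> 52%, i.e. "slightly more than 50%" of the variation traced to genes
```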
How could genes account for a practice that has been widespread in the modern world for only around 100 years? That’s a blip on the evolutionary radar; a gene for voting couldn’t possibly have evolved in so short a time. Besides, voting is a puzzle for the rational actor model. The probability that an individual’s vote will affect the outcome of a policy he or she cares about is infinitesimally small, so much so that any cost, including a 15-minute detour, should outweigh it. Still, hundreds of millions of us around the world violate self-interested rationality in public every year. We vote.
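A back-of-the-envelope calculation shows why voting is a puzzle for the rational actor. The numbers below are illustrative assumptions of mine, not figures from the article, but almost any plausible values yield the same verdict.

```python
# The rational-actor voting calculus, with illustrative numbers.
p_pivotal = 1e-7      # generous odds that one vote decides a large election
benefit = 10_000.0    # dollar value the voter attaches to the preferred outcome
cost = 5.0            # value of the 15-minute detour to the polling place

expected_gain = p_pivotal * benefit   # 0.001 dollars
print(expected_gain > cost)           # False: the rational actor stays home
```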
What on earth does the propensity to vote have to do with collaboration, you might ask? Imagine there are such things as personality traits, which typify an individual’s behavior. Imagine that one such trait is conscientiousness. People who have that trait—in personality psychology, it is one of the Big Five—tend to be happier with themselves and do what they think is right according to the cultural context. Voting happens to be one way—a relatively inexpensive way—of making conscientious people comfortable in their own skin.
Now let’s bring Boyd and Richerson’s theory back into the equation. Imagine that over the millennia, some cultures rewarded and valued conscientiousness. In those cultures, people who had a genetic predisposition to be conscientious would thrive. Because they would be considered desirable mates, they would reproduce relatively more often, meaning there would be more of them over time. The cultures, in turn, would be able to sustain cooperation more effectively because people would be driven to do the right thing even when they weren’t directly monitored, punished, or rewarded.
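A toy simulation can show how even a small mating advantage compounds across generations. This is my own construction to illustrate the dynamic Boyd and Richerson describe, not a model from their work, and the 5% fitness edge and starting share are arbitrary assumptions.

```python
# Replicator-style toy model: a conscientious trait with a small assumed
# reproductive advantage spreads through the population over generations.

fitness = {"conscientious": 1.05, "other": 1.00}  # assumed 5% mating advantage
share = 0.10  # assumed initial share of conscientious individuals

for generation in range(1, 201):
    conscientious = share * fitness["conscientious"]
    other = (1 - share) * fitness["other"]
    share = conscientious / (conscientious + other)  # renormalize shares
    if generation % 50 == 0:
        print(f"generation {generation}: {share:.1%} conscientious")

# generation 50: ~56%, generation 100: ~94%, generation 150: ~99%,
# generation 200: ~99.9%
```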
Several studies suggest that personality traits are partly heritable. A few years ago, Thomas Bouchard and Matt McGue published an extensive review of twin, adoption, and biological studies that looked at genetic influences on psychological and personality differences. They concluded that personality traits such as extraversion, neuroticism, agreeableness, and openness were on average between 42% and 57% heritable, while shared environmental factors such as the home, which most people believe are major influences, did not correlate with personality.
The biology of cooperation draws our attention because it speaks with the authority of the most reliable way we know how to know: science. If we simply say the word empathy, it sounds mushy. If a scientist like Tania Singer shows, using fMRI scans, that women’s brains light up in three places when they get electric shocks, and that when their partners are shocked, their brains light up in two of the same three places, we understand empathy not as a hard-to-define feeling but as something that people experience in a physical sense. This phenomenon was originally discovered by neurophysiologist Giacomo Rizzolatti, who also found that our brains mirror not only pain and motor movements but pure emotions as well. When Rizzolatti and his colleagues showed subjects videos in which people were expressing disgust on their faces, the same neurons fired in the subjects’ brains as the ones that had been activated when they themselves were exposed to disgusting smells. Cognitively and emotionally, we may be able to “feel” what others are feeling.
Neuroscience also shows that a reward circuit is triggered in our brains when we cooperate with one another, and that provides a scientific basis for saying that at least some people want to cooperate, given a choice, because it feels good. Kevin McCabe and his collaborators have shown that people are rewarded when they trust others; James Rilling and his team have demonstrated that our brains light up differently when we are playing with another human being than they do when we are using a computer.
As we learn more about the biology of behavior, we are gaining a better grasp of the role that genes play in interactions with culture. The ability to trust is a key element in cooperation. It appears to have a biological component, suggesting it may even have a genetic basis. One animal study recently looked at the effects of the brain chemical oxytocin on trust formation in voles. The researchers compared monogamous prairie and pine voles with promiscuous mountain and meadow voles. They found that the monogamous voles had higher-density oxytocin receptors in many areas of the brain than did the polygamous voles; more-trusting partnerships formed between the animals whose brains had better oxytocin uptake. Researchers later found that when human beings were given an oxytocin nasal spray, they, too, were more likely to trust their partners.
We are far from having a clear model that connects all these dots; what I offer is conjecture based on research that has not made the leap to claim those connections. However, the argument is suggestive; it gives us a framework to grapple with the idea that many of us are, by a combination of nature, nurture, and the interactions between us, much better and less selfish than our standard models predict, as philosophers such as Jean-Jacques Rousseau and David Hume have argued. In fact, it brings the centuries-old debate between Hobbes and Rousseau—or between the Adam Smith of The Wealth of Nations and the Adam Smith of The Theory of Moral Sentiments—to the present, with genetics and fMRI studies thrown in as fresh evidence. Over the past decade, Rousseau seems to have gained the advantage over Hobbes.