
We cooperate to survive. But, if no one’s looking, we compete

Reading classic works in evolutionary biology is unlikely to make you optimistic about human nature. From Charles Darwin’s The Descent of Man (1871) onwards, biologists have held the fundamental understanding that organisms, humans included, evolved to maximise self-interest. We act to promote our own success or that of our family. Niceness, by contrast, is a mirage, and morality more broadly an illusion. Sociobiology – the infamous movement of the second half of the 20th century – forced us to confront the cold, calculating logic of biological evolution.

More recently, however, anthropologists and psychologists have pushed back against this pessimistic view. Dozens of books over the past decade have focused on human cooperation, promoting it as the secret ingredient to our conquest of the planet. We work together, using our intelligence, language and a diverse skillset to build complex cultures, develop technologies, and solve problems in our societies and environments. We learn at a young age what the rules of our groups are, and those rules, imprinted on us culturally, govern the safe, cohesive units that allowed us to conquer inhospitable parts of the world and out-compete rival groups that don’t work so well together.

This narrative saves us the embarrassment of accepting that biological selfishness – acting only to maximise our Darwinian success – is the foundation of all behaviour. It also matches some claims by anthropologists that ancient humans were egalitarian, living in small groups with little permanent rank, where leaders (if any) had limited authority and people collectively pushed back against anyone trying to dominate.

Yet, as with sociobiology, it is only half true. Our collective predilection for exploitation, deceit and competition is as important as cooperation in the story of human evolution. We evolved not to cooperate or compete, but with the capacity for both – and with the intelligence to hide competition when it suits us, or to cheat when we’re likely to get away with it. Cooperation is consequently something we need to promote, not presume.

The modern dispute about whether humans are fundamentally cooperative or competitive dates back to the publication of Mutual Aid (1902) by Pyotr Kropotkin, an anarchist who took his views about human nature from observing animals helping each other in the inhospitable wilds of Siberia. Kropotkin believed that only through interdependence can any species survive the omnipresent dangers of predation, violence and a harsh environment. Like so many other species – fish, flesh and fowl – we work together to survive and reproduce.

On the surface, Kropotkin’s views are at odds with those of Darwin, who championed the individual struggle for survival and mating as the fundamental driver of evolution by natural selection. The twin pillars of competing for survival and competing for mates – natural and sexual selection, respectively – were, for Darwin, the foundations of biological life. For Kropotkin and his colleagues, by contrast, the emphasis was on how individuals acted for the good of the species: mutual aid meant a better, safer life for everyone.

Today, the debate is substantially the same, though the language and tools we use to make our points are different. Experiments conducted by anthropologists and psychologists across the world evaluate how cooperatively people behave in a multitude of conditions, with obvious ideological battle lines between those who espouse a self-interested versus a beneficent model of human nature.

For example, in one famous study from 2001, anthropologists worked with 15 different small-scale societies to see how they behaved in an economic experiment called the ultimatum game. In this game, the researcher gives one player a set amount of money – in this case, the local value of one or two days’ worth of wages. That player then chooses an amount of the money to offer to the second player, who may either accept or decline. In the case of acceptance, the players receive the amounts of money agreed upon; in the case of rejection, both receive nothing.

We are thought to treat each other more fairly than you’d expect using a cold economic calculus

In a calculated world governed only by self-interest, we’d expect the first player to offer the smallest possible amount, and the second player to accept any offer. Something is better than nothing, however unfair the split.
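That cold calculus can be sketched in a few lines (an illustrative toy, not code from the 2001 study; the pot size and offer values here are made up):

```python
def ultimatum_payoffs(pot, offer, accepted):
    """Return (proposer, responder) payoffs in the ultimatum game.

    The proposer offers `offer` out of `pot`; if the responder
    rejects, both players walk away with nothing.
    """
    if not accepted:
        return (0, 0)
    return (pot - offer, offer)

# A purely self-interested responder accepts any positive offer:
# something is better than nothing.
print(ultimatum_payoffs(100, 1, accepted=True))   # proposer keeps 99
print(ultimatum_payoffs(100, 1, accepted=False))  # rejection costs both
```

The puzzle the experiments raise is precisely that real responders reject low offers, and real proposers anticipate this.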

Of course, the participants in the small-scale societies didn’t play the game in this way. The offers were almost never lower than 25 per cent of the overall pot, and in some groups, such as the Aché people of Paraguay and the Lamalera people of Indonesia, the offers were often greater than half the total amount.

Some scientists, notably the economist Ernst Fehr, used this outcome to defend the idea that humans are ‘inequity averse’ – that is, we are a species that almost universally dislikes unfairness. (‘Prosociality’ is also a term you see in the literature a lot.) As a consequence of this alleged collective aversion, we are thought to treat each other more fairly than you’d expect using a cold economic calculus.

These ideas have broadened out into a modern theory of super-cooperation, with a caveat: instead of the ‘good of the species’ view advocated by Kropotkin, researchers focus on how people behave within groups. We learn to cooperate within groups because we depend on one another for survival: reciprocal relationships are essential when anyone meets with failure in hunting, gathering or agriculture. Need-based transfers – where people ask from others only when they need help, for example when their own crops fail – characterise small-scale societies across the world.


Local norms determining how people cooperate spread through social learning. So, while need-based transfer is a common practice worldwide, its appearance is determined by the culture in question. Osotua (which translates to ‘umbilical cord’) is a bond linking two Maasai people of Kenya and Tanzania in lifelong interdependence. Betrayal of osotua is reportedly unheard of, and a person’s descendants can even inherit a family member’s bond with another.

According to this way of thinking, groups that cooperate more effectively out-compete groups that don’t. This is part of a broader process called cultural group selection, the modern-day version of the mutual aid concept that Kropotkin championed more than a century ago. Except, instead of acting for the good of our species, we act for the good of our groups. Interdependence breeds loyalty, the hypothesis holds.

If the notion of cultural group selection bears out, then the problems we see in the world today should be seen as a consequence of friction between groups, not within them. Issues like international conflicts would derive from differences in social norms and values, rather than because of a missing commitment to prosociality shared by all group members. Cultural group selection encourages us to look for problems outside rather than within.

But the idea starts to look shaky upon closer inspection. Polly Wiessner, an anthropologist who has worked with the Ju/’hoansi of the Kalahari for decades, described what happened when she ran similar experiments herself. In setting up the games, she made clear to volunteers that she was acting on behalf of someone else – the well-known economist Ernst Fehr – that the interest was his, not hers, and that, whatever they did, there would be no consequences at all. She wrote:

A few asked me once more if it was really true that their identity would not be revealed; with confirmation, they slid more coins, one by one, over to their own sides. Occasionally the subject would hesitate and say: ‘Are you sure you are not deceiving me?’

For Wiessner, the point wasn’t that the Ju/’hoansi were uniquely selfish; it was that the experiment created a social situation unlike everyday life. Put someone in a game where identities are hidden and consequences are explicitly ruled out, and you remove many of the ordinary pressures that govern cooperation – reputation, ongoing relationships, the possibility of retaliation, the cost of being seen to take too much. What you end up measuring, in other words, is not ‘how cooperative this person is’, but how they behave in a stripped-down context where cooperation and betrayal carry very different risks.

Cooperating is not the same thing as being a cooperator

That basic insight runs through decades of work on the biology of cooperation. Even the earliest mathematical models that made reciprocity central to human social life treated betrayal as context-dependent: defection becomes attractive when there’s little chance of future interaction, when the other person can’t meaningfully respond, or when your reputation is unlikely to suffer. Cooperation, from this perspective, isn’t something we can simply assume; it’s something social life must make possible – and worth sustaining.

Over the 1970s, ’80s and arguably ever since, thousands of computer models purporting to explain how and why people cooperate have missed this point. Most often, researchers have explored how cooperation evolves in the Prisoner’s Dilemma. In the simplest form of this game, two players may choose to cooperate or defect. While mutual cooperation is mutually beneficial, and mutual defection is mutually damaging, defecting against a cooperative partner is the individual optimum – and cooperating against a defector yields the worst possible payoff. (The game is called the ‘Prisoner’s Dilemma’ because the theoretical scenario is one where two criminals are separately asked by the police to inform on one another. If you inform on your partner, you get a much lighter sentence.)
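The payoff structure just described can be written down directly. The numbers below are the conventional textbook values, not from any particular study; only their ordering matters:

```python
# Canonical Prisoner's Dilemma payoffs. Each entry maps
# (my move, other's move) to (my payoff, other's payoff),
# where "C" = cooperate and "D" = defect.
PAYOFFS = {
    ("C", "C"): (3, 3),  # mutual cooperation: mutually beneficial
    ("C", "D"): (0, 5),  # cooperating against a defector: worst payoff
    ("D", "C"): (5, 0),  # defecting against a cooperator: individual optimum
    ("D", "D"): (1, 1),  # mutual defection: mutually damaging
}

def my_payoff(me, other):
    """My payoff for a single round of the dilemma."""
    return PAYOFFS[(me, other)][0]

# Whatever the other player does, defection pays you more --
# that is what makes the dilemma a dilemma.
for other in ("C", "D"):
    assert my_payoff("D", other) > my_payoff("C", other)
```

The assertion at the end is the whole problem in miniature: defection dominates round by round, even though both players would be better off cooperating.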

Researchers have developed an astounding number of variations of this dilemma to explain how cooperation is sustained more broadly. Some invoke punishing defectors; some just explore the likelihood that one player will meet another again in the future. But, critically, virtually all of them treat ‘cooperators’ and ‘defectors’ as fixed individual types. A player is defined by their propensity for cooperation – much as we might say of a criminal who rats on his friend that ‘once a rat, always a rat’.

I have always found this assumption problematic. Just as any person might cheat a partner when the likelihood of being discovered is low, so anyone who cooperates in one game cannot be assumed to cooperate in every game. Cooperating is not the same thing as being a cooperator.

Models don’t and can’t know the difference between forced and prosocially motivated cooperation

In my academic work, I’ve explored this distinction, with the aim of determining the importance of what lies beneath appearances in social interactions. A few years ago, I created a computer model to explore how false appearances can affect cooperation. If, for example, an agent – representing a person in the world of the computer model – determines that defection in the Prisoner’s Dilemma is likely to be exposed and punished, the agent cooperates. If, however, defection is likely to go unnoticed, the agent defects.

The model shows that cooperation stays high – at about two-thirds of interactions – even if the vast majority of agents prefer to defect when possible. While older models evaluating cooperation in the dilemma showed that punishment removes defectors from the population altogether – in line with what people defending cultural group selection say – the difference between appearance and motivation makes removal more difficult. You can’t punish defectors if you don’t know who they are.
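A minimal sketch of this kind of opportunity-sensitive agent can convey the idea. This is my own drastic simplification, not the published model: every agent privately prefers defection, and a single probability of being observed (set here to 0.67 simply to echo the roughly two-thirds figure above) decides whether defection is too risky:

```python
import random

def simulate(n_rounds=100_000, p_observed=0.67, seed=1):
    """Toy model of opportunity-driven cooperation.

    Every agent would rather defect, but cooperates whenever it
    believes the interaction is observed and defection would be
    punished. Returns the fraction of cooperative-looking rounds.
    """
    rng = random.Random(seed)
    cooperated = 0
    for _ in range(n_rounds):
        observed = rng.random() < p_observed
        # Defection happens only when it is likely to go unnoticed.
        if observed:
            cooperated += 1
    return cooperated / n_rounds

rate = simulate()
print(f"apparent cooperation rate: {rate:.2f}")
```

The apparent cooperation rate simply tracks the detection probability: the population looks highly cooperative even though not a single agent wants to cooperate, which is exactly why you can’t punish defectors if you don’t know who they are.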


I’ve called this the problem of opportunity. When anything cooperates – whether computer agent, bacterium, mole rat or person – we have no way of establishing, with certainty, whether cooperation was intended or happened because there wasn’t a good opportunity for defection.

How people use language to talk about cooperation in the real world illustrates the problem in action. Models are, by design and requirement, vague: they don’t tell you anything more about a situation than that some computerised agents cooperated, defected, were punished, and so on. A model can’t tell you whether an agent chose to cooperate or was forced to (the latter case, in everyday language, we call coercion). And too often in everyday life, we’re forced to cooperate with others when we don’t want to – whether that’s paying high prices for food and travel, voting for a politician who seems just a bit less bad than another, or signing a non-disclosure agreement to get a job. (Think about this next time you hear the phrase ‘thank you for your cooperation’.)

Models don’t and can’t know the difference between forced and prosocially motivated cooperation. Yet, sometimes, behavioural experiments can. Far from being a species that dislikes inequity and acts against it, we are more likely to profess a desire for fairness, reserving our singularly self-interested behaviours for when there’s unlikely to be a cost for them.

Evidence for a phenomenon called ‘moral credentialing’ supports this. In short, if I believe I’ve acted morally in the past – through making donations, working in a soup kitchen for the homeless, and so on – I’m more likely to justify my unethical actions in the future.

In 2011, researchers showed that participants are more likely to cheat on a mathematics test if they have the opportunity to profess support for moral principles beforehand – but only if they could rationalise that cheating didn’t violate their moral codes. Notably, in 2024, two researchers showed that businesses voluntarily signing up to the Business Roundtable’s ‘Statement on the Purpose of a Corporation’ (2019) – which promotes delivering value to everyone, not just shareholders – were more likely to violate both environmental and labour laws.

And more recently still, research into the use of large language models like ChatGPT illustrates just how much opportunity links to dishonesty. In this set of studies, researchers evaluated how participants behave when they can delegate behaviours to AI models. The setting was a die-rolling game, where higher numbers meant a higher financial benefit. While players were broadly honest when reporting their die rolls directly, delegating reporting to an AI agent changed behaviours markedly. When the players could give vague instructions to the AI such as ‘maximise profits’, their honesty decreased enormously, with less than a fifth of rolls reported accurately.

There are plenty of examples of people dodging moral responsibility through credentialing (touting past good deeds), rationalisation, and plain opportunism. In aggregate, the belief that you’re a moral person because of the principles you profess or the good things you’ve done before can make it easier to rationalise seizing the opportunity to act unfairly now.

When cutting corners brings a benefit and no one notices, it’s a winning move almost anywhere

The behavioural scientist Jason Dana and colleagues report that people often seek ‘moral wiggle room’ in economic games – ways to choose unfairly without feeling culpable. What matters most, the team suggests, is often not fairness but insulation from blame, sometimes by claiming ignorance about who is harmed and how:

In the spate of recent [financial] scandals, often high-level figures accused of transgressions must be shown to have known about harms in order to be held liable. We note that this ignores the efforts that executives may take to remain ignorant.

When you see how quickly people reach for loopholes and excuses, it’s tempting to blame the system – to say that Western law, markets or politics teach us to act this way. But I don’t think opportunism starts there. Opportunism is more basic than that: when cutting corners brings a benefit and no one notices – think tax avoidance – it’s a winning move almost anywhere. We can design all sorts of rules that encourage cooperation. But we can’t erase the underlying fact that cheating will often pay when it’s hidden.

Increased group sizes, reflected in the large, stratified societies in which most people live today, create far more opportunities for cheating than anything encountered over our evolutionary past. The egalitarianism so often noted in small-scale societies, such as the Aché, may then represent a lack of opportunity for free-riding, rather than an evolved propensity for fairness. Knowing everyone in your camp, choosing to live with relatives, and a collective expectation that people will follow local norms all maintain cooperation – though even in small-scale societies people often find ways of exploiting each other. Older men, for example, often dominate their social groups, with exploitation of women and young men reported in the ethnographic literature on nomadic tribes and forager groups across the world.

There are many other examples of exploitation in ethnographic records from across the world. The idea that we lived in a state of equality until the invention of agriculture is mostly a myth that I think helps us feel better about human nature. It fosters the hope that, one day, we’ll overcome the inequality imposed on us by our abandonment of the hunter-gatherer lifestyle.

Rather than attributing our problems today to competition between groups and the structure of our societies, the governing rule for any social system is to expect exploitation where it is possible. Every group, society and culture, no matter its size, has weaknesses that some people will try to exploit for personal benefit. The question is how those weaknesses affect culture more broadly, and whether we live in a society that rewards fairmindedness – or cleverness, subtlety and opportunism.


In the modern world, as with our evolutionary past, the answer is the latter. All that’s changed since the advent of agriculture is the number and varieties of opportunities for free-riding and exploitation. Consequently, as technology improves and groups increase in size, we should expect people to develop creative ways for defecting more effectively – with evolution favouring those who do it best.

This proclivity for developing new strategies to compete is part of the social brain hypothesis, originally formulated by the psychologist Nicholas Humphrey. In his seminal paper on the topic in 1976, Humphrey argued that the primary function of the human intellect is to navigate the social, rather than the physical, environment.

One implication of the social brain hypothesis is the assumption that every society hosts opportunistic people who may follow local norms for only as long as it is beneficial to do so. Elsewhere, I have called these people ‘invisible rivals’. For example, religious zealots and political adherents across the world may observe all the rules linked with their group – whether ritual or ideological – until they reach a position of power. Thereafter, they can exploit others and act selfishly as it suits them. This may help to explain why studies show that people with psychopathic tendencies are more likely to enter positions of power, for example in corporate or political systems. Following rules without believing in them is an effective strategy for gaining power.

Admittedly, these arguments make our world sound hopeless. It’s tempting to think that, if the story of human evolution isn’t the rosy picture of cooperation, fairmindedness and mutual aid championed by thinkers for more than a century, we can’t expect much from our future. There are just too many problems – from raging inequality and low public trust to a rapidly warming planet and the growing risks of technologies such as AI – to hope that a species with a dark and ignoble past can overcome itself and create a better future.

I think, however, that this pessimism is misplaced, and that facing ourselves honestly is the first and most important step we can collectively take. This requires adopting a realistic perspective about the kind of animal that Homo sapiens is. First, we are not inherently cooperative but have the capacity for cooperation – just as we have the capacity for exploitation and selfishness. What matters at the individual level is the way we choose to behave towards others.

The real question is what kinds of environments make it easier to do the right thing

Second, just as there is no such thing as a cooperator, there is no such thing as a free-rider. These are behaviours that we apply in models and experiments for convenience. How people behave – and critically, how we describe social behaviours – is a matter of circumstance. The same person who behaves ethically in one circumstance may not do so in another, as research into moral credentialing shows. Our behavioural plasticity, or ability to adapt the way we act to context, is one of our defining features. The evolved psychological processes driving our decisions cannot be captured by simplistic models or games. Anyone can be an invisible rival.

That is precisely why local social norms matter so much. If cooperation isn’t a fixed trait but a fragile, context-dependent outcome, then the real question is what kinds of environments make it easier to do the right thing – and harder to get away with quiet defection. The Nobel laureate Elinor Ostrom argued that local social norms are the bedrock of any serious effort to promote cooperation: look at how people behave in their immediate surroundings to understand their methods for restraining unbridled selfishness. Just as organisms evolve immune defences against selfish cells that quietly undermine the whole, societies need norms – and the institutions that uphold them – that can detect and restrain rivalries that flourish out of sight.

Fostering community-level interdependence – and the norms that evolved to help communities function cooperatively – is therefore essential for combating the exploitation that results from invisible rivalry. Never try to enforce cooperation from above. Instead, just as the economist Noreena Hertz argues we should replace ‘greed is good’ maxims in the capitalist framework with a community-oriented, cooperation-promoting mindset, appreciating that we are all better off when we work together is the critical insight needed for building a prosocial and equality-focused environment for the future.

Education is where this begins, not as moral uplift but as collective self-knowledge: it helps us see our own temptations clearly and translate that insight into practical scaffolding – laws, schools and civic rules that reward cooperation and raise the costs of exploitation. Cheating will never vanish, and some people will always look for an edge, but our distinctive intelligence lies as much in recognising exploitation and organising against it as in exploiting in the first place. Invest in that knowledge and in the local institutions that make fairness both appreciated and rewarded, and we will widen the space in which cooperation and equality can endure.

