Ep. 9 · Game Theory · Tipping Point · Network Effects

The Domino Effect

Robert Axelrod's Tit-for-Tat never beat a single opponent — and won both tournaments. Martin Nowak reduced cooperation to five inequalities. Damon Centola found the tipping point at 25%. Cooperatives survive at twice the rate of conventional firms. And 2023 network data suggests value scales not as n-squared but as n-cubed. The shift from zero-sum to positive-sum is itself positive-sum. Each adoption makes the next one easier. That is not optimism. It is arithmetic.

Supercivilization · May 4, 2026 · 6 min read

In 1984, Robert Axelrod published the results of two tournaments that changed how biologists, economists, and political scientists think about competition. He had invited game theorists to submit strategies for the iterated Prisoner's Dilemma — a game where two players repeatedly choose to cooperate or defect. The winning entry was four lines of code. Tit-for-Tat: cooperate first, then copy whatever the other player just did. It won both tournaments. It never beat a single opponent. That sentence is worth sitting with. The strategy that dominated the field could not, by design, outscore anyone in a head-to-head match. The best it could manage against any individual was a tie.
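Those four lines translate almost directly into modern code. A sketch in Python (our rendering, not the original tournament submission):

```python
def tit_for_tat(opponent_history):
    """Cooperate first, then copy whatever the opponent just did."""
    if not opponent_history:
        return "C"               # nice: never defect first
    return opponent_history[-1]  # retaliatory and forgiving in one rule
```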

We think this result is one of the most important findings in the social sciences. We also think its implications are still being underestimated.

How do you win without beating anyone?

The mechanism is simple enough to sketch on a napkin. Against cooperators, Tit-for-Tat cooperates — both players score well. Against defectors, it defects — both score poorly, but the defector gains no advantage. Across hundreds of matchups, the cooperators pull each other up while the defectors drag each other down. Tit-for-Tat accumulates the highest total score not through dominance but through a kind of quiet gravitational pull. It makes cooperation the path of least resistance.
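The napkin sketch fits in a few dozen lines. Here is a toy round-robin with the standard payoffs (temptation 5, reward 3, punishment 1, sucker 0) and a small illustrative pool of strategies, not Axelrod's actual entrant list:

```python
# One-shot Prisoner's Dilemma payoffs: (my score, their score)
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def tit_for_tat(mine, theirs):
    return "C" if not theirs else theirs[-1]

def grudger(mine, theirs):           # cooperate until betrayed, then never again
    return "D" if "D" in theirs else "C"

def always_cooperate(mine, theirs):
    return "C"

def always_defect(mine, theirs):
    return "D"

def match(a, b, rounds=100):
    """Play an iterated match; return the two total scores."""
    ha, hb, sa, sb = [], [], 0, 0
    for _ in range(rounds):
        ma, mb = a(ha, hb), b(hb, ha)
        pa, pb = PAYOFF[(ma, mb)]
        sa, sb = sa + pa, sb + pb
        ha.append(ma)
        hb.append(mb)
    return sa, sb

pool = [("tit_for_tat", tit_for_tat), ("tit_for_tat_2", tit_for_tat),
        ("grudger", grudger), ("always_cooperate", always_cooperate),
        ("always_defect", always_defect)]
totals = {name: 0 for name, _ in pool}
for i, (na, a) in enumerate(pool):
    for nb, b in pool[i + 1:]:
        sa, sb = match(a, b)
        totals[na] += sa
        totals[nb] += sb
```

With this pool, Tit-for-Tat ties for the highest tournament total despite never outscoring any single opponent: it ties the cooperators match for match and loses narrowly (99 to 104 over 100 rounds) to the pure defector.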

Here is the part that still surprises us: Axelrod published the full results before the second tournament. Every entrant knew exactly how Tit-for-Tat worked. Several designed strategies specifically to exploit it. Tit-for-Tat won again. A strategy that is nice (never defects first), retaliatory (punishes defection immediately), and forgiving (returns to cooperation the moment the opponent does) is resilient even when everyone can see exactly what it is doing.

This was not a thought experiment. It was a controlled competition with real strategies submitted by professional game theorists. And the result has been replicated in evolutionary simulations, behavioral experiments, and field studies for four decades since.

What if cooperation is just math?

In 2006, Martin Nowak published "Five Rules for the Evolution of Cooperation" in Science. The paper reduced the question of when cooperation wins to five mathematical inequalities:

- Kin selection — benefit to a relative, weighted by genetic relatedness r, exceeds the cost: r > c/b.
- Direct reciprocity — the probability w of meeting again exceeds the cost-to-benefit ratio: w > c/b.
- Indirect reciprocity — reputation travels far enough that cooperators can find each other: the probability q of knowing a partner's reputation exceeds c/b.
- Network reciprocity — the benefit-to-cost ratio exceeds the average number of neighbors: b/c > k.
- Group selection — between-group competition outweighs the within-group advantage for defectors: b/c > 1 + n/m, for groups of size n among m groups.

These are not moral arguments. They are conditions. When the structure of interaction satisfies any of these inequalities, cooperation does not require persuasion or goodwill. It emerges because it pays better. The coordination tools we covered in Episode 6 — quadratic funding, conviction voting, hypercerts — work precisely because they tilt these inequalities. They make reputation visible. They increase the probability of repeated interaction. They cluster cooperators together. They are plumbing, not preaching.
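To make the "conditions, not morals" point concrete, here is a sketch of the five inequalities as a checklist. The symbols follow Nowak's 2006 paper; the function itself is ours:

```python
def nowak_conditions(b, c, r=0.0, w=0.0, q=0.0, k=None, n=None, m=None):
    """Report which of Nowak's five rules hold for benefit b and cost c.

    r: genetic relatedness; w: probability of another round;
    q: probability a partner's reputation is known; k: average number
    of neighbors; n: group size; m: number of groups.
    """
    rules = {
        "kin_selection": r > c / b,
        "direct_reciprocity": w > c / b,
        "indirect_reciprocity": q > c / b,
    }
    if k is not None:
        rules["network_reciprocity"] = b / c > k
    if n is not None and m is not None:
        rules["group_selection"] = b / c > 1 + n / m
    return rules
```

For b = 3 and c = 1, a 50% chance of meeting again already satisfies direct reciprocity (0.5 > 1/3) even when relatedness and reputation are zero. Tools that raise w, publicize q, or lower k are tilting these terms directly.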

We are not entirely certain which of Nowak's rules matters most in digital coordination environments. The honest answer is that the interactions between them are still poorly understood. But the direction is clear.

What happens at 25%?

Damon Centola's 2018 paper in Science answered a question that had been debated for decades: how large does a committed minority need to be before a social convention flips?

The answer, across multiple experimental conditions, was approximately 25%. Below that threshold, the minority is visible but dismissible — a curiosity. At 25%, something discontinuous happens. The convention tips. Not gradually, the way a river erodes a bank. Abruptly, the way a dam breaks. The minority's behavior becomes the new default, often within a few interaction cycles.
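A toy model makes the discontinuity visible. Below is a minimal binary naming game with a committed minority, a sketch in the spirit of the models behind Centola's experiment rather than his actual design. In this stripped-down version, theory puts the critical mass near 10% (Centola's memory-based human subjects tipped near 25%), so what the simulation reproduces is the shape of the transition, not the exact threshold. All parameter values are illustrative:

```python
import random

def naming_game(committed_frac, n=100, rounds=60_000, seed=1):
    """Binary naming game with a committed minority (toy model).

    Ordinary agents start knowing only convention "A"; committed agents
    hold "B" forever. Each step, a random speaker utters a random word
    from its inventory; if the listener already knows the word, both
    collapse to it, otherwise the listener adds it. Returns the fraction
    of non-committed agents left holding only "B".
    """
    rng = random.Random(seed)
    k = int(n * committed_frac)
    committed = [i < k for i in range(n)]
    inv = [{"B"} if c else {"A"} for c in committed]
    for _ in range(rounds):
        s, l = rng.sample(range(n), 2)     # random speaker and listener
        word = rng.choice(sorted(inv[s]))
        if word in inv[l]:                 # success: both converge on it
            if not committed[s]:
                inv[s] = {word}
            if not committed[l]:
                inv[l] = {word}
        elif not committed[l]:             # failure: listener learns the word
            inv[l].add(word)
    free = [i for i in range(n) if not committed[i]]
    return sum(inv[i] == {"B"} for i in free) / len(free)
```

Below the critical fraction, the old convention holds essentially indefinitely; above it, the flip is fast and near-total. There is no in-between regime.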

We find this both encouraging and humbling. Encouraging because it means the cooperative economy does not need to become a majority before the transition occurs. Humbling because we genuinely do not know where the current committed minority stands in most domains. The $2.79 trillion in cooperative turnover, the 41 million Bluesky accounts, the 50-plus DeSci projects — these are growing. Whether they are at 8% or 22% in their respective fields, we cannot say with confidence.

What we can say: the phase transition, when it comes, will not announce itself with a gradual trend line. It will look sudden even if the buildup was slow.

Why do cooperative systems stick?

Ernst Fehr and Simon Gächter ran a series of experiments that revealed something economists found uncomfortable: people will pay a personal cost to punish defectors, even when the punishment brings them no direct benefit. They called it altruistic punishment. In a one-shot game it looks irrational. In repeated interactions — which is to say, in life — it is the immune system of cooperation.

Elinor Ostrom documented the same pattern in the wild. Fishing communities in Turkey. Irrigation systems in the Philippines. Forest management in Switzerland. Her Nobel Prize in 2009 was awarded for showing that communities reliably build and enforce cooperative norms without top-down regulation, provided the structural conditions are right: repeated interaction, visible reputation, graduated sanctions.

The practical consequence is that cooperative equilibria are sticky. Once a group crosses the threshold into cooperation, defection becomes expensive — not because a regulator punishes it, but because the community does. Each new cooperator raises the cost of defecting for everyone else.
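The arithmetic behind "defection becomes expensive" is easy to check. A sketch using numbers in the spirit of the Fehr and Gächter design (groups of four, endowment of 20, each token in the pot returning 0.4 to every member); the size of the fine is our illustrative assumption:

```python
def payoffs(punishers, group=4, endowment=20, contrib=20, mpcr=0.4, fine=10):
    """Compare one full cooperator with one free-rider in a public-goods
    round where everyone else contributes fully.

    mpcr: marginal per-capita return on the common pot (0.4, as in the
    Fehr and Gächter sessions); fine: deduction each punisher imposes
    on the free-rider (illustrative).
    """
    cooperator = endowment - contrib + mpcr * (group * contrib)
    free_rider = endowment + mpcr * ((group - 1) * contrib) - punishers * fine
    return cooperator, free_rider
```

With no punishers, free-riding pays 44 against the cooperator's 32. Two punishers each imposing a fine of 10 flip it to 24 against 32: defection is now the expensive choice, with no regulator in sight.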

Is the network effect stronger than we thought?

Metcalfe's Law has been the standard model for network value since the 1990s: the value of a network scales as the square of its participants. V proportional to n-squared.

Recent empirical work — and we want to flag that this is still debated — suggests the relationship may be closer to n-cubed. The argument: Metcalfe counts pairwise connections. But real networks generate value through groups, sub-communities, and multi-party coordination, which scale combinatorially. Cubic scaling sits between the pairwise floor and the combinatorial ceiling.

If the cubic model holds, and we stress the if, it means that every new participant in a cooperative network adds more value than the last. The incentive to join accelerates. The incentive to defect shrinks. The system becomes a flywheel.
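The flywheel claim is just the difference in marginal value. A quick check, where the exponents are the modeling assumption rather than data:

```python
def marginal_value(n, exponent):
    """Value added by the n-th joiner when total value V(n) = n**exponent."""
    return n ** exponent - (n - 1) ** exponent

# Quadratic (Metcalfe) scaling: the n-th joiner adds roughly 2n.
# Cubic scaling: roughly 3n^2, so each arrival is worth more than the last,
# and the gap between successive joiners widens as the network grows.
quadratic = [marginal_value(n, 2) for n in (10, 100, 1000)]
cubic = [marginal_value(n, 3) for n in (10, 100, 1000)]
```

Under either exponent the marginal joiner adds more than the previous one; the cubic case simply makes the acceleration far steeper.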

We pair this with one more data point. Cooperatives — businesses structured around shared ownership and mutual benefit — survive at approximately twice the rate of conventional businesses. This finding holds across every jurisdiction where it has been studied, across decades. A 2x survival advantage, compounded over economic cycles, means the cooperative share of any mixed economy grows over time through sheer persistence, even without any new conversions.
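The compounding can be made concrete with a toy shakeout model. The absolute survival rates below are illustrative; only the roughly two-to-one ratio comes from the published comparisons:

```python
def coop_share(share, coop_survival, conv_survival, cycles):
    """Cooperatives' share of surviving firms after repeated shakeout
    cycles, assuming no new entry and no conversions either way."""
    for _ in range(cycles):
        coops = share * coop_survival
        others = (1 - share) * conv_survival
        share = coops / (coops + others)
    return share
```

Starting from a 10% share with 80% versus 40% survival per cycle, the cooperative share reaches about 18% after one shakeout and about 47% after three, with no conversions at all.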

Where does that leave us?

Assemble the pieces. Cooperation wins in repeated interactions (Axelrod). It emerges when structural conditions are met (Nowak). It tips at 25% (Centola). Once established, it self-reinforces (Fehr, Gächter, Ostrom). Network value may compound faster than we modeled (Metcalfe, updated). And cooperative structures outlast extractive ones two-to-one.

Each of these is an independently published finding. Together they describe a system with a specific property: the transition from zero-sum to positive-sum is itself positive-sum. Each adoption changes the conditions to make the next adoption more likely, more rewarding, and harder to reverse.

We call this the domino effect — not the popular image of sequential collapse, but the physics of cascading phase transition. Early dominoes require direct force. Later ones fall from the vibration of their neighbors. The last ones fall before you touch them.

We do not know how many dominoes have fallen. We are not sure anyone does. But the mathematics of what happens next is not in serious dispute.

If you are building cooperative infrastructure, funding public goods, or simply choosing repeated-game strategies in your own work — you are changing the structural conditions. Not metaphorically. Measurably. The math is on your side, and it gets more on your side with each person who joins.


Episode 9 of the Superpuzzle Developments series. The degen-to-regen transition is not a hope — it is a mathematical structure with a 25% tipping point, a self-reinforcing equilibrium, and a compounding survival advantage. We do not know exactly where the tipping point is. We know the direction.