
The Evolution of Cooperation

The Evolution of Cooperation provides valuable insights into the age-old question of whether unforced cooperation is ever possible. Widely praised and much-discussed, this classic book explores how cooperation can emerge in a world of self-seeking egoists—whether superpowers, businesses, or individuals—when there is no central authority to police their actions. The problem of cooperation is central to many different fields. Robert Axelrod recounts the famous computer tournaments in which the “cooperative” program Tit for Tat recorded its stunning victories, explains its application to a broad spectrum of subjects, and suggests how readers can both apply cooperative principles to their own lives and teach cooperative principles to others.

264 pages, Paperback

First published April 15, 1984


About the author

Robert Axelrod

21 books · 86 followers
From Wikipedia:

Robert Axelrod (born 1943) is a Professor of Political Science and Public Policy at the University of Michigan. He has appointments in the Department of Political Science and the Gerald R. Ford School of Public Policy. Prior to moving to Michigan, he taught at the University of California, Berkeley (1968-1974). He holds a BA in mathematics from the University of Chicago (1964) and a PhD in political science from Yale University (1969).

He is best known for his interdisciplinary work on the evolution of cooperation, which has been cited in numerous articles. His current research interests include complexity theory (especially agent-based modeling) and international security. Among his honors and awards are membership in the National Academy of Sciences, a five-year MacArthur Prize Fellowship, the Newcomb Cleveland Prize of the American Association for the Advancement of Science for an outstanding contribution to science, and the National Academy of Sciences Award for "Behavioral Research Relevant to the Prevention of Nuclear War".

Recently Axelrod has consulted and lectured on promoting cooperation and harnessing complexity for the United Nations, the World Bank, the U.S. Department of Defense, and various organizations serving health care professionals, business leaders, and K-12 educators.

Axelrod was the President of the American Political Science Association (APSA) for the 2006-2007 term. He focused his term on the theme of interdisciplinarity.

In May 2006, Axelrod was awarded an honorary degree by Georgetown University.

Ratings & Reviews


Community Reviews

5 stars: 974 (46%)
4 stars: 788 (37%)
3 stars: 292 (13%)
2 stars: 45 (2%)
1 star: 18 (<1%)
Jerry Jose
368 reviews · 59 followers
October 27, 2017
Remember that iconic scene in Wonder Woman where she crosses No Man's Land amidst enemy bullets and inflicts damage on the other side? Well, she was ruining a relatively peaceful ecosystem built on mutual restraint rather than mutual punishment. World War I, on a national level, was a zero-sum game where loss on one side meant gain on the other. But on local levels, specifically along the Western Front between France and Germany, a curious system of 'live and let live' emerged. Trench warfare, confined to narrow trenches a few hundred yards apart, became with all its disgusting horrors the stage for something amazing in history: a classic example of reciprocal altruism in a world of unconditional defection. A feeling of solidarity developed among enemy soldiers over time, characterized by ad hoc weather truces, common lunch times and even combined Christmas celebrations. At Christmas the Germans put up decorated tabletop trees above the trenches, and the British and French soldiers responded by singing carols. They left their weapons in the trenches and came up to shake hands in no man's land; they swapped presents, traded goods, buried the dead, and shared barrels of beer and cigars into the morning. In some portions of this stretch of hundreds of miles, the period of goodwill lasted as long as a whole week. This restraint was not due to weakness, but to the rationale that defection was self-defeating; much like modern-day deterrence between nuclear states over fear of mutually assured destruction.

In this seminal work, Robert Axelrod, with unusual clarity, discredits friendship and kinship as essential preconditions for cooperation based on reciprocity, even at its inception, and argues that under suitable conditions cooperative relationships can arise even between antagonists. According to Hobbes, and to other early political theorists like Rousseau and Locke, human beings are primordially selfish individuals competing with one another through lives that are solitary, brutish and short. A strong central authority (the Leviathan) later entered society through the rationality of mutual interaction and hence, however unnatural to basic human nature, is required for maintaining cooperation among individuals. One can argue against this with evolutionary and biological examples that show social cooperation being hardwired into living things. What Axelrod did was to arrange a computer tournament for the iterated Prisoner's Dilemma and invite computer program strategies from friends and colleagues across disciplines.

The Prisoner's Dilemma is a thought experiment, a set of circumstances that forms a building block of game theory. In a Minority Report-like scenario, two suspects are captured and interrogated separately. Each of them can stab the other in the back for a lighter sentence, or cooperate by staying silent in hopes of the favourable outcome of walking away. The possible outcomes, in ascending order of payoff for a given player, are: unilateral defection by one's partner, mutual defection, mutual cooperation, and unilateral restraint by one's partner. Since individuals cannot control the other person's behaviour, each player faces a dilemma: rat out one's partner for the maximum payoff, or cooperate for the mutually preferable outcome. Game theory formalizes this behavioural problem mathematically and tries to optimize strategies for negotiations in economics, diplomacy, biology, psychology and so on. In Axelrod's tournament, the submitted computer programs played each other in this non-zero-sum setting over matches of roughly 200 moves.
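
For readers who want to see the mechanics, here is a minimal Python sketch of one such iterated match. It is my own illustration, not Axelrod's tournament code, and it uses the conventional payoff values he worked with: 5 for the temptation to defect, 3 for mutual cooperation, 1 for mutual defection, and 0 for the sucker who cooperates while the other defects.

PAYOFF = {('C', 'C'): (3, 3), ('C', 'D'): (0, 5),
          ('D', 'C'): (5, 0), ('D', 'D'): (1, 1)}

def tit_for_tat(my_history, their_history):
    """Cooperate on the first move, then mirror the opponent's previous move."""
    return 'C' if not their_history else their_history[-1]

def always_defect(my_history, their_history):
    return 'D'

def play_match(strategy_a, strategy_b, rounds=200):
    """Play an iterated match and return the two total scores."""
    hist_a, hist_b, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        move_a = strategy_a(hist_a, hist_b)
        move_b = strategy_b(hist_b, hist_a)
        pay_a, pay_b = PAYOFF[(move_a, move_b)]
        score_a += pay_a
        score_b += pay_b
        hist_a.append(move_a)
        hist_b.append(move_b)
    return score_a, score_b

print(play_match(tit_for_tat, tit_for_tat))    # (600, 600): steady mutual cooperation
print(play_match(tit_for_tat, always_defect))  # (199, 204): one sucker payoff, then mutual defection

Against an unconditional defector, TIT FOR TAT gives up a little on the first move and then settles into mutual defection; against another cooperator it earns the steady 3 points per move that add up over a long game.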

Across all the possible pairings, one strategy came out dominant, eventually pushing every other program toward extinction, and its relative success had nothing to do with its author or its length. This simple strategy, known rather appropriately as TIT FOR TAT, amounts to two lines of logic.
First line: be nice, that is, cooperate on the first move.
Second line: do whatever the other player just did.
So T4T starts off cooperating with the opponent and continues doing so until the other player defects. It then defects, and switches back to cooperation once the opponent starts to cooperate again. Though in a one-off game the mathematically optimal option is to defect, the nicer strategies were found to outweigh the meaner ones over repeated play, with TIT FOR TAT dominating them all. It was a very robust program: nice to begin with, retaliatory when required, forgiving, and clear; but not free of failures. A glitch in a signal, or a mistake in interpretation, might set off a string of recriminations and counter-recriminations between two players both employing T4T. So it becomes extremely important to damp these echo effects when employing the strategy in high-stakes environments, since strings of defection can cause escalations as far-reaching as the Cuban Missile Crisis. A more forgiving TIT FOR TAT, which switches to forgiveness after a certain number of rounds of mutual defection, was found to be effective in such conditions, though not immune to exploitation. Axelrod argues that maximizing one's outcome depends on the characteristics of a particular strategy, the nature of the other strategies it most often interacts with, and the history of those interactions.
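
The echo effect is easy to reproduce in a toy simulation. The sketch below is entirely my own illustration: two copies of a strategy play each other while a small amount of noise occasionally flips a move, and a "forgiving" variant that sometimes overlooks a defection (one possible forgiving rule, not necessarily the exact variant Axelrod discusses) typically recovers from glitches much better than the strict version.

import random

PAYOFF = {('C', 'C'): (3, 3), ('C', 'D'): (0, 5),
          ('D', 'C'): (5, 0), ('D', 'D'): (1, 1)}

def tit_for_tat(their_history):
    return 'C' if not their_history else their_history[-1]

def forgiving_tit_for_tat(their_history, forgiveness=0.3):
    """Like TIT FOR TAT, but sometimes cooperates even after a defection."""
    if not their_history or their_history[-1] == 'C':
        return 'C'
    return 'C' if random.random() < forgiveness else 'D'

def noisy_self_play(strategy, rounds=200, noise=0.02):
    """Two copies of `strategy` play; each move is flipped with probability `noise`."""
    hist_a, hist_b, joint_score = [], [], 0
    flip = lambda move: 'D' if move == 'C' else 'C'
    for _ in range(rounds):
        move_a, move_b = strategy(hist_b), strategy(hist_a)
        if random.random() < noise:
            move_a = flip(move_a)
        if random.random() < noise:
            move_b = flip(move_b)
        joint_score += sum(PAYOFF[(move_a, move_b)])
        hist_a.append(move_a)
        hist_b.append(move_b)
    return joint_score  # 1200 would be uninterrupted mutual cooperation

print(noisy_self_play(tit_for_tat))            # typically noticeably below 1200: one glitch starts an echo
print(noisy_self_play(forgiving_tit_for_tat))  # typically closer to 1200: forgiveness damps the echo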

Coming back to the trench warfare: No Man's Land basically represented a dynamic equilibrium of stalemate. The units on both sides were large enough for accountability and small enough for individual behaviour to be controlled. Since not every bullet, grenade or shell fired in earnest would hit its exact target, there was an inherent tendency towards de-escalation. Demonstrations of retaliatory capacity and verbal arguments were suppressed internally by superiors, and during the rotation of troops, outgoing soldiers made it their business to familiarize the new recruits with the status quo. The infantry often offered delicacies to the artillery as gentle incentives not to provoke the enemy side, since the artillery was relatively safe and had less at stake in this 'live and let live' system. And on a macroscopic scale, especially after the joint Christmas celebrations, the German, French and British High Commands wanted an end to these tacit truces, since a pacified front would only sap morale from the war's ceaseless policy of offensives.

Though I might come across as a heretic in this comparison, wartime generals' behaviour can be observed in Wonder Woman too, where her primary objective was killing Ares. The immediate and extended payoffs for both Ares and Diana were big enough to justify their actions. Though offensive demonstrations and firing can be heard in the background, there was no direct enemy attack on the trenches, even in the movie's depiction. And it was her crossing of No Man's Land that destroyed the truce and escalated the war on both fronts, sacrificing the microscopic payoffs for macroscopic ones. It is also worth noting that the attack was instigated by outsiders (Trevor and team) rather than by the soldiers involved in the counterbalance, who might have faced the moral dilemma of breaking their side of the trust. Similarly, in the later history of the actual WWI, High Command-imposed raids and retaliatory efforts eventually collapsed the trench warfare system. Even under those orders for mandatory offensives, an ethic of cooperation was maintained for as long as possible by keeping the perfunctory and routine firing aggressive enough to satisfy high command yet contained enough to avoid retaliation.

Axelrod extended his computer simulations into an evolutionary scenario, in which winning programs get to create copies of themselves, and ran it for many generations. Even in a horribly hostile world, it was found that if the nice programs had enough chances to interact with one another, they could eventually take over a world full of meaner strategies.
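
Here is a rough Python sketch of that ecological idea, my own simplification rather than Axelrod's exact procedure: each generation, a strategy's share of the population grows in proportion to its average score against the current population mix.

def tit_for_tat(their_history):
    return 'C' if not their_history else their_history[-1]

def always_defect(their_history):
    return 'D'

def always_cooperate(their_history):
    return 'C'

# Points earned by the row player for each combination of moves.
PAYOFF = {('C', 'C'): 3, ('C', 'D'): 0, ('D', 'C'): 5, ('D', 'D'): 1}

def match_score(strat_a, strat_b, rounds=200):
    """Total score earned by strat_a against strat_b over an iterated match."""
    hist_a, hist_b, score = [], [], 0
    for _ in range(rounds):
        move_a, move_b = strat_a(hist_b), strat_b(hist_a)
        score += PAYOFF[(move_a, move_b)]
        hist_a.append(move_a)
        hist_b.append(move_b)
    return score

strategies = [tit_for_tat, always_defect, always_cooperate]
scores = [[match_score(a, b) for b in strategies] for a in strategies]
shares = [0.1, 0.8, 0.1]  # start in a world dominated by unconditional defectors

for generation in range(50):
    fitness = [sum(row[j] * shares[j] for j in range(len(shares))) for row in scores]
    mean_fitness = sum(f * s for f, s in zip(fitness, shares))
    shares = [s * f / mean_fitness for s, f in zip(shares, fitness)]

print([round(s, 3) for s in shares])
# The nice but retaliatory rule ends up with by far the largest share: the
# defectors thrive early by exploiting the unconditional cooperators, then
# wither once TIT FOR TAT is mostly meeting copies of itself.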

The beauty of altruistic behaviour in such negative spaces can be observed in evolutionary biology as well. Though rational agents always defect in a one-shot Prisoner's Dilemma, knowledge about the other party and the prospect of future interaction drastically change the scene. When third parties are watching, the stakes expand from those immediately at hand to the players' reputations and future interactions. Internet bullying under the mask of anonymity, and the friendliness a regular customer enjoys in a shop, can be considered crude examples. It gets even more complicated with changing payoffs and concentrated interactions. I was constantly drawn to Cixin Liu's dark forest deterrence and its aftermath in the later installments of the Three-Body Problem series, as my very own literary example for applying the half-cooked nuances of game theory and the prisoner's dilemma.

Real-world problems are far more complex than Axelrod's computer simulations, with multiple players and complex payoffs that demand sophistication in analysis. But this propensity for reciprocal altruism among antagonists is a theory robust and well-articulated enough to have held my undivided attention. And this karma-based upright, forgiving and yet retaliatory strategy is a nice takeaway for life, maybe with a heavier lean towards reconciliation.
Andrew Breslin
Author of 3 books · 75 followers
September 19, 2014
It’s not overstating the case to say that Robert Axelrod’s The Evolution of Cooperation is literally the most important book of the last 100 years and will change the course of human history.

Um, Andy, I think you need to revisit the definitions of certain words, like “literally” and “overstating.”

OK, granted. But the implications of the game theory research and analysis presented here are so profoundly important, it's difficult not to descend into hyperbole. Or ascend, as the case may be. Sure, at first glance it appears to be an account of legions of math and computer nerds trying to beat one another at a silly game. Who would suspect that these nerds may have essentially discovered the philosophical glue that holds all of society together? And done so with programs just a handful of lines long, written in BASIC and FORTRAN, no less?

I did, but I cheated, of course. I was already convinced of game theory’s profound significance before I ever picked this up. I was doing research, reading everything I could find on the subject for my own book about game theory, my premise being that these astoundingly important ideas might be better received if put in the context of a novel with snappy dialogue and lots of horrific violence.

No horrific violence here, nor snappy dialogue either. Just a clear and lucid explanation of a very complex subject. Few of us will ever find ourselves in a literal equivalent of the prisoner’s dilemma (PD), forced to decide whether to testify against a partner-in-crime to receive a lighter prison sentence. But we find ourselves in analogous situations literally every day, and mind you: I’ve double checked my usage of “literally” here and this time I’m sticking by it.

These PD situations are so omnipresent they are virtually invisible, as are the long-established cooperative equilibria that prevent all of us from continuously stabbing one another in the back as we fall into the alluring Nash equilibrium of mutual defection, leaving behind vast hordes of stabbed people and a few smug mathematicians mumbling “I told you so” as they tend to their wounds.

Even more important than providing a mathematical explanation of why (most) people aren't just total and complete pricks (please refrain from pointing out noteworthy exceptions. We all know some. Oftentimes I'm one of them.), and why we’ve been able to develop a complex society that could not exist without seemingly irrational altruism, modern game theory teaches us how we can create an even better society.

Cooperation has evolved over the course of millions of years by participants lacking the slightest knowledge of game theory, or, indeed, opposable thumbs. Evolution is a highly effective way to produce desirable results without any thought or effort whatsoever, provided one has a few million years to spare and doesn't mind a lot of pain and suffering along the way. Beings with the cognitive abilities to understand the forces at play can achieve the same results far more efficiently.

It’s a very complex non-zero-sum game we’re all playing. We can all score a lot more points if we play it right.

……………………………

I’m going to stick in one last plug for my book, Practical Applications of Game Theory, which dramatizes an actual prisoner in prison facing a literal example of the prisoner’s dilemma. It's full of math, snappy dialogue, and, I should reiterate: horrific violence.

Lauren
94 reviews
September 1, 2018
This is a great book to read to get a deep understanding of how cooperation evolves and how to promote it. Most of the theory is based on simulations of the prisoner's dilemma game, with different strategies being empirically tested and strategies most similar to 'tit-for-tat' coming out far ahead. It's important to be 'nice' and avoid being the first to defect. It's also important to respond in turn, and cooperate while the cooperation is returned, but to 'defect' if your cooperation is responded to with defection. This prevents being taken advantage of by players using 'mean' strategies and sends a clear signal that defection won't be tolerated.

Unfortunately, unclear signals (signals that are too late or badly communicated) can create misperceptions about the intentions of other players. Misperception can cause problems and often creates an 'echo chamber' of defection. For this reason, strategies that were also forgiving were more successful, as they were able to revert to patterns of cooperation.

Cooperation can evolve in many ways within species and organizations. In many cases it doesn't even require trust to evolve - only a durable relationship that will continue indefinitely and a memory of past interactions with different players. Many of the conclusions from this book were encouraging - cooperative strategies can form and thrive effectively in many different conditions.



Augusto Pascutti
12 reviews · 11 followers
December 4, 2022
In a world where a book sells for its cover, it is easy to grab a book titled "The cure for cancer" and read a love story about a dying girl. It is easy to underestimate this book by its title. Don't.

A simple mathematical idea, that cooperation pays off as the number of interactions grows, is pitted against competing algorithms in a tournament and wins. More tournaments follow, the idea is applied to other sciences, and it keeps working.

How can something as complex as "cooperation among individuals" boil down to such simple terms? I guess we just have to live to see it proven wrong. Or right.
Lucas Ou-Yang
18 reviews · 10 followers
February 24, 2019
This book is phenomenal; it is in the same vein as, and even reuses studies from, Richard Dawkins’ Selfish Gene.

If you don’t have time or would rather not purchase this book, check out Axelrod’s paper “The Evolution of Norms”
Bart Thanhauser
220 reviews · 17 followers
November 3, 2010
A very good book that makes me interested in reading more game theory. The first two chapters are a bit dense (but really not too bad) as Axelrod goes over the "Computer Prisoner's Dilemma Tournament" that sparked this book. These chapters are an analysis of computer programs (not as dull as it sounds), but they provide the evidence for his theory and the meat of the book.

A quick synopsis of the book: In the late 70s, Axelrod, a University of Michigan poli sci professor, held a Computer Tournament. The Computer Tournament invited participants to create computer programs to wage battle in a Prisoner's Dilemma. A Prisoner's Dilemma is one of the foundational game theory scenarios/models. In it, two prisoners are caught and are being interrogated. Each prisoner has two options. They can either keep quiet (cooperate with their accomplice) or they can rat on their accomplice (defect). As each prisoner has two options, there are four possible results: mutual cooperation (CC), mutual defection (DD), or defection/cooperation (CD or DC) by either of the two prisoners. Game theorists have assigned points to these results--if both prisoners cooperate they both get minimal sentences (3 points). If both defect they get strict sentences (each gets 1 point). If one cooperates and the other defects, the cooperator receives a life sentence (0 points) while the defector gets freedom (5 points). The conclusion from this scenario is that it is in each prisoner's best interest to rat out their accomplice. If they rat out their accomplice they are guaranteed at least some points. Sure, they receive a harsh sentence if their partner also defects--but if their partner doesn't defect they get freedom! The best result of all. And more importantly, if they cooperate, they risk getting the harshest sentence--life in prison. So what often ends up happening with rational actors is that they both end up with harsh sentences (mutual defection) rather than the more appealing and mutually beneficial light sentences (mutual cooperation). The Prisoner's Dilemma shows that in acting rationally, actors often forgo mutually beneficial solutions. Downer of a conclusion.
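
The one-shot logic in that synopsis is small enough to check mechanically. A tiny Python sketch using the same point values:

# Points the row player receives, with the values from the synopsis above:
# 3 for mutual cooperation, 1 for mutual defection, 0 for the lone cooperator,
# 5 for the lone defector.
MY_POINTS = {('C', 'C'): 3, ('C', 'D'): 0, ('D', 'C'): 5, ('D', 'D'): 1}

for partner_move in ('C', 'D'):
    best = max(('C', 'D'), key=lambda my_move: MY_POINTS[(my_move, partner_move)])
    print(f"If my partner plays {partner_move}, my best single-game reply is {best}")

# Defecting is the better reply either way (5 > 3 and 1 > 0), yet mutual
# defection (1 point each) leaves both worse off than mutual cooperation (3 each).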

But in steps Axelrod. Axelrod, with the clarity of a political scientist, makes a plain and convincing argument. He answers this book's central question: in a world of egoists without a central authority (some power to force or incentivize), how did cooperation ever come to exist? The key to cooperation is ensuring that a relationship is sufficiently long. He makes this point clearer than I can:

"What makes it possible for cooperation to emerge is the fact that the players might meet again. This possibility means that the choices made today not only determine the outcome of this move, but can also influence the later choices of the players. The future can therefore cast a shadow back upon the present and thereby affect the current strategic situation."

In short, people have incentive to cooperate if they meet again and again. If the prisoner's dilemma is run indefinitely (tens, hundreds, thousands of times), then pretty soon people will realize that it's in their best interest to cooperate rather than mutually defect. The two rounds of Axelrod's Prisoners' Dilemma Computer Tournament back this up. The highest finishers are the "nice programs" and the winner of both tournaments is "Tit for Tat" (T4T)--a program that cooperates with its partner on the first move, and then copies whatever its partner's previous move was. As a result, T4T is never the first to defect and therefore can build a cooperative relationship with other programs. But at the same time, if T4T meets a "mean program" it will respond to defection with defection of its own. So it's no pushover.

I've already gone into too much detail with Axelrod's theory and results, but that's only a testament to how easily understood his ideas are. Pick up the book for more details--chapter 6, "Advice for Participants and Reformers" is particularly good.

A few thoughts on the book in general: at times I thought this book could have been significantly shortened into a long academic article. The book itself is pretty short (~190 pages) and Axelrod repeats the same main points over and over. I actually appreciated this repetition, as I felt like I had a firmer understanding each time he repeated a point. But nonetheless, I think this book could've been a long journal article without losing too much important information. Also, I'd be interested to see this applied to international relations issues. In some ways I'm glad that Axelrod didn't do this himself--otherwise, he would've stamped a Cold War mentality onto his book. The book and its theory seem purer without too many real-world applications. But I'd still love to read other people's use of and twists on this theory when applied to international relations.

Lastly, I know this isn’t a self-help book. I know that life is more complex than game theory, but there are things from this book that are helpful/tempting for me to apply to life. And Axelrod encourages this. In the last pages of his conclusion, he writes, "Perhaps it is not too much to hope that people can use the surrogate experience of the Computer Tournament to learn the value of reciprocity for their own Prisoner’s Dilemma interactions” (189). But I'll take these with a grain of salt: people are more complex than computers.

Nonetheless, the idea that life is not a zero-sum game is important to be reminded of. What's more, the fact that T4T won the tournament and yet never beat a single opponent it played resonated with me. It won by cooperating and establishing mutually beneficial relationships (which it slightly lost or tied), not by defeating its opponents. And the last takeaway I'll keep is that reciprocity is important to interactions. If a partner cooperates, you cooperate. And if someone defects, you respond immediately and clearly with defection of your own. Responding to defection with defection seems sort of rash--especially since it can be tough to interpret other people's actions--but T4T makes a convincing argument that reciprocity is the most stable strategy in a wide range of environments.

In short, this simple read provides a lot of tempting material for real-world interactions and makes me excited to pick up some more game theory.
Raoul G.
176 reviews · 18 followers
March 23, 2022
This book is wholly and totally concerned with the (iterated) Prisoner's Dilemma. In case you don't know, the Prisoner's Dilemma is a mathematical game analyzed in Game Theory. Albert W. Tucker presented it as follows:
"Two members of a criminal organization are arrested and imprisoned. Each prisoner is in solitary confinement with no means of communicating with the other. The prosecutors lack sufficient evidence to convict the pair on the principal charge, but they have enough to convict both on a lesser charge. Simultaneously, the prosecutors offer each prisoner a bargain. Each prisoner is given the opportunity either to betray the other by testifying that the other committed the crime, or to cooperate with the other by remaining silent. The possible outcomes are:

- If A and B each betray the other, each of them serves two years in prison
- If A betrays B but B remains silent, A will be set free and B will serve three years in prison
- If A remains silent but B betrays A, A will serve three years in prison and B will be set free
- If A and B both remain silent, both of them will serve only one year in prison (on the lesser charge)".

The punishments could of course also be configured differently. But what makes this situation really interesting is to let these two prisoners meet again and again in the same situation while they are fully aware of the actions the other prisoner took in the past interactions. This, then, is called the Iterated Prisoner's Dilemma.

Axelrod wanted to know which strategies would get the best results in such an iterated game, especially when the total number of rounds to be played is not known to the players in advance. To find out, he organized a tournament in which people from different disciplines submitted strategies to be used by the players in the game. He then let those strategies play against one another.

Although there is no strategy that always gets the best result, since success depends on the kind of strategy the other player is using, one strategy emerged as a clear winner in Axelrod's tournaments. This strategy is called Tit for Tat. It is very simple and works like this: the player initially cooperates and afterwards always replicates the other player's last move. This means there is always an equivalent retaliation for defections. But what makes this strategy so successful? First of all, it is nice: it is never the first to defect, which means it avoids unnecessary trouble (for example, if the other player also uses a nice strategy). Second, it is provocable: if it is defected against, it retaliates at the next opportunity, which should discourage the other side from continuing to defect. Another important property is that it is forgiving, meaning it does not keep defecting if the other side returns to cooperation after a defection. This allows mutual cooperation to be restored. Finally, Tit for Tat is a clear and easy-to-recognize strategy. Once the other player sees the pattern behind it, they can easily realize that cooperation is the best way of dealing with it.

You might think that these theoretical games are far removed from reality, but the author successfully shows that this is not the case. A real-life example of the Prisoner's Dilemma was seen in the trench warfare in France during WW1. There a live-and-let-live system, described in more detail in the book, emerged between the enemy lines. This shows that cooperation can emerge even between antagonists under suitable circumstances.
Furthermore, there are also a great many interesting examples from the field of biology. The author puts cooperation in an evolutionary perspective and shows how it is likely to have evolved even in environments of competition and selfish actors.

What starts as an interesting theoretical game in the first half of the book turns into solid wisdom, applicable to many situations in life, in the last chapters. While most people think about life as a zero-sum game, most of the time this is actually not the case. There are countless situations where both sides can do well and mutual cooperation would be possible. The things standing in the way of this are often envy and the belief that one must do better than the other player in order to be successful. The results of the tournament teach an important lesson here:
"TIT FOR TAT achieves either the same score as the other player, or a little less. TIT FOR TAT won the tournament, not by beating the other player, but by eliciting behavior from the other player which allowed both to do well. TIT FOR TAT was so consistent at eliciting mutually rewarding outcomes that it attained a higher overall score than any other strategy."

Besides giving the reader a small taste of the world of game theory, this book offers a lens through which many interactions can be understood and it proposes concrete measures to increase the chances for successful cooperation in different situations. It also makes for an overall interesting read and is still very relevant even though it was published in 1984.
Denis Vasilev
681 reviews · 97 followers
October 13, 2021
A topic highly relevant to decentralized communities: cooperation without top-down control. Theory, argument, and practical examples from politics, war, everyday life, and biology.
98 reviews · 11 followers
February 2, 2013
I know some basic game theory, so I thought that this book would be redundant. Yes, I get it - Tit-For-Tat is the winning strategy in iterated Prisoners' Dilemma games, etc. I frankly bought the book because I found it for cheap and I decided to actually read it because it's a seminal piece of work.

But once I started, I could not put this book down. It was at times very uplifting and at other times horrifying. I was so engrossed that it took real discipline to remind myself throughout that the conclusions being drawn stem from simulations that require very important assumptions that are not present in many types of interactions. It's to the author's credit that the book was so engrossing that I constantly had to catch myself over-enthusiastically misapplying its lessons.

In some respects the book is incomplete. For instance, (1) the author rightly advises someone in a PD to maximize her own benefit rather than base her success on how she compares to the other player (in fact, one of the interesting and in retrospect obvious points is that Tit-for-Tat will never do better than the particular partner it is playing with). This is sound advice, even if it may run contrary to some hardwired behavioral tendencies (e.g., see Mansbridge, ed., Beyond Self-Interest for some arguments to that effect), but it dismisses the possibility that there is a meta-game going on, in which someone with a sufficiently large proportion of "points" may then try to coerce the other. In practical terms: it's great if both of our living standards increase, but if yours increases at a rate that far exceeds mine, I may need to defect in order to prevent future domination. This would not be possible in a PD proper, but Axelrod points out that his analysis does not require payoffs to be symmetric (p. 17). To take another example, (2) the author demonstrates that short time horizons are conducive to an increased rational likelihood of defection, whether we are microbes or people. It turns out that getting old is a very dangerous thing to do, as formerly cooperative organisms will stop being so cooperative. So why don't we systematically screw over old people?

Of course, this is not in any way a critique of the book. And I am aware that these are elementary points. There are answers to these types of problems. With respect to (1), some might say that I just didn't describe a PD after all. Others might say that I did do so, but that it is a PD that is nested in a larger game. With respect to (2), there are certainly stories of social stability, etc., that can be told. I bring this up not to try to point out "flaws" (again: they are not flaws), but actually to say that even the "incomplete" bits left me fascinated.

I have no idea if this is objectively a five-star book, but I couldn't give it anything less given its importance and the effect it had on me while I read it.

(edit: actually, I just remembered that he does address (1) to some extent in his discussion of the bombing of Pearl Harbor)
Steven Peterson
Author of 19 books · 307 followers
February 9, 2010
Robert Axelrod’s “The Evolution of Cooperation” is a classic in our understanding of why cooperation occurs in humans. The book begins with a simple question (Page vii): “When should a person cooperate, and when should that person be selfish, in an ongoing interaction with another person?” The ultimate explanation for the choice, according to Axelrod (and evolutionary theorist William Hamilton) is evolution. This is thoroughly discussed in Chapter 5, which outlines how cooperation could evolve as an adaptive behavior within a species.

One key part of this book is a round-robin tournament in which a variety of strategies are tested in an "Iterated Prisoner's Dilemma" game. The Prisoner's Dilemma is one of the many games used in what is called game theory. The premise is simple: two thieves have been caught by the police. The police offer a deal: rat out your partner, and s/he gets a tough sentence while the turncoat gets off. If both rat one another out, they both get a tougher sentence (although not as bad as the sentence handed to someone who keeps quiet while the other squeals). The assumption is that neither trusts the other; they rat one another out; and they are both worse off than if they had kept their mouths shut.

One approach to the Prisoner's Dilemma did the best: Anatol Rapoport's entry, "TIT FOR TAT." Here, one keeps one's mouth shut, for example, as long as one's confederate does not rat him/her out. If the other person rats one out, you turn the tables and rat him/her out. If the other player quits ratting out, then one ceases ratting the other out. This was the simplest of the entries, according to Axelrod. But it is the one that worked best. In short: cooperate as long as the other player cooperates; be nasty when the other player is nasty; and don't hold grudges or keep punishing once the other person starts cooperating again.

Of course, things are actually more complex than this, but the above gives a sense of the nature of the winning strategy. It does raise questions about the common assumption that everyone is selfish and will always do what is needed to advance their own interests at the expense of others. And it begins to suggest approaches to understanding "altruism," self-sacrifice that benefits others (and would also benefit the altruist in the long run).

At any rate, this is a thought provoking book and a genuine classic, with many applications in public life and in our own daily lives.
Ricardo Moreno Mauro
462 reviews · 27 followers
July 26, 2021
Why do we cooperate? Why are we sometimes betrayed? Is it better to follow the rules or to be a cheat? Is it because of punishments, or because of something more?
This book explores how different computer programs interact and how, depending on those interactions, some win and some don't. Guess what: the ones that get the best results are the ones that cooperate and are "good people." The big message is that by cooperating we really do win, as a society or as a group of people.

It's a fascinating book.


R.
28 reviews · 1 follower
March 13, 2023
"El verdadero fundamento de la cooperación no es la confianza, sino la perdurabilidad de la relación".

Un libro sobre teoría de juegos revestido de expresiones de carácter ético que trata de demostrar una suerte de principio ético-matemático que no formula jamás.
Sietze
66 reviews
March 11, 2022
How do you win the iterated prisoner's dilemma?

Never defect first. After that, always reciprocate the opponent's behavior. Want to know why? Read the book.

A nice book about game theory, more specifically about the iterated prisoner's dilemma. It answers the question above and also explains what this means for our lives, organizations, nations, etc. A bit technical, but not too much. Highly recommended.
Anuj Apte
20 reviews · 2 followers
July 13, 2021
Remarkably insightful for such a small book. Life and the world are full of iterated prisoner's dilemmas, and this book is great if you want to learn how to get better at solving them. The discussions on ecology, evolution, international conflict, and the stability of hierarchies and labels were particularly illuminating.
Brian Powell
174 reviews · 32 followers
November 20, 2016
Axelrod takes on the problem of how cooperation can emerge in a world of self-seeking egoists without a central authority. The question has important implications for the evolution of cooperation among inherently selfish organisms in biological systems.

Axelrod begins by examining the problem from the standpoint of game theory. Specifically, he considers how cooperation might emerge in the course of an iterated prisoner’s dilemma. The key here is that the game is iterated – that is, the players meet again and again. This means that choices not only determine the outcome of the present move, but can also influence the later choices of the players. So, for example, it does not necessarily pay to defect like it would in a single round match. Axelrod’s approach was to solicit strategies from colleagues and professionals across a range of academic fields, play them against each other, and examine the characteristics of those that performed best.

While continuing interaction between two players is necessary for cooperation to emerge, it is not sufficient. Axelrod identifies four properties that tend to make a strategy successful: it should be nice (it should never be the first to defect), retaliatory (it should defect when defected against), forgiving (it should cooperate when cooperation is sought), and have clear behavior so that the other player can recognize the strategy and adapt to it. One entry in the prisoner's dilemma "tournament" possessed all of these qualities and it proved to be the most successful strategy overall. Called TIT FOR TAT, the strategy cooperates until defected against, after which it defects until it is cooperated with again.

Axelrod’s use of the prisoner’s dilemma is beautiful in its simplicity and power – when the wide range of possible strategies is filtered through this classic game-theoretic workhorse, the successful schemes are found to share a small number of simple characteristics.

Armed with the results from these initial game-theoretic considerations, Axelrod turns to the emergence of cooperation in biology, drawing on the seminal work of biologist W. D. Hamilton. Of particular interest is the question of how certain modes of cooperation evolve within a population of individuals and, given a prevailing strategy for dealing with conflict, whether it is possible for alternative strategies to “invade” the population and shift the global behavior. Is it possible for TIT FOR TAT to govern the cooperative interactions of individuals in a population? The goal is to identify strategies that are collectively stable: those that, once adopted by an entire population, cannot be invaded in the long run by any possible mutant strategy. Axelrod confirms that the strategy that always defects (ALL D) is collectively stable, and that no strategy that ever cooperates can invade a population using ALL D. This result, however, applies to individual mutants; consider instead what happens when a small cluster of TIT FOR TAT players invades a population of ALL D. In this arrangement, if the individuals in the TIT FOR TAT cluster interact with one another often enough, they will manage a higher average payoff than the ALL D players generate among themselves. And so TIT FOR TAT not only forms the basis of cooperative behavior, it appears to be the target strategy of any evolutionary process with the reward structure of the prisoner's dilemma.
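
For a feel of the numbers, here is a back-of-the-envelope Python sketch of the cluster argument. It is my own illustration using the standard payoff values (temptation 5, reward 3, punishment 1, sucker 0) and a discount parameter w, the probability that two players will meet again; it is not a reproduction of Axelrod's own calculation.

T, R, P, S = 5, 3, 1, 0   # temptation, reward, punishment, sucker's payoff

def v_tft_vs_tft(w):
    """Discounted value of mutual cooperation forever."""
    return R / (1 - w)

def v_tft_vs_alld(w):
    """TIT FOR TAT against ALL D: exploited once, then mutual defection."""
    return S + w * P / (1 - w)

def v_alld_vs_alld(w):
    """Discounted value of mutual defection forever."""
    return P / (1 - w)

def cluster_member_score(p, w):
    """Average score of a TIT FOR TAT newcomer who meets fellow cluster
    members a proportion p of the time and ALL D natives otherwise."""
    return p * v_tft_vs_tft(w) + (1 - p) * v_tft_vs_alld(w)

w = 0.9  # a high chance that any two players will interact again
native_score = v_alld_vs_alld(w)  # what the ALL D natives earn among themselves
for p in (0.00, 0.05, 0.20):
    print(f"p = {p:.2f}: cluster member scores {cluster_member_score(p, w):.2f}, "
          f"natives score {native_score:.2f}")

# With these values a lone TIT FOR TAT (p = 0) earns 9.00 and cannot invade,
# but a cluster whose members meet each other even 5% of the time earns 10.05,
# edging past the natives' 10.00.

This matches the qualitative result described above: an isolated nice mutant dies out, but even a small cluster of reciprocators can gain a foothold.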

This observation leads to an important conclusion of this work: Mutual cooperation can emerge in a world of egoists without central control within a cluster of individuals who rely on reciprocity. This has decisive implications for biological evolution, because it underscores the importance of reciprocity based generally on symbiosis rather than kinship or true altruism. Relatedness is not necessary for the emergence of cooperation – what matters is the probability that two individuals will meet again in the future, and that they know it. It’s an exciting validation of this proposal that there are many examples in nature of symbiotic relationships in which the individuals have developed the ability to recognize one another. This, to me, is why Axelrod’s work is so noteworthy and intriguing: using a simple, theoretical device he is able to develop a predictive model of cooperative behavior in complex, biological systems. There are limitations, to be sure, but the method appears to successfully resolve the broad characteristics of evolved cooperation.
Wendelle
1,740 reviews · 51 followers
July 22, 2018
This book is often touted as one of the major texts in international relations. It has applicability everywhere, from one's personal and professional dealings to the US's foreign relations with Kim and Putin. Its central idea is that the Tit-for-Tat strategy, or initial cooperation followed by penalizing an opponent's defection with defection, is the ideal strategy towards any partner with whom we engage in repeated interactions.

Tit-for-Tat is nice (does not defect first), provocable, forgiving, and consistent. It is robust even when faced with invasive attempts of other strategies, such as aggressive defection. This book devotes two chapters of case studies applying Tit-for-Tat in World War I trench warfare and in evolutionary biology. It also contains advice about how to practice Tit-for-Tat:
1) don't be envious
2) don't be first to defect
3) reciprocate both cooperation and defection
4) don't be too clever
We are also advised to adjust to Tit-for-2-Tat when a more forgiving stance is called for in a particularly tense political environment.
Tara
490 reviews · 28 followers
August 30, 2017
This book contained fascinating subject matter which was unfortunately greatly diminished by lackluster writing and presentation, and a significant lack of editing. I love math, and for a book to make math seem dull is kind of criminal. Also, there was way too much repetition. It was repetitive. And redundant. And, believe it or not, repetitive.

To sum up the points of interest, the most successful tactics to use in an iterated Prisoner’s Dilemma-type situation (and these situations are truly abundant in life) are: 1) avoidance of unnecessary conflict (don’t be a dick), 2) provocability in the face of an uncalled for defection (don’t be a doormat), 3) forgiveness after responding to a provocation (alas! no Sicilian-style blood feuds), and 4) clarity of behavior so that the other player can adapt to your pattern of action (it seldom pays to be inscrutable or erratic).

Also, envy is self-destructive, and it’s best not to shit where you eat.
30 reviews · 12 followers
April 14, 2019
This is a very good read, right from the start. But the best part is that as you reach the final chapters, you see its applications all around you. There are some very powerful ideas in there.
What will perhaps stay with me for a long time is the strength that comes from having a reputation as a bully. If you're willing to pay the costs needed to establish your reputation, the benefits might far exceed them. I see the sense in the US engaging in the occasional conflict despite the immediate costs, or in big companies severely undercutting new competitors.
What's more, it's just as important to respond to defections quickly. That is not something I do, since I tend to think it's best for me to do what helps me, not what hurts others. But if you get the reputation of being a pushover, there will be consequences. You need to retaliate, even if that hurts you in the short term.
I'll definitely go back and reread the final chapters someday.
3 reviews
November 18, 2017
Although this book was written over 30 years ago, it has a striking relevance today as the social sciences increasingly look towards ecological and evolutionary theories to help explain social phenomena. I thoroughly enjoyed the extended discussion around a relatively simple game (the prisoner's dilemma) that can characterise many social and commercial interactions.

Readers who are interested in experimental economics or complexity theory can draw many parallels between this work and modern streams of research, which highlights just how progressive Axelrod was with his research in the early 1980s.
John Kaufmann
674 reviews · 58 followers
May 23, 2015
A short book built around a big idea: how cooperation emerges. Axelrod describes an experiment in which players develop strategies that compete against each other in an iterated game of Prisoner's Dilemma to see which strategy fares best - cooperation, defection, punishment, or some combination thereof. The reason I gave the book 5 stars is that it stimulated my mind to ponder the implications and ask a host of what-if questions.
Jared Peterson
37 reviews · 1 follower
August 14, 2017
Probably one of the more interesting books I have read. It may not be for everybody, but it really helped me to have a more intuitive understanding of how societies and morality form, and how agents (people) make decisions when in a group context. After reading this, I look at almost all societal problems a little bit differently. It is Game Theory applied to society, and the conclusions are fascinating.
Melanie
168 reviews
November 30, 2017
I read this for my World Politics class that is heavily focused on International Relations theory. This class had a section on game theory and bargaining models and this book has definitely influenced the way I process relationships.
Jano Suchal
18 reviews · 86 followers
September 2, 2017
Loved it. A simple model that explains many real-world cooperation phenomena.
1 review
January 13, 2019
A foundational text in game theory. A great start, and not overly technical, although there is sufficient technical detail for those who enjoy it.
Sean Rosenthal
197 reviews · 26 followers
November 28, 2015
Interesting Quotes:

"The overall record of TIT FOR TAT [in iterated Prisoners' Dilemma competitions] is very impressive. To recapitulate, in the second round, TIT FOR TAT achieved the highest average score of the sixty-two entries in the tournament. It also achieved the highest score in five of the six hypothetical tournaments which were constructed by magnifying the effects of different types of rules from the second round. And in the sixth hypothetical tournament it came in second. Finally, TIT FOR TAT never lost its first-place standing in a simulation of future generations of the tournament. Added to its victory in the first round of the tournament, and its fairly good performance in laboratory experiments with human subjects, TIT FOR TAT is clearly a very successful strategy.

"Proposition 1 says that there is no absolutely best rule independent of the environment. What can be said for the empirical successes of TIT FOR TAT is that it is a very robust rule: it does very well over a wide range of environments . . . What accounts for TIT FOR TAT's robust success is its combination of being nice, retaliatory, forgiving, and clear. Its niceness prevents it from getting into unnecessary trouble. Its retaliation discourages the other side from persisting whenever defection is tried. Its forgiveness helps restore mutual cooperation. And its clarity makes it intelligible to the other player, thereby eliciting long-term cooperation."

-Robert Axelrod, the Evolution of Cooperation

--------------------------

"The significance of this proposition is that if everyone in a population is cooperating with everyone else because each is using the TIT FOR TAT strategy, no one can do better using any other strategy *providing* that the future casts a large enough shadow onto the present . . . One specific implication is that if the other player is unlikely to be around much longer because of apparent weakness, then the perceived value of w falls and the reciprocity of TIT FOR TAT is no longer stable . . .

"There are many other examples of the importance of long-term interaction for the stability of cooperation. It is easier to maintain the norms of reciprocity in a stable small town or ethnic neighborhood. Conversely, a visiting professor is likely to receive poor treatment by other faculty members compared to the way these same people treat their regular colleagues."

-Robert Axelrod, the Evolution of Cooperation

-----------------------------

"During World War I . . . [t]he live-and-let-live system was endemic in trench warfare. It flourished despite the best efforts of senior officers to stop it, despite the passions aroused by combat, despite the military logic of kill or be killed, and despite the ease with which the high command was able to repress any local efforts to arrange a direct truce. This is a case of cooperation emerging despite great antagonism between the players . . .

"[T]he historical situation in the quiet sectors along the Western Front was an iterated Prisoner's Dilemma. In a given locality, the two players can be taken to be the small units facing each other. At any time, the choices are to shoot to kill or deliberately to shoot to avoid causing damage. For both sides, weakening the enemy is an important value because it will promote survival if a major battle is ordered in the sector. Therefore, in the short run it is better to do damage now whether the enemy is shooting back or not. This establishes that mutual defection is preferred to unilateral restraint, and that unilateral restraint by the other side is even better than mutual cooperation. In addition, the reward for mutual restraint is preferred by the local units to the outcome of mutual punishment, since mutual punishment would imply that both units would suffer for little or no relative gain . . . Moreover, both sides would prefer mutual restraint to the random alternation of serious hostilities. Thus the situation meets the conditions for a Prisoner's Dilemma between small units facing each other in a given immobile sector . . .

"To the army headquarters, the important thing was to develop an offensive spirit in the troops. The Allies, in particular, pursued a strategy of attrition whereby equal losses in men from both sides meant a net gain for the Allies because sooner or later Germany's strength would be exhausted first. So at the national level, World War I approximated a zero-sum game in which losses for one side represented gains for the other side. But at the local level, along the front line, mutual restraint was much preferred to mutual punishment.

"Locally, the dilemma persisted: at any given moment it was prudent to shoot to kill, whether the other side did so or not. What made trench warfare so different from most other combat was that the same small units faced each other in immobile sectors for extended periods of time. This changed the game from a one-move Prisoner's Dilemma in which defection is the dominant choice, to an iterated Prisoner's Dilemma in which conditional strategies are possible. The result accorded with the theory's predictions: with sustained interaction, the stable outcome could be mutual cooperation based upon reciprocity. In particular, both sides followed strategies that would not be the first to defect, but that would be provoked if the other defected.

"Before looking further at the stability of the cooperation, it is interesting to see how cooperation got started in the first place. The first stage of the war, which began in August 1914, was highly mobile and very bloody. But as the lines stabilized, nonaggression between the troops emerged spontaneously in many places along the front. The earliest instances may have been associated with meals which were served at the same times on both sides of no-man's land . . . A key factor was the realization that if one side would exercise a particular kind of restraint, then the other might reciprocate. Similarities in basic needs and activities let the soldiers appreciate that the other side would probably not be following a strategy of unconditional defection . . .

"Once started, strategies based on reciprocity could spread in a variety of ways. A restraint undertaken in certain hours could be extended to longer hours. A particular kind of restraint could lead to attempting other kinds of restraint. And most importantly of all, the progress achieved in one small sector of the front could be imitated by the units in neighboring sectors . . .

"When a defection actually occurred, the retaliation was often more than would be called for by TIT FOR TAT. Two-for-one or three-for-one was a common response to an act that went beyond what was considered acceptable . . . There was probably an inherent damping process that usually prevented these retaliations from leading to an uncontrolled echo of mutual recriminations. The side that instigated the action might note the escalated response and not try to redouble or retriple it. Once the escalation was not driven further, it would probably tend to die out. Since not every bullet, grenade, or shell fired in earnest would hit its target, there would be an inherent tendency toward deescalation . . .

"[R]ituals of perfunctory and routine firing sent a double message. To the high command they conveyed aggression, but to the enemy they conveyed peace. The men pretended to be implementing an aggressive policy, but were not . . . Thus these rituals helped strengthen the moral sanctions which reinforced the evolutionary basis of the live-and-let live system.

"The live-and-let-live system that emerged in the bitter trench warfare of World War I demonstrates that friendship is hardly necessary for cooperation based upon reciprocity to get started. Under suitable circumstances, cooperation can develop even between antagonists."


-Robert Axelrod, The Evolution of Cooperation (math omitted)


---------------------------------------

"[E]nvy is self-destructive.

"Asking how well you are doing compared to how well the other player is doing is not a good standard unless your goal is to destroy the other player. In most situations, such a goal is impossible to achieve, or likely to lead to such costly conflict as to be very dangerous to pursue. When you are not trying to destroy the other player, comparing your score to the other's score simply risks the development of self-destructive envy. A better standard of comparison is how well you are doing relative to how well someone else could be doing in your shoes. Given the strategy of the other player, are you doing as well as possible? Could someone else in your situation have done better with this other player? This is the proper test of successful performance.2

"TIT FOR TAT won the tournament because it did well in its interactions with a wide variety of other strategies. On average, it did better than any other rule with the other strategies in the tournament. Yet TIT FOR TAT never once scored better in a game than the other player! In fact, it can't. It lets the other player defect first, and it never defects more times than the other player has defected. Therefore, TIT FOR TAT achieves either the same score as the other player, or a little less. TIT FOR TAT won the tournament, not by beating the other player, but by eliciting behavior from the other player which allowed both to do well. TIT FOR TAT was so consistent at eliciting mutually rewarding outcomes that it attained a higher overall score than any other strategy."

-Robert Axelrod, the Evolution of Cooperation

------------------------------

"Hierarchy and organization are especially effective at concentrating the interactions between specific individuals. A bureaucracy is structured so that people specialize, and so that people working on related tasks are grouped together. This organizational practice increases the frequency of interactions, making it easier for workers to develop stable cooperative relationships. Moreover, when an issue requires coordination between different branches of the organization, the hierarchical structure allows the issue to be referred to policy makers at higher levels who frequently deal with each other on just such issues. By binding people together in a long-term, multilevel game, organizations increase the number and importance of future interactions, and thereby promote the emergence of cooperation among groups too large to interact individually. This in turn leads to the evolution of organizations for the handling of larger and more complex issues."

-Robert Axelrod, The Evolution of Cooperation

--------------------------------

"A label can be defined as a fixed characteristic of a player that can be observed by other players when the interaction begins. When there are labels, a strategy can determine a choice based not only on the history of the interaction so far, but also upon the label assigned to the other player.

"One of the most interesting but disturbing consequences of labels is that they can lead to self-confirming stereotypes. To see how this can happen, suppose that everyone has either a Blue label or a Green label. Further, suppose that both groups are nice to members of their own group and mean to members of the other group. For the sake of concreteness, suppose that members of both groups employ TIT FOR TAT with each other and always defect with members of the other group. And suppose that the discount parameter, w, is high enough to make TIT FOR TAT a collectively stable strategy (in accordance with proposition 2 of chapter 3). Then a single individual, whether Blue or Green, can do no better than to do what everyone else is doing and be nice to one's own type and mean to the other type.

"This incentive means that stereotypes can be stable, even when they are not based on any objective differences. The Blues believe that the Greens are mean, and whenever they meet a Green, they have their beliefs confirmed. The Greens think that only other Greens will reciprocate cooperation, and they have their beliefs confirmed. If you try to break out of the system, you will find that your own payoff falls and your hopes will be dashed. So if you become a deviant, you are likely to return, sooner or later, to the role that is expected of you. If your label says you are Green, others will treat you as a Green, and since it pays for you to act like Greens act, you will be confirming everyone's expectations.

"This kind of stereotyping has two unfortunate consequences: one obvious and one more subtle. The obvious consequence is that everyone is doing worse than necessary because mutual cooperation between the groups could have raised everyone's score. A more subtle consequence comes from any disparity in the numbers of Blues and Greens, creating a majority and a minority. In this case, while both groups suffer from the lack of mutual cooperation, the members of the minority group suffer more. No wonder minorities often seek defensive isolation."

--------------------------------

"In the territorial system, things work differently. By getting five of the rules which are not nice to apologize, NYDEGGER wins a great many converts from its neighbors. When one of these apologizers is next to NYDEGGER and the other three neighbors are nice rules, NYDEGGER is likely to do better than any of its four neighbors or even any of their neighbors. In this way, it can convert not only the apologist, but some or all of its other neighbors as well. Thus, in a social system based on diffusion by imitation, there is a great advantage to being able to attain outstanding success, even if it means that the average rate of success is not outstanding. This is because the occasions of outstanding success win many converts."

-Robert Axelrod, The Evolution of Cooperation

--------------------------

"Ordinary business transactions are also based upon the idea that a continuing relationship allows cooperation to develop without the assistance of a central authority. Even though the courts do provide a central authority for the resolution of business disputes, this authority is usually not invoked. A common business attitude is expressed by a purchasing agent who said that 'if something comes up you get the other man on the telephone and deal with the problem. You don't read legalistic contract clauses at each other if you ever want to do business again.' This attitude is so well established that when a large manufacturer of packaging materials inspected its records it found that it had failed to create legally binding contracts in two-thirds of the orders from its customers. The fairness of the transactions is guaranteed not by the threat of a legal suit, but rather by the anticipation of mutually rewarding transactions in the future.

"It is precisely when this anticipation of future interaction, breaks down that an external authority is invoked. According to Macaulay, perhaps the most common type of business contracts case fought all the way to the appellate courts is an action for a wrongful termination of a dealer's franchise by a parent company. This pattern of conflict makes sense because once a franchise is ended, there is no prospect for further mutually rewarding transactions between the franchiser and the parent company. Cooperation ends, and costly court battles are often the result."

-Robert Axelrod, The Evolution of Cooperation

-----------------------------

"I came to this project believing one should be slow to anger. The results of the Computer Tournament for the Prisoner's Dilemma demonstrate that it is actually better to respond quickly to a provocation. It turns out that if one waits to respond to uncalled for defections, there is a risk of sending the wrong signal. The longer defections are allowed to go unchallenged, the more likely it is that the other player will draw the conclusion that defection can pay. And the more strongly this pattern is established, the harder it will be to break it. The implication is that it is better to be provocable sooner, rather than later. The success of TIT FOR TAT certainly illustrates this point. By responding right away, it gives the quickest possible feedback that a defection will not pay."

-Robert Axelrod, The Evolution of Cooperation
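
One way to see the cost of waiting is to pit a quick retaliator and a slow one against the same exploiter. The sketch below is my own illustration, not anything from the book: the "bully" rule is a made-up strategy that pushes whenever it was not punished on the previous move, and TIT FOR TWO TATS is the familiar slower variant that retaliates only after two consecutive defections.

    # Quick versus slow provocability against an exploiter (illustrative sketch).
    PAYOFF = {('C', 'C'): (3, 3), ('C', 'D'): (0, 5),
              ('D', 'C'): (5, 0), ('D', 'D'): (1, 1)}

    def tit_for_tat(own, other):
        return 'C' if not other else other[-1]

    def tit_for_two_tats(own, other):
        # Slower to anger: defects only after two consecutive defections.
        return 'D' if other[-2:] == ['D', 'D'] else 'C'

    def bully(own, other):
        # Made-up exploiter: defects whenever the other side just cooperated,
        # backs off for a move whenever it was just punished.
        if not other:
            return 'D'
        return 'D' if other[-1] == 'C' else 'C'

    def play(rule_a, rule_b, rounds=200):
        hist_a, hist_b, score_a, score_b = [], [], 0, 0
        for _ in range(rounds):
            move_a, move_b = rule_a(hist_a, hist_b), rule_b(hist_b, hist_a)
            pay_a, pay_b = PAYOFF[(move_a, move_b)]
            score_a, score_b = score_a + pay_a, score_b + pay_b
            hist_a.append(move_a)
            hist_b.append(move_b)
        return score_a, score_b

    for victim in (tit_for_tat, tit_for_two_tats):
        v, b = play(victim, bully)
        print(f'{victim.__name__:17s} vs bully: {v} to {b}')
    # The quick retaliator holds the bully to an even score; the slow one keeps
    # leaving defections unanswered, so the bully profits and the victim pays.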

Profile Image for Eva.
487 reviews1 follower
March 16, 2013
A bit academic, but still pretty easy to read and a fun exploration of the prisoner's dilemma in a wide variety of hypothetical and concrete contexts.

You may also recognize parts of this from a Radiolab episode :)

Notes:

We all know that the success of the tit-for-tat strategy depends on repeated interactions, the assumption that "the future casts a large enough shadow onto the present." So "one specific implication is that if the other player is unlikely to be around much longer because of apparent weakness, then...the reciprocity of tit for tat is no longer stable. We have Caesar's explanation of why Pompey's allies stopped cooperating with him. 'They regarded his [Pompey's] prospects as hopeless and acted according to the common rule by which a man's friends become his enemies in adversity.' Another example is the case where a business is on the edge of bankruptcy....[Even] his best customers begin refusing payment for merchandise, claiming defects in quality, failure to meet specifications, tardy delivery, or what-have-you. The great enforcer of morality in commerce is the continuing relationship." - p59-60

The Live-and-Let-Live System in World War I:

“Locally, the [prisoner’s] dilemma persisted: at any given moment it was prudent to shoot to kill, whether the other side did so or not. What made trench warfare so different from most other combat was that the same small units faced each other in immobile sectors for extended periods of time. This changed the game from a one-move Prisoner’s Dilemma in which defection is the dominant choice, to an iterated Prisoner’s Dilemma in which conditional strategies are possible.” – p77
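
A quick way to check both halves of that claim (not from the book; the standard payoffs T=5, R=3, P=1, S=0 are assumed): in a single move, defection pays no matter what the other side does; once the game repeats with a large enough discount parameter w, defecting forever against a reciprocating rule like tit for tat stops paying. This only compares permanent defection with permanent cooperation - the book's full stability argument also rules out alternating defection - but it shows where the threshold comes from.

    # One-shot versus iterated Prisoner's Dilemma (illustrative payoffs).
    T, R, P, S = 5, 3, 1, 0

    # One move: defection dominates -- it does better against a cooperator
    # (T > R) and better against a defector (P > S).
    assert T > R and P > S

    # Iterated play against a reciprocator, with discount parameter w:
    # cooperating forever earns R each move; defecting earns T once, then P.
    def cooperate_forever(w):
        return R / (1 - w)

    def defect_forever(w):
        return T + w * P / (1 - w)

    for w in (0.3, 0.5, 0.9):
        print(f'w={w}: cooperate={cooperate_forever(w):.1f} '
              f'defect={defect_forever(w):.1f}')
    # With these payoffs cooperation at least matches defection once
    # w >= (T - R) / (T - P) = 0.5: the shadow of the future is long enough.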

“In one section the hour of 8 to 9 A.M. was regarded as consecrated to ‘private business,’ and certain places indicated by a flag were regarded as out of bounds by the snipers on both sides.”

“Just as important as getting cooperation started were the conditions that allowed it to be sustainable. The strategies that could sustain mutual cooperation were the ones which were provokable. During the periods of mutual restraint, the enemy soldiers took pains to show each other that they could indeed retaliate if necessary. For example, German snipers showed their prowess to the British by aiming at spots on the walls of cottages and firing until they had cut a hole.” – p79

“What finally destroyed the live-and-let-live system was the institution of a type of incessant aggression that the headquarters could monitor. This was the raid…Raiders were ordered to kill or capture the enemy in his own trenches. If the raid was successful, prisoners would be taken; and if the raid was a failure, casualties would be proof of the attempt. There was no effective way to pretend that a raid had been undertaken when it had not. And there was no effective way to cooperate with the enemy in a raid because neither live soldiers nor dead bodies could be exchanged.” – p83

“Additional developments are the emergence of ethics and ritual….’I was having tea with A Company when we heard a lot of shouting and went out to investigate. We found our men and the Germans standing on their respective parapets. Suddenly a salvo arrived but did no damage. Naturally both sides got down and our men started swearing at the Germans, when all at once a brave German got on to his parapet and shouted out ‘We are very sorry about that; we hope no one was hurt. It is not our fault, it is that damned Prussian artillery.’” – p85

“Just as the ability to recognize the other player is invaluable in extending the range of stable cooperation, the ability to monitor cues for the likelihood of continued interaction is helpful as an indication of when reciprocal cooperation is or is not stable. In particular, when the relative importance of future interactions, w, falls below the threshold for stability, it will no longer pay to reciprocate the other’s cooperation. Illness in one partner leading to reduced viability would be one detectable sign of declining w. Both animals in a partnership would then be expected to become less cooperative. Aging of a partner would be very like disease in this respect, resulting in an incentive to defect so as to take a one-time gain when the probability of future interaction becomes small enough.

These mechanisms could operate even at the microbial level. Any symbiont that still has a chance to spread to other hosts by some process of infection would be expected to shift from mutualism to parasitism when the probability of continued interaction with the original host lessened. In the more parasitic phase, it could exploit the host more severely by producing more of the forms able to disperse and infect. This phase would be expected when the host is severely injured, has contracted some other wholly parasitic infection that threatens death, or when it manifests signs of age. In fact, bacteria that are normal and seemingly harmless or even beneficial in the gut can be found contributing to sepsis in the body when the gut is perforated, implying a severe wound….One tumor-causing virus, that of Burkitt’s lymphoma, may have alternatives of slow or fast production of infectious stages…The point of interest is that, as some evidence suggests, lymphoma can be triggered by the host’s contracting malaria. The lymphoma grows extremely fast and so can probably compete with malaria for transmission (possibly by mosquitoes) before death results.” – p103

“Tit for Tat won the tournament because it did well in its interactions with a wide variety of other strategies. On average, it did better than any other rule with the other strategies in the tournament. Yet Tit for Tat never once scored better in a game than the other player! In fact, it can’t. It lets the other player defect first, and it never defects more times than the other player has defected. Therefore, Tit for Tat achieves either the same score as the other player, or a little less. Tit for Tat won the tournament, not by beating the other player, but by eliciting behavior from the other player which allowed both to do well.” – p112

“If the other player is not likely to be seen again, defecting right away is better than being nice. This fact has unfortunate implications for groups known to move from one place to another….In a California community, Gypsies were again found not to pay all of a doctor’s bill, but municipal fines were paid promptly. These fines were usually for breaking garbage regulations. This was among a group of Gypsies who returned to the same town every winter. Presumably, the Gypsies knew that they had an ongoing relationship with the garbage collection service of that town, and could not shop around for another service. Conversely, there were always enough doctors in the area for them to break off one relationship and start another when necessary.” – p115

“The role of time perspective has important implications for the design of institutions. In large organizations, such as business corporations and governmental bureaucracies, executives are often transferred from one position to another approximately every two years. This gives executives a strong incentive to do well in the short run….This gives two executives a mutual incentive to defect when either of their terms is drawing to an end. The result of rapid turnover could therefore be a lessening of cooperation within the organization.

As pointed out in chapter 3, a similar problem arises when a political leader appears to have little chance of reelection. The problem becomes even more acute with a lame duck. From the point of view of the public, a politician facing an end of career can be dangerous because of the increased temptation to seek private goals rather than maintain a pattern of cooperation with the electorate for the attainment of mutually rewarding goals.

Since the turnover of political leaders is a necessary part of democratic control, the problem must be solved another way. Here, political parties are useful because they can be held accountable by the public for the acts of their elected members. The voters and the parties are in a long-term relationship, and this gives the parties an incentive to select candidates who will not abuse their responsibilities. And if a leader is discovered giving in to temptation, the voters can take this into account in evaluating the other candidates of the same party in the next election. The punishment of the Republican party by the electorate after Watergate shows that parties are indeed held responsible for the defections of their leaders.” – p183
