Today I turn to the comment made by Annie Rhodes on the first posting on this Blog (Disappearing Bonobos and Our Faint Hope for Altruism). Recall that in that post I recalled a conversation with John, the molecular biologist, in which we speculated about the possibility of splicing the altruism gene (recently found in our closest cousin, Pan paniscus) into the human genome. (In subsequent posts I have argued that altruism is necessary to rally us to make the sacrifices clearly needed to leave the planet in shape for future generations. No known form of cooperation or reciprocity will do. Altruism is necessary, although not sufficient.) I also noted in that first post that if the altruism gene were somehow successfully introduced into H. sapiens, it would then have to be isolated and protected, as ordinary humans would surely eradicate our better brethren before they could reach critical mass and make a difference.
Annie, also a biologist, suggested that as long as we were hypothetically splicing altruism into our genome, we might as well make it dominant. But then she posed the very creative follow-on question, and I am paraphrasing: would a fictional future world of altruistic humans then implode under the weight of the sort of forces that thwarted past efforts at creating Utopian communities? That is, once H. sapiens was stripped of his single-minded selfishness, doesn’t social science theory, especially economics, predict that he will lack the incentives to make an economic society function effectively? Might we see altruistic humans gravitate first to socialism, and then to the dismal denouement of past socialist experiments?
This is an immensely creative and challenging set of questions, to which I do not wish to imply that I have answers, at least not to the whole of it. But I am excited to try to break this problem down into its parts.
First I will look at the premise: would making the altruism gene dominant change the likely result that the good guys would be killed off by the rest of us before they could save us? To examine this question, I need to backtrack and return to one of my favorite topics these days – my idea of a Cooperative Tipping Point or CTP.
Imagine a structured social experiment in which there are only two types of people. The first type is the Pure Competitor, PC, who is rational, self-regarding and competitive, and seeks only to maximize his own payoff. The second is the Contingent Cooperator, CC, who also seeks to maximize his payoff, but is aware of social norms or understandings that may create opportunities to engender cooperation, and so enhance the well-being of both himself and others simultaneously. (This experiment can be written down mathematically, and can be played experimentally, and both have been done many times, so this is not remotely speculative.) Note that the CC player is not a glutton for punishment. He wants to maximize his payoff, just as the PC player does. But he is willing to take a chance and initially offer to cooperate at some cost to himself. If the offer is accepted, he will continue to cooperate, but if it is rejected, he will compete. In Robert Axelrod’s The Evolution of Cooperation, this strategy is famously called Tit for Tat: TFT cooperates if his rival cooperated with him last round, and competes if his rival competed with him last round. Not a noble affect, but one that experimentally does the job quite nicely.
The way this experiment is usually designed, the PC player ties other PC players (they compete with each other throughout their time together). The PC player also beats the CC player (the PC player takes advantage of the CC player in their first interaction, and afterward they compete with each other – but the PC player retains the small upfront gain). Hence, the PC strategy is a dominant strategy in the parlance of game theory: it is the better strategy against other PCs and it is the better strategy against CCs.
Yet, in a wide variety of social and experimental settings, many people “irrationally” behave as if they are CCs – Contingent Cooperators – rather than PCs. Further – and this is the big kicker – they often do better than the “rational” competitors in the long run.
It is actually not hard to see why this is true. Imagine a version of this experiment in which two people meet and earn $1 each for competing, $3 each for cooperating, and $5 if one competes and rejects the cooperative advances of the other. The rejected cooperator earns nothing in such an interaction. If people live in a large, anonymous society in which they never meet the same person more than once, the only sustainable behavior would be non-cooperative. If you think the next stranger to come along will compete, it is better to compete as well ($1 > $0). If you think the next stranger to come along will cooperate, it is still better to compete ($5 > $3). No matter what that next stranger will do, you do better not to trust him, and to just compete. You – and eventually everyone else – earn $1 per interaction.
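That dominance logic can be written out as a tiny check. Here is a minimal sketch in Python, using the dollar figures just given (the table and its name are my own illustration, not drawn from any particular experiment):

```python
# One-shot payoffs from the example above, from my point of view.
# "C" = cooperate, "D" = compete.
PAYOFF = {
    ("C", "C"): 3,  # mutual cooperation
    ("C", "D"): 0,  # my cooperative advance is rejected: I earn nothing
    ("D", "C"): 5,  # I exploit a cooperator
    ("D", "D"): 1,  # mutual competition
}

# Competing pays more no matter what the stranger does, so it is
# the dominant strategy in a one-shot, anonymous encounter.
assert PAYOFF[("D", "C")] > PAYOFF[("C", "C")]  # $5 > $3
assert PAYOFF[("D", "D")] > PAYOFF[("C", "D")]  # $1 > $0
print("competing dominates in the one-shot game")
```

Both comparisons go the competitor's way, which is exactly why a large, anonymous society settles into the $1-per-interaction rut.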
But what if society is smaller than this? What if you can be expected to meet people on ten occasions, and they will remember how you last treated them? As so many academic disciplines know well, the size of the social unit and the value placed on one’s reputation in that group are critically important on many levels. To see how it might matter here, think about it this way. Now a PC earns $10 in total from his long-term relationships with other PCs ($1 each of ten times). He earns $5 the first time he meets a CC, but then $1 in each of nine later interactions, for a total of $14 (= $5 + $1(9)). The CC in turn earns only $9 from his interactions with PCs (he loses a dollar trying unsuccessfully to solicit cooperative behavior each time he meets a new PC, but then competes with him thereafter). Note that the PC earns $10 from other PCs and $14 from CCs. The CC earns only $9 from interacting with PCs, but what does he earn in his relationships with other CCs? When they meet, they both tentatively cooperate, hoping against hope, and they each earn $3. Having learned the other was willing to “irrationally” cooperate, they continue to do so, earning $3 each time they meet, and thus $30 overall.
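Those lifetime totals can be reproduced with a short simulation. The following is a sketch, assuming ten rounds per relationship and the dollar figures above (the function and variable names are mine):

```python
def match_totals(strategy_a, strategy_b, rounds=10):
    """Total lifetime payoff to each player in a repeated match.
    'PC' always competes; 'CC' plays tit-for-tat, opening cooperatively."""
    payoff = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
              ("D", "C"): (5, 0), ("D", "D"): (1, 1)}
    last_a = last_b = "C"   # a CC's opening move is cooperation
    total_a = total_b = 0
    for _ in range(rounds):
        move_a = "D" if strategy_a == "PC" else last_b  # TFT copies the rival
        move_b = "D" if strategy_b == "PC" else last_a
        pay_a, pay_b = payoff[(move_a, move_b)]
        total_a += pay_a
        total_b += pay_b
        last_a, last_b = move_a, move_b
    return total_a, total_b

print(match_totals("PC", "PC"))  # (10, 10)
print(match_totals("PC", "CC"))  # (14, 9)
print(match_totals("CC", "CC"))  # (30, 30)
```

The printed totals match the arithmetic in the text: $10 for PC pairs, $14 versus $9 in mixed pairs, and $30 each for two CCs.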
Assume further that if one strategy proves better in a given environment than the other, then a selection mechanism akin to evolution takes place: the less successful strategy is partially replaced by the more successful strategy, and the next phase of the experiment begins.
What happens in our experiment? We know that PCs do somewhat better against one another than CCs do against them, as the cooperators are at first taken advantage of at some cost to themselves. But the CCs do much better with other CCs than the PCs do with CCs. Even though the CCs tie each other, and the PCs beat the CCs, the CCs do so much better with one another that they outdo what the “winning” competitors earn against them. So it should be clear that if there is a critical mass, or tipping point, of cooperators in society, then cooperation will thrive.
In our example, it turns out that as long as the initial population of CCs is greater than about 5.9% of the population, the cooperators will do better than the competitors. (Do the algebra for yourself; it is not hard.) After all the dust settles, some competitors will “go extinct” (convert to cooperation) and the share of cooperation will go up. What happens then? Well, the more cooperators, the better it is for cooperation, so cooperation evolves even more dramatically in later phases of the game. On the other hand, if at the outset the population of cooperators is less than 5.9%, then the PCs will do better, and in the next phase of the experiment there will be even fewer cooperators. This makes it even worse for cooperation, and in each subsequent phase the share of cooperation shrinks even further.
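If you would rather let the machine do the algebra: the 5.9% figure can be reproduced by setting the two expected lifetime payoffs equal, assuming people are matched at random and using the totals above ($10 and $14 for the PC; $9 and $30 for the CC). A sketch (the function name is mine):

```python
from fractions import Fraction

def expected_payoffs(p):
    """Expected lifetime payoff per relationship when a fraction p
    of the population are CCs and the rest are PCs."""
    pc = (1 - p) * 10 + p * 14   # PC: $10 vs PCs, $14 vs CCs
    cc = (1 - p) * 9 + p * 30    # CC: $9 vs PCs, $30 vs CCs
    return pc, cc

# Setting the two equal: 10 + 4p = 9 + 21p, so p* = 1/17.
p_star = Fraction(1, 17)
assert expected_payoffs(p_star)[0] == expected_payoffs(p_star)[1]
print(float(p_star))  # 0.0588..., i.e. roughly 5.9%
```

Above 1/17 the CCs' expected payoff is the larger one; below it, the PCs'.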
This is an unstable equilibrium. If cooperation begins with a share of behavior above the cooperative tipping point (which in this example is rather small), cooperation eventually takes over, and you have a very rich cooperative society with an average income of $30 per lifetime relationship. If, on the other hand, cooperation is just a small share of the initial society, it doesn’t stand a chance, and eventually dies out, leaving you with a poor, purely competitive society, with everyone earning $10 per lifetime relationship. Everything depends upon the good luck of having a critical mass or tipping point of cooperators at the outset of the experiment. Too few, and society becomes aggressively competitive; more than enough, and society moves to a blissfully cooperative equilibrium. Further, a completely cooperative society is stable against invasion: there is just no point for a cluster of PCs to jump in, as they will do less well in the minority than they would do if they cooperated like everybody else.
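The instability can be watched in a toy version of the phase-by-phase selection described earlier. The sketch below assumes a standard discrete replicator update (scaling the CC share by its payoff relative to the population average); the function name and the number of phases are arbitrary choices of mine:

```python
def evolve(p, phases=200):
    """Crude replicator dynamics: each phase, the CC share p grows or
    shrinks with CC's expected payoff relative to the population average."""
    for _ in range(phases):
        pc = (1 - p) * 10 + p * 14        # PC expected lifetime payoff
        cc = (1 - p) * 9 + p * 30         # CC expected lifetime payoff
        avg = (1 - p) * pc + p * cc       # population average
        p = min(max(p * cc / avg, 0.0), 1.0)
    return p

print(evolve(0.07))  # start just above ~5.9%: cooperation takes over (near 1)
print(evolve(0.05))  # start just below it: cooperation dies out (near 0)
```

Two starting points straddling the tipping point end up at opposite corners, which is exactly what "unstable equilibrium" means here.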
Before I move on, there are interesting points to be made. First, I would argue that this example does not excessively value reputation, and hence cooperation. I think it is quite common to interact personally and professionally many more than ten times with others in one’s community. So the CTP may well be overstated: perhaps only 1% of a society needs to begin cooperating to lead the way? The actual percentage is impossible to pin down, but cooperative tipping points, as critical as they are, may be quite small. Second, the payoff figures I used are not entirely arbitrary: they are the numbers used in thousands of such experiments. Nonetheless, one should care about whether the results are robust to changes in those numbers (for example, what happens if PCs earn $2 from each interaction?). Some simple algebra will show that the CTP is still rather low. Third, the size of the community is critical. In very large societies, random meetings of strangers occur frequently and the number of expected interactions with most people is low. This makes cooperation harder to sustain. In small communities it is easier to maintain the like-mindedness and high levels of interaction needed to sustain cooperative behavior. The folk myth of small town life, or the ethnic enclave, is actually based in solid theory. Fourth, and this is a more subtle point, one must not forget that the CCs above are contingent cooperators, meaning they cooperate long term only with those who cooperate with them. They are not unnecessarily self-sacrificing (although they take a chance at first). If they were universal cooperators, UCs, and cooperated with everyone, without contingency, the results would be very different. In fact, perhaps surprisingly, UCs cannot survive against PCs in any number, and if an invading force of PCs comes upon a happy community of UCs, the result is a disaster for cooperation. The only way universal cooperators survive is by isolating themselves.
Think of the Amish for the clearest example of this (and think of the character John Book in Witness for a brilliant example of how an outsider to that value system cannot in the end conform, despite great motivation to do so). Fifth, and this is related to the last point, universal cooperators, UCs, actually make it harder, not easier, for CCs to survive. Imagine a small wagon train of UCs comes upon our earlier experiment of CCs and PCs and asks to join the society. You might think intuitively that this will advance the cause of long term cooperation – after all, a wagon train of Amish are surely not going to advance the cause of competition, are they? But you would be wrong. Keep in mind that the CCs lose only a little in their relationships with the PCs ($5 less than the PCs take from them) and gain a lot from engendering maximal cooperation with their own kind ($20 more than PC-PC relationships yield). But UCs are easy prey for the PCs, and vastly increase the payoff to their non-cooperative behavior. PCs earn $50 against every UC they meet, but CCs earn only $30 (they treat them as cooperators, not suckers). After the UCs arrive, the CTP goes up! There is even a chance that a community formerly destined to be fully cooperative in the long run will now, instead, plummet towards purely competitive behavior.
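The arrival of UCs can be folded into the same break-even algebra. Here is a sketch, again assuming random matching, where u is the UC share of the population (the function name and the derivation in its docstring are my own illustration):

```python
def ctp_with_ucs(u):
    """CC share c* needed to break even against PCs when a fraction u of
    the population are UCs. Lifetime totals: a PC earns 10 vs PCs, 14 vs
    CCs, and 50 vs UCs; a CC earns 9 vs PCs, 30 vs CCs, and 30 vs UCs.
    Setting expected payoffs equal, 10 + 4c + 40u = 9 + 21c + 21u,
    gives c* = (1 + 19u) / 17."""
    return (1 + 19 * u) / 17

print(round(ctp_with_ucs(0.00), 3))  # 0.059: the original ~5.9% tipping point
print(round(ctp_with_ucs(0.05), 3))  # 0.115: a 5% wagon train of UCs nearly doubles it
```

With no UCs the formula collapses to the familiar 1/17; a UC share of just 5% pushes the required CC share to about 11.5%, which is the "CTP goes up" effect in numbers.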
For an example of this, consider a city plagued by street crime largely consisting of the mugging of defenseless victims. Suppose the police successfully reduce the level of muggings (with police decoys, for example – woe betide the mugger who tried to outrun Dennis Pelkey, formerly of the Schenectady PD, as Dennis was many times the police world champion in the sprints). And then imagine that overnight a flock of defenseless old ladies takes to the streets with their life savings in their pocketbooks. You can see that the mugging life strategy would be re-invigorated by a new and more profitable incentive structure.
There is a sixth point I need to make, of a different sort and most relevant to our discussion. I have not yet defined altruism technically, but I ask that you first consider the following suggestive idea. Consider the community in which cooperation is below the CTP. That is, cooperators are doing worse than competitors, and expect that in the long run, unless things radically change, they are doomed. But they stick it out anyway, perhaps because they are “activists” hoping to change the hearts and minds of a critical mass of others. In so doing, they may actually believe that they will, even selfishly, profit from their hard-headed commitment to cooperation, even when it doesn’t pay.
Or they may feel, as did Martin Luther King, Jr. in 1963, that they would not get to the Promised Land themselves, but that they had to sacrifice to hasten the arrival of others. In other words, early, below-tipping-point cooperators may actually be altruists. They may be sacrificing their interests now, and even long term, to keep up hope. After all, every contingent cooperator makes it more profitable, and thus more likely, for the next one to come along. My very first comment came from Anonymous, who said rightly, I think, that some people are now clean, green vegetarians not because they think they can themselves save the planet, but because they know someone must be first – and altruistically blaze that path for others.
In this sense, below-tipping-point cooperators (such as people who are still fighting to create multilateral, effective climate change action plans against what our commenter Richard rightly noted are astonishingly bad odds) might be altruists holding on for the rest of us to join them. And in that sense, Annie may be right: making the gene dominant, which would obviously vastly increase the share of the population expressing it, might get us over the tipping point and make all the difference in the world. By this version of altruism, then, the dominance, or increased expression, of altruism could well matter.
I must not deceive, however: this is not the ordinary use of altruism in the research literature. What most researchers mean by altruism is that altruists care about themselves, but they also care about other things. Just what those other things are varies by research project, but common ideas include these: one, altruists care not only about their own direct well-being, but also about the fairness of the process by which people are rewarded; two, altruists care about the fairness of outcomes, whatever the process; three, altruists care about the well-being of others enough to at least pay a small price to make a big difference in the lives of others. These are just a few, and the most interesting definitions of altruism I will actually save for another posting, as they need more preparation to be understood. But for now let me be clear that altruists are not identical to cooperators (who see a chance to profit themselves by expanding the capacities of human relationships to create win-win situations). They are instead willing to pay a price out of their own rewards to serve other ends (such as fairness). In more experiments than can be recounted, it has been shown that people of all walks of life from all over the globe refuse to take what they consider an unfair share of the rewards, even if they have the power to do so without fear of any recrimination. Not everyone, of course. But most of us care about fairness, and in this sense most of us are altruists. There is hardly a reader among you who has not watched the tearful fundraising efforts of charity groups soliciting your coffee money to save a child from a harsh existence in the slums of a developing nation. The appeal is clear: won’t you give up something small to make a big difference in the life of someone else? This is an appeal to altruism of the third type.
Some people still argue that altruism is not real, that people behave altruistically in order to capture reputation effects, for example. (When a beautiful female student plays class games with a nerdy male quant, I have not failed to notice that the male often bends over backwards to play nicely, and I have not ascribed those results – at least not entirely – to altruism.) But the evidence is actually quite clear that even when no one – not even the researcher himself – can know of one’s actions, people still sacrifice substantial portions of their own rewards to satisfy notions of fairness of process or outcome or both.
There’s hope here. But if the hope is that the share of altruistic behavior in society is large, is that enough? The first part of Annie’s dominant-gene argument suggests that if we can increase the share of altruism, that could be what is needed.
Unfortunately, unlike contingent cooperation, and more like universal cooperation, altruism is not very stable. That is, even large communities of altruists – much like large communities of universal cooperators – do not generally do well against invading competitive behaviors. In most mathematical models of altruism, altruism is not an ESS (evolutionarily stable strategy). That is, altruists cannot defend themselves well.
The exceptions involve a less extreme form of the strategy of incubating altruism entirely in isolation. The idea is that altruists need to find themselves in communities where they can identify one another readily and interact with one another much more than they are forced to interact with the rest of us. For reasons that I should put off until later, these altruists have internalized norms and beliefs that create a values-based community, one that tends to restrict their interactions with outsiders in both number and nature. You should be thinking of religious enclaves embedded within larger cities as a good example. If the number of costly interactions with outsiders can be kept small enough, and the nature of those interactions structured in ways that minimize damage to the community’s values, then the altruists can survive and resist invasion by competitive behaviors that would ordinarily wipe them out.*
In this sense, Annie’s creation of a dominant gene for altruism would do three things: one, ensure that everyone in the altruistic enclave carrying the gene expresses the behavior, thus increasing the behavioral intensity and therefore the “profits” from their altruistic interactions; two, identify anyone not carrying it (those not expressing it) so that they can somehow be culled from the population; and three, increase the growth rate of the enclave population, and thus its share of the total population. All three might help, but none actually guarantees the success of altruism.
Altruism is a delicate behavior, unlike Contingent Cooperation. And unfortunately, CC is of little help in motivating behavior that reflects caring for future generations. For that, altruism is needed.
In a future posting I will turn to Annie’s additional and very creative questions about what social science theory has to say about a society of altruists constructing and running an economic society. This is an area in which I believe very little research has been done to date, which is quite a shame, as it bears in one way or another on much of what is going on around us in today's virulent political climate, as well as on our prospects for putting aside our trivial bickering and getting to work on what truly matters. We will not get away with rearranging deck chairs on the Titanic much longer.
_____
* See Gintis, Herbert, "The Hitchhiker’s Guide to Altruism," Journal of Theoretical Biology, 2003.
5 comments:
I think assigning formulaic values to notions of cooperation and altruism is very effective in making readers "root" for those elements as opposed to notions of pure competitiveness.
The terms cooperation (especially when described as "contingent" and contrasted with "unconditional," which neuters the fear of being a dupe) and altruism are terms almost universally framed positively in most minds.
Also, the idea of a tipping point of cooperation is compelling as an effect to be wished for.
But does describing it - in any way improve the likelihood of that eventuality? Does identifying as dominant the "competitor" notion - actually enhance the possibility of continued ubiquitous presence?
...and what of the possibility that the contemporary world fails utterly to stem the tide leading to the very worst climatic outcomes - leading to wholesale catastrophe? Who, if anyone, will be winners? Who will (by default) be devastated least? Who, if anyone, is making that calculation?
On the other hand, what of the possibility of another outcome; one where the world succeeds at the eleventh hour to prevent world wide tragedy? Who will benefit the most? What nation states will gain advantage in such a likelihood? What political/social system of organization will forge ahead? What ideas will be left behind?
What OTHER social ills - if ANY - will be improved by a worldwide cooperative effort sparked by those few glorious, self-sacrificing altruists - to prevent devastation from Climate Change?
I think we are underestimating the amount of bitter gall lurking in multitudes of humanity who understand that either outcome - one in which nightmares akin to disaster movies unfold - or one in which the effects of climate change are diminished - will leave them in the shadows of discontent.
Otto Rank, 1939:
"In the history of mankind we see two alternating principles of change in operation, which seem to present an eternal dilemma: the question as to whether a change in people themselves or a change in their system of living is the better method for improving human conditions."
"While politicians, educators, and psychologists are advocating their respective remedies for the most pressing symptoms of this conflict, unforeseen events, as so often in history, have taken matters out of their hands and are shaping systems, as well as people, far ahead of any expectations."
cont.
"The best we can do under these circumstances is to catch up with these spontaneous developments which occur about us and and are affecting our own life, individually and socially..., following its changing currents as we swim along fully aware of its dangerous under-currents.
In terms of popular support for climate change policy, at least in America, the main argument against the Waxman-Markey bill (and against the idea of cap-and-trade in general) is that it will raise new “energy taxes.” This argument assumes that whatever cost the government imposes on carbon emitters (energy and industrial plants, mainly) will be passed on to the consumer. So, conservatives’ unwillingness to pay “energy taxes” today for the benefit of future generations is an altruism problem of the third kind that Professor Fortunato discusses: the willingness to pay a price out of one’s own rewards for other ends.
Another important aspect of this type of altruism is that the relationship is not between individuals in the same time period but between present and future generations. This aspect of the altruism problem that’s impeding a solution to climate change makes me wonder about the relationship between altruism and myopia. Many consumers exhibit myopia in their saving behavior. They discount future income. As a result, they spend today what they should “rationally” be saving and spending tomorrow. Yet, if individuals are self-interested with respect to their consumption and saving behavior as it affects their own futures, does myopic behavior reflect a self-interest problem? If myopia is culturally embedded behavior (import and export economies have different savings behavior, e.g. China and the US) or a psychological irrationality (many individuals want to save for the future but just can’t bring themselves to do it) then perhaps myopic people are still very much self-interested people. If this is the case and if we extrapolate to the altruism problem bedeviling climate change policy, perhaps it is a culturally or psychologically produced myopia problem that needs solving. Either way, should an altruism gene that’s spliced into the human genome be designed to correct for myopic behavior? Or, are altruistic people not myopic by definition?
Another argument made against climate change legislation is made by so-called climate change deniers such as Jim Inhofe, a Republican from Oklahoma (who gives Jim Bunning (R-KY) a run for his money in the contest for worst Senator in the US Congress. I’m sure you can find Inhofe on YouTube ridiculing Al Gore). Perhaps an altruism gene would help these folks if it’s the case that they are the type of anti-altruist who justifies their selfish behavior by denying that a problem exists in the first place.
A third type of argument against domestic climate change policy is that climate change is a global problem in need of a global solution. Thus, domestic climate change legislation, though noble, would not solve the problem. This type of person is the antithesis of the “clean, green vegetarian” who sees the need for someone to be first. Admittedly, the need-for-a-global-solution person does not see the value in domestic climate change legislation that the “clean, green vegetarian” may see. However, is the need-for-a-global-solution person in need of an altruism gene if he or she doesn’t recognize the value of being a first mover? (As an aside but related to the need-for-a-global solution argument, does the CTP change if the interaction between CCs is affected by the behavior of PCs? It seems to me that more than 5.9% of countries are pushing for comprehensive climate change policy, but they are having to wait on countries like India and China to come along since those countries emit large amounts of carbon. So, is the CTP of 5.9% contingent on the condition that the interaction/conduct of PCs do not affect the outcomes of CCs’ interaction?)
cont...
The most important argument preventing a global solution to climate change is one made by developing and emerging market countries, especially China and India. They argue that even though they may be currently contributing enormous amounts of carbon to the atmosphere, they have not been doing so long enough to be responsible for the current state of the climate. Why should they be required to risk their future economic growth in order to solve a problem they didn’t cause? Now, does this type of argument make China and India PCs or does it mean simply that they refuse to be UCs? If the latter, they can still be altruists, correct? And, even if one would argue that China and India are not the third type of altruist who is willing to pay a price from his/her own reward, could one argue that they are the first or second type of altruist concerned with the fairness of outcomes and of process? If altruists refuse to take what they consider an unfair share of the reward, then are China and India altruists if they view themselves as preventing future generations of their people from shouldering what they view as an unfair share of the cost burden of a climate change solution? And, is altruistic behavior complicated in group settings, as opposed to one-on-one interaction, by the problem of fairly distributing the cost burden? I don’t think a religious group of altruists would be immune from the free-rider problem. I also don’t think an altruist in one of these religious groups would be okay with shouldering what he or she would consider an unfair share of the load. Is a refusal to be a UC (a.k.a. a universal altruist) incompatible with being an altruist? Or, does altruism possess people by degrees? If so, are altruists and CCs effectively the same even though they may have different motives for their behavior?
I guess all of these questions lead me to an ultimate one: if we need an altruism gene to solve climate change, what exactly would this gene need to correct to solve climate change – myopia, the under-valuation of being a first mover, a distribution problem among a group of altruistic people, etc.?