**Economic Impact of Migration**

**In Both Emigrant and Immigrant Countries**

**Paper**

**For the 7th International Summer School**

**Of the Institute of International Sociology**

**Gorizia**

**Gorizia (Italy), 17-26 September 2001**

**Seminar Hall, ISIG – Via Mazzini,13**

**Rome, 26th August 2001**

Enrico Furia


**Contents**

Foreword

1. – Premises

1.1 – Etymology and Semantics of the Term

2. – Historical and Geographical Overview of Migration

(Westward Migration, Eastward Migration, Southward Migration, Northward Migration)

3. – Definition of Emigrant and Immigrant Countries

4. – Definition of Economic Impact

4.1 – Economics as a Science

4.2 – Cost/Benefit Analysis, and Profit Maximization Theories

4.3 – Definition of Impact as in Physics and Economics

5. – The Analysis of Migration through Determinism, and Game Theory

5.1 – The Neoclassical Economic Theory

5.2 – The Game Theory

6. – Conclusion through Determinism and Game Theory

6.1 – The Genetic Advantage

6.2 – The Cultural Advantage

6.3 – The Social Change

6.4 – The Political Disadvantage

6.5 – The Economic Advantage

6.6 – The Contradiction of Different Doctrines

7. – Nonlinear Dynamics. The Chaos Theory Analysis

8. – Introduction to the Ecosophic Set

9. – Conclusion through Ecosophy and Nonlinear Dynamics

**Foreword**

In the course of this work we will use logic.

Following the definition given by Professor Douglas Downing of the School of Business and Economics at Seattle Pacific University in his *Dictionary of Mathematics Terms*: ‘Logic is the study of sound reasoning. The study of logic focuses on the study of arguments. An argument is a sequence of sentences (called *premises*) that lead to a resulting sentence (called the *conclusion*). An argument is a valid argument if the conclusion does follow from the premises. In other words, if an argument is valid and all its premises are true, then the conclusion must be true. … Logic can be used to determine whether an argument is valid; however, logic alone cannot determine whether the premises are true or false. Once an argument has been shown to be valid, then all other arguments of the same general form will also be valid, even if their premises are different.’

Arguments are composed of sentences.

Sentences are said to have the truth value *T* (corresponding to what we normally think of as ‘true’) or the truth value *F* (corresponding to ‘false’). In studying the general logical properties of sentences, it is customary to represent a sentence by a lower-case letter such as *p*, *q*, or *r*, called a sentence variable or a Boolean variable. Sentences either can be simple sentences or can consist of simple sentences joined by connectives and called compound sentences. For example, ‘Spot is a dog’ is a simple sentence; ‘Spot is a dog and Spot likes to bury bones’ is a compound sentence.

**1. – Premises**

**1.1 – Etymology and Semantics of the Term**

a) Migration as Change, Exchange.

In the *Dictionary of Word Origins* by John Ayto, the term ‘migrate’ is related to the term ‘mutate’, under which we read: ‘Semantically mutate is probably the most direct English descendant of the Indo-European base *moi-*, *mei-* “change, exchange,” which has also given English *mad*, *mean* “unworthy, ignoble,” *municipal*, *mutual* [15] (from Latin *mutuus* “exchanged, reciprocal”), the final syllable of *common*, and probably *migrate* [17]. *Mutate* itself comes from Latin *mutare* “change” (source also of English *mews* and *moult*), and was preceded into English by some centuries by the derivatives *mutable* [14] and *mutation* [14].’

b) Migration as Convoy, Transport.

In physics, ion migration is the transport (through a convoy of ions) of parts of one element into another.

**2. – Historical and Geographical Overview of Migration**

In history, culture has always migrated westward: the Indo-European peoples came from the East, and American civilization came from the East.

Eastward migration is rarer, and principally connected with military actions.

Southward migration is long since over, and has involved only Latin America.

Northward migration is nowadays the major problem for Europe and the United States.

**3. – Definition of Emigrant and Immigrant Countries**

We have generally defined migration in its two basic aspects: emigration and immigration.

To emigrate is to leave a place (such as a country) to settle elsewhere. In politics, to immigrate is to come into a foreign country and take up permanent residence there. In botany, an immigrant is a plant or animal that becomes established where it was previously unknown.

With migration, both the emigrant and the immigrant places can have huge problems. The country left behind loses the best part of its population (that is, the young, and all those people who can actively work).

The receiving country can mutate in several aspects, such as culture, social security, and the economy. Mutation is an aspect that affects above all bureaucracy and privileges, for bureaucracy is based on acquired rights that tend to become privileges.

**4. – Definition of Economic Impact**

**4.1 – Economics as a Science**

Economics is a science. That is, it is not a religion, not a faith, not an ideology, but a way of knowing its subject matter.

Economics is a language too. Like any language, it has a logic.

Finally, economics is a method. As a method, it has the capacity to forecast and assess products and processes.

As Science, Economics is based on a few scientific premises, which can be considered as theorems. (A theorem can be defined as a statement that has been proved, such as the Pythagorean Theorem).

**4.2 – Cost/Benefit Analysis, and Profit Maximization Theories**

The two basic theorems of economics are:

The Cost/Benefit Analysis:

C – B < 0

The Profit Maximization Theorem:

B – C = max

where

C = Cost;

B = Benefit.

PRODUCER: Unit Price – Unit Cost = Profit

CONSUMER: Utility – Unit Cost = Profit

The aforementioned theorems define the rationality of economics.
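As a minimal sketch, the two theorems above can be read as a decision rule: reject any option whose cost exceeds its benefit (C – B < 0 must hold), and among the remaining options choose the one with the largest B – C. The option names and figures below are hypothetical, purely for illustration.

```python
# A minimal sketch of the two theorems, with hypothetical figures.

def net_benefit(benefit, cost):
    """B - C: positive when the option passes the cost/benefit test (C - B < 0)."""
    return benefit - cost

def best_option(options):
    """Profit maximization: among acceptable options, pick the largest B - C."""
    acceptable = [o for o in options if net_benefit(o["benefit"], o["cost"]) > 0]
    return max(acceptable, key=lambda o: net_benefit(o["benefit"], o["cost"]))

options = [
    {"name": "stay",    "benefit": 100, "cost": 90},
    {"name": "migrate", "benefit": 180, "cost": 120},
]
print(best_option(options)["name"])  # the option maximizing B - C
```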

**4.3 – Definition of Impact as in Physics and Economics**

In the *New Merriam-Webster Dictionary*, the term ‘Impact’ is defined as follows:

1. ‘A forceful contact, collision, or onset; also: the impetus communicated in or as if in a collision.’

2. ‘Effect’.

In Physics the ‘impetus’ communicated in a collision is called *momentum, *which is the force that a moving body has because of its weight and motion. Furthermore, in Physics any impact (collision) has a certain level of elasticity, which is the capacity of transferring the force of a moving body to a standing body.
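In standard physics notation (not given in the paper), the ‘impetus’ of a body of mass $m$ and velocity $v$ is its momentum, and in a perfectly elastic head-on collision with a standing body of mass $M$ the standing body acquires velocity $v'$:

$$p = m v, \qquad v' = \frac{2m}{m + M}\, v$$

So the more elastic the impact and the lighter the standing body relative to the moving one, the more completely the motion is passed on; the economic analogy is how much of the immigrant’s ‘force’ the receiving society absorbs.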

Therefore, economic impact can be defined as the forceful contact/contrast between the costs and benefits of both immigrants and the local population. The immigrant, who is the moving body, has an impetus, a force, which as in physics may or may not be transferred to the standing body. This transferability of force is what in economics we can call ‘Profit Maximization.’

**5. – The Analysis of Migration through **

**Determinism, Game Theory, and Nonlinear Dynamics**

**5.1 – The Neoclassical Economic Theory**

In neoclassical economic theory, to choose rationally is to maximize one’s rewards.

Determinism is the doctrine that acts of will, natural events, or social changes are determined by preceding causes. Such a doctrine uses mathematics to find the limits, dimensions, derivatives, and scope of any function (e.g., to determine a position at sea).

From one point of view this is a problem in mathematics: choose the activity that maximizes rewards in given circumstances.

**5.2 – The Game Theory**

In game theory, the case is more complex, since the outcome depends not only on our own strategies and the “market conditions,” but also directly on the strategies chosen by others. We may still think of the rational choice of strategies as a mathematical problem (maximize the rewards of a group of interacting decision makers), and so we again speak of the rational outcome as the “solution” to the game.

Furthermore, the weakness of determinism probably lies in its inability to forecast the ‘butterfly effect’ of technological change. That is, there is no way of understanding new logics by means of old logics. In fact, determinism can understand and predict economic behaviour only when the elements of a function are well known.

Game theory can be used to better understand economics under uncertainty.

The Prisoners’ Dilemma has clearly shown how individually rational actions can result in both persons being made worse off in terms of their own self-interested purposes. This remarkable result is what has given it such wide impact in modern social science, for there are many interactions in the modern world that seem very much like it: arms races, road congestion, pollution, the depletion of fisheries, the overexploitation of some subsurface water resources, and more recently migration. These interactions are all quite different in detail, but they are interactions in which (we suppose) individually rational action leads to inferior results for each person, and the Prisoners’ Dilemma suggests something of what is going on in each of them.

Therefore, as far as migration is concerned, we strongly believe that Game Theory provides a promising approach to understanding strategic problems of all sorts, and the simplicity and power of the Prisoners’ Dilemma and similar examples make them a natural starting point.
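The logic of the Prisoners’ Dilemma can be sketched in a few lines, using the conventional textbook payoffs (hypothetical numbers, not taken from this paper):

```python
# Prisoners' Dilemma sketch: PAYOFF[(my_move, other_move)] is my reward,
# using conventional textbook values (hypothetical).
PAYOFF = {
    ("cooperate", "cooperate"): 3,
    ("cooperate", "defect"):    0,
    ("defect",    "cooperate"): 5,
    ("defect",    "defect"):    1,
}

def best_reply(other_move):
    """The individually rational choice against a fixed move of the other player."""
    return max(["cooperate", "defect"], key=lambda m: PAYOFF[(m, other_move)])

# Defection is the best reply whatever the other player does...
assert best_reply("cooperate") == "defect"
assert best_reply("defect") == "defect"
# ...yet mutual defection pays each player less than mutual cooperation:
assert PAYOFF[("defect", "defect")] < PAYOFF[("cooperate", "cooperate")]
```

Individually rational play thus leads both players to the inferior outcome, which is exactly the pattern the text sees in arms races, congestion, overfishing, and migration.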

From Game Theory we derive also the analysis of zero-sum games, and non-zero sum games.

A game is zero-sum when the winner’s gain equals the loser’s loss. In such a game there is only a wealth transfer from one player to another. Any lottery is a practical example of a zero-sum game: the winner wins the amount lost by the losers. Speculation on a stock exchange is a zero-sum game.

In a non-zero-sum game all players derive a benefit (even or uneven) from the game. A business contract is a non-zero-sum game.

Economic logic is based on non-zero-sum games, for it works through the logic of Cost/Benefit Analysis. Politics is based on zero-sum games, for it works on power acquisition: to gain more power, it must be taken from someone else.
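The distinction can be stated mechanically: a game is zero-sum when the players’ payoffs sum to zero in every possible outcome. A minimal sketch with hypothetical payoffs:

```python
# Zero-sum vs. non-zero-sum: each outcome maps to (payoff_A, payoff_B),
# with hypothetical amounts for illustration.
lottery = {                       # pure wealth transfer
    ("win", "lose"): (100, -100),
    ("lose", "win"): (-100, 100),
}
contract = {                      # both parties can gain
    ("sign", "sign"): (40, 60),
    ("walk", "walk"): (0, 0),
}

def is_zero_sum(game):
    """True when the payoffs cancel out in every outcome of the game."""
    return all(a + b == 0 for a, b in game.values())

print(is_zero_sum(lottery), is_zero_sum(contract))  # True False
```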

Chaos is the doctrine that studies systems with the property that a small change in the initial conditions can lead to very large changes in the subsequent evolution of the system. Chaotic systems are inherently unpredictable. The weather is an example; small changes in the temperature and pressure over the ocean can lead to large variations in the future development of a storm system. However, chaotic systems can exhibit certain kinds of regularities.
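Sensitivity to initial conditions can be demonstrated with the logistic map x → r·x·(1 − x) at r = 4, a standard chaotic example (an illustration of the general idea, not taken from this paper):

```python
# Two trajectories of the chaotic logistic map, differing by one billionth
# in their starting point, diverge completely within a few dozen steps.
def logistic_orbit(x0, r=4.0, steps=40):
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

a = logistic_orbit(0.2)
b = logistic_orbit(0.2 + 1e-9)   # a tiny change in the initial condition
print(abs(a[1] - b[1]))          # still tiny after one step
print(max(abs(x - y) for x, y in zip(a, b)))  # large over the whole run
```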

Economic determinism cannot study migration problems because it is strictly tied to the analysis of closed economies; therefore it does not take into consideration the ‘weather conditions over the ocean’, that is, technological change and globalization.

For further information, please refer to Exhibit 1.

**6. – Conclusion through Determinism and Game Theory**

**6.1 – The Genetic Advantage**

The genetic advantage is the first profit that the country of immigration certainly derives from the momentum of the emigrating people.

The author lacks the expertise to explain this theory in depth, but he feels that it works.

Many closed communities in history have died out partly through a lack of genetic exchange.

Anyway, this advantage is invisible, so the options are made under uncertainty. Therefore, neoclassical economics has no instruments to determine such an advantage; to assess it we need to use game theory.

**6.2 – The Cultural Advantage**

Migration can bring dramatic change to culture too. The term ‘culture’ in Latin has no positive or negative quality: it is neutral, like *Fortuna* ‘chance’ (one can have good or bad chance). Therefore one can have good or bad (positive or negative) culture.

Anyway, whilst improving genes can be very easy and pleasant, improving culture is certainly hard and uncomfortable.

In culture, change is understood as a threat, while in business it is understood as an opportunity.

A positive culture is essential in business; otherwise there is no way of generating new wealth.

Cultural advantages are even more difficult to understand than genetic advantages, for they are invisible and immaterial.

Cultural advantages are given by the exchange of different experiences between resident and immigrant people.

**6.3 – The Social Change**

Social change is a direct consequence of cultural change. Any community is the mirror image of its culture. The economy is strictly constrained by the cultural attitude of markets. Business ethics in a free market is oriented to satisfying demand, whatever it is, not to censoring it or steering it toward a correct moral way. That is, morality is not the producer’s responsibility but a matter of the consumer’s ethics.

The social advantage cannot be identified through determinism, but it can through game theory.

**6.4 – The Political Disadvantage**

Migration has always been a disadvantage for politics, because its innovative potential destroys law and order.

Most government policy is based on bureaucracy, which is extremely hostile to any innovation.

Innovation in politics can jeopardize any power, any chair, and any privilege.

The political disadvantage shares the same sin as the neoclassical economic disadvantage: it is based on a wrong concept of individual reward.

Neither determinism nor game theory can demonstrate the falsity of these statements.

**6.5 – The Economic Advantage**

Migration, in both its forms of emigration and immigration, is nowadays supported and fed by technology and globalization.

Technology allows one to operate quickly and cheaply all over the world.

Globalization has liberalized the movement of goods, services, and people. Globalization has been generated by technology, not by politics; therefore, no government policy has the ability to manipulate or stop it. Such a process is the most silent and most relevant revolution that humankind has undergone since the Neolithic revolution. Like any silent change, globalization will be fully perceived only when it is complete.

In neoclassical economic theory, as well as in politics, migration is considered a disadvantage, for both logics use the same concept of utility: individual profit. With the help of game theory we emphasize that individual profit, or individual advantage, can be correctly calculated in any deterministic situation (that is, when all economic operators understand each other’s behaviour well), while with migration we face a problem of uncertainty.

Why, then, should migration be considered a positive cultural, economic, social, and political change?

It is hard to answer such a question using past experience or skills. We are facing a problem made completely new and unknown by globalization and technological innovation, where outcomes depend on the strategies chosen by others and information is incomplete.

While migration seems individually non-rational from the point of view of neoclassical economic theory and political theory, it can on the contrary be rational through game-theoretic analysis.

Contrary to the Prisoners’ Dilemma, migration is a cooperative game. That means that receiving countries must cooperate with sending countries in order to generate new wealth.

Some 8,000 Italian companies have migrated to Romania, whilst none has migrated to Yugoslavia, or the former Yugoslavia. That is due to the cooperative game established between Italy and Romania.

**6.6 – The Contradiction of Different Doctrines**

At this point of our analysis, we might conclude that we cannot reach any true or false conclusion, for different doctrines have different logics.

Before approaching any other hypothesis, we need to emphasize that humankind can no longer be known through the application of different doctrines, for politics will always conflict with economics, sociology with psychology, medicine with law, etc.

We think we need to step back and reinvent philosophy as a unified way of thinking.

Therefore, we now need to introduce other new concepts in order to overcome our impasse: chaos theory and ecosophic philosophy.

**7. – Nonlinear Dynamics. The Chaos Theory Analysis **

Chaos is a term that indicates the global nature of complex systems.

The study of chaos theory constitutes an interdisciplinary activity, for it concerns all scientific sectors, such as the turbulence of fluids, population fluctuations, the electrical activity of heart and brain, etc.

Therefore, chaos theory is to be considered more a philosophy than a theory.

For further information, please refer to Exhibit 2.

Vocabulary

*Chaos*: a kind of apparent randomness whose origins are entirely deterministic (weather is chaotic).

*Fractal*: a geometric shape that repeats its structure on ever-finer scales (clouds are fractals; snowflakes are fractals).

*The Butterfly Effect or Sensitivity to Initial Conditions* (When a butterfly in Tokyo flaps its wings, the result may be a hurricane in Florida a month later.)

*Phase Space*: the space of the possible states of the system (the city of Gorizia can be a phase space of migration).

*Phase Portrait*: The set of swirling curves (i.e., the functions generated by an attractor).

*Strange Attractor*: dynamics can be visualized in terms of geometric shapes called attractors. (If you start a dynamical system from some initial point and watch what it does in the long run, you often find that it ends up wandering around on some well-defined shape in phase space. A system that settles down to a steady state has an attractor that is just a point. A system that settles down to repeating the same behaviour periodically has an attractor that is a closed loop; that is, closed loops correspond to oscillators. The butterfly effect implies that the detailed motion on a strange attractor cannot be determined in advance, but this does not alter the fact that it is an attractor.)
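The point and closed-loop attractors described above can be observed in the logistic map x → r·x·(1 − x) at non-chaotic parameter values (standard textbook values, used here purely as an illustration):

```python
# After a long warm-up, the orbit settles onto its attractor:
# a single point for r = 2.5, a two-point closed loop for r = 3.2.
def orbit_tail(r, x0=0.3, warmup=1000, keep=4):
    x = x0
    for _ in range(warmup):
        x = r * x * (1 - x)
    tail = []
    for _ in range(keep):
        x = r * x * (1 - x)
        tail.append(round(x, 6))
    return tail

print(orbit_tail(2.5))  # one repeated value: a steady state (point attractor)
print(orbit_tail(3.2))  # two alternating values: a periodic loop (oscillator)
```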

**8. Introduction to the Ecosophic Set**

In order to conclude this argument, we would like to anticipate the results of research we have been conducting over the last decade. A complete description of our theory will be available on the Internet in the coming months.

Therefore, during this presentation we would like to introduce in brief the relationship and common aspects we have identified with chaos theory.

With the introduction to the ecosophic set, we would like to report the results of our latest research. An Ecosophic Set consists of a group of people, with all its items of interest, encompassed in its environment. The environment is both a geographical and a cultural space.

If we may draw a parallel with chaos theory, we could say that in the case of population migrations we can suppose several attractors, such as:

Survival

Reproduction

Socialisation, or Exchange

Knowledge

In an ecosophic set the butterfly effect can be generated by the following phase portraits:

Culture (Education, Religion, Ideology, etc.)

Power (Political, Military, Family, Role, Function, etc.)

Freedom (Liberty, Security, Alienation, Exclusion)

Income (income transforms needs into economic demands).

Attractors are instincts (i.e., natural information that all human beings receive as general instructions for their life). As in a computer, these instructions work as firmware, and as such they can be manipulated but not destroyed; in such a case a living being would lose its nature.

Attractors have been listed in a random order, and their strength and efficiency can vary as a function of the efficiency of the single phase portraits.

Phase portraits in the ecosophic set work as software in a computer. They instruct all living beings on how to learn and how to appraise and evaluate reality. They can compress and constrain the attractors up to the limit of revolution or rebellion. Conditional power can constrain freedom up to slavery.

‘Chaos teaches us that systems obeying simple rules can behave in surprisingly complicated ways,’ says Ian Stewart in *Nature’s Numbers* (1995), p. 127. Furthermore, he argues: ‘There are important lessons here for everybody: managers who imagine that tightly controlled companies will automatically run smoothly, politicians who think that legislating against a problem will automatically eliminate it, and scientists who imagine that once they have modelled a system their work is complete. But the world cannot be completely chaotic, otherwise we would not be able to survive in it. In fact, one of the reasons that chaos was not discovered sooner is that in many ways our world is simple. That simplicity tends to disappear when we look below the surface, but on the surface it is still there.’

We can add to Ian Stewart’s statement that we consider it possible for simple systems to generate complicated systems. Reality, as a construction of the human mind, does not pre-exist: it is individually generated by culture, and can vary as a function of the quality of individual and collective culture.

The author invites the participants to discuss these arguments more deeply in the afternoon workshop; there is not enough room to do so in this lesson.

**9. Conclusion through Ecosophy and Nonlinear Dynamics**

Through ecosophy and nonlinear dynamics we can overcome the contradictions generated by neoclassical economics and game theory regarding migration.

First of all, both ecosophy and nonlinear dynamics approach the problem of human behaviour as a whole. Therefore, we no longer distinguish between disciplines such as economics, sociology, politics, etc., but approach the problems we pose with a new philosophy (the environment in which the problem is encompassed), a new concept of profit ( ), and a new concept of law and order (the law and order generated by attractors).

The genetic advantage can be demonstrated by means of game theory, even if we have given no proof of this statement in this work.

Cultural advantage

Social change as an advantage

Both are demonstrated by what we call anacracy, which corresponds to freedom, market transparency, market competition, and the substitution of non-zero-sum games for zero-sum games.

Political disadvantage

We strongly believe that politics, understood as legislative, administrative, and judicial power, can no longer be the leading activity of human societies. Its logic is completely extemporaneous, and no longer valid. Therefore, we think business logic will, in the medium term, replace the logic of politics.

Economic advantage

Fortunately, economics does not need the support of any further theory to understand the positive aspect of migrations. Nevertheless, we think ecosophy and nonlinear dynamics can help economics understand that the butterfly effects we have identified in the ecosophic set must be fully understood when we want to appraise any human behaviour and rationality.

**EXHIBIT 1**

**CHAOS THEORY ANALYSIS**

**CHAOS WITHOUT THE MATH**

BY

JUDY PETREE


**Part 1: History of Chaos Theory**


**Introduction**


The word “chaos” might have first appeared in Hesiod’s *Theogony* (c. 700 B.C.E.) in Part I: “At the beginning there was chaos, nothing but void, formless matter, infinite space.” Later in Milton’s *Paradise Lost*: “In the beginning, how the heav’ns and earth rose out of chaos.” Both Shakespeare (*Othello*) and Henry Miller (*Black Spring*) refer to chaos. In these instances one inferred that chaos was an undesirable disordered quality. Historically our vernacular incorporated this idea of disorder into chaos; dictionaries defined chaos as turmoil, turbulence, primordial abyss, and biblical references to Tohu and Bohu had the same referential character of undesired randomness. Scientifically, chaos implied the existence of the undesirable randomness, but the self-organization concept at the edge of chaos denoted the order we get out of chaos. The American essayist and historian Henry Adams (1838-1918) expressed the scientific meaning of “chaos” succinctly: “Chaos often breeds life, when order breeds habit.” (1)

Li and Yorke (2) coined the word chaos to refer to the mathematical problem in chaos theory that described a time evolution with sensitive dependence on initial conditions. Robert May, a mathematician-biologist whose research was widely read, used the word and the theory from Li and Yorke’s paper, thus making them and the word famous. Chaos theory came in the back door, so to speak, of the researcher’s world. It was not a law like thermodynamics or quantum physics, but it did enable the researcher to analyze events or areas with many problematic intricacies. Cambel reported that it had even been proposed that we call chaos “divinamics” (3), after the ancient Roman *divinatio* described by Cicero. (Because of the ubiquity of chaos found in nature, and because my research is in the area of religion, I would certainly go along with that name.)

**Ilya Prigogine**, the 1977 Nobel Prize winner in chemistry, pioneered the work on the entropy of open systems, that is, on the inflow and outflow of matter, energy, or information between a system and its environment. Prigogine used dissipative systems to show that more complex structures can evolve from simpler ones: order coming out of chaos.

What is chaos/complexity theory? Daniel Stein, in the preface to the first volume of lectures given at the 1988 Complex Systems Summer School of the Santa Fe Institute in New Mexico, compares chaos/complexity to a “theological concept,” because many people talked about it but no one knew what it really was. (4) Several explanations of chaos theory call on the words synthesis, cross-discipline, edge of chaos, dynamical, cellular automata, or neural networks, but all carry with them the concept of complex systems. The implications of chaos are profound, for who could know the absolute conditions of any system completely enough for a full prediction of its behavior to be made?

**HOW IT ALL STARTED**

For thousands of years humans have noted that small causes can have large effects and that it is hard to predict anything for certain. What caused a stir among scientists was that in some systems small changes in initial conditions lead to predictions so different that prediction itself becomes useless. At the end of the 19th century, the French mathematician **Jacques Hadamard** proved a theorem on sensitive dependence on initial conditions for the frictionless motion of a point on a surface of negative curvature (the geodesic flow). In concrete terms, this was about billiard balls and why you cannot predict what three of them will do when they careen off each other on the table. The French physicist **Pierre Duhem** understood the significance of Hadamard’s theorem. He published a paper in 1906 that made it quite plain that prediction was “forever unusable” because of the necessarily uncertain initial conditions in Hadamard’s theorem. These papers went unnoticed, or rather unnoted, by the man who came to be recognized as the father of chaos theory, Henri Poincaré (1854-1912).

In 1908 Poincaré published *Science et Méthode* (5), which contained one sentence on the idea of chance as the determining factor in dynamic systems, arising from some factor at the beginning that we did not know about. All three of these men and their ideas went unnoted: because quantum mechanics had disrupted the whole world of ideas in physics; because there were as yet no tools such as the ergodic theorems (6) of measure theory; and because there were no computers to simulate what these theorems prove.

In 1846 the planet Neptune was discovered, causing quite a celebration in the classical Newtonian mechanical world: the discovery had been predicted from the observation of small deviations in the orbit of Uranus. Something unexpected happened in 1889, though, when King Oscar II of Sweden and Norway offered a prize for the solution to the problem of whether the solar system is stable. Henri Poincaré submitted his solution and won the prize, but a colleague happened to discover an error in the calculations. Poincaré was given six months to rectify the matter in order to keep his prize. In consternation, Poincaré found there was no solution. (7) He had found results that upset the accepted view of a purely deterministic universe, a view that had reigned since Sir Isaac Newton laid out linear mathematics. In his 1890 paper he showed that Newton’s laws do not provide a solution to the “three-body problem,” that is, to making predictions about the earth, moon, and sun together. He found that small differences in the initial conditions produce very great differences in the final phenomena, and that the situation defies prediction. Poincaré’s discoveries were dismissed in favor of Newton’s linear model; one was simply to ignore the small changes that cropped up. The three-body problem was a problem precisely because Poincaré had to interpret it with the mathematics of a two-body system: he was trying to discover order in a system where none could be discerned.

Poincaré’s negative answer had positive consequences in the creation of chaos theory. About seventy years later, in 1963, Edward **Lorenz** (8), using Poincaré’s mathematics, described a simple mathematical model of a weather system made up of three linked nonlinear differential equations for rates of change in temperature and wind speed. The results were surprising: complex behavior emerged from supposedly simple equations, and the behavior of the system of equations was sensitively dependent on the initial conditions of the mathematical model. Lorenz spelled out the implications of his discovery: if there were any errors in observing the initial state of the system, and this is inevitable in any real system, prediction of a future state of the system was impossible. (9) Lorenz labeled such systems, those exhibiting sensitive dependence on initial conditions, as having the “butterfly effect”: this unique name came from the proposition that a butterfly flapping its wings in Hong Kong can affect the course of a tornado in Texas.
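Lorenz’s three linked equations are simple enough to iterate on any computer. The sketch below (a minimal illustration in Python, using Lorenz’s classic parameter values; the step size and step count are my own illustrative choices, and the integration is a crude Euler scheme rather than anything Lorenz used) shows two trajectories that start one part in a hundred million apart and end far apart:

```python
import math

# Lorenz's three linked nonlinear equations, advanced by one crude Euler step.
# sigma, rho, beta are the classic parameter values; dt is an illustrative choice.
def lorenz_step(x, y, z, dt=0.005, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    dx = sigma * (y - x)      # temperature/wind coupling
    dy = x * (rho - z) - y    # the nonlinear x*z term
    dz = x * y - beta * z     # the nonlinear x*y term
    return x + dx * dt, y + dy * dt, z + dz * dt

def trajectory(x0, y0, z0, steps=6000):
    """Integrate for 30 time units and return the final point."""
    x, y, z = x0, y0, z0
    for _ in range(steps):
        x, y, z = lorenz_step(x, y, z)
    return (x, y, z)

a = trajectory(1.0, 1.0, 1.0)
b = trajectory(1.0 + 1e-8, 1.0, 1.0)   # perturb by one part in 10^8
separation = math.dist(a, b)
print(separation)   # many orders of magnitude larger than the perturbation
```

The final separation is comparable to the size of the attractor itself, which is the butterfly effect in miniature: the hundred-millionth of a degree has been amplified until the two forecasts have nothing to do with each other.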

During 1970-71, interest in turbulence, strange attractors, and sensitive dependence on initial conditions arose in the world of physics. (10) Lorenz’s 1963 paper, “Deterministic nonperiodic flow”, had already shown why meteorologists cannot predict the weather far ahead. Jim Yorke, an applied mathematician at the University of Maryland, was the first to use the name “chaos”; strictly speaking, the situation he named was not even chaotic, but the name caught on. (11)

A chaotic system is sensitive to initial conditions, and this sensitivity causes the system to become unstable. Cambel identifies chaos as inherent both in the complexity of nature and in the complexity of knowledge. The nature side of chaos entails all the physical sciences; the knowledge side deals with the human sciences. Chaos may manifest itself in form, in function, or in both. Chaos studies the interdependence of things in a far-from-equilibrium state. Every open, nonlinear, dissipative (12) system has some relationship to another open system, and their operations will intersect, overlap, and converge. If a system is sensitive to its initial conditions, that is, if you do not know every little piece of information about its state exactly and in detail, then you have a potentially chaotic system. Not all such systems will be chaotic, but those for which infinite detail is unavailable have an indeterminate quality about them: you cannot tell what is going to happen next; they are unpredictable. If these systems are perturbed, either internally or externally, they will display chaotic behavior, and this behavior will be amplified both microscopically and macroscopically.

Further research into nonlinear dynamical systems (13) displaying sensitive dependence on initial conditions came from Ilya Prigogine, the Nobel-prize-winning chemist, who first began work with far-from-equilibrium systems in thermodynamic research. (14) Prigogine’s research on nonlinear dissipative structures led to the concepts of equilibrium and far-from-equilibrium states, and to categorizing the state of a system accordingly. In the physical studies of thermodynamics, Prigogine’s research revealed far-from-equilibrium conditions that led to systemic behavior different from what was expected under the customary interpretation of the Second Law of Thermodynamics. Phenomena of bifurcation and self-organization emerged from systems in equilibrium when there was disruption or interference. This disruption or interference became the next step toward chaos theory: it became chaos/complexity theory. Prigogine talked about his theory as if he were Aristotle: a far-from-equilibrium system can go “from being to becoming”. (15) These “becoming” phenomena showed order coming out of chaos in heat systems, chemical systems, and living systems.

Building on Lorenz’s simulation, the mathematician René Thom proposed “catastrophe theory”, a mathematical description of how a chaotic system bifurcates or branches. Out of these bifurcations came pattern, coherence, stable dynamic structures, networks, coupling, synchronization, and synergy. From the study of the complex adaptive systems used by Poincaré, Lorenz, and Prigogine, Norman Packard and Chris Langton developed theories about the “edge of chaos” in their research with cellular automata. (16) The energy flowing through a system, and its fluctuations, cause endless change, which may either dampen or amplify the effects. In a phase transition of chaotic flux (when a system changes from one state to another), the whole system may reorganize completely, in an unpredictable manner. (17)

Two scientists, the physicist Mitchell Feigenbaum (18) and the computer scientist Oscar Lanford (19), came up with a picture of chaos in hydrodynamics using renormalization ideas; they were studying nonlinear systems and their transformations. (20) Since then, chaos theory, or nonlinear science, has taken the scientific world by storm, with papers coming in from all fields of science and the humanities. *Strange attractors* were showing up in biology, statistics, psychology, economics, and every other field of endeavor.
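Feigenbaum’s renormalization result can be glimpsed with a short calculation. The sketch below uses standard textbook values for the parameters at which the logistic map x → rx(1−x) doubles its period (the values are quoted, not derived here): the ratio of successive gaps between doublings already hovers near Feigenbaum’s universal constant, about 4.669, and the same constant appears for a whole class of nonlinear maps.

```python
# Parameter values of the logistic map at which the period doubles.
# r1 and r2 are exact; r3 and r4 are standard approximate textbook values.
r1 = 3.0            # period 2 appears
r2 = 1 + 6 ** 0.5   # period 4 appears (exactly 1 + sqrt(6))
r3 = 3.544090       # period 8 appears
r4 = 3.564407       # period 16 appears

# Ratios of successive gaps: these estimates converge toward
# Feigenbaum's constant (about 4.669) as more doublings are included.
estimates = [(r2 - r1) / (r3 - r2), (r3 - r2) / (r4 - r3)]
print(estimates)
```

The universality of this number, the same for very different-looking nonlinear systems, is what made renormalization such a striking bridge between chaos theory and physics.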

**Properties of complexity**

Complexity, or the edge of chaos, yields self-organizing, self-maintaining dynamic structures that occur spontaneously in a far-from-equilibrium system. Complexity has no agreed-upon definition, but it manifests itself in our everyday lives. Intense work on the implications of complexity is being done at the Santa Fe Institute in New Mexico, where Ph.D.s from many fields use cross-disciplinary methods to show how complexity in one area might link to another. Ervin Laszlo, of the Vienna International Academy, has made the most interesting statement about complexity:

In fact, of all the terms that form the lingua franca of chaos theory and the general theory of systems, bifurcation may turn out to be the most important, first because it aptly describes the single most important kind of experience shared by nearly all people in today’s world, and second because it accurately describes the single most decisive event shaping the future of contemporary societies. (21)

Bifurcation once meant simply splitting into two or more forks. In chaos theory it means the following: when a complex dynamical chaotic system becomes unstable in its environment because of perturbations, disturbances, or “stress”, an **attractor** draws the trajectories of the stress, and at the point of phase transition the system bifurcates and is propelled either to a new order through self-organization or to disintegration.
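The sequence of bifurcations is easy to watch in the simplest chaotic system, the logistic map that Robert May made famous. The Python sketch below (an illustration; the transient length and sample size are my own choices) iterates the map past its transient and counts how many distinct values the orbit settles onto: one fixed point, then a 2-cycle, then a 4-cycle, then chaos.

```python
# Count the points of the logistic map's attractor at a given parameter r.
def attractor_points(r, transient=1000, keep=64):
    x = 0.5
    for _ in range(transient):      # let the orbit settle onto its attractor
        x = r * x * (1 - x)
    seen = set()
    for _ in range(keep):           # then record the values it visits
        x = r * x * (1 - x)
        seen.add(round(x, 6))       # rounding collapses a cycle to its points
    return seen

for r in (2.8, 3.2, 3.5, 3.9):
    print(r, len(attractor_points(r)))
# 2.8 -> 1 point (steady state), 3.2 -> 2, 3.5 -> 4, 3.9 -> many (chaos)
```

Each doubling of the count is one bifurcation; past the accumulation point the orbit never repeats, and the system has crossed from order into chaos.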

The phase transition of a system at the edge of chaos began with the studies of John von Neumann (22) and Stephen Wolfram (23) in their research on cellular automata. (24) Their research revealed that the edge of chaos is the place where the parallel processing of the whole system is maximized: the system performs at its greatest potential and is able to carry out the most complex computations. At the bifurcation stage the system is in a virtual area (25) where choices are made; the system can choose whatever attractor is most compelling and can jump from one attractor to another. It is here, at this stage, that forward, futuristic choices are made: this is deep chaos. The system either self-organizes itself to a higher level of complexity or disintegrates. The phase transition stage may be called the **transeunt** stage, the place where transitory events happen. “Transeunt” is a philosophical term meaning that an effect on the system as a whole is produced from inside the system and is transitory; it is also a scientific term for a nonperiodic signal of sudden pulse or impulse.
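A cellular automaton of the kind Wolfram studied takes only a few lines of code. In this sketch (Python; the choice of rule 30 and the grid width are illustrative, not taken from the text), each cell updates from its own state and its two neighbors’ states, and a single live cell grows into an intricate, effectively unpredictable pattern:

```python
# One step of an elementary cellular automaton: each cell reads its left
# neighbor, itself, and its right neighbor, and looks the result up in the
# rule's bit table. Edges wrap around.
def step(cells, rule=30):
    n = len(cells)
    out = []
    for i in range(n):
        left, centre, right = cells[i - 1], cells[i], cells[(i + 1) % n]
        index = (left << 2) | (centre << 1) | right   # neighborhood as 0..7
        out.append((rule >> index) & 1)               # rule bit for that case
    return out

row = [0] * 31
row[15] = 1                      # a single live cell in the middle
for _ in range(5):
    print("".join(".#"[c] for c in row))
    row = step(row)
```

The rule table is completely deterministic and local, yet the global pattern of rule 30 is complex enough that Wolfram proposed it as a source of randomness: a small-scale example of complex computation emerging at the edge of chaos.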

After the bifurcation, the system may settle into a new dynamic regime with a set of more complex and chaotic attractors, thus becoming an even more complex system than it was initially. Three kinds of bifurcation happen: **1**. Subtle: the transition is smooth. **2**. Catastrophic: the transition is abrupt and the result of excessive perturbation. **3**. Explosive: the transition is sudden and has discontinuous factors that wrench the system out of one order and into another. (26) Per Bak (27), with his co-researchers Chao Tang and Kurt Wiesenfeld, reckons that nature abides at the edge of chaos, in what they call “self-organized criticality”.

Our daily encounters with chaos/complexity are seen in traffic flow, weather changes, population dynamics, organizational behavior, shifts in public opinion, urban development and decay, cardiac arrhythmias, and epidemics. It may also be found in the operation of the communications and computer technologies on which we rely, the combustion processes in our automobiles, cell differentiation, immunology, decision making, the fracture of structures, and turbulence.

Here are a few of the statements that Cambel makes about the ubiquity of chaos:

1. Complexity can occur in natural and man-made systems, as well as in social structures and human beings.

2. Complex dynamical systems may be very large or very small; indeed, in some complex systems large and small components live cooperatively.

3. The system is neither completely deterministic nor completely random, and exhibits both characteristics.

4. The causes and effects of the events that the system experiences are not proportional.

5. The different parts of complex systems are linked and affect one another in a synergistic manner.

6. There is positive and negative feedback. The level of complexity depends on the character of the system, its environment, and the nature of the interactions between them. (28)

**WHERE IT’S ALL GOING**

If we lived in a completely deterministic world there would be no surprises and no decision making, because every event would be caused by conditions that could lead to no other outcome. Nor could we live in a completely random world, for there would be, as Cambel says, “no rational way of reaching a well-reasoned decision.” (29) What kind of answers do we get when we recognize that a system is indeed unstable and that it is indeed an example of chaos at work? The American Association for the Advancement of Science published nineteen papers presented at its 1989 meeting, which was devoted entirely to uses of chaos theory on such topics as chaos in dynamical systems, biological systems, turbulence, quantized systems, global affairs, economics, the arms race, and celestial systems. Stambler (30) reported that the Electric Power Research Institute was considering applications of chaos control to voltage collapses, electromechanical oscillations, and unpredictable behavior in electric grids. Peng, Petrov, and Showalter (31) were studying the usefulness of chaos control in chemical processing and combustion. Ott, Grebogi, and Yorke cited the many purposes of chaos and said it might even be necessary in higher life forms for brain functioning. Freeman studied just such brain functions in relation to the olfactory system and concluded that chaos indeed “affords an opportunity to exploit further these manifestations of brain activities”. (32)

Not only are research papers prolific, but an array of books on chaos applications is being published monthly. Bergé, Pomeau, and Vidal assert that chaos theory has “great predictive power” (33), in that it allows an understanding of the overall behavior of a system. Kauffman (34) uses the self-organization end of chaos to assert that nature itself is spontaneous. Cramer claims that, once the objections of mysticism and scientism are overcome (35), the “theory of fundamental complexity is valid” (this will most likely turn into a book, so many researchers refer to it). This gives some idea of the far-reaching applications of chaos theory in the sciences.

A few last words about the edge of chaos will be added here, because they show how research has gone from linear science to nonlinear applications. D’Arcy Wentworth Thompson used transformations of coordinates to compare species of animals in his book *On Growth and Form* (36). Comparing one form of a fish, for example, with another could be shown on a coordinate map and used to show how they differ and how they are alike. The same kind of transformation coordinate map could compare chimpanzee skulls to human skulls. Where Thompson used order to compare the workings of nature, Stuart Kauffman, in his book *The Origins of Order: Self-Organization and Selection in Evolution* (37), took the next step in studying nature. He was seeking the origins of order in complex systems that are chaotic, and his research is rife with examples of the interconnectedness of selection and self-organization. The essence of his findings is that much of the order seen in organisms stems from spontaneous generation by systems operating at the edge of chaos, in other words, systems that are purposely unstable. Thompson applied physics to biology; now Kauffman is applying chaos/complexity theory to biology. Cramer sees the interaction of order and disorder as a necessity in nature: “In nature, then, forms are not independent and arbitrary, they are interrelated in a regular way… And even organs arising to serve new functions develop according to the principle of transformation. At the branch points where something new emerges, disruptions of order are in fact necessary; abrupt phase changes occur. Indeed, the interplay of order and chaos constitutes the creative potential of nature.” (38)

The great French mathematician Henri Poincaré first noticed that many simple nonlinear deterministic systems can behave in an apparently unpredictable and chaotic manner. Other early pioneering work in the field of chaotic dynamics is found in the mathematical literature, from such luminaries as Birkhoff, Cartwright, Littlewood, Levinson, Smale, and Kolmogorov and his students, among others. In spite of this, the importance of chaos was not fully appreciated until the widespread availability of digital computers for numerical simulation and the demonstration of chaos in various physical systems. This realization has had broad implications for many fields of science, and it is only within the past decade or so that the field has undergone explosive growth. The ideas of chaos have been very fruitful in such diverse disciplines as biology, economics, chemistry, engineering, fluid mechanics, and physics, to name just a few. Chaos/complexity theory can thus become a real research tool for many fields, and metaphorically it can be used outside the scientific field. This author plans to apply the theory to religious research.

FOOTNOTES

1. Cambel, A. B. Applied Chaos Theory: A Paradigm for Complexity. San Diego, CA: Academic Press, 1993. P. 15.

2. Li, T. Y. and Yorke, J. A. “Period Three Implies Chaos”. American Mathematical Monthly, 82, 1975. Pp. 985-992.

3. Cambel. P. 16, quoting Zeldovich, Y. A., Ruzmaikin, A. A. and Sokoloff, D. D. The Almighty Chance. Singapore: World Scientific, 1990.

4. Stein, Daniel L. (ed.) Lectures in the Sciences of Complexity. Vol. 1. Redwood City, CA: Addison-Wesley, 1989. P. XIII.

5. Poincaré, Jules Henri. Science and Method. English trans. New York: Dover, 1952. Did you know his cousin Raymond was President of France from 1913-20?

6. Ergodic theorems are used in statistics; the method says that, given a long enough interval, a system will return to a state similar to one it previously had.

7. Peterson, I. Newton’s Clock: Chaos in the Solar System. New York: Macmillan, 1993. Peterson tells other quite interesting stories about the beginnings of chaos theory.

8. Lorenz, Edward N. “Deterministic nonperiodic flow”. Journal of the Atmospheric Sciences. 20:130-41. 1963.

9. Lorenz. P. 133.

10. Another problem that jumped right into the middle of the chaos theory excitement was complexity. Kurt Gödel, the Austrian logician, published a paper in 1931 showing that the mathematician’s dream of a complete axiomatic system to represent everything is unattainable. Within the framework of generally accepted basic assertions concerning the integers 1, 2, 3, …, Gödel showed that some assertions can neither be proved nor disproved: these are undecidable assertions. If one increases the number of basic assertions there will nevertheless always remain some undecidable assertions. It was quite earthshaking to mathematics, but it is now accepted that the set of all properties of the integers, and the set of all true assertions about them, has no finite basis.

11. Li, T. Y. and Yorke, J. A. Pp. 985-92.

12. Nonlinear, defined without the mathematics, means an equation that has both negative and positive answers. Dissipative, defined without the physics, means that the system gives off energy and does not get any back.

13. Kellert, Stephen. In the Wake of Chaos: Unpredictable Order in Dynamical Systems. Chicago, Ill.: The University of Chicago Press, 1993. P. 3. A dynamical system is a simplified model of “time-varying” behavior: it gives the recipe for producing the present physical state of a system and for transforming that description into the system’s state in the past or future. By changing the variables one can map the changes a system goes through from time to time; the use of evolution equations or differential equations is the necessary process, which is sometimes oppressively long. Chaos theory is a part of dynamical systems study but uses nonlinear terms in the equations. These nonlinear terms may be expressions such as x^2, sin(x), or 2xy, which make it impossible to render a single answer. “Chaos theory investigates a system by asking about the general character of its long-term behavior.”

14. Thermodynamics is the study of energy flow. Classical thermodynamics studies closed or near-equilibrium systems. Von Bertalanffy actually presented the same idea in his General Systems Theory in 1968. Prigogine researched far-from-equilibrium systems of chemical and heat transfer, which displayed self-organizing characteristics; he refined the theory, linked it to living systems, and publicized it.

15. Prigogine, Ilya and Stengers, I. Order out of Chaos: Man’s New Dialogue with Nature. New York: Bantam Books, 1984.

16. Packard, Norman. “Adaptation Toward the Edge of Chaos”. Technical Report, Center for Complex Systems Research, University of Illinois. CCSR-88-5. 1988. There is an earlier paper by Chris Langton, “Studying Artificial Life with Cellular Automata”, in Physica 22D, 1986. Pp. 120-49.

17. Waldrop, Mitchell. Complexity: The Emerging Science at the Edge of Order and Chaos. New York: Simon & Schuster, 1992. Waldrop keeps you interested in the discovery and the implications of these complex systems.

18. Feigenbaum, M. J. “Quantitative universality for a class of nonlinear transformations”. J. Statist. Phys. 21 (1979). Pp. 25-52.

19. Lanford, O. E. “A computer-assisted proof of the Feigenbaum conjectures”. Bull. Amer. Math. Soc. 6 (1982). Pp. 427-34.

20. Ruelle, David. Chance and Chaos. Princeton: Princeton University Press, 1991. Pp. 57-79.

21. Laszlo, Ervin. The Age of Bifurcation: Understanding the Changing World. Philadelphia, Pa.: Gordon and Breach Science Publishers, 1991. P. 4.

22. von Neumann, John. Theory of Self-Reproducing Automata. Edited by Arthur W. Burks. Champaign-Urbana: University of Illinois Press, 1966.

23. Wolfram, S. “Statistical mechanics of cellular automata” in Rev. Mod. Phys. 55:601, 1983, and “Universality and complexity in cellular automata” in Physica 10D:1, 1984. See also: Theory and Applications of Cellular Automata. Singapore: World Scientific, 1986.

24. Waldrop. P. 87. Cellular automata are programs for generating patterns on a computer according to rules specified by the programmer. They are precisely defined and can be analyzed in detail, yet they have a dynamic quality that leads to complexity in the system. Research with them tries to find laws that describe when and how such complexities emerge in nature.

25. Fox, Ronald F. “Quantum Chaos in Two-Level Quantum Systems”, in The Ubiquity of Chaos. Saul Krasner, ed. Washington, D.C.: American Association for the Advancement of Science, 1990. Pp. 105-113. Fox worked with quantum mechanical models to discover that the periodic modulations arise from virtual quantum transitions. The virtual transitions had to exist for chaos to happen; there was no classical analogue for these findings.

26. Ashby, W. Ross. Design for a Brain. 2nd ed. New York: Wiley, 1960. Ashby’s work centered on how a system with many interacting parts adapts to its environment; he was thinking in terms of neural or brain adaptation. See also: Ashby, W. Ross. “Principles of the Self-Organizing System”, in Principles of Self-Organization. Foerster and Zopf, eds. New York: Pergamon Press, 1962.

27. Bak, P., Tang, C. and Wiesenfeld, K. “Self-Organized Criticality”. Physical Review A 38:364. 1988.

28. Cambel. Pp. 3-4.

29. Cambel. P. 4.

30. Stambler, I. “Chaos Creates a Stir in Energy-Related R&D”. R&D Magazine. December, p. 16.

31. Peng, B., Petrov, V. and Showalter, K. “Controlling Chemical Chaos”. Journal of Physical Chemistry. 95. Pp. 1957-59. 1991.

32. Freeman, Walter J. “Searching for Signal and Noise in the Chaos of Brain Waves”, in The Ubiquity of Chaos. Saul Krasner, ed. Washington, D.C.: American Association for the Advancement of Science, 1990. Pp. 47-55.

33. Bergé, P., Pomeau, Y. and Vidal, C. Order Within Chaos. Translated by L. Tuckerman. Paris: J. Wiley & Sons, 1984. P. 265. The predictive power in this reference is not the power to predict the exact value of some property of a system, but to allow the researcher to understand the overall behavior of that system, and perhaps even to predict what the overall behavior will look like at some future point. It is strictly holistic prediction they refer to.

34. Kauffman, Stuart A. The Origins of Order: Self-Organization and Selection in Evolution. New York: Oxford University Press, 1993.

35. Cramer. Pp. 218-9. Cramer addresses the objection that the theory of fundamental complexity is a product of mysticism. He agrees that it might seem mystical because it contains antinaturalistic, nonscientific, and even mystical elements. But the macroscopic biological realm contains individual molecular events that are subject to feedback coupling operating through amplification mechanisms. The statistical fluctuations can be captured by the nonlinear equations used in chaos theory; thus these networks become indeterminate under certain conditions. Since chaos theory incorporates these latest scientific findings, it cannot be regarded as mysticism.

36. Thompson, D’Arcy Wentworth. On Growth and Form. 2nd ed. Cambridge: Cambridge University Press, 1966.

37. Kauffman, Stuart A. The Origins of Order: Self-Organization and Selection in Evolution.

38. Cramer, F. Chaos and Order. Translated by D. I. Loewus. New York: VCH Publishers, 1993. Pp. 6-7.


**Part 2: Order and Instability in Chaos**

**ORDER**


Finding the order of something is necessary for scientists, historians, artists, waitresses, musicians, theologians, cooks, and Daddies putting together Christmas toys. Ordering can be done mathematically or with pictures. Order in a straight line is easily understood, for it can be constructed from just a series of equal segments; here order is defined by a single, similar difference. To find the order in curves you need to know the starting point and the common difference in successive line segments. Here again, order is defined by a single similar difference. By noting the similar differences between successive segments of a curve or other geometric figure, you can determine their order.(1) Order is also seen in randomness, as Bohm and Peat explain:

…whatever happens must take place in some order so that the notion of a 'total lack of order' has no real meaning. Indeed, even what are called random events do happen to take place in a definable and describable sequence and can be distinguished from other random events. In this elementary sense they obviously have an order.(2)

Order in language, art, music, games, architecture, social structures, and rituals is very subtle because it is context dependent: the participant must understand all its complexities for a meaningful and satisfying appreciation of it. Nature, inanimate objects, and physical systems also have an infinite but subtle order. Flowing water can have a smooth flow in unobstructed areas, while complex eddies and whirlpools can develop around obstructions; even **chaotic order** can erupt in extreme agitation. Randomness can result as well, but only when it is "understood as the result of the action of the very small elements in an overall context that is set by the boundaries and the initial agitation of the water."(3) This is where chaos theory fits into the idea of order: the flowing water is a dynamic system, and describing it calls for **non-linear systems theory**.

The arena of a system is called the state space or the phase space. Mathematically this phase space is the "space where each dimension corresponds to one variable of the system. Thus, every point in state space represents a full description of the system in one of its possible states, and the evolution of the system manifests itself as the tracing out of a path, or trajectory, in state space."(4) When you investigate the behaviour, or phase space, of a dynamical system, tiny perturbations or disturbances external to the system can cause the whole system to change.(5) Sally Goerner gives a good definition of non-linearity: 'any system in which input is not proportional to output'.(6) Non-linear systems in chaos theory display aberrant, seemingly illogical behaviour; they can give either positive or negative feedback; they can produce stability or instability; they can produce coherence through convergence, coupling or entrainment, or produce divergence or even explosion. For chaos to happen, you have to have a system that is *sensitive to the initial conditions* and that is interdependent with its environment. What seems obvious when one begins to look at non-linear systems is that they look like what is going on around us in the everyday world.
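Both ideas can be made concrete in a few lines of code. The sketch below (an illustration, not taken from the source) iterates the logistic map, a textbook non-linear system: the output is not proportional to the input, and two starting values that differ by one part in a million soon follow completely different trajectories.

```python
# A minimal non-linear system: the logistic map x -> r*x*(1 - x).
def logistic(x, r=3.9):
    """One step of the logistic map with growth parameter r."""
    return r * x * (1 - x)

# Input not proportional to output: doubling x does not double f(x).
out1 = logistic(0.2)   # 3.9 * 0.2 * 0.8 = 0.624
out2 = logistic(0.4)   # 3.9 * 0.4 * 0.6 = 0.936, not 2 * 0.624

# Sensitive dependence on initial conditions: a one-in-a-million
# difference between two starts grows to macroscopic size.
a, b = 0.300000, 0.300001
max_gap = 0.0
for _ in range(60):
    a, b = logistic(a), logistic(b)
    max_gap = max(max_gap, abs(a - b))
# max_gap is now of order 0.1 even though the starting gap was 1e-6.
```

Run the same loop with a linear map instead (say x -> 0.9 * x) and the gap between the two trajectories shrinks rather than grows; the divergence is a property of the non-linearity, not of the arithmetic.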

**INSTABILITY**


Chaos/complexity involves dynamics, or what are called "far-from-equilibrium" states. The word equilibrium may remind you of a tranquil lakeside scene; a state of rest is one of its definitions, but it also entails the idea of balance. For complex dynamical systems, equilibrium is a rarity, or as Çambel calls it, "a temporary weigh station".(7) For a dynamical process to take place, the system deviates from equilibrium. Prigogine and Stengers tell us that the more complex a system is, the more numerous are the perturbations, disturbances, or fluctuations that threaten its stability.(8) As the system becomes more vulnerable to these disturbances, its energy requirements escalate as it tries to maintain coherence. Instability can occur in all kinds of structures, from solids to gases, from animate to inanimate, from organic to inorganic, and from constitution to institution. External and internal disturbances can cause stable systems to become unstable, but this instability does not necessarily follow from just any ordinary perturbation. Çambel says it is the "type and magnitude of the perturbation as well as the susceptibility of the system"(9) that must be considered before the system is rendered unstable. He adds that sometimes it takes more than one kind of disturbance for the system to transform into an unstable state. Prigogine and Stengers speak of the "competition between stabilization through communication and instability through fluctuations. The outcome of that competition determines the threshold of stability."(10) In other words, the conditions must be ripe for upheaval to take place. We could liken this to many observable situations in areas such as disease, political unrest, and family and community dysfunction. Çambel invokes the old adage of the straw that broke the camel's back: one last small perturbation finally allows the system to go haywire.

Stephen Kellert says, "Chaos theory investigates a system by asking about the general character of its long-term behaviour."(11) Chaotic solutions seek a qualitative account of the behaviour of a system at some future time. Quantitative closed solutions might tell you **when** three elliptically orbiting planets will line up. Qualitative solutions will tell you **how** the elliptical orbits may have formed, as opposed to circular or parabolic orbits. What will be the characteristics of all solutions of this system? How does the system change behaviour? A system like a marble at the bottom of a bowl can be jostled and will exhibit some behavioural antics but will eventually settle down to the bottom of the bowl. A system like a watch will stop momentarily if given a jar but will continue ticking reliably soon after. These systems are said to be "stable". Unstable or aperiodic systems are unable to resist small disturbances and will display complex behaviour, making prediction impossible; measurements will appear random. Human history is an excellent example of aperiodic behaviour. Civilization may appear to rise and fall, but things never happen in the same way. Small events or single personalities may change the world around them.(12) Kellert goes on to say, "The standard examples of unstable aperiodic behaviour have always involved huge conglomerations of interacting units. The systems may be composed of competing human agents or colliding gas molecules".(13) Yet an unstable aperiodic system can still be deterministic: it is usually composed of fewer than five variables in a differential equation, and "the equations make no explicit reference to chance mechanisms."(14)

In far-from-equilibrium complex systems, changes can frequently occur that upset the fine-tuning between the internal forces structuring the system and the external forces that make up its environment. Most of the time, the fine-tuning allows the system to operate smoothly, but when the perturbations escalate and the system is "stressed" beyond certain threshold limits, subtle indications of unrest crop up; sometimes sudden non-linear "chaos" takes place. The subtle forms of chaos begin with aberrant behaviour. The shift to an attractor, or the swing from one attractor to another, causes the system to behave differently. What can be done when so many problems come upon us? Uri Merry says a human system will have to give "more awareness, attention, and care … to maintaining its internal ties and communication networks."(15) Human decision making has the unmistakable imprint of chaos on it. There are always so many assumptions and implications to consider that it sometimes is quite overwhelming. It is here that a strange attractor aids in the decision making process. The strange attractor might take the form of one's belief system. This has been considered by this writer in her research.

FOOTNOTES

1. Bohm, David and Peat, F. David. Science, Order, and Creativity. (New York: Bantam Books. 1987). Bohm and Peat's book is the workbook of many mathematics students and professors trying to understand the essence of order.

2. Bohm and Peat. Pp. 127-128.

3. Bohm and Peat. Pp. 131-2.

4. Kellert, Stephen. In the Wake of Chaos: Unpredictable Order in Dynamical Systems. Chicago, Ill.: The University of Chicago Press. 1993. P. 8.

5. Limit cycles and quasi-periodic activity can also be exhibited, where external perturbation will not affect the behavior of the system. These are not chaos related.

6. Goerner, Sally. "Chaos, Evolution, and Deep Ecology" in Chaos Theory in Psychology and Life Sciences. Robertson, Robin, and Combs, Alan. (Mahwah, New Jersey: Lawrence Erlbaum Publishers. 1995). P. 19. Goerner works for the Triangle for Non-linear Dynamics in Raleigh, N.C. She illustrates by saying that just because 2 aspirins will reduce a headache a certain amount does not mean 8 aspirins will reduce it 8 times more, or that taking the whole bottle will reduce it that many times more–alas, it will probably kill you, so you won't have to worry about the headache anymore anyway.

7. Çambel, A. B. Applied Chaos Theory: A Paradigm for Complexity. (San Diego, CA: Academic Press, Inc. 1993). P. 47.

8. Prigogine, Ilya and Stengers, I. *Order Out of Chaos: Man's New Dialogue with Nature*. (New York: Bantam Books. 1988). P. 180.

9. Çambel. Pp. 48-49.

10. Prigogine, Ilya and Stengers, I. P. 189.

11. Kellert. P. 3.

12. Kellert. Pp. 3-5.

13. Kellert. P. 5.

14. Kellert. P. 5.

15. Merry, Uri. Coping with Uncertainty: Insights from the New Sciences of Chaos, Self-Organization, and Complexity. (Westport, CT: Praeger Publishers. 1995). P. 65.


**Part 3: Strange Attractor in Chaos Theory**


To define an attractor is not simple. Tsonis gives the definition of an attractor as "a limit set that collects trajectories".(1) A strange attractor is simply the pattern of the pathway, in visual form, produced by graphing the behaviour of a system. Since many, if not most, nonlinear systems are unpredictable and yet patterned, it is called **strange**, and since the behaviour tends to produce a fractal geometric shape, it is said to be attracted to that shape. A system confines a particular entity and its related objects or processes to an imaginary or real frame as the subject of study; this frame is its "state space" or phase space. The behaviour in this state space tends to contract in certain areas; this contraction is called "the attractor". The attractor is actually "a set of points such that all trajectories nearby converge to it". Now tell me what an attractor is. You can't, and neither can I, even with Tsonis's definition. Scientists, mathematicians, and computer specialists can show you pictures of how attractors operate, but they cannot tell you what they are. Maybe that is why Daniel Stein compares Chaos/Complexity to a "theological concept", because lots of people talk about it but no one knows what it really is.(2) (Found in the Preface to the first volume of lectures given at the 1988 Complex Systems Summer School for the Santa Fe Institute in New Mexico.)

Several researchers have defined and studied strange attractors. The first was Lorenz in "Deterministic nonperiodic flow" in 1963,(3) and later Ruelle in "Sensitive dependence on initial condition and turbulent behaviour of dynamical systems" in 1979.(4) When computer simulation came along, the first fractal shape identified took the form of a butterfly; it arose from graphing the changes in weather systems modelled by Lorenz. Lorenz's attractor shows just how and why weather prognostication is so involved and so notoriously wrong: the butterfly effect. This amusing name reflects the possibility that a "butterfly in the Amazon might, in principle, ultimately alter the weather in Kansas."(5) For an in-depth story of how this butterfly effect developed into the science of Chaos and Complexity, see James Gleick's *Chaos* and Mitchell Waldrop's *Complexity*.(6) Kauffman explains that the tiny differences in initial conditions make "vast differences in the subsequent behaviour of the system"(7) as Lorenz illustrated in his weather prognosticator.
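The butterfly effect described above can be reproduced numerically. This sketch (an illustration, not from the source) integrates Lorenz's three equations with a crude explicit-Euler step and follows two trajectories whose starting x-values differ by one part in a hundred million; the parameter values sigma = 10, rho = 28, beta = 8/3 are Lorenz's classic choices.

```python
# Lorenz's 1963 system, integrated with a simple explicit-Euler scheme.
def lorenz_step(x, y, z, dt=0.005, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Advance the Lorenz equations by one Euler step of size dt."""
    dx = sigma * (y - x)
    dy = x * (rho - z) - y
    dz = x * y - beta * z
    return x + dt * dx, y + dt * dy, z + dt * dz

# Two runs, identical except for a 1e-8 "butterfly flap" in the initial x.
pa = (1.0, 1.0, 1.05)
pb = (1.0 + 1e-8, 1.0, 1.05)
max_sep = 0.0
for _ in range(6000):            # roughly 30 time units
    pa = lorenz_step(*pa)
    pb = lorenz_step(*pb)
    sep = sum((u - v) ** 2 for u, v in zip(pa, pb)) ** 0.5
    max_sep = max(max_sep, sep)
# max_sep grows to the scale of the attractor itself: prediction has failed,
# even though both runs used exactly the same deterministic equations.
```

Plotting either trajectory's (x, z) pairs would trace out the familiar two-lobed butterfly shape of the Lorenz attractor.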

Chaos is centred on the concept of the **strange attractor**. Watch the flow of water from your faucet as you turn the water on to give faster and faster outpour; you will see activity from smooth delivery to gushing states. These various kinds of flow represent different patterns to which the flow is attracted. This kind of feedback process is displayed by most natural systems. There are **four** basic kinds of feedback, or cycles, a system can display: these are the **attractors**.(8) Tsonis gives the definition of an attractor as "a limit set that collects trajectories".(9) The four kinds of attractors(10) are:

1. Point attractor: a pendulum swinging back and forth and eventually stopping at a point. The attractor may come as a point, in which case it gives a steady state where no change is made.

2. Periodic attractor: add a mainspring to the pendulum to compensate for friction, and the pendulum now has a limit cycle in its phase space. The periodic attractor portrays processes that repeat themselves.

3. Torus attractor: picture walking on a large doughnut, going over, under and around its outside surface area, circling, but never repeating exactly the same path you went before.(11) The torus attractor depicts processes that stay in a confined area but wander from place to place in that area. (These first three attractors are not associated with chaos theory because they are fixed attractors.(12))

4. **Strange attractor**: this attractor deals with the three-body problem of stability. The strange attractor shows processes that are stable, confined, and yet never do the same thing twice.
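The first attractor in the list is easy to see in code. This sketch (an illustration, not from the source) integrates a damped pendulum from two different starting angles; both trajectories are "collected" by the same point attractor at angle 0, velocity 0.

```python
import math

def settle(theta0, omega0=0.0, damping=0.5, dt=0.01, steps=5000):
    """Euler-integrate theta'' = -sin(theta) - damping*theta'; return the final state."""
    theta, omega = theta0, omega0
    for _ in range(steps):
        alpha = -math.sin(theta) - damping * omega
        theta, omega = theta + dt * omega, omega + dt * alpha
    return theta, omega

# Different initial conditions, same destination: the point attractor (0, 0).
end_a = settle(1.0)
end_b = settle(-2.0)
```

Dropping the damping term turns the point attractor into a periodic one: without friction the pendulum keeps cycling around a closed loop in phase space instead of spiralling into the origin.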

In computer simulations of the strange attractor, the solutions of three non-linear equations exhibit a fractal structure.(13) In other words, each solution curve tends to the same area, the attractor area, and cycles around it without any particular set number of times, never crossing itself, staying in the same phase space, and displaying self-similarity at any scale.(14) The operative term here is self-similarity. Each event, each process, each period, each end-state in phase space is never precisely identical to another; it is similar but not identical. The attractor acts on the system as a whole and collects the trajectories of perturbation in the environment. (These trajectories of perturbation are the positive and negative events going on in and around the system.) Though these systems are unstable, they have patterned order and boundary.

**FRACTALS**

French mathematician Gaston Julia studied these chaotic orbits in complex analytic systems back in the 1920s, but Benoit Mandelbrot, in the early 1970s, gave some rules for computation. His work on noise interference problems revealed distinct ratios between order and disorder at any scale he used. The seemingly chaotic behaviour of noise displayed a fractal structure.(15) Mandelbrot recognized the self-similar pattern that the fractals formed. He then cross-linked this new geometrical idea with hundreds of examples, from cotton prices to the regularity of the flooding of the Nile River.
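The computation rule at the heart of these pictures is short enough to state in full. This sketch (an illustration, not from the source) applies the escape-time test z -> z² + c starting from z = 0: points c whose orbit stays bounded belong to the Mandelbrot set, while points whose orbit passes |z| = 2 have escaped for good.

```python
def escape_time(c, max_iter=100):
    """Iterate z -> z*z + c from z = 0; return the escape step, or max_iter if bounded."""
    z = 0 + 0j
    for n in range(max_iter):
        z = z * z + c
        if abs(z) > 2.0:
            return n
    return max_iter

inside = escape_time(-1 + 0j)   # orbit cycles 0, -1, 0, -1, ... and never escapes
outside = escape_time(1 + 0j)   # orbit 1, 2, 5, ... escapes almost immediately
```

Colouring each pixel of the complex plane by its escape time is exactly how the familiar pictures of the set (and of the related Julia sets) are produced; the self-similar boundary detail appears at every magnification.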

Mitchell Feigenbaum found the constants, or ratios, that govern the phase transition state when order turns to chaos. These Feigenbaum numbers helped to predict the onset of turbulence (chaos) in systems, and applications in the real world began. Optics, economics, electronics, chemistry, biology, and psychology quickly used this new analytic tool. Fractal geometry is now being used to graphically show change and evolution in technology, sociology, economics, psychotherapy, medicine, psychology, astronomy, and evolutionary theory, and the metaphorical application is spreading to art, the humanities, philosophy, and theology.(16)
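Feigenbaum's constant can be glimpsed with nothing more than the period-doubling points of the logistic map. The parameter values below are standard numerical estimates from the literature, not figures taken from this text; the ratios of successive gaps between them approach the universal constant δ ≈ 4.6692.

```python
# Approximate parameter values r_n at which the logistic map's attractor
# doubles its period (to 2, 4, 8, 16, 32). Standard literature estimates.
r = [3.0, 3.449490, 3.544090, 3.564407, 3.568759]

# Feigenbaum's observation: the gaps between doublings shrink geometrically,
# and the ratio of successive gaps converges to the same constant for a
# whole class of maps, not just this one.
ratios = [(r[i + 1] - r[i]) / (r[i + 2] - r[i + 1]) for i in range(len(r) - 2)]
# The ratios close in on delta = 4.669201...
```

That the same limiting ratio appears in dripping faucets, electronic circuits, and chemical oscillators is what made the constant a practical predictor of the onset of turbulence.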

**METAPHORICAL APPLICATIONS**

Kauffman explains that the tiny differences in initial conditions make "vast differences in the subsequent behavior of the system."(17) Unstable or aperiodic systems are unable to resist small disturbances and will display complex behavior, making prediction impossible; measurements will appear random. Human history is an excellent example of aperiodic behavior. Civilization may appear to rise and fall, but things never happen in the same way. Small events or single personalities may change the world around them.(18)

The symbolic use of chaos to delineate the interactions of a system and its environment can be more enlightening with chaos theory as the tool, especially when explicating a historical personage or situation. Here the human being, or set of human beings, creates a pattern in time-space; this pattern is the basin of attraction within which the attractor or multiple attractors form. The Newtonian paradigm of linear mechanics does not reveal all the ramifications that affect the event or person. Newtonian expectations propose smooth transformations that can be plotted by linear actions or reactions; chaos/complexity, by contrast, allows the researcher to see the symbolic interaction of the person or event with their environment.

FOOTNOTES

1. Tsonis, Anastasios A. Chaos: From Theory to Applications. New York: Plenum Press. 1992. P. 67.

2. Kellert, Stephen. In the Wake of Chaos: Unpredictable Order in Dynamical Systems. Chicago, Ill.: The University of Chicago Press. 1993.

3. Stein, Daniel L., ed. Lectures in the Sciences of Complexity. Vol. 1. Redwood City, California: Addison-Wesley Publishing Co. 1989. P. XIII.

4. Lorenz, Edward N. "Deterministic nonperiodic flow". Journal of the Atmospheric Sciences. 20:130. 1963.

5. Ruelle, David. "Sensitive dependence on initial condition and turbulent behavior of dynamical systems". Ann. N. Y. Acad. Sci. 316:408. 1979. See also: Grassberger, P. and Procaccia, I. "Measuring the strangeness of strange attractors". Physica 9D:189. 1983. And Mayer-Kress, G., ed. Dimensions and Entropies in Chaotic Systems: Quantification of Complex Behavior. Berlin: Springer-Verlag. 1986.

6. Ibid.

7. Waldrop, M. Mitchell. Complexity: The Emerging Science at the Edge of Order and Chaos. New York: Simon & Schuster. 1992.

8. Kauffman, Stuart A. The Origins of Order: Self-Organization and Selection in Evolution. New York: Oxford University Press. 1993. P. 178.

9. Tsonis, Anastasios A. Chaos: From Theory to Applications. (New York: Plenum Press. 1992). This book teaches and applies the theory of nonlinear dynamical systems to problems of weather prediction, noise reduction, and neural networks.

10. "Attractor" refers to sets that "attract" orbits and hence determine typical long-term behavior. It is also possible to have sets in phase space on which the dynamics can be exceedingly complicated, but which are not attracting. In such cases orbits placed exactly on the set stay there forever, but typical neighboring orbits eventually leave the neighborhood of the set, never to return. One indication of the possibility of complex behavior on such nonattracting (unstable) sets is periodic orbits whose number increases exponentially with their period, as well as the presence of an uncountable number of nonperiodic orbits. Nonattracting unstable chaotic sets can have important observable macroscopic consequences. Three such consequences are the phenomena of chaotic transients, fractal basin boundaries, and chaotic scattering.

11. For more complete descriptions see: Ian Stewart. "Portraits of Chaos" in New Scientist. Nov. 1989. P. 45. And James P. Crutchfield, J. D. Farmer, N. H. Packard, and R. S. Shaw. "Chaos" in Scientific American. Dec. 1986. P. 50.

12. In most cases, you can predict what will happen no matter what the initial conditions were, but in extreme conditions these attractors too become chaotic. Cramer gives some illustrations of what happens at high energy or angular momentum, when the two-body system does exhibit chaos or reordering. Cramer, Friedrich. Chaos and Order: The Complex Structure of Living Systems. New York: VCH Publishers. 1993. Pp. 121-2.

13. The use of computer simulations is one reason this field of research has been realized only in the past 20 years. The computations are vast, and computers make them easier; they can also show detailed pictographs of the form and growth of fractals. The image often used to describe a fractal structure is the Russian nesting dolls: each one, inside another, growing progressively smaller, but always identical. Fractals reveal self-similarity no matter how deeply you look into the forms.

14. To be as concise as possible, the strange attractor exhibits two seemingly contradictory effects converging into a new system. You might say that one comes to a crossroads and takes both paths at once and winds up circling forever around both of the areas, never crossing the same spot twice. This is actually a word picture of what the Lorenz attractor, or fractal, looks like. Though the system looks random at first, it will retain its shape and space, thus displaying order. This gives researchers a way to investigate the way a system changes its behavior in response to a change in the parameters describing the system and its environment.

15. Fractal got its name from the Latin fractus, meaning "broken", which Mandelbrot came across in his son's Latin book.

16. Scott, George (ed.). Time, Rhythm and Chaos: In the New Dialogue with Nature. Ames: Iowa State University Press. 1991. Complexity theory in the research area of self-organization gives an idea of the widespread nature of this new analytical tool of Chaos.

17. Kauffman, Stuart A. The Origins of Order: Self-Organization and Selection in Evolution. New York: Oxford University Press. 1993. P. 178.

18. Kellert. Pp. 3-5.


**Part 4: Phase Transition**


The edge of chaos seems to be the phase transition state of the system, the place where choices are made. There has always been turbulence in the universe; it has been recognized in the scientific world since Poincaré studied the three-body problem and Lorenz the motions of the atmosphere and their relevance to weather prediction. David Ruelle and Floris Takens opened up a new way to look at turbulence in their paper "On the nature of turbulence".(1) Most of the time, if turbulence showed up in an experiment, it was ignored, accounted for by factoring it out, or declared a failed experiment. But Ruelle and Takens used ideas of René Thom and Stephen Smale, mathematicians working with "differentiable dynamical systems".(2) They proved not only that the onset of turbulence could be mathematically formulated with nonlinear equations, but also that turbulence was directly related to sensitive dependence on initial conditions, and that the turbulence was described by strange attractors. Chaos theory developed from the study of open dynamical systems whose time evolution shows sensitive dependence on initial conditions.(3) It has also been called *deterministic noise*: irregular oscillations that appear noisy although the mechanism that produces them is deterministic. Mitchell Feigenbaum proved that there is a common mathematical relationship in such open, dynamical systems.(4) This relationship became the universal number ratios now called Feigenbaum numbers. Use of these Feigenbaum numbers, continued development in nonlinear mathematics, experiments with chaotic systems, and the discovery of *strange attractors* took chaos theory to a new development horizon.

Phase transition studies grew out of the work begun by John von Neumann(5) and carried on by Stephen Wolfram(6) in their research on cellular automata. M. Mitchell Waldrop, in his book *Complexity: The Emerging Science at the Edge of Order and Chaos*, gives a lively account of the discoveries made at the beginning of the research with cellular automata.(7) They found there were two kinds of phase transitions: first order and second order. We are familiar with first-order transitions from ice melting to water (molecules are forced by a rise in temperature to choose between order and chaos right at 32° F; this is a deterministic choice). Second-order phase transitions combine chaos and order; there is a balance of ordered structures that fill up the phase space in a sort of "dance of sub-microscopic arms and fractal filaments."(8)
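The cellular automata Wolfram studied are simple enough to sketch directly. In the illustration below (not from the source), a one-dimensional row of cells is updated by rule 90, whose rule number is just a lookup table over the eight possible three-cell neighbourhoods; from a single live cell it grows a self-similar, Sierpinski-triangle pattern poised between rigid order and randomness.

```python
def step(cells, rule=90):
    """Apply an elementary cellular-automaton rule; cells beyond the edges are dead."""
    padded = [0] + cells + [0]
    out = []
    for i in range(1, len(padded) - 1):
        # Encode the 3-cell neighbourhood as a number 0-7, then read that bit
        # of the rule number: this table IS the automaton's entire physics.
        neighborhood = (padded[i - 1] << 2) | (padded[i] << 1) | padded[i + 1]
        out.append((rule >> neighborhood) & 1)
    return out

row = [0] * 16 + [1] + [0] * 16     # a single live cell in the middle
history = [row]
for _ in range(8):
    row = step(row)
    history.append(row)
live_counts = [sum(r) for r in history]   # 1, 2, 2, 4, 2, 4, 4, 8, 2
```

Printing each row with '#' for live cells makes the fractal triangle visible; changing the rule number to 110 produces the complex, edge-of-chaos behaviour for which these automata are best known.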

In the material world, phase transitions are so intimately intertwined at the molecular level that there is no way of predicting what state a system will take. The really astonishing discovery was that at the edge of chaos you would encounter not only complexity at its most mysterious, but maybe life itself. Complex adaptive systems, like individuals, families, organizations, and nations, "are able to survive and adapt more effectively in turbulent environments, when they are functioning in a mode that is described as 'the edge of chaos.'"(9) Stuart Kauffman, a theoretical biologist, works on phase transition at the Santa Fe Institute. His studies show that dynamical systems are at their optimum fitness at phase transition states. These systems seem to reach the boundary between order and chaos by themselves and adapt to that state of transition at peak fitness. Kauffman's studies, in his book *The Origins of Order: Self-Organization and Selection in Evolution*, reveal that such complex systems carry out and coordinate the most complex behaviour, adapt most readily, and can build the most useful models of their environment.

Kauffman and Christopher Langton speak of the edge of chaos as the place where systems are at their optimum performance potential. (10) This edge of chaos seems to be the phase transition state of the system, the place where choices are made and bifurcations take place. It is the time and place when there are many options, many positive and negative influences from these options, and a time of great mental turmoil if the system is a human being. The *strange attractor* boxes behaviour into a small, easily handled package, lends coherence to the many positive and negative influences, and self-organizes the system into something new without causing any damage to cascade throughout the system. (11) When a system is operating on the border of chaos, a self-organized critical state produces a weak form of chaos that will allow long-term predictions to be made about the system. Per Bak explained this in relation to his earthquake model. (12)

Per Bak describes the perturbances related to chaos as ‘self-organized criticality’. (13) Bak and his co-researchers use the metaphor of a sand pile with someone steadily dribbling new grains of sand onto it. Because the whole system of the sand pile is so interwoven and interlocked, the pile grows higher until at some point either a large avalanche or a small cascade happens. They assert that this ‘power law’ (the average frequency of a given size of avalanche is inversely proportional to some power of its size) is common in nature. Examples are the critical mass of plutonium, the activity of the sun, light from galaxies, the flow of water through a river, and earthquakes. This vivid metaphor allows one to see how disturbances from the outside can take a system to the edge of chaos and then cascade into a new order. The small changes and large upheavals, or cascades and avalanches in the case of the sand pile, are the signals that a system is operating at the edge of chaos.
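Bak’s sand-pile metaphor can be made concrete in a few lines. The sketch below is a minimal two-dimensional Bak–Tang–Wiesenfeld-style model (the grid size, grain count, and seed are arbitrary choices for illustration, not taken from Bak’s papers): grains land on random sites, a site holding four grains topples and sheds one grain to each neighbour, and topplings can cascade into avalanches of any size.

```python
import random

def drop_grain(grid, n):
    """Add one grain at a random site, then relax the pile.
    Returns the avalanche size (total number of topplings)."""
    i, j = random.randrange(n), random.randrange(n)
    grid[i][j] += 1
    topplings = 0
    unstable = [(i, j)] if grid[i][j] >= 4 else []
    while unstable:
        x, y = unstable.pop()
        while grid[x][y] >= 4:          # a site topples at height 4
            grid[x][y] -= 4
            topplings += 1
            for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nx, ny = x + dx, y + dy
                if 0 <= nx < n and 0 <= ny < n:   # edge grains fall off
                    grid[nx][ny] += 1
                    if grid[nx][ny] == 4:
                        unstable.append((nx, ny))
    return topplings

random.seed(42)
n = 15
grid = [[0] * n for _ in range(n)]
sizes = [drop_grain(grid, n) for _ in range(5000)]

# Most drops trigger no avalanche at all; a few trigger long cascades --
# the skewed distribution that the power law describes.
small = sum(1 for s in sizes if s <= 1)
large = sum(1 for s in sizes if s >= 20)
```

Binning the avalanches by size and plotting frequency against size on log-log axes would show the roughly straight line characteristic of a power law.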

What you have at the edge of chaos is a sublime balance between stability and instability. This sublimely balanced area is the place where creativity evinces itself, the place where decisions are roughly wrought, and the place where mental turmoil is at its most torturous. We have all heard the stories of great thinkers, writers, and artists who live on the edge, so to speak, as they retch forth their magna opera; the stories of their near and real insanities are the lure to millions of readers. It is this poised state between stability and instability that propagates the perturbations and allows some parts of the sand pile to remain the same while other parts are changing. This poised state is a system far from equilibrium.
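The universal route to chaos that footnote 4 describes can be observed numerically. A minimal sketch with the logistic map x → rx(1−x) (the parameter values, transient length, and tolerance are illustrative assumptions, not Feigenbaum’s own numbers): as r grows, the attractor’s period doubles again and again on the way to chaos, and the parameter intervals between successive doublings shrink by the Feigenbaum ratio ≈ 4.669.

```python
def attractor_period(r, transients=5000, max_period=64, tol=1e-6):
    """Iterate the logistic map x -> r*x*(1-x) past its transient,
    then report the smallest period at which the orbit repeats."""
    x = 0.5
    for _ in range(transients):
        x = r * x * (1 - x)
    orbit = [x]
    for _ in range(max_period):
        x = r * x * (1 - x)
        orbit.append(x)
    for p in range(1, max_period + 1):
        if abs(orbit[p] - orbit[0]) < tol:
            return p
    return None  # no short cycle found: the orbit looks chaotic

# The attractor's period doubles along the route to chaos:
# a fixed point, then a 2-cycle, then a 4-cycle, ...
periods = [attractor_period(r) for r in (2.9, 3.2, 3.5)]
```

The doubling thresholds lie near r ≈ 3.0, 3.4495, 3.5441, and the ratios of successive gaps approach Feigenbaum’s universal constant.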

FOOTNOTES:

1. Ruelle, David, and Takens, Floris. “On the nature of turbulence.” Commun. Math. Phys. 20 (1971), pp. 167-192; also 23 (1971), pp. 343-44. This paper was rejected by the first scientific journal it was submitted to, but Ruelle later gave the paper at several seminars and published it himself when he became an editor. It has since become a paradigm in itself. Strange attractors have shaken the scientific and humanities worlds alike.


2. Smale, Steve. “Differentiable dynamical systems.” Bull. Amer. Math. Soc. 73 (1967), pp. 747-817.

3. Ruelle, p. 67. This is Ruelle’s definition of chaos.

4. Feigenbaum, M. J. “Quantitative universality for a class of nonlinear transformations.” Journal of Statistical Physics 19:25 (1978). It is interesting that Feigenbaum’s paper was also rejected the first time it was presented to a journal. In 1975 Feigenbaum was working for the Los Alamos National Laboratory when he plotted the transition points on the route of a system on its way to deep chaos. His discovery was that the ratios were universal to all systems on the way to chaos. The implication for scientists in many disciplines was that they could predict the onset of turbulence. These numbers have been used to forecast heart attacks, avalanches, stock market crises, human behavioral dysfunction, and social unrest, and also in optical systems, electrical circuits, population growth, the flow of gases, and business cycles. Every endeavor has its crisis point, and the Feigenbaum numbers may be a useful tool in calculating the onset of disturbances in any system. There are other roads to chaos, which Ruelle notes on p. 179, footnote 3.

5. von Neumann, John. Theory of Self-Reproducing Automata. Edited by Arthur W. Burks. Champaign-Urbana: University of Illinois Press. 1966.

6. Wolfram, S. “Statistical mechanics of cellular automata.” Rev. Mod. Phys. 55:601 (1983); and “Universality and complexity in cellular automata.” Physica 10D:1 (1984). See also: Theory and Applications of Cellular Automata. Singapore: World Scientific. 1986.

7. Waldrop, M. Mitchell. Complexity: The Emerging Science at the Edge of Order and Chaos. New York: Touchstone, Simon & Schuster. 1992. Pp. 222-240.

8. Waldrop, p. 230.

9. Merry, Uri. Coping with Uncertainty: Insights from the New Sciences of Chaos, Self-Organization and Complexity. Westport, CT: Praeger Publishers. 1995. P. 38.

10. Kauffman, Stuart A. The Origins of Order: Self-Organization and Selection in Evolution. New York: Oxford University Press. 1993. Pp. 181-218.

11. Kauffman, Origins. Pp. 234-5.

12. Bak, Per, Tang, C., and Wiesenfeld, K. “Self-Organized Criticality.” Physical Review A 38:364 (1988). P. 43.

13. Ibid. We will give further information in the Self-Organization section.


**PART 5: DEEP CHAOS**


Deep chaos is the fractal dimension where patterns of self-similarity reveal themselves in descending scales of order. Uri Merry likens them to “a set of wooden Russian dolls, each containing a smaller replica of itself within.” **(1)** This complexity can occur in natural and man-made systems, as well as in social structures; because it is so ubiquitous in nature, it has no agreed-upon definition. Çambel describes complex systems in terms of 15 categories, but to be succinct, complex systems have size and purpose, and are dynamic. **(2)** Cramer **(3)** gives his definition in the form of a logarithm taken from information theory, which in essence means that “the more complex a system, the more information it is capable of carrying.” **(4)** In deep chaos there is a displacement of being, the chthonic realm of turmoil; it is the dimension between states. It is here in the deep chaotic state that the system becomes complex, and hence the term Complexity enters in. Kauffman and Christopher Langton speak of the edge of chaos as the place where systems are at their optimum performance potential. **(5)**
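Cramer’s actual formula is not reproduced in the passage above, but the information-theoretic intuition behind it — a more complex system can carry more information — can be illustrated with Shannon entropy, used here only as an assumed stand-in for Cramer’s logarithmic measure:

```python
import math
from collections import Counter

def shannon_entropy(sequence):
    """Shannon entropy in bits per symbol: H = -sum(p * log2(p)).
    Higher entropy means the sequence can carry more information."""
    counts = Counter(sequence)
    n = len(sequence)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

# A rigidly ordered sequence carries no information per symbol;
# richer alphabets used evenly carry more.
h0 = shannon_entropy("AAAAAAAAAAAAAAAA")   # 0.0 bits
h1 = shannon_entropy("ABABABABABABABAB")   # 1.0 bit
h2 = shannon_entropy("ABCDABCDABCDABCD")   # 2.0 bits
```

This single-symbol measure ignores ordering — “ABAB…” and a random mix of A’s and B’s score alike — which is why measures of complexity proper, as opposed to mere randomness, need more structure than a bare logarithm.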

When the constraints on a system are sufficiently strong (many positive and negative perturbations), the system can adjust to its environment in several different ways. Several solutions may be possible within the whole basin of attraction, and chance alone cannot decide which of them will be realized. It is the attractor that will help determine the solution. The fact that one solution among many does occur gives the system a historical dimension, a sort of memory of a past event that took place at a critical moment and which will affect its further evolution. This is the phase transition of the system, the place where the system is isotropic: it has no preferred direction to go in; it faces an either/or decision between the past old ways and the future new ways. You might visualize the phase transition as a coin tossed into the air; while it is in the air there is only probability, and no actual choice has been made until it lands. There is no observable transilience (leaping from one state to another) in the system, but there is a phase transition that takes place at the edge of chaos before an actual self-organization into another state.

There might be some similarity between phase transition and the sense of NOW. The sense of NOW worries philosophers, theologians, and even scientists. What is NOW? **Karl Popper** reckons NOW is like a single frame in a filmstrip: the future and past are all known within the whole of the strip. **(6)** Einstein worried about NOW as a physics question: NOW was special for humans, but did not have a meaning in physics. **(7)** The NOW in chaos theory is the phase transition state where all choices are open. Paul Tillich **(8)** spoke eloquently about living one’s life in the Eternal Now. Perhaps that is precisely what we do: each moment is a phase transition to the next, and our choices moment by moment determine the life we live.

FOOTNOTES

1. Merry, Uri. Coping with Uncertainty: Insights from the New Sciences of Chaos, Self-Organization and Complexity. Westport, CT.: Praeger Publishers. 1995. P. 40.

2. Çambel, A. B. Applied Chaos Theory: A Paradigm for Complexity. San Diego, CA: Academic Press, Inc. 1993. Pp. 2-4.

3. Cramer, Friedrich. Chaos and Order: The Complex Structure of Living Systems. Trans. David I. Loewus. New York: VCH Publishers. 1993. Pp. 210-218. This section contains Cramer’s view of complexity. He gives its practical aspect as an indeterminate whole, its teleological aspect as the whole emerging, and a subcritical stage in which intentional randomization of the universe as a deterministic system allows for free will because it involves complexity. Therefore the universe is deterministic and indeterministic at the same time. He also refutes the charge that his theory is mysticism or scientism. His explanation is esoteric, but this might be accounted for in translation. The Foreword by Ilya Prigogine says Cramer deserves international acclaim. Hey, what do I know?


4. Cramer gets his information for the logarithm from N. Pippenger. “Complexity Theory.” Scientific American. June 1978. Pp. 90-100.

5. Kauffman, Stuart A. The Origins of Order: Self-Organization and Selection in Evolution. New York: Oxford University Press. 1993. Pp. 181-218.


6. Popper, Karl. The Open Universe. Totowa, N. J.: Rowman & Littlefield. 1956.

7. Prigogine, I., and Stengers, I. Order Out of Chaos. New York: Bantam Books. 1984. P. 214.

8. Tillich, Paul. The Eternal Now.



**COMPLEXITY**

**SELF ORGANIZATION IN CHAOS**


Complexity is the most difficult area of chaos and the cutting edge of field study at the present time.


The physicist Ilya Prigogine has shown how classical open “dissipative structures,” held far from thermal equilibrium by a flow of matter and energy, can be self-organizing. He defines complexity as ‘the ability to switch between different modes of behavior as the environmental conditions are varied.’ **(1)** Out of chaos have come the self-organizational properties that have genuinely surprised and delighted scientists in the last decade. Prigogine states:

**We know now that non-equilibrium, the flow of matter and energy, may be a source of order. We have a feeling of great intellectual excitement: we begin to have a glimpse of the road that leads from being to becoming. (2)**

In the Instability section we discussed order and disorder, but in this section we observe another aspect of these two terms. Order suggests that there is symmetry in the model, an invariance of a pattern under a group of transformations. One part of the pattern is sufficient to reconstruct the whole: to reconstruct a mirror-symmetric pattern, like the human face, you need to know one half and then simply add its mirror image. A crystal structure is typically invariant under a discrete group of translations and rotations; the more redundant or “ordered” the pattern, the smaller the part needed to reconstruct the whole.

Disorder also contains a symmetry, in the probabilities that a component will be found at a particular position. A gas is statistically homogeneous in that any position is as likely to contain a gas molecule as any other, though the individual molecules will not be evenly spread. The law of large numbers says the actual spread will be symmetric, or homogeneous. Even a random process can be defined by the fact that all possible transitions or movements are equally probable.

Complexity may then be characterized by a lack of symmetry, or “symmetry breaking”. No part or aspect of a complex item can provide sufficient information to actually or statistically predict the properties of the other parts. This again connects to the difficulty of modeling associated with complex systems. Prigogine speaks of symmetry breaking in nature:

**Here another interesting question arises: In the world around us, some basic simple symmetries seem to be broken. Everybody has observed that shells often have a preferential chirality. Pasteur went so far as to see in dissymmetry, in the breaking of symmetry, the very characteristic of life. We know today that DNA, the most basic nucleic acid, takes the form of a left-handed helix. How did this dissymmetry arise? One common answer is that it comes from a unique event that has by chance favoured one of the two possible outcomes [but] … to speak of unique events is not satisfactory; we need a more “systematic” explanation. (3)**

Alan Turing hypothesized a mechanism based on the processes of chemical reaction and diffusion to explain how living organisms develop. But the idea was too small to encompass all the complexities involved in biological morphogenesis. His work did give birth to more work in the theory of, and experiment with, spatial dissipative structures. From that grew the work on oscillations, propagating waves, pattern formation on catalytic surfaces, multistability, and chaos. Kondepudi and Prigogine give many examples of what is happening in materials science using instability and self-organization occurring in far-from-equilibrium systems. They also cite biological, geological, and social investigations of the same processes. **(4)**

We find the mechanisms of instability and self-organization in complexity. Complexity implies two or more components joined together in such a way that it would be difficult to separate them. Methods of analysis or decomposition into independent modules cannot be used to develop or simplify models of complexity. These complex entities will be difficult to model, the models will be difficult to use for prediction or control, and the problems will be difficult to solve. Chaos theory has enabled the analysis of such systems in diverse academic research of both science and humanities.

Two aspects of complexity concern distinction and connection. Distinction denotes variety and heterogeneity, and the fact that different parts of the complex behave differently. Connection signifies constraint and redundancy, and the fact that different parts are not independent, but that knowledge of one part allows the determination of features of the other parts. A gas, where the position of any molecule is completely independent of the positions of the other molecules, is an example of distinction leading to disorder, chaos, or entropy. A perfect crystal, where the position of a molecule is completely determined by the positions of the neighboring molecules to which it is bound, is an example of connection leading to order, or negentropy. Complexity can only exist if both aspects are present. Complexity is therefore situated between order and disorder or, using a recently fashionable expression, “on the edge of chaos”.

It has been noted that the strange attractor boxes behavior into a small, easily handled package, allowing coherence to the many positive and negative influences, and self-organizes the system into something new without causing any damage to cascade throughout the system. (5) Kauffman and Christopher Langton speak of the edge of chaos as the place where systems are at their optimum performance potential. (6) Computer simulations of randomly generated Boolean networks are used to explore: the dynamics of evolution on rugged fitness landscapes; the tendency to react to perturbations by returning to a stable cycle or “attractor” that was active when the perturbation occurred; and the relationship among the different attractor loops within such networks. This experimental work is tied in with knowledge of biology and chemistry to explain the emergence of life, autocatalytic systems of chemicals, cell development, and natural selection. Kauffman’s work is relevant to all complex systems and offers lasting insight into the mechanisms underlying cells, societies, and even thought.
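The Boolean-network simulations described above can be imitated in miniature. A hedged sketch of a Kauffman-style random Boolean network (the size n=8, connectivity k=2, and seed are arbitrary illustration choices): each node reads two randomly chosen nodes through a random truth table, and because the state space is finite and the update rule deterministic, every trajectory must settle onto a repeating cycle — an attractor.

```python
import random

def random_boolean_network(n, k=2, seed=3):
    """Wire each of n nodes to k random inputs and give it a random
    Boolean function (a 2**k-entry truth table)."""
    rng = random.Random(seed)
    inputs = [tuple(rng.randrange(n) for _ in range(k)) for _ in range(n)]
    tables = [tuple(rng.randrange(2) for _ in range(2 ** k)) for _ in range(n)]
    return inputs, tables

def step(state, inputs, tables):
    """Synchronously update every node from its inputs' current values."""
    new = []
    for ins, table in zip(inputs, tables):
        idx = 0
        for j in ins:
            idx = idx * 2 + state[j]
        new.append(table[idx])
    return tuple(new)

def attractor_length(state, inputs, tables):
    """Iterate until a state repeats; the gap between the two visits
    is the length of the attractor cycle."""
    seen = {}
    t = 0
    while state not in seen:
        seen[state] = t
        state = step(state, inputs, tables)
        t += 1
    return t - seen[state]

n = 8
inputs, tables = random_boolean_network(n)
cycle = attractor_length(tuple([0] * n), inputs, tables)
# cycle lies between 1 (a frozen fixed point) and 2**n.
```

Perturbing one bit of a state on the cycle and re-running `attractor_length` shows the “return to a stable cycle” behaviour the text mentions: the trajectory falls back onto some attractor, often the same one.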

This edge of chaos seems to be the phase transition state of the system, the place where choices are made and bifurcations take place. It is the time and place when there are many options, many positive and negative influences from these options, and a time of great mental turmoil if the system is a human being. What is it that stabilizes the system? What allows the system to clearly weigh the options and select the one that is fittest for that system to reorganize itself and go on to the next level? Many theories have popped up recently to explain how this self-organization in Complexity works.

David Bohm was one of the leading quantum physicists of our age. In recent years, Bohm attempted to explain an ontological basis for quantum theory. Bohm’s theory is that elementary particles are actually systems of extremely complicated internal structure that act essentially to amplify information contained in a quantum wave. His new theory of the universe is a new model of reality that he called the “Implicate Order.” This entails a holistic cosmic view: it connects all things through a sort of enfoldment. In principle, any individual entity could reveal detailed information about every other entity in the universe. Bohm’s theory states that there is an “unbroken wholeness of the totality of existence as an undivided flowing movement without borders.” The layers of the Implicate Order can go deeper and deeper to, ultimately, the “unknown and indescribable totality” that Bohm calls the holomovement. The holomovement is the “fundamental ground of all matter.” Does this sound like Tillich’s “ground of all being”? Bohm’s implicate order implies a sort of complexity of being where order and disorder join. The Explicate Order is what is manifested as the universe. Bohm’s theory of the Implicate Order emphasizes that the cosmos is in a state of process: it is a feedback-universe that continuously recycles forward into a greater mode of being and consciousness. This is precisely what Chaos/Complexity entails. **(7)**

Another similar theory is the Unifying Theory, called Tripartite Essentialism. The implications of non-linearity are that atoms belong to observed groups with similarly classified properties and that no two similarly classified ‘atoms’ are absolutely identical, due to their Chaos Ontology. It is more fully explained at http://easyweb.easynet.co.uk/~pegasus/, where it continues to make progress in Philosophy of Science, Mind, 5^{th} Generation Artificial Intelligence, and Cosmology.

Stuart Kauffman’s *At Home in the Universe: The Search for the Laws of Self-Organization and Complexity* (N.Y.: Oxford Univ. Press, 1995) contains the new idea that Darwinian natural selection from random variations, while necessary, is not sufficient to explain evolution. There is also a spontaneous self-organizing mechanism. Kauffman states:

**Darwin devastated this world. … Evolution left us stuck on the earth with no ladder to climb, contemplating our fate as nature’s Rube Goldberg machines. Random variation, selection-sifting: here is the core, the root. Here lies the brooding sense of accident, of historical contingency, of design by elimination. … We humans, a trumped-up, tricked-out, horn-blowing, self-important presence on the globe, need never have occurred. So much for our pretensions; we are lucky to have our hour. So much, too, for paradise. (8)**

Research into cellular automata is being done by Chris Langton, who formerly worked at the Santa Fe Institute but is now with Artificial Life; see http://www.santafe.edu/~cgl/. Certainly the basic process of self-organization is iteration. A good example is provided by cellular automata. An automaton consists of cells, which perceive their surroundings and decide to change their state according to some rule. The rules need not be deterministic, but the dynamics they dictate are irreversible.
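The cell-and-rule iteration just described can be written out directly. A minimal one-dimensional elementary cellular automaton in Wolfram’s rule numbering (the particular rule, lattice width, and periodic boundary are illustrative choices):

```python
def step_rule(cells, rule=110):
    """One synchronous update: each cell's next state is looked up from
    the 8-entry truth table given by the bits of `rule`, indexed by the
    (left, self, right) neighbourhood.  Boundaries wrap around."""
    n = len(cells)
    table = [(rule >> i) & 1 for i in range(8)]
    return [table[(cells[(i - 1) % n] << 2) | (cells[i] << 1) | cells[(i + 1) % n]]
            for i in range(n)]

# Start from a single live cell and iterate.
cells = [0] * 31
cells[15] = 1
history = [cells]
for _ in range(10):
    cells = step_rule(cells)
    history.append(cells)
# Printed row by row ('#' for 1, '.' for 0), rule 110 grows an intricate,
# non-repeating triangular pattern out of that single cell.
```

Swapping the rule number changes the character of the dynamics entirely — some rules freeze, some repeat, and a few (like rule 110) sit in the complex regime this section is about.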

Autopoiesis is another theory of self-organization, developed by Humberto Maturana and related to the work of Ilya Prigogine. Heinz von Foerster’s work in the Biological Computer Laboratory (University of Illinois) emphasized the self-organizing features of living systems; he even suggested that we call ourselves ‘human becomings’. Erich Jantsch, who studied self-organizing systems in 1980, hypothesized the integration of a variety of theories of self-regulation and self-organization within the framework of the phenomenon of dissipative self-organization. He proposed the unification of Prigogine’s theory of dissipative structures (order out of chaos), Maturana’s concept of autopoiesis (self-production), and Eigen’s (1971) theory of self-reproducing hypercycles. Maturana defines an autopoietic system as a unified system in which one is unable to distinguish product, producer, or production, since it is self-producing. Jantsch viewed autopoiesis as a way that the self-organization of non-equilibrium systems manifests itself. Characteristically, living systems continuously renew themselves and regulate their processes so that the integrity of their structure is maintained.

**FOOTNOTES:**

1. Prigogine, Ilya, and Gregoire Nicolis. Exploring Complexity: An Introduction. New York: W. H. Freeman & Co. 1998. P. 218.

2. Prigogine, Ilya, and Isabelle Stengers. Order Out of Chaos: Man’s New Dialogue with Nature. New York: Bantam. 1984. P. xxvii.

3. Ibid.

4. Kondepudi, Dilip, and Ilya Prigogine. Modern Thermodynamics: From Heat Engines to Dissipative Structures. Chichester, England: John Wiley & Sons Ltd. 1998. Pp. 459-467.

5. Kauffman, Stuart A. The Origins of Order: Self-Organization and Selection in Evolution. New York: Oxford University Press. 1993. Pp. 234-5.

6. Kauffman, Origins. Pp. 181-218.

7. David Bohm, “Quantum Theory as an Indication of a New Order in Physics: Implicate and Explicate Order in Physical Law,” PHYSICS (GB), 3.2 (June 1973), Pp. 139-168. See also: David Bohm, B. J. Hiley, and P. N. Kaloyerou, “Ontological Basis for the Quantum Theory,” PHYSICS REPORTS (Netherlands) 144.6 (January 1987), Pp. 323-348.

8. Kauffman, At Home in the Universe. P. 7.


**EXHIBIT 2**

**GAME THEORY ANALYSIS **

**Game Theory: An Introductory Sketch **

**The Paradox of Benevolent Authority**

The “Prisoners’ Dilemma” is without doubt the most influential single analysis in Game Theory, and many social scientists, philosophers and mathematicians have used it as a justification for interventions by governments and other authorities to limit individual choice. After all, in the Prisoners’ Dilemma, rational self-interested individual choice makes both parties worse off. A difficulty with this sort of reasoning is that it treats the authority as a deus ex machina — a sort of predictable, benevolent robot who steps in and makes everything right. But a few game theorists and some economists (influenced by Game Theory but not strictly working in the Game Theoretic framework) have pointed out that the authority is a player in the game, and that makes a difference. This essay will follow that line of thought in an explicitly Game-Theoretic (but very simple) frame, beginning with the Prisoners’ Dilemma. Since we begin with a Prisoners’ Dilemma, we have two participants, whom we will call “commoners,” who interact in a Prisoners’ Dilemma with payoffs as follows:

**Table 16-1**

                           Commoner 1
                      cooperate     defect
Commoner 2  cooperate   10,10        0,15
            defect      15,0         5,5
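Table 16-1’s dilemma can be verified mechanically. A small sketch (payoff tuples are ordered (row player, column player), following the table; the strategy names are the table’s own):

```python
C, D = "cooperate", "defect"

# Payoffs from Table 16-1: (row move, column move) -> (row payoff, column payoff).
payoffs = {
    (C, C): (10, 10),
    (C, D): (0, 15),
    (D, C): (15, 0),
    (D, D): (5, 5),
}

def best_reply(opponent_move):
    """The row player's best reply to a fixed move by the column player."""
    return max((C, D), key=lambda mine: payoffs[(mine, opponent_move)][0])

# Defection is a dominant strategy: it is the best reply to either move,
# so both commoners defect and each gets 5 instead of the mutual 10.
dominant = best_reply(C) == D and best_reply(D) == D
```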

The third player in this game is the “authority,” and she (or he) is a very strange sort of player. She can change the payoffs to the commoners. The authority has two strategies, “penalize” or “don’t penalize.” If she chooses “penalize,” the payoffs to the two commoners are reduced by 7. If she chooses “don’t penalize,” there is no change in the payoffs to the two commoners.

The authority also has two other peculiar characteristics:

She is benevolent: the payoff to the authority is the sum of the payoffs to the commoners.

She is flexible: the authority chooses her strategy only after the commoners have chosen theirs.

Now suppose that the authority chooses the strategy “penalize” if, and only if, one or both of the commoners chooses the strategy “defect.” The payoffs to the commoners would then be

**Table 16-2**

                           Commoner 1
                      cooperate     defect
Commoner 2  cooperate   10,10       -7,8
            defect       8,-7       -2,-2

But the difficulty is that this does not allow for the authority’s flexibility and benevolence. Is that indeed the strategy the authority will choose? The strategy choices are shown as a tree in Figure 16-1 below. In the diagram, we assume that commoner 1 chooses first and commoner 2 second. In a Prisoners’ Dilemma it doesn’t matter which participant chooses first, or whether they both choose at the same time. What is important is that the authority chooses last.


**Figure 16-1**

What we see in the figure is that the authority has a dominant strategy: not to penalize. No matter what the two commoners choose, imposing a penalty will make them worse off, and since the authority is benevolent — she “feels their pain,” her payoffs being the sum total of theirs — she will always have an incentive to let them off, not to penalize. But the result is that she cannot change the Prisoners’ Dilemma. Both commoners will choose “defect,” the payoffs will be (5,5) for the commoners, and 10 for the authority.

Perhaps the authority will announce that she intends to punish the commoners if they choose “defect.” But they will not be fooled, because they know that, whatever they do, punishment will reduce the payoff to the authority herself, and that she will not choose a strategy that reduces her payoffs. Her announcements that she intends to punish will not be **credible.**

EXERCISE In this example, a punishment must fall on both commoners, even if only one defects. Does this make a difference for the result? Assume instead that the authority can impose a penalty on one and not the other, so that the authority has 4 strategies: no penalty, penalize commoner 1, penalize commoner 2, penalize both. What are the payoffs to the authority in the sixteen possible outcomes that we now have? Under what circumstances will a benevolent authority penalize? What are the equilibrium outcomes in this more complicated game?

There are two ways to solve this problem. First, the authority might not be **benevolent.** Second, the authority might not be **flexible.**

**Non-benevolent authority:**

We might change the payoffs to the authority so that the authority no longer “feels the pain” of the commoners. For example, make the payoff to the authority 1 if both commoners cooperate and zero otherwise. We might call an authority with a payoff system like this a “Prussian” authority, since she values “order” regardless of the consequences for the people, an attitude sometimes associated with the Prussian state. She then has nothing to lose by penalizing the commoners whenever there is defection, and announcements that she will penalize defection become credible.

EXERCISE Suppose the authority is sadistic; that is, the authority’s payoff is 1 if a penalty is imposed and 0 otherwise. What will be the game equilibrium in this case?

**Non-flexible authority:**

If the authority can somehow commit herself to imposing the penalty in some cases and not in others, perhaps by posting a bond greater than the 15 point cost of a penalty, then the announcement of an intention to penalize would become credible. The announcement and commitment would then be a strategy choice that the authority makes first, rather than last. Let’s say that at the first step, the authority has two strategies: commit to a penalty whenever any commoner chooses “defect,” or don’t commit. We then have a tree diagram like Figure 16-2. What we see in Figure 16-2 is that if the authority commits, the outcome will be cooperation and a payoff of 20 for her, at the top; but if she does not commit, the outcome will be at the bottom: both commoners defect and the payoff will be 10 for the authority. So the authority will choose the strategy of commitment, if she can, and in that case the rational, self-interested action of the commoners will lead to cooperation and good results. But if the commoners irrationally defect, or if they don’t believe the commitment and defect for that reason, then the authority is boxed in: she has to impose a penalty even though it makes everyone worse off. In short, she cannot be flexible.

(Game tree image not reproduced.)

**Figure 16-2**

What we have seen here are two principles that play an important part in modern macroeconomics. Many modern economists apply these principles to the central banks that control the money supply in modern economies. They are

The principle of “rules rather than discretion.”

That is, the authority should act according to rules chosen in advance, rather than responding flexibly to events as they occur. In the case of the central banks, they should control the money supply or the interest rate on public debt (there is controversy about which) according to some simple rule, such as increasing the money supply at a steady rate or raising the interest rate when production is close to capacity, to prevent inflation. If some groups in the economy push their prices up, the monetary authority might be tempted to print money, which would cause inflation and help other groups to catch up with their prices, and perhaps reduce unemployment. But this must be avoided, since the groups will come to anticipate it and just push their prices up all the faster.

The principle of credibility.

It is not enough for the authority to be committed to the simple rule. The commitment must be credible if the rule is to have its best effect.

The difficulty is that it may be difficult for the authority to commit itself and to make the commitment credible. This can be illustrated by another application: dealing with terrorism. Some governments have taken the position that they will not negotiate with terrorists who take hostages, but when the terrorists actually have hostages, the pressure to make some sort of a deal can be very strong. What is to prevent a sensitive government from caving in — just this once, of course! And potential terrorists know those pressures exist, so that the commitments of governments may not be credible to them, even when the governments have a “track record” of being tough.

This may have an effect on the way we want our institutions to function, at the most basic, more or less constitutional level. For example, in countries with strong currencies, like Germany and the United States, the central bank or monetary authority is strongly insulated from democratic politics. This means that the pressures for a more “flexible” policy expressed by voters are not transmitted to the monetary authority — or, anyway, they are not as strong as they might otherwise be — so the monetary authority is more likely to commit itself to a simple rule and the commitment will be more credible.

Are these “conservative” or “liberal” ideas? Some would say that they are conservative rather than liberal, on the grounds that liberals believe in flexibility — considering each case on its own merits, and making the best decision in the circumstances, regardless of unthinking rules. But it may be a little more complex than that. This and the previous essay have considered particular cases in which commitment and rules work better than flexibility. There may be many other cases in which flexibility is needed. I should think that the “liberal” approach would be to consider the case for commitment and for rules rather than discretion on its merits in each instance, rather than relying on an unthinking rule against rules! Anyway, conservative or liberal or radical (as it could be!), the theory of games in extended form is now a key tool for understanding the role of commitment and rules in any society.


Roger A. McCain

**Game Theory: An Introductory Sketch**

**The Essence of Bankruptcy**

Bankruptcy is badly understood in modern economics. This is equally true at the most elementary and most advanced levels, but, of course, the sources of confusion are different in these different contexts.

For the elementary student, there is the tendency to confuse bankruptcy, the decision to shut down production, and “going out of business,” that is, liquidation. The undergraduate textbook encourages this, since it considers only the shut-down decision, and the timeless model usual in the undergraduate textbook makes the shut-down decision appear to be an irreversible one. The textbook discussion of the shut-down observes that the business will shut down if it cannot cover its variable costs, and this illustrates a point about opportunity costs — fixed costs are not considered because they are not opportunity costs in the short run. Bankruptcy occurs when the firm cannot, or will not, cover its debt service payments: quite a different thing. Debt service costs are usually thought of as fixed, not variable costs.

In real businesses, of course, bankruptcy, liquidation, and shut-down are three quite different things that may appear in various combinations or entirely separately. A business may be reorganized under bankruptcy and continue doing business with the former creditors as equity owners — neither shut down nor liquidated. The business that shuts down may not be bankrupt — it may continue to make debt service payments out of its cash reserves and resume production when conditions merit. And a company may be liquidated, for example at the death of a proprietor, although it is able to cover its variable costs and its debt service payments (although this will only occur when the transaction costs of finding a buyer are so high as to make sale of the business infeasible).

Small wonder, then, that the undergraduate economics student finds the shut-down analysis a little confusing: it abstracts from almost everything that matters! But more advanced economists will find bankruptcy confusing for another reason. The reason is related to the phrase “the firm cannot, or will not, cover its debt service payments.” We may think of a lending agreement as a solution to a cooperative game, that is, a game in which both players commit themselves at the outset to coordinated strategies. The repayment of debt service is the strategy the firm has committed itself to. For the firm to fail to pay its debt service contradicts the supposition that the firm had, in the first instance, committed itself. And the creditors are letting the firm out of its contract, losing by doing so; why should they? It seems that we must fall back on the first part of the statement: the firm cannot make its debt service payments. Some unavoidable (but not clearly foreseen) circumstance makes it impossible for the debt service to be paid. We then interpret the debt contract as a commitment to pay “if possible,” or with some other such weasel-words, and we understand why the creditor capitulates: she or he has no choice.

But how can it be that “the firm cannot” pay its debt service? We need to make our picture a little more detailed.

First, uncertainty clearly plays a part in it. If bankruptcy were certain, there would be no lending. Accordingly, we represent uncertainty in the usual way in modern economics: we suppose that the world may realize one of two or more states. At the outset, the state of the world is not known. After some decisions and commitments are made, the state of the world is revealed, and some of the decisions and commitments made at the first stage must be reconsidered. Bankruptcy is such a reconsideration of commitments made in ignorance of the state of the world: it occurs only in some states of the world, and the payoff to the lender in the other states is good enough to make the deal acceptable as a whole.

Second, we must be a little more careful about just who “the firm” is, since it is a compound player. Let us adopt the John Bates Clark model of the business enterprise, and of the market economy, as a first approximation. In this model there are capitalists (lenders, for our purposes), suppliers of labor services, that is workers, and “the entrepreneur,” who owns nothing and whose services are those of coordination between the other two groups.

With these specifics in mind, let us return to the shut-down decision as it is portrayed in the intermediate microeconomics text. What leads “the firm” to shut down? What happens is that the state of the world realized is a relatively bad one. That is, the conditions for production and/or demand are poor, so that the enterprise is unable to “cover its variable costs.” In other words, it is unable to pay the workers enough to keep them in the enterprise. The key point here is that the workers have alternatives. The revenue of the enterprise is so little that, even if the workers get it all, they do not make as much as they would in their best alternatives. Saying “the firm cannot cover its variable costs” is a coded way of saying “the firm cannot recruit labor with its available revenues.” In such a case, there is clearly no alternative to shutting down.

But, as we have observed, a firm may go bankrupt but not shut down, instead continuing to produce under reorganized ownership. How would this occur? The state of the world is not quite as bad: the enterprise can earn enough revenue to pay its workers their best alternative wages, but, having done that, there is not enough left to pay the debt service. The entrepreneur has only two choices: to cut wages below the workers’ best alternative pay, lose them all, produce nothing, and default on all of the debt service; or to pay the workers at their best alternative, produce something, and pay something toward the debt service. Clearly, the latter is in the interest of the lenders, so they renegotiate the note.

In all of this, “the entrepreneur” has played a passive role. John Bates Clark’s “entrepreneur” is not much of a player, from the point of view of game theory, anyway. His role is to combine capital and labor in such a way as to maximize profits. In effect, he is an automaton whose programmed decisions define the rules of a game between the workers and the bankers. At the point of bankruptcy, his role is even less active. The choices and commitments are made by the substantive players: capitalists and workers. The essence of bankruptcy is a game played between a lender and a group of workers. We may as well eliminate the entrepreneur entirely, and think of the firm as a worker-cooperative. From here on, we shall follow that strategy.

To make things more explicit still, let us consider a numerical example. The players are, as before, a banker and a group of workers. If the banker lends and the workers work, the enterprise can produce a revenue that depends on the state of the world. There are three states. The best state is the “normal” one, so we assign it a probability of 0.9. The other two states are bad and worse (a bankruptcy state and a shut-down state) with probabilities of 0.05 each. Thus production possibilities are as shown in Table 18-1.

**Table 18-1**

| state | revenue | probability |
|---|---|---|
| 1 | 3000 | 0.9 |
| 2 | 2000 | 0.05 |
| 3 | 1000 | 0.05 |

We suppose that the safe rate of return (the opportunity cost of capital) is 1%, and that the lender, being profit oriented, offers a loan of 1000 to enable production to take place; the lender’s best alternative is therefore a safe return of 1010. The contract rate of interest is 10%; i.e., 1100 has to be paid back at the end of the period. We suppose, also, that the workers can get alternative pay amounting to 1500.

If the loan is made, the state of the world is revealed, and then the participants reconsider their strategy choices in the light of the new information. Should the bank make the loan? Should the workers’ cooperative accept it? We shall have to consider the various outcomes and then apply “backward induction” to get the answer.

What then happens in state 3? The answer is that in state 3 the members of the cooperative all resign in order to take their best alternative opportunities, since the alternative pay of 1500 exceeds the 1000 the enterprise can earn; the cooperative spontaneously ceases to exist, and the lender gets nothing.

What about state 1? The enterprise revenue is enough to pay the 1100 in debt service, and the workers’ income, 1900, is more than their best alternative, so they do stay and produce, and both the bank and the workers’ cooperative are better off.

We now turn to the pivotal state 2. Here there is enough revenue to pay the debt service, but if it is paid, the workers get only 900 < 1500. In such a case, again, the worker-members of the cooperative will resign, the cooperative will dissolve for lack of members, and the bank will get nothing. On the other hand, if the bank renegotiates for partial repayment of 500 or less, then the workers get 1500 and the cooperative continues. Thus, in this state, the bank renegotiates and earns 500.

The bank’s expected repayment thus is

.9(1100) + .05(500) + .05(0) = 1015 > 1010

Thus the bank makes more than its best alternative and will accept the contract. As for the workers in the cooperative, they make a mathematical expectation of

.9(1900) + .05(1500) + .05(1500) = 1860 > 1500

And so they, too, accept the contract. Thus the loan is made, despite a .05 probability of bankruptcy and a .05 probability of outright default.

In many games of this kind, one or another player can obtain a better result if he can commit himself credibly at the outset to a strategy which may seem less advantageous once the state of the world is known and others have made their decisions. Would the bank be better off if it could commit itself not to renegotiate? The answer is that it would not. Its payoffs would be

.9(1100) + .05(0) + .05(0) = 990 < 1010

The lenders would be worse off and, if (for example) statute law forbade them from renegotiating, they would refuse to make the loan!
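The backward-induction arithmetic above can be checked mechanically. The sketch below (Python; the numbers are exactly those of Table 18-1 and the surrounding text, nothing new is assumed) reproduces the bank’s expected repayment with and without renegotiation, and the workers’ expectation:

```python
# Backward induction for the bankruptcy game, using the Table 18-1 numbers.
# In each state the workers stay only if their residual pay at least matches
# the alternative wage of 1500; otherwise the cooperative dissolves.

states = [  # (revenue, probability)
    (3000, 0.9),   # state 1: normal
    (2000, 0.05),  # state 2: bankruptcy (renegotiation) state
    (1000, 0.05),  # state 3: shut-down state
]
DEBT_SERVICE = 1100   # 1000 at the 10% contract rate
ALT_WAGE = 1500       # workers' best alternative pay
SAFE_RETURN = 1010    # 1000 at the 1% safe rate

def repayment(revenue, can_renegotiate):
    """What the bank collects in one state."""
    if revenue - DEBT_SERVICE >= ALT_WAGE:
        return DEBT_SERVICE            # full payment; workers stay
    if can_renegotiate and revenue - ALT_WAGE > 0:
        return revenue - ALT_WAGE      # partial repayment (500 in state 2)
    return 0                           # workers resign; bank gets nothing

def expected_repayment(can_renegotiate):
    return sum(p * repayment(r, can_renegotiate) for r, p in states)

def expected_worker_income():
    # Workers never accept less than the alternative wage, so their income
    # in each state is the larger of the residual and 1500.
    return sum(p * max(r - repayment(r, True), ALT_WAGE) for r, p in states)

print(round(expected_repayment(True), 2))   # 1015.0 > 1010: the loan is made
print(round(expected_repayment(False), 2))  # 990.0 < 1010: no loan without renegotiation
print(round(expected_worker_income(), 2))   # 1860.0 > 1500: the workers accept
```

The renegotiation option is what makes the loan worth making: it raises the bank’s expected repayment from 990 to 1015, above the safe return of 1010.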

But what about the workers? It is their desertion that leads the enterprise to be abandoned if the debt service is paid in state 2. What if they could be somehow bound to the firm? Slavery offers one possibility. In a system that permits slavery, “the entrepreneur” might buy slaves instead of hiring free workers. In state 3, “the entrepreneur” would rent out the slave work force for 1500, pay the 1100 debt service, and pocket the profits (assuming the cost of the food necessary to keep the slaves productive is less than 400). In state 2, “the entrepreneur” would require the slaves to work in the firm, produce 2000, pay the debt service, and pocket 900 less the cost of their food. The bank would get its debt service in every state (barring slave starvation) and might well prefer to lend to slavemasters rather than to worker cooperatives or John Bates Clark style firms.
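The slavery variant can be put in the same terms. The sketch below (an illustration, not from the original essay; it ignores the food costs the text mentions) shows that the entrepreneur’s residual is non-negative in every state, so the 1100 debt service is always covered:

```python
# The slavery variant: the entrepreneur either operates the firm or rents
# the slave work force out at its alternative value of 1500, so the 1100
# debt service is covered in every state (food costs are ignored here).

states = [(3000, 0.9), (2000, 0.05), (1000, 0.05)]  # (revenue, probability)
DEBT_SERVICE, RENTAL_VALUE = 1100, 1500

def entrepreneur_gross(revenue):
    # Operate when revenue beats renting the slaves out; otherwise rent.
    return max(revenue, RENTAL_VALUE)

profits = {r: entrepreneur_gross(r) - DEBT_SERVICE for r, _ in states}
print(profits)                        # residual per state, never negative
expected_to_bank = DEBT_SERVICE       # paid in full in every state
print(expected_to_bank)               # 1100, versus 1015 expected from the cooperative
```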

In the context of the John Bates Clark firm, the desertion of the workers in states 2 and 3 comes as no surprise to us — the workers are hired by “the entrepreneur” at mutual convenience and are expected to leave whenever it benefits them to do so. In this example, however, the loan is made to a cooperative association of the workers, their own association. If it were made to them individually, they would be no less responsible for it after they had moved on to their other, better-paying jobs. But the obligation to pay the loan has been assumed by a group of workers, as a group, and the group can continue to exist only so long as it is in the interest of the workers as individuals for it to do so. And this does not reflect the constitution of the firm, but the liberal constitution of society, that holds that no agency, even one constructed by the workers, may require a person to work without offering a payment sufficient to get the worker’s assent.

And this is the essence of the case for the proprietary or corporate enterprise as well. The proprietor or investor-owned corporation is no more than a middleman between a group of workers and a bank, so far as bankruptcy is concerned. The essence of bankruptcy is a renegotiation of the loan contract between a lender and a group of workers, and laws exempting the debtor from the full amount of the debt, in appropriate circumstances, are laws for the protection of the creditors, not of the debtors.


**Game Theory: An Introductory Sketch**

**A Theory of Burnout**

As an illustration of the concepts of sequential games and subgame perfect equilibrium, we shall consider a case in the employment relationship. This game will be a little richer in possibilities than the economics textbook discussion of the supply and demand for labor, in that we will allow for two dimensions of work the principles course does not consider: variable effort and the emotional satisfactions of “meaningful work.” We also allow for a sequence of more or less reliable commitments in the choice of strategies.

We consider a three-stage game. At the first stage, one player in the game, the “worker,” must choose between two kinds of strategies, that is, two “jobs.” In either job, the worker will later have to choose between two rates of effort, “high” and “low.” In either job, the output is 20 in the case of high effort and 10 if effort is low. We suppose that the first job is a “meaningful job,” in the sense that it meets needs with which the worker sympathizes. As a consequence of this, the worker “feels the pain” of unmet needs when her or his output falls below the potential output of 20. This reduces her or his utility payoff when she or he shirks at the lower effort level. Of course, her or his utility also depends on the wage and (negatively) on effort. Accordingly, in Job 1 the worker’s payoff is

wage − 0.3(20 − output) − 2(effort)

where effort is zero or one. The other job is “meaningless,” so that the worker’s utility does not depend on output; in this job the payoff is

wage − 2(effort)

At the second stage of the game the other player, the “employer,” makes a commitment to pay a wage of either 10 or 15. Finally, the worker chooses an effort level, either 0 or 1.

The payoffs are shown in Table 17-1.

**Table 17-1**

| | Job 1, effort 0 | Job 1, effort 1 | Job 2, effort 0 | Job 2, effort 1 |
|---|---|---|---|---|
| wage high | -5, 12 | 5, 13 | -5, 15 | 5, 13 |
| wage low | 0, 7 | 10, 8 | 0, 10 | 10, 8 |

In each cell of the matrix, the worker’s payoff is to the right of the comma and the employer’s to the left. Let us first see what is “efficient” here. The payoffs are shown in Figure 17-1. Payoffs to the employer are on the vertical axis and those to the worker on the horizontal axis. Possible payoff pairs are indicated by stars-of-David. In economics, a payoff pair is said to be “efficient,” or equivalently “Pareto-optimal,” if it is not possible to make one player better off without making the other player worse off. The pairs labeled A, B, and C have that property. They are (10,8), (5,13) and (-5,15). The others are inefficient. The red line linking A, B, and C is called the utility possibility frontier. Any pairs to the left of and below it are inefficient.

(Payoff diagram not reproduced.)

**Figure 17-1: Game Outcomes**
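The efficiency claim can be verified directly from the payoff pairs. This sketch (Python, an illustration rather than part of the original essay) filters out the Pareto-dominated pairs:

```python
# Pareto-efficient payoff pairs among the Table 17-1 outcomes.
# A pair is efficient if no other outcome makes one player better off
# without making the other worse off.

pairs = {(-5, 12), (5, 13), (0, 7), (10, 8), (-5, 15), (0, 10)}

def dominates(a, b):
    """a Pareto-dominates b: at least as good for both players, better for one."""
    return a[0] >= b[0] and a[1] >= b[1] and a != b

efficient = sorted(p for p in pairs if not any(dominates(q, p) for q in pairs))
print(efficient)  # -> [(-5, 15), (5, 13), (10, 8)]
```

These are exactly the pairs labeled C, B, and A on the utility possibility frontier.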

Now let us explore the subgame perfect equilibrium of this model. First, we may see that the low wage is a “dominant strategy” for the employer. That is, regardless of which strategy the worker chooses — job 1 and low effort, job 2 and high effort, and so on — the employer is better off with low wages than with high. Thus the worker can anticipate that the wages will be low. Let us work backward. Suppose that the worker chooses job 2 at the first stage. This limits the game to the right-hand side of the table, which has a structure very much like the Prisoners’ Dilemma. In this subgame, both players have dominant strategies. The worker’s dominant strategy is low effort, and the Prisoners’ Dilemma-like outcome is at (0,10). This is the outcome the worker must anticipate if he chooses Job 2.

What if he chooses Job 1? Then the game is limited to the left-hand side. In this game, too, the worker, like the employer, has a dominant strategy, but in this case it is high effort. This subgame is not Prisoners’ Dilemma-like, since the equilibrium — (10,8) — is an efficient one. This is the outcome the worker must expect if she or he chooses Job 1, “meaningful work.”

But the worker is better off in the subgame defined by “nonmeaningful work,” Job 2. Accordingly, she will choose Job 2, and thus the equilibrium of the game as a whole (the subgame perfect equilibrium) is at (0,10). It is indicated by point E in the figure, and is inefficient.
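The backward induction just described can be traced mechanically. The following sketch (Python, constructed from the Table 17-1 payoffs; not part of the original essay) solves the three stages in reverse order:

```python
# Subgame perfect equilibrium of the three-stage burnout game.
# Payoff pairs are (employer, worker), taken from Table 17-1.
payoff = {
    # (job, wage, effort): (employer, worker)
    (1, "high", 0): (-5, 12), (1, "high", 1): (5, 13),
    (1, "low",  0): (0, 7),   (1, "low",  1): (10, 8),
    (2, "high", 0): (-5, 15), (2, "high", 1): (5, 13),
    (2, "low",  0): (0, 10),  (2, "low",  1): (10, 8),
}

def solve():
    """Work backward: effort last, then the wage, then the choice of job."""
    job_results = {}
    for job in (1, 2):
        # Stage 3: for each wage, the worker picks the effort she prefers.
        after_wage = {
            wage: max((payoff[(job, wage, e)] for e in (0, 1)),
                      key=lambda p: p[1])          # compare worker payoffs
            for wage in ("high", "low")
        }
        # Stage 2: the employer picks the wage, anticipating that effort.
        job_results[job] = max(after_wage.values(), key=lambda p: p[0])
    # Stage 1: the worker picks the job, anticipating all of the above.
    best_job = max((1, 2), key=lambda j: job_results[j][1])
    return best_job, job_results[best_job]

print(solve())  # -> (2, (0, 10)): "nonmeaningful" work, low wage, low effort
```

The result reproduces the inefficient equilibrium E of the text: Job 2 is chosen, wages are low, and effort is low.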

Why is meaningful work not chosen in this model? It is not chosen because there is no effective reward for effort. With meaningful work, the worker can make no higher wage, despite her greater effort. Yet she does not reduce her effort because doing so brings the greater utility loss of seeing the output of meaningful work decline on account of her decision. The dilemma of having to choose between a financially unrewarded extra effort and witnessing human suffering on account of one’s failure to make the effort seems to be a very stylized account of what we know as “burnout” in the human service professions.

Put differently, workers do not choose meaningful work at low wages because they have a preferable alternative: shirking at low effort levels in nonmeaningful jobs. Unless the meaningful jobs pay enough to make those jobs, with their high effort levels, preferable to the shirking alternative, no-one will choose them.

Inefficiency in Nash equilibria is a consequence of their noncooperative nature, that is, of the inability of the players to commit themselves to efficiently coordinated strategies. Suppose they could do so; what then? Suppose, in particular, that the employer could commit herself or himself, at the outset, to pay a high wage, in return for the worker’s commitment to choose Job 1. There is no need for an agreement about effort: of the remaining outcomes, in the upper left corner of the table, the worker will choose high effort and (5,13), because of the “meaningful” nature of the work. This is an efficient outcome.

And that, after all, is the way markets work, isn’t it? Workers and employers make reciprocal commitments that balance the advantages to one against the advantages to the other? It is, of course, but there is an ambiguity here about time. There is, of course, no measurement of time in the game example. But commitments to careers are lifetime commitments, and correspondingly, the wage incomes we are talking about must be lifetime incomes. The question then becomes, can employers make credible commitments to pay high lifetime income to workers who choose “meaningful” work with its implicit high effort levels? In the 1960’s, it may have seemed so; but in 1995 it seems difficult to believe that the competitive pressures of a profit-oriented economic system will permit employers to make any such credible commitments.

This may be one reason why “meaningful work” has generally been organized through non-profit agencies. But under present political and economic conditions, even those agencies may be unable to make credible commitments of incomes that can make the worker as well off in a high-effort meaningful job as in a low-effort nonmeaningful one. If this is so, there may be little long-term hope for meaningful work in an economy dominated by the profit system.

Lest I be misunderstood, I do not mean to argue that a state-organized system would do any better. There is an alternative: a system in which worker incomes are among the objectives of enterprises, that is, a cooperative system. It appears to be possible that such a system could generate meaningful work. There is empirical evidence that cooperative enterprises do in fact support higher effort levels than either profit-oriented or state organizations.

Of course, some nonmeaningful work has to be done, and it remains true that when nonmeaningful work is done it is done inefficiently and at a low effort level, that is, at E in the figure. In other words, the fundamental source of inefficiency in this model is the inability of the workers to make a credible commitment to high effort levels. If high effort could somehow be assured, then (depending on bargaining power) a high-effort efficient outcome would become a possibility in the nonmeaningful work subgame, and this in turn would eliminate the worker’s incentive to choose nonmeaningful work in order to shirk. (If worker bargaining power should enforce the outcome at C, which is Pareto-optimal, the shirking nonmeaningful strategy would still dominate meaningful work). However, it does seem that it is very difficult to make commitments to high effort levels credible, or enforceable, in the context of profit-oriented enterprises.

It may be, then, that the problem of finding meaningful work, and of burnout in fields of meaningful work, is a relatively minor aspect of the far broader question of effort commitment in modern economic systems. Perhaps it will do nevertheless as an example of the application of subgame perfect equilibrium concepts to an issue of considerable interest to many modern university students.



Roger A. McCain


**A Chronology of Game Theory**

**by Paul Walker**

*May 2001*

**Chronology**


0-500AD

The Babylonian Talmud is the compilation of ancient law and tradition set down during the first five centuries A.D. which serves as the basis of Jewish religious, criminal and civil law. One problem discussed in the Talmud is the so-called marriage contract problem: a man has three wives whose marriage contracts specify that in the case of his death they receive 100, 200 and 300 respectively. The Talmud gives apparently contradictory recommendations. Where the man dies leaving an estate of only 100, the Talmud recommends equal division. However, if the estate is worth 300 it recommends proportional division (50,100,150), while for an estate of 200, its recommendation of (50,75,75) is a complete mystery. This particular Mishna baffled Talmudic scholars for two millennia. In 1985 it was recognised that the Talmud anticipates the modern theory of cooperative games: each solution corresponds to the nucleolus of an appropriately defined game.

1713

In a letter dated 13 November 1713, James Waldegrave provided the first known minimax mixed-strategy solution to a two-person game. Waldegrave wrote the letter, about a two-person version of the card game le Her, to Pierre-Remond de Montmort, who in turn wrote to Nicolas Bernoulli, including in his letter a discussion of the Waldegrave solution. Waldegrave’s solution is a minimax mixed-strategy equilibrium, but he made no extension of his result to other games, and expressed concern that a mixed strategy “does not seem to be in the usual rules of play” of games of chance.

1838

Publication of Augustin Cournot’s Researches into the Mathematical Principles of the Theory of Wealth. In chapter 7, On the Competition of Producers, Cournot discusses the special case of duopoly and utilises a solution concept that is a restricted version of the Nash equilibrium.
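
Cournot’s duopoly solution can be illustrated numerically. The sketch below uses a hypothetical linear demand P = a - b(q1 + q2) and constant unit cost c (figures assumed, not from the original text) and iterates each producer’s best response until output settles at the Cournot equilibrium.

```python
# Illustrative sketch (assumed parameters): Cournot duopoly with linear
# demand P = a - b*(q1 + q2) and constant unit cost c.
def best_response(q_other, a=100.0, b=1.0, c=10.0):
    """Profit-maximising quantity against a rival's given output."""
    return max(0.0, (a - c - b * q_other) / (2 * b))

# Iterated best responses converge to the equilibrium q1 = q2 = (a - c)/(3b).
q1 = q2 = 0.0
for _ in range(100):
    q1, q2 = best_response(q2), best_response(q1)

print(round(q1, 4), round(q2, 4))  # 30.0 30.0
```

At the fixed point neither producer can raise profit by changing output unilaterally, which is exactly the equilibrium property Nash later generalised.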

1871

In the first edition of his book The Descent of Man, and Selection in Relation to Sex, Charles Darwin gives the first (implicitly) game-theoretic argument in evolutionary biology. Darwin argued that natural selection will act to equalize the sex ratio. If, for example, births of females are less common than males, then a newborn female will have better mating prospects than a newborn male and therefore can expect to have more offspring. Thus parents genetically disposed to produce females tend to have more than the average number of grandchildren, so the genes for female-producing tendencies spread and female births become commoner. As the 1:1 sex ratio is approached, the advantage associated with producing females dies away. The same reasoning holds if males are substituted for females throughout. Therefore 1:1 is the equilibrium ratio.
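
Darwin’s argument reduces to a line of arithmetic: every one of the N offspring in a generation has exactly one father and one mother, so the rarer sex enjoys the higher expected number of offspring per head. The population figures below are assumed purely for illustration.

```python
# A minimal numeric check of the sex-ratio argument (figures assumed):
# expected offspring per male is N/males, per female N/females.
def reproductive_value(males, females, offspring):
    return offspring / males, offspring / females

# With females scarce (20 females to 80 males), each female expects four
# times the offspring of a male, so female-producing genes spread.
per_male, per_female = reproductive_value(80, 20, 400)
print(per_male, per_female)  # 5.0 20.0

# At a 1:1 ratio the advantage disappears - Darwin's equilibrium.
print(reproductive_value(50, 50, 400))  # (8.0, 8.0)
```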

1881

Publication of Francis Ysidro Edgeworth’s Mathematical Psychics: An Essay on the Application of Mathematics to the Moral Sciences. Edgeworth proposed the contract curve as a solution to the problem of determining the outcome of trading between individuals. In a world of two commodities and two types of consumers he demonstrated that the contract curve shrinks to the set of competitive equilibria as the number of consumers of each type becomes infinite. The concept of the core is a generalisation of Edgeworth’s contract curve.

1913

The first theorem of game theory asserts that in chess either white can force a win, or black can force a win, or both sides can force at least a draw. This ‘theorem’ was published by Ernst Zermelo in his paper Über eine Anwendung der Mengenlehre auf die Theorie des Schachspiels and hence is referred to as Zermelo’s Theorem. Zermelo’s results were extended and generalised in two papers by Denes Konig and Laszlo Kalmar. The Kalmar paper contains the first proof of Zermelo’s theorem, since Zermelo’s own paper did not give one. An English translation of the Zermelo paper, along with a discussion of its significance and its relationship to the work of Konig and Kalmar, is contained in Zermelo and the Early History of Game Theory by U. Schwalbe and P. Walker.

1921-27

Emile Borel published four notes on strategic games and an erratum to one of them. Borel gave the first modern formulation of a mixed strategy along with finding the minimax solution for two-person games with three or five possible strategies. Initially he maintained that games with more possible strategies would not have minimax solutions, but by 1927 he considered this an open question, as he had been unable to find a counterexample.

1928

John von Neumann proved the minimax theorem in his article Zur Theorie der Gesellschaftsspiele. It states that every two-person zero-sum game with finitely many pure strategies for each player is determined, ie: when mixed strategies are admitted, this variety of game has precisely one individually rational payoff vector. The proof makes involved use of some topology and of functional calculus. This paper also introduced the extensive form of a game.
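
For the 2x2 case with no saddle point, the content of the minimax theorem admits a simple closed form, sketched below with an illustrative payoff matrix (not taken from von Neumann’s paper).

```python
# A hedged sketch of the 2x2 zero-sum case without a saddle point: the
# mixed-strategy value of the payoff matrix [[a, b], [c, d]] has the
# classical closed form below (payoffs here are purely illustrative).
def solve_2x2(a, b, c, d):
    denom = a - b - c + d           # nonzero when there is no saddle point
    p = (d - c) / denom             # row player's weight on the first row
    v = (a * d - b * c) / denom     # the unique value of the game
    return p, v

p, v = solve_2x2(3, 1, 0, 2)
print(p, v)  # 0.5 1.5
```

With the mixture (p, 1-p) the row player earns the value v against either column, which is what “determined” means in the entry above.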

1930

Publication of F. Zeuthen’s book Problems of Monopoly and Economic Warfare. In chapter IV he proposed a solution to the bargaining problem which Harsanyi later showed is equivalent to Nash’s bargaining solution.

1934

R. A. Fisher independently discovers Waldegrave’s solution to the card game le Her. Fisher reported his work in the paper Randomisation and an Old Enigma of Card Play.

1938

Ville gives the first elementary, but still partially topological, proof of the minimax theorem. Von Neumann and Morgenstern’s (1944) proof of the theorem is a revised, and more elementary, version of Ville’s proof.

1944

Theory of Games and Economic Behavior by John von Neumann and Oskar Morgenstern is published. As well as expounding two-person zero-sum theory, this book is the seminal work in areas of game theory such as the notion of a cooperative game with transferable utility (TU), its coalitional form and its von Neumann-Morgenstern stable sets. It was also the account of axiomatic utility theory given here that led to its widespread adoption within economics.

1945

Herbert Simon writes the first review of von Neumann-Morgenstern.

1946

The first entirely algebraic proof of the minimax theorem is due to L. H. Loomis in his paper On a Theorem of von Neumann.

1950

Contributions to the Theory of Games I, H. W. Kuhn and A. W. Tucker eds., published.

1950

In January 1950 Melvin Dresher and Merrill Flood carry out, at the Rand Corporation, the experiment which introduced the game now known as the Prisoner’s Dilemma. The famous story associated with this game is due to A. W. Tucker, A Two-Person Dilemma (memo, Stanford University). Howard Raiffa independently conducted unpublished experiments with the Prisoner’s Dilemma.
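
The structure of the game can be sketched with textbook payoff numbers (illustrative, not the figures of the original experiment): defection strictly dominates, yet mutual defection leaves both players worse off than mutual cooperation.

```python
# A sketch of the Prisoner's Dilemma with illustrative payoffs (years of
# prison as negative numbers). payoff[my_move][their_move],
# moves: 0 = cooperate (stay silent), 1 = defect (confess).
payoff = [[-1, -3],   # I cooperate: light sentence, or sucker's payoff
          [ 0, -2]]   # I defect: go free, or mutual punishment

# Defection is strictly better whatever the opponent does...
assert payoff[1][0] > payoff[0][0] and payoff[1][1] > payoff[0][1]
# ...yet mutual defection is worse for both than mutual cooperation.
assert payoff[1][1] < payoff[0][0]
print("defect dominates, but (defect, defect) is Pareto-dominated")
```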

1950

John McDonald’s Strategy in Poker, Business and War published. This was the first introduction to game theory for the general reader.

1950-53

In four papers between 1950 and 1953 John Nash made seminal contributions to both non-cooperative game theory and to bargaining theory. In two papers, Equilibrium Points in N-Person Games (1950) and Non-cooperative Games (1951), Nash proved the existence of a strategic equilibrium for non-cooperative games (the Nash equilibrium) and proposed the “Nash program”, in which he suggested approaching the study of cooperative games via their reduction to non-cooperative form. In his two papers on bargaining theory, The Bargaining Problem (1950) and Two-Person Cooperative Games (1953), he founded axiomatic bargaining theory, proved the existence of the Nash bargaining solution and provided the first execution of the Nash program.
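
Nash’s definition is easy to check mechanically in the pure-strategy case: a strategy pair is an equilibrium when neither player gains by a unilateral deviation. A brute-force sketch, with made-up payoff matrices, follows.

```python
from itertools import product

# A minimal sketch of Nash's definition (payoffs invented for illustration):
# (i, j) is a pure equilibrium iff i is a best reply to j and vice versa.
def pure_nash(A, B):
    """All pure-strategy Nash equilibria of the bimatrix game (A, B)."""
    eqs = []
    for i, j in product(range(len(A)), range(len(A[0]))):
        row_best = all(A[i][j] >= A[k][j] for k in range(len(A)))
        col_best = all(B[i][j] >= B[i][k] for k in range(len(A[0])))
        if row_best and col_best:
            eqs.append((i, j))
    return eqs

# Battle-of-the-sexes-style payoffs: two pure equilibria on the diagonal.
A = [[2, 0], [0, 1]]
B = [[1, 0], [0, 2]]
print(pure_nash(A, B))  # [(0, 0), (1, 1)]
```

Nash’s theorem guarantees an equilibrium in mixed strategies even when this pure-strategy search comes back empty.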

1951

George W. Brown described and discussed a simple iterative method for approximating solutions of discrete zero-sum games in his paper Iterative Solutions of Games by Fictitious Play.
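
Brown’s procedure has each player best-respond to the opponent’s empirical frequency of past play. A minimal sketch for matching pennies (payoffs assumed for illustration) shows the empirical frequencies approaching the minimax mixture of one half.

```python
# A sketch of fictitious play for matching pennies (payoffs assumed).
A = [[1, -1], [-1, 1]]          # row player's payoffs; the game is zero-sum

counts_row, counts_col = [1, 0], [0, 1]   # arbitrary initial beliefs
for _ in range(20000):
    # Row best-responds to the column player's empirical frequencies.
    r = max(range(2), key=lambda i: sum(A[i][j] * counts_col[j] for j in range(2)))
    # Column best-responds, i.e. minimises the row player's payoff.
    c = min(range(2), key=lambda j: sum(A[i][j] * counts_row[i] for i in range(2)))
    counts_row[r] += 1
    counts_col[c] += 1

total = sum(counts_row)
print(round(counts_row[0] / total, 2))  # close to 0.5, the minimax mixture
```

Robinson (1951) proved this convergence of empirical frequencies for all finite zero-sum games, which is why the procedure “approximates solutions”.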

1952

The first textbook on game theory was John Charles C. McKinsey’s Introduction to the Theory of Games.

1952

Merrill Flood’s report (Rand Corporation research memorandum, Some Experimental Games, RM-789, June) on the 1950 Dresher/Flood experiments appears.

1952

The Ford Foundation and the University of Michigan sponsor a seminar on the “Design of Experiments in Decision Processes” in Santa Monica. This was the first experimental economics/experimental game theory conference.

1952-53

The notion of the Core as a general solution concept was developed by L. S. Shapley (Rand Corporation research memorandum, Notes on the N-Person Game III: Some Variants of the von-Neumann-Morgenstern Definition of Solution, RM-817, 1952) and D. B. Gillies (Some Theorems on N-Person Games, Ph.D. thesis, Department of Mathematics, Princeton University, 1953). The core is the set of allocations that cannot be improved upon by any coalition.
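
The core condition is directly computable for small TU games: an allocation is blocked if some coalition can secure more on its own than the allocation gives its members. The characteristic function below is invented for illustration.

```python
from itertools import combinations

# A sketch of the core test for a 3-player TU game (characteristic
# function v is invented): an efficient allocation x is in the core iff
# every coalition S receives at least v(S).
v = {(): 0, (1,): 0, (2,): 0, (3,): 0,
     (1, 2): 90, (1, 3): 80, (2, 3): 70, (1, 2, 3): 120}

def in_core(x):
    """x maps player -> payoff; checks efficiency and coalition rationality."""
    if sum(x.values()) != v[(1, 2, 3)]:
        return False
    players = sorted(x)
    return all(sum(x[i] for i in S) >= v[S]
               for r in range(1, 3) for S in combinations(players, r))

print(in_core({1: 50, 2: 40, 3: 30}))   # True: no coalition can block
print(in_core({1: 100, 2: 10, 3: 10}))  # False: coalition (2, 3) gets 20 < 70
```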

1953

Lloyd Shapley in his paper A Value for N-Person Games characterised, by a set of axioms, a solution concept that associates with each coalitional game, v, a unique outcome, φ(v). This solution is now known as the Shapley value.
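
Shapley’s axioms pin down the value that pays each player his average marginal contribution over all orders in which the grand coalition can form. A small sketch with an invented characteristic function:

```python
from itertools import permutations
from math import factorial

# A sketch of the Shapley value via its marginal-contribution formula
# (the characteristic function below is invented for illustration).
v = {(): 0, (1,): 10, (2,): 10, (3,): 10,
     (1, 2): 40, (1, 3): 40, (2, 3): 40, (1, 2, 3): 120}

def shapley(players):
    phi = {p: 0.0 for p in players}
    for order in permutations(players):
        seen = ()
        for p in order:
            new = tuple(sorted(seen + (p,)))
            phi[p] += v[new] - v[seen]   # p's marginal contribution
            seen = new
    n = factorial(len(players))
    return {p: phi[p] / n for p in phi}

print(shapley((1, 2, 3)))  # symmetric game, so each player gets 120/3 = 40.0
```

The symmetry of the example makes the answer obvious, which is a handy sanity check: Shapley’s symmetry axiom forces equal treatment of interchangeable players.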

1953

Lloyd Shapley’s paper Stochastic Games showed that for the strictly competitive case, with future payoff discounted at a fixed rate, such games are determined and that they have optimal strategies that depend only on the game being played, not on the history or even on the date, ie: the strategies are stationary.

1953

Extensive form games allow the modeller to specify the exact order in which players have to make their decisions and to formulate the assumptions about the information possessed by the players in all stages of the game. H. W. Kuhn’s paper Extensive Games and the Problem of Information includes the formulation of extensive form games which is currently used, and also some basic theorems pertaining to this class of games.

1953

Contributions to the Theory of Games II, H. W. Kuhn and A. W. Tucker eds., published.

1954

One of the earliest applications of game theory to political science is L. S. Shapley and M. Shubik with their paper A Method for Evaluating the Distribution of Power in a Committee System. They use the Shapley value to determine the power of the members of the UN Security Council.

1954-55

Differential Games were developed by Rufus Isaacs in the early 1950s. They grew out of the problem of forming and solving military pursuit games. The first publications in the area were Rand Corporation research memoranda, by Isaacs, RM-1391 (30 November 1954), RM-1399 (30 November 1954), RM-1411 (21 December 1954) and RM-1486 (25 March 1955) all entitled, in part, Differential Games.

1955

One of the first applications of game theory to philosophy is R. B. Braithwaite’s Theory of Games as a Tool for the Moral Philosopher.

1957

Games and Decisions: Introduction and Critical Survey by Robert Duncan Luce and Howard Raiffa published.

1957

Contributions to the Theory of Games III, M. A. Dresher, A. W. Tucker and P. Wolfe eds., published.

1959

The notion of a Strong Equilibrium was introduced by R. J. Aumann in the paper Acceptable Points in General Cooperative N-Person Games.

1959

The relationship between Edgeworth’s idea of the contract curve and the core was pointed out by Martin Shubik in his paper Edgeworth Market Games. One limitation of this paper is that Shubik worked within the confines of TU games, whereas Edgeworth’s idea is more appropriately modelled as an NTU game.

1959

Contributions to the Theory of Games IV, A. W. Tucker and R. D. Luce eds., published.

1959

Publication of Martin Shubik’s Strategy and Market Structure: Competition, Oligopoly, and the Theory of Games. This was one of the first books to take an explicitly non-cooperative game theoretic approach to modelling oligopoly. It also contains an early statement of the Folk Theorem.

Late 1950s

Near the end of this decade came the first studies of repeated games. The main result to appear at this time was the Folk Theorem. This states that the equilibrium outcomes in an infinitely repeated game coincide with the feasible and strongly individually rational outcomes of the one-shot game on which it is based. Authorship of the theorem is obscure.

1960

The development of NTU (non-transferable utility) games made cooperative game theory more widely applicable. Von Neumann and Morgenstern stable sets were investigated in the NTU context in the Aumann and Peleg paper Von Neumann and Morgenstern Solutions to Cooperative Games Without Side Payments.

1960

Publication of Thomas C. Schelling’s The Strategy of Conflict. It is in this book that Schelling introduced the idea of a focal-point effect.

1961

The first explicit application of game theory to evolutionary biology was by R. C. Lewontin in Evolution and the Theory of Games.

1961

The Core was extended to NTU games by R. J. Aumann in his paper The Core of a Cooperative Game Without Side Payments.

1962

In their paper College Admissions and the Stability of Marriage, D. Gale and L. Shapley asked whether it is possible to match m women with m men so that there is no pair consisting of a woman and a man who prefer each other to the partners with whom they are currently matched. Game theoretically the question is: does the appropriately defined NTU coalitional game have a non-empty core? Gale and Shapley proved not only non-emptiness but also provided an algorithm for finding a point in it.
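
The Gale-Shapley (“deferred acceptance”) algorithm itself is short: proposers work down their preference lists, and each woman holds the best proposal received so far. The preference lists below are made up for illustration.

```python
# A compact sketch of Gale-Shapley deferred acceptance with invented
# preference lists (men propose; the result is a stable matching).
def gale_shapley(men_prefs, women_prefs):
    free = list(men_prefs)                 # men not yet engaged
    next_choice = {m: 0 for m in men_prefs}
    engaged = {}                           # woman -> man
    rank = {w: {m: i for i, m in enumerate(p)} for w, p in women_prefs.items()}
    while free:
        m = free.pop(0)
        w = men_prefs[m][next_choice[m]]   # best woman not yet proposed to
        next_choice[m] += 1
        if w not in engaged:
            engaged[w] = m
        elif rank[w][m] < rank[w][engaged[w]]:
            free.append(engaged[w])        # she trades up; old fiance is freed
            engaged[w] = m
        else:
            free.append(m)                 # rejected; he proposes again later
    return engaged

men = {'a': ['x', 'y'], 'b': ['x', 'y']}
women = {'x': ['b', 'a'], 'y': ['a', 'b']}
print(gale_shapley(men, women))  # {'x': 'b', 'y': 'a'}
```

No man-woman pair in the output prefer each other to their assigned partners, which is exactly the non-emptiness of the core proved in the paper.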

1962

One of the first applications of game theory to cost allocation is Martin Shubik’s paper Incentives, Decentralized Control, the Assignment of Joint Costs and Internal Pricing. In this paper Shubik argued that the Shapley value could be used to provide a means of devising incentive-compatible cost assignments and internal pricing in a firm with decentralised decision making.

1962

An early use of game theory in insurance is Karl Borch’s paper Application of Game Theory to Some Problems in Automobile Insurance. The article indicates how game theory can be applied to determine premiums for different classes of insurance when the required total premium for all classes is given. Borch suggests that the Shapley value will give reasonable premiums for all classes of risk.

1963

O. N. Bondareva established that for a TU game its core is non-empty iff it is balanced. The reference, which is in Russian, translates as Some Applications of Linear Programming Methods to the Theory of Cooperative Games.

1963

In their paper A Limit Theorem on the Core of an Economy, G. Debreu and H. Scarf generalised Edgeworth, in the context of a NTU game, by allowing an arbitrary number of commodities and an arbitrary but finite number of types of traders.

1964

Robert J. Aumann further extended Edgeworth by assuming that the agents constitute a (non-atomic) continuum in his paper Markets with a Continuum of Traders.

1964

The idea of the Bargaining Set was introduced and discussed in the paper by R. J. Aumann and M. Maschler, The Bargaining Set for Cooperative Games. The bargaining set includes the core but, unlike it, is never empty for TU games.

1964

Carlton E. Lemke and J. T. Howson, Jr., describe an algorithm for finding a Nash equilibrium in a bimatrix game, thereby giving a constructive proof of the existence of an equilibrium point, in their paper Equilibrium Points in Bimatrix Games. The paper also shows that, except for degenerate situations, the number of equilibria in a bimatrix game is odd.

1965

Publication of Rufus Isaacs’s Differential Games: A Mathematical Theory with Applications to Warfare and Pursuit, Control and Optimization.

1965

R. Selten, Spieltheoretische Behandlung eines Oligopolmodells mit Nachfragetraegheit. In this article Selten introduced the idea of refinements of the Nash equilibrium with the concept of (subgame) perfect equilibria.

1965

The concept of the Kernel is due to M. Davis and M. Maschler, The Kernel of a Cooperative Game. The kernel is always included in the bargaining set but is often much smaller.

1966

Infinitely repeated games with incomplete information were born in a paper by R. J. Aumann and M. Maschler, Game-Theoretic Aspects of Gradual Disarmament.

1966

In his paper A General Theory of Rational Behavior in Game Situations John Harsanyi gave the now most commonly used definition to distinguish between cooperative and non-cooperative games. A game is cooperative if commitments (agreements, promises, threats) are fully binding and enforceable. It is non-cooperative if commitments are not enforceable.

1967

Lloyd Shapley, independently of O. N. Bondareva, showed that the core of a TU game is non-empty iff it is balanced in his paper On Balanced Sets and Cores.

1967

In the article The Core of an N-Person Game, H. E. Scarf extended the notion of balancedness to NTU games, then showed that every balanced NTU game has a non-empty core.

1967-68

In a series of three papers, Games with Incomplete Information Played by ‘Bayesian’ Players, Parts I, II and III, John Harsanyi constructed the theory of games of incomplete information. This laid the theoretical groundwork for information economics, which has become one of the major themes of economics and game theory.

1968

The long-standing question as to whether stable sets always exist was answered in the negative by William Lucas in his paper A Game with no Solution.

1969

David Schmeidler introduced the Nucleolus in his paper The Nucleolus of a Characteristic Game. The nucleolus always exists, is unique, is a member of the kernel, and for any game with a non-empty core always lies in the core.

1969

Shapley defined a value for NTU games in his article Utility Comparison and the Theory of Games.

1969

For a coalitional game to be a market game it is necessary that it and all its subgames have non-empty cores, ie: that the game be totally balanced. In Market Games L. S. Shapley and Martin Shubik prove that this necessary condition is also sufficient.

1972

International Journal of Game Theory was founded by Oskar Morgenstern.

1972

The concept of an Evolutionarily Stable Strategy (ESS) was introduced to evolutionary game theory by John Maynard Smith in an essay Game Theory and The Evolution of Fighting. The ESS concept has since found increasing use within the economics (and biology!) literature.

1973

In the traditional view of strategy randomization, the players use a randomising device to decide on their actions. John Harsanyi was the first to break away from this view with his paper Games with Randomly Disturbed Payoffs: A New Rationale for Mixed Strategy Equilibrium Points. For Harsanyi nobody really randomises. The appearance of randomisation is due to the payoffs not being exactly known to all; each player, who knows his own payoff exactly, has a unique optimal action against his estimate of what the others will do.

1973

The major impetus for the use of the ESS concept was the publication of J. Maynard Smith and G. Price’s paper The Logic of Animal Conflict.
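
The Hawk-Dove game from that literature illustrates the ESS idea: when the cost of a fight C exceeds the value of the resource V, no pure strategy is stable, and the ESS is to play Hawk with probability V/C, at which point Hawks and Doves do equally well and neither can invade. The values of V and C below are assumed for illustration.

```python
# A hedged sketch of the Hawk-Dove game behind the ESS concept
# (resource value V and fight cost C are assumed figures).
V, C = 4.0, 6.0            # fighting costs more than the prize is worth
payoff = {('H', 'H'): (V - C) / 2, ('H', 'D'): V,
          ('D', 'H'): 0.0,         ('D', 'D'): V / 2}

def fitness(strategy, p_hawk):
    """Expected payoff against a population playing Hawk with probability p."""
    return p_hawk * payoff[(strategy, 'H')] + (1 - p_hawk) * payoff[(strategy, 'D')]

# At the ESS mixture p = V/C, Hawk and Dove earn the same fitness,
# so neither type can invade the population.
p = V / C
print(fitness('H', p), fitness('D', p))  # equal fitness at the ESS mixture
```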

1973

The revelation principle can be traced back to Gibbard’s paper Manipulation of Voting Schemes: A General Result.

1974

Publication of R. J. Aumann and L. S. Shapley’s book Values of Non-Atomic Games. It deals with values for large games in which all the players are individually insignificant (non-atomic games).

1974

R. J. Aumann proposed the concept of a correlated equilibrium in his paper Subjectivity and Correlation in Randomized Strategies.

1975

The introduction of trembling hand perfect equilibria occurred in the paper Reexamination of the Perfectness Concept for Equilibrium Points in Extensive Games by Reinhard Selten. This paper was the true catalyst for the ‘refinement industry’ that has developed around the Nash equilibrium.

1975

E. Kalai and M. Smorodinsky, in their article Other Solutions to Nash’s Bargaining Problem, replace Nash’s independence of irrelevant alternatives axiom with a monotonicity axiom. The resulting solution is known as the Kalai-Smorodinsky solution.

1975

In his paper Cross-Subsidization: Pricing in Public Enterprises, G. Faulhaber shows that the set of subsidy-free prices are those prices for which the resulting revenue vector (r_i = p_i q_i for given demand levels q_i) lies in the core of the cost allocation game.

1976

An event is common knowledge among a set of agents if all know it and all know that they all know it and so on ad infinitum. Although the idea first appeared in the work of the philosopher D. K. Lewis in the late 1960s, it was not until its formalisation in Robert Aumann’s Agreeing to Disagree that game theorists and economists came to fully appreciate its importance.

1977

S. C. Littlechild and G. F. Thompson are among the first to apply the nucleolus to the problem of cost allocation with their article Aircraft Landing Fees: A Game Theory Approach. They use the nucleolus, along with the core and Shapley value, to calculate fair and efficient landing and take-off fees for Birmingham airport.

1981

Elon Kohlberg introduced the idea of forward induction in a conference paper Some Problems with the Concept of Perfect Equilibria.

1981

R. J. Aumann published a Survey of Repeated Games. This survey first proposed the idea of applying the notion of an automaton to describe a player in a repeated game. A second idea from the survey is to study the interactive behaviour of bounded players by studying a game with an appropriately restricted set of strategies. These ideas have given birth to a large and growing literature.

1982

David M. Kreps and Robert Wilson extend the idea of a subgame perfect equilibrium to subgames in the extensive form that begin at information sets with imperfect information. They call this extended idea of equilibrium sequential. It is detailed in their paper Sequential Equilibria.

1982

A. Rubinstein considered a non-cooperative approach to bargaining in his paper Perfect Equilibrium in a Bargaining Model. He considered an alternating-offer game where offers are made sequentially until one is accepted. There is no bound on the number of offers that can be made, but there is a cost to delay for each player. Rubinstein showed that the subgame perfect equilibrium is unique when each player’s cost of time is given by some discount factor delta.
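
Rubinstein’s equilibrium has a well-known closed form: with discount factors d1 and d2 per period of delay, the first proposer receives the share (1 - d2)/(1 - d1·d2) of the surplus, and the offer is accepted immediately. The discount factors below are assumed for illustration.

```python
# A sketch of Rubinstein's closed-form equilibrium share under assumed
# discount factors d1, d2 (one unit of surplus, proposer moves first).
def rubinstein_share(d1, d2):
    return (1 - d2) / (1 - d1 * d2)

# Equal patience d = 0.9: the proposer keeps about 0.526, a first-mover
# advantage that shrinks toward an even split as patience grows.
print(round(rubinstein_share(0.9, 0.9), 3))
print(round(rubinstein_share(0.99, 0.99), 3))
```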

1982

Publication of Evolution and the Theory of Games by John Maynard Smith.

1984

Following the work of Gale and Shapley, A. E. Roth applied the core to the problem of the assignment of interns to hospitals. In his paper The Evolution of the Labour Market for Medical Interns and Residents: A Case Study in Game Theory he found that the method of assignment American hospitals developed in 1950 is a point in the core.

1984

The idea of rationalizability was introduced in two papers: B. D. Bernheim, Rationalizable Strategic Behavior, and D. G. Pearce, Rationalizable Strategic Behavior and the Problem of Perfection.

1984

Publication of The Evolution of Cooperation by Robert Axelrod.
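
The mechanism behind Axelrod’s tournaments can be sketched in a few lines: tit-for-tat cooperates first and then mirrors the opponent’s previous move, so cooperation sustains itself against itself while a defector exploits it only once. Payoffs use the customary illustrative values T=5, R=3, P=1, S=0, not figures from the book.

```python
# A minimal iterated Prisoner's Dilemma sketch (illustrative payoffs).
PAYOFF = {('C', 'C'): (3, 3), ('C', 'D'): (0, 5),
          ('D', 'C'): (5, 0), ('D', 'D'): (1, 1)}

def play(strat1, strat2, rounds=10):
    h1, h2, s1, s2 = [], [], 0, 0
    for _ in range(rounds):
        m1, m2 = strat1(h2), strat2(h1)      # each sees the other's history
        p1, p2 = PAYOFF[(m1, m2)]
        s1, s2 = s1 + p1, s2 + p2
        h1.append(m1); h2.append(m2)
    return s1, s2

tit_for_tat = lambda opp: 'C' if not opp else opp[-1]
always_defect = lambda opp: 'D'

print(play(tit_for_tat, tit_for_tat))    # (30, 30): cooperation sustains itself
print(play(tit_for_tat, always_defect))  # (9, 14): exploited once, then mutual defection
```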

1985

For a Bayesian game the question arises as to whether or not it is possible to construct a situation for which there are no sets of types large enough to contain all the private information that players are supposed to have. In their paper Formulation of Bayesian Analysis for Games with Incomplete Information, J.-F. Mertens and S. Zamir show that it is not possible to do so.

1985-86

Following Aumann, the theory of automata is now being used to formulate the idea of bounded rationality in repeated games. Two of the first articles to take this approach were A. Neyman’s 1985 paper Bounded Complexity Justifies Cooperation in the Finitely Repeated Prisoner’s Dilemma and A. Rubinstein’s 1986 article Finite Automata Play the Repeated Prisoner’s Dilemma.

1986

In their paper On the Strategic Stability of Equilibria, Elon Kohlberg and Jean-Francois Mertens deal with the problem of the refinement of Nash equilibria in the normal form of a game, rather than the extensive form as in the Selten and Kreps and Wilson papers. This paper is also one of the first published discussions of the idea of forward induction.

1988

John C. Harsanyi and Reinhard Selten produced the first general theory of selecting between equilibria in their book A General Theory of Equilibrium Selection in Games. They provide criteria for selecting one particular equilibrium point for any non-cooperative or cooperative game.

1988

With their paper The Bayesian Foundations of Solution Concepts of Games, Tan and Werlang are among the first to formally discuss the assumptions about a player’s knowledge that lie behind the concepts of Nash equilibria and rationalizability.

1988

One interpretation of the Nash equilibrium is to think of it as an accepted (learned) ‘standard of behaviour’ which governs the interaction of various agents in repetitions of similar situations. The problem then arises of how agents learn the equilibrium. One of the earliest works to attack the learning problem was Drew Fudenberg and David Kreps’s A Theory of Learning, Experimentation and Equilibria (MIT and Stanford Graduate School of Business, unpublished), which uses a learning process similar to Brown’s fictitious play, except that players occasionally experiment by choosing strategies at random, in the context of iterated extensive form games. Evolutionary game models are also commonly utilised within the learning literature.

1989

The journal Games and Economic Behavior founded.

1990

The first graduate level microeconomics textbook to fully integrate game theory into the standard microeconomic material was David M. Kreps’s A Course in Microeconomic Theory.

1990

In the article Equilibrium without Independence Vincent Crawford discusses mixed strategy Nash equilibrium when the players’ preferences do not satisfy the assumptions necessary to be represented by expected utility functions.

1991

An early published discussion of the idea of a Perfect Bayesian Equilibrium is the paper by D. Fudenberg and J. Tirole, Perfect Bayesian Equilibrium and Sequential Equilibrium.

1992

Publication of the Handbook of Game Theory with Economic Applications, Volume 1, edited by Robert J. Aumann and Sergiu Hart.

1994

Game Theory and the Law by Douglas G. Baird, Robert H. Gertner and Randal C. Picker is one of the first books in law and economics to take an explicitly game theoretic approach to the subject.

1994

Publication of the Handbook of Game Theory with Economic Applications, Volume 2, edited by Robert J. Aumann and Sergiu Hart.

1994

The Sveriges Riksbank (Bank of Sweden) Prize in Economic Sciences in Memory of Alfred Nobel was awarded to John Nash, John C. Harsanyi and Reinhard Selten for their contributions to Game Theory. (See http://www.nobel.se/economics/laureates/1994/index.html.)

**Bibliography and Notes**

0 – 500AD

The Talmud results are from Aumann, R. J. and M. Maschler, (1985), Game Theoretic Analysis of a Bankruptcy Problem from the Talmud, Journal of Economic Theory 36, 195-213.

1713

On Waldegrave see Kuhn, H. W. (1968), Preface to Waldegrave’s Comments: Excerpt from Montmort’s Letter to Nicholas Bernoulli, pp. 3-6 in Precursors in Mathematical Economics: An Anthology (Series of Reprints of Scarce Works on Political Economy, 19) (W. J. Baumol and S. M. Goldfeld, eds.), London: London School of Economics and Political Science and Waldegrave’s Comments: Excerpt from Montmort’s Letter to Nicholas Bernoulli, pp. 7-9 in Precursors in Mathematical Economics: An Anthology (Series of Reprints of Scarce Works on Political Economy, 19) (W. J. Baumol and S. M. Goldfeld, eds.), London: London School of Economics and Political Science, 1968.

1838

Cournot, Augustin A. (1838), Recherches sur les Principes Mathematiques de la Theorie des Richesses. Paris: Hachette. (English translation: Researches into the Mathematical Principles of the Theory of Wealth. New York: Macmillan, 1897. (Reprinted New York: Augustus M. Kelley, 1971)).

1871

Darwin, C. (1871), The Descent of Man, and Selection in Relation to Sex. London: John Murray. This theory of the evolution of the sex ratio is normally attributed to R. A. Fisher (The Genetical Theory of Natural Selection. Oxford: Clarendon Press, 1930). Before presenting the theory Fisher quotes a paragraph from the second (1874) edition of Darwin’s Descent of Man in which Darwin cannot see how a 1:1 sex ratio could be the result of natural selection. Fisher appears not to have noticed that the paragraph he quotes comes from a section which replaces the section in the first edition which contains the essence of Fisher’s own theory. The fact that Darwin had anticipated Fisher by some 60 years was first noted by Michael Bulmer in his 1994 book, Theoretical Evolutionary Ecology. Sunderland, MA: Sinauer Associates Publishers. See chapter 10, pages 207-208. This fact is also discussed in an unpublished paper by Martin Osborne, Darwin, Fisher, and a Theory of the Evolution of the Sex Ratio; see Martin J. Osborne’s recent research at http://www.economics.utoronto.ca/osborne/research/index.html.

1881

Edgeworth, Francis Ysidro (1881), Mathematical Psychics: An Essay on the Application of Mathematics to the Moral Sciences. London: Kegan Paul. (Reprinted New York: Augustus M. Kelley, 1967).

1913

Zermelo, E. (1913), Uber eine Anwendung der Mengenlehre auf die Theorie des Schachspiels, pp. 501-504 in Proceedings of the Fifth International Congress of Mathematicians, Volume II (E. W. Hobson and A. E. H. Love, eds.), Cambridge: Cambridge University Press. The reference for the Konig paper is Konig, Denes (1927), Uber eine Schlussweise aus dem Endlichen ins Unendliche, Acta Sci. Math. Szeged 3, 121-130 while the Kalmar reference is Kalmar, Laszlo (1928/29), Zur Theorie der abstrakten Spiele, Acta Sci. Math. Szeged 4, 65-85. The English translation of Zermelo’s paper and discussion of all three papers is in Schwalbe, U. and P. Walker (2001), Zermelo and the Early History of Game Theory, Games and Economic Behavior v34 no1, 123-37.

1921-27

This follows Dimand, Robert W. and Mary Ann Dimand (1992), The Early History of the Theory of Games from Waldegrave to Borel, pp. 15-27 in Toward a History of Game Theory (Annual Supplement to Volume 24 History of Political Economy) (E. Roy Weintraub ed.), Durham: Duke University Press. Frechet, Maurice (1953), Emile Borel, Initiator of the Theory of Psychological games and its Application, Econometrica 21, 95-96, credits Borel with seven notes on game theory between 1921 and 1927. The Frechet seven are: (1) La theorie du jeu et les equations integrales a noyau symetrique gauche, Comptes Rendus Academie des Sciences, Vol. 173, 1921, pp. 1304-1308. (2) Sur les jeux ou interviennent l’hasard et l’habilete des joueurs, Association Francaise pour l’Avancement des Sciences, 1923, pp. 79-85. (3) Sur les jeux ou interviennent l’hasard et l’habilete des joueurs, Theorie des Probabilites. Paris: Librairie Scientifique, J. Hermann, (1924), pp. 204-224. (4) Un theoreme sur les systemes de formes lineaires a determinant symetrique gauche, Comptes Rendus Academie des Sciences, Vol. 183, 1926, pp. 925-927, avec erratum, p. 996. (5) Algebre et calcul des probabilites, Comptes Rendus Academie des Sciences, Vol. 184, 1927, pp. 52-53. (6) Traite du calcul des probabilites et de ses applications, Applications des jeux de hasard. Paris: Gauthier-Villars, Vol. IV, 1938, Fascicule 2, 122 pp. (7) Jeux ou la psychologie joue un role fondamental, see (6) pp. 71-87. Dimand and Dimand note that (6) and (7) are dated 1938 and so are outside the 1921-1927 time frame, while article (2) has the same title as the chapter from the book (3). Three of Borel’s notes were translated and published in Econometrica 21 (1953). (1) was published as Theory of Play and Integral Equations with Skew Symmetric Kernels, pp. 91-100. (3) was published as On Games that involve Chance and the Skill of the Players, pp. 101-115. (5) was published as On Systems of Linear Forms of Skew Symmetric Determinant and the General Theory of Play, pp. 116-117.

1928

von Neumann, J. (1928), Zur Theorie der Gesellschaftsspiele, Mathematische Annalen 100, 295-320. (Translated as “On the Theory of Games of Strategy”, pp.13-42 in Contributions to the Theory of Games, Volume IV (Annals of Mathematics Studies, 40) (A. W. Tucker and R. D. Luce, eds.), Princeton University Press, Princeton, 1959).

1930

Zeuthen, F. (1930), Problems of Monopoly and Economic Warfare. London: George Routledge and Sons. The mathematical equivalence of Zeuthen’s and Nash’s solutions was shown by Harsanyi, J. C. (1956), Approaches to the Bargaining Problem Before and After the Theory of Games: A Critical Discussion of Zeuthen’s, Hicks’, and Nash’s Theories, Econometrica 24, 144-157.

1934

Fisher, R. A. (1934), Randomisation, and an Old Enigma of Card Play, Mathematical Gazette 18, 294-297.

1938

Ville, Jean (1938), Note sur la theorie generale des jeux ou intervient l’habilite des joueurs, pp. 105-113 in Applications aux jeux de hasard, Tome IV, Fascicule II of Traite du calcul des probabilites et de ses applications (Emile Borel), Paris: Gauthier-Villars.

1944

von Neumann, J., and O. Morgenstern (1944), Theory of Games and Economic Behavior. Princeton: Princeton University Press.

1945

Simon, H. A. (1945), Review of the Theory of Games and Economic Behavior by J. von Neumann and O. Morgenstern, American Journal of Sociology 27, 558-560.

1946

Loomis, L. H. (1946), On a Theorem of von Neumann, Proceedings of the National Academy of Sciences of the United States of America 32, 213-215.

1950

Kuhn, H. W. and A. W. Tucker, eds. (1950), Contributions to the Theory of Games, Volume I (Annals of Mathematics Studies, 24). Princeton: Princeton University Press.

1950

Publication of Tucker’s (1950) memo occurred in 1980 under the title On Jargon: The Prisoner’s Dilemma, UMAP Journal 1, 101.

1950

McDonald, John (1950), Strategy in Poker, Business and War. New York: Norton. This book is based on two articles McDonald wrote for Fortune magazine: the first, Poker, An American Game (March 1948), and the second, A Theory of Strategy (June 1949).

1950-1953

Nash, J. F. (1950), Equilibrium Points in N-Person Games, Proceedings of the National Academy of Sciences of the United States of America 36, 48-49.

Nash, J. F. (1950), The Bargaining Problem, Econometrica 18, 155-162.

Nash, J. F. (1951), Non-Cooperative Games, Annals of Mathematics 54, 286-295.

Nash, J. F. (1953), Two Person Cooperative Games, Econometrica 21, 128-140.

1951

Brown, G. W. (1951), Iterative Solution of Games by Fictitious Play, pp. 374-376 in Activity Analysis of Production and Allocation (T. C. Koopmans, ed.), New York: Wiley.

1952

McKinsey, John Charles C. (1952), Introduction to the Theory of Games. New York: McGraw-Hill Book Co.

1952

Flood’s 1952 Rand memorandum was published in Flood, M. A. (1958), Some Experimental Games, Management Science 5, 5-26.

1952

Some of the experimental papers from the conference appear in Thrall, R. M., C. H. Coombs and R. C. Davis, eds. (1954), Decision Processes. New York: Wiley.

1952-53

Gillies’ published version of the core concept appears in his paper: Gillies, D. B. (1959), Solutions to General Non-Zero-Sum Games, pp. 47-85 in Contributions to the Theory of Games, Volume IV (Annals of Mathematics Studies, 40) (A. W. Tucker and R. D. Luce, eds.), Princeton: Princeton University Press.

1953

Shapley, L. S. (1953), A Value for n-Person Games, pp. 307-317 in Contributions to the Theory of Games, Volume II (Annals of Mathematics Studies, 28) (H. W. Kuhn and A. W. Tucker, eds.), Princeton: Princeton University Press.

1953

Shapley, L. S. (1953), Stochastic Games, Proceedings of the National Academy of Sciences of the United States of America 39, 1095-1100.

1953

Kuhn, H. W. (1953), Extensive Games and the Problem of Information, pp. 193-216 in Contributions to the Theory of Games, Volume II (Annals of Mathematics Studies, 28) (H. W. Kuhn and A. W. Tucker, eds.), Princeton: Princeton University Press.

1953

Kuhn, H. W. and A. W. Tucker, eds. (1953), Contributions to the Theory of Games, Volume II (Annals of Mathematics Studies, 28). Princeton: Princeton University Press.

1954

Shapley, L. S. and M. Shubik (1954), A Method for Evaluating The Distribution of Power in a Committee System, American Political Science Review 48, 787-792.

1955

Braithwaite, R. B. (1955), Theory of Games as a Tool for the Moral Philosopher. Cambridge: Cambridge University Press.

1957

Luce, R. Duncan and Howard Raiffa (1957), Games and Decisions: Introduction and Critical Survey. New York: Wiley. (Reprinted New York: Dover, 1989).

1957

Dresher, Melvin, A. W. Tucker and P. Wolfe, eds. (1957), Contributions to the Theory of Games, Volume III (Annals of Mathematics Studies, 39). Princeton: Princeton University Press.

1959

Aumann, R. J. (1959), Acceptable Points in General Cooperative N-Person Games, pp. 287-324 in Contributions to the Theory of Games, Volume IV (Annals of Mathematics Studies, 40) (A. W. Tucker and R. D. Luce, eds.), Princeton: Princeton University Press.

1959

Shubik, M. (1959), Edgeworth Market Games, pp. 267-278 in Contributions to the Theory of Games, Volume IV (Annals of Mathematics Studies, 40) (A. W. Tucker and R. D. Luce, eds.), Princeton: Princeton University Press.

1959

Tucker, A. W. and R. D. Luce, eds. (1959), Contributions to the Theory of Games, Volume IV (Annals of Mathematics Studies, 40). Princeton: Princeton University Press.

1959

Shubik, M. (1959), Strategy and Market Structure: Competition, Oligopoly, and the Theory of Games. New York: Wiley.

1960

Aumann, R. J. and B. Peleg (1960), Von Neumann-Morgenstern Solutions to Cooperative Games without Side Payments, Bulletin of the American Mathematical Society 66, 173-179.

1960

Schelling, T. C. (1960), The Strategy of Conflict. Cambridge, Mass.: Harvard University Press.

1961

Lewontin, R. C. (1961), Evolution and the Theory of Games, Journal of Theoretical Biology 1, 382-403.

1961

Aumann, R. J. (1961), The Core of a Cooperative Game Without Side Payments, Transactions of the American Mathematical Society 98, 539-552.

1962

Gale, D. and L. S. Shapley (1962), College Admissions and the Stability of Marriage, American Mathematical Monthly 69, 9-15.

1962

Shubik, M. (1962), Incentives, Decentralized Control, the Assignment of Joint Costs and Internal Pricing, Management Science 8, 325-343.

1962

Borch, Karl (1962), Application of Game Theory to Some Problems in Automobile Insurance, The Astin Bulletin 2 (part 2), 208-221.

1963

Debreu, G. and H. Scarf (1963), A Limit Theorem on the Core of an Economy, International Economic Review 4, 235-246.

1964

Aumann, R. J. (1964), Markets with a Continuum of Traders, Econometrica 32, 39-50.

1964

Aumann, R. J. and M. Maschler (1964), The Bargaining Set for Cooperative Games, pp. 443-476 in Advances in Game Theory (Annals of Mathematics Studies, 52) (M. Dresher, L. S. Shapley and A. W. Tucker, eds.), Princeton: Princeton University Press.

1964

Lemke, Carlton E. and J. T. Howson, Jr. (1964), Equilibrium Points of Bimatrix Games, Society for Industrial and Applied Mathematics Journal of Applied Mathematics 12, 413-423.

1965

Isaacs, Rufus (1965), Differential Games: A Mathematical Theory with Applications to Warfare and Pursuit, Control and Optimization. New York: Wiley.

1965

Selten, R. (1965), Spieltheoretische Behandlung eines Oligopolmodells mit Nachfragetragheit, Zeitschrift fur die gesamte Staatswissenschaft 121, 301-324 and 667-689.

1965

Davis, M. and M. Maschler (1965), The Kernel of a Cooperative Game, Naval Research Logistics Quarterly 12, 223-259.

1966

Aumann, R. J. and M. Maschler (1966), Game-Theoretic Aspects of Gradual Disarmament, Chapter V in Report to the U.S. Arms Control and Disarmament Agency ST-80. Princeton: Mathematica.

1966

Harsanyi, J. C. (1966), A General Theory of Rational Behavior in Game Situations, Econometrica 34, 613-634.

1967

Shapley, L. S. (1967), On Balanced Sets and Cores, Naval Research Logistics Quarterly 14, 453-460.

1967

Scarf, H. E. (1967), The Core of an N-Person Game, Econometrica 35, 50-69.

1967-68

Harsanyi, J. C. (1967-8), Games with Incomplete Information Played by ‘Bayesian’ Players, Parts I, II and III, Management Science 14, 159-182, 320-334 and 486-502.

1968

Lucas, W. F. (1968), A Game with No Solution, Bulletin of the American Mathematical Society 74, 237-239.

1969

Schmeidler, D. (1969), The Nucleolus of a Characteristic Function Game, Society for Industrial and Applied Mathematics Journal of Applied Mathematics 17, 1163-1170.

1969

Shapley, L. S. (1969), Utility Comparison and the Theory of Games, pp. 251-263 in La Decision, Paris: Editions du Centre National de la Recherche Scientifique. (Reprinted on pp. 307-319 of The Shapley Value (Alvin E. Roth, ed.), Cambridge: Cambridge University Press, 1988).

1969

Shapley, L. S. and M. Shubik (1969), On Market Games, Journal of Economic Theory 1, 9-25.

1972

Maynard Smith, John (1972), Game Theory and the Evolution of Fighting, pp.8-28 in On Evolution (John Maynard Smith), Edinburgh: Edinburgh University Press.

1973

Harsanyi, J. C. (1973), Games with Randomly Disturbed Payoffs: A New Rationale for Mixed Strategy Equilibrium Points, International Journal of Game Theory 2, 1-23.

1973

Maynard Smith, John and G. R. Price (1973), The Logic of Animal Conflict, Nature 246, 15-18.

1973

Gibbard, A. (1973), Manipulation of Voting Schemes: A General Result, Econometrica 41, 587-601.

1974

Aumann, R. J. and L. S. Shapley (1974), Values of Non-Atomic Games. Princeton: Princeton University Press.

1974

Aumann, R. J. (1974), Subjectivity and Correlation in Randomized Strategies, Journal of Mathematical Economics 1, 67-96.

1975

Selten, R. (1975), Reexamination of the Perfectness Concept for Equilibrium Points in Extensive Games, International Journal of Game Theory 4, 25-55.

1975

Kalai, E. and M. Smorodinsky (1975), Other Solutions to Nash’s Bargaining Problem, Econometrica 43, 513-518.

1975

Faulhaber, G. (1975), Cross-Subsidization: Pricing in Public Enterprises, American Economic Review 65, 966-977.

1976

Lewis, D. K. (1969), Convention: A Philosophical Study. Cambridge Mass.: Harvard University Press.

1976

Aumann, R. J. (1976), Agreeing to Disagree, Annals of Statistics 4, 1236-1239.

1977

Littlechild, S. C. and G. F. Thompson (1977), Aircraft Landing Fees: A Game Theory Approach, Bell Journal of Economics 8, 186-204.

1981

Kohlberg, Elon (1981), Some Problems with the Concept of Perfect Equilibria, Rapporteurs’ Report of the NBER Conference on the Theory of General Economic Equilibrium by Karl Dunz and Nirvikar Singh, University of California, Berkeley.

1981

Aumann, R. J. (1981), Survey of Repeated Games, pp.11-42 in Essays in Game Theory and Mathematical Economics in Honor of Oskar Morgenstern (R. J. Aumann et al), Zurich: Bibliographisches Institut. (This paper is a slightly revised and updated version of a paper originally presented as background material for a one-day workshop on repeated games that took place at the Institute for Mathematical Studies in the Social Sciences (Stanford University) summer seminar on mathematical economics on 10 August 1978.) (A slightly revised and updated version of the 1981 version is reprinted as Repeated Games on pp. 209-242 of Issues in Contemporary Microeconomics and Welfare (George R Feiwel, ed.), London: Macmillan.)

1982

Kreps, D. M. and R. B. Wilson (1982), Sequential Equilibria, Econometrica 50, 863-894.

1982

Rubinstein, A. (1982), Perfect Equilibrium in a Bargaining Model, Econometrica 50, 97-109.

1982

Maynard Smith, John (1982), Evolution and the Theory of Games. Cambridge: Cambridge University Press.

1984

Roth, A. E. (1984), The Evolution of the Labor Market for Medical Interns and Residents: A Case Study in Game Theory, Journal of Political Economy 92, 991-1016.

1984

Bernheim, B. D. (1984), Rationalizable Strategic Behavior, Econometrica 52, 1007-1028.

1984

Pearce, D. G. (1984), Rationalizable Strategic Behavior and the Problem of Perfection, Econometrica 52, 1029-1050.

1984

Axelrod, R. (1984), The Evolution of Cooperation. New York: Basic Books.

1985

Mertens, J.-F. and S. Zamir (1985), Formulation of Bayesian Analysis for Games with Incomplete Information, International Journal of Game Theory 14, 1-29.

1985-86

Neyman, A. (1985), Bounded Complexity Justifies Cooperation in the Finitely Repeated Prisoner’s Dilemma, Economics Letters 19, 227-229.

1985-86

Rubinstein, A. (1986), Finite Automata Play the Repeated Prisoner’s Dilemma, Journal of Economic Theory 39, 83-96.

1986

Kohlberg, E. and J.-F. Mertens (1986), On the Strategic Stability of Equilibria, Econometrica 54, 1003-1037.

1988

Harsanyi, J. C. and R. Selten (1988), A General Theory of Equilibrium Selection in Games. Cambridge Mass.: MIT Press.

1988

Tan, T. and S. Werlang (1988), The Bayesian Foundations of Solution Concepts of Games, Journal of Economic Theory 45, 370-391.

1990

Kreps, D. M. (1990), A Course in Microeconomic Theory. Princeton: Princeton University Press.

1990

Crawford, V. P. (1990), Equilibrium without Independence, Journal of Economic Theory 50,127-154.

1991

Fudenberg, D. and J. Tirole (1991), Perfect Bayesian Equilibrium and Sequential Equilibrium, Journal of Economic Theory 53, 236-260.

1992

Aumann, R. J. and S. Hart, eds. (1992), Handbook of Game Theory with Economic Applications, Volume 1. Amsterdam: North-Holland.

1994

Baird, Douglas G., Robert H. Gertner and Randal C. Picker (1994), Game Theory and the Law. Cambridge Mass.: Harvard University Press.

1994

Aumann, R. J. and S. Hart, eds. (1994), Handbook of Game Theory with Economic Applications, Volume 2. Amsterdam: North-Holland.

**Cooperative Games**

All of the examples so far have focused on non-cooperative solutions to “games.” We recall that there is, in general, no unique answer to the question “What is the rational choice of strategies?” Instead there are at least two possible answers, two possible kinds of “rational” strategies, in non-constant sum games. Often there are more than two “rational solutions,” based on different definitions of a “rational solution” to the game. But there are at least two: a “non-cooperative” solution, in which each person maximizes his or her own rewards regardless of the results for others, and a “cooperative” solution, in which the strategies of the participants are coordinated so as to attain the best result for the whole group. Of course, “best for the whole group” is a tricky concept; that is one reason why there can be more than two solutions, corresponding to more than one concept of “best for the whole group.”

Without going into technical details, here is the problem: if people can arrive at a cooperative solution, any non-constant sum game can in principle be converted to a win-win game. How, then, can a non-cooperative outcome of a non-constant sum game be rational? The obvious answer seems to be that it cannot be rational: as Anatol Rapoport argued years ago, the cooperative solution is the only truly rational outcome in a non-constant sum game. Yet we do seem to observe non-cooperative interactions every day, and the “noncooperative solutions” to non-constant sum games often seem to be descriptive of real outcomes. Arms races, street congestion, environmental pollution, the overexploitation of fisheries, inflation, and many other social problems seem to be accurately described by the “noncooperative solutions” of rather simple non-constant sum games. How can all this irrationality exist in a world of absolutely rational decision makers?

**Credible Commitment**

There is a neoclassical answer to that question. The answer has been made explicit mostly in the context of inflation. According to the neoclassical theory, inflation happens when the central bank increases the quantity of money in circulation too fast. The solution to inflation, then, is to slow or stop the increase in the quantity of money. If the central bank were committed to stopping inflation, and businessmen in general knew that the central bank was committed, then (according to neoclassical economics) inflation could be stopped quickly and without disruption. But, in a political world, it is difficult for a central bank to make this commitment, and businessmen know this. Thus the businessmen have to be convinced that the central bank really is committed, and that may require a long period of unemployment, sky-high interest rates, recession and business failures. The cost of eliminating inflation can therefore be very high, which makes it all the more difficult for the central bank to make the commitment. The difficulty is that the central bank cannot make a credible commitment to a low-inflation strategy.

Evidently (as seen by neoclassical economics) the interaction between the central bank and businessmen is a non-constant sum game, and recessions are a result of a “noncooperative solution to the game.” This can be extended to non-constant sum games in general: noncooperative solutions occur when participants in the game cannot make credible commitments to cooperative strategies. Evidently this is a very common difficulty in many human interactions.

Games in which the participants cannot make commitments to coordinate their strategies are “noncooperative games.” The solution to a “noncooperative game” is a “noncooperative solution.” In a noncooperative game, the rational person’s problem is to answer the question “What is the rational choice of a strategy when other players will try to choose their best responses to my strategy?”

Conversely, games in which the participants can make commitments to coordinate their strategies are “cooperative games,” and the solution to a “cooperative game” is a “cooperative solution.” In a cooperative game, the rational person’s problem is to answer the question, “What strategy choice will lead to the best outcome for all of us in this game?” If that seems excessively idealistic, we should keep in mind that cooperative games typically allow for “side payments,” that is, bribes and quid pro quo arrangements so that everyone is (might be?) better off. Thus the rational person’s problem in the cooperative game is actually a little more complicated than that. The rational person must ask not only “What strategy choice will lead to the best outcome for all of us in this game?” but also “How large a bribe may I reasonably expect for choosing it?”

**A Basic Cooperative Game**

Cooperative games are particularly important in economics. Here is an example that may illustrate the reason why. We suppose that Joey has a bicycle. Joey would rather have a game machine than a bicycle, and he could buy a game machine for $80, but Joey doesn’t have any money. We express this by saying that Joey values his bicycle at $80. Mikey has $100 and no bicycle, and would rather have a bicycle than anything else he can buy for $100. We express this by saying that Mikey values a bicycle at $100.

The strategies available to Joey and Mikey are to give or to keep. That is, Joey can give his bicycle to Mikey or keep it, and Mikey can give some of his money to Joey or keep it all. Suppose that Mikey gives Joey $90 and that Joey gives Mikey the bicycle. This is what we call “exchange.” Here are the payoffs:

**Table 12-1**

(The first payoff in each cell is Mikey’s, the second Joey’s.)

                         Joey
                   give          keep
Mikey    give    110, 90       10, 170
         keep    200, 0        100, 80

**EXPLANATION: **At the upper left, Mikey has a bicycle he values at $100, plus $10 extra, while Joey has a game machine he values at $80, plus an extra $10. At the lower left, Mikey has the bicycle he values at $100, plus his $100, while Joey has nothing. At the upper right, Joey has a game machine and a bike, each of which he values at $80, plus $10 extra, and Mikey is left with only $10. At the lower right, they simply have what they began with: Mikey $100 and Joey a bike.

If we think of this as a noncooperative game, it is much like a Prisoners’ Dilemma. To keep is a dominant strategy and keep, keep is a dominant strategy equilibrium. However, give, give makes both better off. Being children, they may distrust one another and fail to make the exchange that will make them better off. But market societies have a range of institutions that allow adults to commit themselves to mutually beneficial transactions. Thus, we would expect a cooperative solution, and we suspect that it would be the one in the upper left. But what cooperative “solution concept” may we use?

**Pareto Optimum**

We have observed that both participants in the bike-selling game are better off if they make the transaction. This is the basis for one solution concept in cooperative games.

First, we define a criterion to rank outcomes from the point of view of the group of players as a whole. We can say that one outcome is better than another (e.g., the upper left is better than the lower right) if at least one person is better off and no-one is worse off. This is called the Pareto criterion, after the Italian economist and mechanical engineer, Vilfredo Pareto. If an outcome (such as the upper left) cannot be improved upon, in that sense — in other words, if no-one can be made better off without making somebody else worse off — then we say that the outcome is Pareto Optimal, that is, Optimal (cannot be improved upon) in terms of the Pareto Criterion.

If there were a unique Pareto optimal outcome for a cooperative game, that would seem to be a good solution concept. The problem is that there isn’t — in general, there are infinitely many Pareto Optima for any fairly complicated economic “game.” In the bike-selling example, every cell in the table except the lower right is Pareto-optimal, and in fact any price between $80 and $100 would give yet another of the (infinitely many) Pareto-Optimal outcomes to this game. All the same, this was the solution criterion that von Neumann and Morgenstern used, and the set of all Pareto-Optimal outcomes is called the “solution set.”
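The claim that every cell except the lower right is Pareto optimal can be checked mechanically. Here is a minimal Python sketch (ours, not part of the original exposition), with payoffs written as (Mikey, Joey) as in the explanation of Table 12-1.

```python
# Outcomes of the bike-selling game (Table 12-1), payoffs as (Mikey, Joey).
outcomes = {
    ("give", "give"): (110, 90),   # trade at $90
    ("give", "keep"): (10, 170),   # Mikey pays, Joey keeps the bike
    ("keep", "give"): (200, 0),    # Joey hands over the bike for nothing
    ("keep", "keep"): (100, 80),   # no trade
}

def is_pareto_optimal(cell):
    """A cell is Pareto optimal if no other cell makes someone
    better off without making anyone worse off."""
    p = outcomes[cell]
    for q in outcomes.values():
        if q != p and q[0] >= p[0] and q[1] >= p[1]:
            return False
    return True

for cell in outcomes:
    print(cell, is_pareto_optimal(cell))
# Only ("keep", "keep") fails: it is dominated by ("give", "give").
```

Running this confirms that the lower right cell is the only one that can be improved upon for both players at once.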

**Alternative Solution Concepts**

If we are to improve on this concept, we need to solve two problems. One is to narrow down the range of possible solutions to a particular price or, more generally, distribution of the benefits. This is called “the bargaining problem.” Second, we still need to generalize cooperative games to more than two participants. There are a number of concepts, including several with interesting results; but here attention will be limited to one. It is the Core, and it builds on the Pareto Optimal solution set, allowing these two problems to solve one another via “competition.”

**My Late Homework**

The last time this colloquium was offered, I assigned myself as homework a model of high school students’ decisions about which universities to apply to and attend, but I didn’t get my homework in. Fortunately, I didn’t have to give myself a grade. Here is my late homework.

In this game there are 100,000 players, seniors in high school. There are 100 universities. The universities are not players in the game — just mindless, predictable automata. The students are ranked from the most to the least “promising,” with the most “promising” getting 100,000 points, the second most “promising” getting 99,999 points, and so on down to the least “promising” student, who gets one point. Each university will admit 1000 students, and they will be the 1000 highest-ranking students who apply. The payoff to each student is the average “promise” ranking of the students who enroll in the same university she or he does.

Thus, suppose the “best” 1000 students enroll in Old Ivy University. Their average “promise” ranking is 99,500, so that is the payoff to every student at Old Ivy. (Well, actually, 99500.5, but we will round off to integers). Suppose the next ranked 1000 enroll in Pixel University. Their average ranking is 98,500, so that is the payoff to each student at Pixel. And so it goes.

The students’ strategies are to apply to one and — for simplicity — only one university. We will assume that each student knows where she or he is in the “promise” ranking. Thus the student knows the best university that will accept her or him. We may assume each student will apply to and attend the university that will give her or him the best payoff, that is, the university with the highest average “promise” ranking, provided that the student is confident of being admitted. (We are ignoring tuition and also parents’ preferences for a college nearer home).

This game has 100! distinct Nash equilibria, but, happily, they are all very similar to one another. Suppose, for example, that (as we have said before)

The most promising 1000 students apply to Old Ivy

The next 1000 apply to Pixel

The next 1000 apply to Pinhead State

and so on, with each group of 1000 students ranked together applying to the same university. Then each university will admit the 1000 students that apply, and the payoffs will be highest to students enrolled in Old Ivy, second highest to students enrolled in Pixel, third highest to those enrolled in Pinhead, and so on. Every student knows what university to apply to and is enrolled in the university she or he applies to.

This is a Nash equilibrium. To see why, suppose a single student in the top thousand were to switch his or her application from Old Ivy to Pixel. The student who switches will be accepted, but that student’s payoff drops from 99,500 to 98,500. Conversely, suppose a student in the second 1000 switches her or his application from Pixel to Old Ivy. She or he will not be accepted, so cannot improve the payoff by switching to a more highly ranked university.

Thus, the ranking of universities with Old Ivy at the top, Pixel second, and so on is a Nash Equilibrium; but it is not the only one. As we have said, there are 100! equilibria in this game. For example, there are equilibria in which Old Ivy is ranked last, instead of first. If Old Ivy were ranked last in terms of the average promise of its students, then only the 1000 worst students would bother applying to Old Ivy. No student who could get admitted to Pixel would bother applying to Old Ivy, since that would just reduce their payoff to 500, the minimum.

In other words, this game is a coordination game. So long as each group of students in the same thousand all apply to the same university, we have an equilibrium — and it doesn’t matter which university that is. If the best 1000 students happened to apply to Podunk State, Podunk State would be the best university in the country, and Harvard and MIT would be so much chopped liver. (Notice that it also doesn’t depend on the quality of the faculty, the facilities, or the food in the lunchroom. All that matters is agreement among the students).

But it gets worse. Once all of the students have sorted themselves out into groups of 1000, each group with next-door promise rankings and attending separate universities, the payoffs to the students will range from a low of 500 to a high of 99,500. The average payoff will be 50,000. What would happen if the students were deprived of their decision to apply to one school or another, and instead were assigned to universities at random? Each university would then have an average promise ranking of — average, that is, about 50,000. So that the average payoff to students would be 50,000. So all this struggle among the students hasn’t changed the average student payoff at all. It has just taken from those who have less (promise) and given to those who have more (promise), like Robin Hood in reverse. If that seems discouraging, look at it this way: it’s your decision. Harvard may be the best or Harvard may be the worst. It’s the students who decide. The faculty and the trustees don’t have any say at all. Just those high school seniors.
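The averages in this section are easy to verify. The sketch below (ours) computes the payoffs under the sorted equilibrium and shows that the average payoff equals what random assignment would produce; the text’s figures of 99,500, 500, and 50,000 are these values rounded.

```python
# College-application game: 100,000 students ranked 1..100,000,
# 100 universities with 1,000 seats each.
# Payoff = average "promise" rank of one's classmates.
N_STUDENTS, N_SCHOOLS = 100_000, 100
SEATS = N_STUDENTS // N_SCHOOLS

# Sorted equilibrium: top 1,000 at school 0, next 1,000 at school 1, ...
ranks = list(range(N_STUDENTS, 0, -1))  # position 0 is the most promising
sorted_payoffs = [sum(ranks[i:i + SEATS]) / SEATS
                  for i in range(0, N_STUDENTS, SEATS)]

print(sorted_payoffs[0])   # Old Ivy's payoff: 99500.5
print(sorted_payoffs[-1])  # the bottom school's payoff: 500.5
print(sum(sorted_payoffs) / N_SCHOOLS)  # average payoff: 50000.5

# Random assignment gives every school an expected average rank of
# (N_STUDENTS + 1) / 2 = 50000.5 -- the same average payoff, so the
# sorting contest redistributes payoff without creating any.
```

The last line makes the “Robin Hood in reverse” point numerically: sorting changes who gets what, but not the average anyone gets.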

**An Information Technology Example**

Game theory provides a promising approach to understanding strategic problems of all sorts, and the simplicity and power of the Prisoners’ Dilemma and similar examples make them a natural starting point. But there will often be complications we must consider in a more complex and realistic application. Let’s see how we might move from a simpler to a more realistic game model in a real-world example of strategic thinking: choosing an information system.

For this example, the players will be a company considering the choice of a new internal e-mail or intranet system, and a supplier who is considering producing it. The two choices are to install a technically advanced or a more proven system with less functionality. We’ll assume that the more advanced system really does supply a lot more functionality, so that the payoffs to the two players, net of the user’s payment to the supplier, are as shown in Table A-1.

**Table A-1**

| | User: Advanced | User: Proven |
|---|---|---|
| **Supplier: Advanced** | 20, 20 | 0, 0 |
| **Supplier: Proven** | 0, 0 | 5, 5 |

We see that both players can be better off, on net, if an advanced system is installed. (We are not claiming that that’s always the case! We’re just assuming it is in this particular decision). But the worst that can happen is for one player to commit to an advanced system while the other player stays with the proven one. In that case there is no deal, and no payoffs for anyone. The problem is that the supplier and the user must have a **compatible standard,** in order to work together, and since the choice of a standard is a strategic choice, their strategies have to mesh.

Although it looks a lot like the Prisoners’ Dilemma at first glance, this is a more complicated game. We’ll take several complications in turn:

Looking at it carefully, we see that this game has no dominant strategies. The best strategy for each participant depends on the strategy chosen by the other participant. Thus, we need a new concept of game equilibrium that allows for that complication. When there are no dominant strategies, we often use an equilibrium concept called the **Nash Equilibrium,** named after Nobel Memorial Laureate John Nash. The Nash Equilibrium is a pretty simple idea: we have a Nash Equilibrium if each participant chooses the best strategy, given the strategy chosen by the other participant. In the example, if the user opts for the advanced system, then it is best for the supplier to do that too. So (Advanced, Advanced) is a Nash equilibrium. But, hold on here! If the user chooses the proven system, it’s best for the supplier to do that too. There are two Nash Equilibria! Which one will be chosen? It may seem easy enough to opt for the advanced system, which is better all around, but if each participant believes that the other will stick with the proven system — being a bit of a stick in the mud, perhaps — then it will be best for each player to choose the proven system — and each will be right in assuming that the other one is a stick in the mud! This is a danger typical of a class of games called coordination games — and what we have learned is that the choice of compatible standards is a coordination game.
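Equilibrium-hunting of this kind is easy to mechanize. Here is a minimal Python sketch (ours) that scans Table A-1 for pure-strategy Nash equilibria and finds exactly the two just described.

```python
# Pure-strategy Nash equilibria of the standards game (Table A-1).
# Payoffs are (supplier, user).
payoffs = {
    ("Advanced", "Advanced"): (20, 20),
    ("Advanced", "Proven"):   (0, 0),
    ("Proven",   "Advanced"): (0, 0),
    ("Proven",   "Proven"):   (5, 5),
}
strategies = ["Advanced", "Proven"]

def nash_equilibria():
    eq = []
    for s in strategies:        # supplier's choice
        for u in strategies:    # user's choice
            # Is s a best reply to u, and u a best reply to s?
            best_s = all(payoffs[(s, u)][0] >= payoffs[(s2, u)][0]
                         for s2 in strategies)
            best_u = all(payoffs[(s, u)][1] >= payoffs[(s, u2)][1]
                         for u2 in strategies)
            if best_s and best_u:
                eq.append((s, u))
    return eq

print(nash_equilibria())  # [('Advanced', 'Advanced'), ('Proven', 'Proven')]
```

Neither cell where the two players disagree survives the check, which is exactly the coordination-game structure of the standards problem.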

We have assumed that the payoffs are known and certain. In the real world, every strategic decision is risky — and a decision for the advanced system is likely to be riskier than a decision for the proven system. Thus, we would have to take into account the players’ subjective attitudes toward risk, their **risk aversion,** to make the example fully realistic. We won’t attempt to do that in this example, but we must keep it in mind.

The example assumes that payoffs are measured in money. Thus, we are not only leaving risk aversion out of the picture, but also any other subjective rewards and penalties that cannot be measured in money. Economists have ways of measuring subjective rewards in money terms — and sometimes they work — but, again, we are going to skip over that problem and assume that all rewards and penalties are measured in money and are transferable from the user to the supplier and vice versa.

Real choices of information systems are likely to involve more than two players, at least in the long run — the user may choose among several suppliers, and suppliers may have many customers. That makes the coordination problem harder to solve. Suppose, for example, that “beta” is the advanced system and “VHS” is the proven system, and suppose that about 90% of the market uses “VHS.” Then “VHS” may take over the market from “beta” even though “beta” is the better system. Many economists, game theorists and others believe this is a main reason why certain technical standards gain dominance. (This is being written on a Macintosh computer. Can you think of any other possible examples like the beta vs. VHS example?)

On the other hand, the user and the supplier don’t have to just sit back and wait to see what the other person does — they can sit down and talk it out, and commit themselves to a contract. In fact, they have to do so, because the amount of payment from the user to the supplier — a strategic decision we have ignored until now — also has to be agreed upon. In other words, unlike the Prisoners’ Dilemma, this is a **cooperative game,** not a **noncooperative game.** On the one hand, that will make the problem of coordinating standards easier, at least in the short run. On the other hand, cooperative games call for a different approach to solution.

**Game Theory: An Introductory Sketch **

**Games with Many Participants: Proportional Games**

The queuing game gives us one example of how the Prisoners’ Dilemma can be generalized, and I hope that it provides some insights on some real human interactions. But there is another simple approach to multi-person two-strategy games that is closer to textbook economics, and is important in its own right.

As an example, let us consider the choice of transportation modes — car or bus — by a large number of identical individual commuters. The basic idea here is that car commuting increases congestion and slows down traffic. The more commuters drive their cars to work, the longer it takes to get to work, and the lower the payoffs are for both car commuters and bus commuters.

Figure 10-1 illustrates this. In the figure, the horizontal axis measures the proportion of commuters who drive their cars. Accordingly, the horizontal axis varies from a lower limit of zero to a maximum of 1 or 100%. The vertical axis shows the payoffs for this game. The upper (green) line shows the payoffs for car commuters. We see that it declines as the proportion of commuters in their cars increases. The lower, red line shows the payoffs to bus commuters. We see that, regardless of the proportion of commuters in cars, cars have a higher payoff than busses. In other words, commuting by car is a dominant strategy in this game. In a dominant strategy equilibrium, all drive their cars. The result is that they all have negative payoffs, whereas, if all rode busses, all would have positive payoffs. If all commuters choose their mode of transportation with self-interested rationality, all choose the strategy that makes them individually better off, but all are worse off as a result.

*(figure not reproduced)*

**Figure 10-1**

This is an extension of the Prisoners’ Dilemma, in that there is a dominant strategy equilibrium, but the choice of dominant strategies makes everyone worse off. But it probably is not a very “realistic” model of choice of transportation modes. Some people do ride busses. So let’s make it a little more realistic, as in Figure 10-2:

*(figure not reproduced)*

**Figure 10-2**

The axes and lines in Figure 10-2 are defined as they were for Figure 10-1. In Figure 10-2, congestion slows the busses down somewhat, so that the payoff to bus commuting declines as congestion increases; but the payoff to car commuting drops even faster. When the proportion of people in their cars reaches q, the payoff to car commuting overtakes the payoff to bus-riding, and for larger proportions of car commuters (to the right of q), the payoff to car commuting is worse than to bus commuting.

Thus, the game no longer has a dominant strategy equilibrium. However, it has a Nash-equilibrium. When a fraction q of commuters drives cars, that is a Nash-equilibrium. Here is the reasoning: starting from q, if one bus commuter shifts to the car, that moves the proportion into the region to the right of q, where car commuters are worse off, so (in particular) the person who switched is worse off. On the other hand, starting from q, if one car commuter switches to the bus, that moves the proportion into the region to the left of q, where bus commuters are worse off, so, again, the switcher is worse off. No-one can be better off by individually switching from q.

This illustrates an important point: in a Nash-equilibrium, identical people may choose different strategies to maximize their payoffs. This Nash-equilibrium occurs at the proportion q where the payoffs to the two strategies are equal, so that each commuter does just as well driving as riding the bus.
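Since the figures are not reproduced here, the following Python sketch uses linear payoff schedules of our own invention, shaped like Figure 10-2 (both payoffs fall as the car share rises, the car payoff faster), and locates the crossing point q.

```python
# Illustrative payoff lines (our numbers, not from the text):
# payoff to each mode as a function of x, the share of car commuters.
def car_payoff(x):
    return 40 - 100 * x   # falls quickly as congestion rises

def bus_payoff(x):
    return 10 - 20 * x    # congestion slows busses too, but less

# Nash equilibrium share q solves 40 - 100q = 10 - 20q, i.e. q = 30/80.
q = 30 / 80

assert abs(car_payoff(q) - bus_payoff(q)) < 1e-9  # equal payoffs at q
assert car_payoff(0.2) > bus_payoff(0.2)  # left of q: drivers do better
assert car_payoff(0.6) < bus_payoff(0.6)  # right of q: riders do better
print(f"equilibrium car share q = {q}")
```

At q the two lines cross, so no commuter can do better by being on the other side of the split; with different slopes or intercepts the equilibrium share moves, but the logic is the same.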


**A Theory of Marriage Vows**

This example is an attempt to use game theory to “explain” marriage vows. But first (given the nature of the topic) it might be a good idea to say something about “explanation” using game theory.

One possible objection is that marriage is a very emotional and even spiritual topic, and game theory doesn’t say anything about emotions and spirit. Instead game theory is about payoffs and strategies and rationality. That’s true, but it may be that the specific phenomenon — the taking of vows that (in some societies, at least) restrict freedom of choice — may have more to do with payoffs and strategies than with anything else, and may be rational. In that case, a game-theoretic model may capture the aspects that are most relevant to the institution of marriage vows. Second, game-theoretic explanations are never conclusive. The most we can say is that we have a game-theoretic model, with payoffs and strategies like this, that would lead rational players to choose the strategies that, in the actual world, they seem to choose. It remains possible that their real reasons are different and deeper, or irrational and emotional. That’s no less true of bankruptcy than of marriage. Indeed, from some points of view, their “real reasons” have to be deeper and more complex — no picture of the world is ever “complete.” The best we can hope for is a picture that fits fairly well and contains some insight. I think game theory can “explain” marriage vows in this sense.

In some sequential games, in which the players have to make decisions in sequence, freedom of choice can be a problem. These are games that give one or more players possibilities for “opportunism.” That is, some players are able to make their decisions in late stages of the game in ways that exploit the decisions made by others in early stages. But those who make the decisions in the early stages will then avoid decisions that make them vulnerable to opportunism, with results that can be inferior all around. In these circumstances, the potential opportunist might welcome some sort of restraint that would make it impossible for him to act opportunistically at the later stage. Jon Elster made the legend of “Ulysses and the Sirens” a symbol for this. Recall, in the legend, Ulysses wanted to hear the sirens sing; but he knew that a person who would hear them would destroy himself trying to go to the sirens. Thus, Ulysses decided at the first stage of the game to have himself bound to the mast, so that, at the second stage, he would not have the freedom to choose self-destruction. Sequential games are a bit different from that, in that they involve interactions of two or more people, but the games of sequential commitment can give players reason to act as Ulysses did — that is, to rationally choose at the first step in a way that would limit their freedom of choice at the second step. That is our strategy in attempting to “explain” marriage vows.

Here is the “game.” At the first stage, two people get together. They can either stay together for one period or two. If they take a vow, they are committed to stay together for both periods. During the first period, each person can choose whether or not to “invest in the relationship.” “Investing in the relationship” means making a special effort in the first period that will only yield the investor benefits in the second period, and will yield benefits in the second period only if the couple stay together. At the end of the first period, if there has been no vow, each partner decides whether to remain together for the second period or separate. If either prefers to separate, then separation occurs; but if both choose to remain together, they remain together for the second period. Payoffs in the second period depend on whether the couple separate, and, if they stay together, on who invested in the first period.

The payoffs are determined as follows: First, in the first stage, the payoff to one partner is 40, minus 30 if that partner “invests in the relationship,” plus 20 if the other partner “invests in the relationship.” Thus, investment in the relationship is a loss in the first period — that’s what makes it “investment.” In the second period, if they separate, both partners get additional payoffs of 30. Thus, each partner can assure himself or herself of 70 by not investing and then separating. However, if they stay together, each partner gets an additional payoff of 20 plus (if only the other partner invested) 25 or (if both partners invested) 60.

Notice that the total return to the couple over both periods is disproportionately greater if both persons invest. If both invest, the couple gain 2\*20 − 2\*30 = −20 in the first period and 2\*(20 + 60 − 30) = 100 in the second, a net gain of 80 over the no-investment outcome; if only one invests, the net gain is −30 + 20 + 25 = 15. The difference, 80 − 2\*15 = 50, reflects the assumption that the investments are complementary — that each partner’s investment reinforces and increases the productivity of the other person’s investment.

These ground rules lead to the payoffs in Table 15-1, in which “his” payoffs are to the right in each pair and “hers” are to the left.

**Table 15-1**

| her \ him | invest, stay | invest, separate | don’t invest, stay | don’t invest, separate |
|---|---|---|---|---|
| **invest, stay** | 110, 110 | 60, 60 | 30, 105 | 40, 115 |
| **invest, separate** | 60, 60 | 60, 60 | 40, 115 | 40, 115 |
| **don’t invest, stay** | 105, 30 | 115, 40 | 60, 60 | 70, 70 |
| **don’t invest, separate** | 115, 40 | 115, 40 | 70, 70 | 70, 70 |

Since the decision to invest (or not) precedes the decision to separate (or not) we have to work backward to solve this game. Suppose that there are no vows and both partners invest. Then we have the subgame in the upper left quarter of the table:

| her \ him | stay | separate |
|---|---|---|
| **stay** | 110, 110 | 60, 60 |
| **separate** | 60, 60 | 60, 60 |

Clearly, in this subgame, to remain together is a dominant strategy for both partners, so we can identify 110, 110 as the payoffs that will in fact occur in case both partners invest.

Now take the other symmetrical case and suppose that neither partner invests. We then have the subgame at the lower right:

| her \ him | stay | separate |
|---|---|---|
| **stay** | 60, 60 | 70, 70 |
| **separate** | 70, 70 | 70, 70 |

Here, again, we have a clear dominant strategy, and it is to separate. The payoffs of symmetrical non-investment are thus 70,70.

Now suppose that only one partner invests, and (purely for illustrative purposes!) we consider the case in which “he” invests and “she” does not. We then have the subgame at the lower left:

| her \ him | stay | separate |
|---|---|---|
| **stay** | 105, 30 | 115, 40 |
| **separate** | 115, 40 | 115, 40 |

Here again, separation is a dominant strategy, so the payoffs for the subgame where “he” invests and “she” does not are 115, 40. A symmetrical analysis will give us payoffs of 40, 115 when “she” invests and “he” does not.

Putting these subgame outcomes together in a payoff table for the decision to invest or not invest we have:

**Table 15-2**

| she \ he | invest | don’t invest |
|---|---|---|
| **invest** | 110, 110 | 40, 115 |
| **don’t invest** | 115, 40 | 70, 70 |

This game resembles the Prisoners’ Dilemma, in that non-investment is a dominant strategy, but when both players play their dominant strategies, both are worse off than they would be if both played the non-dominant strategy. Anyway, we identify 70, 70 as the subgame perfect equilibrium in the absence of marriage vows.
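A few lines of Python (ours) confirm that Table 15-2 has the Prisoners’ Dilemma structure just described: non-investment dominates for each partner, yet mutual investment pays both more.

```python
# Table 15-2 of the marriage-vows example, payoffs as (her, him).
S = ["invest", "don't invest"]
P = {
    ("invest", "invest"): (110, 110),
    ("invest", "don't invest"): (40, 115),
    ("don't invest", "invest"): (115, 40),
    ("don't invest", "don't invest"): (70, 70),
}

# "don't invest" is dominant for her: better whatever he does...
assert all(P[("don't invest", him)][0] > P[("invest", him)][0] for him in S)
# ...and for him, by symmetry.
assert all(P[(her, "don't invest")][1] > P[(her, "invest")][1] for her in S)

# Yet both playing the dominant strategy leaves both worse off
# than mutual investment:
assert P[("invest", "invest")][0] > P[("don't invest", "don't invest")][0]
assert P[("invest", "invest")][1] > P[("don't invest", "don't invest")][1]

print("subgame perfect outcome without vows:",
      P[("don't invest", "don't invest")])
```

This is exactly the 70, 70 outcome identified in the text as the subgame perfect equilibrium in the absence of marriage vows.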

But now suppose that, back at the beginning of things, the pair have the option to take, or not to take, a vow to stay together regardless. If they take the vow, only the “stay together” payoffs would remain as possibilities. If they do not take the vow, we know that there will be a separation and no investment, so we need consider only that possibility. In effect, there are three strategies: take a vow and invest, take a vow and don’t invest, or don’t take a vow. We have

**Table 15-3**

| she \ he | vow & invest | vow & don’t invest | don’t vow |
|---|---|---|---|
| **vow & invest** | 110, 110 | 30, 105 | 70, 70 |
| **vow & don’t invest** | 105, 30 | 60, 60 | 70, 70 |
| **don’t vow** | 70, 70 | 70, 70 | 70, 70 |

In this game, there is no dominant strategy. However, the only Nash equilibrium is for each player to take the vow and invest, and thus the payoff that will occur if a vow can be taken is at the upper left — 110, 110, the “efficient” outcome. In effect, willingness to take the vow is a “signal” that the partner intends to invest in the relationship — if (s)he didn’t, it would make more sense for him (her) to avoid the vow. Both partners are better off if the vow is taken, and if they had no opportunity to bind themselves with a vow, they could not attain the blissful outcome at the upper left.

Thus, when each partner decides whether or not to take the vow, each rationally expects a payoff of 110 if the vow is taken and 70 if not, and so, the rational thing to do is to take the vow. Of course, this depends strictly on the credibility of the commitment. In a world in which marriage vows become of questionable credibility, this reasoning breaks down, and we are back at Table 15-2, the Prisoners’ Dilemma of “investment in the relationship.” Some sort of first-stage commitment is necessary. Perhaps emotional commitment will be enough to make the partnership permanent — emotional commitment is one of the things that is missing from this example. But emotional commitment is hard to judge. One of the things a credible vow does is to signal emotional commitment. If there are no vows that bind, how can emotional commitment be signaled? That seems to be one of the hard problems of living in modern society!

There is a lot of common sense here that your mother might have told you — anyway my mother would have! What the game-theoretic analysis gives us is an insight on why Mom was right, after all, and how superficial reasoning can mislead us. As we compare Tables 15-2 and 15-3, we can observe that —** given the choices made,** that is, reading down a column or across a row — no-one is ever better off with Table 15-3 (vow) than with Table 15-2 (no vow). And except for the upper left quadrant, both parties are worse off with the vow than without it. Thus I might reason — wrongly! — that since, ceteris paribus, I am better off with freedom of choice than without it, I had best not take the vow. But this illustrates a pitfall of “ceteris paribus” reasoning. In this comparison, ceteris are not paribus. Rather, the outcomes of the various subgames — “ceteris” — depend on the payoff possibilities as a whole. The vow changes the whole set of payoff possibilities in such a way that “ceteris” are changed — non paribus — and the outcome improved. The set of possible outcomes is worse but the selection of outcomes among the available set is so much improved that both parties are almost twice as well off as they would be had they not agreed to restrain their freedom of choice.

In other words: Cent’ Anni!


**Some Examples of Games with More Complex Structures**

All of the game examples so far are relatively simple in that time plays no part in them, however complex they may be in other ways. The passage of time can make at least three kinds of differences. First, people may learn — new information may become available, that affects the payoffs their strategies can give. Second, even when people do not or cannot commit themselves at first, they may commit themselves later — and they may have to decide when and if to commit. Of course, this blurs the distinction we have so carefully set up between cooperative and noncooperative games, but life is like that. Third, there is the possibility of retaliation against other players who fail to cooperate with us. That, too, blurs the cooperative-noncooperative distinction. That means, in particular, that repeated games — and particularly repeated prisoners’ dilemmas — may have quite different outcomes than they do when they are played one-off. But we shall leave the repeated games out as an advanced topic and move on to study sequential games and the problems that arise when people can make commitments only in stages, at different points in the game. I personally find these examples interesting and to the point and they are somewhat original.

There are some surprising results. One surprising result is that, in some games, people are better off if they can give up some of their freedom of choice, binding themselves to do things at a later stage in the game that may not look right when they get to that stage. An example of this (I suggest) is to be found in Marriage Vows. This provides a good example of what some folks call “economic imperialism” — the use of economics (and game theory) to explain human behavior we do not usually think of as economic, rational, or calculating — although you do not really need to know any economics to follow the example in Marriage Vows. Another example along the same line (although the main application is economics in a more conventional sense) is The Paradox of Benevolent Authority, which tries to capture, in game-theoretic terms, a reason why liberal societies often try to constrain their authorities rather than relying on their benevolence.

Also, the following example will have to do with relations between an employer and an employee: A Theory of Burnout. For an example in which flexibility is important, so that giving up freedom of choice is a bad idea, and another non-imperialistic economic application of game theory, see The Essence of Bankruptcy. Of course, that note is meant to discuss bankruptcy, not to exemplify it!


**Games with Multiple Nash Equilibria**

Here is another example to try the Nash Equilibrium approach on.

Two radio stations (WIRD and KOOL) have to choose formats for their broadcasts. There are three possible formats: Country-Western (CW), Industrial Music (IM) or all-news (AN). The audiences for the three formats are 50%, 30%, and 20%, respectively. If they choose the same formats they will split the audience for that format equally, while if they choose different formats, each will get the total audience for that format. Audience shares are proportionate to payoffs. The payoffs (audience shares) are in Table 6-1.

**Table 6-1**

| WIRD \ KOOL | CW | IM | AN |
|---|---|---|---|
| **CW** | 25, 25 | 50, 30 | 50, 20 |
| **IM** | 30, 50 | 15, 15 | 30, 20 |
| **AN** | 20, 50 | 20, 30 | 10, 10 |

You should be able to verify that this is a non-constant sum game, and that there are no dominant strategy equilibria. If we find the Nash Equilibria by elimination, we find that there are two of them — the upper middle cell and the middle-left one, in both of which one station chooses CW and gets a 50 market share and the other chooses IM and gets 30. But it doesn’t matter which station chooses which format.
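The elimination argument can be checked mechanically. This Python sketch (ours) rebuilds Table 6-1 from the audience shares and scans for pure-strategy Nash equilibria.

```python
# Radio-format game (Table 6-1): same format splits the audience,
# different formats each take their whole audience.
# Payoffs are (WIRD, KOOL) audience shares.
formats = ["CW", "IM", "AN"]
share = {"CW": 50, "IM": 30, "AN": 20}

def payoff(w, k):
    if w == k:
        return (share[w] / 2, share[k] / 2)
    return (share[w], share[k])

# A cell is a Nash equilibrium if neither station can gain
# by unilaterally switching formats.
equilibria = [
    (w, k) for w in formats for k in formats
    if all(payoff(w, k)[0] >= payoff(w2, k)[0] for w2 in formats)
    and all(payoff(w, k)[1] >= payoff(w, k2)[1] for k2 in formats)
]
print(equilibria)  # [('CW', 'IM'), ('IM', 'CW')]
```

The scan turns up exactly the two cells named in the text (one station on CW with 50, the other on IM with 30), and nothing selects which station gets which — the coordination problem in miniature.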

It may seem that this makes little difference, since

- the total payoff is the same in both cases, namely 80, and
- both are efficient, in that no pair of strategies yields a total payoff larger than 80.

There are multiple Nash Equilibria in which neither of these things is so, as we will see in some later examples. But even when they are both true, the multiplication of equilibria creates a danger. The danger is that both stations will choose the more profitable CW format — and split the market, getting only 25 each! Actually, there is an even worse danger that each station might assume that the other station will choose CW, and each choose IM, splitting that market and leaving each with a market share of just 15.

More generally, the problem for the players is to figure out which equilibrium will in fact occur. In still other words, a game of this kind raises a “coordination problem:” how can the two stations coordinate their choices of strategies and avoid the danger of a mutually inferior outcome such as splitting the market? Games that present coordination problems are sometimes called coordination games.

From a mathematical point of view, this multiplicity of equilibria is a problem. For a “solution” to a “problem,” we want one answer, not a family of answers. And many economists would also regard it as a problem that has to be solved by some restriction of the assumptions that would rule out the multiple equilibria. But, from a social scientific point of view, there is another interpretation. Many social scientists (myself included) believe that coordination problems are quite real and important aspects of human social life. From this point of view, we might say that multiple Nash equilibria provide us with a possible “explanation” of coordination problems. That would be an important positive finding, not a problem!

That seems to have been Thomas Schelling's idea. Writing around 1960, Schelling proposed that any bit of information that all participants in a coordination game would have, and that would enable them all to focus on the same equilibrium, might solve the problem. In determining a national boundary, for example, the highest mountain between the two countries would be an obvious enough landmark that both might focus on setting the boundary there, even if the mountain were not very high at all.

Another source of a hint that could solve a coordination game is social convention. Here is a game in which social convention could be quite important. That game has a long name: “Which Side of the Road to Drive On?” In Britain, we know, people drive on the left side of the road; in the US they drive on the right. In abstract, how do we choose which side to drive on? There are two strategies: drive on the left side and drive on the right side. There are two possible outcomes: the two cars pass one another without incident or they crash. We arbitrarily assign a value of one each to passing without problems and of -10 each to a crash. Here is the payoff table:

**Table 6-2**

|              | Mercedes: L | Mercedes: R |
|--------------|-------------|-------------|
| **Buick: L** | 1,1         | -10,-10     |
| **Buick: R** | -10,-10     | 1,1         |

Verify that LL and RR are both Nash equilibria. But, if we do not know which side to choose, there is some danger that we will choose LR or RL at random and crash. How can we know which side to choose? The answer is, of course, that for this coordination game we rely on social convention. Moreover, we know that in this game social convention is very powerful and persistent, and no less so in the country where the solution is LL than in the country where it is RR.
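The danger of choosing at random can be quantified. Suppose, purely for illustration, that each driver independently picks a side with probability 1/2, using the payoffs of Table 6-2:

```python
# Payoffs from Table 6-2: 1 for passing safely, -10 for a crash.
# Each driver picks a side independently with probability 1/2.
sides = ["L", "R"]
payoff = {("L", "L"): 1, ("R", "R"): 1, ("L", "R"): -10, ("R", "L"): -10}

crash_prob = sum(0.25 for a in sides for b in sides if a != b)
expected_payoff = sum(0.25 * payoff[(a, b)] for a in sides for b in sides)
print(crash_prob, expected_payoff)  # → 0.5 -4.5
```

Without a convention, the drivers crash half the time and each expects a payoff of -4.5, far below the +1 that either convention delivers.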

We will see another example in which multiple Nash equilibria provide an explanation for a social problem. First, however, we need to deal with one of the issues about the Prisoners' Dilemma that applies no less to all of our examples so far: they deal with only two players.

We will first look at one way to extend the Prisoners’ Dilemma to more than two players — an invention of my own, so of course I rather like it — and then explore a more general technique for extending Nash equilibria to games of many participants.

Roger A. McCain


**Nash Equilibrium and the Economics of Patents**

In the Queuing Game, the first person in line gets the best service. In a patent system, the first person to invent a device gets the patent. Let's apply that parallel and try to come up with a game-theoretic analysis of patenting. For this example, we will have to try to think a bit like economists, and use two economic concepts: the concept of diminishing returns to investment and the focus on the additional revenue as a result of one more unit of investment, the "marginal revenue" in economist jargon. If you have studied those concepts, this will be your reminder. If you haven't studied them, take it slow, but don't worry: it will all be explained.

Patents exist because inventions are easy to imitate. An inventor could spend all his wealth working on an invention, and once it is proven, other businessmen could imitate it and get the benefit of all that investment without making any comparable investment of their own. Because of this, if there were no patents, there would be little incentive to develop new inventions, and thus very few inventions. At least, that’s the idea behind the patent system. But some economists point out that patents are limited in time and in scope, so that there probably isn’t as much incentive to invent as we would need to get an “efficient” number of inventions. This suggests that we don’t get enough inventions — but that may be a hasty conclusion.

For this game example, let us think of some new invention. At the beginning of the game, a number of development labs are considering whether to invest in the development of the invention. It is not known whether the invention is actually possible or not. Research and development are required even to discover that. But everyone estimates that, if the invention is produced and patented, it will yield a profit of $10,000,000. To keep things simple, we ignore any other potential benefits and proceed as if the profits were the only net benefits of developing the invention. Also for simplicity, we suppose that the only decision a laboratory can make is to spend $1,000,000 on a development effort or to spend nothing. A lab may not invest more than $1,000,000, nor any positive amount less than that.

If there are investments in development, the invention may or may not be successfully developed. Since we don’t know whether the invention is possible or not, the development effort may fail, and we can only say how **probable** it is that the invention will be made. The probability that the invention is successfully developed depends on the amount invested by the whole group, as shown in Table 2. But what does this probability mean, in money? To answer that question, we compute the “expected value” of total revenue — that is, the revenue of $10,000,000 if the invention is made times the probability. Thus, if the probability of success is fifty percent, the “expected revenue” is 0.5*$10,000,000 = $5,000,000. The more labs invest, the greater the probability of success is, up to a point. But the labs’ investment is subject to “diminishing returns:” at each step, the additional $1,000,000 of investment increases the expected revenue by a smaller amount. This is shown in the last column, as the “additional expected revenue” goes from $3,000,000 to $2,000,000, and so on.

**Table 2**

| investment (millions) | probability | expected revenue | additional expected revenue |
|-----------------------|-------------|------------------|-----------------------------|
| 0                     | 0           | 0                | 0                           |
| 1                     | 0.3         | 3,000,000        | 3,000,000                   |
| 2                     | 0.5         | 5,000,000        | 2,000,000                   |
| 3                     | 0.5667      | 5,667,000        | 667,000                     |
| 4                     | 0.61        | 6,100,000        | 433,000                     |
| 5                     | 0.61        | 6,100,000        | 0                           |
| 6                     | 0.61        | 6,100,000        | 0                           |
| 7                     | 0.61        | 6,100,000        | 0                           |
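The expected-revenue and marginal-revenue columns of Table 2 can be recomputed directly from the success probabilities and the $10,000,000 prize. A small Python sketch:

```python
# Reproduce the expected-revenue columns of Table 2.
# Success probabilities are indexed by millions invested; prize is fixed.
PRIZE = 10_000_000
prob = [0, 0.3, 0.5, 0.5667, 0.61, 0.61, 0.61, 0.61]

rows = []
prev = 0.0
for n, p in enumerate(prob):
    expected = p * PRIZE                         # expected total revenue
    rows.append((n, expected, expected - prev))  # (investment, E[rev], marginal)
    prev = expected

for n, exp_rev, marginal in rows:
    print(n, round(exp_rev), round(marginal))
```

The printed marginal column falls from 3,000,000 to 2,000,000 to 667,000 and then to zero, exactly the diminishing returns the table records.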

How much should be invested, for maximum net benefits? Economic theory tells us that it makes sense to keep increasing the investment as long as the additional expected revenue (marginal benefit, in economist's jargon) is greater than the additional investment required (marginal cost, in economist's jargon). That means it is efficient to increase investment in development as long as adding one more lab increases the expected revenue by at least the amount the lab will invest, $1,000,000, but no further. The second lab adds $2,000,000 in expected revenue, while the third adds only $667,000. The third lab is a loser, and the efficient number of labs working on developing this invention is two.

But how are these payoffs distributed among the development labs? We assume they are distributed on the "horse-race" principle: only the lab that "comes in first" is a winner. That is, no matter how many labs invest, 100% of the profits go to one lab, the lab that completes a working prototype first. Since all of the labs invest the same lump sum of $1,000,000, we shall assume that all who invest have an equal chance of getting the payoff, if there is any payoff at all. Thus, when two labs invest, each has a 50% chance at a 50% chance of a $10,000,000 payoff, that is, overall, a 25% (50% times 50%) chance at the $10,000,000, for an expected value payoff of $2,500,000 and a profit of $1,500,000. How many will invest in this "horse race" model of invention? If an enterprise is considering investing when two others are already committed to investing, it can anticipate gaining a 1/3 chance at a 56.67% chance (overall, an 18.89% chance) at $10,000,000, for an expected value of $1,889,000 and a profit of $889,000. The third lab will invest. What about the fourth, fifth, sixth? To make a long story a little shorter, the sixth lab would gain a 1/6 chance at a 61% chance at $10,000,000, for an expected value of $1,016,666.67 and a profit of $16,666.67. The fourth, fifth, and sixth labs will invest. However, a potential seventh lab would anticipate a 1/7 chance at a 61% chance at the $10,000,000 payoff. That is an expected value of $871,428.57 and a loss of $128,571.43. The seventh lab will not invest. Any set of strategies in which six labs invest in research to produce this invention is a Nash equilibrium.

Thus, in equilibrium, six firms will contest this particular “horse race,” and that is three times the efficient number of labs to work on developing this invention. The allocation of four more labs to this job increases the probability of success by just 11%, a change worth just over a million; but four million are spent doing it!
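The entry reasoning can be replayed in a few lines. This Python sketch uses the probabilities of Table 2; the sequential-entry framing (labs enter one at a time while entry remains profitable) is our simplification of the argument:

```python
# "Horse-race" entry: the k-th lab to enter gets a 1/k chance at prob[k]
# of the $10,000,000 prize and pays a $1,000,000 entry cost.
PRIZE, COST = 10_000_000, 1_000_000
prob = {1: 0.3, 2: 0.5, 3: 0.5667, 4: 0.61, 5: 0.61, 6: 0.61, 7: 0.61}

entrants = 0
for k in range(1, 8):
    expected_value = prob[k] / k * PRIZE  # k-th lab's expected winnings
    if expected_value > COST:
        entrants = k                      # entry is still profitable
print(entrants)  # → 6
```

The sixth lab's expected value ($1,016,666.67) still covers the cost, while the seventh's ($871,428.57) does not, so six labs enter: three times the efficient number.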

Notice that this model could be applied when the reward goes, not to the first innovation in time, but to the first on some other scale. For example, suppose that the information products developed are not patentable, but are slightly different in design, as different computer applications might be. Suppose also that only the one perceived as “best” can be sold, and the rest fail and investments in developing them are lost. Finally, suppose that enterprises that invest equally have equal chances of producing the “best” dominant product, the “killer app.” This “star system” will create a tendency toward overallocation of resources to the production of the information products.



**Notes Toward a Simple Game-Theoretic Foundation for Post-Walrasian Economics **

*by Roger A. McCain*

These notes are suggested by some recent discussions of the economic theory of persistent slumps, such as the Great Depression of the 1930's and the European unemployment of the 1980's and 1990's. Clearly, Keynesian ideas are a central (and still controversial) aspect of any such theory, but there is a wide variety of interpretations of Keynes, both favorable and critical, and recently one group in that broad tradition has begun to call itself "Post-Walrasian." The Post-Walrasian view has several distinctive insights. The following list is not represented as either complete or final, but I hope it will do as a starting point:

- In real market systems, equilibrium is not unique. (This is the major departure from the tradition of Leon Walras in economics, a tradition that has insisted on the uniqueness of equilibrium.)
- In real economies, economic activity is also "path dependent"; that is, what happens depends not only on "where the economic system is," but also on how it has gotten there.
- Instead of a unique equilibrium, there may be multiple equilibria, and some of these equilibria may be better than others.
- Persistent slumps occur when, for some reason, the economy falls into an inferior equilibrium.
- Economics requires multiple levels of analysis, and institutions and institutional change have to be as much a part of the theory as markets and market changes.
- Institutions are not simply limitations on markets, but evolve to solve problems that arise at the lower, market, level of analysis.
- Among the problems to be solved, coordination of market activity is a very important one. In a Walrasian model, markets are assumed to solve all problems of coordination; but that is not "realistic."
- Persistent slumps and inferior equilibria arise in large part because of coordination problems.
- Problems for market equilibria, including multiple equilibria, are especially likely where market relationships involve trust or bargaining relationships.

These notes will attempt to give a game-theoretic framework for such a theory at about the level of simplicity of the introductory text. The discussion will be “simple” in that — among other things — it will not consider the multiple levels of analysis in the model itself, although aspects of the model will point in that direction.

**The Heave-Ho Game **

Game theory seems promising for this purpose in part because the concept of Nash equilibrium is known to include a possibility of multiple equilibria. In my 1980 textbook I gave an illustration of games with multiple equilibria, some better than others, called the Heave-Ho Game. Here is the “little story” that goes with the game:

Two people are driving on a back-road short-cut and they come to a place where the road is obstructed by a fallen tree. Together, they are capable of moving the tree off the road, but only if each motorist heaves as hard as he can. If they coordinate their decisions and both heave, they can get the road clear, get back in their car, and continue to their destination. Let us say that their payoffs in that case are both +5. If neither motorist heaves, they have to turn back and arrive very late at their destination. In that case, we shall say that the payoffs are 0 for each motorist. However, if one motorist heaves and the other slacks off, making less than an all-out effort, the tree is shifted only a little (it remains in the road) and the motorist who heaved gets a painful muscle strain as a result of the effort. Thus, the payoffs in that case are 0 for the slacker and -5 for the motorist who heaves. The payoff table is shown in Table 11-1.

**Table 11-1**

|               | heave | slack off |
|---------------|-------|-----------|
| **heave**     | 5,5   | -5,0      |
| **slack off** | 0,-5  | 0,0       |

I hope it is clear that there are two Nash equilibria, at the upper left and the lower right. Starting from the upper left, either the column player or the row player will be worse off, going from 5 to 0, if he changes strategies unilaterally. Starting from the lower right, again, either the column player or the row player will be worse off, going from 0 to -5, if he changes strategies unilaterally. However, the other two outcomes are not equilibria: from the upper right, for example, the row player will be better off switching from a "heave" strategy, with a -5 payoff, to a "slack off" strategy, with a 0 payoff, if the other player does not change. Symmetrically, the lower left is not an equilibrium either.[1]

This reasoning — that we have equilibrium if neither player can be better off by changing strategy unilaterally — is consistent with the definition of Nash equilibrium. However, even though the lower right outcome is a Nash equilibrium, it is clearly inferior to the upper left outcome. In fact, it is Pareto-inferior, which means that a coordinated change of strategies from (slack off, slack off) to (heave, heave) would make some participants better off (in this case, both are better off) and nobody worse off. This illustrates that the game has multiple equilibria, with some equilibria superior to others. It also illustrates the importance of coordination: can they make the coordinated change of strategies they need to make them both better off? Because of this issue, the Heave-Ho game illustrates a pure coordination game.
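Both claims (two Nash equilibria, with the slack-off equilibrium Pareto-inferior) can be verified directly from Table 11-1. A minimal Python check; the helper names are ours:

```python
# Verify the Heave-Ho game of Table 11-1: (heave, heave) and
# (slack, slack) are both Nash equilibria; the latter is Pareto-inferior.
strategies = ["heave", "slack"]
payoffs = {
    ("heave", "heave"): (5, 5), ("heave", "slack"): (-5, 0),
    ("slack", "heave"): (0, -5), ("slack", "slack"): (0, 0),
}

def is_nash(r, c):
    """Neither motorist can gain by a unilateral change of strategy."""
    return (all(payoffs[(r2, c)][0] <= payoffs[(r, c)][0] for r2 in strategies)
            and all(payoffs[(r, c2)][1] <= payoffs[(r, c)][1] for c2 in strategies))

# Pareto comparison: both players strictly prefer (heave, heave).
pareto_inferior = all(a < b for a, b in zip(payoffs[("slack", "slack")],
                                            payoffs[("heave", "heave")]))
print(is_nash("heave", "heave"), is_nash("slack", "slack"), pareto_inferior)
# → True True True
```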

With only two decision-makers to coordinate, it ought to be relatively easy. But even in this simple case, coordination might fail because of mistrust. If each of the motorists suspects that the other is a slacker, each will consider the choice as being between -5 and 0, rather than a choice between 0 and +5, and choose to slack off. Pessimism and loss aversion can have the same effect. A very pessimistic way of choosing strategies (in this situation) is to maximize the minimum payoff. “If I heave, my minimum payoff is -5. If I slack off, my minimum payoff is 0, and that is better than -5. I’ll slack off.”
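The pessimistic maximin rule quoted above amounts to a one-line computation over the row player's payoffs in Table 11-1:

```python
# Maximin choice for a motorist in Table 11-1: rank each strategy by its
# worst-case payoff and pick the strategy whose worst case is best.
worst_case = {"heave": min(5, -5), "slack": min(0, 0)}
maximin_choice = max(worst_case, key=worst_case.get)
print(maximin_choice)  # → slack
```

The worst case of heaving (-5) is below the worst case of slacking (0), so a maximin player slacks off, and coordination fails.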

We can also see how institutions might help to solve the coordination problem. An institution that might help in this case is a social convention. Suppose it were widely believed that, in cases of this kind, "gentlemen don't slack off. Just isn't done, don't y'know." If this convention were widespread enough that both motorists believed that the other motorist would subscribe to it, and therefore would not slack off, that would be enough to assure each motorist that the other one would heave, and then each would behave like a gentleman and heave as well!

Because the Heave-Ho game is symmetrical, it doesn’t quite tell the whole story. Let us make the example just a little more complex. Suppose that one of the motorists — the “bridegroom” — has more to lose by being late than the other motorist, the “hitchhiker.” The payoff table for this modified game is Table 11-2.

**Table 11-2**

|               | heave | slack off |
|---------------|-------|-----------|
| **heave**     | 5,5   | -5,0      |
| **slack off** | 0,-5  | 0,-4      |

In this case, the column player is the bridegroom and the row player is the hitchhiker. It is still true that they both gain in a coordinated switch of strategies from (slack off, slack off) to (heave, heave). But now the bridegroom gains 9, while the hitchhiker, who is in less of a hurry, gains only 5. Seeing that difference, the hitchhiker might demand some compensation for his cooperation — turning the decision into a bargaining session. Bargaining outcomes can be unpredictable, and contribute to distrust, so this temptation could be another reason why coordination might fail.

**N-Person Heave-Ho, or The Investment Game **

Despite all this, it should be relatively easy for just two people to coordinate their strategy. In economics, we are often concerned with cases in which very large numbers of people must coordinate their strategies. We are also really concerned with macroeconomics, rather than back-road driving, and so it is time to bring the economics to the fore. Accordingly, we consider an investment game with N potential investors — N very large — and two strategies: each investor can choose a high rate of investment or a low one. We suppose that the payoff to a low rate of investment is always zero, but the payoff to a high rate of investment depends on how many other investors choose a high rate of investment.

The payoffs in the Investment Game are shown in Figure 9-1. The figure is an xy diagram with the profitability of investment on the vertical axis. The horizontal axis measures the number n of investors who choose the high-investment strategy, which varies from a minimum of n = 0 to a maximum of n = N. The number of investors who choose a low-investment strategy is N-n. An increase in the number of investors choosing a high-investment strategy raises overall investment, and that stimulates demand, which in turn increases the profitability of investment. For the high-investment strategy, the payoff increases more rapidly. Thus, in Figure 9-1, the thick gray line ab gives the profitability of the high-investment strategy, and the cross-hatched line fg gives the profitability of the low-investment strategy.


**Figure 9-1 **

An economist naturally assumes that something important happens when two lines cross. In this case y, the proportion at which the two lines cross, is a watershed rather than an equilibrium. Suppose that, at a given time, the number of investors choosing the high-investment strategy is greater than (to the right of) y, but less than N. This is not an equilibrium, since the profit-maximizing strategy for all investors is then the high-investment strategy. On the other hand, suppose that the number of investors choosing the high-investment strategy is less than (to the left of) y, but positive. This is not an equilibrium either, since the profit-maximizing strategy for all investors is then the low-investment strategy. There are three equilibria. If all investors choose the high-investment strategy, then we are at the right-hand extreme of the diagram, and the high-investment strategy is the profitable one, so this is an equilibrium. Similarly, if all investors choose the low-investment strategy, we are at the left extreme of the diagram, and the low-investment strategy is the profitable choice, and this is an equilibrium. Finally, when exactly y investors choose the high-investment strategy, both strategies are equally profitable, so there is no reason for anyone to deviate. This, too, is an equilibrium, but it is an unstable one: any random deviation from it will lead on to one of the extremes.[2]
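The watershed dynamics can be illustrated with a toy best-response model. The payoff functions below are stand-ins, not taken from Figure 9-1: the low-investment payoff is normalized to zero and the high-investment payoff rises linearly, crossing zero at an assumed watershed share of 0.4:

```python
# Best-response dynamics in the N-person investment game (illustrative
# payoffs): low investment pays 0; high investment pays (n/N) - Y, which
# is positive exactly when the share of high investors exceeds Y.
N, Y = 100, 0.4

def payoff_high(n):
    return (n / N) - Y

def step(n):
    """All investors switch to whichever strategy currently pays more."""
    if payoff_high(n) > 0:
        return N            # everyone goes high
    if payoff_high(n) < 0:
        return 0            # everyone goes low
    return n                # knife-edge: the unstable interior equilibrium

print(step(step(41)), step(step(39)), step(40))  # → 100 0 40
```

Starting just above the watershed (41 of 100) tips the system to the all-high equilibrium; starting just below (39) tips it to the all-low slump; exactly at the watershed (40) the system rests, but any perturbation sends it to an extreme.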

This example illustrates some key points. Once again we have two stable equilibria, and one is better than the other. In the high-investment equilibrium, profits are higher for all investors. The logic behind the example, though, is that the investment community sometimes settles into the low-investment equilibrium. When that happens, we have a depression or long-term stagnation.

However, there is still a good deal going on behind the scenes. We have said that higher overall investment leads to higher profits, because it stimulates demand. That is a Keynesian idea. But the Keynesian tradition tells us that there are several other sources of spending that stimulate demand. To take them into account we would need to look at things from a more traditionally Keynesian perspective. We will reserve that for an appendix (http://william-king.www.drexel.edu/top/eco/game/Post-Up2.html), though. The appendix links these ideas to those in a macroeconomic principles textbook, so it may be of interest to students who have studied macroeconomics; but it is not, in itself, game theory.


**The Prisoners’ Dilemma**

Recent developments in game theory, especially the award of the Nobel Memorial Prize in 1994 to three game theorists and the death of A. W. Tucker, in January, 1995, at 89, have renewed the memory of its beginnings. Although the history of game theory can be traced back earlier, the key period for the emergence of game theory was the decade of the 1940’s. The publication of *The Theory of Games and Economic Behavior* was a particularly important step, of course. But in some ways, Tucker’s invention of the Prisoners’ Dilemma example was even more important. This example, which can be set out in one page, could be the most influential one page in the social sciences in the latter half of the twentieth century.

This remarkable innovation did not come out in a research paper, but in a classroom. As S. J. Hagenmayer wrote in the *Philadelphia Inquirer* ("Albert W. Tucker, 89, Famed Mathematician," Thursday, Feb. 2, 1995, p. B7): "In 1950, while addressing an audience of psychologists at Stanford University, where he was a visiting professor, Mr. Tucker created the Prisoners' Dilemma to illustrate the difficulty of analyzing" certain kinds of games. "Mr. Tucker's simple explanation has since given rise to a vast body of literature in subjects as diverse as philosophy, ethics, biology, sociology, political science, economics, and, of course, game theory."

**The Game**

Tucker began with a little story, like this: two burglars, Bob and Al, are captured near the scene of a burglary and are given the “third degree” separately by the police. Each has to choose whether or not to confess and implicate the other. If neither man confesses, then both will serve one year on a charge of carrying a concealed weapon. If each confesses and implicates the other, both will go to prison for 10 years. However, if one burglar confesses and implicates the other, and the other burglar does not confess, the one who has collaborated with the police will go free, while the other burglar will go to prison for 20 years on the maximum charge.

The strategies in this case are: confess or don’t confess. The payoffs (penalties, actually) are the sentences served. We can express all this compactly in a “payoff table” of a kind that has become pretty standard in game theory. Here is the payoff table for the Prisoners’ Dilemma game:

**Table 3-1**

|                  | Al: confess | Al: don't |
|------------------|-------------|-----------|
| **Bob: confess** | 10,10       | 0,20      |
| **Bob: don't**   | 20,0        | 1,1       |

The table is read like this: Each prisoner chooses one of the two strategies. In effect, Al chooses a column and Bob chooses a row. The two numbers in each cell tell the outcomes for the two prisoners when the corresponding pair of strategies is chosen. The number to the left of the comma tells the payoff to the person who chooses the rows (Bob) while the number to the right of the comma tells the payoff to the person who chooses the columns (Al). Thus (reading down the first column) if they both confess, each gets 10 years, but if Al confesses and Bob does not, Bob gets 20 and Al goes free.

So: how to solve this game? What strategies are “rational” if both men want to minimize the time they spend in jail? Al might reason as follows: “Two things can happen: Bob can confess or Bob can keep quiet. Suppose Bob confesses. Then I get 20 years if I don’t confess, 10 years if I do, so in that case it’s best to confess. On the other hand, if Bob doesn’t confess, and I don’t either, I get a year; but in that case, if I confess I can go free. Either way, it’s best if I confess. Therefore, I’ll confess.”

But Bob can and presumably will reason in the same way — so that they both confess and go to prison for 10 years each. Yet, if they had acted “irrationally,” and kept quiet, they each could have gotten off with one year each.

**Dominant Strategies**

What has happened here is that the two prisoners have fallen into something called a “dominant strategy equilibrium.”

**DEFINITION Dominant Strategy: **Let an individual player in a game evaluate separately each of the strategy combinations he may face, and, for each combination, choose from his own strategies the one that gives the best payoff. If the same strategy is chosen for each of the different combinations of strategies the player might face, that strategy is called a “dominant strategy” for that player in that game.

**DEFINITION Dominant Strategy Equilibrium: **If, in a game, each player has a dominant strategy, and each player plays the dominant strategy, then that combination of (dominant) strategies and the corresponding payoffs are said to constitute the dominant strategy equilibrium for that game.

In the Prisoners’ Dilemma game, to confess is a dominant strategy, and when both prisoners confess, that is a dominant strategy equilibrium.
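The dominant-strategy definition can be applied mechanically to Table 3-1. In this Python sketch (the `dominant_for_bob` helper is ours), payoffs are years in prison, so "best" means fewest:

```python
# Find Bob's dominant strategy in the Prisoners' Dilemma of Table 3-1.
# Payoffs are prison sentences, so each prisoner prefers SMALLER numbers.
strategies = ["confess", "dont"]
years = {  # (Bob's sentence, Al's sentence)
    ("confess", "confess"): (10, 10), ("confess", "dont"): (0, 20),
    ("dont", "confess"): (20, 0), ("dont", "dont"): (1, 1),
}

def dominant_for_bob():
    """Bob's strategy that is best against every strategy Al might pick."""
    for s in strategies:
        if all(years[(s, al)][0] <= years[(other, al)][0]
               for al in strategies for other in strategies):
            return s
    return None

print(dominant_for_bob())  # → confess
```

By the game's symmetry the same computation applies to Al, so (confess, confess) is the dominant strategy equilibrium, even though (don't, don't) would leave both better off.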

**Issues With Respect to the Prisoners’ Dilemma**

This remarkable result (individually rational action leaves both persons worse off in terms of their own self-interested purposes) is what has made such a wide impact on modern social science. For there are many interactions in the modern world that seem very much like that, from arms races through road congestion and pollution to the depletion of fisheries and the overexploitation of some subsurface water resources. These are all quite different interactions in detail, but they are interactions in which (we suppose) individually rational action leads to inferior results for each person, and the Prisoners' Dilemma suggests something of what is going on in each of them. That is the source of its power.

Having said that, we must also admit candidly that the Prisoners’ Dilemma is a very simplified and abstract — if you will, “unrealistic” — conception of many of these interactions. A number of critical issues can be raised with the Prisoners’ Dilemma, and each of these issues has been the basis of a large scholarly literature:

The Prisoners’ Dilemma is a two-person game, but many of the applications of the idea are really many-person interactions.

We have assumed that there is no communication between the two prisoners. If they could communicate and commit themselves to coordinated strategies, we would expect a quite different outcome.

In the Prisoners’ Dilemma, the two prisoners interact only once. Repetition of the interactions might lead to quite different results.

Compelling as the reasoning that leads to the dominant strategy equilibrium may be, it is not the only way this problem might be reasoned out. Perhaps it is not really the most rational answer after all.

We will consider some of these points in what follows.



**The Queuing Game**

Two-person games don’t get us very far. Many of the “games” that are most important in the real world involve considerably more than two players — for example, economic competition, highway congestion, overexploitation of the environment, and monetary exchange. So we need to explore games with more than two players.

Von Neumann and Morgenstern spent a good deal of time on games with three players, and some more recent authors follow their example. This serves to illustrate how even one more player can complicate things, but it does not help us much with realism. We need an analysis of games with N > 3 players, where N can be quite large. To get that, we will have to make some simplifying assumptions. One kind of simplifying assumption is the "representative agent model." In this sort of model, we assume that all players are identical, have the same strategy options, and get symmetrical payoffs. We also assume that the payoff to each player depends only on the number of other players who choose each strategy, and not on which player chooses which strategy.

This “representative agent” approach shouldn’t be pushed too far. It is quite common in economic theory, and economists are sometimes criticized for overdoing it. But it is useful in many practical examples, and the next few sections will apply it.

This section presents a “game” which extends the Prisoners’ Dilemma in some interesting ways. The Prisoners’ Dilemma is often offered as a paradigm for situations in which individual self-interested rationality leads to bad results, so that the participants may be made better off if an authority limits their freedom to choose their strategies independently. Powerful as the example is, there is much missing from it. Just to take one point: the Prisoners’ Dilemma game is a two-person game, and many of the applications are many-person interactions. The game considered in this example extends the Prisoners’ Dilemma sort of interaction to a group of more than two people. I believe it gives somewhat richer implications about the role of authority, and as we will see in a later section, its N-person structure links it in an important way to cooperative game theory.

As usual, let us begin with a story. Perhaps the story will call to mind some of the reader’s experience. We suppose that six people are waiting at an airline boarding gate, but that the clerks have not yet arrived at the gate to check them in. Perhaps these six unfortunates have arrived on a connecting flight with a long layover. Anyway, they are sitting and awaiting their chance to check in, and one of them stands up and steps to the counter to be the first in the queue. As a result the others feel that they, too, must stand in the queue, and a number of people end up standing when they could have been sitting.

Here is a numerical example to illustrate a payoff structure that might lead to this result. Let us suppose that there are six people, and that the gross payoff to each passenger depends on when she is served, with payoffs as follows in the second column of Table 7-1. Order of service is listed in the first column.

**Table 7-1**

Order served | Gross Payoff | Net Payoff |

First | 20 | 18 |

Second | 17 | 15 |

Third | 14 | 12 |

Fourth | 11 | 9 |

Fifth | 8 | 6 |

Sixth | 5 | 3 |

These payoffs assume, however, that one does not stand in line. There is a two-point effort penalty for standing in line, so that for those who stand in line, the net payoff to being served is two less than what is shown in the second column. These net payoffs are given in the third column of the table.

Those who do not stand in line are chosen for service at random, after those who stand in line have been served. (Assume, for simplicity, that these six passengers are risk neutral.) If no-one stands in line, then each person has an equal chance of being served first, second, …, sixth, and an expected payoff of 12.5. In such a case the aggregate payoff is 75.

But this will not be the case, since an individual can improve her payoff by standing in line, provided she is first in line. The net payoff to the person first in line is 18>12.5, so someone will get up and stand in line.

This leaves the average payoff at 11 for those who remain. Since the second person in line gets a net payoff of 15, someone will be better off to get up and stand in the second place in line.

This leaves the average payoff at 9.5 for those who remain. Since the third person in line gets a net payoff of 12, someone will be better off to get up and stand in the third place in line.

This leaves the average payoff at 8 for those who remain. Since the fourth person in line gets a net payoff of 9, someone will be better off to get up and stand in the fourth place in line.

This leaves the average payoff at 6.5 for those who remain. Since the fifth person in line gets a net payoff of 6, no-one else will join the queue. With 4 persons in the queue, we have arrived at a Nash equilibrium of the game. The total payoff is 67, less than the 75 that would have been the total payoff if, somehow, the queue could have been prevented.
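The step-by-step queue-joining argument above can be sketched in code. This is a minimal illustration of the reasoning, not part of McCain's text; the variable names are ours, and the payoffs come from Table 7-1.

```python
# Payoffs by order of service (Table 7-1) and the effort cost of queuing.
GROSS = [20, 17, 14, 11, 8, 5]
PENALTY = 2

queue = 0  # number of people standing in line so far
while queue < 6:
    next_in_line = GROSS[queue] - PENALTY        # net payoff for taking the next place in line
    remaining = GROSS[queue:]                    # places left over for the sitters
    avg_if_sitting = sum(remaining) / len(remaining)
    if next_in_line > avg_if_sitting:
        queue += 1                               # someone is better off standing up
    else:
        break                                    # no one gains by joining: Nash equilibrium

total = sum(GROSS[:queue]) - PENALTY * queue + sum(GROSS[queue:])
print(queue)  # 4 people end up standing in line
print(total)  # aggregate payoff 67, versus 75 if the queue were prevented
```

The loop simply repeats the text's comparison of "next place in line" against the average payoff of remaining seated, stopping at four queuers.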

Two people are better off — the first two in line — with the first gaining an assured payoff of 5.5 above the uncertain average payoff she would have had in the absence of queuing and the second gaining 2.5. But the rest are worse off. The third person in line gets 12, losing 0.5; the fourth 9, losing 3.5, and the rest get average payoffs of 6.5, losing 6 each. Since the total gains from queuing are 8 and the losses 16, we can say that, in one fairly clear sense, queuing is inefficient.

We should note that it is in the power of the authority (the airline, in this case) to prevent this inefficiency by the simple expedient of not respecting the queue. If the clerks were to ignore the queue and, let us say, pass out lots for order of service at the time of their arrival, there would be no point for anybody to stand in line, and there would be no effort wasted by queuing (in an equilibrial information state).


Roger A. McCain

**Cooperative Games and the Core**


**Language for Cooperative Games**

We will need a bit of language to talk about cooperative games with more than two persons. A group of players who commit themselves to coordinate their strategies is called a “coalition.” What the members of the coalition get, after all the bribes, side payments, and quids pro quo have cleared, is called an “allocation” or “imputation.”

(The problem of coalitions also arises in zero-sum games, if there are more than two players. With three or more players, some of the players may profit by “ganging up” on the rest. For example, in poker, two or more players may cooperate to cheat a third, dividing the pelf between themselves. This is cheating, in poker, because the rules of poker forbid cooperation among the players. For the members of a coalition of this kind, the game becomes a non-zero sum game — both of the cheaters can win, if they cheat efficiently).

“Allocation” and “imputation” are economic terms, and economists are often concerned with the efficiency of allocations. The standard definition of efficient allocation in economics is “Pareto optimality.” Let us pause to recall that concept. In defining an efficient allocation, it is best to proceed by a double-negative. An allocation is *inefficient* if there is at least one person who can do better, while no other person is worse off. (That makes sense — if somebody can do better without anyone else being made worse off, then there is an unrealized potential for benefits in the game). Conversely, the allocation is *efficient* in the Paretian sense if no-one can be made better off without making someone else worse off.

The members of a coalition, C, are a subset of the set of players in the game. (Remember, a “subset” can include all of the players in the game. If the subset is less than the whole set of players in the game, it is called a “proper” subset). If all of the players in the game are members of the coalition, it is called the “grand” coalition. A coalition can also have only a single member. A coalition with just a single member is called a “singleton coalition.”

Let us say that the members of coalition C get payoff A. (A is a vector or list of the payoffs to all the members of C, including side payments, if any). Now suppose that some of the members of coalition C could join another coalition, C’, with an allocation of payoffs A’. The members of C who switch to C’ may be called “defectors.” If the payoffs to defectors in A’ are greater than those in A, then we say that A’ “dominates” A through coalition C’. In other words: an allocation is dominated if some of the members of the coalition can do better for themselves by deserting that coalition for some other coalition.
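The definition of domination can be stated as a small predicate. This sketch is ours, not the text's; the function name and the toy allocations are hypothetical.

```python
# Illustrative check of "domination": A' dominates A through the new coalition
# if every defector gets a strictly greater payoff in A' than in A.
def dominates(A, A_prime, defectors):
    """True if allocation A' dominates A for the listed defecting players."""
    return all(A_prime[i] > A[i] for i in defectors)

# Toy example: players 0 and 1 can defect to a coalition paying them more.
A = {0: 5, 1: 5, 2: 5}
A_prime = {0: 6, 1: 7}
print(dominates(A, A_prime, [0, 1]))  # True: both defectors do strictly better
```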

**The Core**

We can now define one important concept of the solution of a cooperative game. The core of a cooperative game consists of all undominated allocations in the game. In other words, the core consists of all allocations with the property that no subgroup within the coalition can do better by deserting the coalition.

Notice that an allocation in the core of a game will always be an efficient allocation. Here, again, the best way to show this is to reason in double-negatives — that is, to show that an inefficient allocation cannot be in the core. To say that the allocation A is inefficient is to say that a grand coalition can be formed in which at least one person is better off, and no-one worse off, than they are in A. Thus, any inefficient allocation is dominated through the grand coalition, and so cannot be in the core.

Now, two very important limitations should be mentioned. The core of a cooperative game may be of any size — it may have only one allocation, or there may be many allocations in the core (corresponding either to one or many coalitions), and it is also possible that there may not be any allocations in the core at all. What does it mean to say that there are no allocations in the core? It means that there are no stable coalitions — whatever coalition may be formed, there is some subgroup that can benefit by deserting it. A game with no allocations in the core is called an “empty-core game.”

I said that the rational player in a cooperative game must ask not only “What strategy choice will lead to the best outcome for all of us in this game?” but also “How large a bribe may I reasonably expect for choosing it?” The core concept answers this question as follows: “Don’t settle for a smaller bribe than you can get from another coalition, and don’t make any commitments until you are sure.”

We will now consider two applications of the concept of the core. The first is a “market game,” a game of exchange. We then return to a game we have looked at from the noncooperative viewpoint: the queuing game.

**A Market Game**

Economists often claim that “increasing competition” (an increasing number of participants on both sides of the market, demanders and suppliers) limits monopoly power. Our market game is designed to bring out that idea.

The concept of the core, and the effect of “increasing competition” on the core, can be illustrated by a fairly simple numerical example, provided we make some simplifying assumptions. We will assume that there are just two goods: “widgets” and “money.” We will also use what I call the benefits hypothesis — that is, that utility is proportional to money. In other words, we assume that the subjective benefits a person obtains from her or his possessions can be expressed in money terms, as is done in cost-benefit analysis. In a model of this kind, “money” is a stand-in for “all other goods and services.” Thus, people derive utility from holding “money,” that is, from spending on “all other goods and services,” and what we are assuming is that the marginal utility of “all other goods and services” is (near enough) constant, so that we can use equivalent amounts of “money” or “all other goods and services” as a measure of the utility of widgets. Since money is transferable, that is very much like the “transferable utility” conception originally used by Shubik in his discussions of the core.

We will begin with an example in which there are just two persons, Jeff and Adam. At the beginning of the game, Jeff has 5 widgets but no money, and Adam has $22 but no widgets. The benefits functions are

**Table 13-1**

Jeff | Adam | |||

widgets | total benefit | marginal benefit | widgets | total benefit | marginal benefit |

1 | 10 | 10 | 1 | 9 | 9 |

2 | 15 | 5 | 2 | 13 | 4 |

3 | 18 | 3 | 3 | 15 | 2 |

4 | 21 | 3 | 4 | 16 | 1 |

5 | 22 | 1 | 5 | 16 | 0 |

Adam’s demand curve for widgets will be his marginal benefit curve, while Jeff’s supply curve will be the reverse of his marginal benefit curve. These are shown in Figure 13-1.


**Figure 13-1**

Market equilibrium comes where p=3, Q=2, i.e. Jeff sells Adam 2 widgets for a total payment of $6. The two transactors then have total benefits of

Jeff | Adam | |

widgets | 18 | 13 |

money | 6 | 16 |

total | 24 | 29 |

The total benefit divided between the two persons is $24+$29=$53.

Now we want to look at this from the point of view of the core. The “strategies” that Jeff and Adam can choose are unilateral transfers — Jeff can give up 0, 1, 2, 3, 4, or 5 widgets, and Adam can give up from 0 to 22 dollars. Presumably both would choose “zero” in a noncooperative game. The possible coalitions are a) a grand coalition of both persons, or b) two singleton coalitions in which each person goes it alone. In this case, a cooperative solution might involve a grand coalition of the two players. In fact, a cooperative solution to this game is a coordinated pair of strategies in which Jeff gives up some widgets to Adam and Adam gives up some money to Jeff. (In more ordinary terms, that is, of course, a market transaction). The core will consist of all such coordinated strategies such that a) neither person (singleton coalition) can do better by going it alone, and b) the coalition of the two cannot do better by a different coordination of their strategies. In this game, the core will be a set of transactions each of which fulfills both of those conditions.

Let us illustrate both conditions: First, suppose Jeff offers to sell Adam one widget for $10. But Adam’s marginal benefit is only nine — Adam can do better by going it alone and not buying anything. Thus, “one widget for $10” is not in the core. Second, suppose Jeff proposes to sell Adam one widget for $5. Adam’s total benefit would then be 22-5+9=26, Jeff’s 5+21=26. Both are better off, with a total benefit of 52. However, they can do better, if Jeff now sells Adam a second widget for $3.50. Adam now has benefits of 13+22-8.50=26.50, and Jeff has benefits of 18+8.50=26.50, for a total benefit of 53. Thus, a sale of just one widget is not in the core. In fact, the core will include only transactions in which exactly two widgets are sold.

We can check for this in the following way. If the “benefits hypothesis” is correct, the only transactions in the core will be transactions that maximize the total benefits for the two persons. When the two persons shift from a transaction that does not maximize benefits to one that does, they can divide the increase in benefits among themselves in the form of money, and both be better off — so a transaction that does not maximize benefits cannot satisfy condition b) above. From Table 13-1,

**Table 13-2**

Quantity Sold | benefit of widgets | money | total | |

to Jeff | to Adam | |||

0 | 22 | 0 | 22 | 44 |

1 | 21 | 9 | 22 | 52 |

2 | 18 | 13 | 22 | 53 |

3 | 15 | 15 | 22 | 52 |

4 | 10 | 16 | 22 | 48 |

5 | 0 | 16 | 22 | 38 |

and we see that a trade of 2 maximizes total benefits.
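Under the benefits hypothesis, the check that a two-widget trade maximizes total benefits is a short search over Table 13-1. This sketch is ours; the dictionaries encode the total-benefit columns of Table 13-1, and a holding of zero widgets is assumed to yield zero benefit.

```python
# Total benefit of holding n widgets (Table 13-1); money simply changes hands.
JEFF = {0: 0, 1: 10, 2: 15, 3: 18, 4: 21, 5: 22}
ADAM = {0: 0, 1: 9, 2: 13, 3: 15, 4: 16, 5: 16}
MONEY = 22  # total money in the game

def total_benefit(q):
    """Total benefit when Jeff sells q of his 5 widgets to Adam."""
    return JEFF[5 - q] + ADAM[q] + MONEY

for q in range(6):
    print(q, total_benefit(q))        # reproduces the totals column of Table 13-2
best = max(range(6), key=total_benefit)
print(best)  # 2: a trade of two widgets maximizes total benefits, at 53
```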

But we have not figured out the price at which the two units will be sold. This is not necessarily the competitive “supply-and-demand” price, since the two traders are both monopolists and one may successfully hold out for a better-than-competitive price.

Here are some examples:

Quantity Sold | Total Payment | Total Benefits | |

Jeff’s | Adam’s | ||

2 | 12 | 18+12=30 | 22-12+13=23 |

2 | 5 | 18+5=23 | 22-5+13=30 |

2 | 8 | 18+8=28 | 22-8+13=27 |

What all of these transactions have in common is that the total benefits are maximized — at 53 — but the benefits are distributed in very different ways between the two traders. All the same, each trader does no worse than the 22 of benefits he can have without trading at all. Thus each of these transactions is in the core.

It will be clear, then, that there are a wide range of transactions in the core of this two-person game. We may visualize the core in a diagram with the benefits to Jeff on the horizontal axis and benefits to Adam on the vertical axis. The core then is the line segment ab. Algebraically, it is the line BA=53-BJ, where BA means “Adam’s benefits” and BJ means “Jeff’s benefits,” and the line is bounded by BA>=22 and BJ>=22. The competitive equilibrium is at C.
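The two conditions defining the core of this two-person game can be checked mechanically. The sketch below is ours (the function name is hypothetical); it reproduces the examples discussed above.

```python
# Benefit schedules from Table 13-1; zero widgets assumed to yield zero benefit.
JEFF = {0: 0, 1: 10, 2: 15, 3: 18, 4: 21, 5: 22}
ADAM = {0: 0, 1: 9, 2: 13, 3: 15, 4: 16, 5: 16}

def in_core(q, p):
    """Is the trade 'q widgets for $p' in the core of the two-person game?"""
    b_jeff = JEFF[5 - q] + p             # Jeff's benefits after selling q for $p
    b_adam = ADAM[q] + 22 - p            # Adam's benefits after buying
    if b_jeff < 22 or b_adam < 22:       # condition a): someone prefers going it alone
        return False
    # condition b): the pair can do better unless total benefits are maximal
    best_total = max(JEFF[5 - k] + ADAM[k] + 22 for k in range(6))
    return b_jeff + b_adam == best_total

print(in_core(2, 12), in_core(2, 5), in_core(2, 8))  # True True True (the examples above)
print(in_core(1, 10), in_core(1, 5))                 # False False: one-widget trades are not in the core
```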


**Figure 13-2 **

The large size of the core is something of a problem. The cooperative solution must be one of the transactions in the core, but which one? In the two-person game, there is just no answer. The “supply-and-demand” approach does give a definite answer, shown as point C in the diagram. According to the supply-and-demand story, this equilibrium comes about because there are many buyers and many sellers. In our example, instead, we have just one of each, a bilateral monopoly. That would seem to be the problem: the core is large because the number of buyers and sellers is small.

So what happens if we allow the number of buyers and sellers to increase until it is very large? To keep things simple, we will continue to suppose that there are just two kinds of people — jeffs and adams — but we will consider a sequence of games with 2, 3, …, 10, …, 100,… adams and an equal number of jeffs and see what happens to the core of these games as the number of traders gets large.

First, suppose that there are just two jeffs and two adams. Each jeff and each adam has just the same endowment and benefit function as before.

What coalitions are possible in this larger economy? There could be two one-to-one coalitions of a jeff and an adam. Two jeffs or two adams could, in principle, form a coalition; but since they would have nothing to exchange, there would be little point in it. There could also be coalitions of two jeffs and an adam, two adams and a jeff, or a grand coalition of both jeffs and both adams.

We want to show that this bigger game has a smaller core. There are some transactions in the core of the first game that are not in this one.

Here is an example: In the 2-person game, an exchange of 12 dollars for 2 widgets is in the core. But it is not in the core of this game. At an exchange of 12 for 2, each adam gets total benefits of 23, each jeff of 30. Suppose then that a jeff forms a coalition with 2 adams, so that the jeff sells each adam one widget for $7. The jeff gets total benefits of 18+7+7=32, and so is better off. Each adam gets benefits of 15+9=24, and so is better off. This three-person coalition — which could not have been formed in the two-person game — “dominates” the 12-for-2 allocation and so the 12-for-2 allocation is not in the core of the 4-person game. (Of course, the other jeff is out in the cold, but that’s his look-out — the three-person coalition are better off. But, in fact, we are not saying that the three-person coalition is in the core either. It probably isn’t, since the odd jeff out is likely to make an offer that would dominate this one).

This is illustrated by the diagram in Figure 13-3. Line segment de shows the trade-off between benefits to the jeffs and the adams in a 3-person coalition. It means that, from any point on line segment fb, a shift to a 3-person coalition makes it possible to move to the northwest — making all members of the coalition better off — to a point on fe. Thus all of the allocations on fb are dominated, and not in the core of the 4-person game.


**Figure 13-3 **

Here is another example: in the two-person game, an exchange of two widgets for five dollars is in the core. Again, it will not be in the core of a four-person game. Each jeff gets benefits of 23 and each adam of 30. Now, suppose an adam proposes a coalition with both jeffs. The adam will pay each jeff $2.40 for one widget. The adam then has 30.20 of benefits and so is better off. Each jeff gets 23.40 of benefits and is also better off. Thus the one-adam-and-two-jeffs coalition dominates the 2-for-5 coalition, which is no longer in the core. Figure 13-4 illustrates the situation we now have. The benefit trade-off for a 2-jeff-one-adam coalition is shown by line gj. Every allocation on ab to the left of h is dominated. Putting everything together, we see that allocations on ab to the left of h and to the right of f are dominated by 3-person coalitions, but the 3-person coalitions are dominated by the 2-person coalitions between h and f. (Four-person coalitions function like pairs of two-person coalitions, adding nothing to the game).


**Figure 13-4**

We can now see the core of the four-person game in Figure 13-4. It is shown by the line segment hf. It is limited by BA>=27, BJ>=24. The core of the four-person game is part of the core of the two-person game, but it is a smaller part, because the four-person game admits of coalitions which cannot be formed in the two-person game. Some of these coalitions dominate some of the coalitions in the core of the smaller game. This illustrates an important point about the core. The bigger the game, the greater the variety of coalitions that can be formed. The more coalitions, often, the smaller the core.

Let us pursue this line of reasoning one more step, considering a six-person game with three jeffs and three adams. We notice that a trade of two widgets for $8 is in the core of the four-person game, and we will see that it is not in the core of the 6-person game. Beginning from the 2-for-8 allocation, a coalition of 2 jeffs and 3 adams is proposed, such that each jeff gives up three widgets and each adam buys two, at a price of $3.80 each. The results are shown in Table 13-3.

**Table 13-3**

Type | old allocation | new allocation |

Type | old allocation ||| new allocation |||

| widgets | money | total benefit | widgets | money | total benefit |

Jeff | 3 | 8 | 26 | 2 | 11.40 | 26.40 |

Adam | 2 | 14 | 27 | 2 | 14.40 | 27.40 |

We see that both the adams and the jeffs within the coalition are better off, so the two-and-three coalition dominates the two-for-eight bilateral trade. Thus the two-for-eight trade is not in the core of the six-person game. What is in it? This is shown by Figure 13-5. As before, the line segment ab is the core of the two-person game and line segment gj is the benefits trade-off for the coalition of two jeffs and one adam. Segment kl is the benefits trade-off for the coalition of two jeffs and three adams. We see that every point on ab except point h is dominated, either by a two-jeff one-adam coalition or by a two-jeff three-adam coalition. The core of the six-player game is exactly one allocation: the one at point h. And this is the competitive equilibrium! No coalition can do better than it.
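The arithmetic behind Table 13-3 can be verified directly. This sketch is ours; it assumes, as in the table, a price of $3.80 per widget, and the variable names are our own.

```python
# Benefit of holding n widgets (Table 13-1); zero widgets yield zero benefit.
JEFF = {0: 0, 1: 10, 2: 15, 3: 18, 4: 21, 5: 22}
ADAM = {0: 0, 1: 9, 2: 13, 3: 15, 4: 16, 5: 16}
price = 3.80

# Old allocation: the bilateral two-widgets-for-$8 trade.
old_jeff = JEFF[3] + 8           # keeps 3 widgets: 18 + 8 = 26
old_adam = ADAM[2] + 22 - 8      # buys 2 widgets: 13 + 14 = 27

# New coalition of 2 jeffs and 3 adams: each jeff sells 3, each adam buys 2.
new_jeff = JEFF[2] + 3 * price        # 15 + 11.40 = 26.40
new_adam = ADAM[2] + 22 - 2 * price   # 13 + 14.40 = 27.40

print(new_jeff > old_jeff and new_adam > old_adam)  # True: the defection dominates
```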


**Figure 13-5 **

If we were to look at 8, 10, 100, 1000, or 1,000,000 player games, we would find the same core. This series of examples illustrates a key point about the core of an exchange game: as the number of participants (of each type) increases without limit, the core of the game shrinks down to the competitive equilibrium. This result can be generalized in various ways. First, we should observe that in some games any finite number of players leaves more than one allocation in the core. Our game has been simplified by only allowing players to trade in whole numbers of widgets; that is one reason why the core shrinks to the competitive equilibrium so soon in our example. We may also eliminate the benefits hypothesis, assuming instead that utility is nontransferable and not proportional to money. We can also allow for more than two kinds of players, and get rid of the “types” assumption completely, at the cost of much greater mathematical complexity. But the general idea is simple enough. With more participants, more kinds of coalitions can form, and some of those coalitions dominate coalitions that could form in smaller games. Thus a bigger game will have a smaller core; in that sense “more competition limits monopoly power.” But (in a market game) the supply-and-demand equilibrium is the one allocation that is always in the core. And this provides us with a new understanding of the unique role of the market equilibrium.

**The Queuing Game and the Core**

We have seen that the market game has a non-empty core, but some very important games have empty cores. From the mathematical point of view, this seems to be a difficulty — the problem has no solution. But from the economic point of view it may be an important diagnostic point. The University of Chicago economist Lester Telser has argued that empty-core games provide a rationale for government regulation of markets. The core is empty because efficient allocations are dominated — people can defect to coalitions that can promise them more than they can get from an efficient allocation. What government regulation does in such a case is to prohibit some of the coalitions. Ruling out some coalitions by means of regulation may allow an efficient coalition to form and to remain stable — the coalitions through which it might be dominated are prohibited by regulation.

In another segment, we have looked at a game that has an inefficient noncooperative equilibrium: the queuing game. We shall see that the Queuing Game also is an empty-core game. Recalling that every allocation in the core is Pareto Optimal, and that Pareto Optimality in this game presupposes a grand coalition of all players to refrain from starting a queue, it will suffice to show that the grand coalition is unstable against a defection of a single agent to form a singleton coalition and form a one-person queue.

It is easy to see that the defector will be better off if the rump coalition (the five remaining in a coalition not to queue) continues its strategy of not contesting for any place in line. Then the defector gets a net payoff of 18 with certainty, better than the average payoff of 12.5 she would get with the grand coalition — and this observation is just a repetition of the argument that the grand coalition is not a Nash equilibrium. But the rump coalition need not simply continue with its policy of noncontesting. For example, it can contest the first position in line, while continuing the agreement to allocate the balance at random. This would leave the aggressor with a one-sixth chance of the first place, but she can do no worse than second, so her expected payoff would then be (1/6)(18)+(5/6)(15)=15.5. She will not be deterred from defecting by this possible strategy response from the rump coalition.

That is not the only strategy response open to the rump coalition. Table 13-4 presents a range of strategy alternatives available to the rump coalition:

**Table 13-4**

contest the first | payoff to defector |
average payoff to rump coalition |

no places | 18 | 11 |

one place | 15.5 | 11.167 |

two places | 13.5 | 11.233 |

three places | 12 | 11.2 |

four places | 11.167 | 11.167 |

five places | 11.167 | 11.167 |

These are not the only strategy options available to the rump coalition. For example, the rump coalition might choose to contest just the first and third positions in line, leaving the second uncontested. But this would serve only to assure the defector of a better outcome than she could otherwise be sure of, making the members of the rump coalition worse off. Thus, the rump coalition will never choose a strategy like that, and it cannot be relevant to the defector’s strategy. Conversely, we see that the rump coalition’s best response to the defection is to contest the first two positions in line, but no more — leaving the defector better off as a result of defecting, with an expected payoff of 13.5 rather than 12.5. It follows that the grand coalition is unstable under recontracting.

To illustrate the reasoning that underlies the table, let us compute the payoffs for the case in which the rump coalition contests the first two positions in line, the optimal response. 1) The aggressor has a one-sixth chance of first place in line for a payoff of 18, a one-sixth chance of second place for 15, and a four-sixths chance of being third, for 12. (The aggressor must still stand in line to be sure of third place, rather than worse, although that position is uncontested). Thus the expected payoff is 18/6+15/6+4*12/6, or 13.5. 2a) With one chance in six, the aggressor is first, leaving the rump coalition to allocate among themselves rewards of 15 (second place in queue), 14, 11, 8, and 5 (third through last places without standing in the queue). Each of these outcomes has a conditional probability of one-fifth for each of the five individuals in the rump coalition. This accounts for expectations of one in thirty (one-sixth times one-fifth) of each of those rewards. 2b) With one chance in six, the aggressor is second, and the rump coalition allocate among themselves, at random, payoffs of 18 (first place in queue), 14, 11, 8 and 5 (as before), accounting for yet further expectations of one in thirty of each of these rewards. 2c) With four chances in six, the aggressor is third — without contest — and the members of the rump coalition allocate among themselves, at random, rewards of 18, 15 (the first two places in the queue), 11, 8, and 5 (last three places without queuing). 2d) Thus the expected payoff of a member of the rump coalition is (15+14+11+8+5)/30+(18+14+11+8+5)/30+4*(18+15+11+8+5)/30, or 11.233.
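The computation in the paragraph above can be replicated with exact fractions. This is our own sketch of the arithmetic, using Python's `fractions` module.

```python
from fractions import Fraction as F

# Defector: 1/6 chance of 1st place (18), 1/6 of 2nd (15), else 3rd (12, still queuing).
defector = F(1, 6) * 18 + F(1, 6) * 15 + F(4, 6) * 12
print(float(defector))  # 13.5

# Rump member: average over the three cases described in 2a)-2c),
# each reward carrying probability 1/30 (or 4/30 for case c).
case_a = [15, 14, 11, 8, 5]   # defector lands first
case_b = [18, 14, 11, 8, 5]   # defector lands second
case_c = [18, 15, 11, 8, 5]   # defector is third (4 chances in 6)
rump = (sum(case_a) + sum(case_b) + 4 * sum(case_c)) * F(1, 30)
print(round(float(rump), 3))  # 11.233
```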

**Game Theory: An Introductory Sketch **

**Zero-Sum Games**

By the time Tucker invented the Prisoners’ Dilemma, Game Theory was already a going concern. But most of the earlier work had focused on a special class of games: zero-sum games.

In his earliest work, von Neumann made a striking discovery. He found that if poker players maximize their rewards, they do so by bluffing; and, more generally, that in many games it pays to be unpredictable. This was not qualitatively new, of course — baseball pitchers were throwing change-up pitches before von Neumann wrote about mixed strategies. But von Neumann’s discovery was a bit more than just that. He discovered a unique and unequivocal answer to the question “how can I maximize my rewards in this sort of game?” without any markets, prices, property rights, or other institutions in the picture. It was a very major extension of the concept of absolute rationality in neoclassical economics. But von Neumann had bought his discovery at a price. The price was a strong simplifying assumption: von Neumann’s discovery applied only to zero-sum games.

For example, consider the children’s game of “Matching Pennies.” In this game, the two players agree that one will be “even” and the other will be “odd.” Each one then shows a penny. The pennies are shown simultaneously, and each player may show either a head or a tail. If both show the same side, then “even” wins the penny from “odd;” or if they show different sides, “odd” wins the penny from “even”. Here is the payoff table for the game.

**Table 4-1**

Odd | |||

Head | Tail | ||

Even | Head | 1,-1 | -1,1 |

Tail | -1,1 | 1,-1 |

If we add up the payoffs in each cell, we find 1-1=0. This is a “zero-sum game.”

**DEFINITION: Zero-Sum game **If we add up the wins and losses in a game, treating losses as negatives, and we find that the sum is zero for each set of strategies chosen, then the game is a “zero-sum game.”

In less formal terms, a zero-sum game is a game in which one player’s winnings equal the other player’s losses. Do notice that the definition requires a zero sum for every set of strategies. If there is even one strategy set for which the sum differs from zero, then the game is not zero sum.
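The definition can be checked mechanically for Table 4-1. A minimal sketch (the dictionary layout is ours):

```python
# Payoffs in Matching Pennies, keyed by (Even's choice, Odd's choice).
payoffs = {
    ("Head", "Head"): (1, -1),
    ("Head", "Tail"): (-1, 1),
    ("Tail", "Head"): (-1, 1),
    ("Tail", "Tail"): (1, -1),
}
# Zero-sum requires the payoffs to sum to zero in EVERY cell.
print(all(even + odd == 0 for even, odd in payoffs.values()))  # True: zero-sum
```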

**Another Example**

Here is another example of a zero-sum game. It is a very simplified model of price competition. Like Antoine Augustin Cournot (writing in the 1830s), we will think of two companies that sell mineral water. Each company has a fixed cost of $5000 per period, regardless of whether it sells anything or not. We will call the companies Perrier and Apollinaris, just to take two names at random.

The two companies are competing for the same market and each firm must choose a high price ($2 per bottle) or a low price ($1 per bottle). Here are the rules of the game:

1) At a price of $2, 5000 bottles can be sold for a total revenue of $10000.

2) At a price of $1, 10000 bottles can be sold for a total revenue of $10000.

3) If both companies charge the same price, they split the sales evenly between them.

4) If one company charges a higher price, the company with the lower price sells the whole amount and the company with the higher price sells nothing.

5) Payoffs are profits — revenue minus the $5000 fixed cost.

Here is the payoff table for these two companies:

**Table 4-2**

|                 |          | **Perrier** |            |
|-----------------|----------|-------------|------------|
|                 |          | Price=$1    | Price=$2   |
| **Apollinaris** | Price=$1 | 0,0         | 5000,-5000 |
|                 | Price=$2 | -5000,5000  | 0,0        |
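The payoffs in Table 4-2 can be derived directly from rules 1)–5) rather than stated by hand. The following Python sketch does just that; the function names and data layout are ours, but every number comes from the rules above:

```python
FIXED_COST = 5000

def revenue(price):
    # Rules 1) and 2): total market revenue is $10000 at either price.
    return {1: 10000, 2: 10000}[price]

def profits(p_a, p_p):
    """Profits (revenue minus fixed cost) for Apollinaris and Perrier."""
    if p_a == p_p:                    # Rule 3): equal prices split sales evenly
        rev_a = rev_p = revenue(p_a) / 2
    elif p_a < p_p:                   # Rule 4): the lower price takes the whole market
        rev_a, rev_p = revenue(p_a), 0
    else:
        rev_a, rev_p = 0, revenue(p_p)
    return rev_a - FIXED_COST, rev_p - FIXED_COST   # Rule 5)

for p_a in (1, 2):
    for p_p in (1, 2):
        print(f"Apollinaris=${p_a}, Perrier=${p_p}: {profits(p_a, p_p)}")
```

Running this reproduces the four cells of Table 4-2, which confirms that the table is consistent with the stated rules.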

(Verify for yourself that this is a zero-sum game.) For two-person zero-sum games, there is a clear concept of a solution. The solution to the game is the maximin criterion — that is, each player chooses the strategy that maximizes her minimum payoff. In this game, Apollinaris’ minimum payoff at a price of $1 is zero, and at a price of $2 it is -5000, so the $1 price maximizes the minimum payoff. The same reasoning applies to Perrier, so both will choose the $1 price. Here is the reasoning behind the maximin solution: Apollinaris knows that whatever she loses, Perrier gains; so whatever strategy she chooses, Perrier will choose the strategy that gives the minimum payoff for that row. Again, Perrier reasons conversely.

**SOLUTION: Maximin criterion** For a two-person, zero-sum game it is rational for each player to choose the strategy that maximizes the minimum payoff, and the pair of strategies and payoffs such that each player maximizes her minimum payoff is the “solution to the game.”
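The maximin reasoning is mechanical enough to compute. In this Python sketch (our own illustration), each player takes the worst payoff each of her strategies can yield and then picks the strategy whose worst case is best; the payoffs are those of Table 4-2:

```python
# Payoff table from Table 4-2, indexed by (Apollinaris's price, Perrier's price),
# with entries (Apollinaris's profit, Perrier's profit).
table = {
    (1, 1): (0, 0),
    (1, 2): (5000, -5000),
    (2, 1): (-5000, 5000),
    (2, 2): (0, 0),
}
prices = (1, 2)

# Apollinaris: for each of her prices, find the minimum payoff over Perrier's
# replies, then choose the price that maximizes that minimum.
apollinaris = max(prices, key=lambda pa: min(table[(pa, pp)][0] for pp in prices))
# Perrier reasons symmetrically on the second payoff in each cell.
perrier = max(prices, key=lambda pp: min(table[(pa, pp)][1] for pa in prices))

print(apollinaris, perrier)  # 1 1
```

Both players land on the $1 price, reproducing the maximin solution derived in the text.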

**Mixed Strategies**

Now let’s look back at the game of matching pennies. It appears that this game does not have a unique solution. The minimum payoff for each of the two strategies is the same: -1. But this is not the whole story. This game can have more than two strategies. In addition to the two obvious strategies, head and tail, a player can “randomize” her strategy by offering either a head or a tail, at random, with specific probabilities. Such a randomized strategy is called a “mixed strategy.” The obvious two strategies, heads and tails, are called “pure strategies.” There are infinitely many mixed strategies corresponding to the infinitely many ways probabilities can be assigned to the two pure strategies.

**DEFINITION: Mixed strategy** If a player in a game chooses among two or more strategies at random according to specific probabilities, this choice is called a “mixed strategy.”

The game of matching pennies has a solution in mixed strategies, and it is to offer heads or tails at random with probabilities 0.5 each way. Here is the reasoning: if odd offers heads with any probability greater than 0.5, then even can have better than even odds of winning by offering heads with probability 1. On the other hand, if odd offers heads with any probability less than 0.5, then even can have better than even odds of winning by offering tails with probability 1. The only way odd can get even odds of winning is to choose a randomized strategy with probability 0.5, and there is no way odd can get better than even odds. The 0.5 probability maximizes the minimum payoff over all pure *or mixed* strategies. And even can reason the same way (reversing heads and tails) and come to the same conclusion, so both players choose 0.5.
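The argument that 0.5 maximizes odd’s minimum payoff can be checked numerically. In this Python sketch (the function names and the grid of probabilities are our own), we compute odd’s expected payoff against each of even’s pure best responses and take the worst case:

```python
# Odd's expected payoff in Matching Pennies when odd shows heads with
# probability p: on a match even wins (odd gets -1), on a mismatch odd wins (+1).
def expected_vs_even_heads(p):
    return p * (-1) + (1 - p) * 1

def expected_vs_even_tails(p):
    return p * 1 + (1 - p) * (-1)

def worst_case(p):
    """Odd's guaranteed expected payoff against a best-responding even."""
    return min(expected_vs_even_heads(p), expected_vs_even_tails(p))

# Scan a grid of probabilities: the worst case peaks at p = 0.5, where it is 0.
best_p = max((i / 100 for i in range(101)), key=worst_case)
print(best_p, worst_case(best_p))  # 0.5 0.0
```

Any deviation from 0.5 lets the opponent push odd’s expected payoff below zero, which is exactly the reasoning given in the text.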

**Von Neumann’s Discovery**

We can now say more exactly what von Neumann’s discovery was. Von Neumann showed that every two-person zero-sum game had a maximin solution, in mixed if not in pure strategies. This was an important insight, but it probably seemed more important at the time than it does now. In limiting his analysis to two-person zero-sum games, von Neumann had made a strong simplifying assumption. Von Neumann was a mathematician, and he had used the mathematician’s approach: take a simple example, solve it, and then try to extend the solution to the more complex cases. But the mathematician’s approach did not work as well in game theory as it does in some other cases. Von Neumann’s solution applies unequivocally only to “games” that share this zero-sum property. Because of this assumption, von Neumann’s brilliant solution was and is only applicable to a small proportion of all “games,” serious and nonserious. Arms races, for example, are not zero-sum games. Both participants can and often do lose. The Prisoners’ Dilemma, as we have already noticed, is not a zero-sum game, and that is the source of a major part of its interest. Economic competition is not a zero-sum game. It is often possible for most players to win, and in principle, economics is a win-win game. Environmental pollution and the overexploitation of resources, again, tend to be lose-lose games: it is hard to find a winner in the destruction of most of the world’s ocean fisheries in the past generation. Thus, von Neumann’s solution does not — without further work — apply to these serious interactions.

The serious interactions are instances of “nonconstant sum games,” since the winnings and losses may add up differently depending on the strategies the participants choose. It is possible, for example, for rival nations to choose mutual disarmament, save the cost of weapons, and both be better off as a result — so the sum of the winnings is greater in that case. In economic competition, increasing division of labor, specialization, investment, and improved coordination can increase “the size of the pie,” leading to “that universal opulence which extends itself to the lowest ranks of the people,” in the words of Adam Smith. In cases of environmental pollution, the benefits to each individual from the polluting activity are so swamped by others’ losses from polluting activity that all can lose — as we have often observed.

Poker and baseball are zero-sum games. It begins to seem that the only zero-sum games are literal games that human beings have invented — and made them zero-sum — for our own amusement. “Games” that are in some sense natural are non-constant sum games. And even poker and baseball are somewhat unclear cases. A “friendly” poker game is zero-sum, but in a casino game, the house takes a proportion of the pot, so the sum of the winnings is less the more the players bet. And even in the friendly game, we are considering only the money payoffs — not the thrill of gambling and the pleasure of the social event, without which presumably the players would not play. When we take those rewards into account, even gambling games are not really zero-sum.

Von Neumann and Morgenstern hoped to extend their analysis to non-constant sum games with many participants, and they proposed an analysis of these games. However, the problem was much more difficult, and while a number of solutions have been proposed, there is no one generally accepted mathematical solution of nonconstant sum games. To put it a little differently, there seems to be no clear answer to the question, “Just what is rational in a non-constant sum game?” The well-defined rational policy in neoclassical economics — maximization of reward — is extended to zero-sum games but not to the more realistic category of non-constant sum games.