The Equivalence of Evolutionary Games and Distributed Monte Carlo Learning
This paper establishes a tight relationship between evolutionary game theory and distributed intelligence models. After reviewing existing theories of replicator dynamics and distributed Monte Carlo learning, we formulate and prove the equivalence between the two models. The relationship is demonstrated not only theoretically but also through simulations of both models, using a simple symmetric zero-sum game as an example. As a consequence, we verify that seemingly chaotic macro dynamics generated by distributed micro-decisions can be explained with theoretical models.
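To make the first of the two models concrete, the following is a minimal sketch of discrete-time (forward-Euler) replicator dynamics on rock-paper-scissors, a standard simple symmetric zero-sum game. The payoff matrix, step size, and initial population mix are illustrative assumptions, not taken from the paper itself.

```python
import numpy as np

# Payoff matrix of rock-paper-scissors, a simple symmetric zero-sum
# game (an assumed example; the paper's specific game may differ).
A = np.array([[ 0., -1.,  1.],
              [ 1.,  0., -1.],
              [-1.,  1.,  0.]])

def replicator_step(x, dt=0.01):
    """One Euler step of the replicator dynamics dx_i/dt = x_i (f_i - f_bar)."""
    f = A @ x          # fitness of each pure strategy against the population
    f_bar = x @ f      # average population fitness (zero at the interior rest point)
    return x + dt * x * (f - f_bar)

x = np.array([0.5, 0.3, 0.2])  # assumed initial population shares
for _ in range(1000):
    x = replicator_step(x)
print(x)
```

Note that each step preserves the simplex constraint exactly (the increments sum to zero), so `x` remains a valid population distribution; for this zero-sum game the interior trajectory cycles around the mixed equilibrium (1/3, 1/3, 1/3) rather than converging to it, which is the kind of macro-level dynamics the paper compares against distributed Monte Carlo learning.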