IDEAS home Printed from https://ideas.repec.org/a/eee/mateco/v46y2010i1p86-98.html

A note on adjusted replicator dynamics in iterated games

Author

Listed:
  • Alanyali, Murat

Abstract

We establish how a rich collection of evolutionary games can arise as asymptotically exact descriptions of player strategies in iterated games. We consider arbitrary normal-form games that are iteratively played by players who observe their own payoffs after each round. Each player's strategy is assumed to depend only on the player's past actions and past payoffs. We study a class of autonomous reinforcement-learning rules for such players and show that variants of the adjusted replicator dynamics are asymptotically exact approximations of player strategies for small values of a step-size parameter adopted in learning. We also obtain a convergence result that identifies when a stable equilibrium of the limit dynamics characterizes equilibrium behavior of player strategies.
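The small-step-size limit described in the abstract can be illustrated with a Cross-type reinforcement rule, a standard payoff-based learning scheme in this literature (this is a generic sketch, not the paper's specific construction): each player reinforces its chosen action in proportion to the realized payoff, and for a small step size the resulting strategy trajectories track a replicator-type differential equation. The payoff matrices and parameter values below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 2x2 game with payoffs normalized to (0, 1];
# rows index player 1's actions, columns player 2's actions.
A = np.array([[0.8, 0.2],
              [0.3, 0.6]])   # player 1's payoffs
B = A.T                      # player 2's payoffs (illustrative choice)

def cross_learning(A, B, step=0.01, rounds=20000):
    """Cross-type reinforcement learning: each player updates its mixed
    strategy using only its own realized payoff, as in the class of
    autonomous payoff-based rules the abstract refers to."""
    x = np.array([0.5, 0.5])  # player 1's mixed strategy
    y = np.array([0.5, 0.5])  # player 2's mixed strategy
    for _ in range(rounds):
        i = rng.choice(2, p=x)          # sample actions from current
        j = rng.choice(2, p=y)          # mixed strategies
        u, v = A[i, j], B[i, j]         # realized own payoffs
        # Reinforce the chosen action; the update keeps x and y on the
        # simplex because payoffs lie in (0, 1].
        x += step * u * (np.eye(2)[i] - x)
        y += step * v * (np.eye(2)[j] - y)
    return x, y

x, y = cross_learning(A, B)
print(x, y)
```

As the step size shrinks (with the horizon rescaled accordingly), such trajectories are well approximated by replicator-type dynamics; the paper's contribution concerns rules whose limit is the *adjusted* replicator dynamics, in which growth rates are normalized by average payoff.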

Suggested Citation

  • Alanyali, Murat, 2010. "A note on adjusted replicator dynamics in iterated games," Journal of Mathematical Economics, Elsevier, vol. 46(1), pages 86-98, January.
  • Handle: RePEc:eee:mateco:v:46:y:2010:i:1:p:86-98

    Download full text from publisher

    File URL: http://www.sciencedirect.com/science/article/pii/S0304-4068(09)00081-0
    Download Restriction: Full text for ScienceDirect subscribers only

    As the access to this document is restricted, you may want to search for a different version of it.

    References listed on IDEAS

    1. Antonio J. Morales Siles, 2002. "Absolute Expediency and Imitative Behaviour," Economic Working Papers at Centro de Estudios Andaluces E2002/03, Centro de Estudios Andaluces.
    2. Erev, Ido & Roth, Alvin E, 1998. "Predicting How People Play Games: Reinforcement Learning in Experimental Games with Unique, Mixed Strategy Equilibria," American Economic Review, American Economic Association, vol. 88(4), pages 848-881, September.
    3. Laslier, Jean-Francois & Topol, Richard & Walliser, Bernard, 2001. "A Behavioral Learning Process in Games," Games and Economic Behavior, Elsevier, vol. 37(2), pages 340-366, November.
    4. Ed Hopkins, 2002. "Two Competing Models of How People Learn in Games," Econometrica, Econometric Society, vol. 70(6), pages 2141-2166, November.
5. Rustichini, Aldo, 1999. "Optimal Properties of Stimulus-Response Learning Models," Games and Economic Behavior, Elsevier, vol. 29(1-2), pages 244-273, October.
6. Schlag, Karl H., 1998. "Why Imitate, and If So, How? A Boundedly Rational Approach to Multi-armed Bandits," Journal of Economic Theory, Elsevier, vol. 78(1), pages 130-156, January.
    7. Borgers, Tilman & Sarin, Rajiv, 2000. "Naive Reinforcement Learning with Endogenous Aspirations," International Economic Review, Department of Economics, University of Pennsylvania and Osaka University Institute of Social and Economic Research Association, vol. 41(4), pages 921-950, November.
    8. Beggs, A.W., 2005. "On the convergence of reinforcement learning," Journal of Economic Theory, Elsevier, vol. 122(1), pages 1-36, May.
    9. Martin Posch, 1997. "Cycling in a stochastic learning algorithm for normal form games," Journal of Evolutionary Economics, Springer, vol. 7(2), pages 193-207.
    10. Borgers, Tilman & Sarin, Rajiv, 1997. "Learning Through Reinforcement and Replicator Dynamics," Journal of Economic Theory, Elsevier, vol. 77(1), pages 1-14, November.
    11. R. Boylan, 2010. "Continuous Approximation of Dynamical Systems with Randomly Matched Individuals," Levine's Working Paper Archive 372, David K. Levine.
12. Michel Benaïm & Jörgen W. Weibull, 2003. "Deterministic Approximation of Stochastic Evolution in Games," Econometrica, Econometric Society, vol. 71(3), pages 873-903, May.
    13. Corradi, Valentina & Sarin, Rajiv, 2000. "Continuous Approximations of Stochastic Evolutionary Game Dynamics," Journal of Economic Theory, Elsevier, vol. 94(2), pages 163-191, October.
    14. Josef Hofbauer & Karl H. Schlag, 2000. "Sophisticated imitation in cyclic games," Journal of Evolutionary Economics, Springer, vol. 10(5), pages 523-543.
    15. DellaVigna, Stefano & LiCalzi, Marco, 2001. "Learning to make risk neutral choices in a symmetric world," Mathematical Social Sciences, Elsevier, vol. 41(1), pages 19-37, January.
    16. Sarin, Rajiv & Vahid, Farshid, 1999. "Payoff Assessments without Probabilities: A Simple Dynamic Model of Choice," Games and Economic Behavior, Elsevier, vol. 28(2), pages 294-309, August.
    Full references (including those not matched with items on IDEAS)

    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. Jonathan Newton, 2018. "Evolutionary Game Theory: A Renaissance," Games, MDPI, Open Access Journal, vol. 9(2), pages 1-67, May.
    2. Izquierdo, Luis R. & Izquierdo, Segismundo S. & Gotts, Nicholas M. & Polhill, J. Gary, 2007. "Transient and asymptotic dynamics of reinforcement learning in games," Games and Economic Behavior, Elsevier, vol. 61(2), pages 259-276, November.
    3. Beggs, A.W., 2005. "On the convergence of reinforcement learning," Journal of Economic Theory, Elsevier, vol. 122(1), pages 1-36, May.
    4. Oyarzun, Carlos & Sarin, Rajiv, 2013. "Learning and risk aversion," Journal of Economic Theory, Elsevier, vol. 148(1), pages 196-225.
    5. Mengel, Friederike, 2012. "Learning across games," Games and Economic Behavior, Elsevier, vol. 74(2), pages 601-619.
    6. Hopkins, Ed & Posch, Martin, 2005. "Attainability of boundary points under reinforcement learning," Games and Economic Behavior, Elsevier, vol. 53(1), pages 110-125, October.
    7. Ianni, Antonella, 2014. "Learning strict Nash equilibria through reinforcement," Journal of Mathematical Economics, Elsevier, vol. 50(C), pages 148-155.
    8. Naoki Funai, 2019. "Convergence results on stochastic adaptive learning," Economic Theory, Springer;Society for the Advancement of Economic Theory (SAET), vol. 68(4), pages 907-934, November.
    9. Mertikopoulos, Panayotis & Sandholm, William H., 2018. "Riemannian game dynamics," Journal of Economic Theory, Elsevier, vol. 177(C), pages 315-364.
    10. Erik Mohlin & Robert Ostling & Joseph Tao-yi Wang, 2014. "Learning by Imitation in Games: Theory, Field, and Laboratory," Economics Series Working Papers 734, University of Oxford, Department of Economics.
    11. Panayotis Mertikopoulos & William H. Sandholm, 2016. "Learning in Games via Reinforcement and Regularization," Mathematics of Operations Research, INFORMS, vol. 41(4), pages 1297-1324, November.
    12. Hopkins, Ed, 2007. "Adaptive learning models of consumer behavior," Journal of Economic Behavior & Organization, Elsevier, vol. 64(3-4), pages 348-368.
    13. Maxwell Pak & Bing Xu, 2016. "Generalized reinforcement learning in perfect-information games," International Journal of Game Theory, Springer;Game Theory Society, vol. 45(4), pages 985-1011, November.
    14. Tanabe, Yasuo, 2006. "The propagation of chaos for interacting individuals in a large population," Mathematical Social Sciences, Elsevier, vol. 51(2), pages 125-152, March.
    15. Ed Hopkins, 2002. "Adaptive Learning Models of Consumer Behaviour (first version)," Edinburgh School of Economics Discussion Paper Series 80, Edinburgh School of Economics, University of Edinburgh.
    16. Schuster, Stephan, 2010. "Network Formation with Adaptive Agents," MPRA Paper 27388, University Library of Munich, Germany.
    17. Mario Bravo & Mathieu Faure, 2013. "Reinforcement Learning with Restrictions on the Action Set," AMSE Working Papers 1335, Aix-Marseille School of Economics, France, revised 01 Jul 2013.
    18. Mario Bravo, 2016. "An Adjusted Payoff-Based Procedure for Normal Form Games," Mathematics of Operations Research, INFORMS, vol. 41(4), pages 1469-1483, November.
    19. Dana Heller, 2000. "Parametric Adaptive Learning," Econometric Society World Congress 2000 Contributed Papers 1496, Econometric Society.
    20. Cominetti, Roberto & Melo, Emerson & Sorin, Sylvain, 2010. "A payoff-based learning procedure and its application to traffic games," Games and Economic Behavior, Elsevier, vol. 70(1), pages 71-83, September.

    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:eee:mateco:v:46:y:2010:i:1:p:86-98. See general information about how to correct material in RePEc.


    If you have authored this item and are not yet registered with RePEc, we encourage you to register here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.

    If CitEc recognized a bibliographic reference but did not link an item in RePEc to it, you can help with this form.

    If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

    For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: Catherine Liu (email available below). General contact details of provider: http://www.elsevier.com/locate/jmateco .

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service hosted by the Research Division of the Federal Reserve Bank of St. Louis . RePEc uses bibliographic data supplied by the respective publishers.