
A note on adjusted replicator dynamics in iterated games

Author

Listed:
  • Alanyali, Murat

Abstract

We establish how a rich collection of evolutionary game dynamics can arise as asymptotically exact descriptions of player strategies in iterated games. We consider arbitrary normal-form games that are played repeatedly by players who observe their own payoffs after each round. Each player's strategy is assumed to depend only on that player's past actions and past payoffs. We study a class of autonomous reinforcement-learning rules for such players and show that variants of the adjusted replicator dynamics are asymptotically exact approximations of player strategies for small values of a step-size parameter adopted in learning. We also obtain a convergence result that identifies when a stable equilibrium of the limit dynamics characterizes the equilibrium behavior of player strategies.
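
The sketch below illustrates the kind of approximation the abstract describes. It is a toy example built on assumptions, not the paper's construction: the 2x2 payoff matrix, the step size, the Cross-type reinforcement rule, and the Euler discretization are all chosen for illustration. The expected per-round motion of a Cross-type rule follows the standard replicator field (cf. Borgers and Sarin, 1997, in the reference list), while the adjusted replicator dynamics rescales that field by the current average payoff, so the two trace the same orbits at different speeds.

    # Illustrative sketch only: numbers and update rules are assumptions, not taken from the paper.
    import numpy as np

    A = np.array([[0.8, 0.2],
                  [0.3, 0.6]])   # hypothetical payoff matrix, entries in (0, 1]
    eps = 0.01                   # small step-size parameter of the learning rule
    rng = np.random.default_rng(0)

    def cross_rule(x0, rounds=20000):
        # One player's mixed strategy under a Cross-type reinforcement rule:
        # after playing action i and receiving payoff u, shift probability
        # mass eps*u toward i. Only the player's own realized payoff is used.
        x = np.array(x0, dtype=float)
        for _ in range(rounds):
            i = rng.choice(2, p=x)   # own action this round
            j = rng.choice(2, p=x)   # opponent drawn from the same mix (a simplifying assumption)
            u = A[i, j]              # realized payoff, in (0, 1]
            x = (1.0 - eps * u) * x
            x[i] += eps * u
            x /= x.sum()             # guard against floating-point drift
        return x

    def adjusted_replicator(x0, dt=0.01, steps=20000):
        # Euler scheme for the adjusted replicator dynamics
        #   dx_i/dt = x_i * ((A x)_i - x.A x) / (x.A x),
        # well defined here because all payoffs are positive.
        x = np.array(x0, dtype=float)
        for _ in range(steps):
            f = A @ x                # payoff of each pure action against mix x
            avg = x @ f              # current average payoff
            x = x + dt * x * (f - avg) / avg
        return x

    x0 = [0.5, 0.5]
    print("Cross-type reinforcement after 20000 rounds:", cross_rule(x0))
    print("adjusted replicator flow after 20000 steps :", adjusted_replicator(x0))

In this hypothetical coordination game both paths typically settle near the pure strategy on the first action, a stable rest point of the limit dynamics, which mirrors the convergence statement in the abstract.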

Suggested Citation

  • Alanyali, Murat, 2010. "A note on adjusted replicator dynamics in iterated games," Journal of Mathematical Economics, Elsevier, vol. 46(1), pages 86-98, January.
  • Handle: RePEc:eee:mateco:v:46:y:2010:i:1:p:86-98

    Download full text from publisher

    File URL: http://www.sciencedirect.com/science/article/pii/S0304-4068(09)00081-0
    Download Restriction: Full text for ScienceDirect subscribers only

    As access to this document is restricted, you may want to search for a different version of it.

    References listed on IDEAS

    1. Martin Posch, 1997. "Cycling in a stochastic learning algorithm for normal form games," Journal of Evolutionary Economics, Springer, vol. 7(2), pages 193-207.
    2. Michel Benaïm & Jörgen W. Weibull, 2003. "Deterministic Approximation of Stochastic Evolution in Games," Econometrica, Econometric Society, vol. 71(3), pages 873-903, May.
    3. Borgers, Tilman & Sarin, Rajiv, 1997. "Learning Through Reinforcement and Replicator Dynamics," Journal of Economic Theory, Elsevier, vol. 77(1), pages 1-14, November.
    4. Ed Hopkins, 2002. "Two Competing Models of How People Learn in Games," Econometrica, Econometric Society, vol. 70(6), pages 2141-2166, November.
    5. Josef Hofbauer & Karl H. Schlag, 2000. "Sophisticated imitation in cyclic games," Journal of Evolutionary Economics, Springer, vol. 10(5), pages 523-543.
    6. Rustichini, Aldo, 1999. "Optimal Properties of Stimulus-Response Learning Models," Games and Economic Behavior, Elsevier, vol. 29(1-2), pages 244-273, October.
    7. Beggs, A.W., 2005. "On the convergence of reinforcement learning," Journal of Economic Theory, Elsevier, vol. 122(1), pages 1-36, May.
    8. Schlag, Karl H., 1998. "Why Imitate, and If So, How? A Boundedly Rational Approach to Multi-armed Bandits," Journal of Economic Theory, Elsevier, vol. 78(1), pages 130-156, January.
    9. Laslier, Jean-Francois & Topol, Richard & Walliser, Bernard, 2001. "A Behavioral Learning Process in Games," Games and Economic Behavior, Elsevier, vol. 37(2), pages 340-366, November.
    10. Sarin, Rajiv & Vahid, Farshid, 1999. "Payoff Assessments without Probabilities: A Simple Dynamic Model of Choice," Games and Economic Behavior, Elsevier, vol. 28(2), pages 294-309, August.
    11. Erev, Ido & Roth, Alvin E, 1998. "Predicting How People Play Games: Reinforcement Learning in Experimental Games with Unique, Mixed Strategy Equilibria," American Economic Review, American Economic Association, vol. 88(4), pages 848-881, September.
    12. Borgers, Tilman & Sarin, Rajiv, 2000. "Naive Reinforcement Learning with Endogenous Aspirations," International Economic Review, Department of Economics, University of Pennsylvania and Osaka University Institute of Social and Economic Research Association, vol. 41(4), pages 921-950, November.
    13. Valentina Corradi & Rajiv Sarin, "undated". "Continuous Approximations of Stochastic Evolutionary Game Dynamics," ELSE working papers 002, ESRC Centre on Economics Learning and Social Evolution.
    14. Antonio J. Morales Siles, 2002. "Absolute Expediency and Imitative Behaviour," Economic Working Papers at Centro de Estudios Andaluces E2002/03, Centro de Estudios Andaluces.
    15. R. Boylan, 2010. "Continuous Approximation of Dynamical Systems with Randomly Matched Individuals," Levine's Working Paper Archive 372, David K. Levine.
    16. Corradi, Valentina & Sarin, Rajiv, 2000. "Continuous Approximations of Stochastic Evolutionary Game Dynamics," Journal of Economic Theory, Elsevier, vol. 94(2), pages 163-191, October.
    17. DellaVigna, Stefano & LiCalzi, Marco, 2001. "Learning to make risk neutral choices in a symmetric world," Mathematical Social Sciences, Elsevier, vol. 41(1), pages 19-37, January.

    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. Jonathan Newton, 2018. "Evolutionary Game Theory: A Renaissance," Games, MDPI, vol. 9(2), pages 1-67, May.
    2. Hopkins, Ed & Posch, Martin, 2005. "Attainability of boundary points under reinforcement learning," Games and Economic Behavior, Elsevier, vol. 53(1), pages 110-125, October.
    3. Izquierdo, Luis R. & Izquierdo, Segismundo S. & Gotts, Nicholas M. & Polhill, J. Gary, 2007. "Transient and asymptotic dynamics of reinforcement learning in games," Games and Economic Behavior, Elsevier, vol. 61(2), pages 259-276, November.
    4. Funai, Naoki, 2022. "Reinforcement learning with foregone payoff information in normal form games," Journal of Economic Behavior & Organization, Elsevier, vol. 200(C), pages 638-660.
    5. Beggs, A.W., 2005. "On the convergence of reinforcement learning," Journal of Economic Theory, Elsevier, vol. 122(1), pages 1-36, May.
    6. Mertikopoulos, Panayotis & Sandholm, William H., 2018. "Riemannian game dynamics," Journal of Economic Theory, Elsevier, vol. 177(C), pages 315-364.
    7. Mengel, Friederike, 2012. "Learning across games," Games and Economic Behavior, Elsevier, vol. 74(2), pages 601-619.
    8. Ianni, Antonella, 2014. "Learning strict Nash equilibria through reinforcement," Journal of Mathematical Economics, Elsevier, vol. 50(C), pages 148-155.
    9. Erik Mohlin & Robert Ostling & Joseph Tao-yi Wang, 2014. "Learning by Imitation in Games: Theory, Field, and Laboratory," Economics Series Working Papers 734, University of Oxford, Department of Economics.
    10. Oyarzun, Carlos & Sarin, Rajiv, 2013. "Learning and risk aversion," Journal of Economic Theory, Elsevier, vol. 148(1), pages 196-225.
    11. Tanabe, Yasuo, 2006. "The propagation of chaos for interacting individuals in a large population," Mathematical Social Sciences, Elsevier, vol. 51(2), pages 125-152, March.
    12. Naoki Funai, 2019. "Convergence results on stochastic adaptive learning," Economic Theory, Springer;Society for the Advancement of Economic Theory (SAET), vol. 68(4), pages 907-934, November.
    13. Panayotis Mertikopoulos & William H. Sandholm, 2016. "Learning in Games via Reinforcement and Regularization," Mathematics of Operations Research, INFORMS, vol. 41(4), pages 1297-1324, November.
    14. Sandholm, William H., 2003. "Evolution and equilibrium under inexact information," Games and Economic Behavior, Elsevier, vol. 44(2), pages 343-378, August.
    15. Sandholm,W.H., 1999. "Markov evolution with inexact information," Working papers 15, Wisconsin Madison - Social Systems.
    16. Ianni, A., 2002. "Reinforcement learning and the power law of practice: some analytical results," Discussion Paper Series In Economics And Econometrics 203, Economics Division, School of Social Sciences, University of Southampton.
    17. Hopkins, Ed, 2007. "Adaptive learning models of consumer behavior," Journal of Economic Behavior & Organization, Elsevier, vol. 64(3-4), pages 348-368.
    18. Maxwell Pak & Bing Xu, 2016. "Generalized reinforcement learning in perfect-information games," International Journal of Game Theory, Springer;Game Theory Society, vol. 45(4), pages 985-1011, November.
    19. Schuster, Stephan, 2012. "Applications in Agent-Based Computational Economics," MPRA Paper 47201, University Library of Munich, Germany.
    20. Ed Hopkins, 2002. "Adaptive Learning Models of Consumer Behaviour (first version)," Edinburgh School of Economics Discussion Paper Series 80, Edinburgh School of Economics, University of Edinburgh.
