Printed from https://ideas.repec.org/a/spr/jogath/v44y2015i3p667-699.html

Nonconvergence to saddle boundary points under perturbed reinforcement learning

Author

Listed:
  • Georgios Chasparis
  • Jeff Shamma
  • Anders Rantzer

Abstract

For several reinforcement learning models in strategic-form games, convergence to action profiles that are not Nash equilibria may occur with positive probability under certain conditions on the payoff function. In this paper, we explore how an alternative reinforcement learning model, where the strategy of each agent is perturbed by a strategy-dependent perturbation (or mutation) function, may exclude convergence to non-Nash pure strategy profiles. This approach extends prior analysis of reinforcement learning in games that addresses the issue of convergence to saddle boundary points. It further provides a framework under which the effect of mutations can be analyzed in the context of reinforcement learning. Copyright Springer-Verlag Berlin Heidelberg 2015
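The abstract's setup can be illustrated with a minimal simulation. The sketch below pairs a generic Erev-Roth-style reinforcement update with a strategy-dependent perturbation toward the uniform strategy in a 2x2 coordination game; the `perturb` function, the payoff matrix, and all parameter choices are illustrative assumptions for this sketch, not the paper's exact specification.

```python
import numpy as np

rng = np.random.default_rng(0)

# Row player's payoffs in a 2x2 coordination game (column player gets A.T).
# Illustrative choice, not taken from the paper.
A = np.array([[2.0, 0.0],
              [0.0, 1.0]])

def perturb(x, lam=0.05):
    """Strategy-dependent perturbation: mix the strategy toward uniform,
    with weight growing near the boundary of the simplex (a hypothetical
    perturbation function, not the paper's)."""
    x = np.asarray(x, dtype=float)
    eps = lam * (1.0 - 4.0 * x[0] * x[1])  # zero at the centroid, lam at vertices
    return (1.0 - eps) * x + eps * np.array([0.5, 0.5])

def step(x, y, n):
    """One round of a simple reinforcement update with perturbed play."""
    i = rng.choice(2, p=perturb(x))   # row player's sampled action
    j = rng.choice(2, p=perturb(y))   # column player's sampled action
    gamma = 1.0 / (n + 2)             # decreasing step size
    ex, ey = np.eye(2)[i], np.eye(2)[j]
    x = x + gamma * A[i, j] * (ex - x)  # reinforce proportionally to payoff
    y = y + gamma * A[j, i] * (ey - y)
    return x, y

x = np.array([0.5, 0.5])
y = np.array([0.5, 0.5])
for n in range(20000):
    x, y = step(x, y, n)
```

Because the perturbation keeps every action played with positive probability even at a vertex of the simplex, play cannot lock onto a non-Nash boundary profile unobserved, which is the intuition behind the nonconvergence result the paper establishes formally.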

Suggested Citation

  • Georgios Chasparis & Jeff Shamma & Anders Rantzer, 2015. "Nonconvergence to saddle boundary points under perturbed reinforcement learning," International Journal of Game Theory, Springer;Game Theory Society, vol. 44(3), pages 667-699, August.
  • Handle: RePEc:spr:jogath:v:44:y:2015:i:3:p:667-699
    DOI: 10.1007/s00182-014-0449-3

    Download full text from publisher

    File URL: http://hdl.handle.net/10.1007/s00182-014-0449-3
    Download Restriction: Access to full text is restricted to subscribers.

    File URL: https://libkey.io/10.1007/s00182-014-0449-3?utm_source=ideas
    LibKey link: If access is restricted and your library uses this service, LibKey will redirect you to where you can use your library subscription to access this item.

    As the access to this document is restricted, you may want to search for a different version of it.

    References listed on IDEAS

    1. Martin Posch, 1997. "Cycling in a stochastic learning algorithm for normal form games," Journal of Evolutionary Economics, Springer, vol. 7(2), pages 193-207.
    2. Borgers, Tilman & Sarin, Rajiv, 1997. "Learning Through Reinforcement and Replicator Dynamics," Journal of Economic Theory, Elsevier, vol. 77(1), pages 1-14, November.
    3. Bergin, James & Lipman, Barton L, 1996. "Evolution with State-Dependent Mutations," Econometrica, Econometric Society, vol. 64(4), pages 943-956, July.
    4. Cho, In-Koo & Matsui, Akihiko, 2005. "Learning aspiration in repeated games," Journal of Economic Theory, Elsevier, vol. 124(2), pages 171-201, October.
    5. Beggs, A.W., 2005. "On the convergence of reinforcement learning," Journal of Economic Theory, Elsevier, vol. 122(1), pages 1-36, May.
    6. Hopkins, Ed & Posch, Martin, 2005. "Attainability of boundary points under reinforcement learning," Games and Economic Behavior, Elsevier, vol. 53(1), pages 110-125, October.
    7. Monderer, Dov & Shapley, Lloyd S., 1996. "Potential Games," Games and Economic Behavior, Elsevier, vol. 14(1), pages 124-143, May.
    8. Arthur, W Brian, 1993. "On Designing Economic Agents That Behave Like Human Agents," Journal of Evolutionary Economics, Springer, vol. 3(1), pages 1-22, February.
    9. Jorgen W. Weibull, 1997. "Evolutionary Game Theory," MIT Press Books, The MIT Press, edition 1, volume 1, number 0262731215, December.
    10. Sandholm, William H., 2001. "Potential Games with Continuous Player Sets," Journal of Economic Theory, Elsevier, vol. 97(1), pages 81-108, March.
    11. Bonacich, Phillip & Liggett, Thomas M., 2003. "Asymptotics of a matrix valued Markov chain arising in sociology," Stochastic Processes and their Applications, Elsevier, vol. 104(1), pages 155-171, March.
    12. Georgios Chasparis & Jeff Shamma, 2012. "Distributed Dynamic Reinforcement of Efficient Outcomes in Multiagent Coordination and Network Formation," Dynamic Games and Applications, Springer, vol. 2(1), pages 18-50, March.
    Full references (including those not matched with items on IDEAS)

    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. Georgios Chasparis & Jeff Shamma, 2012. "Distributed Dynamic Reinforcement of Efficient Outcomes in Multiagent Coordination and Network Formation," Dynamic Games and Applications, Springer, vol. 2(1), pages 18-50, March.
    2. Cominetti, Roberto & Melo, Emerson & Sorin, Sylvain, 2010. "A payoff-based learning procedure and its application to traffic games," Games and Economic Behavior, Elsevier, vol. 70(1), pages 71-83, September.
    3. Hopkins, Ed & Posch, Martin, 2005. "Attainability of boundary points under reinforcement learning," Games and Economic Behavior, Elsevier, vol. 53(1), pages 110-125, October.
    4. Ianni, Antonella, 2014. "Learning strict Nash equilibria through reinforcement," Journal of Mathematical Economics, Elsevier, vol. 50(C), pages 148-155.
    5. Schuster, Stephan, 2010. "Network Formation with Adaptive Agents," MPRA Paper 27388, University Library of Munich, Germany.
    6. Mario Bravo & Mathieu Faure, 2013. "Reinforcement Learning with Restrictions on the Action Set," AMSE Working Papers 1335, Aix-Marseille School of Economics, France, revised 01 Jul 2013.
    7. Izquierdo, Luis R. & Izquierdo, Segismundo S. & Gotts, Nicholas M. & Polhill, J. Gary, 2007. "Transient and asymptotic dynamics of reinforcement learning in games," Games and Economic Behavior, Elsevier, vol. 61(2), pages 259-276, November.
    8. Erik Mohlin & Robert Ostling & Joseph Tao-yi Wang, 2014. "Learning by Imitation in Games: Theory, Field, and Laboratory," Economics Series Working Papers 734, University of Oxford, Department of Economics.
    9. Panayotis Mertikopoulos & William H. Sandholm, 2016. "Learning in Games via Reinforcement and Regularization," Mathematics of Operations Research, INFORMS, vol. 41(4), pages 1297-1324, November.
    10. Sandholm, W.H., 2003. "Excess payoff dynamics, potential dynamics, and stable games," Working papers 5, Wisconsin Madison - Social Systems.
    11. Ed Hopkins, 2002. "Two Competing Models of How People Learn in Games," Econometrica, Econometric Society, vol. 70(6), pages 2141-2166, November.
    12. Funai, Naoki, 2022. "Reinforcement learning with foregone payoff information in normal form games," Journal of Economic Behavior & Organization, Elsevier, vol. 200(C), pages 638-660.
    13. Beggs, A.W., 2005. "On the convergence of reinforcement learning," Journal of Economic Theory, Elsevier, vol. 122(1), pages 1-36, May.
    14. Oyarzun, Carlos & Sarin, Rajiv, 2013. "Learning and risk aversion," Journal of Economic Theory, Elsevier, vol. 148(1), pages 196-225.
    15. Leslie, David S. & Collins, E.J., 2006. "Generalised weakened fictitious play," Games and Economic Behavior, Elsevier, vol. 56(2), pages 285-298, August.
    16. Jacques Durieu & Philippe Solal, 2012. "Models of Adaptive Learning in Game Theory," Chapters, in: Richard Arena & Agnès Festré & Nathalie Lazaric (ed.), Handbook of Knowledge and Economics, chapter 11, Edward Elgar Publishing.
    17. Alanyali, Murat, 2010. "A note on adjusted replicator dynamics in iterated games," Journal of Mathematical Economics, Elsevier, vol. 46(1), pages 86-98, January.
    18. Mertikopoulos, Panayotis & Sandholm, William H., 2018. "Riemannian game dynamics," Journal of Economic Theory, Elsevier, vol. 177(C), pages 315-364.
    19. Mengel, Friederike, 2012. "Learning across games," Games and Economic Behavior, Elsevier, vol. 74(2), pages 601-619.
    20. Farokhi, Farhad & Johansson, Karl H., 2015. "A piecewise-constant congestion taxing policy for repeated routing games," Transportation Research Part B: Methodological, Elsevier, vol. 78(C), pages 123-143.

    More about this item

    Keywords

    Learning in games; Reinforcement learning; Replicator dynamics

    JEL classification:

    • C72 - Mathematical and Quantitative Methods - - Game Theory and Bargaining Theory - - - Noncooperative Games
    • C73 - Mathematical and Quantitative Methods - - Game Theory and Bargaining Theory - - - Stochastic and Dynamic Games; Evolutionary Games
    • D83 - Microeconomics - - Information, Knowledge, and Uncertainty - - - Search; Learning; Information and Knowledge; Communication; Belief; Unawareness


    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:spr:jogath:v:44:y:2015:i:3:p:667-699. See general information about how to correct material in RePEc.

    If you have authored this item and are not yet registered with RePEc, we encourage you to do it here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.

    If CitEc recognized a bibliographic reference but did not link an item in RePEc to it, you can help with this form.

    If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

    For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: Sonal Shukla or Springer Nature Abstracting and Indexing (email available below). General contact details of provider: http://www.springer.com.

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.