
The dynamics of generalized reinforcement learning

Authors

Listed:
  • Lahkar, Ratul
  • Seymour, Robert M.

Abstract

We consider reinforcement learning in games with both positive and negative payoffs. The Cross rule is the prototypical reinforcement learning rule in games that have only positive payoffs. We extend this rule to incorporate negative payoffs to obtain the generalized reinforcement learning rule. Applying this rule to a population game, we obtain the generalized reinforcement dynamic which describes the evolution of mixed strategies in the population. We apply the dynamic to the class of Rock–Scissor–Paper (RSP) games to establish local convergence to the interior rest point in all such games, including the bad RSP game.
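The Cross rule referenced in the abstract can be illustrated with a minimal sketch. This is the classic positive-payoff rule only, not the paper's generalized rule for negative payoffs; the function name and the RSP payoff normalization (win = 1, tie = 0.5, loss = 0) are illustrative choices, not taken from the paper.

```python
def cross_update(p, i, u):
    """Classic Cross reinforcement rule: after playing pure strategy i
    and receiving payoff u normalized to [0, 1], shift probability
    mass toward i in proportion to u."""
    if not 0.0 <= u <= 1.0:
        raise ValueError("classic Cross rule assumes payoffs in [0, 1]")
    # p_i <- p_i + u * (1 - p_i);  p_j <- p_j * (1 - u) for j != i
    return [q + u * (1.0 - q) if k == i else q * (1.0 - u)
            for k, q in enumerate(p)]

# Example: Rock-Scissors-Paper, starting from the uniform mixed strategy.
p = [1/3, 1/3, 1/3]          # weights on (Rock, Scissors, Paper)
p = cross_update(p, 0, 0.5)  # Rock tied (payoff 0.5): its weight rises
```

Note that the update keeps the weights summing to one, so the rule maps mixed strategies to mixed strategies; the paper's contribution is extending this to payoffs below zero while preserving that property.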

Suggested Citation

  • Lahkar, Ratul & Seymour, Robert M., 2014. "The dynamics of generalized reinforcement learning," Journal of Economic Theory, Elsevier, vol. 151(C), pages 584-595.
  • Handle: RePEc:eee:jetheo:v:151:y:2014:i:c:p:584-595
    DOI: 10.1016/j.jet.2014.01.002

    Download full text from publisher

    File URL: http://www.sciencedirect.com/science/article/pii/S0022053114000039
    Download Restriction: Full text for ScienceDirect subscribers only

    File URL: https://libkey.io/10.1016/j.jet.2014.01.002?utm_source=ideas
    LibKey link: if access is restricted and if your library uses this service, LibKey will redirect you to where you can use your library subscription to access this item

    As the access to this document is restricted, you may want to search for a different version of it.

    References listed on IDEAS

    1. Fudenberg, Drew & Takahashi, Satoru, 2011. "Heterogeneous beliefs and local information in stochastic fictitious play," Games and Economic Behavior, Elsevier, vol. 71(1), pages 100-120, January.
    2. Gaunersdorfer Andrea & Hofbauer Josef, 1995. "Fictitious Play, Shapley Polygons, and the Replicator Equation," Games and Economic Behavior, Elsevier, vol. 11(2), pages 279-303, November.
    3. Roth, Alvin E. & Erev, Ido, 1995. "Learning in extensive-form games: Experimental data and simple dynamic models in the intermediate term," Games and Economic Behavior, Elsevier, vol. 8(1), pages 164-212.
    4. Borgers, Tilman & Sarin, Rajiv, 1997. "Learning Through Reinforcement and Replicator Dynamics," Journal of Economic Theory, Elsevier, vol. 77(1), pages 1-14, November.
    5. Karandikar, Rajeeva & Mookherjee, Dilip & Ray, Debraj & Vega-Redondo, Fernando, 1998. "Evolving Aspirations and Cooperation," Journal of Economic Theory, Elsevier, vol. 80(2), pages 292-331, June.
    6. Erev, Ido & Roth, Alvin E, 1998. "Predicting How People Play Games: Reinforcement Learning in Experimental Games with Unique, Mixed Strategy Equilibria," American Economic Review, American Economic Association, vol. 88(4), pages 848-881, September.
    7. Borgers, Tilman & Sarin, Rajiv, 2000. "Naive Reinforcement Learning with Endogenous Aspirations," International Economic Review, Department of Economics, University of Pennsylvania and Osaka University Institute of Social and Economic Research Association, vol. 41(4), pages 921-950, November.
    8. Lahkar, Ratul & Seymour, Robert M., 2013. "Reinforcement learning in population games," Games and Economic Behavior, Elsevier, vol. 80(C), pages 10-38.

    Citations

    Citations are extracted by the CitEc Project.


    Cited by:

    1. Schauf, Andrew & Oh, Poong, 2021. "Adaptation strategies and collective dynamics of extraction in networked commons of bistable resources," SocArXiv wmtqk, Center for Open Science.
    2. Jonathan Newton, 2018. "Evolutionary Game Theory: A Renaissance," Games, MDPI, vol. 9(2), pages 1-67, May.
    3. Lahkar, Ratul, 2017. "Equilibrium selection in the stag hunt game under generalized reinforcement learning," Journal of Economic Behavior & Organization, Elsevier, vol. 138(C), pages 63-68.
    4. Funai, Naoki, 2022. "Reinforcement learning with foregone payoff information in normal form games," Journal of Economic Behavior & Organization, Elsevier, vol. 200(C), pages 638-660.

    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. Mengel, Friederike, 2012. "Learning across games," Games and Economic Behavior, Elsevier, vol. 74(2), pages 601-619.
    2. Schuster, Stephan, 2012. "Applications in Agent-Based Computational Economics," MPRA Paper 47201, University Library of Munich, Germany.
    3. Izquierdo, Luis R. & Izquierdo, Segismundo S. & Gotts, Nicholas M. & Polhill, J. Gary, 2007. "Transient and asymptotic dynamics of reinforcement learning in games," Games and Economic Behavior, Elsevier, vol. 61(2), pages 259-276, November.
    4. Ed Hopkins, 2002. "Two Competing Models of How People Learn in Games," Econometrica, Econometric Society, vol. 70(6), pages 2141-2166, November.
    5. Oyarzun, Carlos & Sarin, Rajiv, 2013. "Learning and risk aversion," Journal of Economic Theory, Elsevier, vol. 148(1), pages 196-225.
    6. Tilman Börgers & Antonio J. Morales & Rajiv Sarin, 2004. "Expedient and Monotone Learning Rules," Econometrica, Econometric Society, vol. 72(2), pages 383-405, March.
    7. Schuster, Stephan, 2010. "Network Formation with Adaptive Agents," MPRA Paper 27388, University Library of Munich, Germany.
    8. Laslier, Jean-Francois & Topol, Richard & Walliser, Bernard, 2001. "A Behavioral Learning Process in Games," Games and Economic Behavior, Elsevier, vol. 37(2), pages 340-366, November.
    9. Segismundo S. Izquierdo & Luis R. Izquierdo & Nicholas M. Gotts, 2008. "Reinforcement Learning Dynamics in Social Dilemmas," Journal of Artificial Societies and Social Simulation, Journal of Artificial Societies and Social Simulation, vol. 11(2), pages 1-1.
    10. Lahkar, Ratul & Seymour, Robert M., 2013. "Reinforcement learning in population games," Games and Economic Behavior, Elsevier, vol. 80(C), pages 10-38.
    11. Sarin, Rajiv & Vahid, Farshid, 2001. "Predicting How People Play Games: A Simple Dynamic Model of Choice," Games and Economic Behavior, Elsevier, vol. 34(1), pages 104-122, January.
    12. Dixon, Huw D. & Sbriglia, Patrizia & Somma, Ernesto, 2006. "Learning to collude: An experiment in convergence and equilibrium selection in oligopoly," Research in Economics, Elsevier, vol. 60(3), pages 155-167, September.
    13. Duffy, John, 2006. "Agent-Based Models and Human Subject Experiments," Handbook of Computational Economics, in: Leigh Tesfatsion & Kenneth L. Judd (ed.), Handbook of Computational Economics, edition 1, volume 2, chapter 19, pages 949-1011, Elsevier.
    14. Droste, Edward & Kosfeld, Michael & Voorneveld, Mark, 2003. "Best-reply matching in games," Mathematical Social Sciences, Elsevier, vol. 46(3), pages 291-309, December.
    15. Napel, Stefan, 2003. "Aspiration adaptation in the ultimatum minigame," Games and Economic Behavior, Elsevier, vol. 43(1), pages 86-106, April.
    16. Ianni, A., 2002. "Reinforcement learning and the power law of practice: some analytical results," Discussion Paper Series In Economics And Econometrics 203, Economics Division, School of Social Sciences, University of Southampton.
    17. DeJong, D.V. & Blume, A. & Neumann, G., 1998. "Learning in Sender-Receiver Games," Other publications TiSEM 4a8b4f46-f30b-4ad2-bb0c-1, Tilburg University, School of Economics and Management.
    18. Jean-François Laslier & Bernard Walliser, 2015. "Stubborn learning," Theory and Decision, Springer, vol. 79(1), pages 51-93, July.
    19. Ponti, Giovanni, 2000. "Continuous-time evolutionary dynamics: theory and practice," Research in Economics, Elsevier, vol. 54(2), pages 187-214, June.
    20. Franke, Reiner, 2003. "Reinforcement learning in the El Farol model," Journal of Economic Behavior & Organization, Elsevier, vol. 51(3), pages 367-388, July.

    More about this item

    Keywords

    Reinforcement learning; Negative reinforcement; Replicator dynamic;

    JEL classification:

    • C72 - Mathematical and Quantitative Methods - - Game Theory and Bargaining Theory - - - Noncooperative Games
    • C73 - Mathematical and Quantitative Methods - - Game Theory and Bargaining Theory - - - Stochastic and Dynamic Games; Evolutionary Games

