
Equilibrium selection in the stag hunt game under generalized reinforcement learning

Author

Listed:
  • Lahkar, Ratul

Abstract

We apply the generalized reinforcement (GR) learning protocol to the stag hunt game. GR learning combines positive and negative reinforcement. The GR learning rule generates the GR dynamic, which governs the evolution of agents' mixed strategies in the population. We identify conditions under which the GR dynamic converges globally to one of the two pure strategy Nash equilibria of the game.
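The abstract describes a learning rule that combines positive reinforcement (shifting probability toward actions that paid well) with negative reinforcement (shifting probability away from actions that paid poorly). The following is a minimal illustrative sketch of that idea in a stag hunt setting, in the spirit of the Cross/Börgers–Sarin rules cited below. It is not the paper's actual GR protocol: the payoff values, aspiration level, and learning rates `lam` and `mu` are all assumptions chosen for illustration.

```python
import random

# Assumed stag hunt payoffs: Stag/Stag pays 4, Stag/Hare pays 0, Hare always pays 3.
PAYOFF = {("S", "S"): 4.0, ("S", "H"): 0.0, ("H", "S"): 3.0, ("H", "H"): 3.0}
MAX_PAY = 4.0  # used to normalize payoffs into [0, 1]

def gr_update(p_stag, action, payoff, lam=0.3, mu=0.1, aspiration=0.75):
    """One generalized-reinforcement-style update of Pr(Stag).

    Positive reinforcement (Cross-style): shift probability toward the
    played action in proportion to the payoff's excess over the aspiration.
    Negative reinforcement (illustrative assumption): shift probability away
    from the played action in proportion to the payoff's shortfall.
    """
    u = payoff / MAX_PAY  # normalized payoff in [0, 1]
    if u >= aspiration:
        gain = lam * (u - aspiration) / (1.0 - aspiration)  # in [0, lam]
        if action == "S":
            p_stag += gain * (1.0 - p_stag)
        else:
            p_stag -= gain * p_stag
    else:
        loss = mu * (aspiration - u) / aspiration  # in [0, mu]
        if action == "S":
            p_stag -= loss * p_stag
        else:
            p_stag += loss * (1.0 - p_stag)
    return p_stag  # stays in [0, 1] by construction

def simulate(n_agents=50, rounds=2000, p0=0.8, seed=0):
    """Randomly match agents in pairs each round; return final Pr(Stag) profile."""
    rng = random.Random(seed)
    probs = [p0] * n_agents
    for _ in range(rounds):
        order = list(range(n_agents))
        rng.shuffle(order)
        for i, j in zip(order[::2], order[1::2]):
            ai = "S" if rng.random() < probs[i] else "H"
            aj = "S" if rng.random() < probs[j] else "H"
            probs[i] = gr_update(probs[i], ai, PAYOFF[(ai, aj)])
            probs[j] = gr_update(probs[j], aj, PAYOFF[(aj, ai)])
    return probs

if __name__ == "__main__":
    final = simulate()
    print(f"mean Pr(Stag) after learning: {sum(final) / len(final):.3f}")
```

With these assumed parameters the positive reinforcement outweighs the negative one, so a population starting sufficiently close to Stag tends to coordinate on the payoff-dominant equilibrium; the paper's contribution is to characterize exactly when such global convergence obtains under the actual GR dynamic.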

Suggested Citation

  • Lahkar, Ratul, 2017. "Equilibrium selection in the stag hunt game under generalized reinforcement learning," Journal of Economic Behavior & Organization, Elsevier, vol. 138(C), pages 63-68.
  • Handle: RePEc:eee:jeborg:v:138:y:2017:i:c:p:63-68
    DOI: 10.1016/j.jebo.2017.04.012

    Download full text from publisher

    File URL: http://www.sciencedirect.com/science/article/pii/S0167268117301051
    Download Restriction: Full text for ScienceDirect subscribers only


    References listed on IDEAS

    1. Karandikar, Rajeeva & Mookherjee, Dilip & Ray, Debraj & Vega-Redondo, Fernando, 1998. "Evolving Aspirations and Cooperation," Journal of Economic Theory, Elsevier, vol. 80(2), pages 292-331, June.
    2. Ellison, Glenn, 1993. "Learning, Local Interaction, and Coordination," Econometrica, Econometric Society, vol. 61(5), pages 1047-1071, September.
    3. William H. Sandholm, 2001. "Almost global convergence to p-dominant equilibrium," International Journal of Game Theory, Springer;Game Theory Society, vol. 30(1), pages 107-116.
    4. Bergin, James & Lipman, Barton L, 1996. "Evolution with State-Dependent Mutations," Econometrica, Econometric Society, vol. 64(4), pages 943-956, July.
    5. Borgers, Tilman & Sarin, Rajiv, 2000. "Naive Reinforcement Learning with Endogenous Aspirations," International Economic Review, Department of Economics, University of Pennsylvania and Osaka University Institute of Social and Economic Research Association, vol. 41(4), pages 921-950, November.
    6. Hart, Sergiu & Mas-Colell, Andreu, 2006. "Stochastic uncoupled dynamics and Nash equilibrium," Games and Economic Behavior, Elsevier, vol. 57(2), pages 286-303, November.
    7. Kandori, Michihiro & Mailath, George J & Rob, Rafael, 1993. "Learning, Mutation, and Long Run Equilibria in Games," Econometrica, Econometric Society, vol. 61(1), pages 29-56, January.
    8. Pradelski, Bary S.R. & Young, H. Peyton, 2012. "Learning efficient Nash equilibria in distributed systems," Games and Economic Behavior, Elsevier, vol. 75(2), pages 882-897.
    9. Sandholm, William H. & Tercieux, Olivier & Oyama, Daisuke, 2015. "Sampling best response dynamics and deterministic equilibrium selection," Theoretical Economics, Econometric Society, vol. 10(1), January.
    10. Borgers, Tilman & Sarin, Rajiv, 1997. "Learning Through Reinforcement and Replicator Dynamics," Journal of Economic Theory, Elsevier, vol. 77(1), pages 1-14, November.
    11. Kreindler, Gabriel E. & Young, H. Peyton, 2013. "Fast convergence in evolutionary equilibrium selection," Games and Economic Behavior, Elsevier, vol. 80(C), pages 39-67.
    12. Young, H Peyton, 1993. "The Evolution of Conventions," Econometrica, Econometric Society, vol. 61(1), pages 57-84, January.
    13. Blume, Lawrence E., 2003. "How noise matters," Games and Economic Behavior, Elsevier, vol. 44(2), pages 251-271, August.
    14. Sergiu Hart & Andreu Mas-Colell, 2003. "Uncoupled Dynamics Do Not Lead to Nash Equilibrium," American Economic Review, American Economic Association, vol. 93(5), pages 1830-1836, December.
    15. Foster, Dean P. & Young, H. Peyton, 2006. "Regret testing: learning to play Nash equilibrium without knowing you have an opponent," Theoretical Economics, Econometric Society, vol. 1(3), pages 341-367, September.
    16. Myatt, David P. & Wallace, Chris C., 2004. "Adaptive play by idiosyncratic agents," Games and Economic Behavior, Elsevier, vol. 48(1), pages 124-138, July.
    17. Young, H. Peyton, 2009. "Learning by trial and error," Games and Economic Behavior, Elsevier, vol. 65(2), pages 626-643, March.
    18. Lahkar, Ratul & Seymour, Robert M., 2014. "The dynamics of generalized reinforcement learning," Journal of Economic Theory, Elsevier, vol. 151(C), pages 584-595.

    More about this item

    Keywords

    Reinforcement learning; Generalized reinforcement dynamic; Stag hunt game;

    JEL classification:

    • C72 - Mathematical and Quantitative Methods - - Game Theory and Bargaining Theory - - - Noncooperative Games
    • C73 - Mathematical and Quantitative Methods - - Game Theory and Bargaining Theory - - - Stochastic and Dynamic Games; Evolutionary Games

