Printed from https://ideas.repec.org/a/eee/gamebe/v80y2013icp10-38.html

Reinforcement learning in population games

Authors
  • Lahkar, Ratul
  • Seymour, Robert M.

Abstract

We study reinforcement learning in a population game. Agents in a population game revise mixed strategies using the Cross rule of reinforcement learning. The population state—the probability distribution over the set of mixed strategies—evolves according to the replicator continuity equation which, in its simplest form, is a partial differential equation. The replicator dynamic is a special case in which the initial population state is homogeneous, i.e. when all agents use the same mixed strategy. We apply the continuity dynamic to various classes of symmetric games. Using 3×3 coordination games, we show that equilibrium selection depends on the variance of the initial strategy distribution, or initial population heterogeneity. We give an example of a 2×2 game in which heterogeneity persists even as the mean population state converges to a mixed equilibrium. Finally, we apply the dynamic to negative definite and doubly symmetric games.
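The mean dynamic described in the abstract can be illustrated with a small simulation. The sketch below is not the paper's PDE analysis; it is a minimal agent-based version under simplifying assumptions: a 2×2 coordination game with illustrative payoffs normalized to [0, 1] (as the Cross rule requires), uniform random matching each period, and a heterogeneous initial distribution of mixed strategies. Each agent updates its mixed strategy with the Cross rule: the probability of the action just played moves toward 1 in proportion to the realized payoff.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative 2x2 coordination game; payoffs lie in [0, 1]
# so that Cross-rule updates keep probabilities valid.
A = np.array([[1.0, 0.0],
              [0.0, 0.7]])

N, T = 1000, 2000
# Heterogeneous initial population state: each agent's probability
# of playing action 0 is drawn uniformly from [0.3, 0.7].
p = rng.uniform(0.3, 0.7, size=N)

for _ in range(T):
    opponents = rng.permutation(N)            # crude random matching
    acts = (rng.random(N) > p).astype(int)    # 0 with prob p, else 1
    payoffs = A[acts, acts[opponents]]        # realized payoff of each agent

    # Cross rule: shift weight onto the chosen action in
    # proportion to the realized payoff.
    chose0 = acts == 0
    p[chose0] += payoffs[chose0] * (1 - p[chose0])
    p[~chose0] -= payoffs[~chose0] * p[~chose0]

print(p.mean(), p.std())  # mean population weight on action 0, and residual heterogeneity
```

Because payoffs are normalized to [0, 1], each update keeps every agent's strategy a valid probability; tracking both the mean and the standard deviation of `p` over time shows how initial heterogeneity (the variance of the strategy distribution) shapes which equilibrium the population selects.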

Suggested Citation

  • Lahkar, Ratul & Seymour, Robert M., 2013. "Reinforcement learning in population games," Games and Economic Behavior, Elsevier, vol. 80(C), pages 10-38.
  • Handle: RePEc:eee:gamebe:v:80:y:2013:i:c:p:10-38
    DOI: 10.1016/j.geb.2013.02.006

    Download full text from publisher

    File URL: http://www.sciencedirect.com/science/article/pii/S0899825613000286
    Download Restriction: Full text for ScienceDirect subscribers only

    As access to this document is restricted, you may want to search for a different version.

    References listed on IDEAS

    1. Fudenberg Drew & Kreps David M., 1993. "Learning Mixed Equilibria," Games and Economic Behavior, Elsevier, vol. 5(3), pages 320-367, July.
    2. Sandholm, William H., 2001. "Potential Games with Continuous Player Sets," Journal of Economic Theory, Elsevier, vol. 97(1), pages 81-108, March.
    3. Erev, Ido & Roth, Alvin E, 1998. "Predicting How People Play Games: Reinforcement Learning in Experimental Games with Unique, Mixed Strategy Equilibria," American Economic Review, American Economic Association, vol. 88(4), pages 848-881, September.
    4. repec:dau:papers:123456789/1014 is not listed on IDEAS
    5. Tilman Börgers & Antonio J. Morales & Rajiv Sarin, 2004. "Expedient and Monotone Learning Rules," Econometrica, Econometric Society, vol. 72(2), pages 383-405, March.
    6. Fudenberg, Drew & Takahashi, Satoru, 2011. "Heterogeneous beliefs and local information in stochastic fictitious play," Games and Economic Behavior, Elsevier, vol. 71(1), pages 100-120, January.
    7. Borgers, Tilman & Sarin, Rajiv, 2000. "Naive Reinforcement Learning with Endogenous Aspirations," International Economic Review, Department of Economics, University of Pennsylvania and Osaka University Institute of Social and Economic Research Association, vol. 41(4), pages 921-950, November.
    8. Hopkins, Ed, 1999. "Learning, Matching, and Aggregation," Games and Economic Behavior, Elsevier, vol. 26(1), pages 79-110, January.
    9. Ellison, Glenn & Fudenberg, Drew, 2000. "Learning Purified Mixed Equilibria," Journal of Economic Theory, Elsevier, vol. 90(1), pages 84-115, January.
    10. Sergiu Hart & Andreu Mas-Colell, 2000. "A Simple Adaptive Procedure Leading to Correlated Equilibrium," Econometrica, Econometric Society, vol. 68(5), pages 1127-1150, September.
    11. Borgers, Tilman & Sarin, Rajiv, 1997. "Learning Through Reinforcement and Replicator Dynamics," Journal of Economic Theory, Elsevier, vol. 77(1), pages 1-14, November.
    12. Josef Hofbauer & Sylvain Sorin & Yannick Viossat, 2009. "Time Average Replicator and Best-Reply Dynamics," Mathematics of Operations Research, INFORMS, vol. 34(2), pages 263-269, May.
    13. Friedman, Daniel & Ostrov, Daniel N., 2008. "Conspicuous consumption dynamics," Games and Economic Behavior, Elsevier, vol. 64(1), pages 121-145, September.
    14. Ramsza, Michal & Seymour, Robert M., 2010. "Fictitious play in an evolutionary environment," Games and Economic Behavior, Elsevier, vol. 68(1), pages 303-324, January.
    15. Friedman, Daniel & Ostrov, Daniel N., 2010. "Gradient dynamics in population games: Some basic results," Journal of Mathematical Economics, Elsevier, vol. 46(5), pages 691-707, September.
    16. Hofbauer, Josef & Sandholm, William H., 2009. "Stable games and their dynamics," Journal of Economic Theory, Elsevier, vol. 144(4), pages 1665-1693, July.
    17. Ely, Jeffrey C. & Sandholm, William H., 2005. "Evolution in Bayesian games I: Theory," Games and Economic Behavior, Elsevier, vol. 53(1), pages 83-109, October.
    Full references (including those not matched with items on IDEAS)

    Citations

    Citations are extracted by the CitEc Project; subscribe to its RSS feed for this item.


    Cited by:

    1. Lahkar, Ratul & Seymour, Robert M., 2014. "The dynamics of generalized reinforcement learning," Journal of Economic Theory, Elsevier, vol. 151(C), pages 584-595.
    2. Dai Zusai, 2017. "Nonaggregable evolutionary dynamics under payoff heterogeneity," DETU Working Papers 1702, Department of Economics, Temple University.
    3. Wei, Fangfang & Jia, Ning & Ma, Shoufeng, 2016. "Day-to-day traffic dynamics considering social interaction: From individual route choice behavior to a network flow model," Transportation Research Part B: Methodological, Elsevier, vol. 94(C), pages 335-354.

    More about this item

    Keywords

    Reinforcement learning; Continuity equation; Replicator dynamics;

    JEL classification:

    • C72 - Mathematical and Quantitative Methods - - Game Theory and Bargaining Theory - - - Noncooperative Games
    • C73 - Mathematical and Quantitative Methods - - Game Theory and Bargaining Theory - - - Stochastic and Dynamic Games; Evolutionary Games

    Statistics

    Access and download statistics

    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:eee:gamebe:v:80:y:2013:i:c:p:10-38. See general information about how to correct material in RePEc.

    For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: (Dana Niculescu). General contact details of provider: http://www.elsevier.com/locate/inca/622836 .

    If you have authored this item and are not yet registered with RePEc, we encourage you to register here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.

    If CitEc recognized a reference but did not link an item in RePEc to it, you can help with this form.

    If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service hosted by the Research Division of the Federal Reserve Bank of St. Louis . RePEc uses bibliographic data supplied by the respective publishers.