
Learning Strict Nash Equilibria through Reinforcement

  • Ianni, Antonella

This paper studies the analytical properties of the reinforcement learning model proposed in Erev and Roth (1998), also termed cumulative reinforcement learning in Laslier et al. (2001). This stochastic model of learning in games accounts for two main elements: the law of effect (positive reinforcement of actions that perform well) and the law of practice (the magnitude of the reinforcement effect decreases with players' experience). The main results of the paper show that, if the solution trajectories of the underlying replicator equation converge exponentially fast, then, with probability arbitrarily close to one, all the realizations of the reinforcement learning process will, from some time on, lie within an ε-band of that solution. The paper improves upon results currently available in the literature by showing that a reinforcement learning process that has been running for some time and is found sufficiently close to a strict Nash equilibrium will reach it with probability one.
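
To fix ideas, the following is a minimal simulation sketch, not taken from the paper, of the cumulative reinforcement rule the abstract refers to: each player keeps a propensity for every pure strategy, chooses actions with probabilities proportional to propensities (law of effect), and adds the realized payoff to the propensity of the action just played, so that later payoffs have a progressively smaller relative impact (law of practice). The 2x2 coordination game, initial propensities, and horizon used here are illustrative assumptions, not values from the paper.

# Illustrative sketch of Erev-Roth (1998) cumulative reinforcement learning
# in a symmetric 2x2 coordination game (assumed payoffs, not from the paper).
import numpy as np

rng = np.random.default_rng(0)

# payoff[own_action, opponent_action]; both (A, A) and (B, B) are strict Nash
# equilibria. Payoffs are kept strictly positive so propensities stay positive,
# as the cumulative rule requires.
payoff = np.array([[4.0, 1.0],
                   [1.0, 3.0]])

T = 20000
# One propensity per pure strategy and player; the initial values are an assumption.
q = [np.array([1.0, 1.0]), np.array([1.0, 1.0])]

for t in range(T):
    # Choice probabilities proportional to propensities (law of effect).
    p = [qi / qi.sum() for qi in q]
    a = [rng.choice(2, p=pi) for pi in p]
    # Each player adds the realized payoff to the propensity of the action played.
    # Because total propensities grow over time, each new payoff has a shrinking
    # relative impact (law of practice).
    q[0][a[0]] += payoff[a[0], a[1]]
    q[1][a[1]] += payoff[a[1], a[0]]

print("Player 1 choice probabilities:", q[0] / q[0].sum())
print("Player 2 choice probabilities:", q[1] / q[1].sum())
# In runs of this kind both probability vectors typically end up concentrated on
# the same pure strategy, i.e. the process settles near one of the strict Nash
# equilibria; the paper's result makes this convergence statement precise.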


File URL: http://mpra.ub.uni-muenchen.de/33936/1/MPRA_paper_33936.pdf
File Function: original version
Download Restriction: no

Paper provided by the University Library of Munich, Germany, in its series MPRA Paper, number 33936.


Date of creation: 07 Oct 2011
Handle: RePEc:pra:mprapa:33936
Contact details of provider: Postal: Schackstr. 4, D-80539 Munich, Germany
Phone: +49-(0)89-2180-2219
Fax: +49-(0)89-2180-3900
Web page: http://mpra.ub.uni-muenchen.de


References listed on IDEAS

  1. Ritzberger, Klaus & Weibull, Jörgen W., 1993. "Evolutionary Selection in Normal Form Games," Working Paper Series 383, Research Institute of Industrial Economics.
  2. Borgers, Tilman & Sarin, Rajiv, 1997. "Learning Through Reinforcement and Replicator Dynamics," Journal of Economic Theory, Elsevier, vol. 77(1), pages 1-14, November.
  3. Beggs, A.W., 2005. "On the convergence of reinforcement learning," Journal of Economic Theory, Elsevier, vol. 122(1), pages 1-36, May.
  4. Ed Hopkins, 2004. "Two Competing Models of How People Learn in Games," ESE Discussion Papers 51, Edinburgh School of Economics, University of Edinburgh.
  5. Erev, Ido & Roth, Alvin E., 1998. "Predicting How People Play Games: Reinforcement Learning in Experimental Games with Unique, Mixed Strategy Equilibria," American Economic Review, American Economic Association, vol. 88(4), pages 848-881, September.
  6. Hopkins, Ed & Posch, Martin, 2005. "Attainability of boundary points under reinforcement learning," Games and Economic Behavior, Elsevier, vol. 53(1), pages 110-125, October.
  7. Arthur, W Brian, 1993. "On Designing Economic Agents That Behave Like Human Agents," Journal of Evolutionary Economics, Springer, vol. 3(1), pages 1-22, February.
  8. Antonella Ianni, 2007. "Learning Strict Nash Equilibria through Reinforcement," Economics Working Papers ECO2007/21, European University Institute.
  9. Martin Posch, 1997. "Cycling in a stochastic learning algorithm for normal form games," Journal of Evolutionary Economics, Springer, vol. 7(2), pages 193-207.
  10. Cross, John G., 1973. "A Stochastic Learning Model of Economic Behavior," The Quarterly Journal of Economics, MIT Press, vol. 87(2), pages 239-266, May.
  11. Michel Benaïm & Jörgen W. Weibull, 2003. "Deterministic Approximation of Stochastic Evolution in Games," Econometrica, Econometric Society, vol. 71(3), pages 873-903, May.
  12. Roth, Alvin E. & Erev, Ido, 1995. "Learning in extensive-form games: Experimental data and simple dynamic models in the intermediate term," Games and Economic Behavior, Elsevier, vol. 8(1), pages 164-212.
  13. Colin Camerer & Teck-Hua Ho, 1999. "Experience-weighted Attraction Learning in Normal Form Games," Econometrica, Econometric Society, vol. 67(4), pages 827-874, July.
  14. Izquierdo, Luis R. & Izquierdo, Segismundo S. & Gotts, Nicholas M. & Polhill, J. Gary, 2007. "Transient and asymptotic dynamics of reinforcement learning in games," Games and Economic Behavior, Elsevier, vol. 61(2), pages 259-276, November.



This information is provided to you by IDEAS at the Research Division of the Federal Reserve Bank of St. Louis using RePEc data.