
Learning strict Nash equilibria through reinforcement


Author Info

  • Ianni, Antonella

    Abstract

    This paper studies the analytical properties of the reinforcement learning model proposed in Erev and Roth (1998), also termed cumulative reinforcement learning in Laslier et al. (2001).
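
    As a concrete illustration, the sketch below simulates the cumulative reinforcement rule in the sense of Erev and Roth (1998): each player keeps a propensity for every action, chooses actions with probabilities proportional to those propensities (the law of effect), and adds the realized payoff to the chosen action's propensity, so that behavior adjusts ever more slowly as propensities accumulate (the power law of practice). This is a minimal Python sketch, not the paper's own code; the 2x2 coordination game, the uniform initial propensities, and the horizon are illustrative assumptions.

        import numpy as np

        # Minimal sketch of cumulative reinforcement learning (Erev-Roth style).
        # The game and all parameter values below are illustrative assumptions.
        rng = np.random.default_rng(0)

        # A 2x2 coordination game with two strict Nash equilibria: both players
        # choosing action 0, or both choosing action 1. Payoffs are nonnegative
        # so propensities stay positive.
        A = np.array([[2.0, 0.0],
                      [0.0, 1.0]])   # row player's payoffs
        B = A.T                      # column player's payoffs (symmetric game)

        q1 = np.ones(2)              # initial propensities (assumed uniform)
        q2 = np.ones(2)

        for t in range(10_000):
            p1 = q1 / q1.sum()       # law of effect: choose in proportion
            p2 = q2 / q2.sum()       # to accumulated propensities
            a1 = rng.choice(2, p=p1)
            a2 = rng.choice(2, p=p2)
            q1[a1] += A[a1, a2]      # reinforce the chosen action by the
            q2[a2] += B[a1, a2]      # realized payoff (cumulative updating)

        # Propensity shares approximate the long-run choice probabilities;
        # play typically concentrates on one of the two strict equilibria.
        print(q1 / q1.sum(), q2 / q2.sum())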

    Download Info

    File URL: http://www.sciencedirect.com/science/article/pii/S0304406813000323
    Download Restriction: Full text for ScienceDirect subscribers only

    As access to this document is restricted, you may want to look for a different version under "Related research" (further below).

    Bibliographic Info

    Article provided by Elsevier in its journal Journal of Mathematical Economics.

    Volume (Year): 50 (2014)
    Issue: C
    Pages: 148-155

    Handle: RePEc:eee:mateco:v:50:y:2014:i:c:p:148-155

    Contact details of provider:
    Web page: http://www.elsevier.com/locate/jmateco

    Related research

    Keywords: Learning; Law of effect; Power law of practice; Strict Nash equilibrium; Replicator dynamics

    References

    References listed on IDEAS
    1. Ed Hopkins & Martin Posch, 2003. "Attainability of Boundary Points under Reinforcement Learning," Levine's Working Paper Archive 506439000000000350, David K. Levine.
    2. Vega-Redondo, Fernando, 2003. "Economics and the Theory of Games," Cambridge Books, Cambridge University Press, number 9780521775908, April.
    3. Ritzberger, Klaus & Weibull, Jörgen W., 1993. "Evolutionary Selection in Normal Form Games," Working Paper Series 383, Research Institute of Industrial Economics.
    4. Laslier, J.-F. & Topol, R. & Walliser, B., 1999. "A Behavioral Learning Process in Games," Papers 99-03, Paris X - Nanterre, U.F.R. de Sc. Ec. Gest. Maths Infor.
    5. Drew Fudenberg & David K. Levine, 1998. "The Theory of Learning in Games," MIT Press Books, The MIT Press, edition 1, volume 1, number 0262061945, December.
    6. Tilman Börgers & Rajiv Sarin. "Learning Through Reinforcement and Replicator Dynamics," ELSE working papers 051, ESRC Centre on Economics Learning and Social Evolution.
    7. Izquierdo, Luis R. & Izquierdo, Segismundo S. & Gotts, Nicholas M. & Polhill, J. Gary, 2007. "Transient and asymptotic dynamics of reinforcement learning in games," Games and Economic Behavior, Elsevier, vol. 61(2), pages 259-276, November.
    8. Ianni, Antonella, 2011. "Learning Strict Nash Equilibria through Reinforcement," MPRA Paper 33936, University Library of Munich, Germany.
    9. Colin Camerer & Teck-Hua Ho, 1999. "Experience-weighted Attraction Learning in Normal Form Games," Econometrica, Econometric Society, vol. 67(4), pages 827-874, July.
    10. Roth, Alvin E. & Erev, Ido, 1995. "Learning in extensive-form games: Experimental data and simple dynamic models in the intermediate term," Games and Economic Behavior, Elsevier, vol. 8(1), pages 164-212.
    11. Arthur, W Brian, 1993. "On Designing Economic Agents That Behave Like Human Agents," Journal of Evolutionary Economics, Springer, vol. 3(1), pages 1-22, February.
    12. Beggs, A.W., 2005. "On the convergence of reinforcement learning," Journal of Economic Theory, Elsevier, vol. 122(1), pages 1-36, May.
    13. Ed Hopkins, 2001. "Two Competing Models of How People Learn in Games," Levine's Working Paper Archive 625018000000000226, David K. Levine.
    14. Erev, Ido & Roth, Alvin E, 1998. "Predicting How People Play Games: Reinforcement Learning in Experimental Games with Unique, Mixed Strategy Equilibria," American Economic Review, American Economic Association, vol. 88(4), pages 848-881, September.
    15. Michel Benaïm & Jörgen W. Weibull, 2003. "Deterministic Approximation of Stochastic Evolution in Games," Econometrica, Econometric Society, vol. 71(3), pages 873-903, May.
    16. Cross, John G, 1973. "A Stochastic Learning Model of Economic Behavior," The Quarterly Journal of Economics, MIT Press, vol. 87(2), pages 239-266, May.
    17. Martin Posch, 1997. "Cycling in a stochastic learning algorithm for normal form games," Journal of Evolutionary Economics, Springer, vol. 7(2), pages 193-207.


    Corrections

    When requesting a correction, please mention this item's handle: RePEc:eee:mateco:v:50:y:2014:i:c:p:148-155. See general information about how to correct material in RePEc.

    For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: (Zhang, Lei).
