
Attainability of Boundary Points under Reinforcement Learning

Author

  • Ed Hopkins
  • Martin Posch

Abstract

This paper investigates the properties of the most common form of reinforcement learning (the "basic model" of Erev and Roth, American Economic Review, 88, 848-881, 1998). Stochastic approximation theory has been used to analyse the local stability of fixed points under this learning process. However, as we show, when such points are on the boundary of the state space, for example, pure strategy equilibria, standard results from the theory of stochastic approximation do not apply. We offer what we believe to be the correct treatment of boundary points, and provide a new and more general result: this model of learning converges with zero probability to fixed points which are unstable under the Maynard Smith or adjusted version of the evolutionary replicator dynamics. For two player games these are the fixed points that are linearly unstable under the standard replicator dynamics.
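
For intuition, the basic Erev-Roth model works as follows: each player keeps a propensity for every action, chooses an action with probability proportional to its propensity, and then adds the realised payoff to the propensity of the action just played. The following is a minimal simulation sketch of that update rule in a hypothetical 2x2 game; the payoff matrices, initial propensities, and run length are illustrative assumptions rather than values taken from the paper.

    # Minimal sketch of the Erev-Roth "basic" reinforcement learning model
    # in a 2x2 game. Payoff matrices and initial propensities are hypothetical;
    # only the update rule follows the basic model: choose each action with
    # probability proportional to its propensity, then add the realised payoff
    # to the propensity of the action that was played.
    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical payoff matrices. A[i, j] is the row player's payoff when she
    # plays i and the column player plays j; B[j, i] is the column player's
    # payoff in the same situation. Payoffs are positive, as the model assumes.
    A = np.array([[3.0, 1.0],
                  [2.0, 2.0]])
    B = np.array([[3.0, 2.0],
                  [1.0, 2.0]])

    qA = np.array([1.0, 1.0])   # initial propensities (hypothetical)
    qB = np.array([1.0, 1.0])

    for t in range(50_000):
        pA = qA / qA.sum()      # choice probabilities = normalised propensities
        pB = qB / qB.sum()
        i = rng.choice(2, p=pA) # realised actions
        j = rng.choice(2, p=pB)
        qA[i] += A[i, j]        # reinforce only the action actually played
        qB[j] += B[j, i]

    print("row player's mixed strategy:   ", qA / qA.sum())
    print("column player's mixed strategy:", qB / qB.sum())

Because total propensities grow over time, the effective step size of the process shrinks, which is what makes stochastic approximation and the associated replicator dynamics the natural tools for studying its long-run behaviour; whether the choice probabilities can converge to a boundary point such as a pure strategy profile is precisely the question addressed above.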

Suggested Citation

  • Ed Hopkins & Martin Posch, 2003. "Attainability of Boundary Points under Reinforcement Learning," ESE Discussion Papers 79, Edinburgh School of Economics, University of Edinburgh.
  • Handle: RePEc:edn:esedps:79

    Download full text from publisher

    File URL: http://www.econ.ed.ac.uk/papers/id79_esedps.pdf
    Download Restriction: no


    References listed on IDEAS

    1. Erev, Ido & Roth, Alvin E., 1998. "Predicting How People Play Games: Reinforcement Learning in Experimental Games with Unique, Mixed Strategy Equilibria," American Economic Review, American Economic Association, vol. 88(4), pages 848-881, September.
    2. Laslier, Jean-Francois & Topol, Richard & Walliser, Bernard, 2001. "A Behavioral Learning Process in Games," Games and Economic Behavior, Elsevier, vol. 37(2), pages 340-366, November.
    3. Arthur, W. Brian, 1993. "On Designing Economic Agents That Behave Like Human Agents," Journal of Evolutionary Economics, Springer, vol. 3(1), pages 1-22, February.
    4. Hopkins, Ed, 2002. "Two Competing Models of How People Learn in Games," Econometrica, Econometric Society, vol. 70(6), pages 2141-2166, November.
    5. Rustichini, Aldo, 1999. "Optimal Properties of Stimulus-Response Learning Models," Games and Economic Behavior, Elsevier, vol. 29(1-2), pages 244-273, October.
    6. Ianni, A., 2002. "Reinforcement learning and the power law of practice: some analytical results," Discussion Paper Series in Economics and Econometrics 0203, Economics Division, School of Social Sciences, University of Southampton.
    7. Ellison, Glenn & Fudenberg, Drew, 2000. "Learning Purified Mixed Equilibria," Journal of Economic Theory, Elsevier, vol. 90(1), pages 84-115, January.
    8. Sandholm, William H., 2002. "Evolutionary Implementation and Congestion Pricing," Review of Economic Studies, Oxford University Press, vol. 69(3), pages 667-689.
    9. Posch, Martin, 1997. "Cycling in a stochastic learning algorithm for normal form games," Journal of Evolutionary Economics, Springer, vol. 7(2), pages 193-207.
    10. Borgers, Tilman & Sarin, Rajiv, 1997. "Learning Through Reinforcement and Replicator Dynamics," Journal of Economic Theory, Elsevier, vol. 77(1), pages 1-14, November.
    11. Hofbauer, Josef & Hopkins, Ed, 2005. "Learning in perturbed asymmetric games," Games and Economic Behavior, Elsevier, vol. 52(1), pages 133-152, July.
    12. Roth, Alvin E. & Erev, Ido, 1995. "Learning in extensive-form games: Experimental data and simple dynamic models in the intermediate term," Games and Economic Behavior, Elsevier, vol. 8(1), pages 164-212.
    13. Duffy, John & Hopkins, Ed, 2005. "Learning, information, and sorting in market entry games: theory and evidence," Games and Economic Behavior, Elsevier, vol. 51(1), pages 31-62, April.
    14. Camerer, Colin & Ho, Teck-Hua, 1999. "Experience-weighted Attraction Learning in Normal Form Games," Econometrica, Econometric Society, vol. 67(4), pages 827-874, July.
    15. Sarin, Rajiv & Vahid, Farshid, 1999. "Payoff Assessments without Probabilities: A Simple Dynamic Model of Choice," Games and Economic Behavior, Elsevier, vol. 28(2), pages 294-309, August.
    16. Monderer, Dov & Shapley, Lloyd S., 1996. "Potential Games," Games and Economic Behavior, Elsevier, vol. 14(1), pages 124-143, May.

    Citations

    Citations are extracted by the CitEc Project.

    Cited by:

    1. Alan Beggs, 2015. "Reference Points and Learning," Economics Series Working Papers 767, University of Oxford, Department of Economics.
    2. Fudenberg, Drew & Takahashi, Satoru, 2011. "Heterogeneous beliefs and local information in stochastic fictitious play," Games and Economic Behavior, Elsevier, vol. 71(1), pages 100-120, January.
    3. Oyarzun, Carlos & Ruf, Johannes, 2014. "Convergence in models with bounded expected relative hazard rates," Journal of Economic Theory, Elsevier, vol. 154(C), pages 229-244.
    4. Ianni, Antonella, 2014. "Learning strict Nash equilibria through reinforcement," Journal of Mathematical Economics, Elsevier, vol. 50(C), pages 148-155.
    5. Duffy, John & Hopkins, Ed, 2005. "Learning, information, and sorting in market entry games: theory and evidence," Games and Economic Behavior, Elsevier, vol. 51(1), pages 31-62, April.
    6. Josephson, Jens, 2008. "A numerical analysis of the evolutionary stability of learning rules," Journal of Economic Dynamics and Control, Elsevier, vol. 32(5), pages 1569-1599, May.
    7. Oyarzun, Carlos & Sarin, Rajiv, 2013. "Learning and risk aversion," Journal of Economic Theory, Elsevier, vol. 148(1), pages 196-225.
    8. Izquierdo, Luis R. & Izquierdo, Segismundo S. & Gotts, Nicholas M. & Polhill, J. Gary, 2007. "Transient and asymptotic dynamics of reinforcement learning in games," Games and Economic Behavior, Elsevier, vol. 61(2), pages 259-276, November.
    9. Jacques Durieu & Philippe Solal, 2012. "Models of Adaptive Learning in Game Theory," Chapters, in: Handbook of Knowledge and Economics, chapter 11, Edward Elgar Publishing.
    10. Schuster, Stephan, 2010. "Network Formation with Adaptive Agents," MPRA Paper 27388, University Library of Munich, Germany.
    11. Georgios Chasparis & Jeff Shamma & Anders Rantzer, 2015. "Nonconvergence to saddle boundary points under perturbed reinforcement learning," International Journal of Game Theory, Springer;Game Theory Society, vol. 44(3), pages 667-699, August.
    12. Hopkins, Ed, 2007. "Adaptive learning models of consumer behavior," Journal of Economic Behavior & Organization, Elsevier, vol. 64(3-4), pages 348-368.
    13. Erik Mohlin & Robert Ostling & Joseph Tao-yi Wang, 2014. "Learning by Imitation in Games: Theory, Field, and Laboratory," Economics Series Working Papers 734, University of Oxford, Department of Economics.
    14. Mario Bravo & Mathieu Faure, 2013. "Reinforcement Learning with Restrictions on the Action Set," AMSE Working Papers 1335, Aix-Marseille School of Economics, Marseille, France, revised 01 Jul 2013.
    15. Leslie, David S. & Collins, E.J., 2006. "Generalised weakened fictitious play," Games and Economic Behavior, Elsevier, vol. 56(2), pages 285-298, August.
    16. Carlos Oyarzun & Rajiv Sarin, 2012. "Learning and Risk Aversion," Levine's Working Paper Archive 786969000000000572, David K. Levine.
    17. Georgios Chasparis & Jeff Shamma, 2012. "Distributed Dynamic Reinforcement of Efficient Outcomes in Multiagent Coordination and Network Formation," Dynamic Games and Applications, Springer, vol. 2(1), pages 18-50, March.
    18. Conor Mayo-Wilson & Kevin Zollman & David Danks, 2013. "Wisdom of crowds versus groupthink: learning in groups and in isolation," International Journal of Game Theory, Springer;Game Theory Society, vol. 42(3), pages 695-723, August.
    19. Schuster, Stephan, 2012. "Applications in Agent-Based Computational Economics," MPRA Paper 47201, University Library of Munich, Germany.

    More about this item

    Keywords

    learning in games; reinforcement learning; stochastic approximation; replicator dynamics

    JEL classification:

    • C72 - Mathematical and Quantitative Methods - - Game Theory and Bargaining Theory - - - Noncooperative Games
    • C73 - Mathematical and Quantitative Methods - - Game Theory and Bargaining Theory - - - Stochastic and Dynamic Games; Evolutionary Games
    • D83 - Microeconomics - - Information, Knowledge, and Uncertainty - - - Search; Learning; Information and Knowledge; Communication; Belief; Unawareness
