Printed from https://ideas.repec.org/p/eui/euiwps/eco2008-13.html

Learning within a Markovian Environment

Author

Listed:
  • Javier Rivas

Abstract

We investigate learning in a setting where each period a population has to choose between two actions and the payoff of each action is unknown to the players. The population learns according to reinforcement, and the environment is non-stationary, meaning that the payoff of each action today is correlated with the payoff of each action in the past. We show that when players observe realized and foregone payoffs, a suboptimal mixed strategy is selected. On the other hand, when players observe only realized payoffs, a unique action is selected in the long run, and it is optimal provided the actions perform sufficiently differently. When looking for efficient reinforcement learning rules, we find that it is optimal to disregard the information from foregone payoffs and to learn as if only realized payoffs were observed.
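The setting in the abstract can be sketched as a two-armed choice problem whose better arm follows a two-state Markov chain. The sketch below is a stylized illustration, not the paper's model: the two update rules (a Cross-style reinforcement rule when only realized payoffs are observed, and a simple move-toward-the-winner rule when foregone payoffs are also observed), the payoff values, and the persistence parameter are all illustrative assumptions.

```python
import random

def simulate(observe_foregone, periods=50000, persistence=0.9,
             high=1.0, low=0.2, step=0.05, seed=1):
    """Reinforcement learning over two actions whose payoffs follow a
    two-state Markov chain: in state 0 action A pays `high` and B pays
    `low`; in state 1 the roles are reversed."""
    rng = random.Random(seed)
    state = 0        # which action currently pays `high`
    p = 0.5          # current probability of choosing action A
    total = 0.0
    for _ in range(periods):
        # Non-stationary environment: today's payoffs are correlated
        # with yesterday's through the persistence of the state.
        if rng.random() >= persistence:
            state = 1 - state
        pay_a, pay_b = (high, low) if state == 0 else (low, high)
        chose_a = rng.random() < p
        total += pay_a if chose_a else pay_b
        if observe_foregone:
            # Both payoffs observed: shift toward this period's winner.
            winner = 1.0 if pay_a > pay_b else 0.0
            p += step * (winner - p)
        else:
            # Only the realized payoff observed: Cross-style rule that
            # reinforces the chosen action in proportion to its payoff.
            if chose_a:
                p += step * pay_a * (1.0 - p)
            else:
                p -= step * pay_b * p
    return total / periods, p
```

In this stylized version, the realized-only rule has absorbing pure strategies (once p reaches 0 or 1 it never moves again), echoing the paper's finding that a unique action is selected in the long run, whereas the full-observation rule keeps chasing the current state and so maintains a mixed choice pattern.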

Suggested Citation

  • Javier Rivas, 2008. "Learning within a Markovian Environment," Economics Working Papers ECO2008/13, European University Institute.
  • Handle: RePEc:eui:euiwps:eco2008/13

    Download full text from publisher

    File URL: http://cadmus.iue.it/dspace/bitstream/1814/8084/1/ECO-2008-13.pdf
    File Function: main text
    Download Restriction: no
    ---><---

    References listed on IDEAS

    1. Erev, Ido & Roth, Alvin E, 1998. "Predicting How People Play Games: Reinforcement Learning in Experimental Games with Unique, Mixed Strategy Equilibria," American Economic Review, American Economic Association, vol. 88(4), pages 848-881, September.
    2. Glenn Ellison & Drew Fudenberg, 1995. "Word-of-Mouth Communication and Social Learning," The Quarterly Journal of Economics, President and Fellows of Harvard College, vol. 110(1), pages 93-125.
    3. John G. Cross, 1973. "A Stochastic Learning Model of Economic Behavior," The Quarterly Journal of Economics, President and Fellows of Harvard College, vol. 87(2), pages 239-266.

    Citations

    Citations are extracted by the CitEc Project.


    Cited by:

    1. Yves Ortiz & Martin Schüle, 2011. "Limited Rationality and Strategic Interaction: A Probabilistic Multi-Agent Model," Working Papers 11.08, Swiss National Bank, Study Center Gerzensee.
    2. Rivas, Javier, 2013. "Probability matching and reinforcement learning," Journal of Mathematical Economics, Elsevier, vol. 49(1), pages 17-21.

    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. Oyarzun, Carlos & Ruf, Johannes, 2014. "Convergence in models with bounded expected relative hazard rates," Journal of Economic Theory, Elsevier, vol. 154(C), pages 229-244.
    2. Ianni, A., 2002. "Reinforcement learning and the power law of practice: some analytical results," Discussion Paper Series In Economics And Econometrics 203, Economics Division, School of Social Sciences, University of Southampton.
    3. Osili, Una Okonkwo & Paulson, Anna, 2014. "Crises and confidence: Systemic banking crises and depositor behavior," Journal of Financial Economics, Elsevier, vol. 111(3), pages 646-660.
    4. Ponti, Giovanni, 2000. "Continuous-time evolutionary dynamics: theory and practice," Research in Economics, Elsevier, vol. 54(2), pages 187-214, June.
    5. Apesteguia, Jose & Huck, Steffen & Oechssler, Jorg, 2007. "Imitation--theory and experimental evidence," Journal of Economic Theory, Elsevier, vol. 136(1), pages 217-235, September.
    6. Hopkins, Ed, 2007. "Adaptive learning models of consumer behavior," Journal of Economic Behavior & Organization, Elsevier, vol. 64(3-4), pages 348-368.
    7. Oyarzun, Carlos & Sarin, Rajiv, 2013. "Learning and risk aversion," Journal of Economic Theory, Elsevier, vol. 148(1), pages 196-225.
    8. Jiayang Li & Zhaoran Wang & Yu Marco Nie, 2023. "Wardrop Equilibrium Can Be Boundedly Rational: A New Behavioral Theory of Route Choice," Papers 2304.02500, arXiv.org, revised Feb 2024.
    9. Bernergård, Axel & Mohlin, Erik, 2019. "Evolutionary selection against iteratively weakly dominated strategies," Games and Economic Behavior, Elsevier, vol. 117(C), pages 82-97.
    10. Jonathan Newton, 2018. "Evolutionary Game Theory: A Renaissance," Games, MDPI, vol. 9(2), pages 1-67, May.
    11. Innocenti, Stefania & Cowan, Robin, 2019. "Self-efficacy beliefs and imitation: A two-armed bandit experiment," European Economic Review, Elsevier, vol. 113(C), pages 156-172.
    12. Shu-Heng Chen & Yi-Lin Hsieh, 2011. "Reinforcement Learning in Experimental Asset Markets," Eastern Economic Journal, Palgrave Macmillan;Eastern Economic Association, vol. 37(1), pages 109-133.
    13. Aloys Prinz, 2019. "Learning (Not) to Evade Taxes," Games, MDPI, vol. 10(4), pages 1-18, September.
    14. Tilman Börgers & Antonio J. Morales & Rajiv Sarin, 2004. "Expedient and Monotone Learning Rules," Econometrica, Econometric Society, vol. 72(2), pages 383-405, March.
    15. Tassos Patokos, 2014. "Introducing Disappointment Dynamics and Comparing Behaviors in Evolutionary Games: Some Simulation Results," Games, MDPI, vol. 5(1), pages 1-25, January.
    16. Laslier, Jean-Francois & Topol, Richard & Walliser, Bernard, 2001. "A Behavioral Learning Process in Games," Games and Economic Behavior, Elsevier, vol. 37(2), pages 340-366, November.
    17. Segismundo S. Izquierdo & Luis R. Izquierdo & Nicholas M. Gotts, 2008. "Reinforcement Learning Dynamics in Social Dilemmas," Journal of Artificial Societies and Social Simulation, Journal of Artificial Societies and Social Simulation, vol. 11(2), pages 1-1.
    18. Atanasios Mitropoulos, 2001. "Learning Under Little Information: An Experiment on Mutual Fate Control," Game Theory and Information 0110003, University Library of Munich, Germany.
    19. Jaspersen, Johannes G. & Montibeller, Gilberto, 2020. "On the learning patterns and adaptive behavior of terrorist organizations," European Journal of Operational Research, Elsevier, vol. 282(1), pages 221-234.
    20. Atanasios Mitropoulos, 2001. "On the Measurement of the Predictive Success of Learning Theories in Repeated Games," Experimental 0110001, University Library of Munich, Germany.

    More about this item

    Keywords

    Adaptive Learning; Markov Chains; Non-stationarity; Reinforcement Learning;

    JEL classification:

    • C73 - Mathematical and Quantitative Methods - - Game Theory and Bargaining Theory - - - Stochastic and Dynamic Games; Evolutionary Games


    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:eui:euiwps:eco2008/13. See general information about how to correct material in RePEc.

    If you have authored this item and are not yet registered with RePEc, we encourage you to do it here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.

    If CitEc recognized a bibliographic reference but did not link an item in RePEc to it, you can help with this form.

    If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

    For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: Cécile Brière (email available below). General contact details of provider: https://edirc.repec.org/data/deiueit.html .

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.