Learning within a Markovian Environment
Abstract: We investigate learning in a setting where, each period, a population must choose between two actions whose payoffs are unknown to the players. The population learns by reinforcement, and the environment is non-stationary: the payoff of each action today is correlated with its payoffs in the past. We show that when players observe both realized and foregone payoffs, a suboptimal mixed strategy is selected. In contrast, when players observe only realized payoffs, a unique action is selected in the long run, and it is the optimal one whenever the actions perform sufficiently differently. When looking for efficient reinforcement learning rules, we find that it is optimal to disregard the information from foregone payoffs and to learn as if only realized payoffs were observed.
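The setting described in the abstract can be sketched in a short simulation. The code below is a minimal illustration, not the paper's model: the two-state payoff Markov chain, the transition probabilities, the payoff values, and both update rules (a "reinforce the period's winner" rule for the foregone-payoff regime, and a Cross-style rule for the realized-only regime) are all assumptions made for the example.

```python
import random

def simulate(T=50_000, observe_foregone=True, lr=0.05, seed=1):
    """One reinforcement learner facing two actions in a Markovian environment.

    Illustrative sketch only; parameters and update rules are assumptions,
    not the specification used in the paper.
    """
    random.seed(seed)
    state = 0
    # Payoffs of (action 0, action 1) in each environment state: each action
    # is best in one state, and state 0 is visited more often (see `stay`),
    # so action 0 is optimal on average.
    payoffs = {0: (1.0, 0.0), 1: (0.0, 1.0)}
    stay = {0: 0.9, 1: 0.6}   # asymmetric persistence -> state 0 more frequent
    p = 0.5                   # probability of playing action 0
    for _ in range(T):
        if random.random() > stay[state]:
            state = 1 - state
        a = 0 if random.random() < p else 1
        realized = payoffs[state][a]
        if observe_foregone:
            # Foregone payoffs observed: reinforce toward whichever action
            # paid more this period.
            foregone = payoffs[state][1 - a]
            winner = a if realized >= foregone else 1 - a
            p += lr * ((1.0 if winner == 0 else 0.0) - p)
        else:
            # Only realized payoffs observed: a Cross (1973)-style rule that
            # reinforces the chosen action in proportion to its payoff.
            if a == 0:
                p += lr * realized * (1 - p)
            else:
                p -= lr * realized * p
    return p
```

In this parameterization, the foregone-payoff rule tends to hover at an interior mixed strategy that tracks how often each action wins each period, while the realized-only rule tends to drift toward a pure action. That is in the spirit of, though of course not a proof of, the comparison the abstract draws between the two information regimes.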
Bibliographic Info: Paper provided by the European University Institute in its series Economics Working Papers with number ECO2008/13.
Date of creation: 2008
Contact details of provider:
Postal: Badia Fiesolana, Via dei Roccettini, 9, 50016 San Domenico di Fiesole (FI) Italy
Web page: http://www.eui.eu/ECO/
Keywords: Adaptive Learning; Markov Chains; Non-stationarity; Reinforcement Learning
Find related papers by JEL classification:
- C73 - Mathematical and Quantitative Methods - - Game Theory and Bargaining Theory - - - Stochastic and Dynamic Games; Evolutionary Games
This paper has been announced in the following NEP Reports:
- NEP-ALL-2008-02-16 (All new papers)
- NEP-CBA-2008-02-16 (Central Banking)
- NEP-CBE-2008-02-16 (Cognitive & Behavioural Economics)
- NEP-EVO-2008-02-16 (Evolutionary Economics)
- NEP-GTH-2008-02-16 (Game Theory)
References and citing works:
- Erev, Ido & Roth, Alvin E., 1998. "Predicting How People Play Games: Reinforcement Learning in Experimental Games with Unique, Mixed Strategy Equilibria," American Economic Review, American Economic Association, vol. 88(4), pages 848-881, September.
- Cross, John G., 1973. "A Stochastic Learning Model of Economic Behavior," The Quarterly Journal of Economics, MIT Press, vol. 87(2), pages 239-266, May.
- Ellison, Glenn & Fudenberg, Drew, 1995. "Word-of-Mouth Communication and Social Learning," The Quarterly Journal of Economics, MIT Press, vol. 110(1), pages 93-125, February.
- A. Banerjee & Drew Fudenberg, 2010. "Word-of-Mouth Communication and Social Learning," Levine's Working Paper Archive 425, David K. Levine.
- Fudenberg, Drew & Ellison, Glenn, 1995. "Word-of-Mouth Communication and Social Learning," Scholarly Articles 3196300, Harvard University Department of Economics.
- Yves Ortiz & Martin Schüle, 2011. "Limited Rationality and Strategic Interaction: A Probabilistic Multi-Agent Model," Working Papers 11.08, Swiss National Bank, Study Center Gerzensee.
- Rivas, Javier, 2013. "Probability matching and reinforcement learning," Journal of Mathematical Economics, Elsevier, vol. 49(1), pages 17-21.
For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: (Marcia Gastaldo).