
Learning within a Markovian Environment

  • Javier Rivas

We investigate learning in a setting where, each period, a population must choose between two actions whose payoffs are unknown to the players. The population learns according to reinforcement, and the environment is non-stationary: the payoff of each action today is correlated with its payoff in the past. We show that when players observe both realized and foregone payoffs, a suboptimal mixed strategy is selected. When players observe only realized payoffs, by contrast, a unique action is selected in the long run, and it is the optimal one whenever the two actions perform sufficiently differently. When looking for efficient reinforcement learning rules, we find that it is optimal to disregard the information from foregone payoffs and to learn as if only realized payoffs were observed.
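To give an intuition for the two information regimes the abstract contrasts, here is a minimal Python simulation sketch. It is not the paper's model: the two-state Markov payoff process, the transition probability Q, the step size, and both Cross-style update rules are illustrative assumptions, chosen only to show how correlated payoffs interact with the realized-versus-foregone distinction.

```python
import random

# Toy two-state Markovian environment (assumed values, not the paper's
# calibration): the hidden state flips with probability Q each period,
# and each action's payoff depends on the current state, so payoffs
# today are correlated with payoffs in the past.
PAYOFFS = {0: (0.8, 0.3), 1: (0.4, 0.6)}  # state -> (payoff of A, payoff of B)
Q = 0.1  # low Q = persistent states = strongly correlated payoffs over time

def simulate(observe_foregone, periods=100_000, seed=0):
    """Cross-style reinforcement learning over two actions.

    p is the probability assigned to action A. When foregone payoffs are
    observed, p moves toward whichever action currently pays more; when
    only realized payoffs are observed, just the chosen action is
    reinforced, in proportion to its realized payoff. Both rules are
    illustrative assumptions, not the rules analyzed in the paper.
    """
    rng = random.Random(seed)
    state, p, step = 0, 0.5, 0.01
    for _ in range(periods):
        if rng.random() < Q:              # Markov transition of the environment
            state = 1 - state
        pay_a, pay_b = PAYOFFS[state]
        chose_a = rng.random() < p
        realized = pay_a if chose_a else pay_b
        if observe_foregone:
            p += step * (pay_a - pay_b)   # chase the currently better action
        else:
            # Reinforce only the chosen action by its realized payoff.
            p += step * realized * ((1 - p) if chose_a else -p)
        p = min(max(p, 0.0), 1.0)
    return p

for regime in (True, False):
    print(f"foregone observed={regime}: long-run P(choose A) = "
          f"{simulate(observe_foregone=regime):.2f}")
```

In this toy setup, the foregone-payoff rule keeps adjusting as the state flips, so the population ends up mixing, while the realized-only rule tends to lock onto a single action; this echoes, but does not reproduce, the contrast established in the paper.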


File URL: http://cadmus.iue.it/dspace/bitstream/1814/8084/1/ECO-2008-13.pdf
File Function: main text
Download Restriction: no

Paper provided by European University Institute in its series Economics Working Papers with number ECO2008/13.


Date of creation: 2008
Handle: RePEc:eui:euiwps:eco2008/13
Contact details of provider: Postal: Badia Fiesolana, Via dei Roccettini 9, 50014 San Domenico di Fiesole (FI), Italy
Phone: +39-055-4685.982
Fax: +39-055-4685.902
Web page: http://www.eui.eu/ECO/





This information is provided to you by IDEAS at the Research Division of the Federal Reserve Bank of St. Louis using RePEc data.