
On the Convergence of Reinforcement Learning

Author Info

  • Alan Beggs

Abstract

This paper examines the convergence of payoffs and strategies in Erev and Roth's model of reinforcement learning. When all players use this rule, it eliminates iteratively dominated strategies, and in two-person constant-sum games average payoffs converge to the value of the game. In constant-sum games with unique equilibria, strategies converge if the equilibria are pure, and also if they are mixed in the case of 2 × 2 games. The long-run behaviour of the learning rule is governed by equations related to Maynard Smith's version of the replicator dynamic. Properties of the learning rule against general opponents are also studied. In particular, it is shown that the rule guarantees that the lim sup of a player's average payoffs is at least his minmax payoff.
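
As a rough illustration of the learning rule the abstract refers to, the sketch below simulates a cumulative reinforcement ("Erev-Roth style") rule in a 2 × 2 constant-sum game: each player keeps a propensity for every pure strategy, plays strategies with probability proportional to propensities, and adds the payoff just received to the propensity of the strategy played. The payoff matrix, initial propensities, horizon and variable names are illustrative assumptions, not taken from the paper.

import random

# Row player's payoffs in a 2 x 2 constant-sum game (matching pennies shifted to
# non-negative payoffs so propensities stay positive); the column player gets 2 - payoff.
A = [[2.0, 0.0],
     [0.0, 2.0]]

def choose(prop):
    # Pick a strategy with probability proportional to its propensity.
    r = random.random() * sum(prop)
    acc = 0.0
    for k, q in enumerate(prop):
        acc += q
        if r <= acc:
            return k
    return len(prop) - 1

def simulate(T=100000, q0=1.0):
    prop_row = [q0, q0]        # row player's propensities
    prop_col = [q0, q0]        # column player's propensities
    avg = 0.0
    for t in range(1, T + 1):
        i, j = choose(prop_row), choose(prop_col)
        u_row = A[i][j]
        u_col = 2.0 - u_row    # constant sum: payoffs add up to 2
        prop_row[i] += u_row   # reinforce only the strategy actually played
        prop_col[j] += u_col
        avg += (u_row - avg) / t   # running average of the row player's payoff
    return avg, prop_row, prop_col

if __name__ == "__main__":
    avg, pr, pc = simulate()
    # The convergence result described in the abstract suggests this average
    # should be close to the value of the game, which is 1.0 here.
    print("row player's average payoff:", avg)

For reference, Maynard Smith's ("adjusted") version of the replicator dynamic mentioned in the abstract is commonly written, for a single population with payoff matrix A, as \dot{x}_i = x_i [ (Ax)_i - x^T A x ] / (x^T A x), i.e. the standard replicator equation with its right-hand side divided by the average payoff.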

Download Info

If you experience problems downloading a file, check first that you have the proper application to view it. In case of further problems, read the IDEAS help page. Note that these files are not on the IDEAS site. Please be patient, as the files may be large.
File URL: http://www.economics.ox.ac.uk/materials/working_papers/paper96.pdf
Our checks indicate that this address may not be valid (404 NOT FOUND). If this is indeed the case, please notify the series contact, Caroline Wise.
Download Restriction: no

Bibliographic Info

Paper provided by University of Oxford, Department of Economics in its series Economics Series Working Papers with number 96.

Date of creation: 01 Mar 2002
Handle: RePEc:oxf:wpaper:96

Contact details of provider:
Postal: Manor Rd. Building, Oxford, OX1 3UQ
Web page: http://www.economics.ox.ac.uk/
More information through EDIRC

Related research

Keywords: reinforcement learning; games;

References

References listed on IDEAS
  1. Rustichini, Aldo, 1999. "Optimal Properties of Stimulus-Response Learning Models," Games and Economic Behavior, Elsevier, vol. 29(1-2), pages 244-273, October.
  2. Ed Hopkins, 2002. "Two Competing Models of How People Learn in Games," Econometrica, Econometric Society, vol. 70(6), pages 2141-2166, November.
  3. Erev, Ido & Roth, Alvin E., 1998. "Predicting How People Play Games: Reinforcement Learning in Experimental Games with Unique, Mixed Strategy Equilibria," American Economic Review, American Economic Association, vol. 88(4), pages 848-881, September.
  4. Gale, John & Binmore, Kenneth G. & Samuelson, Larry, 1995. "Learning to be imperfect: The ultimatum game," Games and Economic Behavior, Elsevier, vol. 8(1), pages 56-90.
  5. Arthur, W. Brian, 1993. "On Designing Economic Agents That Behave Like Human Agents," Journal of Evolutionary Economics, Springer, vol. 3(1), pages 1-22, February.
  6. S. Hart & A. Mas-Colell, 2010. "A Simple Adaptive Procedure Leading to Correlated Equilibrium," Levine's Working Paper Archive 572, David K. Levine.
  7. Hofbauer, Josef & Karl H. Schlag, "Sophisticated Imitation in Cyclic Games," Discussion Paper Serie B 427, University of Bonn, Germany, revised Mar 1998.
  8. Sergiu Hart & Andreu Mas-Colell, 1999. "A General Class of Adaptive Strategies," Economics Working Papers 373, Department of Economics and Business, Universitat Pompeu Fabra.
  9. T. Borgers & R. Sarin, 2010. "Learning Through Reinforcement and Replicator Dynamics," Levine's Working Paper Archive 380, David K. Levine.
  10. Kuan, Chung-Ming & White, Halbert, 1994. "Adaptive Learning with Nonlinear Dynamics Driven by Dependent Processes," Econometrica, Econometric Society, vol. 62(5), pages 1087-1114, September.
  11. J.-F. Laslier & R. Topol & B. Walliser, 1999. "A behavioral learning process in games," THEMA Working Papers 99-03, THEMA (THéorie Economique, Modélisation et Applications), Université de Cergy-Pontoise.
  12. Martin Posch, 1997. "Cycling in a stochastic learning algorithm for normal form games," Journal of Evolutionary Economics, Springer, vol. 7(2), pages 193-207.
  13. Colin Camerer & Teck-Hua Ho, 1999. "Experience-weighted Attraction Learning in Normal Form Games," Econometrica, Econometric Society, vol. 67(4), pages 827-874, July.
  14. Benaim, Michel & Hirsch, Morris W., 1999. "Mixed Equilibria and Dynamical Systems Arising from Fictitious Play in Perturbed Games," Games and Economic Behavior, Elsevier, vol. 29(1-2), pages 36-72, October.

Citations

Citations are extracted by the CitEc Project.

Cited by:
This item has more than 25 citations. To prevent cluttering this page, these citations are listed on a separate page.

Lists

This item is not listed on Wikipedia, on a reading list or among the top items on IDEAS.

Statistics

Access and download statistics

Corrections

When requesting a correction, please mention this item's handle: RePEc:oxf:wpaper:96. See general information about how to correct material in RePEc.

For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact Caroline Wise.

If you have authored this item and are not yet registered with RePEc, we encourage you to register here. This allows your profile to be linked to this item, and enables you to accept potential citations to this item that we are uncertain about.

If references are entirely missing, you can add them using this form.

If the full reference list includes an item that is present in RePEc but the system did not link to it, you can help with this form.

If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your profile, as there may be some citations waiting for confirmation.

Please note that corrections may take a couple of weeks to filter through the various RePEc services.