Almost-Rational Learning of Nash Equilibrium without Absolute Continuity
If players learn to play an infinitely repeated game using Bayesian learning, it is known that their strategies eventually approximate Nash equilibria of the repeated game under an absolute-continuity assumption on their prior beliefs. We suppose here that Bayesian learners do not start with such a "grain of truth", but that, with arbitrarily low probability, they revise beliefs that are performing badly. We show that this process converges in probability to a Nash equilibrium of the repeated game.
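The abstract describes learners who update beliefs in a Bayesian fashion but occasionally, with small probability, replace a belief that is predicting badly. The following is a minimal sketch of that idea under strong simplifying assumptions that are mine, not the paper's: the opponent plays an i.i.d. mixed action rather than a fully strategic repeated-game strategy, beliefs are a single point estimate rather than a posterior over strategies, and the revision rule, threshold, and parameter names (`eps`, `best_response`, etc.) are all hypothetical illustrations of the "almost-rational" step, not the authors' construction.

```python
import random

def best_response(belief_p1):
    # In a 2x2 coordination game (payoff 1 for matching actions, 0 otherwise),
    # the best response to a belief about Pr(opponent plays 1) is to match
    # whichever action is believed more likely.
    return 1 if belief_p1 >= 0.5 else 0

def simulate(rounds=5000, eps=0.02, seed=1):
    """Hypothetical illustration: Bayesian-style learning with rare revision.

    The learner starts from a prior with no "grain of truth" (belief far from
    the opponent's actual behavior) and, with small probability eps each round,
    revises a badly performing belief toward the empirical frequency.
    """
    rng = random.Random(seed)
    true_p = 0.8        # opponent plays action 1 with probability 0.8 (i.i.d.)
    belief = 0.1        # misspecified prior belief about Pr(opponent plays 1)
    ones = plays = payoff = 0
    for _ in range(rounds):
        opp = 1 if rng.random() < true_p else 0
        me = best_response(belief)
        payoff += 1 if me == opp else 0
        ones += opp
        plays += 1
        empirical = ones / plays
        # The "almost-rational" step: only when the belief is performing badly
        # (far from the data), and only with low probability eps, revise it.
        if abs(empirical - belief) > 0.25 and rng.random() < eps:
            belief = empirical
    return payoff / rounds, belief
```

Running `simulate()` shows the typical pattern suggested by the result: the learner is stuck best-responding to a wrong belief for a while, a rare revision pulls the belief toward the opponent's actual play, and average payoff thereafter approaches that of a best response, mimicking convergence toward equilibrium play in this toy setting.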
Date of creation: 01 Apr 2012
Contact details of provider: Postal: Manor Rd. Building, Oxford, OX1 3UQ
Web page: https://www.economics.ox.ac.uk/
Handle: RePEc:oxf:wpaper:602