An Adaptive Learning Model in Coordination Games
Abstract: In this paper, we provide a theoretical prediction of the way in which adaptive players behave in the long run in games with strict Nash equilibria. In the model, each player picks the action with the highest assessment, which is a weighted average of past payoffs, and updates the assessment of the chosen action in an adaptive manner. Almost sure convergence to a Nash equilibrium is shown under one of the following conditions: (i) at any non-Nash equilibrium action profile, there exists a player who can find another action which always gives better payoffs than his current payoff; (ii) all non-Nash equilibrium action profiles give the same payoff. We show almost sure convergence to a Nash equilibrium in the following games: pure coordination games, the battle of the sexes, the stag hunt game, and the first order statistic game. In the game of chicken and market entry games, players may end up playing a maximin action profile.
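The dynamic described in the abstract can be sketched in a few lines. The following is an illustrative simulation, not the paper's exact specification: two players in a 2x2 pure coordination game each track an assessment of every action, play the action with the highest assessment, and move the chosen action's assessment toward the realised payoff with a step size `lam`. The initial assessments and the value of `lam` are hypothetical choices made for the example.

```python
def simulate(q1, q2, lam=0.1, rounds=200):
    """Run the adaptive dynamic and return the final action profile.

    q1, q2 : lists of assessments (one per action) for players 1 and 2.
    lam    : weight placed on the most recent payoff in the update.
    """
    for _ in range(rounds):
        # Each player picks the action with the highest current assessment.
        a1 = max(range(2), key=lambda a: q1[a])
        a2 = max(range(2), key=lambda a: q2[a])
        # Pure coordination payoff: 1 if the players match, 0 otherwise.
        u = 1.0 if a1 == a2 else 0.0
        # Only the chosen action's assessment is updated (weighted average
        # of the old assessment and the payoff just received).
        q1[a1] = (1 - lam) * q1[a1] + lam * u
        q2[a2] = (1 - lam) * q2[a2] + lam * u
    return a1, a2

if __name__ == "__main__":
    # Miscoordinated start: player 1 favours action 0, player 2 action 1.
    a1, a2 = simulate([0.5, 0.3], [0.2, 0.6])
    print(a1, a2)  # → 1 1: miscoordination payoffs of 0 erode the chosen
    # actions' assessments until player 1 switches, after which both stay
    # at the strict Nash equilibrium (1, 1).
```

The mechanism visible here matches condition (ii) of the abstract: every miscoordinated profile yields the same payoff (zero), so assessments of miscoordinated actions decay until some player switches and the pair locks onto an equilibrium.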
Bibliographic Info: Paper provided by the Department of Economics, University of Birmingham in its series Discussion Papers, number 13-14.
Length: 43 pages
Date of creation: Jun 2013
Keywords: Adaptive Learning; Coordination Games
JEL classification:
- C72 - Mathematical and Quantitative Methods - - Game Theory and Bargaining Theory - - - Noncooperative Games
- D83 - Microeconomics - - Information, Knowledge, and Uncertainty - - - Search, Learning, and Information
This paper has been announced in the following NEP Reports:
- NEP-ALL-2013-07-05 (All new papers)
- NEP-CDM-2013-07-05 (Collective Decision-Making)
- NEP-EVO-2013-07-05 (Evolutionary Economics)
- NEP-GTH-2013-07-05 (Game Theory)
- NEP-HPE-2013-07-05 (History & Philosophy of Economics)
- NEP-MIC-2013-07-05 (Microeconomics)
References:
- Drew Fudenberg & David K. Levine, 1996. "The Theory of Learning in Games," Levine's Working Paper Archive 624, David K. Levine.
- Van Huyck, John B. & Battalio, Raymond C. & Beil, Richard O., 1990. "Tacit Coordination Games, Strategic Uncertainty, and Coordination Failure," American Economic Review, American Economic Association, vol. 80(1), pages 234-48, March.
- Chen, Yan & Khoroshilov, Yuri, 2003. "Learning under limited information," Games and Economic Behavior, Elsevier, vol. 44(1), pages 1-25, July.
- Colin Camerer & Teck-Hua Ho, 1999. "Experience-weighted Attraction Learning in Normal Form Games," Econometrica, Econometric Society, vol. 67(4), pages 827-874, July.
- Sarin, Rajiv, 1999. "Simple play in the Prisoner's Dilemma," Journal of Economic Behavior & Organization, Elsevier, vol. 40(1), pages 105-113, September.
- Cooper, Russell, et al., 1990. "Selection Criteria in Coordination Games: Some Experimental Results," American Economic Review, American Economic Association, vol. 80(1), pages 218-33, March.
- Alan Beggs, 2002. "On the Convergence of Reinforcement Learning," Economics Series Working Papers 96, University of Oxford, Department of Economics.
For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: Colin Rowat.