Revisiting log-linear learning: Asynchrony, completeness and payoff-based implementation
Log-linear learning is a learning algorithm that provides guarantees on the percentage of time that the action profile will be at a potential maximizer in potential games. The traditional analysis of log-linear learning focuses on explicitly computing the stationary distribution and hence requires a highly structured environment. Since the appeal of log-linear learning is not solely the explicit form of the stationary distribution, we ask to what degree one can relax the structural assumptions while maintaining that only potential function maximizers are stochastically stable. In this paper, we introduce slight variants of log-linear learning that provide the desired asymptotic guarantees while relaxing the structural assumptions to include synchronous updates, time-varying action sets, and limitations in information available to the players. The motivation for these relaxations stems from the applicability of log-linear learning to the control of multi-agent systems, where these structural assumptions are unrealistic from an implementation perspective.
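The basic dynamics can be illustrated with a minimal sketch. In standard log-linear learning, at each step a single player is selected uniformly at random and revises its action, choosing action a with probability proportional to exp(beta * u_i(a, a_-i)); as beta grows, the stationary distribution concentrates on potential maximizers. The sketch below assumes a two-player identical-interest coordination game (which is a potential game whose potential equals the common payoff); the game, the function names, and the parameter values are illustrative, not taken from the paper.

```python
import math
import random

def log_linear_learning(utility, n_actions, beta, steps, seed=0):
    """Simulate asynchronous log-linear learning for two players.

    `utility(player, joint_action)` returns player i's payoff at the
    joint action. Each step, one uniformly chosen player revises its
    action, picking a with probability proportional to
    exp(beta * u_i(a, a_{-i})). Returns empirical visit frequencies.
    """
    rng = random.Random(seed)
    n_players = 2  # sketch is specialized to two players
    profile = [0] * n_players
    visits = {}
    for _ in range(steps):
        i = rng.randrange(n_players)
        # Logit (softmax) weights over player i's actions.
        weights = []
        for a in range(n_actions):
            trial = list(profile)
            trial[i] = a
            weights.append(math.exp(beta * utility(i, tuple(trial))))
        # Sample the revised action proportionally to its weight.
        r = rng.random() * sum(weights)
        for a, w in enumerate(weights):
            r -= w
            if r <= 0:
                profile[i] = a
                break
        key = tuple(profile)
        visits[key] = visits.get(key, 0) + 1
    return {k: v / steps for k, v in visits.items()}

# Identical-interest coordination game (hypothetical example):
# joint action (1, 1) is the unique potential maximizer.
def u(player, a):
    return 1.0 if a == (1, 1) else (0.5 if a == (0, 0) else 0.0)

freq = log_linear_learning(u, n_actions=2, beta=5.0, steps=20000)
```

For a moderately large beta such as this, the empirical frequency of the potential maximizer (1, 1) dominates, consistent with the stationary distribution being proportional to exp(beta * potential).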