A Behavioral Learning Process in Games
Abstract
The paper studies a behavioral learning process in which an agent plays, at each period, an action with probability proportional to the cumulative utility the agent obtained with that action in the past. The so-called CPR (cumulative proportional reinforcement) learning rule and the dynamic process it induces are formally stated and compared to other reinforcement rules, as well as to fictitious play and the replicator dynamics.
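The rule described in the abstract can be sketched in code. This is a minimal illustration, not the paper's own specification: the function name, the fixed deterministic payoffs, and the unit initial weights are all assumptions made here for the example; the paper states the rule for general games.

```python
import random

def cpr_learning(payoff, n_actions, rounds=10000, initial=1.0, seed=0):
    """Sketch of cumulative proportional reinforcement (CPR) learning.

    At each period the agent picks action i with probability
    cum[i] / sum(cum), where cum[i] is the cumulative utility earned
    with action i so far (seeded with `initial` so every action starts
    with positive probability). `payoff` maps an action to a
    nonnegative utility; here it stands in for whatever payoff the
    environment or the other players generate.
    """
    rng = random.Random(seed)
    cum = [initial] * n_actions          # cumulative utility per action
    for _ in range(rounds):
        total = sum(cum)
        r = rng.random() * total         # inverse-CDF draw over actions
        a, acc = 0, cum[0]
        while acc < r:
            a += 1
            acc += cum[a]
        cum[a] += payoff(a)              # reinforce the chosen action
    total = sum(cum)
    return [c / total for c in cum]      # resulting mixed strategy

# Illustrative run: action 1 always pays 2, action 0 always pays 1,
# so CPR shifts probability mass toward action 1 over time.
probs = cpr_learning(lambda a: [1.0, 2.0][a], n_actions=2)
```

With unequal deterministic payoffs, the cumulative weight of the better action grows faster, so its choice probability drifts upward over the run.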
Download Info
To our knowledge, this item is not available for download.
Bibliographic Info
Paper provided by Université Paris X - Nanterre, U.F.R. de Sciences Économiques, Gestion, Mathématiques et Informatique, in its series Papers, number 99-03.
Length: 34 pages
Date of creation: 1999
Contact details of provider:
Postal: THEMA, Université de Paris X-Nanterre, U.F.R. de sciences économiques, gestion, mathématiques et informatique, 200, avenue de la République, 92001 Nanterre CEDEX.
Keywords: Learning; Game Theory; Behaviour
JEL classification:
- D83 - Microeconomics - - Information, Knowledge, and Uncertainty - - - Search, Learning, and Information
- C70 - Mathematical and Quantitative Methods - - Game Theory and Bargaining Theory - - - General
References:
- Roth, Alvin E. & Erev, Ido, 1995. "Learning in extensive-form games: Experimental data and simple dynamic models in the intermediate term," Games and Economic Behavior, Elsevier, vol. 8(1), pages 164-212.
- Börgers, Tilman & Sarin, Rajiv, 2010. "Learning Through Reinforcement and Replicator Dynamics," Levine's Working Paper Archive 380, David K. Levine.
- Börgers, Tilman & Sarin, Rajiv, 1997. "Learning Through Reinforcement and Replicator Dynamics," Journal of Economic Theory, Elsevier, vol. 77(1), pages 1-14, November.
- Börgers, Tilman & Sarin, Rajiv. "Learning Through Reinforcement and Replicator Dynamics," ELSE working papers 051, ESRC Centre on Economics Learning and Social Evolution.
- Erev, Ido & Roth, Alvin E, 1998. "Predicting How People Play Games: Reinforcement Learning in Experimental Games with Unique, Mixed Strategy Equilibria," American Economic Review, American Economic Association, vol. 88(4), pages 848-81, September.
- Kaniovski Yuri M. & Young H. Peyton, 1995. "Learning Dynamics in Games with Stochastic Perturbations," Games and Economic Behavior, Elsevier, vol. 11(2), pages 330-363, November.
- Cross, John G, 1973. "A Stochastic Learning Model of Economic Behavior," The Quarterly Journal of Economics, MIT Press, vol. 87(2), pages 239-66, May.
- Friedman, Daniel, 1991. "Evolutionary Games in Economics," Econometrica, Econometric Society, vol. 59(3), pages 637-66, May.
- Martin Posch, 1997. "Cycling in a stochastic learning algorithm for normal form games," Journal of Evolutionary Economics, Springer, vol. 7(2), pages 193-207.
- Mengel, Friederike, 2007. "Learning Across Games," Working Papers, Serie AD 2007-05, Instituto Valenciano de Investigaciones Económicas, S.A. (Ivie).
- Ianni, Antonella, 2011. "Learning Strict Nash Equilibria through Reinforcement," MPRA Paper 33936, University Library of Munich, Germany.
- Antonella Ianni, 2007. "Learning Strict Nash Equilibria through Reinforcement," Economics Working Papers ECO2007/21, European University Institute.
- Dürsch, Peter & Kolb, Albert & Oechssler, Jörg & Schipper, Burkhard C., 2005. "Rage Against the Machines: How Subjects Learn to Play Against Computers," Discussion Paper Series of SFB/TR 15 Governance and the Efficiency of Economic Systems 63, Free University of Berlin, Humboldt University of Berlin, University of Bonn, University of Mannheim, University of Munich.
- Dürsch, Peter & Kolb, Albert & Oechssler, Jörg & Schipper, Burkhard, 2005. "Rage Against the Machines - How Subjects Learn to Play Against Computers," Sonderforschungsbereich 504 Publications 05-36, Sonderforschungsbereich 504, Universität Mannheim & Sonderforschungsbereich 504, University of Mannheim.
- Peter Dürsch & Albert Kolb & Jörg Oechssler & Burkhard C. Schipper, 2005. "Rage Against the Machines: How Subjects Learn to Play Against Computers," Working Papers 0423, University of Heidelberg, Department of Economics, revised Oct 2005.
- Peter Dürsch & Albert Kolb & Jörg Oechssler & Burkhard C. Schipper, 2005. "Rage Against the Machines: How Subjects Learn to Play Against Computers," Bonn Econ Discussion Papers bgse31_2005, University of Bonn, Germany.
- Peter Duersch & Albert Kolb & Joerg Oechssler & Burkhard Schipper, 2005. "Rage Against the Machines: How Subjects Learn to Play Against Computers," Game Theory and Information 0510012, EconWPA.
- Burkhard C. Schipper & Jorg Oechssler & Albert Kolb, 2005. "Rage Against the Machines: How Subjects Learn to Play Against Computers," Working Papers 516, University of California, Davis, Department of Economics.
- Walter Gutjahr, 2006. "Interaction dynamics of two reinforcement learners," Central European Journal of Operations Research, Springer, vol. 14(1), pages 59-86, February.
- Izquierdo, Luis R. & Izquierdo, Segismundo S. & Gotts, Nicholas M. & Polhill, J. Gary, 2007. "Transient and asymptotic dynamics of reinforcement learning in games," Games and Economic Behavior, Elsevier, vol. 61(2), pages 259-276, November.
- Hopkins, Ed & Posch, Martin, 2004. "Attainability of Boundary Points under Reinforcement Learning," ESE Discussion Papers 79, Edinburgh School of Economics, University of Edinburgh.
- Hopkins, Ed & Posch, Martin, 2005. "Attainability of boundary points under reinforcement learning," Games and Economic Behavior, Elsevier, vol. 53(1), pages 110-125, October.
- Ed Hopkins & Martin Posch, 2003. "Attainability of Boundary Points under Reinforcement Learning," Levine's Working Paper Archive 506439000000000350, David K. Levine.
- Viktoriya Semeshenko & Alexis Garapin & Bernard Ruffieux & Mirta Gordon, 2010. "Information-driven coordination: experimental results with heterogeneous individuals," Theory and Decision, Springer, vol. 69(1), pages 119-142, July.
- Schuster, Stephan, 2010. "Network Formation with Adaptive Agents," MPRA Paper 27388, University Library of Munich, Germany.
- Cominetti, Roberto & Melo, Emerson & Sorin, Sylvain, 2010. "A payoff-based learning procedure and its application to traffic games," Games and Economic Behavior, Elsevier, vol. 70(1), pages 71-83, September.
- Alanyali, Murat, 2010. "A note on adjusted replicator dynamics in iterated games," Journal of Mathematical Economics, Elsevier, vol. 46(1), pages 86-98, January.
- Peter Duersch & Albert Kolb & Jörg Oechssler & Burkhard Schipper, 2010. "Rage against the machines: how subjects play against learning algorithms," Economic Theory, Springer, vol. 43(3), pages 407-430, June.
- Carlos Oyarzun & Rajiv Sarin, 2012. "Learning and Risk Aversion," Levine's Working Paper Archive 786969000000000572, David K. Levine.
For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact Thomas Krichel.