Learning to Respond: The Use of Heuristics in Dynamic Games
While many learning models have been proposed in the game-theoretic literature to track individuals' behavior, surprisingly little research has examined how well these models describe human adaptation in changing dynamic environments. Human behavior is often remarkably responsive to environmental change, on time scales ranging from millennia (evolution) to milliseconds (reflex). The goal of this paper is to evaluate several prominent learning models in light of a laboratory experiment on responsiveness in a low-information dynamic game subject to changes in its underlying structure. While history-dependent reinforcement learning models track convergence of play well in repeated games, they are shown to be ill-suited to these environments, in which satisficing models accurately predict behavior. A further objective is to determine which heuristics, or "rules of thumb," when incorporated into learning models, are responsible for accurately capturing responsiveness. Reference points and a particular type of experimentation prove important in both describing and predicting play.
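The satisficing models with reference points described above can be illustrated with a minimal sketch. This is a hypothetical implementation, not the paper's exact model: the agent holds an aspiration level (reference point), repeats its current action when the last payoff met that aspiration, experiments with an alternative otherwise, and lets the aspiration itself drift toward realized payoffs (endogenous aspirations, in the spirit of Borgers and Sarin). All names and parameter values here are illustrative assumptions.

```python
import random


class SatisficingLearner:
    """Illustrative aspiration-based (satisficing) learning rule.

    Hypothetical sketch, not the paper's estimated model: stay with the
    current action if the last payoff met the aspiration level; otherwise
    experiment with a randomly chosen alternative. The aspiration (the
    reference point) adapts toward realized payoffs.
    """

    def __init__(self, actions, aspiration=0.5, speed=0.2, explore=1.0):
        self.actions = list(actions)
        self.aspiration = aspiration   # reference point
        self.speed = speed             # aspiration adjustment rate
        self.explore = explore         # prob. of switching when dissatisfied
        self.current = random.choice(self.actions)

    def choose(self):
        # Inertia: keep playing the current action until dissatisfied.
        return self.current

    def update(self, payoff):
        # Satisficing step: experiment only if the payoff fell short
        # of the aspiration level.
        if payoff < self.aspiration and random.random() < self.explore:
            self.current = random.choice(
                [a for a in self.actions if a != self.current]
            )
        # Endogenous aspiration: drift toward realized payoffs, so the
        # reference point tracks changes in the underlying environment.
        self.aspiration += self.speed * (payoff - self.aspiration)
```

Because the reference point adapts, a structural change that lowers payoffs eventually pushes payoffs below aspiration and triggers experimentation, which is the kind of responsiveness the paper contrasts with history-dependent reinforcement learning.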
Date of creation: 27 Jan 2003
Note: Type of Document - Acrobat PDF; prepared on IBM PC; to print on HP; 34 pages; figures included.