Probability Matching and Reinforcement Learning
Abstract: Probability matching occurs when an action is chosen with a frequency equal to the probability of that action being the best choice. This sub-optimal behavior has been reported repeatedly by psychologists and experimental economists. We provide an evolutionary foundation for this phenomenon by showing that learning by reinforcement can lead to probability matching and that, if learning occurs sufficiently slowly, probability matching arises not only in choice frequencies but also in choice probabilities. Our results are completed by proving that there exists no quasi-linear reinforcement learning specification such that behavior is optimal in all environments where counterfactuals are observed.
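To illustrate the mechanism described in the abstract, the sketch below simulates a quasi-linear reinforcement rule (choice probabilities proportional to accumulated payoffs) in a binary prediction task where event A occurs with probability 0.7 and the payoffs of both actions are observed each round, i.e. the counterfactual case the abstract refers to. The 0.7/0.3 environment, unit payoffs, and uniform initial propensities are illustrative assumptions, not the paper's exact specification. Because both propensities grow at the rate of their expected payoffs, the long-run choice frequency of A settles near 0.7 rather than 1.0: probability matching rather than optimal play.

```python
import random

random.seed(1)

P_A = 0.7                      # probability that event A occurs each round (illustrative)
T = 100_000                    # number of rounds
prop = {"A": 1.0, "B": 1.0}    # initial propensities (small positive priors, assumed)
chose_a = 0

for _ in range(T):
    # Choose an action with probability proportional to its propensity
    # (a quasi-linear "matching" choice rule).
    total = prop["A"] + prop["B"]
    action = "A" if random.random() < prop["A"] / total else "B"
    chose_a += action == "A"

    # The event realizes and the payoffs of BOTH actions are observed
    # (counterfactuals observed), so both propensities are reinforced.
    event = "A" if random.random() < P_A else "B"
    prop["A"] += 1.0 if event == "A" else 0.0
    prop["B"] += 1.0 if event == "B" else 0.0

print(f"choice frequency of A:   {chose_a / T:.3f}")                        # ~0.70, not 1.0
print(f"choice probability of A: {prop['A'] / (prop['A'] + prop['B']):.3f}")  # ~0.70
```

By the law of large numbers the propensity share of A converges to 0.7, so both the choice probability and the realized choice frequency match the probability that A is the best choice, even though always choosing A would be optimal.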
Bibliographic Info: Paper provided by the Department of Economics, University of Leicester, in its series Discussion Papers in Economics with number 11/20.
Date of creation: Mar 2011
Contact details of provider:
Postal: Department of Economics, University of Leicester, University Road, Leicester LE1 7RH, UK
Phone: +44 (0)116 252 2887
Fax: +44 (0)116 252 2908
Web page: http://www2.le.ac.uk/departments/economics
JEL classification: C73 - Mathematical and Quantitative Methods - Game Theory and Bargaining Theory - Stochastic and Dynamic Games; Evolutionary Games
This paper has been announced in the following NEP Reports:
- NEP-ALL-2011-03-26 (All new papers)
- NEP-CBE-2011-03-26 (Cognitive & Behavioural Economics)
- NEP-EVO-2011-03-26 (Evolutionary Economics)
- NEP-GTH-2011-03-26 (Game Theory)
- NEP-NEU-2011-03-26 (Neuroeconomics)
References:
- Tilman Börgers & Rajiv Sarin, "Naive Reinforcement Learning With Endogenous Aspiration," ELSE working papers 037, ESRC Centre on Economics Learning and Social Evolution.
- Borgers, Tilman & Sarin, Rajiv, 2000. "Naive Reinforcement Learning with Endogenous Aspirations," International Economic Review, Department of Economics, University of Pennsylvania and Osaka University Institute of Social and Economic Research Association, vol. 41(4), pages 921-50, November.
- T. Borgers & R. Sarin, 2010. "Naïve Reinforcement Learning With Endogenous Aspirations," Levine's Working Paper Archive 381, David K. Levine.
- Rubinstein, Ariel, 2002. "Irrational diversification in multiple decision problems," European Economic Review, Elsevier, vol. 46(8), pages 1369-1378, September.
- Javier Rivas, 2008. "Learning within a Markovian Environment," Economics Working Papers ECO2008/13, European University Institute.
- Kosfeld, Michael & Droste, Edward & Voorneveld, Mark, 2002. "A myopic adjustment process leading to best-reply matching," Games and Economic Behavior, Elsevier, vol. 40(2), pages 270-298, August.
- Droste, E.J.R. & Kosfeld, M. & Voorneveld, M., 1998. "A Myopic Adjustment Process Leading to Best-Reply Matching," Discussion Paper 1998-111, Tilburg University, Center for Economic Research.
- Erev, Ido & Roth, Alvin E, 1998. "Predicting How People Play Games: Reinforcement Learning in Experimental Games with Unique, Mixed Strategy Equilibria," American Economic Review, American Economic Association, vol. 88(4), pages 848-81, September.
- Roth, Alvin E. & Erev, Ido, 1995. "Learning in extensive-form games: Experimental data and simple dynamic models in the intermediate term," Games and Economic Behavior, Elsevier, vol. 8(1), pages 164-212.
For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: (Mrs. Alexandra Mazzuoccolo).