Probability matching and reinforcement learning

Author Info

  • Rivas, Javier

Abstract

Probability matching occurs when an action is chosen with a frequency equal to the probability of that action being the best choice. This sub-optimal behavior has been reported repeatedly by psychologists and experimental economists. We provide an evolutionary foundation for this phenomenon by showing that reinforcement learning can lead to probability matching and that, if learning occurs sufficiently slowly, probability matching arises not only in choice frequencies but also in choice probabilities. We complete our results by proving that there exists no quasi-linear reinforcement learning specification under which behavior is optimal for all environments where counterfactuals are observed.
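
To illustrate the mechanism behind the first result, the following is a minimal simulation sketch in Python; it is not the paper's exact specification. It assumes a simple reinforcement rule in which the realized payoff of every action is observed (counterfactuals observed) and added to that action's propensity, and the agent chooses each action with probability proportional to its accumulated propensity. The function name simulate and the parameters p_best, periods, and seed are illustrative.

```python
import random

def simulate(p_best=0.7, periods=100_000, seed=1):
    """Proportional reinforcement learner on two actions, counterfactuals observed.

    Each period exactly one action is "correct" and pays 1: action A with
    probability p_best, action B otherwise.  Every action's realized payoff is
    added to its propensity, and the agent chooses an action with probability
    proportional to the accumulated propensities.
    """
    rng = random.Random(seed)
    propensity = {"A": 1.0, "B": 1.0}   # small positive priors so both actions start viable
    chose_a = 0
    for _ in range(periods):
        total = propensity["A"] + propensity["B"]
        if rng.random() < propensity["A"] / total:
            chose_a += 1                              # agent chose action A this period
        best = "A" if rng.random() < p_best else "B"  # which action pays off this period
        propensity[best] += 1.0                       # reinforce the action that was best
    return chose_a / periods

if __name__ == "__main__":
    freq = simulate()
    print(f"long-run choice frequency of A: {freq:.3f} (probability A is best: 0.70)")
```

Under these assumed dynamics the propensities grow in proportion to how often each action is best, so the long-run choice frequency of A approaches 0.7 rather than the optimal 1.0; and because later observations move the (large) propensities only slightly, the choice probabilities themselves also settle near the matching value, echoing the slow-learning part of the abstract.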

Download Info

File URL: http://www.sciencedirect.com/science/article/pii/S0304406812000778
Download Restriction: Full text for ScienceDirect subscribers only

As access to this document is restricted, you may want to look for a different version under "Related research" below.

Bibliographic Info

Article provided by Elsevier in its journal Journal of Mathematical Economics.

Volume (Year): 49 (2013)
Issue: 1
Pages: 17-21

Handle: RePEc:eee:mateco:v:49:y:2013:i:1:p:17-21

Contact details of provider:
Web page: http://www.elsevier.com/locate/jmateco

Related research

Keywords: Probability matching; Reinforcement learning

References

References listed on IDEAS
  1. Kosfeld, Michael & Droste, Edward & Voorneveld, Mark, 2002. "A myopic adjustment process leading to best-reply matching," Games and Economic Behavior, Elsevier, vol. 40(2), pages 270-298, August.
  2. Rivas, Javier, 2008. "Learning within a Markovian Environment," Economics Working Papers ECO2008/13, European University Institute.
  3. Roth, Alvin E. & Erev, Ido, 1995. "Learning in extensive-form games: Experimental data and simple dynamic models in the intermediate term," Games and Economic Behavior, Elsevier, vol. 8(1), pages 164-212.
  4. Börgers, Tilman & Sarin, Rajiv, 2000. "Naive Reinforcement Learning with Endogenous Aspirations," International Economic Review, Department of Economics, University of Pennsylvania and Osaka University Institute of Social and Economic Research Association, vol. 41(4), pages 921-950, November.
  5. Rubinstein, Ariel, 2002. "Irrational diversification in multiple decision problems," European Economic Review, Elsevier, vol. 46(8), pages 1369-1378, September.
  6. Erev, Ido & Roth, Alvin E., 1998. "Predicting How People Play Games: Reinforcement Learning in Experimental Games with Unique, Mixed Strategy Equilibria," American Economic Review, American Economic Association, vol. 88(4), pages 848-881, September.
