Estimating learning models from experimental data
We study the statistical properties of three estimation methods for a model of learning that is often fitted to experimental data: quadratic deviation measures without unobserved heterogeneity, and maximum likelihood with and without unobserved heterogeneity. After discussing identification issues, we show that the estimators are consistent and derive their asymptotic distributions. Using Monte Carlo simulations, we show that ignoring unobserved heterogeneity can lead to seriously biased estimates in samples of the length typical of actual experiments. Better small-sample properties are obtained when unobserved heterogeneity is introduced: rather than estimating a parameter for each individual, the individual parameters are treated as random variables, and the distribution of those random variables is estimated.
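The random-effects idea in the abstract can be sketched in code. The snippet below is a minimal illustration, not the paper's exact specification: it assumes a Cross-type reinforcement learning rule with a single learning-rate parameter, and a lognormal mixing distribution for the individual learning rates. The homogeneous log-likelihood pools all subjects under one parameter; the heterogeneous version integrates each subject's parameter out by Monte Carlo simulation.

```python
import numpy as np

def choice_probs(lam, choices, rewards, n_actions=2):
    """Sequential probabilities of the observed choices under a simple
    reinforcement (Cross-type) learning rule with learning rate `lam`."""
    attractions = np.ones(n_actions) / n_actions  # start uniform
    probs = np.empty(len(choices))
    for t, (a, r) in enumerate(zip(choices, rewards)):
        p = attractions / attractions.sum()
        probs[t] = p[a]
        attractions[a] += lam * r  # reinforce the chosen action by its payoff
    return probs

def loglik_homogeneous(lam, data):
    """Pooled log-likelihood: one learning rate shared by all subjects.
    `data` is a list of (choices, rewards) arrays, one pair per subject."""
    return sum(np.log(choice_probs(lam, ch, rw)).sum() for ch, rw in data)

def loglik_random_effects(mu, sigma, data, n_draws=200, seed=0):
    """Simulated log-likelihood with unobserved heterogeneity: each
    subject's lambda_i is lognormal(mu, sigma); it is integrated out by
    averaging the subject-level likelihood over random draws."""
    rng = np.random.default_rng(seed)
    lams = np.exp(mu + sigma * rng.standard_normal(n_draws))
    total = 0.0
    for ch, rw in data:
        # likelihood of this subject's history for each draw, then average
        liks = np.array([np.exp(np.log(choice_probs(l, ch, rw)).sum())
                         for l in lams])
        total += np.log(liks.mean())
    return total
```

Either log-likelihood can then be passed to a generic optimizer; in the heterogeneous case the estimated objects are the hyperparameters (mu, sigma) of the mixing distribution rather than one parameter per subject.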
References:
- Andreas Blume & Douglas V. DeJong & George R. Neumann & N. E. Savin, 2002. "Learning and communication in sender-receiver games: an econometric investigation," Journal of Applied Econometrics, John Wiley & Sons, Ltd., vol. 17(3), pages 225-247.
- Cabrales, Antonio & Garcia-Fontes, Walter & Motta, Massimo, 2000. "Risk dominance selects the leader: An experimental analysis," International Journal of Industrial Organization, Elsevier, vol. 18(1), pages 137-162, January.
- Antonio Cabrales & Walter Garcia Fontes & Massimo Motta, 1997. "Risk dominance selects the leader. An experimental analysis," Economics Working Papers 222, Department of Economics and Business, Universitat Pompeu Fabra.
- Tilman Börgers & Rajiv Sarin, "undated". "Learning Through Reinforcement and Replicator Dynamics," ELSE Working Papers 051, ESRC Centre for Economic Learning and Social Evolution.
- Borgers, Tilman & Sarin, Rajiv, 1997. "Learning Through Reinforcement and Replicator Dynamics," Journal of Economic Theory, Elsevier, vol. 77(1), pages 1-14, November.
- T. Borgers & R. Sarin, 2010. "Learning Through Reinforcement and Replicator Dynamics," Levine's Working Paper Archive 380, David K. Levine.
- John G. Cross, 1973. "A Stochastic Learning Model of Economic Behavior," The Quarterly Journal of Economics, Oxford University Press, vol. 87(2), pages 239-266.
- Martin Sefton, 1999. "A Model of Behavior in Coordination Game Experiments," Experimental Economics, Springer;Economic Science Association, vol. 2(2), pages 151-164, December.
- Matsui, Akihiko, 1992. "Best response dynamics and socially stable strategies," Journal of Economic Theory, Elsevier, vol. 57(2), pages 343-362, August.
- Kenneth Clark & Stephen Kay & Martin Sefton, 2001. "When are Nash equilibria self-enforcing? An experimental analysis," International Journal of Game Theory, Springer;Game Theory Society, vol. 29(4), pages 495-515.
- Clark, K. & Kay, S. & Sefton, M., 1997. "When Are Nash Equilibria Self-Enforcing? An Experimental Analysis," Working Papers 97-04, University of Iowa, Department of Economics.
- Guth, Werner & Schmittberger, Rolf & Schwarze, Bernd, 1982. "An experimental analysis of ultimatum bargaining," Journal of Economic Behavior & Organization, Elsevier, vol. 3(4), pages 367-388, December.
- George R. Neumann & Nathan E. Savin, 2000. "Learning and Communication in Sender-Receiver Games: An Econometric Investigation," Econometric Society World Congress 2000 Contributed Papers 1852, Econometric Society.
- Martin Posch, 1997. "Cycling in a stochastic learning algorithm for normal form games," Journal of Evolutionary Economics, Springer, vol. 7(2), pages 193-207.
- Tang, Fang-Fang, 1996. "Anticipatory Learning in Two-Person Games: An Experimental Study, Part II. Learning," Discussion Paper Serie B 363, University of Bonn, Germany.
When requesting a correction, please mention this item's handle: RePEc:upf:upfgen:501. See general information about how to correct material in RePEc.