Learning to play games in extensive form by valuation
Game-theoretic models of learning that are based on the strategic form of the game cannot explain learning in games with a large extensive form. We study learning in such games by using valuations of moves. A valuation for a player is a numeric assessment of her moves that purports to reflect their desirability. We consider a myopic player, who chooses moves with the highest valuation. Each time the game is played, the player revises her valuation by assigning the payoff obtained in the play to each of the moves she has made. We show for a repeated win-lose game that if the player has a winning strategy in the stage game, there is almost surely a time after which she always wins. When a player has more than two payoffs, a more elaborate learning procedure is required. We consider one that associates with each move the average payoff obtained in the rounds in which that move was made. When all players adopt this learning procedure, with some perturbations, there is with probability 1 a time after which strategies close to a subgame perfect equilibrium are played. A single player who adopts this procedure can guarantee only her individually rational payoff.
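The basic procedure described above can be sketched in a few lines: a myopic learner picks the highest-valued move at each node and, after the play, assigns the realized payoff to every move she made. The sketch below runs this on a toy win-lose game tree; the game, all names (`GAME`, `play_round`, `valuation`), and the small exploration perturbation are illustrative assumptions, not taken from the paper.

```python
import random

random.seed(0)

# Toy win-lose extensive-form game for the learner (payoff 1 = win, 0 = lose).
# Each node maps actions either to a child node (str) or to a terminal payoff.
# The winning strategy is L at the root, then l at n1.
GAME = {
    "root": {"L": "n1", "R": 0},   # R loses immediately
    "n1":   {"l": 1, "r": 0},
}

def play_round(valuation, eps=0.05):
    """One play of the stage game: choose the highest-valued move at each
    node (with a small illustrative perturbation), then assign the realized
    payoff to every move made -- the revision rule described in the abstract."""
    node, moves_made = "root", []
    while isinstance(node, str):
        actions = list(GAME[node])
        if random.random() < eps:                         # perturbation
            a = random.choice(actions)
        else:                                             # myopic choice
            a = max(actions, key=lambda x: valuation[(node, x)])
        moves_made.append((node, a))
        node = GAME[node][a]
    payoff = node
    for m in moves_made:                                  # revise valuation
        valuation[m] = payoff
    return payoff

valuation = {(n, a): 0.0 for n in GAME for a in GAME[n]}
wins = [play_round(valuation) for _ in range(2000)]
print(sum(wins[-100:]))   # typically close to 100: the learner ends up winning
```

Once a winning path has been played, its moves keep valuation 1 and the greedy choice reproduces the winning strategy; only the perturbation causes occasional losses, mirroring the result that a player with a winning strategy in the stage game eventually always wins. The averaging variant for games with more than two payoffs would replace the assignment `valuation[m] = payoff` with a running average per move.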
Handle: RePEc:cla:levarc:391749000000000040