Learning to play games in extensive form by valuation
A valuation for a board game is an assignment of numeric values to the different states of the board. The valuation reflects how desirable each state is for the player, and it can be used by the player to decide on her next move during play. We assume a myopic player, who chooses a move leading to the state with the highest valuation. Valuations can also be revised, and hopefully improved, after each play of the game. Here, a very simple revision is considered, in which the states of the board visited in a play are assigned the payoff obtained in that play. We show that by adopting such a learning process, a player who has a winning strategy in a win-lose game can almost surely guarantee a win in the repeated game. When a player has more than two possible payoffs, a more elaborate learning procedure is required. We consider one that associates with each state the average payoff obtained in the rounds in which that state was reached. When all players adopt this learning procedure, with some perturbations, then with probability 1 strategies close to a subgame perfect equilibrium are played from some time on. A single player who adopts this procedure can guarantee only her individually rational payoff.
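The win-lose revision rule described above can be sketched in a few lines of Python. The toy game tree, state names, and neutral initial value of 0.5 below are illustrative assumptions, not taken from the paper: the learner moves myopically to a maximal-valuation child (ties broken at random), and after each round every visited state is assigned the realized payoff.

```python
import random

# Toy win-lose game tree (illustrative; the paper treats general
# extensive-form games). Internal states map to their children; leaves
# map to the learner's payoff, 1 for a win and 0 for a loss.
# Only the path root -> a -> a2 is a win.
TREE = {"root": ["a", "b"], "a": ["a1", "a2"], "b": ["b1", "b2"]}
PAYOFF = {"a1": 0, "a2": 1, "b1": 0, "b2": 0}

# Initial valuation: an arbitrary neutral guess for every state.
valuation = {s: 0.5 for s in list(TREE) + list(PAYOFF)}

def play_once():
    """Play one round myopically, then revise the valuation of visited states."""
    state, visited = "root", ["root"]
    while state in TREE:
        children = TREE[state]
        best = max(valuation[c] for c in children)
        # Myopic choice: move to a child of maximal valuation, ties at random.
        state = random.choice([c for c in children if valuation[c] == best])
        visited.append(state)
    payoff = PAYOFF[state]
    for s in visited:
        valuation[s] = payoff  # simple revision: visited states get the payoff
    return payoff

random.seed(0)
results = [play_once() for _ in range(200)]
```

Once a winning round occurs, the winning path is valued 1 while all alternatives are valued at most 0.5, so the myopic player wins every subsequent round. For games with more than two payoffs, the paper's more elaborate procedure would replace the last assignment with a running average of the payoffs obtained in the rounds in which the state was reached.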
Published in the Journal of Economic Theory, vol. 124(2), pages 129-148 (handle: RePEc:eee:jetheo:v:124:y:2005:i:2:p:129-148).