We introduce a new solution concept for games in extensive form with perfect information, valuation equilibrium, which is based on a partition of each player's moves into similarity classes. A valuation of a player is a real-valued function on the set of her similarity classes. In this equilibrium each player's strategy is optimal in the sense that at each of her nodes, a player chooses a move that belongs to a class with maximum valuation. The valuation of each player is consistent with the strategy profile in the sense that the valuation of a similarity class is the player's expected payoff, given that the path (induced by the strategy profile) intersects the similarity class. The solution concept is applied to decision problems and multi-player extensive-form games, and is contrasted with existing solution concepts. The valuation approach is then applied to stopping games, in which non-terminal moves form a single similarity class, and we note that the behaviors obtained echo some biases observed experimentally. Finally, we tentatively suggest a way of endogenizing the similarity partitions, in which moves are categorized according to how well they perform relative to the expected equilibrium value, interpreted as the aspiration level.
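The consistency condition above can be illustrated with a minimal sketch (not the authors' code) of the stopping-game application: at each node the agent may stop for a payoff or continue to the next node; all continue moves form a single similarity class, while each stop move is its own class whose valuation is simply its payoff. The payoff vector and the iteration scheme below are illustrative assumptions.

```python
def consistent_valuation(stop_payoffs, v0=0.0, tol=1e-9, max_iter=1000):
    """Iterate to a valuation v of the single 'continue' class that is
    consistent with the strategy it induces (stop at the first node whose
    stop payoff weakly exceeds v).  Payoffs are hypothetical."""
    v, k = v0, 0
    for _ in range(max_iter):
        # Optimality: stop at the first node whose stop-class valuation
        # (its payoff) is at least the valuation of continuing.
        k = next((i for i, p in enumerate(stop_payoffs) if p >= v),
                 len(stop_payoffs) - 1)
        realized = stop_payoffs[k]
        # Consistency: if the induced path uses a continue move (k >= 1),
        # the continue class's valuation must equal the realized payoff.
        new_v = realized if k >= 1 else v
        if abs(new_v - v) < tol:
            return v, k
        v = new_v
    return v, k

v, k = consistent_valuation([1.0, 3.0, 2.0, 5.0], v0=10.0)
print(v, k)  # an optimistic initial valuation leads past low payoffs to 5.0
```

Note that the fixed point depends on the initial valuation: starting from a low v0 the agent stops immediately, the continue class is off the induced path, and any valuation of it is trivially consistent, so multiple valuation equilibria can coexist.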
References listed on IDEAS
- Ariel Rubinstein, 2010. "Perfect Equilibrium in a Bargaining Model," Levine's Working Paper Archive 661465000000000387, David K. Levine.
- Philippe Jehiel, 2005. "Analogy-Based Expectation Equilibrium," 784828000000000106, UCLA Department of Economics.
- Kreps, David M. & Wilson, Robert, 1982. "Sequential Equilibria," Econometrica, Econometric Society, vol. 50(4), pages 863-894, July.
- Jehiel, Philippe & Samet, Dov, 2005. "Learning to play games in extensive form by valuation," Journal of Economic Theory, Elsevier, vol. 124(2), pages 129-148, October.
- Philippe Jehiel & Dov Samet, 2010. "Learning To Play Games In Extensive Form By Valuation," Levine's Working Paper Archive 391749000000000034, David K. Levine.
- Philippe Jehiel & Dov Samet, 2001. "Learning To Play Games In Extensive Form By Valuation," Levine's Working Paper Archive 391749000000000010, David K. Levine.
- Philippe Jehiel & Dov Samet, 2001. "Learning To Play Games In Extensive Form By Valuation," NajEcon Working Paper Reviews 391749000000000010, www.najecon.org.
- Philippe Jehiel & Dov Samet, 2010. "Learning to play games in extensive form by valuation," Levine's Working Paper Archive 391749000000000040, David K. Levine.
- Philippe Jehiel & Dov Samet, 2001. "Learning to play games in extensive form by valuation," Game Theory and Information 0012001, EconWPA.
- Rosenthal, Robert W., 1981. "Games of perfect information, predatory pricing and the chain-store paradox," Journal of Economic Theory, Elsevier, vol. 25(1), pages 92-100, August.
- Fudenberg, Drew & Levine, David, 1998. "Learning in games," European Economic Review, Elsevier, vol. 42(3-5), pages 631-639, May.
- Jakub Steiner & Colin Stewart, 2007. "Learning by Similarity in Coordination Problems," CERGE-EI Working Papers wp324, The Center for Economic Research and Graduate Education - Economic Institute, Prague.
- Rubinstein, Ariel, 1995. "On the Interpretation of Decision Problems with Imperfect Recall," Mathematical Social Sciences, Elsevier, vol. 30(3), pages 324-324, December.
- Piccione, Michele & Rubinstein, Ariel, 1997. "On the Interpretation of Decision Problems with Imperfect Recall," Games and Economic Behavior, Elsevier, vol. 20(1), pages 3-24, July.
This item's RePEc handle: RePEc:cla:levrem:784828000000000111.