
A reinforcement learning process in extensive form games

Author

Listed:
  • Jean-François Laslier
  • Bernard Walliser

Abstract

The CPR ("cumulative proportional reinforcement") learning rule stipulates that an agent chooses a move with a probability proportional to the cumulative payoff she obtained in the past with that move. Previously considered for strategies in normal form games (Laslier, Topol and Walliser, Games and Econ. Behav., 2001), the CPR rule is here adapted for actions in perfect information extensive form games. The paper shows that the action-based CPR process converges with probability one to the (unique) subgame perfect equilibrium.
(This abstract was borrowed from another version of this item.)
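To make the rule concrete, the sketch below simulates an action-based CPR process on a small perfect-information game. It is only an illustration under assumptions of my own: the game tree, payoffs, initial weights, and all names in the code are invented for the example and are not taken from the paper; the sketch simply reinforces each action at each decision node by the cumulative payoff its mover has earned with that action.

import random

# Hypothetical two-stage perfect-information game used only to illustrate
# the CPR rule: player 1 chooses L or R; after R, player 2 chooses l or r.
# Payoffs are (player 1, player 2) and are kept strictly positive so that
# "probability proportional to cumulative payoff" is well defined.
PAYOFFS = {
    ("L",): (1.0, 1.0),
    ("R", "l"): (3.0, 1.0),
    ("R", "r"): (2.0, 2.0),
}

# Cumulative payoff ("weight") of each action at each decision node,
# initialised at a small positive value so every action gets tried.
weights = {
    "p1_root": {"L": 1.0, "R": 1.0},
    "p2_after_R": {"l": 1.0, "r": 1.0},
}

def draw(node):
    """Choose an action at `node` with probability proportional to its cumulative payoff."""
    actions = list(weights[node])
    return random.choices(actions, weights=[weights[node][a] for a in actions])[0]

def play_once():
    """Play one round and reinforce each action used by the payoff its mover received."""
    a1 = draw("p1_root")
    if a1 == "L":
        u1, _ = PAYOFFS[("L",)]
        weights["p1_root"]["L"] += u1
    else:
        a2 = draw("p2_after_R")
        u1, u2 = PAYOFFS[("R", a2)]
        weights["p1_root"]["R"] += u1
        weights["p2_after_R"][a2] += u2

for _ in range(50_000):
    play_once()

# With these example payoffs the subgame-perfect path is R followed by r,
# and the accumulated weights should come to favour it heavily.
print(weights)

Keeping the payoffs strictly positive means the weights stay positive, so the choice probabilities remain well defined throughout the run.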

Suggested Citation

  • Jean-François Laslier & Bernard Walliser, 2005. "A reinforcement learning process in extensive form games," International Journal of Game Theory, Springer;Game Theory Society, vol. 33(2), pages 219-227, June.
  • Handle: RePEc:spr:jogath:v:33:y:2005:i:2:p:219-227
    DOI: 10.1007/s001820400194

    Download full text from publisher

    File URL: http://hdl.handle.net/10.1007/s001820400194
    Download Restriction: Access to full text is restricted to subscribers.

    File URL: https://libkey.io/10.1007/s001820400194?utm_source=ideas
LibKey link: if access is restricted and your library uses this service, LibKey will redirect you to where you can use your library subscription to access this item

As access to this document is restricted, you may want to look for a different version of it below or search for one elsewhere.

    Other versions of this item:

    Citations

Citations are extracted by the CitEc Project; subscribe to its RSS feed for this item.


    Cited by:

    1. Maxwell Pak & Bing Xu, 2016. "Generalized reinforcement learning in perfect-information games," International Journal of Game Theory, Springer;Game Theory Society, vol. 45(4), pages 985-1011, November.
    2. Izquierdo, Luis R. & Izquierdo, Segismundo S. & Gotts, Nicholas M. & Polhill, J. Gary, 2007. "Transient and asymptotic dynamics of reinforcement learning in games," Games and Economic Behavior, Elsevier, vol. 61(2), pages 259-276, November.
    3. Thorsten Chmura & Thomas Pitz, 2007. "An Extended Reinforcement Algorithm for Estimation of Human Behaviour in Experimental Congestion Games," Journal of Artificial Societies and Social Simulation, vol. 10(2), pages 1-1.
    4. Ioannou, Christos A. & Romero, Julian, 2014. "A generalized approach to belief learning in repeated games," Games and Economic Behavior, Elsevier, vol. 87(C), pages 178-203.
    5. Oyarzun, Carlos & Sarin, Rajiv, 2013. "Learning and risk aversion," Journal of Economic Theory, Elsevier, vol. 148(1), pages 196-225.
    6. Schuster, Stephan, 2012. "Applications in Agent-Based Computational Economics," MPRA Paper 47201, University Library of Munich, Germany.

    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:spr:jogath:v:33:y:2005:i:2:p:219-227. See general information about how to correct material in RePEc.

    If you have authored this item and are not yet registered with RePEc, we encourage you to register here. This allows you to link your profile to this item and to accept potential citations to this item that we are uncertain about.

    We have no bibliographic references for this item. You can help add them by using this form.

    If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

    For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: Sonal Shukla or Springer Nature Abstracting and Indexing (email available below). General contact details of provider: http://www.springer.com.

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.