
A reinforcement learning process in extensive form games

Author

Listed:
  • Jean-François Laslier

    (CECO - Laboratoire d'économétrie de l'École polytechnique - X - École polytechnique - CNRS - Centre National de la Recherche Scientifique)

  • Bernard Walliser

    (CERAS - Centre d'enseignement et de recherche en analyse socio-économique - ENPC - École des Ponts ParisTech - CNRS - Centre National de la Recherche Scientifique)

Abstract

The CPR ("cumulative proportional reinforcement") learning rule stipulates that an agent chooses a move with a probability proportional to the cumulative payoff she obtained in the past with that move. Previously considered for strategies in normal form games (Laslier, Topol and Walliser, Games and Econ. Behav., 2001), the CPR rule is here adapted for actions in perfect information extensive form games. The paper shows that the action-based CPR process converges with probability one to the (unique) subgame perfect equilibrium.

Suggested Citation

  • Jean-François Laslier & Bernard Walliser, 2005. "A reinforcement learning process in extensive form games," Post-Print halshs-00754083, HAL.
  • Handle: RePEc:hal:journl:halshs-00754083
    DOI: 10.1007/s001820400194

    Download full text from publisher

    To our knowledge, this item is not available for download. To find out whether it is available, there are three options:
    1. Check below whether another version of this item is available online.
    2. Check on the provider's web page whether it is in fact available.
    3. Perform a search for a similarly titled item that would be available.

    Citations

    Citations are extracted by the CitEc Project; subscribe to its RSS feed for this item.


    Cited by:

    1. Maxwell Pak & Bing Xu, 2016. "Generalized reinforcement learning in perfect-information games," International Journal of Game Theory, Springer; Game Theory Society, vol. 45(4), pages 985-1011, November.
    2. Ioannou, Christos A. & Romero, Julian, 2014. "A generalized approach to belief learning in repeated games," Games and Economic Behavior, Elsevier, vol. 87(C), pages 178-203.
    3. Oyarzun, Carlos & Sarin, Rajiv, 2013. "Learning and risk aversion," Journal of Economic Theory, Elsevier, vol. 148(1), pages 196-225.
    4. Izquierdo, Luis R. & Izquierdo, Segismundo S. & Gotts, Nicholas M. & Polhill, J. Gary, 2007. "Transient and asymptotic dynamics of reinforcement learning in games," Games and Economic Behavior, Elsevier, vol. 61(2), pages 259-276, November.
    5. Thorsten Chmura & Thomas Pitz, 2007. "An Extended Reinforcement Algorithm for Estimation of Human Behaviour in Experimental Congestion Games," Journal of Artificial Societies and Social Simulation, vol. 10(2), pages 1-1.
    6. Schuster, Stephan, 2012. "Applications in Agent-Based Computational Economics," MPRA Paper 47201, University Library of Munich, Germany.

    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:hal:journl:halshs-00754083. See general information about how to correct material in RePEc.

    If you have authored this item and are not yet registered with RePEc, we encourage you to do it here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.

    We have no bibliographic references for this item. You can help add them by using this form.

    If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

    For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: CCSD (email available below). General contact details of provider: https://hal.archives-ouvertes.fr/ .

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.