
Reinforcement Learning and Human Behavior


  • Hanan Shteingart
  • Yonatan Loewenstein


The dominant computational approach to modeling operant learning and its underlying neural activity is model-free reinforcement learning (RL). However, there is accumulating behavioral and neuronal evidence that human (and animal) operant learning is far more multifaceted. Theoretical advances in RL, such as hierarchical and model-based RL, extend its explanatory power to account for some of these findings. Nevertheless, other aspects of human behavior remain inexplicable even in the simplest tasks. Here we review developments and remaining challenges in relating RL models to human operant learning. In particular, we emphasize that learning a model of the world is an essential step prior to, or in parallel with, learning the policy in RL, and we discuss alternative models that directly learn a policy, in terms of state-action pairs, without an explicit world model.
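The model-free RL baseline discussed in the abstract can be illustrated with a minimal sketch: a Q-learning agent choosing between two response options (a two-armed bandit) via a softmax policy. The reward probabilities, learning rate `alpha`, and inverse temperature `beta` below are illustrative assumptions, not values from the paper; the point is only that the agent updates action values from prediction errors alone, with no explicit model of the world.

```python
import math
import random

def softmax_choice(q_values, beta, rng):
    """Pick an action with probability proportional to exp(beta * Q)."""
    weights = [math.exp(beta * q) for q in q_values]
    total = sum(weights)
    r = rng.random() * total
    cumulative = 0.0
    for action, w in enumerate(weights):
        cumulative += w
        if r < cumulative:
            return action
    return len(weights) - 1

def run_bandit(reward_probs, alpha=0.1, beta=3.0, trials=1000, seed=0):
    """Model-free Q-learning on a multi-armed bandit.

    Action values are adjusted by the reward prediction error
    (delta rule); the reward probabilities themselves are never
    represented, which is what makes the learner "model-free".
    """
    rng = random.Random(seed)
    q = [0.0] * len(reward_probs)
    for _ in range(trials):
        a = softmax_choice(q, beta, rng)
        reward = 1.0 if rng.random() < reward_probs[a] else 0.0
        q[a] += alpha * (reward - q[a])  # prediction-error update
    return q

# With enough trials, the value of the richer option dominates.
q = run_bandit([0.8, 0.2])
```

Hierarchical and model-based extensions, by contrast, would add an explicit representation of task structure (e.g. learned transition or reward models) on top of, or in place of, these cached values.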

Suggested Citation

  • Hanan Shteingart & Yonatan Loewenstein, 2014. "Reinforcement Learning and Human Behavior," Discussion Paper Series dp656, The Federmann Center for the Study of Rationality, the Hebrew University, Jerusalem.
  • Handle: RePEc:huj:dispap:dp656





    Cited by:

    1. Tal Neiman & Yonatan Loewenstein, 2014. "Spatial Generalization in Operant Learning: Lessons from Professional Basketball," Discussion Paper Series dp665, The Federmann Center for the Study of Rationality, the Hebrew University, Jerusalem.
    2. Gianluigi Mongillo & Hanan Shteingart & Yonatan Loewenstein, 2014. "The Misbehavior of Reinforcement Learning," Discussion Paper Series dp661, The Federmann Center for the Study of Rationality, the Hebrew University, Jerusalem.



    IDEAS is a RePEc service hosted by the Research Division of the Federal Reserve Bank of St. Louis . RePEc uses bibliographic data supplied by the respective publishers.