Reinforcement Learning and Human Behavior
The dominant computational approach to modeling operant learning and its underlying neural activity is model-free reinforcement learning (RL). However, there is accumulating behavioral and neuronal evidence that human (and animal) operant learning is far more multifaceted. Theoretical advances in RL, such as hierarchical and model-based RL, extend the explanatory power of RL to account for some of these findings. Nevertheless, other aspects of human behavior remain inexplicable even in the simplest tasks. Here we review developments and remaining challenges in relating RL models to human operant learning. In particular, we emphasize that learning a model of the world is an essential step prior to, or in parallel with, learning the policy in RL, and we discuss alternative models that directly learn a policy without an explicit world model in terms of state-action pairs.
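To make the contrast concrete, the model-free RL that the abstract refers to can be illustrated with a minimal sketch: tabular Q-learning on a two-armed bandit, where action values are updated directly from sampled rewards without any model of the reward distributions. This is an illustrative toy example, not the authors' model; the function name, parameters, and payoff probabilities are all assumptions chosen for the sketch.

```python
import random

def q_learning_bandit(payoffs, episodes=5000, alpha=0.1, epsilon=0.1, seed=0):
    """Model-free Q-learning on a two-armed Bernoulli bandit.

    The agent never represents the payoff probabilities themselves
    (no world model); it only caches a running value estimate per
    action, updated from experienced rewards.
    """
    rng = random.Random(seed)
    q = [0.0, 0.0]  # cached value estimate for each action
    for _ in range(episodes):
        # epsilon-greedy action selection
        if rng.random() < epsilon:
            a = rng.randrange(2)
        else:
            a = 0 if q[0] >= q[1] else 1
        # Bernoulli reward drawn from the (hidden) payoff probability
        r = 1.0 if rng.random() < payoffs[a] else 0.0
        # model-free update: move the estimate toward the sampled reward
        q[a] += alpha * (r - q[a])
    return q

# Hypothetical payoffs: arm 0 rewards with probability 0.2, arm 1 with 0.8
values = q_learning_bandit([0.2, 0.8])
```

After enough trials the cached value of the richer arm exceeds that of the poorer one, so the greedy choice converges on the better action. A model-based learner would instead estimate the payoff probabilities explicitly and derive the policy from them; the alternative models discussed in the review skip the value cache as well and adjust the policy directly.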
Date of creation: Jan 2014
Publication status: Published in Current Opinion in Neurobiology 2014, 25:93–98
Contact details of provider: Postal: Feldman Building - Givat Ram - 91904 Jerusalem
Web page: http://www.ratio.huji.ac.il/