Author
Listed:
- Stefano Palminteri
(Institute of Cognitive Neuroscience (ICN), University College London (UCL)
Laboratoire de Neurosciences Cognitives (LNC), Institut National de la Santé et de la Recherche Médicale (INSERM) U960, École Normale Supérieure (ENS))
- Mehdi Khamassi
(Institut des Systèmes Intelligents et de Robotique (ISIR), Centre National de la Recherche Scientifique (CNRS) UMR 7222, Université Pierre et Marie Curie (UPMC)
Università degli Studi di Trento)
- Mateus Joffily
(Università degli Studi di Trento
Groupe d’Analyse et de Théorie Economique, Centre National de la Recherche Scientifique (CNRS) UMR 5229, Université de Lyon)
- Giorgio Coricelli
(Laboratoire de Neurosciences Cognitives (LNC), Institut National de la Santé et de la Recherche Médicale (INSERM) U960, École Normale Supérieure (ENS)
Università degli Studi di Trento
University of Southern California (USC))
Abstract
Compared with reward seeking, punishment avoidance learning is less clearly understood at both the computational and neurobiological levels. Here we demonstrate, using computational modelling and fMRI in humans, that learning option values on a relative—context-dependent—scale offers a simple computational solution for avoidance learning. The context (or state) value sets the reference point to which an outcome should be compared before updating the option value. Consequently, in contexts with an overall negative expected value, successful punishment avoidance acquires a positive value, thus reinforcing the response. As revealed by post-learning assessment of option values, contextual influences are enhanced when subjects are informed about the result of the forgone alternative (counterfactual information). This is mirrored at the neural level by a shift in negative outcome encoding from the anterior insula to the ventral striatum, suggesting that value contextualization also limits the need to mobilize an opponent punishment learning system.
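The learning rule described in the abstract can be illustrated with a minimal sketch: outcomes are re-referenced to a learned context (state) value before the option value is updated. This is an illustrative reconstruction of the general idea, not the authors' exact model; the function name, learning rates, and update order are assumptions.

```python
# Minimal sketch of context-dependent (relative) value learning,
# as described in the abstract. Illustrative only; parameter names
# and learning rates are assumptions, not the published model.

def update(q, v, outcome, alpha_q=0.3, alpha_v=0.3):
    """One learning step for a chosen option in a given context.

    q: current option value; v: current context (state) value.
    The outcome is re-referenced to the context value before the
    option value is updated, so avoiding a punishment in a
    punishing context yields a positive teaching signal.
    """
    relative_outcome = outcome - v          # re-reference outcome to context
    q += alpha_q * (relative_outcome - q)   # option-value update
    v += alpha_v * (outcome - v)            # context-value update
    return q, v

# Example: in a punishment context (v = -0.5), successfully avoiding
# a loss (outcome = 0) yields a positive relative outcome (+0.5),
# so the option value of the avoidance response increases above zero.
q, v = update(q=0.0, v=-0.5, outcome=0.0)
```

Under this scheme, the same objective outcome (zero) is neutral in a reward context but reinforcing in a punishment context, which is the contextual effect the paper investigates.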
Suggested Citation
Stefano Palminteri & Mehdi Khamassi & Mateus Joffily & Giorgio Coricelli, 2015.
"Contextual modulation of value signals in reward and punishment learning,"
Nature Communications, Nature, vol. 6(1), pages 1-14, November.
Handle:
RePEc:nat:natcom:v:6:y:2015:i:1:d:10.1038_ncomms9096
DOI: 10.1038/ncomms9096
Corrections
All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:nat:natcom:v:6:y:2015:i:1:d:10.1038_ncomms9096. See general information about how to correct material in RePEc.
If you have authored this item and are not yet registered with RePEc, we encourage you to do it here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.
We have no bibliographic references for this item. You can help add them by using this form.
If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.
For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: Sonal Shukla or Springer Nature Abstracting and Indexing (email available below). General contact details of provider: http://www.nature.com .
Please note that corrections may take a couple of weeks to filter through
the various RePEc services.