
Confirmation bias in human reinforcement learning: Evidence from counterfactual feedback processing

Author

Listed:
  • Stefano Palminteri
  • Germain Lefebvre
  • Emma J Kilford
  • Sarah-Jayne Blakemore

Abstract

Previous studies suggest that factual learning, that is, learning from obtained outcomes, is biased, such that participants preferentially take into account positive, as compared to negative, prediction errors. However, whether or not prediction error valence also affects counterfactual learning, that is, learning from forgone outcomes, is unknown. To address this question, we analysed the performance of two groups of participants on reinforcement learning tasks using a computational model that was adapted to test whether prediction error valence influences learning. We carried out two experiments: in the factual learning experiment, participants learned from partial feedback (i.e., the outcome of the chosen option only); in the counterfactual learning experiment, participants learned from complete feedback (i.e., the outcomes of both the chosen and unchosen options were displayed). In the factual learning experiment, we replicated previous findings of a valence-induced bias, whereby participants learned preferentially from positive, relative to negative, prediction errors. In contrast, for counterfactual learning we found the opposite valence-induced bias: negative prediction errors were preferentially taken into account, relative to positive ones. When valence-induced bias is considered in the context of both factual and counterfactual learning, it appears that people tend to preferentially take into account information that confirms their current choice.

Author summary

While the investigation of decision-making biases has a long history in economics and psychology, learning biases have been much less systematically investigated. This is surprising, as most of the choices we deal with in everyday life are recurrent, thus allowing learning to occur and thereby influencing future decision-making. Combining behavioural testing and computational modeling, here we show that the valence of an outcome biases both factual and counterfactual learning. When factual and counterfactual learning are considered together, it appears that people tend to preferentially take into account information that confirms their current choice. Increasing our understanding of learning biases will enable the refinement of existing models of value-based decision-making.
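The valence-dependent learning described in the abstract can be made concrete with a short sketch. Below is a minimal illustration in Python, assuming a standard Rescorla-Wagner/Q-learning update with separate learning rates for positive and negative prediction errors; the function name rw_update, the learning-rate values, and the outcomes are hypothetical choices for illustration, not the authors' implementation.

```python
# Sketch of an asymmetric (valence-dependent) value update, assuming a
# Rescorla-Wagner form. All names and parameter values are illustrative.

def rw_update(q, outcome, lr_pos, lr_neg):
    """Update value estimate q, weighting positive and negative
    prediction errors by different learning rates."""
    pe = outcome - q                   # prediction error
    lr = lr_pos if pe > 0 else lr_neg  # valence-dependent learning rate
    return q + lr * pe

# One complete-feedback trial: both outcomes are displayed.
q_chosen, q_unchosen = 0.5, 0.5        # initial option values
r_obtained, r_forgone = 1.0, 0.0       # obtained and forgone outcomes

# Factual update (chosen option): positive prediction errors are
# weighted more heavily, the positivity bias the factual experiment
# replicated.
q_chosen = rw_update(q_chosen, r_obtained, lr_pos=0.4, lr_neg=0.2)

# Counterfactual update (unchosen option): the bias reverses, with
# negative prediction errors weighted more heavily, as reported in the
# counterfactual learning experiment.
q_unchosen = rw_update(q_unchosen, r_forgone, lr_pos=0.2, lr_neg=0.4)

print(q_chosen, q_unchosen)  # 0.7 0.3 under these illustrative rates
```

Under these illustrative settings, both updates favour choice-confirming news: good obtained outcomes and bad forgone outcomes move values the most, which is the confirmation-bias pattern the paper reports.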

Suggested Citation

  • Stefano Palminteri & Germain Lefebvre & Emma J Kilford & Sarah-Jayne Blakemore, 2017. "Confirmation bias in human reinforcement learning: Evidence from counterfactual feedback processing," PLOS Computational Biology, Public Library of Science, vol. 13(8), pages 1-22, August.
  • Handle: RePEc:plo:pcbi00:1005684
    DOI: 10.1371/journal.pcbi.1005684

    Download full text from publisher

    File URL: https://journals.plos.org/ploscompbiol/article?id=10.1371/journal.pcbi.1005684
    Download Restriction: no

    File URL: https://journals.plos.org/ploscompbiol/article/file?id=10.1371/journal.pcbi.1005684&type=printable
    Download Restriction: no

    File URL: https://libkey.io/10.1371/journal.pcbi.1005684?utm_source=ideas
    LibKey link: if access is restricted and your library uses this service, LibKey will redirect you to a copy you can access through your library subscription.

    References listed on IDEAS

    1. Stefano Palminteri & Mehdi Khamassi & Mateus Joffily & Giorgio Coricelli, 2015. "Contextual modulation of value signals in reward and punishment learning," Nature Communications, Nature, vol. 6(1), pages 1-14, November.
    2. Stefano DellaVigna, 2009. "Psychology and Economics: Evidence from the Field," Journal of Economic Literature, American Economic Association, vol. 47(2), pages 315-372, June.
    3. Karl J Friston & Jean Daunizeau & Stefan J Kiebel, 2009. "Reinforcement Learning or Active Inference?," PLOS ONE, Public Library of Science, vol. 4(7), pages 1-13, July.
    4. Jean Daunizeau & Vincent Adam & Lionel Rigoux, 2014. "VBA: A Probabilistic Treatment of Nonlinear Models for Neurobiological and Behavioural Data," PLOS Computational Biology, Public Library of Science, vol. 10(1), pages 1-16, January.
    5. repec:cup:judgdm:v:5:y:2010:i:1:p:1-10 is not listed on IDEAS
    6. Jörg Rieskamp & Brenda Lea K. Krugel & Hauke R. Heekeren, 2011. "The Neural Basis of Following Advice," SFB 649 Discussion Papers SFB649DP2011-038, Sonderforschungsbereich 649, Humboldt University, Berlin, Germany.
    7. Stefano Palminteri & Mehdi Khamassi & Mateus Joffily & Giorgio Coricelli, 2015. "Contextual modulation of value signals in reward and punishment learning," Post-Print halshs-01236045, HAL.
    8. Stefano Palminteri & Emma J Kilford & Giorgio Coricelli & Sarah-Jayne Blakemore, 2016. "The Computational Development of Reinforcement Learning during Adolescence," PLOS Computational Biology, Public Library of Science, vol. 12(6), pages 1-25, June.
    9. Germain Lefebvre & Maël Lebreton & Florent Meyniel & Sacha Bourgeois-Gironde & Stefano Palminteri, 2017. "Behavioural and neural characterization of optimistic reinforcement learning," Nature Human Behaviour, Nature, vol. 1(4), pages 1-9, April.
    10. Brit Grosskopf & Ido Erev & Eldad Yechiam, 2006. "Foregone with the Wind: Indirect Payoff Information and its Implications for Choice," International Journal of Game Theory, Springer;Game Theory Society, vol. 34(2), pages 285-302, August.
    Full references (including those not matched with items on IDEAS)

    Citations

    Citations are extracted by the CitEc Project; subscribe to its RSS feed for this item.

    Cited by:

    1. Johann Lussange & Ivan Lazarevich & Sacha Bourgeois-Gironde & Stefano Palminteri & Boris Gutkin, 2021. "Modelling Stock Markets by Multi-agent Reinforcement Learning," Computational Economics, Springer;Society for Computational Economics, vol. 57(1), pages 113-147, January.
    2. Damien Challet & Vincent Ragel, 2023. "Recurrent Neural Networks with more flexible memory: better predictions than rough volatility," Working Papers hal-04165354, HAL.
    3. Simon Ciranka & Juan Linde-Domingo & Ivan Padezhki & Clara Wicharz & Charley M. Wu & Bernhard Spitzer, 2022. "Asymmetric reinforcement learning facilitates human inference of transitive relations," Nature Human Behaviour, Nature, vol. 6(4), pages 555-564, April.
    4. Cristofaro, Matteo, 2020. "“I feel and think, therefore I am”: An Affect-Cognitive Theory of management decisions," European Management Journal, Elsevier, vol. 38(2), pages 344-355.
    5. Aurélien Nioche & Basile Garcia & Germain Lefebvre & Thomas Boraud & Nicolas P. Rougier & Sacha Bourgeois-Gironde, 2019. "Coordination over a unique medium of exchange under information scarcity," Palgrave Communications, Palgrave Macmillan, vol. 5(1), pages 1-11, December.
    6. Nura Sidarus & Stefano Palminteri & Valérian Chambon, 2019. "Cost-benefit trade-offs in decision-making and learning," PLOS Computational Biology, Public Library of Science, vol. 15(9), pages 1-28, September.
    7. Johann Lussange & Stefano Vrizzi & Stefano Palminteri & Boris Gutkin, 2024. "Modelling crypto markets by multi-agent reinforcement learning," Papers 2402.10803, arXiv.org.
    8. Daniel J. Benjamin, 2018. "Errors in Probabilistic Reasoning and Judgment Biases," NBER Working Papers 25200, National Bureau of Economic Research, Inc.
    9. Kim A.G.J. Romijnders & Liesbeth van Osch & Hein de Vries & Reinskje Talhout, 2019. "A Deliberate Choice? Exploring the Decision to Switch from Cigarettes to E-Cigarettes," IJERPH, MDPI, vol. 16(4), pages 1-11, February.

    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. Maël Lebreton & Karin Bacily & Stefano Palminteri & Jan B Engelmann, 2019. "Contextual influence on confidence judgments in human reinforcement learning," PLOS Computational Biology, Public Library of Science, vol. 15(4), pages 1-27, April.
    2. Chih-Chung Ting & Nahuel Salem-Garcia & Stefano Palminteri & Jan B. Engelmann & Maël Lebreton, 2023. "Neural and computational underpinnings of biased confidence in human reinforcement learning," Nature Communications, Nature, vol. 14(1), pages 1-18, December.
    3. Antoine Collomb-Clerc & Maëlle C. M. Gueguen & Lorella Minotti & Philippe Kahane & Vincent Navarro & Fabrice Bartolomei & Romain Carron & Jean Regis & Stephan Chabardès & Stefano Palminteri & Julien B, 2023. "Human thalamic low-frequency oscillations correlate with expected value and outcomes during reinforcement learning," Nature Communications, Nature, vol. 14(1), pages 1-10, December.
    4. Johann Lussange & Stefano Vrizzi & Sacha Bourgeois-Gironde & Stefano Palminteri & Boris Gutkin, 2023. "Stock Price Formation: Precepts from a Multi-Agent Reinforcement Learning Model," Computational Economics, Springer;Society for Computational Economics, vol. 61(4), pages 1523-1544, April.
    5. Johann Lussange & Ivan Lazarevich & Sacha Bourgeois-Gironde & Stefano Palminteri & Boris Gutkin, 2021. "Modelling Stock Markets by Multi-agent Reinforcement Learning," Computational Economics, Springer;Society for Computational Economics, vol. 57(1), pages 113-147, January.
    6. Lefebvre, Germain & Nioche, Aurélien & Bourgeois-Gironde, Sacha & Palminteri, Stefano, 2018. "An Empirical Investigation of the Emergence of Money: Contrasting Temporal Difference and Opportunity Cost Reinforcement Learning," MPRA Paper 85586, University Library of Munich, Germany.
    7. Johann Lussange & Boris Gutkin, 2023. "Order book regulatory impact on stock market quality: a multi-agent reinforcement learning perspective," Papers 2302.04184, arXiv.org.
    8. Stefano Palminteri & Emma J Kilford & Giorgio Coricelli & Sarah-Jayne Blakemore, 2016. "The Computational Development of Reinforcement Learning during Adolescence," PLOS Computational Biology, Public Library of Science, vol. 12(6), pages 1-25, June.
    9. Simon Ciranka & Juan Linde-Domingo & Ivan Padezhki & Clara Wicharz & Charley M. Wu & Bernhard Spitzer, 2022. "Asymmetric reinforcement learning facilitates human inference of transitive relations," Nature Human Behaviour, Nature, vol. 6(4), pages 555-564, April.
    10. Lou Safra & Coralie Chevallier & Stefano Palminteri, 2019. "Depressive symptoms are associated with blunted reward learning in social contexts," PLOS Computational Biology, Public Library of Science, vol. 15(7), pages 1-22, July.
    11. M. A. Pisauro & E. F. Fouragnan & D. H. Arabadzhiyska & M. A. J. Apps & M. G. Philiastides, 2022. "Neural implementation of computational mechanisms underlying the continuous trade-off between cooperation and competition," Nature Communications, Nature, vol. 13(1), pages 1-18, December.
    12. Masiliūnas, Aidas, 2023. "Learning in rent-seeking contests with payoff risk and foregone payoff information," Games and Economic Behavior, Elsevier, vol. 140(C), pages 50-72.
    13. Koen M. M. Frolichs & Gabriela Rosenblau & Christoph W. Korn, 2022. "Incorporating social knowledge structures into computational models," Nature Communications, Nature, vol. 13(1), pages 1-18, December.
    14. Daniel J. Benjamin, 2018. "Errors in Probabilistic Reasoning and Judgment Biases," NBER Working Papers 25200, National Bureau of Economic Research, Inc.
    15. Nura Sidarus & Stefano Palminteri & Valérian Chambon, 2019. "Cost-benefit trade-offs in decision-making and learning," PLOS Computational Biology, Public Library of Science, vol. 15(9), pages 1-28, September.
    16. Wei-Hsiang Lin & Justin L Gardner & Shih-Wei Wu, 2020. "Context effects on probability estimation," PLOS Biology, Public Library of Science, vol. 18(3), pages 1-45, March.
    17. Mikhail S. Spektor & Hannah Seidler, 2022. "Violations of economic rationality due to irrelevant information during learning in decision from experience," Judgment and Decision Making, Society for Judgment and Decision Making, vol. 17(2), pages 425-448, March.
    18. Johann Lussange & Stefano Vrizzi & Stefano Palminteri & Boris Gutkin, 2024. "Modelling crypto markets by multi-agent reinforcement learning," Papers 2402.10803, arXiv.org.
    19. Wettstein, Dominik J. & Boes, Stefan, 2022. "How value-based policy interventions influence price negotiations for new medicines: An experimental approach and initial evidence," Health Policy, Elsevier, vol. 126(2), pages 112-121.
    20. Andreas R. Kostøl & Andreas S. Myhre, 2021. "Labor Supply Responses to Learning the Tax and Benefit Schedule," American Economic Review, American Economic Association, vol. 111(11), pages 3733-3766, November.

    More about this item

    Statistics

    Access and download statistics

    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:plo:pcbi00:1005684. See general information about how to correct material in RePEc.

    If you have authored this item and are not yet registered with RePEc, we encourage you to register here. This allows you to link your profile to this item and to accept potential citations to this item that we are uncertain about.

    If CitEc recognized a bibliographic reference but did not link it to an item in RePEc, you can help with this form.

    If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

    For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: ploscompbiol (email available below). General contact details of provider: https://journals.plos.org/ploscompbiol/.

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.