Printed from https://ideas.repec.org/a/nat/nathum/v6y2022i4d10.1038_s41562-021-01263-w.html

Asymmetric reinforcement learning facilitates human inference of transitive relations

Authors

Listed:
  • Simon Ciranka

    (Max Planck Institute for Human Development
    Max Planck UCL Centre for Computational Psychiatry and Ageing Research)

  • Juan Linde-Domingo

    (Max Planck Institute for Human Development)

  • Ivan Padezhki

    (Max Planck Institute for Human Development)

  • Clara Wicharz

    (Max Planck Institute for Human Development)

  • Charley M. Wu

    (Max Planck Institute for Human Development
    University of Tübingen)

  • Bernhard Spitzer

    (Max Planck Institute for Human Development
    Max Planck UCL Centre for Computational Psychiatry and Ageing Research)

Abstract

Humans and other animals are capable of inferring never-experienced relations (for example, A > C) from other relational observations (for example, A > B and B > C). The processes behind such transitive inference are subject to intense research. Here we demonstrate a new aspect of relational learning, building on previous evidence that transitive inference can be accomplished through simple reinforcement learning mechanisms. We show in simulations that inference of novel relations benefits from an asymmetric learning policy, where observers update only their belief about the winner (or loser) in a pair. Across four experiments (n = 145), we find substantial empirical support for such asymmetries in inferential learning. The learning policy favoured by our simulations and experiments gives rise to a compression of values that is routinely observed in psychophysics and behavioural economics. In other words, a seemingly biased learning strategy that yields well-known cognitive distortions can be beneficial for transitive inferential judgements.
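The asymmetric policy described in the abstract can be illustrated with a minimal simulation. This is a sketch, not the authors' implementation: the item names, learning rate, and trial counts are illustrative assumptions, and the update rule simply moves the winner's value estimate toward an upper bound while leaving the loser's belief untouched.

```python
def asymmetric_update(values, winner, loser, alpha=0.3):
    """Winner-only update: nudge the winner's value toward 1.

    The loser's value is deliberately left unchanged; this one-sided
    updating is the asymmetry described in the abstract.
    """
    values[winner] += alpha * (1.0 - values[winner])
    return values

# Learn from adjacent pairs only: observe A > B and B > C repeatedly.
values = {"A": 0.5, "B": 0.5, "C": 0.5}
for _ in range(50):
    values = asymmetric_update(values, "A", "B")  # A beats B
    values = asymmetric_update(values, "B", "C")  # B beats C

# The never-observed relation A > C can now be read off the learned values.
print(values["A"] > values["C"])  # True
```

Note how the values compress toward the top of the scale (A and B approach 1 while C stays at its prior), consistent with the abstract's point that this seemingly biased policy yields a compression of values while still supporting the transitive judgement.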

Suggested Citation

  • Simon Ciranka & Juan Linde-Domingo & Ivan Padezhki & Clara Wicharz & Charley M. Wu & Bernhard Spitzer, 2022. "Asymmetric reinforcement learning facilitates human inference of transitive relations," Nature Human Behaviour, Nature, vol. 6(4), pages 555-564, April.
  • Handle: RePEc:nat:nathum:v:6:y:2022:i:4:d:10.1038_s41562-021-01263-w
    DOI: 10.1038/s41562-021-01263-w

    Download full text from publisher

    File URL: https://www.nature.com/articles/s41562-021-01263-w
    File Function: Abstract
    Download Restriction: Access to the full text of the articles in this series is restricted.

    File URL: https://libkey.io/10.1038/s41562-021-01263-w?utm_source=ideas
    LibKey link: if access is restricted and your library uses this service, LibKey will redirect you to where you can use your library subscription to access this item

    As the access to this document is restricted, you may want to search for a different version of it.

    References listed on IDEAS

    1. Mullen, Katharine M. & Ardia, David & Gil, David L. & Windover, Donald & Cline, James, 2011. "DEoptim: An R Package for Global Optimization by Differential Evolution," Journal of Statistical Software, Foundation for Open Access Statistics, vol. 40(i06).
    2. Stefano Palminteri & Mehdi Khamassi & Mateus Joffily & Giorgio Coricelli, 2015. "Contextual modulation of value signals in reward and punishment learning," Nature Communications, Nature, vol. 6(1), pages 1-14, November.
    3. Vickie Li & Santiago Herce Castañón & Joshua A Solomon & Hildward Vandormael & Christopher Summerfield, 2017. "Robust averaging protects decisions from noise in neural computations," PLOS Computational Biology, Public Library of Science, vol. 13(8), pages 1-19, August.
    4. Greg Jensen & Fabian Muñoz & Yelda Alkan & Vincent P Ferrera & Herbert S Terrace, 2015. "Implicit Value Updating Explains Transitive Inference Performance: The Betasort Model," PLOS Computational Biology, Public Library of Science, vol. 11(9), pages 1-27, September.
    5. Daniel Kahneman & Amos Tversky, 2013. "Prospect Theory: An Analysis of Decision Under Risk," World Scientific Book Chapters, in: Leonard C MacLean & William T Ziemba (ed.), HANDBOOK OF THE FUNDAMENTALS OF FINANCIAL DECISION MAKING Part I, chapter 6, pages 99-127, World Scientific Publishing Co. Pte. Ltd..
    6. Stefano Palminteri & Germain Lefebvre & Emma J Kilford & Sarah-Jayne Blakemore, 2017. "Confirmation bias in human reinforcement learning: Evidence from counterfactual feedback processing," PLOS Computational Biology, Public Library of Science, vol. 13(8), pages 1-22, August.
    7. Samuel J. Cheyette & Steven T. Piantadosi, 2020. "A unified account of numerosity perception," Nature Human Behaviour, Nature, vol. 4(12), pages 1265-1272, December.
    8. Stefano Palminteri & Mehdi Khamassi & Mateus Joffily & Giorgio Coricelli, 2015. "Contextual modulation of value signals in reward and punishment learning," Post-Print halshs-01236045, HAL.
    9. Bernhard Spitzer & Leonhard Waschke & Christopher Summerfield, 2017. "Selective overweighting of larger magnitudes during noisy numerical comparison," Nature Human Behaviour, Nature, vol. 1(8), pages 1-8, August.
    10. Germain Lefebvre & Maël Lebreton & Florent Meyniel & Sacha Bourgeois-Gironde & Stefano Palminteri, 2017. "Behavioural and neural characterization of optimistic reinforcement learning," Nature Human Behaviour, Nature, vol. 1(4), pages 1-9, April.
    11. Charley M. Wu & Eric Schulz & Maarten Speekenbrink & Jonathan D. Nelson & Björn Meder, 2018. "Generalization guides human exploration in vast decision spaces," Nature Human Behaviour, Nature, vol. 2(12), pages 915-924, December.
    Full references (including those not matched with items on IDEAS)

    Citations

    Citations are extracted by the CitEc Project; subscribe to its RSS feed for this item.


    Cited by:

    1. Huang, Shengzhi & Huang, Yong & Bu, Yi & Luo, Zhuoran & Lu, Wei, 2023. "Disclosing the interactive mechanism behind scientists’ topic selection behavior from the perspective of the productivity and the impact," Journal of Informetrics, Elsevier, vol. 17(2).
    2. Anna P. Giron & Simon Ciranka & Eric Schulz & Wouter van den Bos & Azzurra Ruggeri & Björn Meder & Charley M. Wu, 2023. "Developmental changes in exploration resemble stochastic optimization," Nature Human Behaviour, Nature, vol. 7(11), pages 1955-1967, November.

    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. Johann Lussange & Ivan Lazarevich & Sacha Bourgeois-Gironde & Stefano Palminteri & Boris Gutkin, 2021. "Modelling Stock Markets by Multi-agent Reinforcement Learning," Computational Economics, Springer;Society for Computational Economics, vol. 57(1), pages 113-147, January.
    2. Stefano Palminteri & Germain Lefebvre & Emma J Kilford & Sarah-Jayne Blakemore, 2017. "Confirmation bias in human reinforcement learning: Evidence from counterfactual feedback processing," PLOS Computational Biology, Public Library of Science, vol. 13(8), pages 1-22, August.
    3. Johann Lussange & Boris Gutkin, 2023. "Order book regulatory impact on stock market quality: a multi-agent reinforcement learning perspective," Papers 2302.04184, arXiv.org.
    4. Wei-Hsiang Lin & Justin L Gardner & Shih-Wei Wu, 2020. "Context effects on probability estimation," PLOS Biology, Public Library of Science, vol. 18(3), pages 1-45, March.
    5. Mikhail S. Spektor & Hannah Seidler, 2022. "Violations of economic rationality due to irrelevant information during learning in decision from experience," Judgment and Decision Making, Society for Judgment and Decision Making, vol. 17(2), pages 425-448, March.
    6. Johann Lussange & Stefano Vrizzi & Stefano Palminteri & Boris Gutkin, 2024. "Modelling crypto markets by multi-agent reinforcement learning," Papers 2402.10803, arXiv.org.
    7. Johann Lussange & Stefano Vrizzi & Sacha Bourgeois-Gironde & Stefano Palminteri & Boris Gutkin, 2023. "Stock Price Formation: Precepts from a Multi-Agent Reinforcement Learning Model," Computational Economics, Springer;Society for Computational Economics, vol. 61(4), pages 1523-1544, April.
    8. Olschewski, Sebastian & Diao, Linan & Rieskamp, Jörg, 2021. "Reinforcement learning about asset variability and correlation in repeated portfolio decisions," Journal of Behavioral and Experimental Finance, Elsevier, vol. 32(C).
    9. Ryan Webb & Paul W. Glimcher & Kenway Louie, 2021. "The Normalization of Consumer Valuations: Context-Dependent Preferences from Neurobiological Constraints," Management Science, INFORMS, vol. 67(1), pages 93-125, January.
    10. Cristofaro, Matteo, 2020. "“I feel and think, therefore I am”: An Affect-Cognitive Theory of management decisions," European Management Journal, Elsevier, vol. 38(2), pages 344-355.
    11. Daniel J. Benjamin, 2018. "Errors in Probabilistic Reasoning and Judgment Biases," NBER Working Papers 25200, National Bureau of Economic Research, Inc.
    12. Maël Lebreton & Karin Bacily & Stefano Palminteri & Jan B Engelmann, 2019. "Contextual influence on confidence judgments in human reinforcement learning," PLOS Computational Biology, Public Library of Science, vol. 15(4), pages 1-27, April.
    13. Lefebvre, Germain & Nioche, Aurélien & Bourgeois-Gironde, Sacha & Palminteri, Stefano, 2018. "An Empirical Investigation of the Emergence of Money: Contrasting Temporal Difference and Opportunity Cost Reinforcement Learning," MPRA Paper 85586, University Library of Munich, Germany.
    14. Aurélien Nioche & Basile Garcia & Germain Lefebvre & Thomas Boraud & Nicolas P. Rougier & Sacha Bourgeois-Gironde, 2019. "Coordination over a unique medium of exchange under information scarcity," Palgrave Communications, Palgrave Macmillan, vol. 5(1), pages 1-11, December.
    15. Stefano Palminteri & Emma J Kilford & Giorgio Coricelli & Sarah-Jayne Blakemore, 2016. "The Computational Development of Reinforcement Learning during Adolescence," PLOS Computational Biology, Public Library of Science, vol. 12(6), pages 1-25, June.
    16. Lou Safra & Coralie Chevallier & Stefano Palminteri, 2019. "Depressive symptoms are associated with blunted reward learning in social contexts," PLOS Computational Biology, Public Library of Science, vol. 15(7), pages 1-22, July.
    17. Chih-Chung Ting & Nahuel Salem-Garcia & Stefano Palminteri & Jan B. Engelmann & Maël Lebreton, 2023. "Neural and computational underpinnings of biased confidence in human reinforcement learning," Nature Communications, Nature, vol. 14(1), pages 1-18, December.
    18. Antoine Collomb-Clerc & Maëlle C. M. Gueguen & Lorella Minotti & Philippe Kahane & Vincent Navarro & Fabrice Bartolomei & Romain Carron & Jean Regis & Stephan Chabardès & Stefano Palminteri & Julien B, 2023. "Human thalamic low-frequency oscillations correlate with expected value and outcomes during reinforcement learning," Nature Communications, Nature, vol. 14(1), pages 1-10, December.
    19. M. A. Pisauro & E. F. Fouragnan & D. H. Arabadzhiyska & M. A. J. Apps & M. G. Philiastides, 2022. "Neural implementation of computational mechanisms underlying the continuous trade-off between cooperation and competition," Nature Communications, Nature, vol. 13(1), pages 1-18, December.
    20. Koen M. M. Frolichs & Gabriela Rosenblau & Christoph W. Korn, 2022. "Incorporating social knowledge structures into computational models," Nature Communications, Nature, vol. 13(1), pages 1-18, December.

    More about this item

    Statistics

    Access and download statistics

    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:nat:nathum:v:6:y:2022:i:4:d:10.1038_s41562-021-01263-w. See general information about how to correct material in RePEc.

    If you have authored this item and are not yet registered with RePEc, we encourage you to do it here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.

    If CitEc recognized a bibliographic reference but did not link an item in RePEc to it, you can help with this form.

    If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

    For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: Sonal Shukla or Springer Nature Abstracting and Indexing (email available below). General contact details of provider: http://www.nature.com.

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.