Printed from https://ideas.repec.org/a/plo/pcbi00/1012568.html

An inductive bias for slowly changing features in human reinforcement learning

Authors

Listed:
  • Noa L Hedrich
  • Eric Schulz
  • Sam Hall-McMaster
  • Nicolas W Schuck

Abstract

Identifying goal-relevant features in novel environments is a central challenge for efficient behaviour. We asked whether humans address this challenge by relying on prior knowledge about common properties of reward-predicting features. One such property is the rate of change of features, given that behaviourally relevant processes tend to change on a slower timescale than noise. Hence, we asked whether humans are biased to learn more when task-relevant features are slow rather than fast. To test this idea, 295 human participants were asked to learn the rewards of two-dimensional bandits when either a slowly or quickly changing feature of the bandit predicted reward. Across two experiments and one preregistered replication, participants accrued more reward when a bandit’s relevant feature changed slowly, and its irrelevant feature quickly, as compared to the opposite. We did not find a difference between conditions in the ability to generalise to unseen feature values. Testing how feature speed could affect learning with a set of four function approximation Kalman filter models revealed that participants had a higher learning rate for the slow feature, and adjusted their learning to both the relevance and the speed of feature changes. The larger the improvement in participants’ performance for slow compared to fast bandits, the more strongly they adjusted their learning rates. These results provide evidence that human reinforcement learning favours slower features, suggesting a bias in how humans approach reward learning.

Author summary: Learning experiments in the laboratory are often assumed to exist in a vacuum, where participants solve a given task independently of how they learn in more natural circumstances. But humans and other animals are in fact well known to “meta learn”, i.e. to leverage generalisable assumptions about how to learn from other experiences. Taking inspiration from a well-known machine learning technique known as slow feature analysis, we investigated one specific instance of such an assumption in learning: the possibility that humans tend to focus on slowly rather than quickly changing features when learning about rewards. To test this, we developed a task where participants had to learn the value of stimuli composed of two features. Participants indeed learned better from a slowly rather than quickly changing feature that predicted reward. Computational modelling of participant behaviour indicated that participants had a higher learning rate for slowly changing features from the outset. Hence, our results support the idea that human reinforcement learning reflects a priori assumptions about the reward structure in natural environments.
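The Kalman filter models mentioned in the abstract treat the learning rate as a Kalman gain that adapts over trials. The following is a minimal one-dimensional sketch of that general mechanism, not the authors' four function approximation models; the drift and noise variances (`q`, `r`) are arbitrary example values chosen only to show that the filter's effective learning rate is sensitive to how quickly a tracked quantity changes.

```python
# Minimal sketch: a Kalman filter's gain acts as a speed-sensitive
# learning rate. Illustration only -- not the paper's exact models;
# q and r below are hypothetical example variances.

def steady_state_gain(q, r, n_steps=1000):
    """Iterate the Kalman variance recursion and return the
    asymptotic gain, i.e. the effective learning rate.

    q: process variance (how quickly the tracked value drifts)
    r: observation noise variance
    """
    p = 1.0  # prior estimate variance
    k = 0.0
    for _ in range(n_steps):
        p = p + q          # predict: uncertainty grows with drift
        k = p / (p + r)    # update: Kalman gain
        p = (1.0 - k) * p  # posterior variance after the observation
    return k

slow = steady_state_gain(q=0.01, r=1.0)  # slowly drifting quantity
fast = steady_state_gain(q=1.0, r=1.0)   # quickly drifting quantity
print(f"gain (slow drift) = {slow:.3f}, gain (fast drift) = {fast:.3f}")
```

In this simplified setting the gain grows with the drift variance `q`, so a filter of this kind naturally yields different effective learning rates for quantities that change at different speeds; this speed sensitivity is the general mechanism such models exploit to probe learning about slow versus fast features.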

Suggested Citation

  • Noa L Hedrich & Eric Schulz & Sam Hall-McMaster & Nicolas W Schuck, 2024. "An inductive bias for slowly changing features in human reinforcement learning," PLOS Computational Biology, Public Library of Science, vol. 20(11), pages 1-30, November.
  • Handle: RePEc:plo:pcbi00:1012568
    DOI: 10.1371/journal.pcbi.1012568

    Download full text from publisher

    File URL: https://journals.plos.org/ploscompbiol/article?id=10.1371/journal.pcbi.1012568
    Download Restriction: no

    File URL: https://journals.plos.org/ploscompbiol/article/file?id=10.1371/journal.pcbi.1012568&type=printable
    Download Restriction: no

    File URL: https://libkey.io/10.1371/journal.pcbi.1012568?utm_source=ideas
    LibKey link: if access is restricted and your library uses this service, LibKey will redirect you to a version you can access through your library subscription


    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. Antoine Collomb-Clerc & Maëlle C. M. Gueguen & Lorella Minotti & Philippe Kahane & Vincent Navarro & Fabrice Bartolomei & Romain Carron & Jean Regis & Stephan Chabardès & Stefano Palminteri & Julien B, 2023. "Human thalamic low-frequency oscillations correlate with expected value and outcomes during reinforcement learning," Nature Communications, Nature, vol. 14(1), pages 1-10, December.
    2. Johann Lussange & Stefano Vrizzi & Sacha Bourgeois-Gironde & Stefano Palminteri & Boris Gutkin, 2023. "Stock Price Formation: Precepts from a Multi-Agent Reinforcement Learning Model," Computational Economics, Springer;Society for Computational Economics, vol. 61(4), pages 1523-1544, April.
    3. Johann Lussange & Ivan Lazarevich & Sacha Bourgeois-Gironde & Stefano Palminteri & Boris Gutkin, 2021. "Modelling Stock Markets by Multi-agent Reinforcement Learning," Computational Economics, Springer;Society for Computational Economics, vol. 57(1), pages 113-147, January.
    4. M. A. Pisauro & E. F. Fouragnan & D. H. Arabadzhiyska & M. A. J. Apps & M. G. Philiastides, 2022. "Neural implementation of computational mechanisms underlying the continuous trade-off between cooperation and competition," Nature Communications, Nature, vol. 13(1), pages 1-18, December.
    5. Koen M. M. Frolichs & Gabriela Rosenblau & Christoph W. Korn, 2022. "Incorporating social knowledge structures into computational models," Nature Communications, Nature, vol. 13(1), pages 1-18, December.
    6. Maël Lebreton & Karin Bacily & Stefano Palminteri & Jan B Engelmann, 2019. "Contextual influence on confidence judgments in human reinforcement learning," PLOS Computational Biology, Public Library of Science, vol. 15(4), pages 1-27, April.
    7. Stefano Palminteri & Germain Lefebvre & Emma J Kilford & Sarah-Jayne Blakemore, 2017. "Confirmation bias in human reinforcement learning: Evidence from counterfactual feedback processing," PLOS Computational Biology, Public Library of Science, vol. 13(8), pages 1-22, August.
    8. Lefebvre, Germain & Nioche, Aurélien & Bourgeois-Gironde, Sacha & Palminteri, Stefano, 2018. "An Empirical Investigation of the Emergence of Money: Contrasting Temporal Difference and Opportunity Cost Reinforcement Learning," MPRA Paper 85586, University Library of Munich, Germany.
    9. Johann Lussange & Boris Gutkin, 2023. "Order book regulatory impact on stock market quality: a multi-agent reinforcement learning perspective," Papers 2302.04184, arXiv.org.
    10. Wei-Hsiang Lin & Justin L Gardner & Shih-Wei Wu, 2020. "Context effects on probability estimation," PLOS Biology, Public Library of Science, vol. 18(3), pages 1-45, March.
    11. Johann Lussange & Stefano Vrizzi & Stefano Palminteri & Boris Gutkin, 2024. "Mesoscale effects of trader learning behaviors in financial markets: A multi-agent reinforcement learning study," PLOS ONE, Public Library of Science, vol. 19(4), pages 1-40, April.
    12. Mikhail S. Spektor & Hannah Seidler, 2022. "Violations of economic rationality due to irrelevant information during learning in decision from experience," Judgment and Decision Making, Society for Judgment and Decision Making, vol. 17(2), pages 425-448, March.
    13. Sepulveda, Pradyumna & Aitsahalia, Ines & Kumar, Krishan & Atkin, Tobias & Iigaya, Kiyohito, 2024. "Addressing Altered Anticipation as a Transdiagnostic Target through Computational Psychiatry," OSF Preprints dtm3r, Center for Open Science.
    14. Stefano Palminteri & Emma J Kilford & Giorgio Coricelli & Sarah-Jayne Blakemore, 2016. "The Computational Development of Reinforcement Learning during Adolescence," PLOS Computational Biology, Public Library of Science, vol. 12(6), pages 1-25, June.
    15. Simon Ciranka & Juan Linde-Domingo & Ivan Padezhki & Clara Wicharz & Charley M. Wu & Bernhard Spitzer, 2022. "Asymmetric reinforcement learning facilitates human inference of transitive relations," Nature Human Behaviour, Nature, vol. 6(4), pages 555-564, April.
    16. Gaia Molinaro & Anne G E Collins, 2023. "Intrinsic rewards explain context-sensitive valuation in reinforcement learning," PLOS Biology, Public Library of Science, vol. 21(7), pages 1-31, July.
    17. Lou Safra & Coralie Chevallier & Stefano Palminteri, 2019. "Depressive symptoms are associated with blunted reward learning in social contexts," PLOS Computational Biology, Public Library of Science, vol. 15(7), pages 1-22, July.
    18. Chih-Chung Ting & Nahuel Salem-Garcia & Stefano Palminteri & Jan B. Engelmann & Maël Lebreton, 2023. "Neural and computational underpinnings of biased confidence in human reinforcement learning," Nature Communications, Nature, vol. 14(1), pages 1-18, December.
    19. Robert Legenstein & Niko Wilbert & Laurenz Wiskott, 2010. "Reinforcement Learning on Slow Features of High-Dimensional Input Streams," PLOS Computational Biology, Public Library of Science, vol. 6(8), pages 1-13, August.
    20. Sven Dähne & Niko Wilbert & Laurenz Wiskott, 2014. "Slow Feature Analysis on Retinal Waves Leads to V1 Complex Cells," PLOS Computational Biology, Public Library of Science, vol. 10(5), pages 1-13, May.


    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:plo:pcbi00:1012568. See general information about how to correct material in RePEc.

If you have authored this item and are not yet registered with RePEc, we encourage you to do it here. This allows you to link your profile to this item, and to accept potential citations to this item that we are uncertain about.

If CitEc recognized a bibliographic reference but did not link an item in RePEc to it, you can help with this form.

If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

    For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: ploscompbiol (email available below). General contact details of provider: https://journals.plos.org/ploscompbiol/ .

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.