Printed from https://ideas.repec.org/a/plo/pcbi00/1004523.html

Implicit Value Updating Explains Transitive Inference Performance: The Betasort Model

Author

Listed:
  • Greg Jensen
  • Fabian Muñoz
  • Yelda Alkan
  • Vincent P Ferrera
  • Herbert S Terrace

Abstract

Transitive inference (the ability to infer that B > D given that B > C and C > D) is a widespread characteristic of serial learning, observed in dozens of species. Despite these robust behavioral effects, reinforcement learning models reliant on reward prediction error or associative strength routinely fail to perform these inferences. We propose an algorithm called betasort, inspired by cognitive processes, which performs transitive inference at low computational cost. This is accomplished by (1) representing stimulus positions along a unit span using beta distributions, (2) treating positive and negative feedback asymmetrically, and (3) updating the position of every stimulus during every trial, whether that stimulus was visible or not. Performance was compared for rhesus macaques, humans, and the betasort algorithm, as well as Q-learning, an established reward prediction error (RPE) model. Of these, only Q-learning failed to respond above chance during critical test trials. Betasort's success (when compared to RPE models) and its computational efficiency (when compared to full Markov decision process implementations) suggest that the study of reinforcement learning in organisms will be best served by a feature-driven approach to comparing formal models.

Author Summary: Although machine learning systems can solve a wide variety of problems, they remain limited in their ability to make logical inferences. We developed a new computational model, called betasort, which addresses these limitations for a certain class of problems: those in which the algorithm must infer the order of a set of items by trial and error. Unlike extant machine learning systems (but like children and many non-human animals), betasort is able to perform "transitive inferences" about the ordering of a set of images. The patterns of error made by betasort resemble those made by children and non-human animals, and the resulting learning is achieved at low computational cost. Additionally, betasort is difficult to classify as either "model-free" or "model-based" according to the formal specifications of those classifications in the machine learning literature. One of the broader implications of these results is that achieving a more comprehensive understanding of how the brain learns will require analysts to entertain other candidate learning models.
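To make the three features in the abstract concrete, here is a minimal Python sketch of a betasort-style learner. The class name, counter scheme, and numeric constants are illustrative assumptions rather than the authors' published update equations (the paper specifies those in full); the sketch only shows the shape of the mechanism: beta-distributed position estimates on the unit span, asymmetric treatment of positive and negative feedback, and implicit updating of stimuli not shown on the current trial.

```python
import random

class BetasortSketch:
    """Illustrative betasort-style learner (NOT the paper's exact equations).

    Each stimulus i keeps evidence counts (upper[i], lower[i]); its position
    on the unit span is represented by Beta(upper[i] + 1, lower[i] + 1).
    """

    def __init__(self, n_items):
        self.upper = [0.0] * n_items  # evidence that the item sits high
        self.lower = [0.0] * n_items  # evidence that the item sits low

    def sample_position(self, i):
        # Draw a position estimate from the item's beta distribution.
        return random.betavariate(self.upper[i] + 1.0, self.lower[i] + 1.0)

    def choose(self, i, j):
        # Pick whichever item currently samples the higher position.
        return i if self.sample_position(i) > self.sample_position(j) else j

    def update(self, chosen, unchosen, correct):
        """Asymmetric feedback plus implicit updating of unseen items."""
        if correct:
            # Positive feedback: consolidate the pair's current ordering.
            self.upper[chosen] += 1.0
            self.lower[unchosen] += 1.0
        else:
            # Negative feedback: reverse the pair's relative evidence...
            self.upper[unchosen] += 1.0
            self.lower[chosen] += 1.0
            # ...and implicitly nudge every other item relative to the pair,
            # so stimuli not visible on this trial still move (feature 3).
            # The 0.1 step size is an arbitrary illustrative choice.
            mid = 0.5 * (self.sample_position(chosen)
                         + self.sample_position(unchosen))
            for k in range(len(self.upper)):
                if k in (chosen, unchosen):
                    continue
                if self.sample_position(k) > mid:
                    self.upper[k] += 0.1
                else:
                    self.lower[k] += 0.1
```

Because adjacent-pair feedback shifts the beta distributions of all items, not just the pair on screen, a learner of this shape can come to rank B above D without ever seeing that pair together, which a tabular Q-learner updating only the presented state-action values cannot do.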

Suggested Citation

  • Greg Jensen & Fabian Muñoz & Yelda Alkan & Vincent P Ferrera & Herbert S Terrace, 2015. "Implicit Value Updating Explains Transitive Inference Performance: The Betasort Model," PLOS Computational Biology, Public Library of Science, vol. 11(9), pages 1-27, September.
  • Handle: RePEc:plo:pcbi00:1004523
    DOI: 10.1371/journal.pcbi.1004523

    Download full text from publisher

    File URL: https://journals.plos.org/ploscompbiol/article?id=10.1371/journal.pcbi.1004523
    Download Restriction: no

    File URL: https://journals.plos.org/ploscompbiol/article/file?id=10.1371/journal.pcbi.1004523&type=printable
    Download Restriction: no

    File URL: https://libkey.io/10.1371/journal.pcbi.1004523?utm_source=ideas
    LibKey link: if access is restricted and if your library uses this service, LibKey will redirect you to where you can use your library subscription to access this item

    References listed on IDEAS

    1. Emanuele Raineri & Marc Dabad & Simon Heath, 2014. "A Note on Exact Differences between Beta Distributions in Genomic (Methylation) Studies," PLOS ONE, Public Library of Science, vol. 9(5), pages 1-5, May.
    2. Pascale Waelti & Anthony Dickinson & Wolfram Schultz, 2001. "Dopamine responses comply with basic assumptions of formal learning theory," Nature, Nature, vol. 412(6842), pages 43-48, July.
    3. Takahashi, Taiki & Oono, Hidemi & Radford, Mark H.B., 2008. "Psychophysics of time perception and intertemporal choice models," Physica A: Statistical Mechanics and its Applications, Elsevier, vol. 387(8), pages 2066-2074.
    4. Logan Grosenick & Tricia S. Clement & Russell D. Fernald, 2007. "Erratum: Fish can infer social rank by observation alone," Nature, Nature, vol. 446(7131), pages 102-102, March.
    5. David B. McDonald & Daizaburo Shizuka, 2013. "Comparative transitive and temporal orderliness in dominance networks," Behavioral Ecology, International Society for Behavioral Ecology, vol. 24(2), pages 511-520.
    6. Freeman Dyson, 2004. "A meeting with Enrico Fermi," Nature, Nature, vol. 427(6972), pages 297-297, January.
    7. Volodymyr Mnih & Koray Kavukcuoglu & David Silver & Andrei A. Rusu & Joel Veness & Marc G. Bellemare & Alex Graves & Martin Riedmiller & Andreas K. Fidjeland & Georg Ostrovski & Stig Petersen & Charle, 2015. "Human-level control through deep reinforcement learning," Nature, Nature, vol. 518(7540), pages 529-533, February.
    8. Guillermo Paz-y-Miño C & Alan B. Bond & Alan C. Kamil & Russell P. Balda, 2004. "Pinyon jays use transitive inference to predict social dominance," Nature, Nature, vol. 430(7001), pages 778-781, August.
    9. Logan Grosenick & Tricia S. Clement & Russell D. Fernald, 2007. "Fish can infer social rank by observation alone," Nature, Nature, vol. 445(7126), pages 429-432, January.
    Full references (including those not matched with items on IDEAS)

    Citations

    Citations are extracted by the CitEc Project; subscribe to its RSS feed for this item.


    Cited by:

    1. Simon Ciranka & Juan Linde-Domingo & Ivan Padezhki & Clara Wicharz & Charley M. Wu & Bernhard Spitzer, 2022. "Asymmetric reinforcement learning facilitates human inference of transitive relations," Nature Human Behaviour, Nature, vol. 6(4), pages 555-564, April.

    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. Ivan D Chase & W Brent Lindquist, 2016. "The Fragility of Individual-Based Explanations of Social Hierarchies: A Test Using Animal Pecking Orders," PLOS ONE, Public Library of Science, vol. 11(7), pages 1-16, July.
    2. Takashi Hotta & Kentaro Ueno & Yuya Hataji & Hika Kuroshima & Kazuo Fujita & Masanori Kohda, 2020. "Transitive inference in cleaner wrasses (Labroides dimidiatus)," PLOS ONE, Public Library of Science, vol. 15(8), pages 1-13, August.
    3. Andrea Polonioli, 2013. "Re-assessing the Heuristics debate," Mind & Society: Cognitive Studies in Economics and Social Sciences, Springer;Fondazione Rosselli, vol. 12(2), pages 263-271, November.
    4. Tulika Saha & Sriparna Saha & Pushpak Bhattacharyya, 2020. "Towards sentiment aided dialogue policy learning for multi-intent conversations using hierarchical reinforcement learning," PLOS ONE, Public Library of Science, vol. 15(7), pages 1-28, July.
    5. Elizabeth A Hobson & Simon DeDeo, 2015. "Social Feedback and the Emergence of Rank in Animal Society," PLOS Computational Biology, Public Library of Science, vol. 11(9), pages 1-20, September.
    6. Mahmoud Mahfouz & Angelos Filos & Cyrine Chtourou & Joshua Lockhart & Samuel Assefa & Manuela Veloso & Danilo Mandic & Tucker Balch, 2019. "On the Importance of Opponent Modeling in Auction Markets," Papers 1911.12816, arXiv.org.
    7. Imen Azzouz & Wiem Fekih Hassen, 2023. "Optimization of Electric Vehicles Charging Scheduling Based on Deep Reinforcement Learning: A Decentralized Approach," Energies, MDPI, vol. 16(24), pages 1-18, December.
    8. Jacob W. Crandall & Mayada Oudah & Tennom & Fatimah Ishowo-Oloko & Sherief Abdallah & Jean-François Bonnefon & Manuel Cebrian & Azim Shariff & Michael A. Goodrich & Iyad Rahwan, 2018. "Cooperating with machines," Nature Communications, Nature, vol. 9(1), pages 1-12, December.
      • Abdallah, Sherief & Bonnefon, Jean-François & Cebrian, Manuel & Crandall, Jacob W. & Ishowo-Oloko, Fatimah & Oudah, Mayada & Rahwan, Iyad & Shariff, Azim & Tennom,, 2017. "Cooperating with Machines," TSE Working Papers 17-806, Toulouse School of Economics (TSE).
      • Abdallah, Sherief & Bonnefon, Jean-François & Cebrian, Manuel & Crandall, Jacob W. & Ishowo-Oloko, Fatimah & Oudah, Mayada & Rahwan, Iyad & Shariff, Azim & Tennom,, 2017. "Cooperating with Machines," IAST Working Papers 17-68, Institute for Advanced Study in Toulouse (IAST).
      • Jacob Crandall & Mayada Oudah & Fatimah Ishowo-Oloko Tennom & Fatimah Ishowo-Oloko & Sherief Abdallah & Jean-François Bonnefon & Manuel Cebrian & Azim Shariff & Michael Goodrich & Iyad Rahwan, 2018. "Cooperating with machines," Post-Print hal-01897802, HAL.
    9. Sun, Alexander Y., 2020. "Optimal carbon storage reservoir management through deep reinforcement learning," Applied Energy, Elsevier, vol. 278(C).
    10. Yassine Chemingui & Adel Gastli & Omar Ellabban, 2020. "Reinforcement Learning-Based School Energy Management System," Energies, MDPI, vol. 13(23), pages 1-21, December.
    11. Woo Jae Byun & Bumkyu Choi & Seongmin Kim & Joohyun Jo, 2023. "Practical Application of Deep Reinforcement Learning to Optimal Trade Execution," FinTech, MDPI, vol. 2(3), pages 1-16, June.
    12. Lu, Yu & Xiang, Yue & Huang, Yuan & Yu, Bin & Weng, Liguo & Liu, Junyong, 2023. "Deep reinforcement learning based optimal scheduling of active distribution system considering distributed generation, energy storage and flexible load," Energy, Elsevier, vol. 271(C).
    13. Yuhong Wang & Lei Chen & Hong Zhou & Xu Zhou & Zongsheng Zheng & Qi Zeng & Li Jiang & Liang Lu, 2021. "Flexible Transmission Network Expansion Planning Based on DQN Algorithm," Energies, MDPI, vol. 14(7), pages 1-21, April.
    14. Giles W Story & Ivaylo Vlaev & Ben Seymour & Joel S Winston & Ara Darzi & Raymond J Dolan, 2013. "Dread and the Disvalue of Future Pain," PLOS Computational Biology, Public Library of Science, vol. 9(11), pages 1-18, November.
    15. Huang, Ruchen & He, Hongwen & Gao, Miaojue, 2023. "Training-efficient and cost-optimal energy management for fuel cell hybrid electric bus based on a novel distributed deep reinforcement learning framework," Applied Energy, Elsevier, vol. 346(C).
    16. Smith, Trenton G. & Tasnadi, Attila, 2007. "A theory of natural addiction," Games and Economic Behavior, Elsevier, vol. 59(2), pages 316-344, May.
    17. Michelle M. LaMar, 2018. "Markov Decision Process Measurement Model," Psychometrika, Springer;The Psychometric Society, vol. 83(1), pages 67-88, March.
    18. Yang, Ting & Zhao, Liyuan & Li, Wei & Zomaya, Albert Y., 2021. "Dynamic energy dispatch strategy for integrated energy system based on improved deep reinforcement learning," Energy, Elsevier, vol. 235(C).
    19. Wang, Xuan & Shu, Gequn & Tian, Hua & Wang, Rui & Cai, Jinwen, 2020. "Operation performance comparison of CCHP systems with cascade waste heat recovery systems by simulation and operation optimisation," Energy, Elsevier, vol. 206(C).
    20. Wang, Yi & Qiu, Dawei & Sun, Mingyang & Strbac, Goran & Gao, Zhiwei, 2023. "Secure energy management of multi-energy microgrid: A physical-informed safe reinforcement learning approach," Applied Energy, Elsevier, vol. 335(C).

    More about this item

    Statistics

    Access and download statistics

    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:plo:pcbi00:1004523. See general information about how to correct material in RePEc.

    If you have authored this item and are not yet registered with RePEc, we encourage you to do it here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.

    If CitEc recognized a bibliographic reference but did not link an item in RePEc to it, you can help with this form.

    If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

    For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: ploscompbiol (email available below). General contact details of provider: https://journals.plos.org/ploscompbiol/ .

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.