Author
- Stefano Palminteri (Institut National de la Santé et de la Recherche Médicale; Laboratoire de Neurosciences Cognitives et Computationnelles, Ecole Normale Supérieure, Université Paris Sciences et Lettres, Departement d'Etudes Cognitives)
Abstract
The reinforcement learning framework provides a computational and behavioral foundation for understanding how agents learn to maximize rewards and minimize punishments through interaction with their environment. This framework has been widely applied across disciplines, including artificial intelligence, animal psychology, and economics. Over the last decade, a growing body of research has shown that human reinforcement learning often deviates from normative standards, exhibiting systematic biases. The first aim of this paper is to propose a conceptual framework and a taxonomy for evaluating computational biases within reinforcement learning. We specifically propose a distinction between praxic biases, characterized by a mismatch between internal representations and selected actions, and epistemic biases, characterized by a mismatch between past experiences and internal representations. Building on this foundation, we characterize and discuss two primary types of epistemic biases: relative valuation and biased update. We describe their behavioral signatures and discuss their potential adaptive roles. Finally, we elaborate on how these findings may shape future developments in both theoretical and applied domains. Notably, despite being widely used in clinical and educational settings, reinforcement-based interventions have been comparatively neglected in the domains of behavioral public policy and decision-making improvement, particularly when compared to more popular approaches such as nudges and boosts. In this review, we offer an explanation for this comparative neglect, which we believe is rooted in common historical and epistemological misconceptions, and advocate for a greater integration of reinforcement learning into the design of behavioral public policy.
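To make the two epistemic biases concrete, the Python sketch below shows one common way such biases are operationalized in simple bandit models: relative valuation is approximated by referencing each outcome to a running context average, and biased update by applying different learning rates to positive and negative prediction errors. The class name, parameter values, and the specific context-normalization rule are illustrative assumptions for this sketch, not the model specified in the paper.

import math
import random

class BiasedBanditLearner:
    """Toy learner combining two hypothetical epistemic biases:
    relative valuation (outcomes referenced to a context average) and
    biased update (asymmetric learning rates for positive vs. negative errors).
    All parameter names and defaults are illustrative assumptions."""

    def __init__(self, n_actions, alpha_pos=0.3, alpha_neg=0.1, beta=5.0, alpha_ctx=0.05):
        self.q = [0.0] * n_actions     # learned action values
        self.alpha_pos = alpha_pos     # learning rate after positive prediction errors
        self.alpha_neg = alpha_neg     # learning rate after negative prediction errors
        self.beta = beta               # softmax inverse temperature
        self.alpha_ctx = alpha_ctx     # learning rate for the context average
        self.context = 0.0             # running estimate of the average outcome

    def choose(self):
        # Softmax policy over current action values.
        weights = [math.exp(self.beta * q) for q in self.q]
        total = sum(weights)
        r, cumulative = random.random() * total, 0.0
        for action, w in enumerate(weights):
            cumulative += w
            if r <= cumulative:
                return action
        return len(self.q) - 1

    def update(self, action, reward):
        # Relative valuation: encode the outcome relative to the context average.
        self.context += self.alpha_ctx * (reward - self.context)
        relative_outcome = reward - self.context
        # Biased update: weight good and bad news with different learning rates.
        prediction_error = relative_outcome - self.q[action]
        alpha = self.alpha_pos if prediction_error > 0 else self.alpha_neg
        self.q[action] += alpha * prediction_error

Under these assumptions, setting alpha_pos greater than alpha_neg produces the optimistic (positivity) pattern of biased updating, while the context term makes identical absolute rewards yield different learned values in rich versus poor contexts, the signature of relative valuation.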