Author
Listed:
- Sreejan Kumar
- Ishita Dasgupta
- Nathaniel D Daw
- Jonathan D Cohen
- Thomas L Griffiths
Abstract
The ability to acquire abstract knowledge is a hallmark of human intelligence and is believed by many to be one of the core differences between humans and neural network models. Agents can be endowed with an inductive bias towards abstraction through meta-learning, where they are trained on a distribution of tasks that share some abstract structure that can be learned and applied. However, because neural networks are hard to interpret, it can be difficult to tell whether agents have learned the underlying abstraction, or alternatively statistical patterns that are characteristic of that abstraction. In this work, we compare the performance of humans and agents in a meta-reinforcement learning paradigm in which tasks are generated from abstract rules. We define a novel methodology for building “task metamers” that closely match the statistics of the abstract tasks but use a different underlying generative process, and evaluate performance on both abstract and metamer tasks. We find that humans perform better at abstract tasks than metamer tasks whereas common neural network architectures typically perform worse on the abstract tasks than the matched metamers. This work provides a foundation for characterizing differences between humans and machine learning that can be used in future work towards developing machines with more human-like behavior.

Author summary: There has been a recent explosion of progress in artificial intelligence models, in the form of neural network models. As these models achieve human-level performance in a variety of task domains, one may ask what exactly is the difference between these models and human intelligence. Many researchers have hypothesized that neural networks often learn to solve problems through simple pattern matching while humans can often understand a problem’s underlying abstract concepts or causal mechanisms and solve it using reasoning. Because it is difficult to tell which of these two strategies is being employed in problem solving, this work develops a method to disentangle the two from a human’s or neural network model’s behavior. The findings confirm that humans typically use abstraction to solve problems whereas neural networks typically use pattern matching instead.
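The central contrast in the abstract, rule-generated tasks versus statistically matched "metamers", can be made concrete with a toy sketch. The Python snippet below is not the paper's actual task design or metamer construction (neither is specified here); the 4x4 grid, the row-or-column reward rule, and the marginal-matching procedure are all assumptions introduced purely to illustrate the idea of matching statistics while breaking the abstract rule.

```python
import numpy as np

rng = np.random.default_rng(0)
GRID = 4  # toy 4x4 board; the paper's actual task parameters may differ


def abstract_board():
    """Toy stand-in for a rule-generated task: rewarded tiles fill one row or column."""
    board = np.zeros((GRID, GRID), dtype=int)
    if rng.random() < 0.5:
        board[rng.integers(GRID), :] = 1
    else:
        board[:, rng.integers(GRID)] = 1
    return board


def metamer_board(reference_boards):
    """Toy 'metamer': sample each tile independently with the same marginal reward
    frequency as the abstract boards, so low-order statistics are matched but no
    row/column rule ties the tiles together."""
    marginals = reference_boards.mean(axis=0)
    return (rng.random((GRID, GRID)) < marginals).astype(int)


# Build matched task sets.
abstract_set = np.stack([abstract_board() for _ in range(500)])
metamer_set = np.stack([metamer_board(abstract_set) for _ in range(500)])

# Per-tile reward frequencies are approximately matched across the two sets ...
print(abstract_set.mean(), metamer_set.mean())
# ... but only abstract boards always contain exactly GRID rewards arranged in a line.
print(np.unique(abstract_set.sum(axis=(1, 2))), np.unique(metamer_set.sum(axis=(1, 2))))
```

Evaluating humans or trained agents on held-out boards from each set would then let one ask whether performance relies on the underlying rule or only on the matched statistics, which is the kind of contrast the study draws.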
Suggested Citation
Sreejan Kumar & Ishita Dasgupta & Nathaniel D Daw & Jonathan D Cohen & Thomas L Griffiths, 2023.
"Disentangling Abstraction from Statistical Pattern Matching in Human and Machine Learning,"
PLOS Computational Biology, Public Library of Science, vol. 19(8), pages 1-21, August.
Handle:
RePEc:plo:pcbi00:1011316
DOI: 10.1371/journal.pcbi.1011316