Authors
Listed:
- Rebecca K West
- William J Harrison
- Natasha Matthews
- Jason B Mattingley
- David K Sewell
Abstract
The mechanisms that enable humans to evaluate their confidence across a range of different decisions remain poorly understood. To bridge this gap in understanding, we used computational modelling to investigate the processes that underlie confidence judgements for perceptual decisions and the extent to which these computations are the same in the visual and auditory modalities. Participants completed two versions of a categorisation task with visual or auditory stimuli and made confidence judgements about their category decisions. In each modality, we varied both evidence strength (i.e., the strength of the evidence for a particular category) and sensory uncertainty (i.e., the intensity of the sensory signal). We evaluated several classes of computational models which formalise the mapping of evidence strength and sensory uncertainty to confidence in different ways: 1) unscaled evidence strength models, 2) scaled evidence strength models, and 3) Bayesian models. Our model comparison results showed that across tasks and modalities, participants take evidence strength and sensory uncertainty into account in a way that is consistent with the scaled evidence strength class. Notably, the Bayesian class provided a relatively poor account of the data across modalities, particularly in the more complex categorisation task. Our findings suggest that a common process is used for evaluating confidence in perceptual decisions across domains, but that the parameter settings governing the process are tuned differently in each modality. Overall, our results highlight the impact of sensory uncertainty on confidence and the unity of metacognitive processing across sensory modalities.
Author summary
In this study, we investigated the computational processes that describe how people derive a sense of confidence in their decisions.
In particular, we used computational models to describe how decision confidence is generated from different stimulus features, specifically evidence strength and sensory uncertainty, and determined whether the same computations generalise to both visual and auditory decisions. We tested a range of different computational models from three distinct theoretical classes, where each class of models instantiated different algorithmic hypotheses about the computations that are used to generate confidence. We found that a single class of models, in which confidence is derived from a subjective assessment of the strength of the evidence for a particular choice scaled by an estimate of sensory uncertainty, provided the best account of confidence for both visual and auditory decisions. Our findings suggest that the same type of algorithm is used for evaluating confidence across sensory modalities but that the ‘settings’ (or parameters) of this process are fine-tuned within each modality.
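The qualitative difference between the three model classes can be illustrated with a toy sketch (our own illustration, not the authors' code). Assume a single evidence sample `x` drawn from one of two categories with means at +1 and -1 and known sensory noise `sigma`; these specific settings are assumptions for the example. The unscaled class ignores sensory uncertainty, the scaled class normalises evidence by an uncertainty estimate, and the Bayesian class reports the posterior probability of the chosen category.

```python
import math

def confidence_unscaled(x):
    # Unscaled evidence strength: confidence tracks the magnitude of
    # evidence for the chosen category and is unaffected by sensory noise.
    return abs(x)

def confidence_scaled(x, sigma):
    # Scaled evidence strength: evidence magnitude is normalised by an
    # estimate of sensory uncertainty (sigma) before mapping to confidence.
    return abs(x) / sigma

def confidence_bayesian(x, sigma):
    # Bayesian: posterior probability of the chosen (more likely) category
    # under a two-category Gaussian model with means +1 and -1 and equal
    # priors. The log posterior ratio is 2x / sigma^2.
    llr = 2.0 * x / sigma**2
    return 1.0 / (1.0 + math.exp(-abs(llr)))
```

For a fixed evidence level, the scaled and Bayesian models both predict lower confidence when `sigma` is larger, whereas the unscaled model predicts no effect of sensory uncertainty; the model comparison in the paper distinguishes the first two classes by the precise functional form of that scaling.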
Suggested Citation
Rebecca K West & William J Harrison & Natasha Matthews & Jason B Mattingley & David K Sewell, 2023.
"Modality independent or modality specific? Common computations underlie confidence judgements in visual and auditory decisions,"
PLOS Computational Biology, Public Library of Science, vol. 19(7), pages 1-39, July.
Handle:
RePEc:plo:pcbi00:1011245
DOI: 10.1371/journal.pcbi.1011245
Most related items
These are the items that most often cite the same works as this one and are cited by the same works as this one.
- Manuel Rausch & Michael Zehetleitner, 2019.
"The folded X-pattern is not necessarily a statistical signature of decision confidence,"
PLOS Computational Biology, Public Library of Science, vol. 15(10), pages 1-18, October.
- Philipp Schustek & Rubén Moreno-Bote, 2018.
"Instance-based generalization for human judgments about uncertainty,"
PLOS Computational Biology, Public Library of Science, vol. 14(6), pages 1-27, June.
- William T Adler & Wei Ji Ma, 2018.
"Comparing Bayesian and non-Bayesian accounts of human confidence reports,"
PLOS Computational Biology, Public Library of Science, vol. 14(11), pages 1-34, November.