
The Development of Audio-Visual Integration for Temporal Judgements

Author

Listed:
  • Wendy J Adams

Abstract

Adults combine information from different sensory modalities to estimate object properties such as size or location. This process is optimal in that (i) sensory information is weighted according to relative reliability: more reliable estimates have more influence on the combined estimate, and (ii) the combined estimate is more reliable than the component uni-modal estimates. Previous studies suggest that optimal sensory integration does not emerge until around 10 years of age; younger children rely on a single modality or combine information using inappropriate sensory weights. Children aged 4–11 and adults completed a simple audio-visual task in which they reported either the number of beeps or the number of flashes in uni-modal and bi-modal conditions. In bi-modal trials, beeps and flashes differed in number by 0, 1 or 2. Mutual interactions between the sensory signals were evident at all ages: the reported number of flashes was influenced by the number of simultaneously presented beeps, and vice versa. Furthermore, at all ages the relative strength of these interactions was predicted by the relative reliabilities of the two modalities; in other words, all observers weighted the signals appropriately. The degree of cross-modal interaction decreased with age: the youngest observers could not ignore the task-irrelevant modality, fully combining vision and audition such that they perceived equal numbers of flashes and beeps for bi-modal stimuli. Older observers showed much smaller effects of the task-irrelevant modality. Do these interactions reflect optimal integration? Full or partial cross-modal integration predicts improved reliability in bi-modal conditions; in contrast, switching between modalities reduces reliability. Model comparison suggests that older observers employed partial integration, whereas younger observers (up to around 8 years) did not integrate but followed a sub-optimal switching strategy, responding according to either visual or auditory information on each trial.

Author Summary

To complete everyday activities, such as judging where or when something occurred, we combine information from multiple senses such as vision and audition. In adults, this merging of information is optimal: more reliable sensory estimates have more influence (higher weight) in the combined, multisensory estimate. Multisensory integration can result in illusions: if a single visual flash (e.g. a bright disk appearing briefly on a screen) occurs at the same time as two beeps, we sometimes perceive two flashes. This is because auditory information is generally more reliable than vision for judging when things happen; it dominates our audio-visual percept for temporal tasks. Previous work suggests that children don't combine information from different senses in this adult-like way until around 10 years. To investigate this further, we asked children and adults to report the number of visual flashes or auditory beeps when these were presented simultaneously. Surprisingly, all children used appropriate sensory weights: audition, the more reliable signal, tended to dominate perception, with less weight given to vision. However, children didn't show the adult-like reduction in uncertainty until around 8–10 years. Before that age, they switched between using only auditory or only visual information on each trial.
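The central computational contrast in the abstract, reliability-weighted integration versus trial-by-trial switching, can be made concrete with a short Python/NumPy simulation. The sketch below is illustrative, not the paper's fitted model: it assumes Gaussian-noise unimodal estimates with invented noise levels (sigma_a, sigma_v), weights each cue by its relative reliability 1/sigma^2, and implements switching as probability matching on those weights (cf. the Wozny, Beierholm & Shams reference below).

    # Reliability-weighted integration vs. switching: an illustrative sketch.
    # sigma_a and sigma_v are assumed values, not estimates from the paper.
    import numpy as np

    rng = np.random.default_rng(0)
    n_trials = 100_000
    true_n = 2.0      # true event count (e.g. number of beeps/flashes)
    sigma_a = 0.5     # auditory noise: audition is more reliable for timing
    sigma_v = 1.0     # visual noise

    aud = rng.normal(true_n, sigma_a, n_trials)  # unimodal auditory estimates
    vis = rng.normal(true_n, sigma_v, n_trials)  # unimodal visual estimates

    # Optimal integration: weight each cue by its relative reliability 1/sigma^2.
    r_a, r_v = 1 / sigma_a**2, 1 / sigma_v**2
    w_a = r_a / (r_a + r_v)              # auditory weight; 0.8 for these sigmas
    fused = w_a * aud + (1 - w_a) * vis

    # Switching: respond from a single modality on each trial, choosing
    # audition with probability w_a (probability matching on the weights).
    pick_aud = rng.random(n_trials) < w_a
    switched = np.where(pick_aud, aud, vis)

    print(f"auditory-only SD: {aud.std():.3f}")       # ~0.500
    print(f"integrated SD:    {fused.std():.3f}")     # ~0.447, below the best single cue
    print(f"switching SD:     {switched.std():.3f}")  # ~0.632, above the best single cue

Note that both strategies produce the same average cross-modal bias, because the mean switching response equals the reliability-weighted average; they differ only in bi-modal response variability (integration predicts sqrt(sigma_a^2 * sigma_v^2 / (sigma_a^2 + sigma_v^2)) ≈ 0.447 here, switching predicts sqrt(w_a * sigma_a^2 + (1 - w_a) * sigma_v^2) ≈ 0.632). This is why appropriate weights alone cannot distinguish the two strategies, and why model comparison on bi-modal reliability is needed.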

Suggested Citation

  • Wendy J Adams, 2016. "The Development of Audio-Visual Integration for Temporal Judgements," PLOS Computational Biology, Public Library of Science, vol. 12(4), pages 1-17, April.
  • Handle: RePEc:plo:pcbi00:1004865
    DOI: 10.1371/journal.pcbi.1004865

    Download full text from publisher

    File URL: https://journals.plos.org/ploscompbiol/article?id=10.1371/journal.pcbi.1004865
    Download Restriction: no

    File URL: https://journals.plos.org/ploscompbiol/article/file?id=10.1371/journal.pcbi.1004865&type=printable
    Download Restriction: no

    File URL: https://libkey.io/10.1371/journal.pcbi.1004865?utm_source=ideas
    LibKey link: if access is restricted and your library uses this service, LibKey will redirect you to a version of this item that you can access through your library subscription

    References listed on IDEAS

    1. Ladan Shams & Yukiyasu Kamitani & Shinsuke Shimojo, 2000. "What you see is what you hear," Nature, Nature, vol. 408(6814), pages 788-788, December.
    2. David R Wozny & Ulrik R Beierholm & Ladan Shams, 2010. "Probability Matching as a Computational Strategy Used in Perception," PLOS Computational Biology, Public Library of Science, vol. 6(8), pages 1-7, August.
    3. Konrad P Körding & Ulrik Beierholm & Wei Ji Ma & Steven Quartz & Joshua B Tenenbaum & Ladan Shams, 2007. "Causal Inference in Multisensory Perception," PLOS ONE, Public Library of Science, vol. 2(9), pages 1-10, September.
    4. Marc O. Ernst & Martin S. Banks, 2002. "Humans integrate visual and haptic information in a statistically optimal fashion," Nature, Nature, vol. 415(6870), pages 429-433, January.

    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. Adam N Sanborn & Ulrik R Beierholm, 2016. "Fast and Accurate Learning When Making Discrete Numerical Estimates," PLOS Computational Biology, Public Library of Science, vol. 12(4), pages 1-28, April.
    2. Jannes Jegminat & Maya A Jastrzębowska & Matthew V Pachai & Michael H Herzog & Jean-Pascal Pfister, 2020. "Bayesian regression explains how human participants handle parameter uncertainty," PLOS Computational Biology, Public Library of Science, vol. 16(5), pages 1-23, May.
    3. Peter W Battaglia & Daniel Kersten & Paul R Schrater, 2011. "How Haptic Size Sensations Improve Distance Perception," PLOS Computational Biology, Public Library of Science, vol. 7(6), pages 1-13, June.
    4. Luigi Acerbi & Kalpana Dokka & Dora E Angelaki & Wei Ji Ma, 2018. "Bayesian comparison of explicit and implicit causal inference strategies in multisensory heading perception," PLOS Computational Biology, Public Library of Science, vol. 14(7), pages 1-38, July.
    5. Christoph Kayser & Ladan Shams, 2015. "Multisensory Causal Inference in the Brain," PLOS Biology, Public Library of Science, vol. 13(2), pages 1-7, February.
    6. Patricia Besson & Christophe Bourdin & Lionel Bringoux, 2011. "A Comprehensive Model of Audiovisual Perception: Both Percept and Temporal Dynamics," PLOS ONE, Public Library of Science, vol. 6(8), pages 1-11, August.
    7. Tim Genewein & Eduard Hez & Zeynab Razzaghpanah & Daniel A Braun, 2015. "Structure Learning in Bayesian Sensorimotor Integration," PLOS Computational Biology, Public Library of Science, vol. 11(8), pages 1-27, August.
    8. Noelle R B Stiles & Monica Li & Carmel A Levitan & Yukiyasu Kamitani & Shinsuke Shimojo, 2018. "What you saw is what you will hear: Two new illusions with audiovisual postdictive effects," PLOS ONE, Public Library of Science, vol. 13(10), pages 1-22, October.
    9. Yoshiaki Tsushima & Sho Okada & Yuka Kawai & Akio Sumita & Hiroshi Ando & Mitsunori Miki, 2020. "Effect of illumination on perceived temperature," PLOS ONE, Public Library of Science, vol. 15(8), pages 1-8, August.
    10. Guido Marco Cicchini & Giovanni D’Errico & David Charles Burr, 2022. "Crowding results from optimal integration of visual targets with contextual information," Nature Communications, Nature, vol. 13(1), pages 1-10, December.
    11. Jeroen Atsma & Femke Maij & Mathieu Koppen & David E Irwin & W Pieter Medendorp, 2016. "Causal Inference for Spatial Constancy across Saccades," PLOS Computational Biology, Public Library of Science, vol. 12(3), pages 1-20, March.
    12. Ksander N de Winkel & Mikhail Katliar & Heinrich H Bülthoff, 2017. "Causal Inference in Multisensory Heading Estimation," PLOS ONE, Public Library of Science, vol. 12(1), pages 1-20, January.
    13. Sophie Smit & Anina N Rich & Regine Zopf, 2019. "Visual body form and orientation cues do not modulate visuo-tactile temporal integration," PLOS ONE, Public Library of Science, vol. 14(12), pages 1-20, December.
    14. David R Wozny & Ulrik R Beierholm & Ladan Shams, 2010. "Probability Matching as a Computational Strategy Used in Perception," PLOS Computational Biology, Public Library of Science, vol. 6(8), pages 1-7, August.
    15. Pedram Daee & Maryam S Mirian & Majid Nili Ahmadabadi, 2014. "Reward Maximization Justifies the Transition from Sensory Selection at Childhood to Sensory Integration at Adulthood," PLOS ONE, Public Library of Science, vol. 9(7), pages 1-13, July.
    16. Srishti Goel & Julian Jara-Ettinger & Desmond C. Ong & Maria Gendron, 2024. "Face and context integration in emotion inference is limited and variable across categories and individuals," Nature Communications, Nature, vol. 15(1), pages 1-17, December.
    17. Amy A Kalia & Paul R Schrater & Gordon E Legge, 2013. "Combining Path Integration and Remembered Landmarks When Navigating without Vision," PLOS ONE, Public Library of Science, vol. 8(9), pages 1-8, September.
    18. Catarina Mendonça & Pietro Mandelli & Ville Pulkki, 2016. "Modeling the Perception of Audiovisual Distance: Bayesian Causal Inference and Other Models," PLOS ONE, Public Library of Science, vol. 11(12), pages 1-18, December.
    19. Alice Masini & Marcello Lanari & Sofia Marini & Alessia Tessari & Stefania Toselli & Rita Stagni & Maria Cristina Bisi & Laura Bragonzoni & Davide Gori & Alessandra Sansavini & Andrea Ceciliani & Laur, 2020. "A Multiple Targeted Research Protocol for a Quasi-Experimental Trial in Primary School Children Based on an Active Break Intervention: The Imola Active Breaks (I-MOVE) Study," IJERPH, MDPI, vol. 17(17), pages 1-16, August.
    20. Jacques Pesnot Lerousseau & Cesare V. Parise & Marc O. Ernst & Virginie van Wassenhove, 2022. "Multisensory correlation computations in the human brain identified by a time-resolved encoding model," Nature Communications, Nature, vol. 13(1), pages 1-12, December.


    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:plo:pcbi00:1004865. See general information about how to correct material in RePEc.

    If you have authored this item and are not yet registered with RePEc, we encourage you to do it here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.

    If CitEc recognized a bibliographic reference but did not link an item in RePEc to it, you can help with this form.

    If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

    For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: ploscompbiol (email available below). General contact details of provider: https://journals.plos.org/ploscompbiol/ .

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.