Printed from https://ideas.repec.org/a/plo/pcbi00/1002942.html

Noise-invariant Neurons in the Avian Auditory Cortex: Hearing the Song in Noise

Author

Listed:
  • R Channing Moore
  • Tyler Lee
  • Frédéric E Theunissen

Abstract

Given the extraordinary ability of humans and animals to recognize communication signals over a background of noise, describing noise-invariant neural responses is critical not only to pinpoint the brain regions that mediate our robust perceptions but also to understand the neural computations that perform these tasks and the underlying circuitry. Although invariant neural responses, such as those of rotation-invariant face cells, are well described in the visual system, high-level auditory neurons that can represent the same behaviorally relevant signal across a range of listening conditions have yet to be discovered. Here we found neurons in a secondary area of the avian auditory cortex that exhibit noise-invariant responses, in the sense that they responded with similar spike patterns to song stimuli presented in silence and over a background of naturalistic noise. By characterizing the neurons' tuning in terms of their responses to modulations in the temporal and spectral envelope of the sound, we then show that noise invariance is partly achieved by selectively responding to long sounds with sharp spectral structure. Finally, to demonstrate that such computations could explain noise invariance, we designed a biologically inspired noise-filtering algorithm that can be used to separate song or speech from noise. This novel noise-filtering method performs as well as other state-of-the-art de-noising algorithms and could be used in clinical or consumer-oriented applications. Our biologically inspired model also shows how high-level noise-invariant responses could be created from neural responses typically found in primary auditory cortex.

Author Summary

Birds and humans excel at detecting important sounds, such as song and speech, in difficult listening environments such as a large bird colony or a crowded bar. How our brains achieve such a feat remains a mystery to both neuroscientists and audio engineers. In our research, we found a population of neurons in the brain of songbirds that are able to extract a song signal from a background of noise. We explain how these neurons perform this task and show how a biologically inspired algorithm can match the best noise-reduction methods proposed by engineers.
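The modulation-domain principle summarized in the abstract — preferring long sounds (slow temporal modulations) with sharp spectral structure (fine spectral modulations) — can be illustrated as a simple filter applied to a log spectrogram in the 2-D modulation domain. The sketch below is not the authors' published algorithm; the function name, cutoff values, and mask shape are illustrative assumptions about how such a filter could be built.

```python
import numpy as np

def modulation_filter(log_spec, dt, df, tm_max=20.0, sm_min=0.5):
    """Attenuate modulation components atypical of song.

    log_spec : 2-D array (frequency bins x time frames), log-amplitude spectrogram
    dt       : time step between frames, in seconds
    df       : frequency bin width, in kHz
    tm_max   : keep temporal modulations slower than this rate (Hz) -- illustrative cutoff
    sm_min   : keep spectral modulations denser than this (cycles/kHz) -- illustrative cutoff
    """
    # 2-D Fourier transform of the log spectrogram gives the modulation spectrum
    F = np.fft.fft2(log_spec)
    wt = np.fft.fftfreq(log_spec.shape[1], d=dt)  # temporal modulation axis (Hz)
    wf = np.fft.fftfreq(log_spec.shape[0], d=df)  # spectral modulation axis (cycles/kHz)
    WT, WF = np.meshgrid(wt, wf)
    # Pass slow temporal modulations that either carry sharp spectral structure
    # or are very slow overall (the DC corner, so the mean level is preserved).
    mask = (np.abs(WT) <= tm_max) & ((np.abs(WF) >= sm_min) | (np.abs(WT) <= 2.0))
    # Inverse transform; imaginary residue is numerical noise since the mask is symmetric
    return np.real(np.fft.ifft2(F * mask))
```

Applied to a noisy-song spectrogram, a mask of this kind suppresses fast, spectrally smooth fluctuations (typical of background chatter) while leaving slow, harmonically structured song energy largely intact; the filtered log spectrogram can then be inverted back to a waveform with standard methods.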

Suggested Citation

  • R Channing Moore & Tyler Lee & Frédéric E Theunissen, 2013. "Noise-invariant Neurons in the Avian Auditory Cortex: Hearing the Song in Noise," PLOS Computational Biology, Public Library of Science, vol. 9(3), pages 1-14, March.
  • Handle: RePEc:plo:pcbi00:1002942
    DOI: 10.1371/journal.pcbi.1002942

    Download full text from publisher

    File URL: https://journals.plos.org/ploscompbiol/article?id=10.1371/journal.pcbi.1002942
    Download Restriction: no

    File URL: https://journals.plos.org/ploscompbiol/article/file?id=10.1371/journal.pcbi.1002942&type=printable
    Download Restriction: no

    File URL: https://libkey.io/10.1371/journal.pcbi.1002942?utm_source=ideas
    LibKey link: if access is restricted and if your library uses this service, LibKey will redirect you to where you can use your library subscription to access this item

    References listed on IDEAS

    1. Daniel Bendor & Xiaoqin Wang, 2005. "The neuronal representation of pitch in primate auditory cortex," Nature, Nature, vol. 436(7054), pages 1161-1165, August.

    Citations

    Cited by:

    1. Julie E Elie & Frédéric E Theunissen, 2019. "Invariant neural responses for sensory categories revealed by the time-varying information for communication calls," PLOS Computational Biology, Public Library of Science, vol. 15(9), pages 1-43, September.

    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. Mark R. Saddler & Ray Gonzalez & Josh H. McDermott, 2021. "Deep neural network models reveal interplay of peripheral coding and stimulus statistics in pitch perception," Nature Communications, Nature, vol. 12(1), pages 1-25, December.
    2. Falk Lieder & Klaas E Stephan & Jean Daunizeau & Marta I Garrido & Karl J Friston, 2013. "A Neurocomputational Model of the Mismatch Negativity," PLOS Computational Biology, Public Library of Science, vol. 9(11), pages 1-14, November.
    3. Philip J Monahan & Kevin de Souza & William J Idsardi, 2008. "Neuromagnetic Evidence for Early Auditory Restoration of Fundamental Pitch," PLOS ONE, Public Library of Science, vol. 3(8), pages 1-6, August.
    4. Gwangsu Kim & Dong-Kyum Kim & Hawoong Jeong, 2024. "Spontaneous emergence of rudimentary music detectors in deep neural networks," Nature Communications, Nature, vol. 15(1), pages 1-11, December.
    5. Daniel Bendor, 2015. "The Role of Inhibition in a Computational Model of an Auditory Cortical Neuron during the Encoding of Temporal Information," PLOS Computational Biology, Public Library of Science, vol. 11(4), pages 1-25, April.
    6. Christophe Micheyl & Paul R Schrater & Andrew J Oxenham, 2013. "Auditory Frequency and Intensity Discrimination Explained Using a Cortical Population Rate Code," PLOS Computational Biology, Public Library of Science, vol. 9(11), pages 1-7, November.
    7. Patrick C M Wong & Bharath Chandrasekaran & Jing Zheng, 2012. "The Derived Allele of ASPM Is Associated with Lexical Tone Perception," PLOS ONE, Public Library of Science, vol. 7(4), pages 1-8, April.
    8. Oded Barzelay & Miriam Furst & Omri Barak, 2017. "A New Approach to Model Pitch Perception Using Sparse Coding," PLOS Computational Biology, Public Library of Science, vol. 13(1), pages 1-36, January.
    9. Weiping Yang & Jingjing Yang & Yulin Gao & Xiaoyu Tang & Yanna Ren & Satoshi Takahashi & Jinglong Wu, 2015. "Effects of Sound Frequency on Audiovisual Integration: An Event-Related Potential Study," PLOS ONE, Public Library of Science, vol. 10(9), pages 1-15, September.


    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:plo:pcbi00:1002942. See general information about how to correct material in RePEc.

If you have authored this item and are not yet registered with RePEc, we encourage you to do it here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.

If CitEc recognized a bibliographic reference but did not link an item in RePEc to it, you can help with this form.

If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

    For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: ploscompbiol (email available below). General contact details of provider: https://journals.plos.org/ploscompbiol/ .

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.