
Jointly efficient encoding and decoding in neural populations

Author

Listed:
  • Simone Blanco Malerba
  • Aurora Micheli
  • Michael Woodford
  • Rava Azeredo da Silveira

Abstract

The efficient coding approach proposes that neural systems represent as much sensory information as biological constraints allow. It aims at formalizing encoding as a constrained optimization process. A different approach, which aims at formalizing decoding, proposes that neural systems instantiate a generative model of the sensory world. Here, we put forth a normative framework that characterizes neural systems as jointly optimizing encoding and decoding. It takes the form of a variational autoencoder: sensory stimuli are encoded in the noisy activity of neurons to be interpreted by a flexible decoder; encoding must allow for an accurate stimulus reconstruction from neural activity. Jointly, neural activity is required to represent the statistics of latent features which are mapped by the decoder into distributions over sensory stimuli; decoding correspondingly optimizes the accuracy of the generative model. This framework yields a family of encoding-decoding models, which result in equally accurate generative models, indexed by a measure of the stimulus-induced deviation of neural activity from the marginal distribution over neural activity. Each member of this family predicts a specific relation between properties of the sensory neurons, such as the arrangement of the tuning curve means (preferred stimuli) and widths (degrees of selectivity) in the population, and the statistics of the sensory world. Our approach thus generalizes the efficient coding approach. Notably, here the form of the constraint on the optimization derives from the requirement of an accurate generative model, whereas it is arbitrary in efficient coding models. Moreover, solutions do not require knowledge of the stimulus distribution, but are learned from data samples; the constraint further acts as a regularizer, allowing the model to generalize beyond the training data. Finally, we characterize the family of models we obtain through alternate measures of performance, such as the error in stimulus reconstruction. We find that a range of models admits comparable performance; in particular, a population of sensory neurons with broad tuning curves, as observed experimentally, yields both a low stimulus reconstruction error and an accurate generative model that generalizes robustly to unseen data.

Author summary: Our brain represents the sensory world in the activity of populations of neurons. Two theories have addressed the nature of these representations. The first theory, efficient coding, posits that neurons encode as much information as possible about sensory stimuli, subject to resource constraints such as limits on energy consumption. The second one, generative modeling, focuses on decoding, and is organized around the idea that neural activity plays the role of a latent variable from which sensory stimuli can be simulated. Our work subsumes the two approaches in a unifying framework based on the mathematics of variational autoencoders. Unlike in efficient coding, which assumes full knowledge of the stimulus statistics, representations here are learned from examples, in a joint optimization of encoding and decoding. This new framework yields a range of optimal representations, corresponding to different models of neural selectivity and different levels of reconstruction performance, depending on the resource constraint. The form of the constraint is not arbitrary but derives from the optimization framework, and its strength tunes the ability of the model to generalize beyond the training examples. Central to the approach, and to the nature of the representations it implies, is the interplay of encoding and decoding, itself central to brain processing.
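For concreteness, the joint encoding-decoding objective described in the abstract can be illustrated with a minimal beta-VAE-style sketch in PyTorch. This is not the authors' code: the network sizes, the Gaussian noise model, and all names (Encoder, Decoder, beta, n_neurons, stimulus_dim) are illustrative assumptions, and the KL term to a fixed standard-normal prior is used as a tractable stand-in for the "stimulus-induced deviation of neural activity from the marginal distribution" discussed above.

```python
# Minimal sketch (not the authors' implementation) of a beta-VAE-style
# joint encoding-decoding objective, under assumed Gaussian encoding noise.
import torch
import torch.nn as nn


class Encoder(nn.Module):
    """Maps a stimulus s to the parameters of a Gaussian posterior q(r | s) over neural activity r."""
    def __init__(self, stimulus_dim=1, n_neurons=20):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(stimulus_dim, 64), nn.ReLU())
        self.mean = nn.Linear(64, n_neurons)
        self.log_var = nn.Linear(64, n_neurons)

    def forward(self, s):
        h = self.net(s)
        return self.mean(h), self.log_var(h)


class Decoder(nn.Module):
    """Maps neural activity r back to a reconstructed stimulus (the generative model p(s | r))."""
    def __init__(self, stimulus_dim=1, n_neurons=20):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(n_neurons, 64), nn.ReLU(), nn.Linear(64, stimulus_dim))

    def forward(self, r):
        return self.net(r)


def loss(encoder, decoder, s, beta=1.0):
    """Negative ELBO: reconstruction error plus a beta-weighted KL term that
    penalizes deviation of the encoding distribution from a fixed prior."""
    mu, log_var = encoder(s)
    r = mu + torch.exp(0.5 * log_var) * torch.randn_like(mu)          # reparameterized noisy response
    s_hat = decoder(r)
    recon = ((s - s_hat) ** 2).sum(dim=-1)                            # Gaussian reconstruction error
    kl = 0.5 * (mu ** 2 + log_var.exp() - log_var - 1).sum(dim=-1)    # KL(q(r|s) || N(0, I))
    return (recon + beta * kl).mean()


if __name__ == "__main__":
    enc, dec = Encoder(), Decoder()
    opt = torch.optim.Adam(list(enc.parameters()) + list(dec.parameters()), lr=1e-3)
    s = torch.randn(256, 1)  # stand-in for stimulus samples drawn from the sensory world
    for _ in range(100):
        opt.zero_grad()
        loss(enc, dec, s, beta=0.5).backward()
        opt.step()
```

In such a sketch, varying beta traces out a family of models analogous to the one described above: small values favor reconstruction accuracy, while larger values constrain the code more strongly and regularize it toward the prior over neural activity.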

Suggested Citation

  • Simone Blanco Malerba & Aurora Micheli & Michael Woodford & Rava Azeredo da Silveira, 2024. "Jointly efficient encoding and decoding in neural populations," PLOS Computational Biology, Public Library of Science, vol. 20(7), pages 1-32, July.
  • Handle: RePEc:plo:pcbi00:1012240
    DOI: 10.1371/journal.pcbi.1012240

    Download full text from publisher

    File URL: https://journals.plos.org/ploscompbiol/article?id=10.1371/journal.pcbi.1012240
    Download Restriction: no

    File URL: https://journals.plos.org/ploscompbiol/article/file?id=10.1371/journal.pcbi.1012240&type=printable
    Download Restriction: no

    File URL: https://libkey.io/10.1371/journal.pcbi.1012240?utm_source=ideas
    LibKey link: if access is restricted and your library uses this service, LibKey will redirect you to where you can use your library subscription to access this item.


    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. Niccolo Pescetelli & Patrik Reichert & Alex Rutherford, 2022. "A variational-autoencoder approach to solve the hidden profile task in hybrid human-machine teams," PLOS ONE, Public Library of Science, vol. 17(8), pages 1-20, August.
    2. Oded Rotem & Tamar Schwartz & Ron Maor & Yishay Tauber & Maya Tsarfati Shapiro & Marcos Meseguer & Daniella Gilboa & Daniel S. Seidman & Assaf Zaritsky, 2024. "Visual interpretability of image-based classification models by generative latent space disentanglement applied to in vitro fertilization," Nature Communications, Nature, vol. 15(1), pages 1-19, December.
    3. Miguel Ibáñez-Berganza & Carlo Lucibello & Luca Mariani & Giovanni Pezzulo, 2024. "Information-theoretical analysis of the neural code for decoupled face representation," PLOS ONE, Public Library of Science, vol. 19(1), pages 1-23, January.
    4. Valeria Fascianelli & Aldo Battista & Fabio Stefanini & Satoshi Tsujimoto & Aldo Genovesio & Stefano Fusi, 2024. "Neural representational geometries reflect behavioral differences in monkeys and recurrent neural networks," Nature Communications, Nature, vol. 15(1), pages 1-19, December.
    5. W. Jeffrey Johnston & Stefano Fusi, 2023. "Abstract representations emerge naturally in neural networks trained to perform multiple tasks," Nature Communications, Nature, vol. 14(1), pages 1-18, December.


    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:plo:pcbi00:1012240. See general information about how to correct material in RePEc.

    If you have authored this item and are not yet registered with RePEc, we encourage you to do it here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.

    If CitEc recognized a bibliographic reference but did not link an item in RePEc to it, you can help with this form.

    If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

    For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: ploscompbiol (email available below). General contact details of provider: https://journals.plos.org/ploscompbiol/.

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.