
Random synaptic feedback weights support error backpropagation for deep learning

Authors

Listed:
  • Timothy P. Lillicrap

(University of Oxford; Google DeepMind, 5 New Street Square)

  • Daniel Cownden

(School of Biology, University of St Andrews, Harold Mitchell Building, St Andrews)

  • Douglas B. Tweed

(University of Toronto; Centre for Vision Research, York University)

  • Colin J. Akerman

    (University of Oxford)

Abstract

The brain processes information through multiple layers of neurons. This deep architecture is representationally powerful, but complicates learning because it is difficult to identify the responsible neurons when a mistake is made. In machine learning, the backpropagation algorithm assigns blame by multiplying error signals with all the synaptic weights on each neuron’s axon and further downstream. However, this involves a precise, symmetric backward connectivity pattern, which is thought to be impossible in the brain. Here we demonstrate that this strong architectural constraint is not required for effective error propagation. We present a surprisingly simple mechanism that assigns blame by multiplying errors by even random synaptic weights. This mechanism can transmit teaching signals across multiple layers of neurons and performs as effectively as backpropagation on a variety of tasks. Our results help reopen questions about how the brain could use error signals and dispel long-held assumptions about algorithmic constraints on learning.
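The mechanism described in this abstract is now widely known as "feedback alignment". As a reading aid, below is a minimal NumPy sketch of the idea; the one-hidden-layer network, toy regression task, layer sizes, and learning rate are illustrative assumptions, not the paper's experimental setup. The essential point is that the error signal travels backwards through a fixed random matrix B where exact backpropagation would use the (transposed) forward weights.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression task (illustrative): learn a random linear map.
n_in, n_hid, n_out, n = 30, 20, 10, 512
T = rng.standard_normal((n_out, n_in)) / np.sqrt(n_in)
X = rng.standard_normal((n, n_in))
Y = X @ T.T

# Trainable forward weights, plus a FIXED random feedback matrix B.
W1 = 0.1 * rng.standard_normal((n_hid, n_in))
W2 = 0.1 * rng.standard_normal((n_out, n_hid))
B  = 0.1 * rng.standard_normal((n_out, n_hid))  # never updated

lr = 0.02
for step in range(2001):
    # Forward pass: tanh hidden layer, linear output.
    a1 = X @ W1.T
    h1 = np.tanh(a1)
    e = h1 @ W2.T - Y                       # output error

    # Backward pass: exact backprop would use `e @ W2` here;
    # feedback alignment sends the error through the random B instead.
    delta1 = (e @ B) * (1.0 - h1 ** 2)      # tanh'(a1) = 1 - tanh(a1)^2

    # Local, backprop-shaped weight updates.
    W2 -= lr * e.T @ h1 / n
    W1 -= lr * delta1.T @ X / n

    if step % 500 == 0:
        print(f"step {step:4d}  mse {np.mean(e ** 2):.4f}")
```

In the paper's analysis, this works because the forward weights adapt until the teaching signal delivered through B is roughly aligned with the true backpropagated gradient.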

Suggested Citation

  • Timothy P. Lillicrap & Daniel Cownden & Douglas B. Tweed & Colin J. Akerman, 2016. "Random synaptic feedback weights support error backpropagation for deep learning," Nature Communications, Nature, vol. 7(1), pages 1-10, December.
  • Handle: RePEc:nat:natcom:v:7:y:2016:i:1:d:10.1038_ncomms13276
    DOI: 10.1038/ncomms13276

    Download full text from publisher

    File URL: https://www.nature.com/articles/ncomms13276
    File Function: Abstract
    Download Restriction: no

    File URL: https://libkey.io/10.1038/ncomms13276?utm_source=ideas
    LibKey link: if access is restricted and your library uses this service, LibKey will redirect you to a copy you can access through your library subscription

    Citations

    Citations are extracted by the CitEc Project.


    Cited by:

    1. Giorgia Dellaferrera & Stanisław Woźniak & Giacomo Indiveri & Angeliki Pantazi & Evangelos Eleftheriou, 2022. "Introducing principles of synaptic integration in the optimization of deep neural networks," Nature Communications, Nature, vol. 13(1), pages 1-14, December.
    2. Stefano Recanatesi & Gabriel Koch Ocker & Michael A Buice & Eric Shea-Brown, 2019. "Dimensionality in recurrent spiking networks: Global trends in activity and local origins in connectivity," PLOS Computational Biology, Public Library of Science, vol. 15(7), pages 1-29, July.
    3. Ertam, Fatih, 2019. "An efficient hybrid deep learning approach for internet security," Physica A: Statistical Mechanics and its Applications, Elsevier, vol. 535(C).
    4. Robert Rosenbaum, 2022. "On the relationship between predictive coding and backpropagation," PLOS ONE, Public Library of Science, vol. 17(3), pages 1-27, March.
    5. Navid Shervani-Tabar & Robert Rosenbaum, 2023. "Meta-learning biologically plausible plasticity rules with random feedback pathways," Nature Communications, Nature, vol. 14(1), pages 1-12, December.
    6. Mitsumasa Nakajima & Katsuma Inoue & Kenji Tanaka & Yasuo Kuniyoshi & Toshikazu Hashimoto & Kohei Nakajima, 2022. "Physical deep learning with biologically inspired training method: gradient-free approach for physical hardware," Nature Communications, Nature, vol. 13(1), pages 1-12, December.
    7. Keitaro Obara & Teppei Ebina & Shin-Ichiro Terada & Takanori Uka & Misako Komatsu & Masafumi Takaji & Akiya Watakabe & Kenta Kobayashi & Yoshito Masamizu & Hiroaki Mizukami & Tetsuo Yamamori & Kiyoto , 2023. "Change detection in the primate auditory cortex through feedback of prediction error signals," Nature Communications, Nature, vol. 14(1), pages 1-17, December.
    8. Alexander Ororbia & Daniel Kifer, 2022. "The neural coding framework for learning generative models," Nature Communications, Nature, vol. 13(1), pages 1-14, December.


    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.