
Deep Supervised Learning Using Local Errors

Overview of attention for an article published in Frontiers in Neuroscience, August 2018

About this Attention Score

  • In the top 25% of all research outputs scored by Altmetric
  • Good Attention Score compared to outputs of the same age (77th percentile)
  • Above-average Attention Score compared to outputs of the same age and source (62nd percentile)

Mentioned by

  • 1 blog
  • 3 X users

Citations

  • 75 Dimensions

Readers on

  • 130 Mendeley
Title
Deep Supervised Learning Using Local Errors
Published in
Frontiers in Neuroscience, August 2018
DOI 10.3389/fnins.2018.00608
Authors

Hesham Mostafa, Vishwajith Ramesh, Gert Cauwenberghs

Abstract

Error backpropagation is a highly effective mechanism for learning high-quality hierarchical features in deep networks. Updating the features or weights in one layer, however, requires waiting for the propagation of error signals from higher layers. Learning using delayed and non-local errors makes it hard to reconcile backpropagation with the learning mechanisms observed in biological neural networks, as it requires neurons to maintain a memory of the input until the higher-layer errors arrive. In this paper, we propose an alternative learning mechanism where errors are generated locally in each layer using fixed, random auxiliary classifiers. Lower layers can thus be trained independently of higher layers, and training can proceed either layer by layer or simultaneously in all layers using local error information. We address biological plausibility concerns such as weight symmetry requirements and show that the proposed learning mechanism, based on fixed, broad, and random tuning of each neuron to the classification categories, outperforms the biologically motivated feedback alignment technique on the CIFAR10 dataset, approaching the performance of standard backpropagation. Our approach highlights a potential biological mechanism for the supervised, or task-dependent, learning of feature hierarchies. In addition, we show that it is well suited to learning deep networks in custom hardware, where it can drastically reduce memory traffic and data communication overheads. Code used to run all learning experiments is available at https://gitlab.com/hesham-mostafa/learning-using-local-erros.git.
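The mechanism the abstract describes lends itself to a short illustration. Below is a minimal PyTorch sketch of layer-local training with fixed, random auxiliary classifiers, written from the abstract's description rather than from the authors' repository linked above: the class name, layer sizes, activation, and optimizer settings are all illustrative assumptions, and detach() stands in for the paper's blocking of error propagation to lower layers.

    # Minimal sketch of layer-local learning with fixed random auxiliary
    # classifiers, assuming a PyTorch-style API. Hyperparameters and layer
    # sizes are illustrative, not the paper's exact configuration.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class LocallyTrainedMLP(nn.Module):
        def __init__(self, sizes, num_classes):
            super().__init__()
            self.layers = nn.ModuleList(
                [nn.Linear(d_in, d_out) for d_in, d_out in zip(sizes, sizes[1:])]
            )
            # One fixed, random auxiliary classifier per layer. Freezing its
            # weights gives each neuron a fixed, broad, random tuning to the
            # classification categories.
            self.aux = nn.ModuleList(
                [nn.Linear(d_out, num_classes, bias=False) for d_out in sizes[1:]]
            )
            for clf in self.aux:
                clf.weight.requires_grad_(False)

        def forward(self, x, y=None):
            total_loss = 0.0
            for layer, clf in zip(self.layers, self.aux):
                # detach() blocks gradients from flowing to lower layers, so
                # each layer learns only from its own local error signal.
                x = torch.relu(layer(x.detach()))
                if y is not None:
                    total_loss = total_loss + F.cross_entropy(clf(x), y)
            return x, total_loss

    model = LocallyTrainedMLP([784, 512, 512], num_classes=10)
    opt = torch.optim.SGD(
        [p for p in model.parameters() if p.requires_grad], lr=0.1
    )

    x = torch.randn(32, 784)          # dummy batch of flattened images
    y = torch.randint(0, 10, (32,))   # dummy labels
    _, loss = model(x, y)
    opt.zero_grad()
    loss.backward()   # each local loss updates only its own layer's weights
    opt.step()

Because every loss term is local, no layer waits for, or stores activations across, a full backward pass through the network; this is the property the abstract credits with reducing memory traffic and data communication overheads in custom hardware.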

X Demographics

The data shown below were collected from the profiles of 3 X users who shared this research output.
Mendeley readers

The data shown below were compiled from readership statistics for 130 Mendeley readers of this research output.

Geographical breakdown

Country | Count | As %
Unknown | 130 | 100%

Demographic breakdown

Readers by professional status | Count | As %
Student > Ph.D. Student | 31 | 24%
Student > Master | 19 | 15%
Researcher | 13 | 10%
Student > Bachelor | 12 | 9%
Student > Doctoral Student | 7 | 5%
Other | 17 | 13%
Unknown | 31 | 24%
Readers by discipline | Count | As %
Computer Science | 37 | 28%
Engineering | 27 | 21%
Neuroscience | 16 | 12%
Agricultural and Biological Sciences | 4 | 3%
Physics and Astronomy | 2 | 2%
Other | 11 | 8%
Unknown | 33 | 25%
Attention Score in Context

This research output has an Altmetric Attention Score of 9. This is our high-level measure of the quality and quantity of online attention that it has received. This Attention Score, as well as the ranking and number of research outputs shown below, was calculated when the research output was last mentioned on 02 December 2019.
  • All research outputs: #4,218,763 of 25,385,509 outputs
  • Outputs from Frontiers in Neuroscience: #3,460 of 11,542 outputs
  • Outputs of similar age: #76,388 of 345,542 outputs
  • Outputs of similar age from Frontiers in Neuroscience: #91 of 244 outputs
Altmetric has tracked 25,385,509 research outputs across all sources so far. Compared to these, this one has done well: it is in the 83rd percentile, placing it in the top 25% of all research outputs ever tracked by Altmetric.
So far Altmetric has tracked 11,542 research outputs from this source. They typically receive a lot more attention than average, with a mean Attention Score of 11.0. This one has received more attention than average, scoring higher than 70% of its peers.
Older research outputs will score higher simply because they've had more time to accumulate mentions. To account for age, we can compare this Altmetric Attention Score to the 345,542 tracked outputs that were published within six weeks on either side of this one in any source. This one has done well, scoring higher than 77% of its contemporaries.
We're also able to compare this research output to 244 others from the same source that were published within six weeks on either side of this one. This one has received more attention than average, scoring higher than 62% of its contemporaries.
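Each percentile above follows directly from the corresponding rank: one hundred times one minus the rank divided by the number of tracked outputs. A minimal sketch of that arithmetic follows; the rounding-down behavior is an assumption inferred from the figures quoted above, not documented Altmetric methodology.

    # Percentile from an Altmetric-style rank, where rank 1 is the
    # most-mentioned output. Rounding down is an assumption that happens
    # to reproduce every figure quoted above.
    def percentile(rank: int, total: int) -> int:
        return int(100 * (1 - rank / total))

    print(percentile(4_218_763, 25_385_509))  # 83 -> 83rd percentile, top-25% badge
    print(percentile(76_388, 345_542))        # 77 -> outputs of similar age
    print(percentile(91, 244))                # 62 -> similar age, same source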