
Equilibrium Propagation: Bridging the Gap between Energy-Based Models and Backpropagation

Overview of attention for an article published in Frontiers in Computational Neuroscience, May 2017

About this Attention Score

  • In the top 5% of all research outputs scored by Altmetric
  • Among the highest-scoring outputs from this source (#27 of 1,475)
  • High Attention Score compared to outputs of the same age (96th percentile)
  • High Attention Score compared to outputs of the same age and source (94th percentile)

Mentioned by

  • 2 blogs
  • 88 X users
  • 2 patents
  • 1 Facebook page
  • 4 Google+ users
  • 1 Redditor

Citations

263 Dimensions

Readers on

703 Mendeley
Title
Equilibrium Propagation: Bridging the Gap between Energy-Based Models and Backpropagation
Published in
Frontiers in Computational Neuroscience, May 2017
DOI
10.3389/fncom.2017.00024
Authors
Benjamin Scellier, Yoshua Bengio

Abstract

We introduce Equilibrium Propagation, a learning framework for energy-based models. It involves only one kind of neural computation, performed in both the first phase (when the prediction is made) and the second phase of training (after the target or prediction error is revealed). Although this algorithm computes the gradient of an objective function just like Backpropagation, it does not need a special computation or circuit for the second phase, where errors are implicitly propagated. Equilibrium Propagation shares similarities with Contrastive Hebbian Learning and Contrastive Divergence while solving the theoretical issues of both algorithms: our algorithm computes the gradient of a well-defined objective function. Because the objective function is defined in terms of local perturbations, the second phase of Equilibrium Propagation corresponds to only nudging the prediction (fixed point or stationary distribution) toward a configuration that reduces prediction error. In the case of a recurrent multi-layer supervised network, the output units are slightly nudged toward their target in the second phase, and the perturbation introduced at the output layer propagates backward in the hidden layers. We show that the signal "back-propagated" during this second phase corresponds to the propagation of error derivatives and encodes the gradient of the objective function, when the synaptic update corresponds to a standard form of spike-timing dependent plasticity. This work makes it more plausible that a mechanism similar to Backpropagation could be implemented by brains, since leaky integrator neural computation performs both inference and error back-propagation in our model. The only local difference between the two phases is whether synaptic changes are allowed or not. We also show experimentally that multi-layer recurrently connected networks with 1, 2, and 3 hidden layers can be trained by Equilibrium Propagation on the permutation-invariant MNIST task.
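To make the two-phase procedure described in the abstract concrete, below is a minimal NumPy sketch of one Equilibrium Propagation update for a single-hidden-layer network with symmetric connections: a free phase that settles to a prediction, a weakly clamped phase in which the outputs are nudged toward the target, and a contrastive Hebbian-style weight change. The layer sizes, the hard-sigmoid nonlinearity `rho`, the nudging strength `beta`, the relaxation schedule, and the helper names (`relax`, `eqprop_step`) are illustrative assumptions, not the authors' exact implementation or hyperparameters.

```python
# Minimal sketch (assumptions noted above) of an Equilibrium Propagation update
# on one training example: free phase, nudged phase, contrastive weight change.
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hid, n_out = 784, 500, 10              # MNIST-like sizes (illustrative)
W1 = rng.normal(0.0, 0.01, (n_in, n_hid))      # input  <-> hidden (symmetric) weights
W2 = rng.normal(0.0, 0.01, (n_hid, n_out))     # hidden <-> output (symmetric) weights

def rho(s):
    """Hard-sigmoid nonlinearity; unit states are kept in [0, 1]."""
    return np.clip(s, 0.0, 1.0)

def relax(x, y, beta, h, o, steps=50, dt=0.2):
    """Leaky-integrator dynamics toward a fixed point of the total energy.
    beta = 0 gives the free (prediction) phase; beta > 0 weakly nudges the
    output units toward the target y in the second phase."""
    for _ in range(steps):
        dh = -h + rho(x) @ W1 + rho(o) @ W2.T   # input from neighbouring layers
        do = -o + rho(h) @ W2 + beta * (y - o)  # nudging term acts only on outputs
        h = rho(h + dt * dh)
        o = rho(o + dt * do)
    return h, o

def eqprop_step(x, y, beta=0.5, lr=0.05):
    """One Equilibrium Propagation update on a single example."""
    global W1, W2
    h0, o0 = np.zeros(n_hid), np.zeros(n_out)
    h_free, o_free = relax(x, y, 0.0, h0, o0)               # phase 1: prediction
    h_nudged, o_nudged = relax(x, y, beta, h_free, o_free)  # phase 2: weak clamping
    # Contrastive, Hebbian-looking update: (1/beta) * (co-activity at the nudged
    # fixed point minus co-activity at the free fixed point) estimates the gradient.
    W1 += (lr / beta) * (np.outer(rho(x), rho(h_nudged)) - np.outer(rho(x), rho(h_free)))
    W2 += (lr / beta) * (np.outer(rho(h_nudged), rho(o_nudged)) - np.outer(rho(h_free), rho(o_free)))
    return o_free                                            # free-phase prediction

# Usage with random stand-ins for an input image and a one-hot label:
x = rng.random(n_in)
y = np.eye(n_out)[3]
prediction = eqprop_step(x, y)
```

Dividing the contrastive term by beta is what lets this update approximate the gradient of the prediction error as the nudging becomes infinitesimal, which is the sense in which the second phase implicitly back-propagates error derivatives.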

X Demographics

The data shown below were collected from the profiles of 88 X users who shared this research output.
Mendeley readers

The data shown below were compiled from readership statistics for 703 Mendeley readers of this research output.

Geographical breakdown

Country           Count   As %
United States         4    <1%
United Kingdom        2    <1%
Canada                2    <1%
Spain                 2    <1%
Netherlands           1    <1%
Russia                1    <1%
Switzerland           1    <1%
Japan                 1    <1%
Germany               1    <1%
Other                 0     0%
Unknown             688    98%

Demographic breakdown

Readers by professional status    Count   As %
Student > Ph. D. Student            177    25%
Researcher                          113    16%
Student > Master                    112    16%
Student > Bachelor                   73    10%
Other                                27     4%
Other                                69    10%
Unknown                             132    19%

Readers by discipline                  Count   As %
Computer Science                         240    34%
Engineering                               88    13%
Neuroscience                              84    12%
Physics and Astronomy                     44     6%
Agricultural and Biological Sciences      28     4%
Other                                     67    10%
Unknown                                  152    22%
Attention Score in Context

This research output has an Altmetric Attention Score of 73. This is our high-level measure of the quality and quantity of online attention that it has received. This Attention Score, as well as the ranking and number of research outputs shown below, was calculated when the research output was last mentioned on 03 December 2023.
All research outputs: #595,916 of 25,801,916 outputs
Outputs from Frontiers in Computational Neuroscience: #27 of 1,475 outputs
Outputs of similar age: #12,038 of 325,455 outputs
Outputs of similar age from Frontiers in Computational Neuroscience: #2 of 39 outputs
Altmetric has tracked 25,801,916 research outputs across all sources so far. Compared to these, this one has done particularly well and is in the 97th percentile: it's in the top 5% of all research outputs ever tracked by Altmetric.
So far Altmetric has tracked 1,475 research outputs from this source. They typically receive a little more attention than average, with a mean Attention Score of 7.0. This one has done particularly well, scoring higher than 98% of its peers.
Older research outputs will score higher simply because they've had more time to accumulate mentions. To account for age, we can compare this Altmetric Attention Score to the 325,455 tracked outputs that were published within six weeks on either side of this one in any source. This one has done particularly well, scoring higher than 96% of its contemporaries.
We're also able to compare this research output to 39 others from the same source and published within six weeks on either side of this one. This one has done particularly well, scoring higher than 94% of its contemporaries.