
Hardware-Efficient On-line Learning through Pipelined Truncated-Error Backpropagation in Binary-State Networks

Overview of attention for an article published in Frontiers in Neuroscience, September 2017

About this Attention Score

  • Above-average Attention Score compared to outputs of the same age (63rd percentile)
  • Above-average Attention Score compared to outputs of the same age and source (56th percentile)

Mentioned by

  • 2 X users
  • 1 patent

Citations

  • 7 Dimensions

Readers on

  • 43 Mendeley
Title
Hardware-Efficient On-line Learning through Pipelined Truncated-Error Backpropagation in Binary-State Networks
Published in
Frontiers in Neuroscience, September 2017
DOI 10.3389/fnins.2017.00496
Authors

Hesham Mostafa, Bruno Pedroni, Sadique Sheik, Gert Cauwenberghs

Abstract

Artificial neural networks (ANNs) trained using backpropagation are powerful learning architectures that have achieved state-of-the-art performance in various benchmarks. Significant effort has been devoted to developing custom silicon devices to accelerate inference in ANNs. Accelerating the training phase, however, has attracted relatively little attention. In this paper, we describe a hardware-efficient on-line learning technique for feedforward multi-layer ANNs that is based on pipelined backpropagation. Learning is performed in parallel with inference in the forward pass, removing the need for an explicit backward pass and requiring no extra weight lookup. Using binary state variables in the feedforward network and ternary errors in truncated-error backpropagation removes the need for any multiplications in the forward and backward passes and drastically reduces the memory requirements of the pipelining. Sparsity in the forward neural and backpropagating error signal paths further reduces the number of addition operations, contributing to a highly efficient hardware implementation. As a proof-of-concept validation, we demonstrate on-line learning of MNIST handwritten digit classification on a Spartan-6 FPGA interfacing with an external 1 Gb DDR2 DRAM, showing only a small degradation in test error compared to an equivalently sized binary ANN trained off-line using standard backpropagation and exact errors. Our results highlight an attractive synergy between pipelined backpropagation and binary-state networks: together they substantially reduce computation and memory requirements, making pipelined on-line learning practical in deep networks.
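
To make the arithmetic concrete, here is a minimal NumPy sketch of the kind of multiplication-free learning step the abstract describes, with binary {-1, +1} neuron states and ternary {-1, 0, +1} truncated errors. The layer sizes, the truncation threshold `theta`, and the helper names `binarize`/`ternarize` are illustrative assumptions rather than details from the paper, and the paper's pipelining of updates alongside the forward pass is not reproduced here.

```python
import numpy as np

def binarize(x):
    # Binary neuron state: +1 if the pre-activation is non-negative, else -1.
    return np.where(x >= 0, 1.0, -1.0)

def ternarize(err, theta=0.5):
    # Truncated error: -1, 0, or +1; magnitudes below theta are dropped,
    # which sparsifies the backpropagating error signal path.
    return np.sign(err) * (np.abs(err) > theta)

# Tiny one-hidden-layer network (hypothetical sizes). Weights stay
# real-valued; only neuron states and errors are quantized.
rng = np.random.default_rng(0)
W1 = rng.normal(scale=0.1, size=(784, 256))
W2 = rng.normal(scale=0.1, size=(256, 10))
LR = 2 ** -6  # power-of-two rate, so scaling is a bit shift in hardware

def train_step(x, target):
    # One on-line update; x is a {-1, +1} input vector and target a
    # {-1, +1} label code (both assumptions for this sketch).
    global W1, W2

    # Forward pass with binary states: a dot product against a +/-1
    # vector reduces to signed additions in hardware, with no multiplier.
    h = binarize(x @ W1)
    y = binarize(h @ W2)

    # Output error and backpropagated hidden error, both truncated to
    # {-1, 0, +1}; the zeros make the backward pass sparse.
    e2 = ternarize(target - y)
    e1 = ternarize(e2 @ W2.T)

    # Weight updates are outer products of binary states and ternary
    # errors: each synapse is incremented, decremented, or left alone.
    W2 += LR * np.outer(h, e2)
    W1 += LR * np.outer(x, e1)
```

In the paper's pipelined scheme these updates overlap with the forward pass of subsequent inputs, which is what removes the explicit backward pass; the sketch above shows only the quantized arithmetic that eliminates multiplications.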

X Demographics

The data shown below were collected from the profiles of 2 X users who shared this research output.
Mendeley readers

The data shown below were compiled from readership statistics for 43 Mendeley readers of this research output.

Geographical breakdown

Country    Count   As %
Unknown       43   100%

Demographic breakdown

Readers by professional status        Count   As %
Student > Ph. D. Student                 16    37%
Researcher                                6    14%
Student > Master                          6    14%
Student > Bachelor                        3     7%
Student > Postgraduate                    2     5%
Other                                     3     7%
Unknown                                   7    16%
Readers by discipline                   Count   As %
Engineering                                18    42%
Computer Science                           10    23%
Neuroscience                                3     7%
Agricultural and Biological Sciences        1     2%
Immunology and Microbiology                 1     2%
Other                                       4     9%
Unknown                                     6    14%
Attention Score in Context

This research output has an Altmetric Attention Score of 4. This is our high-level measure of the quality and quantity of online attention that it has received. This Attention Score, as well as the ranking and number of research outputs shown below, was calculated when the research output was last mentioned on 23 January 2020.
  • All research outputs: #7,962,193 of 25,382,440 outputs
  • Outputs from Frontiers in Neuroscience: #5,072 of 11,542 outputs
  • Outputs of similar age: #115,710 of 323,170 outputs
  • Outputs of similar age from Frontiers in Neuroscience: #69 of 159 outputs
Altmetric has tracked 25,382,440 research outputs across all sources so far. This one has received more attention than most of these and is in the 67th percentile.
So far Altmetric has tracked 11,542 research outputs from this source. They typically receive a lot more attention than average, with a mean Attention Score of 11.0. This one has received more attention than average, scoring higher than 55% of its peers.
Older research outputs will score higher simply because they've had more time to accumulate mentions. To account for age, we can compare this Altmetric Attention Score to the 323,170 tracked outputs that were published within six weeks on either side of this one in any source. This one has received more attention than average, scoring higher than 63% of its contemporaries.
We're also able to compare this research output to 159 others from the same source that were published within six weeks on either side of this one. This one has received more attention than average, scoring higher than 56% of its contemporaries.