
Benchmarking Spike-Based Visual Recognition: A Dataset and Evaluation

Overview of attention for article published in Frontiers in Neuroscience, November 2016

Mentioned by

2 X users

Citations

27 Dimensions

Readers on

96 Mendeley
Title
Benchmarking Spike-Based Visual Recognition: A Dataset and Evaluation
Published in
Frontiers in Neuroscience, November 2016
DOI 10.3389/fnins.2016.00496
Authors

Qian Liu, Garibaldi Pineda-García, Evangelos Stromatias, Teresa Serrano-Gotarredona, Steve B. Furber

Abstract

Today, increasing attention is being paid to research into spike-based neural computation, both to gain a better understanding of the brain and to explore biologically inspired computation. Within this field, the primate visual pathway and its hierarchical organization have been extensively studied. Spiking Neural Networks (SNNs), inspired by the understanding of observed biological structure and function, have been successfully applied to visual recognition and classification tasks. In addition, implementations on neuromorphic hardware have enabled large-scale networks to run in (or even faster than) real time, making spike-based neural vision processing accessible on mobile robots. Neuromorphic sensors such as silicon retinas are able to feed such mobile systems with real-time visual stimuli. A new set of vision benchmarks for spike-based neural processing is now needed to measure progress quantitatively within this rapidly advancing field. We propose that a large dataset of spike-based visual stimuli is needed to provide meaningful comparisons between different systems, and a corresponding evaluation methodology is also required to measure the performance of SNN models and their hardware implementations. In this paper we first propose an initial NE (Neuromorphic Engineering) dataset based on standard computer vision benchmarks and using digits from the MNIST database. This dataset is compatible with the state of current research on spike-based image recognition. The corresponding spike trains are produced using a range of techniques: rate-based Poisson spike generation, rank order encoding, and recorded output from a silicon retina with both flashing and oscillating input stimuli. In addition, a complementary evaluation methodology is presented to assess both model-level and hardware-level performance.
Finally, we demonstrate the use of the dataset and the evaluation methodology using two SNN models to validate the performance of the models and their hardware implementations. With this dataset we hope to (1) promote meaningful comparison between algorithms in the field of neural computation, (2) allow comparison with conventional image recognition methods, (3) provide an assessment of the state of the art in spike-based visual recognition, and (4) help researchers identify future directions and advance the field.
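Two of the encoding schemes named in the abstract, rate-based Poisson spike generation and rank order encoding, can be sketched as below. This is an illustrative reconstruction, not the paper's code: the 100 Hz maximum rate, 1 ms time step, and linear intensity-to-rate mapping are assumptions made for the example, not parameters taken from the dataset.

```python
import random

def poisson_spike_train(intensity, duration_ms=1000.0, max_rate_hz=100.0,
                        dt_ms=1.0, rng=None):
    """Rate-based Poisson coding: a pixel's intensity (0-255) sets its
    firing rate, and a spike is drawn independently in each time step
    (Bernoulli approximation of a Poisson process). Returns spike times
    in milliseconds."""
    rng = rng or random.Random(0)
    rate_hz = (intensity / 255.0) * max_rate_hz   # assumed linear mapping
    p = rate_hz * dt_ms / 1000.0                  # spike probability per step
    n_steps = int(duration_ms / dt_ms)
    return [step * dt_ms for step in range(n_steps) if rng.random() < p]

def rank_order_encode(pixels, dt_ms=1.0):
    """Rank order coding: every pixel spikes at most once, and brighter
    pixels fire earlier; only the firing order carries information.
    Returns a mapping of pixel index -> spike time in milliseconds."""
    ranked = sorted(range(len(pixels)), key=lambda i: -pixels[i])
    return {i: rank * dt_ms for rank, i in enumerate(ranked) if pixels[i] > 0}
```

For an MNIST digit, each of the 784 pixels would be passed through one of these encoders to yield the per-neuron spike trains that feed the SNN; the silicon-retina recordings described in the abstract replace this synthetic step with real sensor events.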

X Demographics

The data shown below were collected from the profiles of 2 X users who shared this research output.
Mendeley readers

The data shown below were compiled from readership statistics for 96 Mendeley readers of this research output.

Geographical breakdown

Country    Count   As %
Unknown    96      100%

Demographic breakdown

Readers by professional status    Count   As %
Student > Ph.D. Student           29      30%
Student > Master                  21      22%
Researcher                        8       8%
Student > Doctoral Student        6       6%
Student > Postgraduate            3       3%
Other                             10      10%
Unknown                           19      20%
Readers by discipline    Count   As %
Computer Science         30      31%
Engineering              21      22%
Neuroscience             9       9%
Physics and Astronomy    4       4%
Psychology               2       2%
Other                    7       7%
Unknown                  23      24%
Attention Score in Context

This research output has an Altmetric Attention Score of 1. This is our high-level measure of the quality and quantity of online attention that it has received. This Attention Score, as well as the ranking and number of research outputs shown below, was calculated when the research output was last mentioned on 03 February 2017.
All research outputs: #19,945,185 of 25,374,917 outputs
Outputs from Frontiers in Neuroscience: #8,670 of 11,542 outputs
Outputs of similar age: #231,216 of 317,143 outputs
Outputs of similar age from Frontiers in Neuroscience: #89 of 135 outputs
Altmetric has tracked 25,374,917 research outputs across all sources so far. This one is in the 18th percentile – i.e., 18% of other outputs scored the same or lower than it.
So far Altmetric has tracked 11,542 research outputs from this source. They typically receive a lot more attention than average, with a mean Attention Score of 10.9. This one is in the 18th percentile – i.e., 18% of its peers scored the same or lower than it.
Older research outputs will score higher simply because they've had more time to accumulate mentions. To account for age we can compare this Altmetric Attention Score to the 317,143 tracked outputs that were published within six weeks on either side of this one in any source. This one is in the 23rd percentile – i.e., 23% of its contemporaries scored the same or lower than it.
We're also able to compare this research output to 135 others from the same source and published within six weeks on either side of this one. This one is in the 20th percentile – i.e., 20% of its contemporaries scored the same or lower than it.