
SpiLinC: Spiking Liquid-Ensemble Computing for Unsupervised Speech and Image Recognition

Overview of attention for an article published in Frontiers in Neuroscience, August 2018

About this Attention Score

  • Average Attention Score compared to outputs of the same age
  • Average Attention Score compared to outputs of the same age and source

Mentioned by

4 X users

Citations

26 Dimensions

Readers on

50 Mendeley
Title
SpiLinC: Spiking Liquid-Ensemble Computing for Unsupervised Speech and Image Recognition
Published in
Frontiers in Neuroscience, August 2018
DOI 10.3389/fnins.2018.00524
Authors

Gopalakrishnan Srinivasan, Priyadarshini Panda, Kaushik Roy

Abstract

In this work, we propose a Spiking Neural Network (SNN) consisting of input neurons sparsely connected by plastic synapses to a randomly interlinked liquid, referred to as Liquid-SNN, for unsupervised speech and image recognition. We adapt the strength of the synapses interconnecting the input and liquid using Spike Timing Dependent Plasticity (STDP), which enables the neurons to self-learn a general representation of unique classes of input patterns. The presented unsupervised learning methodology makes it possible to infer the class of a test input directly from the liquid's spiking activity. This is in contrast to standard Liquid State Machines (LSMs), which have fixed synaptic connections between the input and liquid, followed by a readout layer (trained in a supervised manner) that extracts the liquid states and infers the class of the input patterns. Moreover, the utility of LSMs has primarily been demonstrated for speech recognition. We find that training such LSMs is challenging for complex pattern recognition tasks because of the information loss incurred by the fixed input-to-liquid synaptic connections. We show that our Liquid-SNN is capable of efficiently recognizing both speech and image patterns by learning the rich temporal information contained in the respective input patterns. However, the need to enlarge the liquid to improve accuracy introduces scalability challenges and training inefficiencies. We therefore propose SpiLinC, which is composed of an ensemble of multiple liquids operating in parallel. SpiLinC uses a "divide and learn" strategy, where each liquid is trained on a unique segment of the input patterns, causing its neurons to self-learn distinctive input features. SpiLinC effectively recognizes a test pattern by combining the spiking activity of the constituent liquids, each of which identifies characteristic input features. As a result, SpiLinC offers classification accuracy competitive with the Liquid-SNN, together with added sparsity in synaptic connectivity and faster training convergence, both of which lead to improved energy efficiency in neuromorphic hardware implementations. We validate the efficacy of the proposed Liquid-SNN and SpiLinC on the entire digit subset of the TI46 speech corpus and on handwritten digits from the MNIST dataset.
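The two mechanisms the abstract describes (STDP adaptation of the input-to-liquid synapses, and SpiLinC's "divide and learn" ensemble) can be illustrated with a short sketch. The Python below is a minimal toy version assuming simplified leaky integrate-and-fire dynamics and a trace-based pair STDP rule; the function names (run_liquid, stdp_update), the random input, and all parameter values are illustrative assumptions rather than the paper's actual model or hyperparameters.

```python
# Minimal sketch: STDP-trained input-to-liquid synapses plus a SpiLinC-style
# ensemble of liquids, each wired to one segment of the input. The neuron
# model, STDP rule, and every constant here are simplifying assumptions.
import numpy as np

rng = np.random.default_rng(0)

def run_liquid(inputs, w_in, w_rec, v_thresh=1.0, leak=0.9):
    """Drive a liquid of leaky integrate-and-fire neurons with input spikes.

    inputs: (steps, n_in) binary spike raster for one input segment
    w_in:   (n_in, n_liq) plastic input-to-liquid weights
    w_rec:  (n_liq, n_liq) fixed random recurrent weights
    Returns the liquid spike raster of shape (steps, n_liq).
    """
    steps, n_liq = inputs.shape[0], w_rec.shape[0]
    v, spikes = np.zeros(n_liq), np.zeros((steps, n_liq))
    for t in range(steps):
        v = leak * v + inputs[t] @ w_in + (spikes[t - 1] @ w_rec if t else 0)
        fired = v >= v_thresh
        spikes[t] = fired
        v[fired] = 0.0  # reset membrane potential after a spike
    return spikes

def stdp_update(w_in, pre, post, a_plus=0.01, a_minus=0.012, tau=20.0):
    """Trace-based pair STDP: potentiate pre-before-post, depress post-before-pre."""
    pre_tr, post_tr = np.zeros(pre.shape[1]), np.zeros(post.shape[1])
    decay = np.exp(-1.0 / tau)
    for t in range(pre.shape[0]):
        pre_tr = decay * pre_tr + pre[t]
        post_tr = decay * post_tr + post[t]
        w_in += a_plus * np.outer(pre_tr, post[t])    # potentiation
        w_in -= a_minus * np.outer(pre[t], post_tr)   # depression
    return np.clip(w_in, 0.0, 1.0)

# Ensemble of liquids, each assigned one segment of the input neurons.
n_in, n_liq, n_liquids, steps = 64, 32, 4, 100
seg = n_in // n_liquids
liquids = [{"w_in": 0.5 * rng.random((seg, n_liq)),
            "w_rec": rng.normal(0.0, 0.1, (n_liq, n_liq))}
           for _ in range(n_liquids)]

pattern = (rng.random((steps, n_in)) < 0.05).astype(float)  # toy input raster

# Unsupervised "divide and learn": each liquid adapts only its own segment.
for i, liq in enumerate(liquids):
    segment = pattern[:, i * seg:(i + 1) * seg]
    spikes = run_liquid(segment, liq["w_in"], liq["w_rec"])
    liq["w_in"] = stdp_update(liq["w_in"], segment, spikes)

# Inference: pool the spiking activity of all liquids into one activity
# vector from which a class would be read out.
activity = np.concatenate([
    run_liquid(pattern[:, i * seg:(i + 1) * seg],
               liq["w_in"], liq["w_rec"]).sum(axis=0)
    for i, liq in enumerate(liquids)])
```

In this sketch, pooling per-liquid spike counts merely stands in for whatever readout is applied to the combined liquid activity; per the abstract, the paper infers the class directly from the spiking activity of the constituent liquids rather than through a supervised readout layer.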

X Demographics

The data shown below were collected from the profiles of 4 X users who shared this research output.
Mendeley readers

The data shown below were compiled from readership statistics for 50 Mendeley readers of this research output.

Geographical breakdown

Country    Count   As %
Unknown    50      100%

Demographic breakdown

Readers by professional status   Count   As %
Student > Ph.D. Student          12      24%
Researcher                       9       18%
Student > Master                 6       12%
Student > Doctoral Student       3       6%
Other                            3       6%
Other                            4       8%
Unknown                          13      26%
Readers by discipline    Count   As %
Engineering              15      30%
Computer Science         8       16%
Neuroscience             6       12%
Physics and Astronomy    3       6%
Social Sciences          2       4%
Other                    3       6%
Unknown                  13      26%
Attention Score in Context

This research output has an Altmetric Attention Score of 2. This is our high-level measure of the quality and quantity of online attention that it has received. This Attention Score, as well as the ranking and number of research outputs shown below, was calculated when the research output was last mentioned on 08 September 2018.
Context                                                  Rank          Out of
All research outputs                                     #15,331,355   25,622,179
Outputs from Frontiers in Neuroscience                   #6,468        11,639
Outputs of similar age                                   #181,525      343,262
Outputs of similar age from Frontiers in Neuroscience    #138          232
Altmetric has tracked 25,622,179 research outputs across all sources so far. This one is in the 38th percentile – i.e., 38% of other outputs scored the same or lower than it.
So far Altmetric has tracked 11,639 research outputs from this source. They typically receive a lot more attention than average, with a mean Attention Score of 11.0. This one is in the 42nd percentile – i.e., 42% of its peers scored the same or lower than it.
Older research outputs will score higher simply because they've had more time to accumulate mentions. To account for age, we can compare this Altmetric Attention Score to the 343,262 tracked outputs that were published within six weeks on either side of this one in any source. This one is in the 45th percentile – i.e., 45% of its contemporaries scored the same or lower than it.
We're also able to compare this research output to 232 others from the same source and published within six weeks on either side of this one. This one is in the 38th percentile – i.e., 38% of its contemporaries scored the same or lower than it.