
Learning neural connectivity from firing activity: efficient algorithms with provable guarantees on topology

Overview of attention for article published in Journal of Computational Neuroscience, February 2018

About this Attention Score

  • Above-average Attention Score compared to outputs of the same age (51st percentile)
  • High Attention Score compared to outputs of the same age and source (80th percentile)

Mentioned by

6 X users

Citations

3 Dimensions

Readers on

41 Mendeley
1 CiteULike
Title
Learning neural connectivity from firing activity: efficient algorithms with provable guarantees on topology
Published in
Journal of Computational Neuroscience, February 2018
DOI 10.1007/s10827-018-0678-8
Authors

Amin Karbasi, Amir Hesam Salavati, Martin Vetterli

Abstract

The connectivity of a neuronal network has a major effect on its functionality and role. It is generally believed that the complex network structure of the brain provides a physiological basis for information processing. Therefore, identifying the network's topology has received a lot of attention in neuroscience and has been the focus of many research initiatives, such as the Human Connectome Project. Nevertheless, direct and invasive approaches that slice and observe the neural tissue have proven to be time-consuming, complex, and costly. As a result, inverse methods that use the firing activity of neurons to identify the (functional) connections have gained momentum recently, especially in light of rapid advances in recording technologies; it will soon be possible to simultaneously monitor the activity of tens of thousands of neurons in real time. While there are a number of excellent approaches that aim to identify functional connections from firing activity, the scalability of the proposed techniques poses a major challenge in applying them to large-scale datasets of recorded firing activity. In the exceptional cases where scalability has not been an issue, the theoretical performance guarantees are usually limited to a specific family of neurons or type of firing activity. In this paper, we formulate neural network reconstruction as an instance of a graph learning problem, where we observe the behavior of nodes/neurons (i.e., firing activity) and aim to find the links/connections. We develop a scalable learning mechanism and derive the conditions under which the estimated graph for a network of Leaky Integrate-and-Fire (LIF) neurons matches the true underlying synaptic connections. We then validate the performance of the algorithm on artificially generated data (for benchmarking) and real data recorded from multiple hippocampal areas in rats.
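The paper's own algorithm and guarantees are not reproduced here; as an illustration of the general inverse approach the abstract describes, the following is a minimal sketch: simulate a small network of simplified discrete-time LIF-style neurons with a known synaptic matrix, then estimate the functional topology from the spike trains alone by scoring each candidate edge with the lag-1 cross-covariance between pre- and postsynaptic firing. All parameter values, thresholds, and the covariance-based scoring rule are illustrative assumptions, not the method of the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Ground-truth synaptic weights for a small network (illustrative values).
n_neurons, n_steps = 8, 20000
W_true = (rng.random((n_neurons, n_neurons)) < 0.2).astype(float) * 0.6
np.fill_diagonal(W_true, 0.0)

# Forward model: simplified discrete-time LIF dynamics. The membrane
# potential leaks, integrates presynaptic spikes, and resets on firing.
leak, threshold = 0.8, 1.0
v = np.zeros(n_neurons)
spikes = np.zeros((n_steps, n_neurons))
for t in range(1, n_steps):
    external = rng.random(n_neurons) < 0.1            # background drive
    v = leak * v + W_true.T @ spikes[t - 1] + 1.1 * external
    fired = v >= threshold
    spikes[t] = fired
    v[fired] = 0.0                                    # reset after a spike

# Inverse step: score each candidate edge i -> j by the lag-1
# cross-covariance between i's spikes and j's spikes one step later.
X, Y = spikes[:-1], spikes[1:]
Xc = X - X.mean(axis=0)
Yc = Y - Y.mean(axis=0)
score = Xc.T @ Yc / (n_steps - 1)
np.fill_diagonal(score, 0.0)

# Threshold the scores to obtain an estimated binary topology, then
# compare it against the ground truth (the "benchmarking" setting).
W_est = score > 0.5 * score.max()
accuracy = (W_est == (W_true > 0)).mean()
print(f"edge-recovery accuracy: {accuracy:.2f}")
```

Because the ground truth is known in this synthetic setting, the recovered adjacency matrix can be checked directly, which mirrors the benchmarking step the abstract mentions; on real recordings only the estimated functional graph is available.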

X Demographics

The data shown below were collected from the profiles of the 6 X users who shared this research output.
Mendeley readers

The data shown below were compiled from readership statistics for the 41 Mendeley readers of this research output.

Geographical breakdown

Country Count As %
Unknown 41 100%

Demographic breakdown

Readers by professional status Count As %
Student > Ph.D. Student 8 20%
Researcher 8 20%
Student > Master 5 12%
Student > Doctoral Student 3 7%
Student > Bachelor 2 5%
Other 4 10%
Unknown 11 27%
Readers by discipline Count As %
Neuroscience 11 27%
Engineering 5 12%
Agricultural and Biological Sciences 4 10%
Physics and Astronomy 3 7%
Computer Science 2 5%
Other 3 7%
Unknown 13 32%
Attention Score in Context

This research output has an Altmetric Attention Score of 3. This is our high-level measure of the quality and quantity of online attention that it has received. This Attention Score, as well as the ranking and number of research outputs shown below, was calculated when the research output was last mentioned on 28 February 2018.
All research outputs
#13,740,804
of 24,319,828 outputs
Outputs from Journal of Computational Neuroscience
#125
of 317 outputs
Outputs of similar age
#161,350
of 334,845 outputs
Outputs of similar age from Journal of Computational Neuroscience
#2
of 5 outputs
Altmetric has tracked 24,319,828 research outputs across all sources so far. This one is in the 43rd percentile – i.e., 43% of other outputs scored the same or lower than it.
So far Altmetric has tracked 317 research outputs from this source. They receive a mean Attention Score of 3.5. This one has gotten more attention than average, scoring higher than 59% of its peers.
Older research outputs will score higher simply because they've had more time to accumulate mentions. To account for age we can compare this Altmetric Attention Score to the 334,845 tracked outputs that were published within six weeks on either side of this one in any source. This one has gotten more attention than average, scoring higher than 51% of its contemporaries.
We're also able to compare this research output to 5 others from the same source and published within six weeks on either side of this one. This one has scored higher than 3 of them.