
Software for Brain Network Simulations: A Comparative Study

Overview of attention for article published in Frontiers in Neuroinformatics, July 2017

About this Attention Score

  • In the top 25% of all research outputs scored by Altmetric
  • High Attention Score compared to outputs of the same age (83rd percentile)
  • High Attention Score compared to outputs of the same age and source (94th percentile)

Mentioned by

  • 18 X users
  • 1 Facebook page
  • 1 Redditor

Citations

  • 50 Dimensions

Readers on

  • 143 Mendeley
Title: Software for Brain Network Simulations: A Comparative Study
Published in: Frontiers in Neuroinformatics, July 2017
DOI: 10.3389/fninf.2017.00046
Authors: Ruben A. Tikidji-Hamburyan, Vikram Narayana, Zeki Bozkus, Tarek A. El-Ghazawi

Abstract

Numerical simulations of brain networks are a critical part of our efforts to understand brain function under both normal and pathological conditions. For several decades, the community has developed many software packages and simulators to accelerate research in computational neuroscience. In this article, we select the three most popular simulators, as determined by the number of models in the ModelDB database (NEURON, GENESIS, and BRIAN), and perform an independent evaluation of them. In addition, we study NEST, one of the leading simulators of the Human Brain Project. First, we compare them on one of their most important characteristics: the range of supported models. Our investigation reveals that brain network simulators may be biased toward supporting a specific set of models. However, all simulators tend to expand the supported range of models by providing a universal environment for the computational study of individual neurons and brain networks. Next, our investigation of computational architecture and efficiency indicates that all simulators compile the most computationally intensive procedures into binary code, with the aim of maximizing computational performance. However, not all simulators provide a simple method for module development or guarantee efficient binary code. Third, a study of their amenability to high-performance computing reveals that NEST can almost transparently map an existing model onto a cluster or multicore computer, while NEURON requires code modification if a model developed for a single computer is to be mapped onto a computational cluster. Interestingly, parallelization is the weakest characteristic of BRIAN, which provides no support for cluster computations and only limited support for multicore computers. Fourth, we identify the level of user support and frequency of usage for each simulator.
Finally, we carry out an evaluation using two case studies: a large network with simplified neural and synaptic models, and a small network with detailed models. These two case studies allow us to avoid bias toward any particular software package. The results indicate that BRIAN provides the most concise language for both cases considered. Furthermore, as expected, NEST mostly favors large network models, while NEURON is better suited for detailed models. Overall, the case studies reinforce our general observation that each simulator's computational performance is biased toward specific types of brain network models.
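For readers unfamiliar with the "simplified neural and synaptic models" benchmarked in the first case study, the sketch below integrates a single leaky integrate-and-fire (LIF) neuron with the forward Euler method in plain Python. This is not code from the paper or from any of the simulators discussed, and all parameter values (time constant, threshold, input current) are illustrative assumptions; it only shows the kind of update rule that such simulators compile into efficient binary code.

```python
# Minimal sketch of a leaky integrate-and-fire (LIF) neuron with
# forward Euler integration. Parameters are illustrative, not taken
# from the article: membrane dynamics dv/dt = (-(v - v_rest) + i_ext) / tau.

def simulate_lif(i_ext=1.5, tau=20.0, v_rest=0.0, v_thresh=1.0,
                 v_reset=0.0, dt=0.1, t_max=200.0):
    """Return the spike times (in ms) of one LIF neuron driven by a
    constant input current i_ext, simulated for t_max ms with step dt."""
    v = v_rest
    spikes = []
    for step in range(int(t_max / dt)):
        # Euler step of the leaky membrane equation.
        v += dt * (-(v - v_rest) + i_ext) / tau
        if v >= v_thresh:
            spikes.append(step * dt)  # record spike time
            v = v_reset               # reset after spiking
    return spikes

if __name__ == "__main__":
    spikes = simulate_lif()
    print(len(spikes), "spikes in 200 ms")
```

With the default suprathreshold current the neuron fires periodically, while a subthreshold current (e.g. `i_ext=0.5`) produces no spikes; large networks of such units are exactly what BRIAN expresses concisely and NEST scales across cores.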

X Demographics


The data shown below were collected from the profiles of the 18 X users who shared this research output.
Mendeley readers


The data shown below were compiled from readership statistics for the 143 Mendeley readers of this research output.

Geographical breakdown

Country    Count    As %
Unknown    143      100%

Demographic breakdown

Readers by professional status          Count    As %
Student > Ph. D. Student                35       24%
Researcher                              28       20%
Student > Master                        15       10%
Student > Bachelor                      12       8%
Student > Doctoral Student              9        6%
Other                                   21       15%
Unknown                                 23       16%

Readers by discipline                   Count    As %
Neuroscience                            41       29%
Engineering                             33       23%
Computer Science                        16       11%
Agricultural and Biological Sciences    12       8%
Medicine and Dentistry                  3        2%
Other                                   11       8%
Unknown                                 27       19%
Attention Score in Context


This research output has an Altmetric Attention Score of 12. This is our high-level measure of the quality and quantity of online attention that it has received. This Attention Score, as well as the ranking and number of research outputs shown below, was calculated when the research output was last mentioned on 28 September 2023.
All research outputs
#2,964,187
of 24,516,705 outputs
Outputs from Frontiers in Neuroinformatics
#124
of 806 outputs
Outputs of similar age
#53,324
of 319,124 outputs
Outputs of similar age from Frontiers in Neuroinformatics
#2
of 19 outputs
Altmetric has tracked 24,516,705 research outputs across all sources so far. Compared to these, this one has done well and is in the 87th percentile: it's in the top 25% of all research outputs ever tracked by Altmetric.
So far Altmetric has tracked 806 research outputs from this source. They typically receive more attention than average, with a mean Attention Score of 7.9. This one has done well, scoring higher than 84% of its peers.
Older research outputs will score higher simply because they've had more time to accumulate mentions. To account for age we can compare this Altmetric Attention Score to the 319,124 tracked outputs that were published within six weeks on either side of this one in any source. This one has done well, scoring higher than 83% of its contemporaries.
We're also able to compare this research output to 19 others from the same source and published within six weeks on either side of this one. This one has done particularly well, scoring higher than 94% of its contemporaries.