
Challenges in Reproducibility, Replicability, and Comparability of Computational Models and Tools for Neuronal and Glial Networks, Cells, and Subcellular Structures

Overview of attention for article published in Frontiers in Neuroinformatics, May 2018

Mentioned by: 3 X users

Readers on Mendeley: 56
DOI 10.3389/fninf.2018.00020
Authors

Tiina Manninen, Jugoslava Aćimović, Riikka Havela, Heidi Teppola, Marja-Leena Linne

Abstract

The ability to replicate and reproduce published research results is one of the biggest challenges in all areas of science. In computational neuroscience, thousands of models are available. However, it is rarely possible to reimplement a model based on the information in the original publication alone, let alone rerun it, simply because the model implementation has not been made publicly available. We evaluate and discuss the comparability of a versatile selection of simulation tools: tools for biochemical reactions and spiking neuronal networks, and relatively new tools for growth in cell cultures. Replicability and reproducibility issues are considered for computational models that are equally diverse, including models for intracellular signal transduction in neurons and glial cells, as well as single glial cells, neuron-glia interactions, and selected examples of spiking neuronal networks. We also address the comparability of the simulation results with one another, to determine whether the studied models can be used to answer similar research questions. In addition to presenting the challenges in reproducibility and replicability of published results in computational neuroscience, we highlight the need to develop recommendations and good practices for publishing simulation tools and computational models. Model validation and flexible model description must be an integral part of the tools used to simulate and develop computational models. Constant improvement in experimental techniques and recording protocols leads to increasing knowledge about the biophysical mechanisms in neural systems. This poses new challenges for computational neuroscience: extended or completely new computational methods and models may be required. Careful evaluation and categorization of the existing models and tools provide a foundation for these future needs, whether for constructing multiscale models or for extending the models to incorporate additional or more detailed biophysical mechanisms. Improving the quality of publications in computational neuroscience, and enabling the progressive building of advanced computational models and tools, can be achieved only by adopting publishing standards that emphasize the replicability and reproducibility of research results.

X Demographics

The data shown below were collected from the profiles of the 3 X users who shared this research output.
Mendeley readers

The data shown below were compiled from readership statistics for the 56 Mendeley readers of this research output.

Geographical breakdown

Country    Count    As %
Unknown    56       100%

Demographic breakdown

Readers by professional status    Count    As %
Student > Ph.D. Student           16       29%
Researcher                        8        14%
Student > Master                  6        11%
Student > Bachelor                4        7%
Student > Doctoral Student        3        5%
Other                             9        16%
Unknown                           10       18%
Readers by discipline                  Count    As %
Engineering                            10       18%
Neuroscience                           9        16%
Computer Science                       9        16%
Medicine and Dentistry                 3        5%
Agricultural and Biological Sciences   2        4%
Other                                  13       23%
Unknown                                10       18%
Attention Score in Context

This research output has an Altmetric Attention Score of 2. This is our high-level measure of the quality and quantity of online attention that it has received. This Attention Score, as well as the ranking and number of research outputs shown below, was calculated when the research output was last mentioned on 30 April 2022.
All research outputs: #14,103,984 of 23,041,514
Outputs from Frontiers in Neuroinformatics: #462 of 754
Outputs of similar age: #178,584 of 326,163
Outputs of similar age from Frontiers in Neuroinformatics: #14 of 21
Altmetric has tracked 23,041,514 research outputs across all sources so far. This one is in the 37th percentile – i.e., 37% of other outputs scored the same or lower than it.
So far Altmetric has tracked 754 research outputs from this source. They typically receive more attention than average, with a mean Attention Score of 8.2. This one is in the 36th percentile – i.e., 36% of its peers scored the same or lower than it.
Older research outputs will score higher simply because they've had more time to accumulate mentions. To account for age we can compare this Altmetric Attention Score to the 326,163 tracked outputs that were published within six weeks on either side of this one in any source. This one is in the 43rd percentile – i.e., 43% of its contemporaries scored the same or lower than it.
We're also able to compare this research output to 21 others from the same source and published within six weeks on either side of this one. This one is in the 23rd percentile – i.e., 23% of its contemporaries scored the same or lower than it.