
Reproducing Polychronization: A Guide to Maximizing the Reproducibility of Spiking Network Models

Overview of attention for article published in Frontiers in Neuroinformatics, August 2018

About this Attention Score

  • In the top 25% of all research outputs scored by Altmetric
  • Good Attention Score compared to outputs of the same age (79th percentile)
  • Good Attention Score compared to outputs of the same age and source (78th percentile)

Mentioned by

  • 17 X users

Citations

  • 36 Dimensions

Readers on

  • 38 Mendeley
Title
Reproducing Polychronization: A Guide to Maximizing the Reproducibility of Spiking Network Models
Published in
Frontiers in Neuroinformatics, August 2018
DOI 10.3389/fninf.2018.00046
Authors

Robin Pauli, Philipp Weidel, Susanne Kunkel, Abigail Morrison

Abstract

Any modeler who has attempted to reproduce a spiking neural network model from its description in a paper has discovered what a painful endeavor this is. Even when all parameters appear to have been specified, which is rare, typically the initial attempt to reproduce the network does not yield results that are recognizably akin to those in the original publication. Causes include inaccurately reported or hidden parameters (e.g., wrong unit or the existence of an initialization distribution), differences in implementation of model dynamics, and ambiguities in the text description of the network experiment. The very fact that adequate reproduction often cannot be achieved until a series of such causes have been tracked down and resolved is in itself disconcerting, as it reveals unreported model dependencies on specific implementation choices that either were not clear to the original authors, or that they chose not to disclose. In either case, such dependencies diminish the credibility of the model's claims about the behavior of the target system. To demonstrate these issues, we provide a worked example of reproducing a seminal study for which, unusually, source code was provided at time of publication. Despite this seemingly optimal starting position, reproducing the results was time consuming and frustrating. Further examination of the correctly reproduced model reveals that it is highly sensitive to implementation choices such as the realization of background noise, the integration timestep, and the thresholding parameter of the analysis algorithm. From this process, we derive a guideline of best practices that would substantially reduce the investment in reproducing neural network studies, whilst simultaneously increasing their scientific quality. We propose that this guideline can be used by authors and reviewers to assess and improve the reproducibility of future network models.
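The abstract notes that the correctly reproduced model is highly sensitive to implementation choices such as the integration timestep. As an illustrative sketch only (not the paper's actual code), the snippet below integrates a single Izhikevich neuron with forward Euler at two different timesteps; the model equations and regular-spiking parameters are the standard published ones, and the constant drive current is an assumption chosen for demonstration. Even this minimal setup produces different spike trains depending on the step size, which is the kind of hidden dependency the paper warns about.

```python
# Illustrative sketch (not the paper's code): forward-Euler integration of a
# single Izhikevich neuron at two timesteps, showing how spike times drift
# with the choice of integration step.
# dv/dt = 0.04 v^2 + 5 v + 140 - u + I ;  du/dt = a (b v - u)
# on v >= 30 mV: record spike, reset v <- c, u <- u + d

def simulate(dt, t_end=200.0, I=10.0):
    """Return spike times (ms) for one Izhikevich neuron under constant drive I."""
    a, b, c, d = 0.02, 0.2, -65.0, 8.0   # standard regular-spiking parameters
    v, u = -65.0, b * -65.0              # common initialization choice
    spikes, t = [], 0.0
    while t < t_end:
        if v >= 30.0:                    # threshold crossing: spike and reset
            spikes.append(t)
            v, u = c, u + d
        dv = 0.04 * v * v + 5.0 * v + 140.0 - u + I
        du = a * (b * v - u)
        v += dt * dv                     # forward Euler update
        u += dt * du
        t += dt
    return spikes

coarse = simulate(dt=1.0)   # coarse step, as in many published implementations
fine = simulate(dt=0.1)     # finer step: same model, different spike train
print(len(coarse), len(fine))
```

Comparing the two spike lists makes the sensitivity concrete: the model definition is identical in both runs, yet the simulated behavior depends on a numerical parameter that papers often leave unreported.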

X Demographics

The data shown below were collected from the profiles of 17 X users who shared this research output.
Mendeley readers

The data shown below were compiled from readership statistics for 38 Mendeley readers of this research output.

Geographical breakdown

Country Count As %
Unknown 38 100%

Demographic breakdown

Readers by professional status Count As %
Student > Ph. D. Student 14 37%
Researcher 9 24%
Student > Master 3 8%
Other 2 5%
Student > Bachelor 2 5%
Other 4 11%
Unknown 4 11%
Readers by discipline Count As %
Computer Science 9 24%
Neuroscience 8 21%
Engineering 5 13%
Agricultural and Biological Sciences 4 11%
Physics and Astronomy 2 5%
Other 5 13%
Unknown 5 13%
Attention Score in Context

This research output has an Altmetric Attention Score of 10. This is our high-level measure of the quality and quantity of online attention that it has received. This Attention Score, as well as the ranking and number of research outputs shown below, was calculated when the research output was last mentioned on 07 October 2023.
All research outputs
#3,427,474
of 24,140,950 outputs
Outputs from Frontiers in Neuroinformatics
#187
of 790 outputs
Outputs of similar age
#66,990
of 334,733 outputs
Outputs of similar age from Frontiers in Neuroinformatics
#6
of 23 outputs
Altmetric has tracked 24,140,950 research outputs across all sources so far. Compared to these, this one has done well and is in the 85th percentile: it's in the top 25% of all research outputs ever tracked by Altmetric.
So far Altmetric has tracked 790 research outputs from this source. They typically receive more attention than average, with a mean Attention Score of 8.0. This one has done well, scoring higher than 76% of its peers.
Older research outputs will score higher simply because they've had more time to accumulate mentions. To account for age we can compare this Altmetric Attention Score to the 334,733 tracked outputs that were published within six weeks on either side of this one in any source. This one has done well, scoring higher than 79% of its contemporaries.
We're also able to compare this research output to 23 others from the same source and published within six weeks on either side of this one. This one has done well, scoring higher than 78% of its contemporaries.