
Uncertainpy: A Python Toolbox for Uncertainty Quantification and Sensitivity Analysis in Computational Neuroscience

Overview of attention for article published in Frontiers in Neuroinformatics, August 2018

About this Attention Score

  • In the top 25% of all research outputs scored by Altmetric
  • High Attention Score compared to outputs of the same age (87th percentile)
  • High Attention Score compared to outputs of the same age and source (99th percentile)

Mentioned by

  • 25 X users
  • 1 Facebook page

Citations

  • 76 Dimensions

Readers on

  • 234 Mendeley
Title: Uncertainpy: A Python Toolbox for Uncertainty Quantification and Sensitivity Analysis in Computational Neuroscience
Published in: Frontiers in Neuroinformatics, August 2018
DOI: 10.3389/fninf.2018.00049
Authors: Simen Tennøe, Geir Halnes, Gaute T. Einevoll

Abstract

Computational models in neuroscience typically contain many parameters that are poorly constrained by experimental data. Uncertainty quantification and sensitivity analysis provide rigorous procedures to quantify how the model output depends on this parameter uncertainty. Unfortunately, the application of such methods is not yet standard within the field of neuroscience. Here we present Uncertainpy, an open-source Python toolbox, tailored to perform uncertainty quantification and sensitivity analysis of neuroscience models. Uncertainpy aims to make it quick and easy to get started with uncertainty analysis, without any need for detailed prior knowledge. The toolbox allows uncertainty quantification and sensitivity analysis to be performed on already existing models without needing to modify the model equations or model implementation. Uncertainpy bases its analysis on polynomial chaos expansions, which are more efficient than the more standard Monte-Carlo based approaches. Uncertainpy is tailored for neuroscience applications by its built-in capability for calculating characteristic features in the model output. The toolbox does not merely perform a point-to-point comparison of the "raw" model output (e.g., membrane voltage traces), but can also calculate the uncertainty and sensitivity of salient model response features such as spike timing, action potential width, average interspike interval, and other features relevant for various neural and neural network models. Uncertainpy comes with several common models and features built in, and including custom models and new features is easy. The aim of the current paper is to present Uncertainpy to the neuroscience community in a user-oriented manner. 
To demonstrate its broad applicability, we perform an uncertainty quantification and sensitivity analysis of three case studies relevant for neuroscience: the original Hodgkin-Huxley point-neuron model for action potential generation, a multi-compartmental model of a thalamic interneuron implemented in the NEURON simulator, and a sparsely connected recurrent network model implemented in the NEST simulator.
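The abstract's central technical claim is that polynomial chaos expansions reach a given accuracy with far fewer model evaluations than standard Monte Carlo sampling. The sketch below is not Uncertainpy's API; it is a minimal, self-contained illustration (assuming NumPy, and a hypothetical one-parameter scalar "model" g) of both approaches applied to the same uncertain input x ~ Uniform(-1, 1).

```python
import numpy as np
from numpy.polynomial import legendre as L

# Hypothetical scalar "model" of one uncertain parameter x ~ Uniform(-1, 1).
def g(x):
    return np.exp(x)

# --- Polynomial chaos expansion (Legendre basis, Gauss quadrature) ---
def pce_mean_var(order=8, quad_pts=16):
    x, w = L.leggauss(quad_pts)            # Gauss-Legendre nodes/weights on [-1, 1]
    gx = g(x)
    # c_k = (2k+1)/2 * integral of g * P_k over [-1, 1]
    coeffs = [(2 * k + 1) / 2.0 * np.sum(w * gx * L.legval(x, [0] * k + [1]))
              for k in range(order + 1)]
    mean = coeffs[0]                        # E[P_0] = 1, E[P_k] = 0 for k >= 1
    # Var = sum over k >= 1 of c_k^2 * E[P_k^2], with E[P_k^2] = 1/(2k+1)
    var = sum(c**2 / (2 * k + 1) for k, c in enumerate(coeffs) if k > 0)
    return mean, var

# --- Monte Carlo baseline ---
def mc_mean_var(n=100_000, seed=0):
    rng = np.random.default_rng(seed)
    samples = g(rng.uniform(-1.0, 1.0, size=n))
    return samples.mean(), samples.var()

pce = pce_mean_var()                        # 16 model evaluations
mc = mc_mean_var()                          # 100,000 model evaluations

# Analytic reference values for g = exp with x ~ Uniform(-1, 1)
exact_mean = np.sinh(1.0)
exact_var = np.sinh(2.0) / 2.0 - np.sinh(1.0)**2
```

With only 16 model evaluations, the quadrature-based expansion matches the analytic mean and variance to near machine precision, while the 100,000-sample Monte Carlo estimate still carries an O(1/sqrt(n)) statistical error. This is the efficiency gap the abstract refers to, which holds for smooth model responses and modest numbers of uncertain parameters; Uncertainpy itself builds such expansions (via the Chaospy library) without requiring changes to the model implementation.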

X Demographics

The data shown below were collected from the profiles of 25 X users who shared this research output.
Mendeley readers

The data shown below were compiled from readership statistics for 234 Mendeley readers of this research output.

Geographical breakdown

Country    Count    As %
Unknown    234      100%

Demographic breakdown

Readers by professional status    Count    As %
Student > Ph. D. Student           62      26%
Researcher                         46      20%
Student > Master                   34      15%
Student > Doctoral Student         11       5%
Other                               9       4%
Other                              27      12%
Unknown                            45      19%
Readers by discipline                   Count    As %
Engineering                              57      24%
Neuroscience                             26      11%
Computer Science                         20       9%
Agricultural and Biological Sciences     12       5%
Mathematics                               9       4%
Other                                    47      20%
Unknown                                  63      27%
Attention Score in Context

This research output has an Altmetric Attention Score of 16. This is our high-level measure of the quality and quantity of online attention that it has received. This Attention Score, as well as the ranking and number of research outputs shown below, was calculated when the research output was last mentioned on 01 March 2020.
All research outputs: #2,138,334 of 24,562,945 outputs
Outputs from Frontiers in Neuroinformatics: #74 of 806 outputs
Outputs of similar age: #43,309 of 335,656 outputs
Outputs of similar age from Frontiers in Neuroinformatics: #1 of 24 outputs
Altmetric has tracked 24,562,945 research outputs across all sources so far. Compared to these, this one has done particularly well and is in the 91st percentile: it's in the top 10% of all research outputs ever tracked by Altmetric.
So far Altmetric has tracked 806 research outputs from this source. They typically receive more attention than average, with a mean Attention Score of 7.9. This one has done particularly well, scoring higher than 90% of its peers.
Older research outputs will score higher simply because they've had more time to accumulate mentions. To account for age we can compare this Altmetric Attention Score to the 335,656 tracked outputs that were published within six weeks on either side of this one in any source. This one has done well, scoring higher than 87% of its contemporaries.
We're also able to compare this research output to 24 others from the same source and published within six weeks on either side of this one. This one has done particularly well, scoring higher than 99% of its contemporaries.