
NeuroML: A Language for Describing Data Driven Models of Neurons and Networks with a High Degree of Biological Detail

Overview of attention for article published in PLoS Computational Biology, June 2010

About this Attention Score

  • In the top 25% of all research outputs scored by Altmetric
  • High Attention Score compared to outputs of the same age (93rd percentile)
  • High Attention Score compared to outputs of the same age and source (86th percentile)

Mentioned by

  • 1 blog
  • 6 patents
  • 2 Wikipedia pages

Citations

  • 290 Dimensions

Readers on

  • 272 Mendeley
  • 8 CiteULike
Title
NeuroML: A Language for Describing Data Driven Models of Neurons and Networks with a High Degree of Biological Detail
Published in
PLoS Computational Biology, June 2010
DOI 10.1371/journal.pcbi.1000815
Pubmed ID
Authors

Padraig Gleeson, Sharon Crook, Robert C. Cannon, Michael L. Hines, Guy O. Billings, Matteo Farinella, Thomas M. Morse, Andrew P. Davison, Subhasis Ray, Upinder S. Bhalla, Simon R. Barnes, Yoana D. Dimitrova, R. Angus Silver

Abstract

Biologically detailed single neuron and network models are important for understanding how ion channels, synapses and anatomical connectivity underlie the complex electrical behavior of the brain. While neuronal simulators such as NEURON, GENESIS, MOOSE, NEST, and PSICS facilitate the development of these data-driven neuronal models, the specialized languages they employ are generally not interoperable, limiting model accessibility and preventing reuse of model components and cross-simulator validation. To overcome these problems we have used an Open Source software approach to develop NeuroML, a neuronal model description language based on XML (Extensible Markup Language). This enables these detailed models and their components to be defined in a standalone form, allowing them to be used across multiple simulators and archived in a standardized format. Here we describe the structure of NeuroML and demonstrate its scope by converting into NeuroML models of a number of different voltage- and ligand-gated conductances, models of electrical coupling, synaptic transmission and short-term plasticity, together with morphologically detailed models of individual neurons. We have also used these NeuroML-based components to develop a highly detailed cortical network model. NeuroML-based model descriptions were validated by demonstrating similar model behavior across five independently developed simulators. Although our results confirm that simulations run on different simulators converge, they reveal limits to model interoperability, by showing that for some models convergence only occurs at high levels of spatial and temporal discretisation, when the computational overhead is high. Our development of NeuroML as a common description language for biophysically detailed neuronal and network models enables interoperability across multiple simulation environments, thereby improving model transparency, accessibility and reuse in computational neuroscience.
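The abstract's central idea is that a model written once as a standalone XML description can be read by any tool, independent of a particular simulator. As a rough, hypothetical illustration of that workflow only (the element and attribute names below are invented placeholders and do not follow the actual NeuroML schema described in the paper), a minimal Python sketch using only the standard library might look like this:

import xml.etree.ElementTree as ET

# Hypothetical, simplified model description; NOT the real NeuroML schema.
MODEL_XML = """
<model name="toy_cell">
  <channel id="na" ion="na" max_conductance="120" reversal_potential="50"/>
  <channel id="k"  ion="k"  max_conductance="36"  reversal_potential="-77"/>
  <segment id="soma" length="20" diameter="20"/>
</model>
"""

def load_model(xml_text):
    """Parse the XML into plain Python data that any simulator could consume."""
    root = ET.fromstring(xml_text)
    return {
        "name": root.get("name"),
        "channels": [dict(c.attrib) for c in root.findall("channel")],
        "segments": [dict(s.attrib) for s in root.findall("segment")],
    }

model = load_model(MODEL_XML)
print(model["name"], "defines", len(model["channels"]), "channel types")

The point of the sketch is only that the model lives in a declarative file, so the same description can be exported to, or validated against, several simulation environments, which is the interoperability claim the abstract makes.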

Mendeley readers

The data shown below were compiled from readership statistics for 272 Mendeley readers of this research output.

Geographical breakdown

Country          Count   As %
United Kingdom       7     3%
United States        6     2%
Germany              5     2%
France               1    <1%
Latvia               1    <1%
Portugal             1    <1%
Finland              1    <1%
India                1    <1%
Belarus              1    <1%
Other                5     2%
Unknown            243    89%

Demographic breakdown

Readers by professional status

Status                     Count   As %
Student > Ph. D. Student      75    28%
Researcher                    63    23%
Student > Master              29    11%
Professor                     15     6%
Student > Bachelor            14     5%
Other                         51    19%
Unknown                       25     9%

Readers by discipline

Discipline                              Count   As %
Agricultural and Biological Sciences       81    30%
Computer Science                           47    17%
Neuroscience                               47    17%
Engineering                                31    11%
Physics and Astronomy                      11     4%
Other                                      20     7%
Unknown                                    35    13%
Attention Score in Context

This research output has an Altmetric Attention Score of 18. This is our high-level measure of the quality and quantity of online attention that it has received. This Attention Score, as well as the ranking and number of research outputs shown below, was calculated when the research output was last mentioned on 16 August 2017.
  • All research outputs: #2,090,512 of 25,870,940 outputs
  • Outputs from PLoS Computational Biology: #1,819 of 9,061 outputs
  • Outputs of similar age: #7,207 of 105,842 outputs
  • Outputs of similar age from PLoS Computational Biology: #7 of 53 outputs
Altmetric has tracked 25,870,940 research outputs across all sources so far. Compared to these, this one has done particularly well and is in the 91st percentile: it's in the top 10% of all research outputs ever tracked by Altmetric.
So far Altmetric has tracked 9,061 research outputs from this source. They typically receive a lot more attention than average, with a mean Attention Score of 20.3. This one has done well, scoring higher than 79% of its peers.
Older research outputs will score higher simply because they've had more time to accumulate mentions. To account for age we can compare this Altmetric Attention Score to the 105,842 tracked outputs that were published within six weeks on either side of this one in any source. This one has done particularly well, scoring higher than 93% of its contemporaries.
We're also able to compare this research output to 53 others from the same source and published within six weeks on either side of this one. This one has done well, scoring higher than 86% of its contemporaries.
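As a quick sanity check, the percentiles quoted in this section follow from the ranks listed above under a simple assumption: the percentile is the fraction of outputs ranked below this one, truncated to a whole number. A short Python sketch using only the figures quoted on this page:

import math

# (rank, total) pairs quoted above; comments give the percentile stated on the page.
rankings = {
    "All research outputs": (2_090_512, 25_870_940),             # page says 91st
    "Outputs from PLoS Computational Biology": (1_819, 9_061),   # page says 79%
    "Outputs of similar age": (7_207, 105_842),                  # page says 93%
    "Similar age, same source": (7, 53),                         # page says 86%
}

for label, (rank, total) in rankings.items():
    pct = math.floor((1 - rank / total) * 100)
    print(f"{label}: ranked {rank:,} of {total:,} -> percentile {pct}")

Each truncated value (91, 79, 93, 86) matches the percentile quoted in the paragraphs above.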