
Optimizing Experimental Design for Comparing Models of Brain Function

Overview of attention for an article published in PLoS Computational Biology, November 2011

About this Attention Score

  • Average Attention Score compared to outputs of the same age
  • Above-average Attention Score compared to outputs of the same age and source (53rd percentile)

Mentioned by

  • 1 Wikipedia page

Citations

  • 42 Dimensions

Readers on

  • 241 Mendeley
  • 5 CiteULike
Title
Optimizing Experimental Design for Comparing Models of Brain Function
Published in
PLoS Computational Biology, November 2011
DOI 10.1371/journal.pcbi.1002280
Pubmed ID
Authors

Jean Daunizeau, Kerstin Preuschoff, Karl Friston, Klaas Stephan

Abstract

This article presents the first attempt to formalize the optimization of experimental design with the aim of comparing models of brain function based on neuroimaging data. We demonstrate our approach in the context of Dynamic Causal Modelling (DCM), which relates experimental manipulations to observed network dynamics (via hidden neuronal states) and provides an inference framework for selecting among candidate models. Here, we show how to optimize the sensitivity of model selection by choosing among experimental designs according to their respective model selection accuracy. Using Bayesian decision theory, we (i) derive the Laplace-Chernoff risk for model selection, (ii) disclose its relationship with classical design optimality criteria and (iii) assess its sensitivity to basic modelling assumptions. We then evaluate the approach when identifying brain networks using DCM. Monte-Carlo simulations and empirical analyses of fMRI data from a simple bimanual motor task in humans serve to demonstrate the relationship between network identification and the optimal experimental design. For example, we show that deciding whether there is a feedback connection requires shorter epoch durations, relative to asking whether there is experimentally induced change in a connection that is known to be present. Finally, we discuss limitations and potential extensions of this work.
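The core idea of the abstract — scoring candidate experimental designs by the accuracy of the model selection they would support, via a Chernoff-type bound on selection error — can be illustrated with a toy sketch. This is not the paper's DCM machinery: the function names, the one-dimensional Gaussian predictive densities, and the `predictive_moments` stand-in (where the design parameter might play the role of, e.g., epoch duration) are all hypothetical simplifications, using the Bhattacharyya bound (the Chernoff bound evaluated at s = 1/2).

```python
import numpy as np

def bhattacharyya_gaussian(m1, v1, m2, v2):
    """Bhattacharyya distance between two 1-D Gaussians N(m1,v1), N(m2,v2)."""
    return (0.25 * (m1 - m2) ** 2 / (v1 + v2)
            + 0.5 * np.log((v1 + v2) / (2.0 * np.sqrt(v1 * v2))))

def selection_error_bound(m1, v1, m2, v2, prior=0.5):
    """Chernoff/Bhattacharyya upper bound on binary model-selection error."""
    return np.sqrt(prior * (1.0 - prior)) * np.exp(-bhattacharyya_gaussian(m1, v1, m2, v2))

def predictive_moments(model, design):
    """Toy predictive mean/variance of a data summary under each model,
    as a function of a scalar design parameter (hypothetical stand-in)."""
    if model == "feedback":
        return np.sin(design), 0.5
    return 0.8 * design / (1.0 + design), 0.5

# Score each candidate design by how well it separates the two models'
# predictive densities, then keep the design with the smallest error bound.
designs = np.linspace(0.1, 3.0, 30)
bounds = [selection_error_bound(*predictive_moments("feedback", u),
                                *predictive_moments("no_feedback", u))
          for u in designs]
best = designs[int(np.argmin(bounds))]
```

When the two models make identical predictions the bound sits at its maximum of 0.5 (a coin flip with equal priors), and it shrinks as a design pushes the predictive densities apart — which is exactly why design choice matters for model comparison.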

Mendeley readers

The data shown below were compiled from readership statistics for 241 Mendeley readers of this research output.

Geographical breakdown

Country Count As %
United Kingdom 6 2%
Germany 3 1%
Japan 2 <1%
France 1 <1%
Austria 1 <1%
Canada 1 <1%
Netherlands 1 <1%
Russia 1 <1%
Iran, Islamic Republic of 1 <1%
Other 2 <1%
Unknown 222 92%

Demographic breakdown

Readers by professional status Count As %
Student > PhD Student 68 28%
Researcher 54 22%
Student > Master 25 10%
Student > Bachelor 16 7%
Student > Postgraduate 13 5%
Other 46 19%
Unknown 19 8%
Readers by discipline Count As %
Psychology 61 25%
Neuroscience 50 21%
Agricultural and Biological Sciences 36 15%
Medicine and Dentistry 18 7%
Engineering 15 6%
Other 32 13%
Unknown 29 12%
Attention Score in Context


This research output has an Altmetric Attention Score of 3. This is our high-level measure of the quality and quantity of online attention that it has received. This Attention Score, as well as the ranking and number of research outputs shown below, was calculated when the research output was last mentioned on 27 November 2013.
All research outputs
#8,544,090
of 25,394,764 outputs
Outputs from PLoS Computational Biology
#5,639
of 8,964 outputs
Outputs of similar age
#71,660
of 244,588 outputs
Outputs of similar age from PLoS Computational Biology
#57
of 141 outputs
Altmetric has tracked 25,394,764 research outputs across all sources so far. This one is in the 43rd percentile – i.e., 43% of other outputs scored the same or lower than it.
So far Altmetric has tracked 8,964 research outputs from this source. They typically receive a lot more attention than average, with a mean Attention Score of 20.4. This one is in the 33rd percentile – i.e., 33% of its peers scored the same or lower than it.
Older research outputs will score higher simply because they've had more time to accumulate mentions. To account for age we can compare this Altmetric Attention Score to the 244,588 tracked outputs that were published within six weeks on either side of this one in any source. This one is in the 38th percentile – i.e., 38% of its contemporaries scored the same or lower than it.
We're also able to compare this research output to 141 others from the same source and published within six weeks on either side of this one. This one has received more attention than average, scoring higher than 53% of its contemporaries.
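The percentile statements above all follow the same rule: the share of outputs in the comparison cohort whose score is the same or lower. A minimal sketch of that arithmetic, with a hypothetical toy cohort (Altmetric's exact cohort construction and tie handling may differ):

```python
def percentile_same_or_lower(scores, score):
    """Percentile as Altmetric phrases it: the percentage of cohort
    outputs whose attention score is <= the given score."""
    return 100.0 * sum(s <= score for s in scores) / len(scores)

# Hypothetical attention scores for a cohort of similar-age outputs.
cohort = [0, 0, 1, 1, 2, 3, 3, 5, 12, 40]
pct = percentile_same_or_lower(cohort, 3)  # this output's score is 3
```

With the toy cohort above, a score of 3 matches or beats 7 of the 10 outputs, i.e. the 70th percentile by this definition.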