
A Robust Classifier to Distinguish Noise from fMRI Independent Components

Overview of attention for article published in PLOS ONE, April 2014

About this Attention Score

  • In the top 25% of all research outputs scored by Altmetric
  • High Attention Score compared to outputs of the same age (85th percentile)
  • Good Attention Score compared to outputs of the same age and source (79th percentile)

Mentioned by

  • 7 X users
  • 1 patent
  • 2 Google+ users

Citations

  • 25 citations (Dimensions)

Readers on

  • 83 readers (Mendeley)
Title: A Robust Classifier to Distinguish Noise from fMRI Independent Components
Published in: PLOS ONE, April 2014
DOI: 10.1371/journal.pone.0095493
Authors: Vanessa Sochat, Kaustubh Supekar, Juan Bustillo, Vince Calhoun, Jessica A. Turner, Daniel L. Rubin

Abstract

Analyzing Functional Magnetic Resonance Imaging (fMRI) of resting brains to determine the spatial location and activity of intrinsic brain networks (a novel and burgeoning research field) is limited by the lack of ground truth and the tendency of analyses to overfit the data. Independent Component Analysis (ICA) is commonly used to separate the data into signal and Gaussian noise components, and then map these components onto spatial networks. Identifying noise from these data, however, is a tedious process that has proven hard to automate, particularly when data from different institutions, subjects, and scanners are used. Here we present an automated method to delineate noisy independent components in ICA using a data-driven infrastructure that queries a database of 246 spatial and temporal features to discover a computational signature of different types of noise. We evaluated the performance of our method in detecting noisy components from healthy control fMRI (sensitivity = 0.91, specificity = 0.82, cross-validation accuracy (CVA) = 0.87, area under the curve (AUC) = 0.93), and demonstrate its generalizability by showing equivalent performance on (1) an age- and scanner-matched cohort of schizophrenia patients from the same institution (sensitivity = 0.89, specificity = 0.83, CVA = 0.86), (2) an age-matched cohort on an equivalent scanner from a different institution (sensitivity = 0.88, specificity = 0.88, CVA = 0.88), and (3) an age-matched cohort on a different scanner from a different institution (sensitivity = 0.72, specificity = 0.92, CVA = 0.79). We additionally compare our approach with a recently published method. Our results suggest that our method is robust to noise variations due to population as well as scanner differences, thereby making it well suited to the goal of automatically distinguishing noise from functional networks to enable investigation of human brain function.
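The sensitivity, specificity, and AUC figures quoted in the abstract follow standard definitions over a confusion matrix. As an illustrative sketch only (toy labels and scores, not the paper's data; noise components taken as the positive class), they can be computed like this:

```python
def confusion(y_true, y_pred):
    """Confusion-matrix counts; noise components are the positive class (1)."""
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    tn = sum(t == 0 and p == 0 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    return tp, tn, fp, fn

def sensitivity(tp, fn):
    return tp / (tp + fn)  # true-positive rate: noise correctly flagged

def specificity(tn, fp):
    return tn / (tn + fp)  # true-negative rate: signal correctly kept

def auc(y_true, scores):
    """Area under the ROC curve via the rank-sum (Mann-Whitney U) identity."""
    pos = [s for t, s in zip(y_true, scores) if t == 1]
    neg = [s for t, s in zip(y_true, scores) if t == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Toy example: 6 independent components, 3 labeled noise (1), 3 signal (0)
y_true = [1, 1, 1, 0, 0, 0]
y_pred = [1, 1, 0, 0, 0, 1]
scores = [0.9, 0.8, 0.4, 0.3, 0.2, 0.6]  # hypothetical noise probabilities

tp, tn, fp, fn = confusion(y_true, y_pred)
print(sensitivity(tp, fn))   # 2/3 ≈ 0.667
print(specificity(tn, fp))   # 2/3 ≈ 0.667
print(auc(y_true, scores))   # 8/9 ≈ 0.889
```

The rank-sum form of AUC avoids building an explicit ROC curve and makes the threshold-free nature of the metric explicit.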

X Demographics

The data shown below were collected from the profiles of 7 X users who shared this research output.
Mendeley readers

The data shown below were compiled from readership statistics for 83 Mendeley readers of this research output.

Geographical breakdown

Country          Count    %
United States        2   2%
United Kingdom       1   1%
Germany              1   1%
Singapore            1   1%
Canada               1   1%
Unknown             77  93%

Demographic breakdown

Readers by professional status     Count    %
Researcher                            21  25%
Student > Ph.D. Student               16  19%
Professor                              7   8%
Professor > Associate Professor        5   6%
Student > Master                       5   6%
Other                                 13  16%
Unknown                               16  19%

Readers by discipline              Count    %
Neuroscience                          14  17%
Engineering                           11  13%
Medicine and Dentistry                11  13%
Psychology                             9  11%
Computer Science                       7   8%
Other                                 13  16%
Unknown                               18  22%
Attention Score in Context

This research output has an Altmetric Attention Score of 10. This is our high-level measure of the quality and quantity of online attention that it has received. This Attention Score, as well as the ranking and number of research outputs shown below, was calculated when the research output was last mentioned on 17 January 2024.
All research outputs: #3,566,477 of 25,182,110
Outputs from PLOS ONE: #46,759 of 218,348
Outputs of similar age: #33,866 of 233,197
Outputs of similar age from PLOS ONE: #1,001 of 4,979
Altmetric has tracked 25,182,110 research outputs across all sources so far. Compared to these, this one has done well and is in the 85th percentile: it's in the top 25% of all research outputs ever tracked by Altmetric.
So far Altmetric has tracked 218,348 research outputs from this source. They typically receive a lot more attention than average, with a mean Attention Score of 15.7. This one has done well, scoring higher than 78% of its peers.
Older research outputs will score higher simply because they've had more time to accumulate mentions. To account for age we can compare this Altmetric Attention Score to the 233,197 tracked outputs that were published within six weeks on either side of this one in any source. This one has done well, scoring higher than 85% of its contemporaries.
We're also able to compare this research output to 4,979 others from the same source and published within six weeks on either side of this one. This one has done well, scoring higher than 79% of its contemporaries.
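The percentile figures above follow directly from the rank-of-total pairs listed earlier. A minimal sketch of that arithmetic (assuming the percentile is truncated rather than rounded, which is consistent with all four numbers on this page, though Altmetric's exact rounding rule is not documented here):

```python
def percentile(rank, total):
    """Percent of tracked outputs scoring below this one, truncated to an integer."""
    return int(100 * (1 - rank / total))

print(percentile(3_566_477, 25_182_110))  # 85  (all research outputs)
print(percentile(46_759, 218_348))        # 78  (outputs from PLOS ONE)
print(percentile(33_866, 233_197))        # 85  (outputs of similar age)
print(percentile(1_001, 4_979))           # 79  (similar age, PLOS ONE)
```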