
A Comparison of Supervised Machine Learning Algorithms and Feature Vectors for MS Lesion Segmentation Using Multimodal Structural MRI

Overview of attention for article published in PLOS ONE, April 2014

About this Attention Score

  • In the top 25% of all research outputs scored by Altmetric
  • High Attention Score compared to outputs of the same age (89th percentile)
  • High Attention Score compared to outputs of the same age and source (85th percentile)

Mentioned by

  • 1 news outlet
  • 9 X users

Citations

  • 41 citations (Dimensions)

Readers on

  • 152 readers (Mendeley)
Title
A Comparison of Supervised Machine Learning Algorithms and Feature Vectors for MS Lesion Segmentation Using Multimodal Structural MRI
Published in
PLOS ONE, April 2014
DOI
10.1371/journal.pone.0095753
Pubmed ID
Authors
Elizabeth M. Sweeney, Joshua T. Vogelstein, Jennifer L. Cuzzocreo, Peter A. Calabresi, Daniel S. Reich, Ciprian M. Crainiceanu, Russell T. Shinohara

Abstract

Machine learning is a popular method for mining and analyzing large collections of medical data. We focus on a particular problem from medical research: supervised multiple sclerosis (MS) lesion segmentation in structural magnetic resonance imaging (MRI). We examine the extent to which the choice of machine learning or classification algorithm and of feature extraction function impacts the performance of lesion segmentation methods. As quantitative measures derived from structural MRI are important clinical tools for research into the pathophysiology and natural history of MS, the development of automated lesion segmentation methods is an active research field. Yet little is known about what drives the performance of these methods. We evaluate the performance of automated MS lesion segmentation methods, each of which consists of a supervised classification algorithm composed with a feature extraction function. These feature extraction functions act on the observed T1-weighted (T1-w), T2-weighted (T2-w), and fluid-attenuated inversion recovery (FLAIR) MRI voxel intensities. Each MRI study has a manual lesion segmentation that we use to train and validate the supervised classification algorithms. Our main finding is that the differences in predictive performance are due more to differences in the feature vectors than to the choice of machine learning or classification algorithm. Features that incorporate information from neighboring voxels in the brain were found to increase performance substantially. For lesion segmentation, we conclude that it is better to use simple, interpretable, and fast algorithms, such as logistic regression, linear discriminant analysis, and quadratic discriminant analysis, and to focus effort on developing the features to improve performance.
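
To make the pipeline in the abstract concrete, here is a minimal sketch, not the authors' implementation: per-voxel features are built from the T1-w, T2-w, and FLAIR intensities, neighborhood information is added through a local mean, and a simple classifier such as logistic regression is trained against the manual segmentation. The array names, the cubic smoothing window, and the use of NumPy, SciPy, and scikit-learn are illustrative assumptions rather than details taken from the paper.

import numpy as np
from scipy.ndimage import uniform_filter
from sklearn.linear_model import LogisticRegression

def voxel_features(t1, t2, flair, brain_mask, neighborhood=3):
    """Stack raw and locally averaged intensities into a feature matrix.

    t1, t2, flair : 3-D arrays of co-registered MRI intensities (same shape)
    brain_mask    : boolean 3-D array selecting the voxels to classify
    neighborhood  : edge length of the cubic window for the local mean, a
                    simple way to incorporate information from neighboring voxels
    """
    feats = []
    for img in (t1, t2, flair):
        feats.append(img[brain_mask])                 # raw voxel intensity
        local_mean = uniform_filter(img.astype(float), size=neighborhood)
        feats.append(local_mean[brain_mask])          # neighborhood feature
    return np.column_stack(feats)                     # shape (n_voxels, 6)

# Training and prediction, where manual_seg is the expert lesion mask:
# X = voxel_features(t1, t2, flair, brain_mask)
# y = manual_seg[brain_mask].astype(int)
# clf = LogisticRegression(max_iter=1000).fit(X, y)
# lesion_prob = clf.predict_proba(voxel_features(t1_new, t2_new, flair_new, mask_new))[:, 1]

Linear or quadratic discriminant analysis (scikit-learn's LinearDiscriminantAnalysis and QuadraticDiscriminantAnalysis) could be swapped in for the logistic regression without changing the feature construction, which is where the abstract locates most of the performance differences.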

X Demographics

The data shown below were collected from the profiles of 9 X users who shared this research output.
Mendeley readers

The data shown below were compiled from readership statistics for 152 Mendeley readers of this research output.

Geographical breakdown

Country         Count  As %
Germany             1   <1%
Switzerland         1   <1%
France              1   <1%
Brazil              1   <1%
Spain               1   <1%
United States       1   <1%
Unknown           146   96%

Demographic breakdown

Readers by professional status     Count  As %
Student > Ph. D. Student              44   29%
Researcher                            23   15%
Student > Master                      16   11%
Professor > Associate Professor        9    6%
Other                                  9    6%
Other                                 20   13%
Unknown                               31   20%

Readers by discipline              Count  As %
Engineering                           29   19%
Medicine and Dentistry                22   14%
Computer Science                      21   14%
Neuroscience                          15   10%
Nursing and Health Professions         6    4%
Other                                 24   16%
Unknown                               35   23%

Attention Score in Context

This research output has an Altmetric Attention Score of 15. This is our high-level measure of the quality and quantity of online attention that it has received. This Attention Score, as well as the ranking and number of research outputs shown below, was calculated when the research output was last mentioned on 08 February 2024.
  • All research outputs: #2,403,075 of 25,337,969 outputs
  • Outputs from PLOS ONE: #29,380 of 219,841 outputs
  • Outputs of similar age: #23,362 of 234,570 outputs
  • Outputs of similar age from PLOS ONE: #692 of 4,865 outputs
Altmetric has tracked 25,337,969 research outputs across all sources so far. Compared to these, this one has done particularly well and is in the 90th percentile: it's in the top 10% of all research outputs ever tracked by Altmetric.
So far Altmetric has tracked 219,841 research outputs from this source. They typically receive a lot more attention than average, with a mean Attention Score of 15.7. This one has done well, scoring higher than 86% of its peers.
Older research outputs will score higher simply because they've had more time to accumulate mentions. To account for age, we can compare this Altmetric Attention Score to the 234,570 tracked outputs that were published within six weeks on either side of this one in any source. This one has done well, scoring higher than 89% of its contemporaries.
We're also able to compare this research output to 4,865 others from the same source and published within six weeks on either side of this one. This one has done well, scoring higher than 85% of its contemporaries.