
Preliminary analysis using multi-atlas labeling algorithms for tracing longitudinal change

Overview of attention for article published in Frontiers in Neuroscience, July 2015

About this Attention Score

  • Average Attention Score compared to outputs of the same age

Mentioned by

twitter
2 X users

Citations

dimensions_citation
28 Dimensions

Readers on

mendeley
32 Mendeley
Title
Preliminary analysis using multi-atlas labeling algorithms for tracing longitudinal change
Published in
Frontiers in Neuroscience, July 2015
DOI 10.3389/fnins.2015.00242
Pubmed ID
Authors

Regina E. Y. Kim, Spencer Lourens, Jeffrey D. Long, Jane S. Paulsen, Hans J. Johnson

Abstract

Multicenter longitudinal neuroimaging has great potential to provide efficient and consistent biomarkers for research on neurodegenerative diseases and aging. In rare-disease studies it is of primary importance to have a reliable tool that performs consistently on data from many different collection sites, to increase study power. Multi-atlas labeling is a powerful brain-image segmentation approach that is becoming increasingly popular in image processing. The present study examined the performance of multi-atlas labeling tools for subcortical structure identification using two in vivo image databases: the Traveling Human Phantom (THP) and PREDICT-HD. We compared the accuracy (Dice similarity coefficient, DSC, and intraclass correlation, ICC), multicenter reliability (coefficient of variation, CV), and longitudinal reliability (volume-trajectory smoothness and Akaike information criterion, AIC) of three automated segmentation approaches: two multi-atlas labeling tools, MABMIS and MALF, and a machine-learning-based tool, BRAINSCut. In general, MALF showed the best performance (higher DSC and ICC; lower CV and AIC; smoother trajectories), with a couple of exceptions. First, for the accumbens, where BRAINSCut showed higher reliability, it is premature to draw conclusions about reliability because the validity of all segmentations for this structure remains in doubt (DSC < 0.7, ICC < 0.7). For the caudate, BRAINSCut gave slightly better accuracy while MALF produced significantly smoother longitudinal trajectories. We discuss the advantages and limitations behind these performance variations and conclude that improved segmentation quality can be achieved using multi-atlas labeling methods. While multi-atlas labeling methods are likely to improve overall segmentation quality, caution must be taken when choosing an approach, as our results suggest that the segmentation outcome can vary depending on the research interest.
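To illustrate the accuracy metric named in the abstract, the Dice similarity coefficient (DSC) for two binary segmentation masks can be computed as below. This is a generic sketch of the standard DSC formula, not the authors' pipeline; the function name and the toy masks are hypothetical.

```python
import numpy as np

def dice_coefficient(mask_a, mask_b):
    """Dice similarity coefficient between two binary masks.

    DSC = 2 * |A ∩ B| / (|A| + |B|); ranges from 0 (no overlap) to 1 (identical).
    """
    a = np.asarray(mask_a, dtype=bool)
    b = np.asarray(mask_b, dtype=bool)
    total = a.sum() + b.sum()
    if total == 0:
        return 1.0  # two empty masks: treat as perfect agreement by convention
    return 2.0 * np.logical_and(a, b).sum() / total

# Two toy "segmentations" of a 4x4 image: automated vs. manual label
auto = np.array([[0, 1, 1, 0],
                 [0, 1, 1, 0],
                 [0, 0, 0, 0],
                 [0, 0, 0, 0]])
manual = np.array([[0, 1, 1, 0],
                   [0, 1, 0, 0],
                   [0, 0, 0, 0],
                   [0, 0, 0, 0]])
print(round(dice_coefficient(auto, manual), 3))  # 2*3/(4+3) ≈ 0.857
```

A DSC below roughly 0.7, as the abstract reports for the accumbens, is commonly read as weak spatial agreement between automated and reference segmentations, which is why the authors treat those reliability results with caution.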

X Demographics


The data shown below were collected from the profiles of the 2 X users who shared this research output.
Mendeley readers


The data shown below were compiled from readership statistics for the 32 Mendeley readers of this research output.

Geographical breakdown

Country        Count   As %
United States      2     6%
France             1     3%
Unknown           29    91%

Demographic breakdown

Readers by professional status   Count   As %
Student > Ph.D. Student              8    25%
Student > Master                     5    16%
Researcher                           4    13%
Student > Doctoral Student           4    13%
Student > Bachelor                   2     6%
Other                                3     9%
Unknown                              6    19%

Readers by discipline            Count   As %
Engineering                          7    22%
Medicine and Dentistry               3     9%
Neuroscience                         3     9%
Psychology                           3     9%
Nursing and Health Professions       2     6%
Other                                4    13%
Unknown                             10    31%
Attention Score in Context


This research output has an Altmetric Attention Score of 2, a high-level measure of the quality and quantity of online attention it has received. This Attention Score, as well as the rankings and counts of research outputs shown below, was calculated when the research output was last mentioned, on 14 July 2015.
All research outputs
#17,236,404
of 25,374,647 outputs
Outputs from Frontiers in Neuroscience
#7,937
of 11,542 outputs
Outputs of similar age
#164,339
of 276,422 outputs
Outputs of similar age from Frontiers in Neuroscience
#73
of 103 outputs
Altmetric has tracked 25,374,647 research outputs across all sources so far. This one is in the 31st percentile – i.e., 31% of other outputs scored the same or lower than it.
So far Altmetric has tracked 11,542 research outputs from this source. They typically receive a lot more attention than average, with a mean Attention Score of 10.9. This one is in the 30th percentile – i.e., 30% of its peers scored the same or lower than it.
Older research outputs will score higher simply because they've had more time to accumulate mentions. To account for age we can compare this Altmetric Attention Score to the 276,422 tracked outputs that were published within six weeks on either side of this one in any source. This one is in the 40th percentile – i.e., 40% of its contemporaries scored the same or lower than it.
We're also able to compare this research output to 103 others from the same source and published within six weeks on either side of this one. This one is in the 26th percentile – i.e., 26% of its contemporaries scored the same or lower than it.