
High-Throughput Classification of Radiographs Using Deep Convolutional Neural Networks

Overview of attention for an article published in Journal of Digital Imaging, October 2016

About this Attention Score

  • In the top 25% of all research outputs scored by Altmetric
  • High Attention Score compared to outputs of the same age (87th percentile)
  • High Attention Score compared to outputs of the same age and source (90th percentile)

Mentioned by

  • 21 X users
  • 1 Facebook page
  • 1 Google+ user

Citations

  • 113 Dimensions

Readers on

  • 203 Mendeley
Title
High-Throughput Classification of Radiographs Using Deep Convolutional Neural Networks
Published in
Journal of Digital Imaging, October 2016
DOI 10.1007/s10278-016-9914-9
Pubmed ID
Authors

Alvin Rajkomar, Sneha Lingam, Andrew G. Taylor, Michael Blum, John Mongan

Abstract

The study aimed to determine if computer vision techniques rooted in deep learning can use a small set of radiographs to perform clinically relevant image classification with high fidelity. One thousand eight hundred eighty-five chest radiographs of 909 patients obtained between January 2013 and July 2015 at our institution were retrieved and anonymized. The source images were manually annotated as frontal or lateral and randomly divided into training, validation, and test sets. Training and validation sets were augmented to over 150,000 images using standard image manipulations. We then pre-trained a series of deep convolutional networks based on the open-source GoogLeNet with various transformations of the open-source ImageNet (non-radiology) images. These trained networks were then fine-tuned using the original and augmented radiology images. The model with the highest validation accuracy was applied to our institutional test set and a publicly available set. Accuracy was assessed by using the Youden Index to set a binary cutoff for frontal or lateral classification. This retrospective study was IRB approved prior to initiation. A network pre-trained on 1.2 million greyscale ImageNet images and fine-tuned on augmented radiographs was chosen. The binary classification method correctly classified 100% (95% CI 99.73-100%) of both our test set and the publicly available images. Classification was rapid, at 38 images per second. A deep convolutional neural network created using non-radiological images and an augmented set of radiographs is highly accurate in classifying chest radiograph view type and is a feasible, rapid method for high-throughput annotation.
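The Youden Index step described in the abstract can be sketched as follows. This is an illustrative reimplementation in plain Python, not the authors' code: it assumes the network emits a per-image "frontal" probability score and that label 1 marks a frontal view, then scans candidate thresholds for the one maximizing Youden's J = sensitivity + specificity − 1.

```python
def youden_cutoff(scores, labels):
    """Choose the binary cutoff that maximizes Youden's J statistic.

    scores: per-image model scores (hypothetically, P(frontal) from the CNN)
    labels: ground truth, 1 = frontal, 0 = lateral
    Returns (best_threshold, best_J).
    """
    best_t, best_j = None, -1.0
    for t in sorted(set(scores)):  # each observed score is a candidate cutoff
        tp = sum(1 for s, y in zip(scores, labels) if s >= t and y == 1)
        fn = sum(1 for s, y in zip(scores, labels) if s < t and y == 1)
        tn = sum(1 for s, y in zip(scores, labels) if s < t and y == 0)
        fp = sum(1 for s, y in zip(scores, labels) if s >= t and y == 0)
        sens = tp / (tp + fn) if (tp + fn) else 0.0  # sensitivity (recall)
        spec = tn / (tn + fp) if (tn + fp) else 0.0  # specificity
        j = sens + spec - 1.0  # Youden's J
        if j > best_j:
            best_j, best_t = j, t
    return best_t, best_j
```

For a perfectly separable validation set such as `scores = [0.1, 0.2, 0.8, 0.9]` with `labels = [0, 0, 1, 1]`, the scan selects the cutoff 0.8, where J reaches its maximum of 1.0 (sensitivity and specificity both 100%), mirroring the perfect test-set classification reported in the abstract.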

X Demographics

The data shown below were collected from the profiles of 21 X users who shared this research output.
Mendeley readers

The data shown below were compiled from readership statistics for 203 Mendeley readers of this research output.

Geographical breakdown

Country   Count   As %
Denmark       1    <1%
Unknown     202   100%

Demographic breakdown

Readers by professional status                  Count   As %
Student > Ph.D. Student                            33    16%
Researcher                                         31    15%
Student > Master                                   28    14%
Student > Bachelor                                 15     7%
Other                                              15     7%
Other                                              32    16%
Unknown                                            49    24%

Readers by discipline                           Count   As %
Medicine and Dentistry                             44    22%
Computer Science                                   44    22%
Engineering                                        25    12%
Nursing and Health Professions                      7     3%
Biochemistry, Genetics and Molecular Biology        6     3%
Other                                              22    11%
Unknown                                            55    27%
Attention Score in Context

This research output has an Altmetric Attention Score of 15. This is our high-level measure of the quality and quantity of online attention that it has received. This Attention Score, as well as the ranking and number of research outputs shown below, was calculated when the research output was last mentioned on 24 July 2019.
All research outputs: #2,366,658 of 25,382,440 outputs
Outputs from Journal of Digital Imaging: #55 of 1,143 outputs
Outputs of similar age: #39,668 of 326,654 outputs
Outputs of similar age from Journal of Digital Imaging: #2 of 11 outputs
Altmetric has tracked 25,382,440 research outputs across all sources so far. Compared to these, this one has done particularly well and is in the 90th percentile: it's in the top 10% of all research outputs ever tracked by Altmetric.
So far Altmetric has tracked 1,143 research outputs from this source. They receive a mean Attention Score of 4.7. This one has done particularly well, scoring higher than 95% of its peers.
Older research outputs will score higher simply because they've had more time to accumulate mentions. To account for age we can compare this Altmetric Attention Score to the 326,654 tracked outputs that were published within six weeks on either side of this one in any source. This one has done well, scoring higher than 87% of its contemporaries.
We're also able to compare this research output to 11 others from the same source and published within six weeks on either side of this one. This one has done particularly well, scoring higher than 90% of its contemporaries.