
Transfer Learning with Convolutional Neural Networks for Classification of Abdominal Ultrasound Images

Overview of attention for article published in Journal of Digital Imaging, November 2016

About this Attention Score

  • In the top 25% of all research outputs scored by Altmetric
  • High Attention Score compared to outputs of the same age (84th percentile)
  • High Attention Score compared to outputs of the same age and source (99th percentile)

Mentioned by

  • X (Twitter): 1 X user
  • Patents: 3 patents

Citations

  • Dimensions: 209 citations

Readers on

  • Mendeley: 201 readers

Title
Transfer Learning with Convolutional Neural Networks for Classification of Abdominal Ultrasound Images
Published in
Journal of Digital Imaging, November 2016
DOI 10.1007/s10278-016-9929-2
Pubmed ID
Authors

Phillip M. Cheng, Harshawn S. Malhi

Abstract

The purpose of this study is to evaluate transfer learning with deep convolutional neural networks for the classification of abdominal ultrasound images. Grayscale images from 185 consecutive clinical abdominal ultrasound studies were categorized into 11 categories based on the text annotation specified by the technologist for the image. Cropped images were rescaled to 256 × 256 resolution and randomized, with 4094 images from 136 studies constituting the training set, and 1423 images from 49 studies constituting the test set. The fully connected layers of two convolutional neural networks based on CaffeNet and VGGNet, previously trained on the 2012 Large Scale Visual Recognition Challenge data set, were retrained on the training set. Weights in the convolutional layers of each network were frozen to serve as fixed feature extractors. Accuracy on the test set was evaluated for each network. A radiologist experienced in abdominal ultrasound also independently classified the images in the test set into the same 11 categories. The CaffeNet network classified 77.3% of the test set images accurately (1100/1423 images), with a top-2 accuracy of 90.4% (1287/1423 images). The larger VGGNet network classified 77.9% of the test set accurately (1109/1423 images), with a top-2 accuracy of 89.7% (1276/1423 images). The radiologist classified 71.7% of the test set images correctly (1020/1423 images). The differences in classification accuracies between both neural networks and the radiologist were statistically significant (p < 0.001). The results demonstrate that transfer learning with convolutional neural networks may be used to construct effective classifiers for abdominal ultrasound images.
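
As a concrete illustration of the transfer-learning setup the abstract describes (ImageNet-pretrained convolutional layers frozen as fixed feature extractors, fully connected layers retrained for the 11 ultrasound categories), here is a minimal sketch. The original work used Caffe with CaffeNet and VGGNet; the PyTorch/torchvision VGG16 model, optimizer, and learning rate below are illustrative assumptions, not the authors' implementation.

```python
# Minimal transfer-learning sketch, assuming PyTorch/torchvision in place of
# the Caffe models used in the paper: freeze the ImageNet-pretrained
# convolutional layers and retrain only the fully connected layers for
# 11 abdominal ultrasound categories.
import torch
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 11  # image categories in the study

# Load VGG16 pretrained on the 2012 ImageNet (ILSVRC) data set.
model = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1)

# Freeze the convolutional layers so they act as fixed feature extractors.
for param in model.features.parameters():
    param.requires_grad = False

# Swap the final fully connected layer for an 11-way output; the classifier
# (fully connected) layers remain trainable and are retrained from here.
model.classifier[6] = nn.Linear(model.classifier[6].in_features, NUM_CLASSES)

# Optimize only the trainable (fully connected) parameters.
optimizer = torch.optim.SGD(
    [p for p in model.classifier.parameters() if p.requires_grad],
    lr=1e-3,
    momentum=0.9,
)
criterion = nn.CrossEntropyLoss()
```

Training would then iterate over the 4094-image training set with standard ImageNet-style preprocessing; the hyperparameters above are placeholders rather than those reported in the paper.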

X Demographics

The data shown below were collected from the profile of 1 X user who shared this research output.
Mendeley readers

The data shown below were compiled from readership statistics for 201 Mendeley readers of this research output.

Geographical breakdown

Country Count As %
United States 1 <1%
Unknown 200 100%

Demographic breakdown

Readers by professional status Count As %
Student > Ph.D. Student 33 16%
Student > Master 24 12%
Student > Bachelor 19 9%
Researcher 16 8%
Student > Doctoral Student 11 5%
Other 39 19%
Unknown 59 29%
Readers by discipline Count As %
Engineering 40 20%
Computer Science 40 20%
Medicine and Dentistry 31 15%
Veterinary Science and Veterinary Medicine 3 1%
Chemistry 3 1%
Other 17 8%
Unknown 67 33%
Attention Score in Context

This research output has an Altmetric Attention Score of 10. This is our high-level measure of the quality and quantity of online attention that it has received. This Attention Score, as well as the ranking and number of research outputs shown below, was calculated when the research output was last mentioned on 18 August 2022.
  • All research outputs: #3,651,914 of 25,366,663 outputs
  • Outputs from Journal of Digital Imaging: #104 of 1,123 outputs
  • Outputs of similar age: #67,107 of 430,033 outputs
  • Outputs of similar age from Journal of Digital Imaging: #1 of 12 outputs

Altmetric has tracked 25,366,663 research outputs across all sources so far. Compared to these, this one has done well and is in the 85th percentile: it's in the top 25% of all research outputs ever tracked by Altmetric.
So far Altmetric has tracked 1,123 research outputs from this source. They receive a mean Attention Score of 4.7. This one has done particularly well, scoring higher than 90% of its peers.
Older research outputs will score higher simply because they've had more time to accumulate mentions. To account for age, we can compare this Altmetric Attention Score to the 430,033 tracked outputs that were published within six weeks on either side of this one in any source. This one has done well, scoring higher than 84% of its contemporaries.
We're also able to compare this research output to 12 others from the same source and published within six weeks on either side of this one. This one has done particularly well, scoring higher than 99% of its contemporaries.
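
For clarity, the age-matched comparison above is just a percentile rank: the share of contemporaries (outputs published within six weeks on either side) whose Attention Score falls below this one's. A minimal sketch, using made-up scores rather than Altmetric's actual data:

```python
# Hedged sketch of a percentile-rank comparison; the peer scores are
# hypothetical placeholders, not Altmetric data.
def percentile_rank(score, peer_scores):
    """Percentage of peer scores strictly below the given score."""
    below = sum(1 for s in peer_scores if s < score)
    return 100.0 * below / len(peer_scores)

peers = [0, 1, 1, 2, 3, 4, 5, 7, 8, 12, 15]  # hypothetical contemporaries
print(f"Score 10 beats {percentile_rank(10, peers):.0f}% of peers")  # ~82%
```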