
Automated Quality Assessment of Colour Fundus Images for Diabetic Retinopathy Screening in Telemedicine

Overview of attention for article published in Journal of Digital Imaging, April 2018

About this Attention Score

  • Average Attention Score compared to outputs of the same age
  • Above-average Attention Score compared to outputs of the same age and source (63rd percentile)

Mentioned by

6 X users

Citations

68 Dimensions

Readers on

114 Mendeley
Title
Automated Quality Assessment of Colour Fundus Images for Diabetic Retinopathy Screening in Telemedicine
Published in
Journal of Digital Imaging, April 2018
DOI 10.1007/s10278-018-0084-9
Pubmed ID
Authors

Sajib Kumar Saha, Basura Fernando, Jorge Cuadros, Di Xiao, Yogesan Kanagasingam

Abstract

Fundus images obtained in a telemedicine program are acquired at different sites by photographers with varying levels of experience. This results in a relatively high percentage of images later being marked as unreadable by graders. Unreadable images require a recapture, which is time- and cost-intensive. An automated method that determines image quality during acquisition is an effective alternative. We describe here such an automated method for the assessment of image quality in the context of diabetic retinopathy (DR). The method applies machine learning techniques to assess each image and assign it to an 'accept' or 'reject' category; a 'reject' image requires a recapture. A deep convolutional neural network is trained to grade the images automatically. A large, representative set of 7000 colour fundus images, obtained from EyePACS and made available by the California Healthcare Foundation, was used for the experiment. Three retinal image analysis experts categorised these images into 'accept' and 'reject' classes based on a precise definition of image quality in the context of DR. The network was trained using 3428 images. The method shows an accuracy of 100% in categorising 'accept' and 'reject' images, about 2% higher than the traditional machine learning method. In a clinical trial, the proposed method showed 97% agreement with the human grader. The method can easily be incorporated into the fundus image capturing system at the acquisition centre and can guide the photographer on whether a recapture is necessary.
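
The paper's code is not part of this record; the sketch below is only a rough, hypothetical illustration of the kind of pipeline the abstract describes (a deep convolutional network that labels a colour fundus image 'accept' or 'reject' at acquisition time). The architecture, input size, training settings, and the fundus_quality/accept and fundus_quality/reject directory layout are all assumptions for illustration, not the authors' published network.

```python
# Minimal sketch (not the authors' model): a small binary CNN classifier for
# fundus image quality, implemented with TensorFlow/Keras. All sizes and
# paths are illustrative assumptions.
import tensorflow as tf
from tensorflow.keras import layers, models

IMG_SIZE = (256, 256)  # assumed input resolution

def build_quality_classifier():
    """Build a small CNN that outputs P(image is 'accept')."""
    model = models.Sequential([
        layers.Input(shape=IMG_SIZE + (3,)),
        layers.Rescaling(1.0 / 255),               # normalise pixel values
        layers.Conv2D(16, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(32, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu"),
        layers.GlobalAveragePooling2D(),
        layers.Dense(64, activation="relu"),
        layers.Dense(1, activation="sigmoid"),      # accept (1) vs reject (0)
    ])
    model.compile(optimizer="adam",
                  loss="binary_crossentropy",
                  metrics=["accuracy"])
    return model

if __name__ == "__main__":
    # Hypothetical directory layout: fundus_quality/{accept,reject}/*.jpg
    train_ds = tf.keras.utils.image_dataset_from_directory(
        "fundus_quality", label_mode="binary",
        image_size=IMG_SIZE, batch_size=32)
    model = build_quality_classifier()
    model.fit(train_ds, epochs=10)
    # At acquisition time, a predicted probability below a chosen threshold
    # would prompt the photographer to recapture the image.
```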

X Demographics

The data shown below were collected from the profiles of the 6 X users who shared this research output.
Mendeley readers

The data shown below were compiled from readership statistics for the 114 Mendeley readers of this research output.

Geographical breakdown

Country | Count | As %
Unknown | 114   | 100%

Demographic breakdown

Readers by professional status | Count | As %
Student > Ph. D. Student       | 13    | 11%
Researcher                     | 13    | 11%
Student > Master               | 11    | 10%
Student > Doctoral Student     | 8     | 7%
Student > Bachelor             | 6     | 5%
Other                          | 12    | 11%
Unknown                        | 51    | 45%
Readers by discipline                        | Count | As %
Medicine and Dentistry                       | 19    | 17%
Engineering                                  | 11    | 10%
Computer Science                             | 9     | 8%
Nursing and Health Professions               | 5     | 4%
Biochemistry, Genetics and Molecular Biology | 3     | 3%
Other                                        | 9     | 8%
Unknown                                      | 58    | 51%
Attention Score in Context

This research output has an Altmetric Attention Score of 3. This is our high-level measure of the quality and quantity of online attention that it has received. This Attention Score, as well as the ranking and number of research outputs shown below, was calculated when the research output was last mentioned on 25 May 2018.
Comparison group                                       | Rank        | Out of
All research outputs                                   | #13,357,452 | 23,045,021 outputs
Outputs from Journal of Digital Imaging                | #600        | 1,064 outputs
Outputs of similar age                                 | #163,833    | 326,468 outputs
Outputs of similar age from Journal of Digital Imaging | #11         | 30 outputs
Altmetric has tracked 23,045,021 research outputs across all sources so far. This one is in the 41st percentile – i.e., 41% of other outputs scored the same or lower than it.
So far Altmetric has tracked 1,064 research outputs from this source. They receive a mean Attention Score of 4.6. This one is in the 42nd percentile – i.e., 42% of its peers scored the same or lower than it.
Older research outputs will score higher simply because they've had more time to accumulate mentions. To account for age we can compare this Altmetric Attention Score to the 326,468 tracked outputs that were published within six weeks on either side of this one in any source. This one is in the 48th percentile – i.e., 48% of its contemporaries scored the same or lower than it.
We're also able to compare this research output to 30 others from the same source and published within six weeks on either side of this one. This one has gotten more attention than average, scoring higher than 63% of its contemporaries.
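
As a small aside on how to read these figures, a percentile here is simply the share of comparison outputs that scored the same as or lower than this one; the Python sketch below illustrates the arithmetic with made-up scores (the real comparison sets are the output counts listed above).

```python
# Illustrative only: "Nth percentile" on this page means N% of the comparison
# outputs scored the same or lower. The score list below is made up.
def percentile_rank(score, comparison_scores):
    same_or_lower = sum(1 for s in comparison_scores if s <= score)
    return 100.0 * same_or_lower / len(comparison_scores)

# e.g. an Attention Score of 3 against ten hypothetical contemporaries
contemporaries = [0, 1, 1, 2, 2, 3, 5, 8, 13, 21]
print(f"{percentile_rank(3, contemporaries):.0f}th percentile")  # -> 60th
```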