
Deep Learning for Image-Based Cassava Disease Detection

Overview of attention for an article published in Frontiers in Plant Science, October 2017

About this Attention Score

  • In the top 5% of all research outputs scored by Altmetric
  • High Attention Score compared to outputs of the same age (97th percentile)
  • High Attention Score compared to outputs of the same age and source (99th percentile)

Mentioned by

7 news outlets
2 blogs
42 X users
1 patent
5 Facebook pages
1 YouTube creator

Citations

470 Dimensions

Readers on

648 Mendeley
Title
Deep Learning for Image-Based Cassava Disease Detection
Published in
Frontiers in Plant Science, October 2017
DOI 10.3389/fpls.2017.01852
Pubmed ID
Authors

Amanda Ramcharan, Kelsee Baranowski, Peter McCloskey, Babuali Ahmed, James Legg, David P. Hughes

Abstract

Cassava is the third largest source of carbohydrates for human food in the world but is vulnerable to virus diseases, which threaten to destabilize food security in sub-Saharan Africa. Novel methods of cassava disease detection are needed to support improved control which will prevent this crisis. Image recognition offers both a cost effective and scalable technology for disease detection. New deep learning models offer an avenue for this technology to be easily deployed on mobile devices. Using a dataset of cassava disease images taken in the field in Tanzania, we applied transfer learning to train a deep convolutional neural network to identify three diseases and two types of pest damage (or lack thereof). The best trained model accuracies were 98% for brown leaf spot (BLS), 96% for red mite damage (RMD), 95% for green mite damage (GMD), 98% for cassava brown streak disease (CBSD), and 96% for cassava mosaic disease (CMD). The best model achieved an overall accuracy of 93% for data not used in the training process. Our results show that the transfer learning approach for image recognition of field images offers a fast, affordable, and easily deployable strategy for digital plant disease detection.
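
The abstract describes a transfer-learning approach: a deep convolutional network pretrained on a large generic image dataset is reused, and a new classification head is trained on the cassava field images. As a rough illustration of that kind of setup (not the authors' exact pipeline), here is a minimal sketch assuming TensorFlow/Keras, an ImageNet-pretrained Inception v3 backbone, and an illustrative data/<class_name>/ image folder with six classes (three diseases, two types of pest damage, healthy); the file paths, class names, and hyperparameters are all assumptions.

```python
# Minimal transfer-learning sketch (illustrative; not the paper's exact pipeline).
# Assumes TensorFlow/Keras and field images organized as data/<class_name>/*.jpg
# with six classes, e.g. cmd, cbsd, bls, gmd, rmd, healthy (names are assumed).
import tensorflow as tf

IMG_SIZE = (299, 299)   # Inception v3's expected input size
NUM_CLASSES = 6         # 3 diseases + 2 types of pest damage + healthy

# Labelled field images with an 80/20 train/validation split.
train_ds = tf.keras.utils.image_dataset_from_directory(
    "data", validation_split=0.2, subset="training", seed=42,
    image_size=IMG_SIZE, batch_size=32)
val_ds = tf.keras.utils.image_dataset_from_directory(
    "data", validation_split=0.2, subset="validation", seed=42,
    image_size=IMG_SIZE, batch_size=32)

# ImageNet-pretrained backbone; the convolutional base is frozen so only the
# new classification head is learned from the cassava images.
base = tf.keras.applications.InceptionV3(
    include_top=False, weights="imagenet", input_shape=IMG_SIZE + (3,))
base.trainable = False

inputs = tf.keras.Input(shape=IMG_SIZE + (3,))
x = tf.keras.applications.inception_v3.preprocess_input(inputs)
x = base(x, training=False)
x = tf.keras.layers.GlobalAveragePooling2D()(x)
x = tf.keras.layers.Dropout(0.2)(x)
outputs = tf.keras.layers.Dense(NUM_CLASSES, activation="softmax")(x)
model = tf.keras.Model(inputs, outputs)

model.compile(optimizer=tf.keras.optimizers.Adam(1e-3),
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(train_ds, validation_data=val_ds, epochs=10)
```

Unfreezing the top layers of the backbone and continuing training at a lower learning rate is a common follow-up step when more labelled images are available.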

X Demographics

The data shown below were collected from the profiles of 42 X users who shared this research output.
Mendeley readers

The data shown below were compiled from readership statistics for 648 Mendeley readers of this research output.

Geographical breakdown

Country   Count   As %
Unknown   648     100%

Demographic breakdown

Readers by professional status   Count   As %
Student > Master                    90    14%
Student > Ph.D. Student             67    10%
Student > Bachelor                  57     9%
Researcher                          56     9%
Lecturer                            30     5%
Other                               91    14%
Unknown                            257    40%

Readers by discipline                          Count   As %
Computer Science                                 146    23%
Engineering                                       96    15%
Agricultural and Biological Sciences              54     8%
Unspecified                                       17     3%
Biochemistry, Genetics and Molecular Biology      11     2%
Other                                             52     8%
Unknown                                          272    42%
Attention Score in Context

This research output has an Altmetric Attention Score of 100. This is our high-level measure of the quality and quantity of online attention that it has received. This Attention Score, as well as the ranking and number of research outputs shown below, was calculated when the research output was last mentioned on 18 August 2020.
All research outputs: #410,721 of 24,998,746 outputs
Outputs from Frontiers in Plant Science: #69 of 23,979 outputs
Outputs of similar age: #8,755 of 334,463 outputs
Outputs of similar age from Frontiers in Plant Science: #2 of 489 outputs
Altmetric has tracked 24,998,746 research outputs across all sources so far. Compared to these, this one has done particularly well and is in the 98th percentile: it's in the top 5% of all research outputs ever tracked by Altmetric.
So far Altmetric has tracked 23,979 research outputs from this source. They receive a mean Attention Score of 3.9. This one has done particularly well, scoring higher than 99% of its peers.
Older research outputs will score higher simply because they've had more time to accumulate mentions. To account for age we can compare this Altmetric Attention Score to the 334,463 tracked outputs that were published within six weeks on either side of this one in any source. This one has done particularly well, scoring higher than 97% of its contemporaries.
We're also able to compare this research output to 489 others from the same source and published within six weeks on either side of this one. This one has done particularly well, scoring higher than 99% of its contemporaries.
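
The cohort comparison described above (ranking one output's Attention Score against outputs published within six weeks on either side of it) amounts to a percentile-rank calculation. A hypothetical sketch, with made-up cohort scores and an illustrative helper name of our own choosing:

```python
# Hypothetical sketch of the cohort percentile comparison described above.
# All cohort values below are made up for illustration.

def percentile_rank(score, cohort_scores):
    """Percentage of cohort outputs whose score is strictly lower."""
    lower = sum(1 for s in cohort_scores if s < score)
    return 100.0 * lower / len(cohort_scores)

# Example: a score of 100 against a small, fabricated cohort of contemporaries.
cohort = [0, 1, 1, 2, 3, 5, 8, 13, 27, 64]
print(f"Higher than {percentile_rank(100, cohort):.0f}% of the cohort")
```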