ScabyNet: Image Processing and Deep Learning Application for Potato Tuber Evaluation

In an article published in the journal Nature, researchers introduced ScabyNet, an image-processing approach using color-morphology analysis and deep learning to grade scab-infected potato tubers accurately. ScabyNet, a standalone application, consisted of two modules: one estimated tuber quality traits, and the other detected and quantified common scab (CS) severity levels.

Study: ScabyNet: Image Processing and Deep Learning Application for Potato Tuber Evaluation. Image credit: Dmitri Malyshev/Shutterstock

Background

Potatoes are a crucial global commodity, serving as a significant energy source for human consumption and various industrial applications. The demand for high-quality tubers that meet standards in appearance, size, and health is essential but challenging, particularly amidst climate change. Factors like environmental conditions, cultivation methods, and handling operations contribute to tuber disorders and mechanical damage. Among these, the CS bacterial disease poses a substantial threat, causing blemishes and economic losses. Current evaluation methods, relying on visual scorings and manual measurements, are imprecise, time-consuming, and subjective.

Previous attempts in digital image processing have shown promise in assessing tuber traits but face challenges in user-friendliness, automation, and adaptation to low-cost, high-throughput phenotyping. Some methods using hyperspectral imaging or red-green-blue (RGB) color space analysis have demonstrated accuracy but lack practicality or affordability for widespread implementation.

This study addressed three key objectives: evaluating potato tuber morphology traits for market quality, detecting and quantifying CS severity using convolutional neural networks (CNNs), and developing an automated, user-friendly application. By combining these objectives, the research aimed to overcome the limitations of previous methodologies, providing a comprehensive and accessible solution for assessing both tuber quality and disease severity. The utilization of CNNs in automated CS detection represented a notable advancement, offering a more accurate and robust approach to support potato breeding efforts for resistant genotypes while meeting market standards.

Materials and methods

The researchers utilized potato tubers from field experiments and greenhouse inoculation to develop an integrated approach, ScabyNet, to assess tuber morphology traits and CS severity. The dataset comprised 7,154 images of yellow and red tubers categorized into five CS severity classes. The morphology module, using Python and OpenCV, estimated tuber characteristics like length, width, and color. The deep-learning module, employing CNN architectures (VGG16, VGG19, ResNet50V2, ResNet101V2, InceptionV3, Xception), detected and quantified CS severity levels. Training strategies were compared, including transfer learning and fine-tuning. The graphical user interface (GUI)-based ScabyNet provided a user-friendly platform for batch processing.

Image acquisition involved standardized conditions, with a Canon camera capturing images under uniform daylight illumination. The database, containing diverse CS severity levels, facilitated training and validation. The morphology module's workflow included resizing, color segmentation, morphological operations, connected component identification, and color quantization. It estimated tuber morphology traits and skin color, with K-means clustering for color identification.

The deep-learning module processed individual tuber tiles utilizing CNN architectures. A benchmark compared six CNN models, and the researchers optimized training parameters for minimizing false positives and maximizing class separability. Visual inspections and manual measurements validated results, transforming expert scores into five CS severity classes. Statistical analyses assessed ScabyNet's performance, comparing results with ground truth data and expert scores.
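The conversion of expert scores into five CS severity classes can be illustrated as a simple binning of the scab-covered surface percentage. The class boundaries below are assumptions for illustration, not the study's published thresholds:

```python
# Map an expert-scored scab surface percentage (0-100) onto five
# severity classes. Boundary values are illustrative assumptions.
def severity_class(scab_pct):
    """Return a severity class 1-5 for a scab coverage percentage."""
    bounds = [(0.0, 1), (5.0, 2), (15.0, 3), (30.0, 4)]  # (upper %, class)
    for upper, cls in bounds:
        if scab_pct <= upper:
            return cls
    return 5  # most severe class for anything above the last bound
```

With this mapping, a clean tuber falls in class one, while heavily blemished tubers land in class five; the CNN then learns to predict these discrete classes from tuber tiles.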

The ScabyNet approach addressed limitations in current tuber assessment methods, offering a comprehensive solution for both morphology and CS severity evaluation. The GUI, coupled with advanced image processing and deep learning, showcased a user-friendly tool for efficient, automated, and accurate phenotyping, demonstrating potential applications in potato breeding and quality assessment.

Results

ScabyNet, a user-friendly application, comprised two modules for analyzing potato tuber images. In the first module, assessing morphological features, ScabyNet demonstrated medium-to-high correlations with ground truth and ImageJ in tuber size estimation. The results from a dataset of 4,735 tubers indicated robust and reliable performance in tuber size analysis. The frequency distribution of measured traits, including area, length, width, circularity, and length-to-width ratio, exhibited almost symmetrical Gaussian patterns.

ScabyNet's efficiency was also evaluated in terms of processing time. Image acquisition, involving setup and capture, took around 5 seconds, while the analysis of an individual image with up to 12 tubers took between 1 and 3.5 seconds. For batch processing of 100 images, the total time ranged between 2.5 and 7 minutes.

The second module focused on scab detection using deep learning. ScabyNet employed various CNN architectures. The fine-tuning strategy outperformed transfer learning, particularly in more complex architectures like ResNet50V2, ResNet101V2, InceptionV3, and Xception. VGG16 and VGG19 demonstrated better performance with fine-tuning but exhibited instability. The Xception model, trained optimally with fine-tuning, achieved over 99% accuracy on the validation set and demonstrated stable performance.
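The two training strategies compared in the study can be contrasted in a minimal Keras sketch: transfer learning freezes the pretrained backbone, while fine-tuning leaves it trainable. Image size, class count, and hyperparameters here are illustrative, and `weights=None` keeps the sketch offline (the study would start from pretrained weights, e.g. `weights="imagenet"`):

```python
# Minimal sketch contrasting transfer learning (frozen backbone) with
# fine-tuning (trainable backbone) on an Xception base, as in the study.
import tensorflow as tf
from tensorflow.keras import layers, models

def build_scab_classifier(fine_tune=True, num_classes=5,
                          input_shape=(224, 224, 3)):
    # weights=None avoids a download in this sketch; use "imagenet"
    # to reproduce a real transfer-learning setup.
    backbone = tf.keras.applications.Xception(
        include_top=False, weights=None, input_shape=input_shape)
    backbone.trainable = fine_tune  # False = transfer learning
    model = models.Sequential([
        backbone,
        layers.GlobalAveragePooling2D(),
        layers.Dense(num_classes, activation="softmax"),
    ])
    model.compile(optimizer=tf.keras.optimizers.Adam(1e-4),
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model
```

Fine-tuning updates all backbone weights on the scab dataset, which is consistent with the study's finding that it outperformed transfer learning on the deeper architectures.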

For the test set of 2,146 tubers, the Xception model at 15 epochs achieved accuracy above 99% for all classes. Even at 10 epochs, the model demonstrated over 90% precision for all classes, except for class four, indicating confusion between classes three and five. The Xception model, when optimally trained, showed consistent and accurate discrimination of healthy and scab-infected tubers across severity classes. Overall, ScabyNet presented a reliable and efficient tool for comprehensive potato tuber analysis, combining morphological trait assessment and scab severity detection.
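The per-class precision figures quoted above come from a confusion matrix over the five severity classes; a short sketch of that metric (with made-up matrix values) clarifies how confusion between neighboring classes lowers precision:

```python
# Per-class precision from a confusion matrix (rows = true class,
# columns = predicted class): diagonal over column sums.
import numpy as np

def per_class_precision(cm):
    """Return the precision of each class from a square confusion matrix."""
    cm = np.asarray(cm, dtype=float)
    col_sums = cm.sum(axis=0)
    # Guard against empty columns (a class never predicted).
    return np.divide(np.diag(cm), col_sums,
                     out=np.zeros_like(col_sums), where=col_sums > 0)
```

For example, if class four absorbs misclassified class-three and class-five tubers, its column sum grows and its precision drops, which matches the confusion the study observed at 10 epochs.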

Discussion

The ScabyNet application addressed challenges in assessing potato tuber quality by introducing two modules for image-based evaluation. The first module, focusing on tuber morphology, used a cost-effective approach with a simple RGB camera and a static frame, demonstrating robustness and accuracy in measuring various traits. ScabyNet's ease of implementation distinguished it from other methods, such as TubAR and ImageJ, which required more complex setups.

The second module employed the Xception architecture for scab severity detection, showcasing high accuracy and stability. However, potential model improvements involved classifying severity profiles instead of area ranges. While ScabyNet presented a novel solution, future enhancements could involve deeper training or retraining with broader plant disease databases for increased sensitivity and generalization to diverse plant species and diseases.

Conclusion

In conclusion, researchers introduced ScabyNet, a novel application combining image processing and deep learning for efficient and accurate analysis of potato tuber morphology and detection of CS disease severity. Validated against manual measurements and existing methods, ScabyNet achieved over 99% accuracy in CS severity classification.

Future plans involve extending its applicability to diverse tuber colors and varieties and employing semantic segmentation for enhanced precision. Additionally, integrating spectrometric data, such as hyperspectral imaging, to detect early disease symptoms could establish ScabyNet as a significant advancement in agricultural research for objective tuber analysis and disease estimation.


Written by

Soham Nandi

Soham Nandi is a technical writer based in Memari, India. His academic background is in Computer Science Engineering, specializing in Artificial Intelligence and Machine learning. He has extensive experience in Data Analytics, Machine Learning, and Python. He has worked on group projects that required the implementation of Computer Vision, Image Classification, and App Development.

Citations

Please use the following format to cite this article in your essay, paper or report:

  • APA

    Nandi, Soham. (2024, January 19). ScabyNet: Image Processing and Deep Learning Application for Potato Tuber Evaluation. AZoAi. Retrieved on July 06, 2024 from https://www.azoai.com/news/20240119/ScabyNet-Image-Processing-and-Deep-Learning-Application-for-Potato-Tuber-Evaluation.aspx.
