Advancements in Image-Based Crop Yield Calculation

In an article published in the journal Remote Sensing, researchers reviewed advances in crop yield calculation methods that utilize remote sensing and visible light image processing technologies. They discussed technical features, applicable scenarios, data acquisition methods, algorithm selection, and optimization. Common challenges were addressed, and solutions were proposed to improve accuracy and promote widespread adoption. The authors aimed to enhance breeding efficiency and optimize agricultural practices through precise yield calculation.

Study: Advancements in Image-Based Crop Yield Calculation. Image credit: SKT Studio/Shutterstock

Background

Crop yield prediction is vital for agricultural planning and optimization, yet it is complicated by the many factors that influence crop growth, such as variety, soil quality, irrigation, and pests. Traditional methods like field surveys and meteorological models have limitations in accuracy, efficiency, and applicability. Owing to advances in sensing technology and artificial intelligence, remote sensing and visible light image-based techniques have emerged as viable alternatives, offering high accuracy, affordability, and non-destructive analysis.

Previous reviews primarily focused on model algorithms, leaving a gap in understanding the technical nuances, challenges, and recent developments in image-based crop yield calculation. The researchers aimed to bridge this gap by analyzing progress in image-based yield calculation technologies since 2020. The review extensively examined over 1,200 scientific papers, selecting 142 closely related ones for in-depth analysis. The researchers began by analyzing common research objects and yield calculation methods, followed by a detailed exploration of different technical approaches. They then examined the various algorithms employed and the common challenges faced in current research.

By synthesizing this information, the research provided a comprehensive overview of the advancements, technical nuances, and existing challenges in image-based crop yield calculation. Furthermore, the authors identified key crop yield calculation indicators for different crop types, shedding light on the specific parameters and technical solutions necessary for accurate prediction. By addressing these aspects, the paper not only filled the gap in understanding recent developments but also provided valuable insights for future research directions in image-based crop yield prediction.

Yield calculation from remote sensing images

Remote sensing technology, available through drones and satellites, proved invaluable in precision agriculture, providing data essential for crop growth monitoring and yield prediction. Drones offered high spatiotemporal resolution, making them ideal for farm-scale monitoring, while satellites ensured continuity and stability and were suitable for large-scale monitoring. Multispectral sensors captured visible and near-infrared bands, facilitating vegetation index calculation, while hyperspectral imaging offered richer spectral information but presented challenges such as data redundancy.
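To make the index extraction step concrete, the minimal sketch below computes the widely used normalized difference vegetation index (NDVI) from red and near-infrared reflectance arrays; the synthetic band values are illustrative assumptions rather than data from the review.

```python
import numpy as np

def ndvi(nir: np.ndarray, red: np.ndarray) -> np.ndarray:
    """Normalized Difference Vegetation Index: (NIR - Red) / (NIR + Red)."""
    nir = nir.astype(np.float64)
    red = red.astype(np.float64)
    denom = nir + red
    out = np.zeros_like(denom)
    # Only divide where the denominator is nonzero; other pixels stay at 0.
    np.divide(nir - red, denom, out=out, where=denom != 0)
    return out

# Example with synthetic 2 x 2 band rasters (reflectance values in [0, 1]).
nir_band = np.array([[0.60, 0.55], [0.48, 0.52]])
red_band = np.array([[0.10, 0.12], [0.20, 0.15]])
print(ndvi(nir_band, red_band))  # Higher values indicate denser, healthier vegetation.
```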

Key to yield monitoring was identifying spectral bands sensitive to canopy reflectance, enabling the extraction of vegetation indices correlated with yield. In low-altitude remote sensing, drones equipped with multi-channel sensors proved powerful tools for yield calculation. For major food crops like corn, rice, and wheat, drone-based hyperspectral and multispectral imaging, combined with machine learning algorithms, showed promising results. Multimodal prediction models integrating various data sources yielded improved accuracy. Yield calculation for economic crops also relied on drone-based remote sensing, employing spectral indices and machine learning algorithms. For instance, soybean yield prediction used multimodal data, demonstrating the benefit of combining different data sources.
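As a rough illustration of the multimodal modeling described above, the sketch below trains a random forest regressor on synthetic plot-level features (vegetation indices plus weather variables); all feature names, value ranges, and yields are invented for demonstration and do not reproduce any study in the review.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)

# Hypothetical plot-level features: mean NDVI, mean NDRE, canopy cover,
# cumulative growing-degree days, and total rainfall (all synthetic).
n_plots = 200
X = rng.uniform([0.2, 0.1, 0.3, 800, 150], [0.9, 0.6, 1.0, 1600, 500], size=(n_plots, 5))
# Synthetic yield (t/ha) loosely driven by NDVI and rainfall, plus noise.
y = 2.0 + 6.0 * X[:, 0] + 0.004 * X[:, 4] + rng.normal(0, 0.4, n_plots)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)
model = RandomForestRegressor(n_estimators=300, random_state=0)
model.fit(X_train, y_train)
print("R^2 on held-out plots:", round(r2_score(y_test, model.predict(X_test)), 3))
```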

Similar approaches were applied to other economic crops and fruits, employing machine learning for accurate yield estimation. In high-altitude satellite remote sensing, satellite images were used to extract vegetation indices and climate data for yield prediction. Wheat and rice yield prediction based on satellite images achieved satisfactory results, while studies on other crops such as coffee trees and cotton used machine learning to enhance prediction accuracy. Despite these advances, challenges such as limited spatial resolution and cloud cover persisted. Microwave remote sensing was proposed to address these issues, offering three-dimensional information beyond what visible light and infrared sensing provide.

Yield calculation from visible light images

Visible light images were crucial for crop growth monitoring and yield prediction, offering rich color, structure, and morphological information. Color features, such as crop coverage and leaf-area index, provided insights into crop health, while texture analysis balanced overall and detailed aspects of images. Traditional image processing involved information extraction and segmentation, successfully predicting crop yield by combining color, texture, and morphological features with machine learning algorithms. Deep learning algorithms, particularly convolutional neural networks (CNNs), revolutionized crop yield calculation by enabling efficient object detection and segmentation.
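The following sketch illustrates the kind of hand-crafted color and texture features a traditional pipeline might extract before applying machine learning; the excess-green threshold and the use of scikit-image's gray-level co-occurrence functions are assumptions for illustration, not the specific methods evaluated in the review.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def color_texture_features(rgb: np.ndarray) -> dict:
    """Extract simple color and texture descriptors from an RGB canopy image."""
    rgb = rgb.astype(np.float64) / 255.0
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    # Excess-green index highlights vegetation; thresholding gives canopy coverage.
    exg = 2 * g - r - b
    coverage = float((exg > 0.1).mean())
    # Gray-level co-occurrence matrix on the green channel for texture statistics.
    gray = (g * 255).astype(np.uint8)
    glcm = graycomatrix(gray, distances=[1], angles=[0], levels=256, symmetric=True, normed=True)
    return {
        "canopy_coverage": coverage,
        "glcm_contrast": float(graycoprops(glcm, "contrast")[0, 0]),
        "glcm_homogeneity": float(graycoprops(glcm, "homogeneity")[0, 0]),
    }

# Example with a random image standing in for a field photo.
features = color_texture_features(np.random.randint(0, 256, (128, 128, 3), dtype=np.uint8))
print(features)
```

Features such as these can then be fed to a regressor of the kind shown earlier to relate canopy appearance to yield.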

In food crops like corn, wheat, and rice, deep learning models focused on detecting and counting grain tassels or ears. For instance, You Only Look Once (YOLO) v5s achieved an average accuracy of 73.1% in corn yield prediction, while Faster region-based CNN (R-CNN) networks reached recognition accuracies of up to 94.99% for corn ears. In economic crops like kiwifruit, mango, grape, and apple, deep learning techniques enabled accurate fruit detection and counting.

For example, a mango target detection method based on YOLOv2 achieved an accuracy of 96.1%, while Mask R-CNN and YOLOv3 were employed for grape instance segmentation, reaching an F1 score of 0.91 and accuracies over 99%, respectively. Additionally, deep learning was applied to weed detection, biomass calculation, chili biomass estimation, pod detection, and leaf counting, leveraging RGB images and advanced network architectures to achieve accurate yield prediction. However, deep learning methods required significant computational resources and could face challenges such as reduced resolution and occlusion, necessitating optimization strategies such as background removal and video-stream capture to enhance their effectiveness.
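As an illustrative sketch of detection-based counting, the snippet below uses the open-source Ultralytics YOLO package as a stand-in for the YOLO variants discussed above; the pretrained weights and image path are placeholders, and a model fine-tuned on ears or fruits would be required in practice.

```python
# Minimal counting sketch; weights and image path are placeholders, not from the review.
from ultralytics import YOLO

model = YOLO("yolov8n.pt")                      # generic pretrained weights; swap in a crop-specific model
results = model("field_image.jpg", conf=0.25)   # run detection on a canopy photo

detections = results[0].boxes
print(f"Detected objects: {len(detections)}")   # the per-image count feeds the yield estimate
```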

Discussion

Crop yield calculation methods were categorized into remote sensing and visible light image-based approaches. Remote sensing offered comprehensive data but suffered from issues such as incomplete segmentation. Visible light image processing involved preprocessing for noise reduction and geometric adjustment. Feature selection aimed to identify relevant variables, often using methods like principal component analysis. Machine learning methods such as artificial neural networks (ANN) and support vector machines (SVM), along with deep learning methods such as CNNs, were commonly employed for crop yield prediction.
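A minimal sketch of this feature-selection-plus-learner workflow, assuming synthetic spectral features, might chain standardization, principal component analysis, and a support vector regressor; the data and hyperparameters below are illustrative only.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

rng = np.random.default_rng(1)

# Synthetic stand-in for per-plot image features (e.g., many band ratios and indices).
X = rng.normal(size=(150, 40))
y = X[:, :3].sum(axis=1) + rng.normal(0, 0.3, 150)  # yield driven by a few latent factors

# Standardize, reduce to a handful of principal components, then fit an SVM regressor.
pipeline = make_pipeline(StandardScaler(), PCA(n_components=5), SVR(kernel="rbf", C=10.0))
scores = cross_val_score(pipeline, X, y, cv=5, scoring="r2")
print("Cross-validated R^2:", scores.round(3))
```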

Machine learning handled nonlinear relationships but lacked interpretability, while deep learning offered higher accuracy but required extensive data and computing power. Deep learning models, particularly CNN, excelled in feature extraction from images, while recurrent neural networks (RNN), especially long short-term memory (LSTM), were effective for temporal data analysis. However, deep learning models were complex and required careful parameter tuning and data augmentation to avoid overfitting.
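To illustrate how an LSTM can map a growing-season time series to a yield value, the sketch below defines a small PyTorch model; the number of features, time steps, and hidden units are arbitrary assumptions, not settings from any reviewed study.

```python
import torch
import torch.nn as nn

class YieldLSTM(nn.Module):
    """Map a per-plot time series of vegetation indices to a single yield value."""
    def __init__(self, n_features: int = 4, hidden: int = 32):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, time_steps, n_features), e.g. weekly NDVI/weather observations.
        _, (h_n, _) = self.lstm(x)
        return self.head(h_n[-1]).squeeze(-1)

model = YieldLSTM()
dummy_series = torch.randn(8, 12, 4)    # 8 plots, 12 time steps, 4 features (synthetic)
print(model(dummy_series).shape)        # torch.Size([8]) -> one yield prediction per plot
```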

Conclusion

Advancements in artificial intelligence and sensor technology propelled image-analysis techniques in agricultural yield prediction. Despite challenges like boundary accuracy and small-object detection, deep learning algorithms showed promise when paired with optimization methods and data augmentation. Integrating multimodal data and transfer learning compensated for limited sample sizes. Combining drone and satellite platforms enhanced accuracy, while improving model interpretability remained challenging. Addressing power requirements with lightweight algorithms was crucial for efficient field-scale monitoring and yield estimation.


