The Next Frontier in Defect Detection with Enhanced YOLOv4

In a paper published in the journal Scientific Reports, researchers proposed an advanced defect detection system based on an improved you only look once version 4 (Yolo_v4) model. They addressed challenges in defect classification and localization by integrating several enhancements: the density-based spatial clustering of applications with noise (DBSCAN) algorithm for anchor clustering, an efficient channel attention DenseNet-bottleneck-convolutional (ECA-DenseNet-BC-121) feature extraction network to improve small-target detection, and a dual channel feature enhancement (DCFE) module for robustness.

Study: The Next Frontier in Defect Detection with Enhanced YOLOv4. Image credit: N_Sakarin/Shutterstock

Their system achieved a remarkable mean average precision (mAP) of 98.97% on fabric surface defect datasets, outperforming models such as the single-shot multibox detector (SSD) and faster region-based convolutional neural network (Faster_RCNN) while maintaining a detection speed of 39.4 frames per second (fps), suitable for real-time monitoring in industrial settings.

Related Work

Past work in textile inspection relied on manual methods, leading to laborious processes and errors. Traditional approaches like filter-based and feature-based methods struggled to separate defects from texture backgrounds. Deep learning, especially Faster R-CNN, brought advancements but faced complexity issues.

One-stage detectors like Yolo and SSD improved speed, but their accuracy needed enhancement. Yolo_v4 emerged as an optimized solution, enhancing accuracy through various optimizations. However, fabric defect detection algorithms still require further accuracy and speed improvements.

Enhanced Yolo_v4 Methodology

The Yolo_v4 network model presents a novel approach to object detection. It divides input images into a grid of cells and predicts bounding boxes for potential objects within each cell. Through a series of convolutional operations, the algorithm identifies object positions and calculates a confidence level for each bounding box. This approach is efficient for detecting objects of varying sizes, utilizing feature maps at different scales to capture details from large to small objects.
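The grid-assignment idea above can be sketched in a few lines. This is a simplified illustration of how a box center is mapped to the responsible grid cell, not the paper's implementation; the 416-pixel input and 13x13 grid are assumed example values typical of Yolo-family detectors.

```python
def grid_cell_for_box(cx, cy, img_w, img_h, grid_size):
    """Map a box center (in pixels) to the grid cell (col, row) that is
    responsible for predicting that object's bounding box."""
    col = min(int(cx / img_w * grid_size), grid_size - 1)
    row = min(int(cy / img_h * grid_size), grid_size - 1)
    return col, row

# A 416x416 input divided into a 13x13 grid: a defect centered at
# pixel (208, 104) falls in cell (6, 3), which predicts its box.
print(grid_cell_for_box(208, 104, 416, 416, 13))  # (6, 3)
```

In the full network this assignment happens implicitly through the spatial layout of the output feature map, with one prediction vector per cell per anchor.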

The architecture of the Yolo_v4 network consists of several key components, including the cross-stage partial darknet 53 (CSPDarknet53) feature extraction network, spatial pyramid pooling (SPP) module, path aggregation network (PANet) feature fusion module, and Yolo head classifier. These components work synergistically to process input images, extract features, and predict object locations and categories. Integrating these modules enables the network to achieve high accuracy in object detection tasks.

Researchers implemented enhancements in both the backbone feature extraction network and the PANet feature fusion module to augment the performance of the Yolo_v4 network. By replacing the original CSPDarknet53 network with a DenseNet-BC-121 structure and introducing attention mechanisms, the network gains better feature screening ability and improved information transmission. The researchers also enhanced the PANet module with the DCFE module, which addresses feature extraction and gradient propagation limitations, leading to more robust detection results.
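One concrete, computable piece of the efficient channel attention (ECA) mechanism mentioned above is its adaptive kernel-size rule: the 1D convolution that models cross-channel interaction uses a kernel whose size grows with the channel count and is forced to be odd. The sketch below follows the formula from the original ECA paper (gamma = 2, b = 1 are its default values); it is illustrative, not the paper's exact code.

```python
import math

def eca_kernel_size(channels: int, gamma: int = 2, b: int = 1) -> int:
    """Adaptive 1D-conv kernel size used by ECA attention: wider layers
    get a larger local cross-channel interaction range, always odd."""
    t = int(abs((math.log2(channels) + b) / gamma))
    return t if t % 2 == 1 else t + 1

for c in (64, 128, 256, 512):
    print(c, eca_kernel_size(c))
```

The attention module itself then applies global average pooling per channel, a 1D convolution with this kernel size, and a sigmoid to produce per-channel weights.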

An essential aspect of the Yolo_v4 network's success is the optimization of anchor box clustering using a combination of the K-Means and DBSCAN algorithms. This approach ensures effective identification of anchor box centers while mitigating the influence of outliers in the dataset. By determining the optimal number of anchor boxes based on average intersection over union (IoU) analysis, the network achieves better detection accuracy without excessively increasing computational complexity.
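The K-Means half of this scheme can be sketched compactly. The snippet below clusters labelled box sizes with the standard Yolo-style 1 - IoU distance and reports the average best-IoU used to choose the anchor count. It is a simplification: the paper pairs this with DBSCAN to pick initial centers and suppress outliers, whereas this sketch uses plain random initialization.

```python
import random

def iou_wh(a, b):
    """IoU of two boxes aligned at a common corner (width/height only)."""
    inter = min(a[0], b[0]) * min(a[1], b[1])
    union = a[0] * a[1] + b[0] * b[1] - inter
    return inter / union

def kmeans_anchors(boxes, k, iters=50, seed=0):
    """K-Means over (w, h) pairs using 1 - IoU as the distance."""
    rng = random.Random(seed)
    centers = rng.sample(boxes, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for box in boxes:
            idx = max(range(k), key=lambda i: iou_wh(box, centers[i]))
            clusters[idx].append(box)
        centers = [
            (sum(b[0] for b in c) / len(c), sum(b[1] for b in c) / len(c))
            if c else centers[i]
            for i, c in enumerate(clusters)
        ]
    return centers

def average_iou(boxes, centers):
    """Mean best-IoU between each labelled box and its closest anchor."""
    return sum(max(iou_wh(b, c) for c in centers) for b in boxes) / len(boxes)

# Two clearly separated defect sizes recover two matching anchors.
boxes = [(10, 10)] * 5 + [(50, 50)] * 5
centers = kmeans_anchors(boxes, k=2)
print(sorted(centers), average_iou(boxes, centers))
```

In practice, the anchor count k is increased until the average IoU gain flattens out, balancing accuracy against the extra predictions each anchor adds.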

Furthermore, the Yolo_v4 network employs transfer learning techniques to leverage pre-trained models and adapt them to fabric defect detection tasks. By fine-tuning the pre-trained feature extraction network on fabric defect datasets, the network can effectively learn domain-specific features and improve detection performance even with limited training data. Overall, the Yolo_v4 network represents a sophisticated and versatile approach to object detection, demonstrating significant advancements in accuracy and efficiency across various applications.
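The core idea of this fine-tuning step, keeping pre-trained backbone weights fixed while the detection head adapts to fabric data, can be demonstrated with a toy update rule. This is a conceptual sketch only (the parameter names and values are made up), not the paper's PyTorch training loop.

```python
def fine_tune_step(params, grads, frozen, lr=0.1):
    """One SGD step in which parameters named in `frozen` keep their
    pre-trained values (transfer learning: freeze backbone, train head)."""
    return {
        name: value if name in frozen else value - lr * grads[name]
        for name, value in params.items()
    }

# Pre-trained backbone weight stays fixed; only the detection head moves.
params = {"backbone.w": 0.8, "head.w": 0.0}
grads = {"backbone.w": 0.5, "head.w": 0.5}
updated = fine_tune_step(params, grads, frozen={"backbone.w"})
print(updated)  # backbone.w unchanged at 0.8, head.w updated to -0.05
```

In a deep learning framework the same effect is typically achieved by disabling gradient computation for the frozen layers, so only the remaining parameters are passed to the optimizer.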

Experimental Setup, Training, Evaluation

The experimental setup encompasses a Windows 10 system, the PyTorch framework, and the Python language, utilizing an Intel(R) Core(TM) i5-9600K CPU and an Nvidia RTX2060S GPU. The dataset comprises the open-source German Association for Pattern Recognition (DAGM) 2007 dataset featuring ten classes of texture defects, divided into training and test sets, with all defect samples consolidated into a standard dataset of 1046 samples.

Due to the limited sample size, model training adopts a three-fold cross-validation approach: the researchers split the dataset into three equal parts and, in each fold, designated one part as the validation set. The training process demonstrates stable, consistent decreases in training and validation losses, ensuring robust model training.
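The splitting scheme can be sketched as follows, assuming the 1046-sample dataset from the setup above; the shuffling seed and remainder handling are illustrative choices, not details from the paper.

```python
import random

def three_fold_splits(n_samples, seed=42):
    """Shuffle indices and split them into three near-equal folds; each
    fold serves once as the validation set while the other two folds
    together form the training set."""
    idx = list(range(n_samples))
    random.Random(seed).shuffle(idx)
    fold = n_samples // 3
    folds = [idx[i * fold:(i + 1) * fold] for i in range(3)]
    folds[2].extend(idx[3 * fold:])  # any remainder joins the last fold
    return [
        (sum((folds[j] for j in range(3) if j != i), []), folds[i])
        for i in range(3)
    ]

# 1046 samples -> validation folds of 348, 348, and 350 samples.
splits = three_fold_splits(1046)
print([len(val) for _, val in splits])
```

Training three models this way and averaging their validation metrics gives a more reliable estimate than a single split when data is scarce.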

Evaluation metrics, including mean average precision (mAP), highlight the proposed algorithm's significant performance improvements over baseline models. It surpasses mainstream deep learning models such as SSD, Faster_RCNN, Yolo_v4 tiny, and Yolo_v4 in terms of mAP on fabric and strip surface defect detection datasets.
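For readers unfamiliar with the metric, average precision for one class is the area under the interpolated precision-recall curve of confidence-ranked detections, and mAP averages this over all defect classes. The sketch below uses the standard all-point interpolation; the example predictions are made-up values, and the paper may use a different interpolation variant.

```python
def average_precision(preds, n_gt):
    """AP for one class. `preds` is a list of (confidence, is_true_positive)
    pairs; `n_gt` is the number of ground-truth objects of that class."""
    ranked = sorted(preds, key=lambda p: -p[0])
    tp = fp = 0
    recalls, precisions = [], []
    for _, is_tp in ranked:
        tp += is_tp
        fp += 1 - is_tp
        recalls.append(tp / n_gt)
        precisions.append(tp / (tp + fp))
    # make the precision envelope non-increasing from right to left
    for i in range(len(precisions) - 2, -1, -1):
        precisions[i] = max(precisions[i], precisions[i + 1])
    ap, prev_r = 0.0, 0.0
    for r, p in zip(recalls, precisions):
        ap += (r - prev_r) * p
        prev_r = r
    return ap

# Three of four confident detections are correct; one ground truth found late.
preds = [(0.9, 1), (0.8, 1), (0.7, 0), (0.6, 1)]
print(average_precision(preds, n_gt=3))  # ~0.917
```

mAP is then simply the mean of these per-class AP values, which is why it rewards models that detect every defect category well rather than excelling on one.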

The proposed algorithm balances detection precision and speed, outperforming other models' accuracy while maintaining competitive detection speeds. Comparison experiments underscore its superior detection performance and efficiency across various defect types.

The algorithm consistently identifies defects with high confidence levels across different defect categories in single defect detection scenarios. Moreover, multi-defect fusion detection accurately identifies and locates various defect types, demonstrating solid anti-interference ability and robust performance even at low image resolutions.

Conclusion

To sum up, the proposed Yolo_v4-based fabric surface defect detection algorithm incorporated four key enhancements: an improved clustering algorithm, modification of the backbone feature extraction network to ECA-DenseNet-BC-121, replacement of the PANet module's convolution process with the DCFE module, and utilization of transfer learning to address data scarcity. Experimental results demonstrated superior performance compared to SSD, Faster_RCNN, Yolo_v4 tiny, and Yolo_v4 algorithms, particularly excelling in detecting minor defects, even in low-resolution images.


Written by

Silpaja Chandrasekar

Dr. Silpaja Chandrasekar has a Ph.D. in Computer Science from Anna University, Chennai. Her research expertise lies in analyzing traffic parameters under challenging environmental conditions. Additionally, she has gained valuable exposure to diverse research areas, such as detection, tracking, classification, medical image analysis, cancer cell detection, chemistry, and Hamiltonian walks.

Citations

Please use one of the following formats to cite this article in your essay, paper or report:

  • APA

    Chandrasekar, Silpaja. (2024, March 13). The Next Frontier in Defect Detection with Enhanced YOLOv4. AZoAi. Retrieved on July 04, 2024 from https://www.azoai.com/news/20240313/The-Next-Frontier-in-Defect-Detection-with-Enhanced-YOLOv4.aspx.


