In an article published in the journal Scientific Reports, researchers focused on enhancing the you only look once version 5 (YOLOv5) algorithm for real-time detection of safety helmets in industrial settings. The authors introduced the FasterNet lightweight network structure, the Wise intersection over union (Wise-IoU) loss function with a dynamic focusing mechanism, and the convolutional block attention module (CBAM) attention mechanism to improve detection precision.
Experimental results demonstrated significant reductions in parameters, computational load, and inference time, along with an increase in mean average precision (mAP), enabling effective real-time helmet detection.
Background
In construction settings, ensuring workers wear safety helmets is crucial for preventing head injuries. However, manual inspection methods are inefficient and error-prone, necessitating intelligent management systems. While computer vision technologies like region-based convolutional neural networks (RCNN) and single-shot detectors (SSD) have been applied to helmet detection, existing algorithms face challenges in achieving both precision and real-time performance.
Previous attempts, such as those utilizing Faster RCNN, YOLOv5, and YOLOv7, have shown promising results but still fall short in various aspects, including real-time capability and detection accuracy for small targets. These limitations underscore the need for lightweight target detection networks suitable for deployment in industrial settings. This paper addressed these gaps by proposing an improved YOLOv5 algorithm for real-time safety helmet detection.
Leveraging the FasterNet lightweight network structure, Wise-IoU loss function, and CBAM attention mechanism, the enhanced algorithm achieved higher mAP, faster processing speeds, and reduced computational complexity compared to mainstream algorithms. By focusing on the specific task of detecting construction workers wearing helmets, the proposed algorithm filled a critical gap in existing research by providing a solution tailored to real-world construction scenarios. This paper bridged the divide between precision and real-time performance, paving the way for the effective deployment of safety management systems in industrial environments.
Improvement to the YOLOv5 Algorithm
The improved YOLOv5s algorithm addressed the need for real-time, precise helmet detection on embedded devices and in on-site operations while maintaining the original model's accuracy. First, it replaced the backbone network with the lightweight FasterNet, reducing parameters and computational load. FasterNet achieved efficient computation by employing partial convolution (PConv), significantly reducing floating point operations (FLOPs).
By decomposing the conventional convolution into PConv followed by pointwise convolution (PWConv), redundancy between filters was minimized, further saving computational resources. This approach ensured that information from all channels was still utilized without compromising performance. Additionally, the introduction of the Wise-IoU loss function with a dynamic focusing mechanism enhanced bounding-box regression, improving target localization accuracy.
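To illustrate why this decomposition is cheap, the sketch below (PyTorch; not the authors' code, and the channel counts, pass-through of untouched channels, and the `n_div` partial ratio are assumptions) applies a regular 3×3 convolution to only a fraction of the input channels and then mixes all channels with a 1×1 PWConv:

```python
import torch
import torch.nn as nn

class PConvPWConv(nn.Module):
    """Sketch of a FasterNet-style block: partial convolution (PConv)
    on a subset of channels, followed by pointwise convolution (PWConv)."""

    def __init__(self, channels: int, n_div: int = 4):
        super().__init__()
        self.dim_conv = channels // n_div          # channels that get convolved
        self.dim_untouched = channels - self.dim_conv
        # 3x3 convolution applied only to the first dim_conv channels
        self.partial_conv = nn.Conv2d(self.dim_conv, self.dim_conv, 3, padding=1, bias=False)
        # 1x1 pointwise convolution mixes information across all channels
        self.pwconv = nn.Conv2d(channels, channels, 1, bias=False)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x1, x2 = torch.split(x, [self.dim_conv, self.dim_untouched], dim=1)
        x1 = self.partial_conv(x1)                 # spatial mixing on a slice of channels
        x = torch.cat([x1, x2], dim=1)             # remaining channels pass through unchanged
        return self.pwconv(x)                      # channel mixing over the full tensor

# Example: with n_div = 4, the 3x3 convolution's FLOPs drop to roughly 1/16
# of a full convolution over all 64 channels.
block = PConvPWConv(channels=64, n_div=4)
out = block(torch.randn(1, 64, 80, 80))
```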
The new loss function addressed the limitations of the original generalized IoU (GIoU) loss function, particularly in scenarios where prediction boxes coincided with targets. The dynamic focusing mechanism adjusted penalty weights based on geometric factors, enhancing model generalization.
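For intuition, here is a minimal sketch of a Wise-IoU (v1-style) penalty, not the paper's exact formulation; the box layout, the enclosing-box normalization, and the gradient detach are assumptions. The plain IoU loss is reweighted by a distance-based attention term, so predictions whose centres drift far from the target receive a larger penalty:

```python
import torch

def wiou_v1_loss(pred: torch.Tensor, target: torch.Tensor, eps: float = 1e-7) -> torch.Tensor:
    """Sketch of a Wise-IoU (v1-style) loss for axis-aligned boxes given as (x1, y1, x2, y2)."""
    # Intersection and union for the plain IoU term
    inter_w = (torch.min(pred[..., 2], target[..., 2]) - torch.max(pred[..., 0], target[..., 0])).clamp(0)
    inter_h = (torch.min(pred[..., 3], target[..., 3]) - torch.max(pred[..., 1], target[..., 1])).clamp(0)
    inter = inter_w * inter_h
    area_p = (pred[..., 2] - pred[..., 0]) * (pred[..., 3] - pred[..., 1])
    area_t = (target[..., 2] - target[..., 0]) * (target[..., 3] - target[..., 1])
    iou = inter / (area_p + area_t - inter + eps)

    # Squared distance between box centres
    cx_p, cy_p = (pred[..., 0] + pred[..., 2]) / 2, (pred[..., 1] + pred[..., 3]) / 2
    cx_t, cy_t = (target[..., 0] + target[..., 2]) / 2, (target[..., 1] + target[..., 3]) / 2
    dist2 = (cx_p - cx_t) ** 2 + (cy_p - cy_t) ** 2

    # Diagonal of the smallest enclosing box, detached so it acts as a scale, not a gradient path
    enc_w = torch.max(pred[..., 2], target[..., 2]) - torch.min(pred[..., 0], target[..., 0])
    enc_h = torch.max(pred[..., 3], target[..., 3]) - torch.min(pred[..., 1], target[..., 1])
    scale = (enc_w ** 2 + enc_h ** 2).detach() + eps

    # Distance-based attention amplifies the loss for poorly localized boxes
    attention = torch.exp(dist2 / scale)
    return attention * (1 - iou)
```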
Furthermore, the incorporation of the CBAM attention mechanism between the neck and the head enhanced network performance, particularly for detecting small targets such as safety helmets. The CBAM module combined channel attention and spatial attention to enrich feature representation and strengthen the linkage between location information and target features.
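A compact sketch of a standard CBAM block is shown below (PyTorch; the reduction ratio and the 7×7 spatial kernel follow common CBAM defaults rather than details stated in the article). Channel attention reweights feature maps using pooled descriptors, and spatial attention then highlights where in the feature map the target lies:

```python
import torch
import torch.nn as nn

class CBAM(nn.Module):
    """Sketch of a convolutional block attention module: channel then spatial attention."""

    def __init__(self, channels: int, reduction: int = 16, spatial_kernel: int = 7):
        super().__init__()
        # Channel attention: shared MLP applied to average- and max-pooled descriptors
        self.mlp = nn.Sequential(
            nn.Conv2d(channels, channels // reduction, 1, bias=False),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1, bias=False),
        )
        # Spatial attention: convolution over channel-wise average and max maps
        self.spatial = nn.Conv2d(2, 1, spatial_kernel, padding=spatial_kernel // 2, bias=False)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Channel attention map reweights each channel
        avg = self.mlp(x.mean(dim=(2, 3), keepdim=True))
        mx = self.mlp(x.amax(dim=(2, 3), keepdim=True))
        x = x * torch.sigmoid(avg + mx)
        # Spatial attention map highlights informative locations
        avg_map = x.mean(dim=1, keepdim=True)
        max_map = x.amax(dim=1, keepdim=True)
        return x * torch.sigmoid(self.spatial(torch.cat([avg_map, max_map], dim=1)))

# Example: refine a 128-channel feature map passed from the neck to the head
attn = CBAM(128)
refined = attn(torch.randn(1, 128, 40, 40))
```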
This comprehensive approach ensured that the improved YOLOv5s model achieved higher precision, faster processing speeds, and reduced computational complexity compared to its predecessors. These enhancements made the algorithm suitable for real-world deployment in construction scenarios, addressing the critical need for efficient and accurate safety helmet detection.
Experimental Evaluation and Performance Analysis
The experiments conducted in this study aimed to enhance the YOLOv5 algorithm's effectiveness in safety helmet detection, crucial for industrial safety. A comprehensive approach was adopted, starting with dataset fusion to create a more representative dataset for construction scenarios. The algorithm's performance was evaluated using indicators like precision, recall rate, and average precision, demonstrating its suitability for real-time monitoring in production environments.
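The evaluation metrics mentioned here follow their standard definitions; the snippet below is a generic sketch, not the authors' evaluation code, and it assumes detections have already been matched to ground truth (for example at an IoU threshold of 0.5). It computes precision, recall, and average precision from per-detection true-positive flags:

```python
import numpy as np

def precision_recall_ap(tp_flags, scores, num_gt):
    """Sketch: precision, recall, and average precision (area under the PR curve).
    tp_flags[i] is 1 if detection i matched a ground-truth box, else 0."""
    order = np.argsort(-np.asarray(scores))          # sort detections by confidence
    tp = np.asarray(tp_flags, dtype=float)[order]
    fp = 1.0 - tp
    cum_tp, cum_fp = np.cumsum(tp), np.cumsum(fp)
    recall = cum_tp / max(num_gt, 1)                              # TP / (TP + FN)
    precision = cum_tp / np.maximum(cum_tp + cum_fp, 1e-9)        # TP / (TP + FP)

    # All-point average precision: take the monotonic precision envelope,
    # then sum precision * recall-step along the curve
    mrec = np.concatenate(([0.0], recall, [1.0]))
    mpre = np.concatenate(([0.0], precision, [0.0]))
    mpre = np.maximum.accumulate(mpre[::-1])[::-1]
    ap = np.sum((mrec[1:] - mrec[:-1]) * mpre[1:])
    return precision, recall, ap
```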
Ablation experiments dissected the impact of each improvement module, showcasing significant enhancements in mAP and computational efficiency. Comparative experiments against popular detection networks validated the superiority of the proposed algorithm, surpassing them in both detection performance and computational efficiency.
Notably, the improved YOLOv5 model exhibited higher mAP, reduced parameter count, and improved precision compared to existing models like YOLOv7 and YOLOv8, making it suitable for real-time deployment on edge devices. Test results further substantiated the algorithm's effectiveness, revealing improved detection capabilities in various scenarios, including small targets and crowded environments.
The integration of the CBAM attention mechanism and the Wise-IoU loss function proved instrumental in enhancing feature extraction and reducing false detections, ultimately improving overall detection performance. This comprehensive approach addressed critical safety concerns in industrial settings, paving the way for more efficient safety helmet monitoring and ensuring worker well-being.
Conclusion
In conclusion, the enhancements made to the YOLOv5 algorithm significantly improved safety helmet detection in industrial settings. Leveraging the FasterNet backbone, the Wise-IoU loss function, and the CBAM attention mechanism, the algorithm achieved higher precision and real-time performance. Experimental results demonstrated superior performance compared to existing models, with reduced parameters, faster processing, and improved detection accuracy.
This research addressed critical safety concerns, paving the way for efficient safety management systems in construction environments. Further advancements, including correct helmet-wearing detection and facial recognition integration, will enhance safety protocols for construction workers, ensuring comprehensive protection on-site.
Journal reference:
- Liu, Y., Jiang, B., He, H., Chen, Z., & Xu, Z. (2024). Helmet wearing detection algorithm based on improved YOLOv5. Scientific Reports, 14(1), 8768. https://doi.org/10.1038/s41598-024-58800-6, https://www.nature.com/articles/s41598-024-58800-6