Revolutionizing PPE Detection with Deep Learning

In a paper recently published in the journal Sustainability, researchers investigated the feasibility of using a deep learning (DL)-based approach for quick and accurate personal protective equipment (PPE) detection in hazardous work environments.

Study: Revolutionizing PPE Detection with Deep Learning. Image credit: AnaLysiSStudiO/Shutterstock

Background

PPE is primarily used to increase the protection level of workers at chemical, construction, and other hazardous sites. The equipment reduces the severity and probability of fatal accidents or injuries, improving worker safety. However, some workers at times fail to comply with PPE-wearing regulations at their workplaces due to negligence or a lack of awareness, leading to both non-fatal and fatal injuries.

Manual monitoring of workers is error-prone and laborious, necessitating the development of intelligent monitoring systems that can detect workers' PPE compliance accurately and autonomously in real time during working hours.

The proposed DL-based approach

In this paper, researchers investigated the feasibility of using a two-stage detector based on the faster region-based convolutional neural network (Faster R-CNN) model for accurate and real-time PPE detection. They trained and evaluated the proposed model using the colored hardhat, vest, and safety glass (CHVG) dataset, which contains 1699 annotated images.

The CHVG dataset consisted of eight classes: person, head, vest, safety glass, and hardhats in four colors (yellow, red, blue, and white). After data preprocessing, the shares of person, vest, glass, head, red, yellow, blue, and white among the 1189 objects/images were 40%, 18.25%, 4.28%, 6.05%, 10%, 12.53%, 4.54%, and 4.28%, respectively.

Additionally, the dataset was divided into training, validation, and test subsets, with 430, 172, and 115 images used for training, validation, and testing, respectively. The training dataset was used to train the proposed model, the test dataset served as the basis for the final model evaluation, and the validation dataset was used to monitor the training process and ensure it proceeded as planned.
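As an illustration of this step, the sketch below partitions a list of annotated image paths into training, validation, and test subsets. The authors' exact splitting procedure and fractions are not described in the summary, so the fractions and function name here are placeholder assumptions.

```python
# Hypothetical train/validation/test split of annotated images; the study's
# actual splitting procedure is not reported, so fractions are illustrative.
import random

def split_dataset(image_paths, train_frac=0.6, val_frac=0.25, seed=42):
    """Shuffle annotated image paths and carve out train/val/test subsets."""
    paths = list(image_paths)
    random.Random(seed).shuffle(paths)        # reproducible shuffle
    n_train = int(len(paths) * train_frac)
    n_val = int(len(paths) * val_frac)
    train = paths[:n_train]
    val = paths[n_train:n_train + n_val]
    test = paths[n_train + n_val:]            # remainder is held out for testing
    return train, val, test
```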

The data, collected from several public repositories and open sources, were preprocessed and unified to provide coherent input for the proposed model. Image filtering, scaling, denoising, and augmentation were used to generate a homogeneous set of images, as the data used in this study consisted primarily of images containing different PPE features.

The researchers used the Albumentations library for data augmentation to enhance model performance, applying techniques such as hue-saturation-value (HSV) alteration, mosaic, and image flipping.
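A minimal sketch of such an augmentation pipeline with Albumentations is shown below. The parameter values are illustrative assumptions rather than those of the study, and mosaic augmentation is typically handled separately in YOLO-style data loaders rather than by Albumentations.

```python
# Illustrative Albumentations pipeline in the spirit of the described
# augmentations (HSV alteration, flipping, scaling); parameters are assumptions.
import albumentations as A

transform = A.Compose(
    [
        A.HorizontalFlip(p=0.5),                          # image flipping
        A.HueSaturationValue(hue_shift_limit=20,
                             sat_shift_limit=30,
                             val_shift_limit=20,
                             p=0.5),                       # HSV alteration
        A.Resize(height=640, width=640),                   # scaling to a common size
    ],
    # Keep bounding boxes and class labels aligned with the transformed image.
    bbox_params=A.BboxParams(format="pascal_voc", label_fields=["class_labels"]),
)

# Usage: augmented = transform(image=image, bboxes=bboxes, class_labels=labels)
```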

Model training, validation, and evaluation

The researchers investigated both two-stage and single-stage detectors, namely Faster R-CNN with a ResNet50 backbone and You Only Look Once version 5 (YOLOv5), respectively, to identify the most suitable architecture for PPE detection.

The YOLOv5 single-stage detector comprised a feature pyramid network (FPN), a YOLOv3 detection head, and a CSPDarknet53 backbone, while the proposed Faster R-CNN two-stage detector consisted of a region proposal network (RPN) and a region-based detector. The RPN first generates candidate object regions by assessing image regions at various scales and aspect ratios, and the region-based detector then refines and classifies these candidates.

In the proposed model, the ResNet50 network was employed to extract features from the input data, and the extracted features were then passed to the RPN and the region-based detector of the Faster R-CNN network. This architecture can thus deliver accurate detection results with improved speed.
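The sketch below shows one common way to assemble such a Faster R-CNN detector with a ResNet50 backbone using torchvision, adapted to the eight CHVG classes plus the mandatory background class. This is an illustrative reconstruction, not the authors' implementation; in particular, the torchvision variant includes an FPN by default, which is not a stated detail of the study.

```python
# Sketch of a Faster R-CNN detector with a ResNet50 backbone (torchvision >= 0.13),
# with the box-predictor head replaced for the eight CHVG classes + background.
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

def build_chvg_detector(num_classes: int = 8 + 1):
    # Load a detector with a pretrained ResNet50-FPN backbone.
    model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
    # Swap the classification/regression head so it predicts the CHVG classes.
    in_features = model.roi_heads.box_predictor.cls_score.in_features
    model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes)
    return model
```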

The researchers used the Google Colab platform with an Nvidia T4 Tensor Core GPU for this study. The process of training and evaluating the model was repeated until the best performance was achieved. During evaluation, the validation data were assessed using mean average precision at an intersection-over-union threshold of 0.5 (mAP50), precision, and recall, with mAP50 serving as the benchmark metric. The speed of the model was also determined by measuring the inference time in seconds.
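For the speed measurement, a simple per-image timing routine along the following lines could be used. The study does not detail its timing protocol (warm-up, batching, or image size), so this is only an illustrative sketch assuming a torchvision-style detector that accepts a list of image tensors.

```python
# Illustrative single-image inference timing; protocol details are assumptions.
import time
import torch

@torch.no_grad()
def measure_inference_time(model, image, device="cuda"):
    """Return the forward-pass time in seconds for one CHW float image tensor."""
    model.eval().to(device)
    image = image.to(device)
    if device == "cuda":
        torch.cuda.synchronize()            # finish any pending GPU work first
    start = time.perf_counter()
    _ = model([image])                      # torchvision detectors take a list of images
    if device == "cuda":
        torch.cuda.synchronize()            # wait for the forward pass to complete
    return time.perf_counter() - start
```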

Finally, the best-performing architecture, the final Faster R-CNN model, was tested on different images from various environments, and its results were compared with those of the YOLOv5 model to assess the proposed model's performance in practical use cases. Both models were trained on a common dataset with the same classes and hyperparameters.

Significance of the study

The proposed Faster R-CNN model significantly outperformed the YOLOv5 model in PPE detection. Results showed that the proposed model achieved an mAP50 of 96%, precision of 68%, and recall of 78%, while the YOLOv5 model achieved 63.9%, 62.8%, and 55.3%, respectively.

The model also displayed a significantly improved mAP of 96% and an inference time of 0.17 s, compared to the 89.84% mAP and 0.99 s inference time of YOLOX-m, the best-performing model in the literature. Moreover, the trained Faster R-CNN model attained an overall 96% mAP50 and over 50% recall and precision when used to classify the eight classes (blue, white, vest, yellow, glass, person, head, and red).

To summarize, the findings of this study demonstrated the feasibility of using the proposed Faster R-CNN model to localize and identify PPE with high accuracy and short detection times in real-time environments. Moreover, the model remained consistent and stable across different confidence thresholds.

