Real-Time Driver Monitoring System for Enhanced Road Safety Using Facial Landmark Estimation

In a paper published in the journal Scientific Reports, researchers introduced a real-time Driver Monitoring System (DMS) that monitors driver behavior during driving through facial landmark estimation-based behavior recognition.

Study: Real-Time Driver Monitoring System for Enhanced Road Safety Using Facial Landmark Estimation. Image credit: Generated using DALL.E.3

The system utilized an infrared (IR) camera to capture and analyze video data. It had two modules for recognizing specific behaviors: one for detecting inattention by analyzing head pose and the other for identifying drowsiness through eye-closure recognition. The method is efficient and runs in real time while relying solely on IR camera data. The researchers demonstrated the system's effectiveness in monitoring and recognizing driver behavior through an evaluation on their own dataset. This DMS presented a practical and robust approach to keeping drivers safe and alert on the road.
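The paper does not spell out its alert logic at this level of detail; the sketch below is an illustrative Python outline, with assumed thresholds and hypothetical per-frame inputs, of how the two modules' outputs (eye-closure state and head-pose angles) could be turned into drowsiness and inattention alerts by counting consecutive frames.

```python
# Illustrative sketch (not the paper's code): combining the two behavior-recognition
# modules into per-frame alerts. All thresholds below are assumptions.
from dataclasses import dataclass, field

@dataclass
class DriverStateMonitor:
    """Tracks consecutive eye-closure and off-road head-pose frames."""
    yaw_limit_deg: float = 30.0      # assumed head-yaw limit for "looking away"
    pitch_limit_deg: float = 20.0    # assumed head-pitch limit
    drowsy_frames: int = 15          # assumed consecutive closed-eye frames (~0.5 s at 30 fps)
    inattentive_frames: int = 30     # assumed consecutive off-road frames
    _closed: int = field(default=0, init=False)
    _off_road: int = field(default=0, init=False)

    def update(self, eyes_closed: bool, yaw_deg: float, pitch_deg: float) -> list:
        """Feed one frame's measurements; return any alerts raised on this frame."""
        alerts = []
        self._closed = self._closed + 1 if eyes_closed else 0
        off_road = abs(yaw_deg) > self.yaw_limit_deg or abs(pitch_deg) > self.pitch_limit_deg
        self._off_road = self._off_road + 1 if off_road else 0
        if self._closed >= self.drowsy_frames:
            alerts.append("drowsiness")
        if self._off_road >= self.inattentive_frames:
            alerts.append("inattention")
        return alerts

# Example: a short closed-eye streak followed by recovery
monitor = DriverStateMonitor(drowsy_frames=3)
for closed in [True, True, True, False]:
    print(monitor.update(eyes_closed=closed, yaw_deg=0.0, pitch_deg=0.0))
```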

Background

Amid rapid advances in computer vision technology, artificial intelligence (AI) applications in the automotive sector, particularly in autonomous driving, have surged. However, accidents due to driver drowsiness, alcohol consumption, and lapses in attention persist. Regulatory bodies like the European Commission and the United States National Transportation Safety Board (US NTSB) are proactively mandating driver monitoring technology in vehicles. Robust DMS remains essential for assessing driver conditions and ensuring safety.

In past works, researchers explored methods to analyze driver states, including drowsiness and inattention. Traditional DMS used pre-deep-learning techniques, while some studies adopted infrared cameras to address the limitations of conventional cameras. Driver monitoring often focuses on eye features and gestures, with deep learning, image classification, and vision transformer techniques playing significant roles in behavior recognition. Recent DMS have also employed decision fusion-based methods to manage multi-modal data.

Development and Deployment of the DMS

To build a robust deep learning-based DMS capable of handling the lighting variations induced by the rotation of infrared light-emitting diodes (IR LEDs), the researchers conducted video analysis using a custom in-house camera developed by CANlab.

The camera design incorporates IR LED control and seamless communication with the Electronic Control Unit (ECU). A dedicated power supply for the IR LED driver enhances video quality and reduces noise, contributing to the effectiveness and performance of the DMS.

The custom CANlab image acquisition device captures the images used for driver condition analysis. The camera is mounted behind the fixed steering column, where it remains undisturbed by steering wheel adjustments.

The paper outlines the DMS's methodology and installation for optimized driver state analysis. The camera is positioned at the center of the steering wheel to capture and analyze the driver's face. Sample images demonstrate the DMS in operation, continuously monitoring the driver's state in conjunction with information from the steering wheel. The DMS employs the IR camera for facial landmark estimation, which underpins the head pose and eye-closure detection used to identify inattentive and drowsy driving.

Researchers apply face detection to high-resolution images using the You Only Look Once version 7 (YOLOv7) network for precise detection and analysis. The paper also highlights the facial landmark extraction algorithm, chosen for its speed and performance.
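The study pairs YOLOv7-based face detection with its own landmark extractor; as a rough stand-in, the sketch below uses the off-the-shelf MediaPipe Face Mesh model to show the kind of landmark output the later head-pose and eye-closure modules would consume. The file name is a placeholder, and this is not the authors' pipeline.

```python
# Stand-in sketch: extract 2D facial landmarks with MediaPipe Face Mesh
# (a substitute for the paper's YOLOv7 + landmark pipeline, for illustration only).
import cv2
import mediapipe as mp

def extract_landmarks(bgr_frame):
    """Return a list of (x, y) pixel coordinates for detected facial landmarks."""
    h, w = bgr_frame.shape[:2]
    rgb = cv2.cvtColor(bgr_frame, cv2.COLOR_BGR2RGB)
    with mp.solutions.face_mesh.FaceMesh(static_image_mode=True,
                                         max_num_faces=1,
                                         refine_landmarks=True) as mesh:
        result = mesh.process(rgb)
    if not result.multi_face_landmarks:
        return None
    face = result.multi_face_landmarks[0]
    return [(int(p.x * w), int(p.y * h)) for p in face.landmark]

frame = cv2.imread("driver_ir_frame.png")   # hypothetical sample frame
if frame is not None:
    points = extract_landmarks(frame)
    print("no face found" if points is None else f"{len(points)} landmarks detected")
```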

Advanced Driver State Analysis

Driver State Analysis: The designed system addresses drowsy and inattentive driving, two significant contributors to accidents. The researchers developed a comprehensive system that relies on a single camera to capture and analyze the driver's state, making it an efficient solution for enhancing road safety. By focusing on these two factors, the system aims to reduce the risk of accidents caused by driver fatigue and inattention.

Inattention Analysis: Identifying inattentive driving begins with head pose estimation. This process provides essential insights into the driver's head angle and gaze direction, which are crucial for recognizing signs of inattention. Head movements such as shaking or nodding are used to evaluate the driver's focus on the road, even when the vehicle's ECU data is inaccessible.
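The paper's exact head-pose method is not reproduced here; a common way to recover head angles from 2D facial landmarks is to fit a generic 3D face model with OpenCV's solvePnP, as sketched below. The 3D model points, camera intrinsics, and angle conventions are textbook-style assumptions rather than values from the study.

```python
# Illustrative head-pose estimation from six 2D facial landmarks via cv2.solvePnP.
import cv2
import numpy as np

# Generic 3D reference points (mm): nose tip, chin, left/right eye outer corners,
# left/right mouth corners (a commonly used approximate face model).
MODEL_3D = np.array([
    (0.0, 0.0, 0.0),
    (0.0, -330.0, -65.0),
    (-225.0, 170.0, -135.0),
    (225.0, 170.0, -135.0),
    (-150.0, -150.0, -125.0),
    (150.0, -150.0, -125.0),
], dtype=np.float64)

def head_pose(image_points_2d, frame_w, frame_h):
    """Return (yaw, pitch, roll) in degrees from six 2D landmark positions."""
    focal = frame_w  # rough pinhole approximation of the focal length
    cam = np.array([[focal, 0, frame_w / 2],
                    [0, focal, frame_h / 2],
                    [0, 0, 1]], dtype=np.float64)
    dist = np.zeros((4, 1))  # assume negligible lens distortion
    ok, rvec, _ = cv2.solvePnP(MODEL_3D,
                               np.asarray(image_points_2d, dtype=np.float64),
                               cam, dist, flags=cv2.SOLVEPNP_ITERATIVE)
    if not ok:
        return None
    rot, _ = cv2.Rodrigues(rvec)
    # Decompose the rotation matrix into Euler angles (pitch about x, yaw about y, roll about z).
    sy = np.hypot(rot[0, 0], rot[1, 0])
    pitch = np.degrees(np.arctan2(rot[2, 1], rot[2, 2]))
    yaw = np.degrees(np.arctan2(-rot[2, 0], sy))
    roll = np.degrees(np.arctan2(rot[1, 0], rot[0, 0]))
    return yaw, pitch, roll
```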

Drowsiness Analysis: Detecting drowsy driving is paramount for road safety. The approach uses an eye-closing detection filter to identify instances when the driver's eyes are closed. By carefully selecting the threshold value and applying it to the image, the researchers can reliably detect drowsy driving behavior. This enables timely alerts or interventions, helping to prevent accidents caused by drowsy driving.
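The study applies an image-based eye-closure filter with a tuned threshold; as a landmark-based alternative that illustrates the same idea, the sketch below uses the widely cited eye aspect ratio (EAR) with an assumed threshold and consecutive-frame count. It is not the authors' filter.

```python
# Stand-in drowsiness check using the eye aspect ratio (EAR) over a frame history.
import numpy as np

def eye_aspect_ratio(eye):
    """eye: six (x, y) points ordered corner, top1, top2, corner, bottom2, bottom1."""
    eye = np.asarray(eye, dtype=np.float64)
    vertical = np.linalg.norm(eye[1] - eye[5]) + np.linalg.norm(eye[2] - eye[4])
    horizontal = np.linalg.norm(eye[0] - eye[3])
    return vertical / (2.0 * horizontal)

def is_drowsy(ear_history, ear_threshold=0.2, min_closed_frames=15):
    """Flag drowsiness when EAR stays below the threshold for enough consecutive frames."""
    streak = 0
    for ear in ear_history:
        streak = streak + 1 if ear < ear_threshold else 0
        if streak >= min_closed_frames:
            return True
    return False
```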

Performance Evaluation and Comparative Analysis

Researchers evaluated the method's performance using an IR driver state analysis dataset covering everyday and fatigued driving situations under low-light conditions. The camera was placed in front of the driver's seat, capturing individuals within a 30-50 cm range. The video was in Audio Video Interleave (AVI) format at a 1280 × 800 resolution. They first assessed face detection performance under varying lighting conditions using the YOLOv7 model, which achieved 100% precision and 98-99% recall.
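For context, a minimal loop over one evaluation clip in the stated format (AVI at 1280 × 800) might look like the following; the file name is a placeholder and the loop body stands in for the full analysis pipeline.

```python
# Iterate over an evaluation clip in the dataset's stated format (illustrative only).
import cv2

cap = cv2.VideoCapture("ir_driver_clip.avi")   # hypothetical dataset clip
width = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
height = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
print(f"clip resolution: {width}x{height} (expected 1280x800)")

frames = 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    frames += 1   # each frame would be fed to the face detection and analysis modules here
cap.release()
print(f"read {frames} frames")
```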

Moving to drowsiness detection, they employed an eye-closure recognition filter, yielding accuracy and precision exceeding 99%. In the inattentiveness analysis based on head pose estimation, the recognition rate for the driver's head direction also exceeded 99%. Compared with existing driver monitoring methods, this approach, categorized as a direct-type DMS, delivered strong performance.
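The reported figures follow the standard definitions of precision, recall, and accuracy over confusion counts; the snippet below shows those definitions with placeholder counts, not the paper's actual numbers.

```python
# Standard metric definitions over confusion counts (tp, fp, fn, tn);
# the example counts are placeholders, not results from the paper.
def precision(tp, fp):
    return tp / (tp + fp) if (tp + fp) else 0.0

def recall(tp, fn):
    return tp / (tp + fn) if (tp + fn) else 0.0

def accuracy(tp, tn, fp, fn):
    total = tp + tn + fp + fn
    return (tp + tn) / total if total else 0.0

tp, fp, fn, tn = 980, 0, 20, 0  # hypothetical face-detection counts
print(f"precision={precision(tp, fp):.1%}, recall={recall(tp, fn):.1%}")
```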

Although there is room for improvement, the customized IR camera and DMS method offer a cost-effective and efficient solution for driver state analysis suitable for practical deployment. These results demonstrate the robustness and effectiveness of the approach in face detection, drowsiness detection, and inattentiveness analysis. With the potential to enhance driving safety and prevent accidents, the system holds promise for real-world use in commercial vehicles.

Conclusion

To sum up, the innovative algorithm utilizes an IR camera to analyze the driver's state, offering real-time capabilities and robust performance. With its efficient behavioral recognition modules, it can be seamlessly integrated into vehicles to enhance driving safety. Future research will focus on merging these techniques with On-Board Diagnostics II (OBD-II) data for a more comprehensive driver behavior analysis and adapting the system for compact embedded boards. The goal is to bring this system to the commercial market through rigorous testing and exploration.
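As an illustration of the planned OBD-II integration, the snippet below reads vehicle speed with the open-source python-obd package so that behavior alerts could be cross-checked against driving context. This is a sketch of one possible direction, not part of the published system, and it assumes a connected OBD-II adapter.

```python
# Future-work illustration: query vehicle speed over OBD-II with python-obd.
import obd

connection = obd.OBD()                           # auto-detects a connected OBD-II adapter
response = connection.query(obd.commands.SPEED)  # standard speed PID
if not response.is_null():
    print(f"vehicle speed: {response.value}")    # Pint quantity, km/h by default
connection.close()
```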


Written by

Silpaja Chandrasekar

Dr. Silpaja Chandrasekar has a Ph.D. in Computer Science from Anna University, Chennai. Her research expertise lies in analyzing traffic parameters under challenging environmental conditions. Additionally, she has gained valuable exposure to diverse research areas, such as detection, tracking, classification, medical image analysis, cancer cell detection, chemistry, and Hamiltonian walks.


