AI Dataset Revolutionizes Emergency Evacuation

In an article published in the Journal of Safety Science and Resilience, researchers introduced the human behavior detection dataset (HBDset), designed for computer vision (CV) applications in emergency evacuation scenarios. The researchers addressed limitations of traditional object detection models by focusing on vulnerable groups such as the elderly, people with disabilities, and pregnant women.

Study: AI Dataset Revolutionizes Emergency Evacuation. Image Credit: Pixel-Shot/Shutterstock

The dataset contained eight behavior categories, enabling accurate detection and classification. Testing with classical object detection algorithms demonstrated high accuracy, suggesting its potential to improve public safety and emergency response efforts.

Background

In recent years, the rise in natural and man-made disasters has underscored the critical need for effective emergency management systems. However, conventional approaches often overlook the diverse needs of vulnerable populations during evacuations, leading to increased risks and delays in rescue efforts.

While artificial intelligence (AI) and CV have been applied to enhance emergency response systems, existing research primarily focuses on general population behavior, neglecting special groups like pregnant women, children, and individuals with disabilities.

Previous studies have demonstrated the potential of AI in evacuation research but lacked specific attention to vulnerable populations and diverse human behaviors. This paper addressed this gap by introducing the HBDset, a comprehensive dataset curated specifically for detecting and classifying various human behaviors during evacuations. By leveraging advanced object detection algorithms, the study aimed to improve the accuracy and effectiveness of evacuation monitoring systems, thereby enhancing public safety and early disaster warning capabilities.

The Anatomy of HBDset

The HBDset categorized evacuee behaviors into eight groups, covering vulnerable individuals and those exhibiting disruptive behaviors during emergencies. These categories, such as 'holding_crutch' and 'pregnancy', aimed to address the unique needs and challenges faced by special-attention groups during evacuations. Data collection involved sourcing images from established datasets such as Microsoft Common Objects in Context (MS COCO) and employing web crawlers to gather publicly available images.

Annotation was conducted in the PASCAL Visual Object Classes (VOC) format, with annotations saved as Extensible Markup Language (XML) files. The dataset comprised 1,523 images with 2,923 annotated objects, an average of roughly two bounding boxes per image. Notably, the 'child' category had the highest number of objects, reflecting the vulnerability of children during evacuations.
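A VOC annotation of this kind can be read with Python's standard library alone. The sketch below shows the general shape of a VOC XML file and how to extract category labels and bounding boxes from it; the filename, categories shown, and box coordinates are illustrative, not taken from the HBDset itself.

```python
import xml.etree.ElementTree as ET

# A minimal PASCAL VOC annotation. The filename and coordinates are invented
# for illustration; only the overall XML structure follows the VOC convention.
VOC_XML = """<annotation>
  <filename>evacuation_0001.jpg</filename>
  <size><width>640</width><height>480</height><depth>3</depth></size>
  <object>
    <name>holding_crutch</name>
    <bndbox><xmin>120</xmin><ymin>80</ymin><xmax>260</xmax><ymax>430</ymax></bndbox>
  </object>
  <object>
    <name>child</name>
    <bndbox><xmin>300</xmin><ymin>150</ymin><xmax>380</xmax><ymax>420</ymax></bndbox>
  </object>
</annotation>"""

def parse_voc(xml_text):
    """Return a list of (category, (xmin, ymin, xmax, ymax)) tuples."""
    root = ET.fromstring(xml_text)
    boxes = []
    for obj in root.iter("object"):
        name = obj.findtext("name")
        bb = obj.find("bndbox")
        coords = tuple(int(bb.findtext(k)) for k in ("xmin", "ymin", "xmax", "ymax"))
        boxes.append((name, coords))
    return boxes

annotations = parse_voc(VOC_XML)
```

Each image in the dataset would yield one such list, averaging about two boxes per image as reported above.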

Conversely, the 'using_wheelchair' category had the fewest objects because images in that category typically contained only one annotated object each. The dataset filled a critical gap in existing resources by focusing on behaviors crucial to the accuracy and effectiveness of evacuation monitoring systems, helping ensure the safety of all individuals during emergencies.

Object Detection Experiments on HBDset

Object detection models were trained and tested on the HBDset to demonstrate its capability in detecting various human behaviors and to establish a benchmark for evaluation. Three You Only Look Once (YOLO) models, YOLOv5, YOLOv7, and YOLOv8, were trained and tested to detect diverse human behaviors. Data augmentation techniques such as color transformation were employed to enhance model generalization.
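The paper mentions color transformation as one augmentation technique. A minimal sketch of that idea, using only Python's standard `colorsys` module, is shown below: each image gets one random hue/saturation/value perturbation applied uniformly to all pixels. The jitter ranges and the per-image (rather than per-pixel) application are assumptions for illustration, not parameters taken from the study.

```python
import colorsys
import random

def augment_image(pixels, hue_shift=0.05, sat_scale=0.3, val_scale=0.3, seed=0):
    """Apply one random HSV perturbation uniformly to every pixel of an image.

    `pixels` is a list of (r, g, b) tuples with channel values in [0, 1].
    The jitter ranges here are illustrative defaults.
    """
    rng = random.Random(seed)
    dh = rng.uniform(-hue_shift, hue_shift)        # hue offset, wraps around
    ds = 1 + rng.uniform(-sat_scale, sat_scale)    # saturation scale factor
    dv = 1 + rng.uniform(-val_scale, val_scale)    # brightness scale factor
    out = []
    for r, g, b in pixels:
        h, s, v = colorsys.rgb_to_hsv(r, g, b)
        h = (h + dh) % 1.0
        s = min(1.0, max(0.0, s * ds))
        v = min(1.0, max(0.0, v * dv))
        out.append(colorsys.hsv_to_rgb(h, s, v))
    return out

tiny_image = [(0.2, 0.4, 0.6), (0.9, 0.1, 0.1)]
augmented = augment_image(tiny_image)
```

In practice, frameworks such as the YOLO training pipelines apply this kind of transformation on full image tensors, but the principle is the same: label-preserving color variation that broadens the training distribution.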

The dataset was split into training, validation, and test subsets. Transfer learning was used to initialize the models with weights pretrained on related tasks. Training progress was evaluated using loss curves and mean average precision (mAP). Testing results showed high mAP scores for all models, with YOLOv7 achieving the highest score.
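The mAP metric used to score the models rests on matching predicted boxes to ground-truth boxes by intersection-over-union (IoU). The sketch below shows that underlying computation, together with a simple per-image precision at a fixed IoU threshold; the 0.5 threshold and the greedy matching are common conventions assumed here, not details reported by the study.

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (xmin, ymin, xmax, ymax)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

def precision_at(preds, truths, thr=0.5):
    """Fraction of predictions greedily matched to an unused truth at IoU >= thr."""
    unmatched = list(truths)
    tp = 0
    for p in preds:
        best = max(unmatched, key=lambda t: iou(p, t), default=None)
        if best is not None and iou(p, best) >= thr:
            tp += 1
            unmatched.remove(best)
    return tp / len(preds) if preds else 0.0
```

Full mAP additionally sweeps confidence thresholds per class and averages the resulting precision-recall curves, but every step starts from this IoU matching.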

Detection instances from the test dataset demonstrated accurate classification of human categories. The study acknowledged limitations and suggested future research directions. Additionally, a demonstration at Hong Kong International Airport showcased the models' effectiveness in real-world scenarios, achieving over 90% successful detection and stable tracking performance.

Perspectives of Intelligent Monitoring and Digital Twin Systems

The HBDset not only advanced human recognition research but also enhanced intelligent monitoring and digital twin systems. By training object detection models with the HBDset, automatic detection of vulnerable populations and behaviors in emergencies became feasible, improving public safety. The proposed intelligent digital twin system integrated closed-circuit television (CCTV) networks, AI engines, and user interfaces to monitor and optimize evacuation strategies based on real-time behavior detection.

The system issued instructions and warnings tailored to detected behaviors, enhancing emergency response. The HBDset served as a foundation for this system, contributing significantly to emergency management. Despite challenges like latency and human intervention, this marked a crucial step toward fully unmanned, smart monitoring systems.
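The idea of tailoring instructions to detected behaviors can be sketched as a simple lookup from category to guidance. The category names below follow the HBDset labels mentioned in this article, but the mapping itself and the message texts are hypothetical, invented purely to illustrate the shape of such a rule layer.

```python
# Hypothetical mapping from detected behavior categories to tailored guidance.
# Category names follow the HBDset; the messages are invented for illustration.
GUIDANCE = {
    "using_wheelchair": "Direct to step-free exit and dispatch assistance.",
    "pregnancy": "Assign an escort and avoid crowded stairwells.",
    "holding_crutch": "Allow extra egress time and keep the route clear.",
    "child": "Alert staff to accompany unattended children.",
}

def issue_warnings(detected_categories):
    """Turn one frame's detected categories into de-duplicated instructions."""
    issued = []
    for category in detected_categories:
        msg = GUIDANCE.get(category)
        if msg and msg not in issued:
            issued.append(msg)
    return issued
```

In the system the article describes, a component like this would sit between the AI detection engine and the user interface, turning raw per-frame detections into actionable messages for operators.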

Conclusion

In conclusion, the introduction of HBDset marked a significant advancement in emergency management, addressing the critical need for accurate and inclusive behavior detection during evacuations. By training and testing object detection models on this dataset, researchers demonstrated its potential to enhance public safety systems.

The proposed intelligent monitoring and digital twin systems offered promising avenues for real-time behavior monitoring and optimized evacuation strategies. Overall, HBDset laid the groundwork for future research in human behavior detection and emergency response, emphasizing the importance of considering vulnerable populations in disaster preparedness and management efforts.

Journal reference:
  • Ding, Y., Chen, X., Wang, Z., Zhang, Y., & Huang, X. (2024). Human Behaviour Detection Dataset (HBDset) Using Computer Vision for Evacuation Safety and Emergency Management. Journal of Safety Science and Resilience. DOI: 10.1016/j.jnlssr.2024.04.002, https://www.sciencedirect.com/science/article/pii/S2666449624000343

Written by

Soham Nandi

Soham Nandi is a technical writer based in Memari, India. His academic background is in Computer Science Engineering, specializing in Artificial Intelligence and Machine learning. He has extensive experience in Data Analytics, Machine Learning, and Python. He has worked on group projects that required the implementation of Computer Vision, Image Classification, and App Development.

