A groundbreaking project blends cutting-edge AI and immersive mixed reality to prepare first responders for real-life challenges, enhancing safety and teamwork in emergencies.
Lap Fai (Craig) Yu, an Associate Professor of Computer Science in the College of Engineering and Computing, and Joel Martin, an Associate Professor of Kinesiology in the College of Education and Human Development, received funding for the project "EAGER: TaskDCL: Adapting Mixed Reality Training Programs to Real-World Scenes to Enhance Human-AI Teaming in Emergency Responses."
This EArly-concept Grant for Exploratory Research (EAGER) funds work intended to accelerate the development of mixed reality and artificial intelligence (AI) technologies for first responders, with the goal of reducing training-related risks and casualties.
The research team will collaborate with the Fairfax Fire and Rescue Department to explore how AI built into mixed-reality tools can improve first responders' training and effectiveness. By adding virtual elements such as fires, hazards, firefighters, robots, and people in need of rescue to real-life scenes, these mixed-reality scenarios let first responders practice handling real-world challenges through interactive training.
The project will also involve a postdoctoral researcher and undergraduate students, including those from underrepresented groups in science and technology fields. The team will share its findings at conferences focused on mixed reality and training.
This EAGER project offers a novel interdisciplinary research perspective, integrating concepts and techniques from mixed reality, AI, human-computer interaction, and movement science to advance first responder training.
The researchers aim to devise an optimization-based generative framework for adapting mixed-reality training scenarios to real scenes. The framework will give first responders ample opportunities to practice tasks such as firefighting and search and rescue through human-AI collaboration enabled by mixed reality headsets.
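The announcement does not describe how that framework will work internally, but the general pattern of optimization-based scene adaptation can be sketched briefly. The Python below is a minimal hypothetical illustration, not the project's method: it places virtual hazards in a scanned room by minimizing a hand-made cost function (spacing between hazards plus distance to real anchor points such as exits) with simulated annealing. Every name, cost term, and parameter here (`adapt_scenario`, `min_spacing`, the annealing schedule) is an assumption made for illustration.

```python
import math
import random

# Hypothetical sketch of optimization-based scenario adaptation.
# The cost terms and annealing schedule are illustrative only and
# do not come from the funded project.

def cost(placements, anchors, min_spacing=2.0):
    """Score a candidate layout of virtual hazards; lower is better.

    placements -- proposed (x, y) positions for virtual hazards
    anchors    -- (x, y) positions of real-scene features (e.g., exits)
    """
    total = 0.0
    # Penalize hazards placed too close together, so trainees
    # must move through the scene rather than stand in one spot.
    for i, p in enumerate(placements):
        for q in placements[i + 1:]:
            d = math.dist(p, q)
            if d < min_spacing:
                total += (min_spacing - d) ** 2
    # Pull each hazard toward the nearest real anchor point so the
    # virtual content stays tied to the physical scene.
    for p in placements:
        total += min(math.dist(p, a) for a in anchors)
    return total

def adapt_scenario(anchors, n_hazards=3, bounds=(0.0, 10.0),
                   iters=5000, seed=0):
    """Minimize cost() over hazard positions with simulated annealing."""
    rng = random.Random(seed)
    lo, hi = bounds
    state = [(rng.uniform(lo, hi), rng.uniform(lo, hi))
             for _ in range(n_hazards)]
    best, best_cost = list(state), cost(state, anchors)
    for step in range(iters):
        temp = (1 - step / iters) + 1e-3  # cooling schedule
        # Perturb one hazard position, clamped to the room bounds.
        i = rng.randrange(n_hazards)
        x, y = state[i]
        cand = list(state)
        cand[i] = (min(hi, max(lo, x + rng.gauss(0, 0.5))),
                   min(hi, max(lo, y + rng.gauss(0, 0.5))))
        delta = cost(cand, anchors) - cost(state, anchors)
        # Accept improvements always; accept worse moves with a
        # probability that shrinks as the temperature cools.
        if delta < 0 or rng.random() < math.exp(-delta / temp):
            state = cand
            c = cost(state, anchors)
            if c < best_cost:
                best, best_cost = list(state), c
    return best, best_cost

if __name__ == "__main__":
    # Pretend these are exit positions recovered from a room scan.
    exits = [(0.0, 5.0), (10.0, 5.0)]
    layout, score = adapt_scenario(exits)
    print(f"hazard layout: {layout}, cost: {score:.2f}")
```

In the actual framework, the cost terms would presumably encode training objectives and real-scene constraints captured by the headset's scene understanding, with a more sophisticated optimizer than this toy annealer.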
To carry out the research, the team will first investigate how AI techniques can be integrated with mixed-reality devices to provide first-response assistance. Next, they will devise the optimization-based generative framework for adapting mixed-reality training scenarios to real scenes. Finally, they will conduct user studies to evaluate the performance gains from the advanced mixed reality interfaces and the synthesized training scenarios.
Yu received $299,861 from the National Science Foundation for this research. Funding began in January 2025 and will end in late December 2026.