A Fusion Module Approach for Enhancing Positioning Accuracy in Firefighting Robotics

In a paper published in the journal PLoS ONE, researchers tackle challenges in positioning technology crucial for environmental perception, particularly in firefighting robots. The study addresses the insufficient positioning accuracy of the fire emergency reaction dispatching (FERD) system, especially in indoor scenarios with multiple obstacles.

Study: A Fusion Module Approach for Enhancing Positioning Accuracy in Firefighting Robotics. Image credit: Pavel Gulea/Shutterstock

The proposed solution involves a fusion module based on the Blackboard architecture, aiming to enhance the accuracy of individual sensors on unmanned vehicles within the FERD system. This module employs a comprehensive approach, integrating strategies like denoising, spatial alignment, confidence degree update, observation filtering, data fusion, and fusion decision. Experimental results reveal the module's effectiveness in significantly improving positioning accuracy in complex indoor environments with multiple obstacles, showcasing its potential for advancing firefighting and rescue applications.

Background

The proliferation of robotics, particularly in safety-focused applications like firefighting, underscores the need for precise positioning in indoor emergencies. Firefighting robots, vital for minimizing casualties, encounter challenges in accurate positioning due to the impracticality of conventional methods such as the global positioning system (GPS) indoors. Various sensors, including Ultra-Wideband (UWB), Inertial Measurement Unit (IMU), Infrared Depth Sensor (IDS), and cameras, address this issue, yet each has limitations.

FERD System's Fusion Module Strategies

The paper introduces a fusion module into the FERD system to address inaccuracies observed across multiple sensors. The FERD system comprises three sub-modules: the control, execution, and fusion modules. A series of strategies has been developed within the fusion module to process data from the various sensors: denoising, spatial alignment, confidence degree update, observation filtering, data fusion, and fusion decision. Their functions are as follows (a brief code sketch of the pipeline appears after the list).

  • Denoising addresses interference values present in the observed values of each sensor, referred to collectively as random errors, aiming to reduce the standard deviation of each sensor's data.
  • Spatial alignment transforms observed values from different coordinate systems into a uniform data type before fusion, enabling integration.
  • The confidence degree update determines the contribution of each sensor to data fusion, assigning confidence degrees to sensors based on their respective observations.
  • In the fusion process, observation filtering applies error thresholds to discard inaccurate observed values, so that only reliable data enter the fusion.
  • Data fusion, a multi-level and multi-spatial information processing operation, optimally combines data from multiple sensors, leveraging their collective intelligence and reducing information uncertainty. This process amalgamates observations into a single dataset, enhancing measurement accuracy. The fusion module employs multiple fusion algorithms and fuses observed values of different sensors for the same attributes.
  • The fusion decision stage selects the most appropriate fusion algorithm by minimizing the discrepancy between the fused values and the ground-truth values.
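To make the division of labor among these strategies concrete, below is a minimal Python sketch of how such a pipeline might be wired together for a single scalar ranging value. All function names, parameters, and thresholds here are illustrative assumptions; the paper does not publish its implementation.

    import statistics

    def denoise(readings, window=5):
        """Denoising: damp random error with a trailing moving median."""
        return [statistics.median(readings[max(0, i - window + 1):i + 1])
                for i in range(len(readings))]

    def spatially_align(reading, sensor_offset):
        """Spatial alignment: map a sensor-frame value into a common frame."""
        return reading + sensor_offset

    def update_confidence(confidence, recent_error, scale=100.0):
        """Confidence degree update: shrink confidence as recent error grows."""
        return confidence / (1.0 + recent_error / scale)

    def filter_observations(obs, estimate, threshold=300.0):
        """Observation filtering: keep values within an error threshold (mm)."""
        return {s: v for s, v in obs.items() if abs(v - estimate) <= threshold}

    def fuse_weighted(obs, confidences):
        """Data fusion: confidence-weighted average of surviving values."""
        total = sum(confidences[s] for s in obs)
        return sum(v * confidences[s] for s, v in obs.items()) / total

    def fusion_decision(candidate_outputs, reference):
        """Fusion decision: pick the algorithm closest to the reference
        (ground truth during training)."""
        return min(candidate_outputs, key=lambda kv: abs(kv[1] - reference))[0]

    # Example: fuse one UWB/IDS/camera snapshot (values in mm).
    obs = {"uwb": 1210.0, "ids": 1525.0, "camera": 1190.0}
    conf = {"uwb": 0.9, "ids": 0.4, "camera": 0.8}
    kept = filter_observations(obs, estimate=1200.0)
    print(fuse_weighted(kept, conf))  # IDS outlier dropped by the threshold

Here the data-fusion step is a simple confidence-weighted average; the paper evaluates multiple fusion algorithms and lets the fusion decision stage choose among them, so this stands in for any one candidate.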

Sensor Performance Comparison in Diverse Environments

Without obstacles, IDS showed errors ranging from 134.86 mm to 420.35 mm relative to ground-truth values, with standard deviations varying between 54.27 mm and 165.15 mm. Meanwhile, camera-based ranging had errors ranging from 42.89 mm to 155.89 mm, with standard deviations between 23.16 mm and 83.15 mm. This indicates that the IDS system faced significant challenges in accurately measuring distances, exhibiting higher error ranges and fluctuations than the camera-based measurements.

However, IDS struggled more in scenarios with multiple obstacles, showcasing errors from 290.67 mm to 639.22 mm, with higher standard deviations between 140.37 mm and 297.21 mm. In contrast, camera-based measurements faced errors between 66.45 mm and 385.81 mm, with standard deviations from 33.15 mm to 194.05 mm. Despite facing challenges due to obstacles, the camera-based system demonstrated more consistent and relatively lower errors than the IDS ranging system in this scenario.
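For readers who want to reproduce this style of evaluation, the metrics quoted above (an error range against ground truth and a standard deviation) can be computed as in the sketch below. The data are made-up placeholders, and it is an assumption that the deviation is taken over the raw observations rather than the errors.

    import statistics

    def ranging_error_stats(observed_mm, truth_mm):
        """Per-run error range and spread, in the style reported above."""
        errors = [abs(o - t) for o, t in zip(observed_mm, truth_mm)]
        return min(errors), max(errors), statistics.stdev(observed_mm)

    # Hypothetical readings (mm) against a known 1000 mm ground truth.
    observed = [1043.0, 958.0, 1121.0, 1015.0, 897.0]
    print(ranging_error_stats(observed, [1000.0] * len(observed)))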

These results highlight the disparity in performance between the IDS and camera-based ranging systems, which is magnified in scenarios with obstacles, where the IDS system had considerably greater difficulty measuring distance accurately than the camera-based approach. Qualitative and quantitative assessments showed that the system's stability varied across scenarios, highlighting the need for sensor complementarity and fusion to mitigate the limitations of individual sensors in specific environments. The comparison between UWB, IDS, and camera positioning revealed fluctuations in observed values, with particular scenarios presenting challenges such as signal interference and reflections that degraded accuracy.

The research demonstrated the system's performance in a firefighting scenario, showing how two unmanned vehicles, a1 and a2, navigated predefined routes. Despite UWB's limitations in an obstacle-rich indoor setting, the fusion module enabled more accurate localization by combining data from multiple sensors. However, the study also identified challenges, such as discrepancies in localization frequency between sensors, which affected the continuity of positioning data; one common mitigation is sketched below. Overall, the findings underscored the significance of sensor fusion in enhancing environmental awareness and precision when coordinating unmanned vehicles, while acknowledging room for further improvement under specific operational conditions.
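The frequency mismatch noted above is commonly handled by resampling slower sensor streams onto a common timeline. The zero-order-hold sketch below illustrates one such approach; it is an assumption for illustration, not the paper's stated method.

    import bisect

    def hold_resample(timestamps, values, query_times):
        """Align a slower stream (e.g., camera fixes) to a faster one
        (e.g., UWB) by holding the most recent observation."""
        return [values[max(bisect.bisect_right(timestamps, t) - 1, 0)]
                for t in query_times]

    # Camera at 2 Hz resampled onto a 10 Hz UWB timeline (times in s).
    cam_t, cam_v = [0.0, 0.5, 1.0], [1190.0, 1185.0, 1179.0]
    uwb_t = [0.0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6]
    print(hold_resample(cam_t, cam_v, uwb_t))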

Conclusion

To summarize, this paper introduces a Blackboard architecture-based fusion module within the FERD system, aiming to address inaccuracies stemming from single-sensor limitations in obstacle-filled indoor environments. This module enhances positioning accuracy by leveraging multiple sensors through various fusion techniques. The study gathers empirical data on confidence degrees, sensor errors, and timeliness through training in obstacle-rich indoor scenarios. Notably, unlike single-sensor setups, the proposed module can switch among sensors and scale to accommodate additional ones.

The evaluation of the fusion module showcases its efficacy in localizing based on field sensor data, with potential applications in tasks like target identification and tracking for multiple agents. For instance, the system can autonomously identify and filter short-term abnormal sensor data, ensuring continuous operation without compromising subsequent tasks. The intended future application of this fusion module extends beyond the FERD system, with implications for enhancing safety within autonomous driving systems.


Written by

Silpaja Chandrasekar

Dr. Silpaja Chandrasekar has a Ph.D. in Computer Science from Anna University, Chennai. Her research expertise lies in analyzing traffic parameters under challenging environmental conditions. Additionally, she has gained valuable exposure to diverse research areas, such as detection, tracking, classification, medical image analysis, cancer cell detection, chemistry, and Hamiltonian walks.


