In an article in press with the journal Robotics and Computer-Integrated Manufacturing, researchers proposed a deep learning (DL)-enhanced Digital Twin framework for detecting objects, specifically human operators and robots, and classifying their actions during the manufacturing process, with the aim of improving reliability and safety in human-robot collaborative manufacturing.
Background
In Industry 5.0, human-robot collaboration (HRC) is central to smart manufacturing: humans work in close proximity to collaborative robots (cobots) in a shared workspace. The cobots are pre-programmed to interact with human workers and to perform a range of tasks.
However, collaborative manufacturing tasks pose significant reliability concerns and risks to the safety of human operators. Conventional approaches to safety in collaborative manufacturing rely on cages, laser rangefinders, light gates, and physical barriers to prevent direct contact between humans and cobots.
However, these safety measures are inflexible, bulky, and expensive. In recent years, several studies have sought flexible, cage-free safety solutions. For instance, collision-avoidance-based solutions have been proposed in which the cobot's pre-programmed trajectory is adapted to prevent collisions with dynamic obstacles, such as humans, in a shared workspace.
However, these solutions cannot reliably distinguish humans from other dynamic obstacles, which can lead to severe consequences. Recently, several advanced approaches based on computer vision and DL techniques have demonstrated success in visual perception and scene understanding, such as object segmentation, detection, and classification.
Additionally, Digital Twins of cyber-physical systems can offer a digital representation of physical collaborative manufacturing systems in real-time, which can significantly improve the intelligence of systems regarding performance optimization, health management, evaluation, operation, production, and design.
Digital Twins can also play a crucial role in several aspects of complex HRC systems, including cognitive and interaction service, data fusion, data mining, data collection, process monitoring, performance analysis, modeling, and simulation.
Thus, intelligent solutions and Digital Twins can identify cobots and other objects in HRC, avoiding the complex calibration process. Several studies have been performed to investigate the application of artificial intelligence (AI)-driven Digital Twin in the field of smart manufacturing and cutting-edge robotics.
For instance, multi-access edge computing has been integrated into Digital Twins to realize flexible and smart manufacturing processes. However, until now, Digital Twins had not been developed for manufacturing systems spanning diverse production stages and complex environmental conditions.
Additionally, safety in most cobot systems is still ensured primarily through caged environments or additional safety sensors when the robots operate at high speeds. The level of autonomy also varies across applications and is rising owing to recent developments in AI, intelligent sensing, and computer vision techniques.
Improving reliability and safety in HRC manufacturing
In this paper, researchers proposed an intelligent DL-enhanced Digital-Twin-based safe HRC framework that can detect robots and human operators and classify their actions during manufacturing, enabling the robot control system to make decisions autonomously.
Unreal Engine 4 was used to develop the Digital Twin framework, which simulated the physical HRC system. The framework supported synchronous communication and control between the physical and Digital Twin systems. A communication framework was designed to synchronize the Digital Twin with the physical HRC platform using the robot operating system (ROS).
The communication framework allowed flexible information sharing between physical and digital systems in real time, including robot kinematics and poses. The digital system generated a significant amount of diverse synthetic cobot data with proper labels, as Digital Twins can create photo-realistic digital cobots and maintain all cobot parameters.
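The paper implements this synchronization over ROS topics. As a minimal stand-in (ROS itself is not assumed here, and the topic name and joint-state message layout below are illustrative assumptions, not the paper's actual interfaces), the publish/subscribe flow that keeps the digital cobot's joint state in step with the physical robot can be sketched in plain Python:

```python
# Minimal publish/subscribe sketch of physical -> digital state synchronization.
# Stand-in for ROS topics: topic name and message layout are illustrative.
from collections import defaultdict

class MessageBus:
    """Toy in-process message bus mimicking ROS publish/subscribe."""
    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, topic, callback):
        self._subscribers[topic].append(callback)

    def publish(self, topic, message):
        for callback in self._subscribers[topic]:
            callback(message)

class DigitalTwinCobot:
    """Digital cobot that mirrors the physical robot's joint angles."""
    def __init__(self, bus):
        self.joint_angles = [0.0] * 6          # six-axis arm
        bus.subscribe("/physical/joint_states", self.on_joint_states)

    def on_joint_states(self, msg):
        self.joint_angles = list(msg["positions"])

bus = MessageBus()
twin = DigitalTwinCobot(bus)
# The physical controller would publish its encoder readings periodically.
bus.publish("/physical/joint_states",
            {"positions": [0.1, -0.5, 1.2, 0.0, 0.3, -0.1]})
print(twin.joint_angles)  # twin now mirrors the physical pose
```

In a real deployment the bus would be replaced by ROS publishers and subscribers, and traffic would flow in both directions, e.g. control commands from the digital system back to the physical robot.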
These data were combined with human data obtained from the Common Objects in Context (COCO) repository and utilized to train the DL models to monitor the interactive operations of humans and robots. A Faster Region-based Convolutional Neural Network (Faster R-CNN) fully-supervised detector was initially trained on synthetic and real data and then tested on the physical system to evaluate the effectiveness of the Digital Twin-based framework.
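As an illustration of how auto-labeled synthetic cobot images might be merged with COCO-format human annotations into a single training set (the file names, category IDs, and ID-offsetting scheme below are assumptions for the sketch, not details from the paper):

```python
# Sketch: merge COCO-style human annotations with auto-labeled synthetic
# cobot data into one detection training set. IDs and names are illustrative.
def merge_coco_datasets(real, synthetic):
    """Combine two COCO-format dicts, re-indexing synthetic image and
    annotation IDs to avoid collisions with the real dataset."""
    img_offset = max((im["id"] for im in real["images"]), default=0)
    ann_offset = max((an["id"] for an in real["annotations"]), default=0)
    merged = {
        "images": list(real["images"]),
        "annotations": list(real["annotations"]),
        "categories": list(real["categories"]),
    }
    for im in synthetic["images"]:
        merged["images"].append({**im, "id": im["id"] + img_offset})
    for an in synthetic["annotations"]:
        merged["annotations"].append({**an,
                                      "id": an["id"] + ann_offset,
                                      "image_id": an["image_id"] + img_offset})
    seen = {c["id"] for c in merged["categories"]}
    for c in synthetic["categories"]:
        if c["id"] not in seen:
            merged["categories"].append(c)
    return merged

real = {"images": [{"id": 1, "file_name": "person_001.jpg"}],
        "annotations": [{"id": 1, "image_id": 1, "category_id": 1,
                         "bbox": [10, 20, 50, 80]}],
        "categories": [{"id": 1, "name": "person"}]}
synthetic = {"images": [{"id": 1, "file_name": "twin_cobot_001.png"}],
             "annotations": [{"id": 1, "image_id": 1, "category_id": 2,
                              "bbox": [30, 40, 120, 160]}],
             "categories": [{"id": 2, "name": "cobot"}]}
merged = merge_coco_datasets(real, synthetic)
print(len(merged["images"]), len(merged["annotations"]))  # 2 2
```

The merged dictionary can then be fed to any COCO-compatible detection training pipeline.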
A semi-supervised DL detector trained on both real and synthetic data was also proposed and tested to ensure the reliability of the digital system under different lighting conditions. Moreover, the Digital Twin system was used to validate and analyze the impact of the environment on the DL action-recognition system performance.
Significance of the study
Evaluations of the Digital Twin framework in multiple scenarios in which human operators collaborate with a Universal Robots UR10 robot demonstrated the framework's effectiveness in detecting the robot and human during manufacturing and classifying their actions under different conditions. The fully-supervised detection algorithm achieved successful detection results in the real environment.
However, the semi-supervised detector displayed improved performance and more accurate detection in new real environments compared to the fully-supervised detector, as it was trained on both synthetic and real data.
The semi-supervised model was also more effective than models trained on only real or only synthetic data under changing lighting conditions, including full lighting and dark conditions. The Hungarian algorithm and Kalman filter introduced in the detector effectively reduced detection failures while improving inference speed.
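The paper does not spell out its tracker implementation, but the combination it names can be sketched: a constant-velocity Kalman filter predicts each track's position, and a minimum-cost assignment (the Hungarian algorithm, here brute-forced over permutations, which is only feasible for small numbers of tracks) matches predictions to new detections. All noise parameters and positions below are illustrative assumptions:

```python
import itertools

class ScalarKalman1D:
    """Constant-velocity Kalman filter for one coordinate of a box center
    (simplified to scalar variance for the sketch)."""
    def __init__(self, x0, q=1e-2, r=1.0):
        self.x, self.v = x0, 0.0      # position and velocity estimate
        self.p = 1.0                  # estimate variance
        self.q, self.r = q, r         # process / measurement noise

    def predict(self, dt=1.0):
        self.x += self.v * dt
        self.p += self.q
        return self.x

    def update(self, z):
        k = self.p / (self.p + self.r)        # Kalman gain
        innovation = z - self.x
        self.x += k * innovation
        self.v += k * innovation              # crude velocity correction
        self.p *= (1.0 - k)

def assign_min_cost(cost):
    """Brute-force minimum-cost assignment for small square matrices:
    returns the column (detection) index matched to each row (track)."""
    n = len(cost)
    best, best_cols = float("inf"), None
    for cols in itertools.permutations(range(n)):
        total = sum(cost[r][c] for r, c in zip(range(n), cols))
        if total < best:
            best, best_cols = total, cols
    return list(best_cols)

# Two tracks predict centers near x=10 and x=50; detections arrive at 48 and 12.
tracks = [ScalarKalman1D(10.0), ScalarKalman1D(50.0)]
detections = [48.0, 12.0]
preds = [t.predict() for t in tracks]
cost = [[abs(p - d) for d in detections] for p in preds]
match = assign_min_cost(cost)
print(match)  # [1, 0]: track 0 -> detection at 12, track 1 -> detection at 48
for t, c in zip(tracks, match):
    t.update(detections[c])
```

Production trackers replace the brute-force step with an O(n^3) Hungarian solver (e.g. `scipy.optimize.linear_sum_assignment`) and filter full box states rather than one scalar per coordinate.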
The proposed Digital Twin framework was set up and deployed easily in manufacturing. A Digital Twin of the physical manufacturing workspace was developed by introducing computer-aided design (CAD) models of real objects into the Digital Twin.
Additionally, the proposed framework eliminated the need for substantial data collection and expensive manual annotation work by implementing the semi-supervised method using the Sim2Real technique and efficient data generation.
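One common way to realize such a semi-supervised scheme is pseudo-labeling: a detector trained on the labeled synthetic data scores unlabeled real images, and only its high-confidence predictions are kept as training labels, avoiding manual annotation. The threshold and the toy predictions below are assumptions for illustration, not the paper's actual method or values:

```python
def select_pseudo_labels(predictions, threshold=0.9):
    """Keep only high-confidence detections as pseudo-labels.

    `predictions` maps image name -> list of (label, confidence, bbox)
    tuples produced by a model trained on labeled synthetic data.
    """
    pseudo = {}
    for image, dets in predictions.items():
        kept = [(label, bbox) for label, conf, bbox in dets
                if conf >= threshold]
        if kept:                      # skip images with no confident detection
            pseudo[image] = kept
    return pseudo

# Toy predictions on unlabeled real images (values are illustrative).
preds = {
    "real_001.jpg": [("cobot", 0.97, [30, 40, 120, 160]),
                     ("person", 0.55, [5, 5, 60, 90])],
    "real_002.jpg": [("person", 0.93, [12, 18, 70, 110])],
    "real_003.jpg": [("cobot", 0.40, [0, 0, 10, 10])],
}
labels = select_pseudo_labels(preds)
print(sorted(labels))  # ['real_001.jpg', 'real_002.jpg']
```

The retained pseudo-labels are then mixed with the synthetic labels to retrain the detector, closing the Sim2Real loop without manual annotation of the real images.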
The generative framework displayed flexibility as it allowed the introduction of new objects through the addition of their CAD models into the digital system. Moreover, adopting an efficient scheme for data transmission between the physical and digital systems and automatic annotation generation allowed the implementation of other tasks, such as reinforcement learning.
To summarize, the findings of this study demonstrated that the proposed Digital Twin system could successfully achieve accurate recognition of human-robot behaviors and maintain a safe distance between a robot and human operators to improve reliability and safety in the HRC environment.
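A minimal form of such distance-based safety logic is speed-and-separation monitoring: the robot's speed is scaled down as the nearest detected human approaches and the robot stops entirely inside a protective zone. The zone sizes and linear scaling rule below are illustrative assumptions, not thresholds from the paper:

```python
import math

def speed_scale(human_pos, robot_pos, stop_dist=0.5, slow_dist=1.5):
    """Return a speed multiplier in [0, 1] based on human-robot distance:
    0 inside the protective stop zone, a linear ramp in the slow zone,
    full speed beyond it. Zone sizes (meters) are illustrative."""
    d = math.dist(human_pos, robot_pos)
    if d <= stop_dist:
        return 0.0
    if d >= slow_dist:
        return 1.0
    return (d - stop_dist) / (slow_dist - stop_dist)

print(speed_scale((0.0, 0.0, 0.0), (0.3, 0.0, 0.0)))  # 0.0 (stop)
print(speed_scale((0.0, 0.0, 0.0), (1.0, 0.0, 0.0)))  # 0.5 (slow)
print(speed_scale((0.0, 0.0, 0.0), (2.0, 0.0, 0.0)))  # 1.0 (full speed)
```

In the proposed framework, the human and robot positions feeding such a check would come from the DL detector and the synchronized Digital Twin state, respectively.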
However, more research is required on additional tasks, such as pose estimation and gesture recognition, alongside object detection to recognize robot and human actions, enable more complex control and decision-making, and improve the digital system's resilience in complicated tasks.
Journal reference:
- Wang, S., Zhang, J., Wang, P., Law, J., Calinescu, R., Mihaylova, L. (2023). A deep learning-enhanced Digital Twin framework for improving safety and reliability in human–robot collaborative manufacturing. Robotics and Computer-Integrated Manufacturing, 85, 102608. https://doi.org/10.1016/j.rcim.2023.102608, https://linkinghub.elsevier.com/retrieve/pii/S0736584523000832