Advancing Robotic Assembly: Learning from Human Demonstrations

In a study published in Robotics and Computer-Integrated Manufacturing, researchers from the Bosch Center for Artificial Intelligence presented a framework for flexible robotic manipulation. The framework enables robots to learn object-centric skills from human demonstrations and sequence them to perform industrial assembly tasks. As a real-world application, they demonstrated the approach by assembling critical components of an electric bicycle (e-bike) motor.

Study: Advancing Robotic Assembly: Learning from Human Demonstrations. Image credit: metamorworks/Shutterstock

E-bikes, powered by an electric motor, are becoming increasingly popular for urban transportation. To meet the growing demand, e-bike motors must be mass-produced at high quality. Robot-based flexible manufacturing systems can enable the variable, high-volume production required. However, automating complex assembly tasks remains challenging for robotic manipulators. The study introduces a framework to program robots on-site via human demonstrations of modular skills. The system integrates perception, skill learning, sequencing, control, and optimization to perform multi-step industrial assembly.

Flexible Manipulation Framework

The framework consists of five key components. First, a perception module estimates 6D poses of objects from red-green-blue-depth (RGB-D) images using dense object nets. Second, a learning approach builds object-centric skill models from human demonstrations using task-parameterized hidden semi-Markov models (TP-HSMMs). Third, a sequencing module chains multiple skill models into complete tasks based on a human-provided high-level plan. Fourth, an impedance controller tracks motion trajectories generated from the skill models. Finally, a Bayesian optimization module refines skills for precise execution.
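
To make the data flow between the five components concrete, here is a minimal sketch of how they could be composed. Every class and method name below is hypothetical, chosen for illustration; this is not the authors' actual API.

```python
# Hypothetical composition of the five components described above; all
# names are illustrative, not the study's actual software interfaces.
class AssemblyPipeline:
    def __init__(self, perception, skills, sequencer, controller, optimizer):
        self.perception = perception  # 6D pose estimation from RGB-D images
        self.skills = skills          # library of learned TP-HSMM skill models
        self.sequencer = sequencer    # chains skills per the high-level plan
        self.controller = controller  # impedance tracking of skill trajectories
        self.optimizer = optimizer    # Bayesian refinement of skill parameters

    def run(self, plan, image):
        poses = self.perception(image)                 # perceive object poses
        for skill in self.sequencer(plan, self.skills, poses):
            self.controller(skill.generate(poses))     # execute compliantly
        self.optimizer(self.skills)                    # refine for the next run
```

The point of the sketch is the ordering: perception feeds the sequencer, each sequenced skill generates an object-centric trajectory for the controller, and optimization closes the loop after execution.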

The system layers these components into a flexible robotic architecture. The integration layer coordinates software modules for a given task, enabling online refinement. The interface layer connects components through the Robot Operating System (ROS). The algorithm layer implements core methods like perception and learning. The hardware layer interfaces with the actual robot platform. This organization allows developing algorithms independently while jointly executing skills on hardware.

The E-Bike Challenge

The researchers evaluated their framework for assembling critical parts of a Bosch Performance Line e-bike motor. The motor comprises multiple components, including circuit boards, gears, shafts, and pegs. The assembly tasks involve grasping, inserting, and pressing these objects at precise locations, requiring skilled yet compliant robot motions.

The experiments used a 7-DoF Franka Emika manipulator with a force-torque sensor. A wrist camera provided RGB-D images for perception. The researchers demonstrated four sub-tasks: pressing the circuit board onto motor pins, mounting a gear, inserting a drive shaft, and sliding a transmission peg. The skills involved free motions, force interactions, and different end-effector orientations.

Learning Manipulation Skills

The researchers employed TP-HSMMs to learn reusable manipulation skills from the demonstrations. This probabilistic model captures the spatial and temporal patterns of motions relative to objects, enabling adaptation to new configurations. The learning approach also modeled force profiles critical for assembly by tracking attractor trajectories representing demonstrated stiffness behaviors.
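
The core of the object-centric idea can be sketched in a few lines: express a demonstrated trajectory in an object's local frame, then replay it relative to a new object pose. The 2D example below shows only that frame-transform core; the actual TP-HSMM additionally learns a probabilistic model over many demonstrations and multiple frames, and all poses here are invented for illustration.

```python
import numpy as np

# Object-centric skill sketch: store a demo relative to an object frame,
# then adapt it to a new object pose. 2D for brevity; poses are illustrative.
def to_frame(points, R, t):
    """World-frame points -> local frame with rotation R and origin t."""
    return (points - t) @ R  # R is orthonormal, so its inverse is R.T

def from_frame(local_points, R, t):
    """Local-frame points -> world frame."""
    return local_points @ R.T + t

# A straight-line 'insertion' demonstrated toward an object at (1, 0):
demo = np.array([[0.0, 0.0], [0.5, 0.0], [1.0, 0.0]])
R0, t0 = np.eye(2), np.array([1.0, 0.0])
local = to_frame(demo, R0, t0)  # the skill, relative to the object

# The object now sits rotated 90 degrees at (0, 2); adapt the skill:
theta = np.pi / 2
R1 = np.array([[np.cos(theta), -np.sin(theta)],
               [np.sin(theta),  np.cos(theta)]])
t1 = np.array([0.0, 2.0])
adapted = from_frame(local, R1, t1)  # same approach, new pose
```

Because the skill is stored relative to the object, the adapted trajectory still ends exactly at the object's new origin and approaches along its rotated axis, with no re-demonstration needed.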

Ten skills with 2-4 instances each were learned from 3-4 demonstrations per instance. The system robustly reproduced skills for the assembly by sequencing the models based on a provided sub-task order. Furthermore, sample-efficient Bayesian optimization refined skills to reduce positioning errors and interaction forces.
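
Sample-efficient Bayesian optimization of a skill parameter can be sketched as follows: a Gaussian-process surrogate models the cost observed in a handful of rollouts, and an expected-improvement criterion picks the next parameter to try. The 1D objective below (positioning error with an optimum at 0.3), the gains, and the grid are all illustrative, not taken from the study.

```python
import math
import numpy as np

# Minimal Bayesian-optimization loop: GP surrogate + expected improvement.
def rbf(a, b, ls=0.2):
    d = a.reshape(-1, 1) - b.reshape(1, -1)
    return np.exp(-0.5 * (d / ls) ** 2)

def gp_posterior(X, y, Xq, noise=1e-6):
    K_inv = np.linalg.inv(rbf(X, X) + noise * np.eye(len(X)))
    Kq = rbf(X, Xq)
    mu = Kq.T @ K_inv @ y
    var = np.diag(rbf(Xq, Xq) - Kq.T @ K_inv @ Kq)
    return mu, np.sqrt(np.maximum(var, 1e-12))

def expected_improvement(mu, sigma, best):
    z = (best - mu) / sigma  # minimizing, so improvement = best - mu
    cdf = np.array([0.5 * (1 + math.erf(v / math.sqrt(2))) for v in z])
    pdf = np.exp(-0.5 * z ** 2) / math.sqrt(2 * math.pi)
    return (best - mu) * cdf + sigma * pdf

def cost(x):  # stand-in for a rollout measuring positioning error
    return (x - 0.3) ** 2

grid = np.linspace(0.0, 1.0, 101)
X = np.array([0.0, 0.5, 1.0])   # three initial trials
y = cost(X)
for _ in range(10):             # ten refinement rollouts
    mu, sigma = gp_posterior(X, y, grid)
    x_next = grid[np.argmax(expected_improvement(mu, sigma, y.min()))]
    X, y = np.append(X, x_next), np.append(y, cost(x_next))
best = X[np.argmin(y)]          # converges near the optimum at 0.3
```

The sample efficiency comes from the surrogate: each rollout is expensive on a real robot, so the acquisition function spends the limited budget where the model predicts either low cost or high uncertainty.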

Advantages of the Flexible Manipulation System

With its integrated framework, the flexible manipulation system offers several distinct advantages over conventional robot programming. Firstly, it provides quick and intuitive programming through TP-HSMMs, allowing users to easily demonstrate skills and encode complex behaviors like peg insertion from just a few examples. Moreover, its modular structure enables the reusability of skills for new tasks.

In addition to its ease of use, the system excels in precision and compliance, accurately reproducing nuanced force and motion patterns crucial for assembly. It achieves this through adaptive impedance control, ensuring precise trajectory tracking even in the presence of model inaccuracies.
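
The impedance control idea can be written as a virtual spring-damper: the commanded force is F = K(x_d - x) + D(v_d - v), which pulls the end effector toward the target while remaining compliant to contact. The one-axis simulation below uses illustrative gains, mass, and setpoint, not the study's values.

```python
# One-axis impedance control sketch: virtual spring-damper toward a setpoint.
# Gains K, D, the 1 kg mass, and the 2 cm target are illustrative values.
def impedance_force(x, v, x_d, v_d=0.0, K=500.0, D=40.0):
    """Spring-damper force pulling the end effector toward x_d."""
    return K * (x_d - x) + D * (v_d - v)

# Simulate a 1 kg point mass converging to a 2 cm setpoint (Euler steps):
x, v, dt, mass = 0.0, 0.0, 0.001, 1.0
for _ in range(5000):
    a = impedance_force(x, v, x_d=0.02) / mass
    v += a * dt
    x += v * dt
```

Because the controller commands forces rather than rigid positions, small errors in the perceived object pose translate into bounded contact forces instead of hard collisions, which is what makes the trajectory tracking robust to model inaccuracies.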

Lastly, the system adapts to varying conditions thanks to its object-centric skill representation, which enables online adaptation to perceived object poses, while the sequencing approach selects suitable skill variants. Integrating self-supervised reinforcement learning and Bayesian optimization further refines execution. Together, these features make robot programming and execution both easier and more effective than with conventional approaches.

Challenges and Limitations

Despite the promising results, applying the proposed system to more comprehensive industrial settings poses several open challenges that must be addressed. One limitation is that the time required to sequence multiple skills grows combinatorially as more skill variations exist. This combinatorial growth prevents fast online re-planning and limits the approach's scalability. For example, if one skill has five variants and the next has three, there are 15 possible combinations to evaluate. As more skills are added, the number of combinations multiplies, making real-time adaptation intractable.
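
The blow-up described above is easy to see by enumerating candidate sequences; the skill and variant names below are hypothetical.

```python
from itertools import product

# Combinatorial growth of skill sequencing: variants multiply per skill.
skill_variants = {
    "press_board": ["v1", "v2", "v3", "v4", "v5"],  # five variants
    "mount_gear": ["v1", "v2", "v3"],               # three variants
}
candidates = list(product(*skill_variants.values()))
print(len(candidates))  # 5 x 3 = 15 sequences, as in the example above

# Adding a third skill with four variants multiplies the space again:
skill_variants["insert_shaft"] = ["v1", "v2", "v3", "v4"]
candidates = list(product(*skill_variants.values()))
print(len(candidates))  # 5 x 3 x 4 = 60 sequences to evaluate
```

Each additional skill multiplies rather than adds to the search space, which is why exhaustive evaluation quickly becomes incompatible with online re-planning.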

Another challenge is the assumption of a static environment within the execution of each skill. If objects move during the robot's motion, failures may occur as the initial conditions have changed. Incorporating dynamic state tracking and closed-loop adaptation during skill execution could improve the system's robustness to unexpected environmental disturbances and changes.

Additionally, learning force interaction patterns from human demonstrations may not produce robot behaviors best suited for the task objectives. Humans avoid large contact forces, while optimizing the assembly process may require controlled forceful interactions. Extracting the appropriate skill parameters from suboptimal demonstrations remains an open research area.

Furthermore, sequencing skills based on modular condition models may fail to incorporate higher-level constraints critical for the task explicitly. For example, constraints like avoiding jamming and ensuring stability are difficult to formalize within the proposed approach. More sophisticated planning techniques that consider such abstract requirements could enhance the performance.

Finally, providing safety guarantees when generating autonomous motions near humans remains an unsolved challenge for learning-based methods like the one presented. Learned policies tend to be opaque compared to model-based approaches, complicating the formal verification required for safe operation around people.

Future Outlook

This research demonstrated flexible, automated assembly of critical e-bike motor components using a novel framework. While limitations exist, it represents significant progress in integrating key technologies like perception, learning, control, and planning.

Future work can enhance the adaptability and safety of such systems. Promising directions include more data-efficient learning, tighter perception-control loops, and incorporating formal task requirements into planning. Further applications in industrial settings will uncover additional challenges to address on the path toward more capable and reliable intelligent robotic assistants.

Written by

Aryaman Pattnayak

Aryaman Pattnayak is a Tech writer based in Bhubaneswar, India. His academic background is in Computer Science and Engineering. Aryaman is passionate about leveraging technology for innovation and has a keen interest in Artificial Intelligence, Machine Learning, and Data Science.

Citations

Please use one of the following formats to cite this article in your essay, paper or report:

  • APA

    Pattnayak, Aryaman. (2023, September 08). Advancing Robotic Assembly: Learning from Human Demonstrations. AZoAi. Retrieved on December 22, 2024 from https://www.azoai.com/news/20230908/Advancing-Robotic-Assembly-Learning-from-Human-Demonstrations.aspx.

  • MLA

    Pattnayak, Aryaman. "Advancing Robotic Assembly: Learning from Human Demonstrations". AZoAi. 22 December 2024. <https://www.azoai.com/news/20230908/Advancing-Robotic-Assembly-Learning-from-Human-Demonstrations.aspx>.

  • Chicago

    Pattnayak, Aryaman. "Advancing Robotic Assembly: Learning from Human Demonstrations". AZoAi. https://www.azoai.com/news/20230908/Advancing-Robotic-Assembly-Learning-from-Human-Demonstrations.aspx. (accessed December 22, 2024).

  • Harvard

    Pattnayak, Aryaman. 2023. Advancing Robotic Assembly: Learning from Human Demonstrations. AZoAi, viewed 22 December 2024, https://www.azoai.com/news/20230908/Advancing-Robotic-Assembly-Learning-from-Human-Demonstrations.aspx.
