Reinforcement Learning Empowers Autonomous 3D Positional Control of Magnetic Microrobots

In an article recently published in the journal Nature Machine Intelligence, researchers proposed a reinforcement learning (RL)-based approach for autonomous three-dimensional (3D) positional control of a magnetic microrobot.

Study: Reinforcement Learning Empowers Autonomous 3D Positional Control of Magnetic Microrobots. Image credit: Pixel Enforcer/Shutterstock

Existing microrobot control method limitations

Recent studies have demonstrated the significant potential of microrobots in biomedicine and biomedical engineering, as their small size enables them to access virtually any region of the body, facilitating diagnostics and targeted therapies. However, this small size also rules out onboard electronics, leaving wireless manipulation through magnetic, optical, or chemical means as the only feasible option. Among these, magnetic actuation is the most common approach owing to its good controllability across degrees of freedom, biocompatibility, and high penetration capability.

Multiple magnetic actuation schemes based on electromagnet- and/or permanent magnet-based actuation systems have been utilized to control magnetic microrobots. However, existing motion-control techniques rely on the assumption of homogeneous magnetic fields and are strongly influenced by the properties and surrounding environment of the microrobot. These strategies lack adaptability and generality when the microrobot or the environment changes, and they exhibit a noticeable response delay because the microrobot's position and the electromagnetic actuation system are controlled independently.

RL is a machine learning (ML) technique that offers an intuitive and distinctive approach to microrobot control. It can solve complex problems without prior knowledge of the system and can adapt to different environments, making it a promising generic microrobot control method.

The proposed RL-based approach

In this study, researchers proposed an ML-based positional control scheme for magnetic microrobots using gradient fields generated by electromagnetic coils. A gradual training approach and RL were used to control a microrobot's 3D position within a defined workspace by directly managing the coil currents.

This model-free RL approach requires no prior system knowledge to control the magnetic microrobot's 3D position using an eight-coil electromagnetic actuation system (EAS). A gradual training process was adopted for the RL agent to simplify learning and improve overall accuracy.

A four-step training process was utilized: (1) training in simulation, (2) training for two-dimensional (2D) navigation in the EAS, (3) training for 3D navigation with random target locations, and (4) training for 3D navigation with fixed targets. This progression facilitated initial exploration and increased the complexity gradually to ensure accurate navigation.

Steps 1 and 2 assisted the RL agent in the initial stages, step 3 introduced a 3D workspace to increase complexity, and step 4 refined the navigation accuracy. The episode-end conditions, maximum timesteps, and reward function differed in every step. The state-of-the-art proximal policy optimization (PPO) algorithm was used to train the RL agent, as it requires comparatively little training time.
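The staged training described above can be sketched as a simple curriculum scheduler. This is a minimal illustration, not the authors' code: the stage names, timestep budgets, and success tolerances below are all hypothetical, and `train_stage` stands in for a PPO training call (e.g., from an RL library) that carries the policy weights forward between stages.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Stage:
    """One curriculum stage; the episode-end tolerance and timestep
    budget differ per stage, as described in the study."""
    name: str
    max_timesteps: int
    success_radius: float  # hypothetical episode-end tolerance

# Hypothetical curriculum mirroring the four reported stages;
# all numbers are illustrative, not the paper's values.
CURRICULUM = [
    Stage("simulation", 50_000, 1.0),
    Stage("2d_navigation_eas", 20_000, 1.0),
    Stage("3d_random_targets", 30_000, 1.0),
    Stage("3d_fixed_targets", 10_000, 0.5),  # tighter tolerance refines accuracy
]

def run_curriculum(train_stage: Callable[[Stage], float]) -> list[float]:
    """Run the stages in order; in practice, `train_stage` would continue
    training the same PPO policy rather than starting from scratch."""
    return [train_stage(stage) for stage in CURRICULUM]
```

The key design choice the study reports is that each stage reuses the policy learned in the previous one, so the agent never has to solve the hardest task from a random initialization.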

A simulation environment was developed for initial exploration to reduce the overall training time, as training an RL agent directly on a physical system is time-consuming. After the simulation training, the learning process was transferred to a physical electromagnetic actuation system that reflects real-world complexities.

The primary task for the RL agent was to learn to control the magnetic microrobot's position in a fluidic environment. Initially, the agent was pretrained in a simulation environment consisting of a single microrobot and eight electromagnetic coils. The agent was then retrained on a physical EAS to expose it to the non-homogeneous behaviors and disturbances of real-world scenarios. This novel approach gave the RL agent autonomy in selecting the EAS currents, enabling it to learn the system dynamics, microrobot-fluid interactions, and microrobot actuation.
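The agent's interface with the system can be pictured as an environment whose action is the vector of eight coil currents and whose observation is the vector from the microrobot to the target. The toy environment below is a hedged sketch with assumed linear, overdamped dynamics; the real EAS is nonlinear and non-homogeneous, which is precisely why the model-free agent is retrained on hardware.

```python
import numpy as np

class CoilEnv:
    """Toy stand-in for the eight-coil EAS (assumed dynamics, not the
    paper's model): action = eight coil currents, observation = vector
    from the microrobot to the target."""

    def __init__(self, seed: int = 0):
        rng = np.random.default_rng(seed)
        # Hypothetical fixed linear map from coil currents to 3D drift.
        self.current_to_force = 0.1 * rng.normal(size=(3, 8))
        self.pos = np.zeros(3)
        self.target = np.array([1.0, 1.0, 0.5])

    def reset(self) -> np.ndarray:
        self.pos = np.zeros(3)
        return self.target - self.pos

    def step(self, currents: np.ndarray):
        currents = np.clip(currents, -1.0, 1.0)      # hardware current limits
        self.pos = self.pos + self.current_to_force @ currents  # overdamped drift
        error = self.target - self.pos
        reward = -float(np.linalg.norm(error))       # dense distance penalty
        done = bool(np.linalg.norm(error) < 0.05)    # episode-end condition
        return error, reward, done
```

Because the agent outputs currents directly, any mismatch between this idealized map and the real coil fields is absorbed during retraining on the physical system rather than requiring an explicit calibration model.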

Significance of this study

Researchers compared the proposed method with a conventional proportional-integral-derivative (PID) controller for performance evaluation. They also navigated the microrobot through a scaled-down 3D phantom of a middle cerebral artery (MCA) section to assess the method's potential in real-world scenarios. Finally, the method was combined with path planning algorithms to realize fully autonomous navigation around static and dynamic obstacles.

The comparative analysis between the proposed RL-based method and the PID controller demonstrated that the proposed method achieved significantly better accuracy and reached the target in less time. Additionally, the RL agent successfully navigated from a designated start point to a target aneurysm within the MCA phantom, demonstrating the method's potential in real-life scenarios.
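For context, the PID baseline used in such comparisons typically runs per-axis PID on the 3D position error. The sketch below is a generic textbook PID, not the study's tuned controller; note that its output is a force-like command that would still need a calibrated field model to be mapped to coil currents, which is exactly the model knowledge the RL agent learns implicitly.

```python
import numpy as np

class PID3D:
    """Generic per-axis PID on a 3D position error vector (illustrative
    baseline, not the study's tuned controller)."""

    def __init__(self, kp: float, ki: float, kd: float, dt: float):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = np.zeros(3)
        self.prev_error = np.zeros(3)

    def update(self, error: np.ndarray) -> np.ndarray:
        """One control step: proportional + accumulated integral +
        finite-difference derivative terms."""
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error.copy()
        return self.kp * error + self.ki * self.integral + self.kd * derivative
```

The study's reported advantage of RL over this kind of controller comes from the agent commanding coil currents directly, avoiding the separate control and actuation loops that introduce delay.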

A* and D* path planning algorithms were introduced for fully autonomous navigation around static and dynamic obstacles. These algorithms and their variants have numerous applications in nano- and micro-robotic systems. Without such path planning, a human must manually set a series of trajectory points from the starting position to the target. The A* algorithm was used for static environments, while the D* algorithm handled dynamic obstacle avoidance. The RL agent followed the trajectories generated by the path planning algorithms to navigate automatically from an initial microrobot position or a user-specified 'start' point to a final 'target' location.
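A minimal grid-based A* illustrates the static-obstacle case: the planner produces a sequence of waypoints that the RL agent then tracks. This is a standard textbook implementation on a 4-connected grid with a Manhattan heuristic, not the study's code, and it omits the D* machinery needed for replanning around moving obstacles.

```python
import heapq

def astar(grid, start, goal):
    """4-connected A* on a 2D occupancy grid (1 = obstacle).
    Returns the list of cells from start to goal, or None if unreachable."""
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])  # Manhattan heuristic
    open_heap = [(h(start), start)]
    came_from = {start: None}
    g_cost = {start: 0}
    while open_heap:
        _, cur = heapq.heappop(open_heap)
        if cur == goal:
            path = []
            while cur is not None:      # walk parents back to the start
                path.append(cur)
                cur = came_from[cur]
            return path[::-1]
        r, c = cur
        for nxt in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            nr, nc = nxt
            if 0 <= nr < len(grid) and 0 <= nc < len(grid[0]) and grid[nr][nc] == 0:
                ng = g_cost[cur] + 1
                if ng < g_cost.get(nxt, float("inf")):  # found a cheaper route
                    g_cost[nxt] = ng
                    came_from[nxt] = cur
                    heapq.heappush(open_heap, (ng + h(nxt), nxt))
    return None
```

In the reported pipeline, each returned waypoint becomes a successive target for the RL agent, replacing the manually specified trajectory points that would otherwise be required.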

In conclusion, the findings of the study demonstrated that the presented approach could be a feasible alternative to complex mathematical models, which are sensitive to variations in microrobot design, the nonlinearity of magnetic systems, and the environment. In the future, the approach can potentially assist in velocity, orientation, and positional control of microrobots in 3D dynamic environments with oscillating and rotating magnetic fields.


Written by

Samudrapom Dam


