Advances in Robotics and AI: Case Studies Unveiling Future Applications

In a paper recently published in the journal Electronics, the authors reviewed recent case studies and research demonstrating the future applications and advances in robotics and artificial intelligence (AI).

Study: Advances in Robotics and AI: Case Studies Unveiling Future Applications. Image credit: thinkhubstudio/Shutterstock

Background

In the last several decades, AI has advanced significantly due to the advent of new algorithm designs, the availability of large amounts of data, and the exponential rise in computing power. However, abstract, high-level forms of knowledge representation and proper feature fusion are necessary for AI to attain better results.

In robotics, restoration and perception enhancement methods are active research areas as they assist in perceiving and understanding the world. Computer vision-based AI has become crucial in the field of robotics, while search based on spatial attributes, navigation, efficiency, object recognition, and classification will become important fields of development in robotics and AI in the future.

In this paper, the authors reviewed important case studies and research demonstrating the advances and future applications of AI and robotics, specifically regarding spatial and visual perception enhancement and reasoning.

Advances and future applications

An innovative generalized knowledge distillation framework was developed to address the limitations caused by missing modalities in glioma segmentation, specifically in unimodal scenarios. The framework can effectively extract rich knowledge from a multimodal segmentation model and transfer it to a unimodal segmentation model to improve the latter’s performance.
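
The teacher-student idea behind such distillation can be illustrated with a minimal sketch: a pretrained multimodal "teacher" guides a unimodal "student" by combining a standard segmentation loss with a softened-logit matching term. The tensor shapes, loss weights, and temperature below are illustrative assumptions, not the study's actual architecture or settings.

```python
# Minimal knowledge distillation loss sketch for segmentation (illustrative only).
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, target, alpha=0.5, T=2.0):
    """Combine hard-label segmentation loss with softened teacher guidance."""
    # Hard-label term: standard cross-entropy against the ground-truth mask.
    hard = F.cross_entropy(student_logits, target)
    # Soft-label term: KL divergence between temperature-scaled distributions.
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)
    return alpha * hard + (1.0 - alpha) * soft

# Dummy example: batch of 2, 4 classes, 64x64 segmentation maps.
student_logits = torch.randn(2, 4, 64, 64, requires_grad=True)
teacher_logits = torch.randn(2, 4, 64, 64)   # teacher output, no gradient needed
target = torch.randint(0, 4, (2, 64, 64))
loss = distillation_loss(student_logits, teacher_logits, target)
loss.backward()
```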

In one study, researchers developed a parallel platform and designed its mechanical structure. They completed the motor driver circuit design based on a solution combining a pre-driver chip, a microprogrammed control unit (MCU), and a three-phase full bridge. The parallel platform control center and MCU programs were developed to drive six parallel robotic arms, and system joint debugging was completed to achieve closed-loop control within the parallel platform workspace. The result was a physical platform with a flexible structure and low cost.
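
For illustration, the inverse kinematics that a six-arm parallel platform of this kind relies on, computing the required actuator lengths from a desired platform pose, can be sketched as follows. A Stewart-type layout with circular joint rings is assumed here, and the dimensions are placeholder values, not the study's design.

```python
# Inverse kinematics sketch for a six-leg parallel platform (assumed geometry).
import numpy as np

def rotation_matrix(roll, pitch, yaw):
    """ZYX Euler rotation matrix."""
    cr, sr = np.cos(roll), np.sin(roll)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cy, sy = np.cos(yaw), np.sin(yaw)
    Rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    Rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])
    return Rz @ Ry @ Rx

def leg_lengths(pose, base_pts, plat_pts):
    """Actuator lengths needed to reach pose = (x, y, z, roll, pitch, yaw)."""
    x, y, z, roll, pitch, yaw = pose
    R = rotation_matrix(roll, pitch, yaw)
    t = np.array([x, y, z])
    # Each leg vector runs from its base joint to the transformed platform joint.
    legs = (R @ plat_pts.T).T + t - base_pts
    return np.linalg.norm(legs, axis=1)

# Illustrative geometry: joints spaced every 60 degrees on two circles (meters).
angles = np.radians(np.arange(0, 360, 60))
base_pts = np.c_[0.30 * np.cos(angles), 0.30 * np.sin(angles), np.zeros(6)]
plat_pts = np.c_[0.20 * np.cos(angles), 0.20 * np.sin(angles), np.zeros(6)]

print(leg_lengths((0.0, 0.0, 0.25, 0.0, 0.05, 0.0), base_pts, plat_pts))
```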

In another study, a heterogeneous quasi-continuous spiking cortical model (HQC-SCM) method was proposed to discriminate between neutron and gamma-ray pulse shapes. The method applies specific neural responses to different features within radiation pulse signals to fully extract the features present in the falling edge and delayed fluorescence parts of the pulses.

The researchers investigated the influence of the HQC-SCM’s parameters on its discrimination performance to identify an automated parameter selection strategy for the proposed method, using a genetic algorithm (GA)-based parameter optimization approach. Experiments showed that the GA can effectively search for local optima in chaotic systems such as the HQC-SCM, and the GA-based method was efficient, locating local optima within a few evolutionary iterations.
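
A genetic algorithm for parameter selection of this kind can be sketched in a few lines. The population scheme below and its placeholder fitness function are illustrative stand-ins, not the study's actual discrimination metric or GA configuration.

```python
# Minimal genetic-algorithm parameter search (placeholder fitness function).
import numpy as np

rng = np.random.default_rng(0)

def fitness(params):
    """Placeholder objective: higher is better, peaking at params = [1, 2, 3]."""
    return -np.sum((params - np.array([1.0, 2.0, 3.0])) ** 2)

def genetic_search(pop_size=30, n_params=3, generations=50,
                   bounds=(-5.0, 5.0), mutation_scale=0.3):
    low, high = bounds
    pop = rng.uniform(low, high, size=(pop_size, n_params))
    for _ in range(generations):
        scores = np.array([fitness(ind) for ind in pop])
        # Selection: keep the top half of the population as parents.
        parents = pop[np.argsort(scores)[::-1][: pop_size // 2]]
        # Crossover: average two randomly chosen parents per child.
        idx = rng.integers(0, len(parents), size=(pop_size, 2))
        children = 0.5 * (parents[idx[:, 0]] + parents[idx[:, 1]])
        # Mutation: small Gaussian perturbation, clipped to the bounds.
        pop = np.clip(children + rng.normal(0, mutation_scale, children.shape),
                      low, high)
    scores = np.array([fitness(ind) for ind in pop])
    return pop[np.argmax(scores)]

print(genetic_search())  # converges near [1, 2, 3] for the placeholder objective
```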

A novel framework, designated HDRFormer, has been designed to improve high dynamic range (HDR) image quality in edge cloud-based video surveillance systems. Leveraging Internet of Things (IoT) technology and advanced deep learning (DL) algorithms, HDRFormer utilizes a unique architecture consisting of a feature extraction module (FEM) and a weighted attention module (WAM).

The FEM can accurately capture multiscale image information using a transformer-based hierarchical structure, and guided filters are used to steer the network and improve the images’ structural integrity. The WAM primarily focuses on reconstructing saturated areas, enhancing perceptual quality and rendering natural-looking HDR images in those regions. The framework displays exceptional performance on the HDR visual difference predictor (HDR-VDP-2.2) and multiscale structural similarity (MS-SSIM) metrics.
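
One way to picture the emphasis on saturated regions is a reconstruction loss that up-weights pixels near clipping in the low dynamic range input. The sketch below is an assumption about the general mechanism, not HDRFormer's actual WAM; the threshold and boost factor are illustrative.

```python
# Saturation-weighted reconstruction loss sketch (illustrative, not HDRFormer's WAM).
import torch

def saturation_weighted_l1(pred, ldr_input, target, threshold=0.95, boost=4.0):
    """L1 loss in which pixels near saturation in the LDR input count more."""
    # Soft mask: 1 where the LDR input is close to clipping, 0 elsewhere.
    sat_mask = torch.clamp((ldr_input - threshold) / (1.0 - threshold), 0.0, 1.0)
    weights = 1.0 + boost * sat_mask              # boost saturated pixels
    return (weights * (pred - target).abs()).mean()

# Dummy example: batch of 1, RGB, 32x32 images in [0, 1].
ldr = torch.rand(1, 3, 32, 32)
pred = torch.rand(1, 3, 32, 32, requires_grad=True)
target = torch.rand(1, 3, 32, 32)
loss = saturation_weighted_l1(pred, ldr, target)
loss.backward()
```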

Moreover, the proposed method outperforms existing HDR reconstruction techniques and provides improved generalization capabilities. Thus, HDRFormer can play a crucial role in future smart city applications.

An innovative calibration algorithm that exploits the unique attributes of binocular endoscopes has also been developed. By integrating monocular camera calibration principles, the proposed algorithm can effectively eliminate vertical disparity in stereo images while maintaining horizontal disparity, simplifying the subsequent stereo-matching operation and meeting strict accuracy standards.

The algorithm was experimentally validated through an investigation of a three-dimensional (3D) cardiac soft tissue surface reconstruction method. Using a stereo endoscope vision system and dense parallax images, 3D coordinates were acquired precisely within the left endoscope coordinate system.

Subsequently, the surface reconstruction process using the Delaunay triangulation method and a dual-pass filter effectively generated an accurate and detailed representation of the cardiac soft tissue surface. The reconstructed 3D spatial points aligned closely with coordinates obtained manually.
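
The triangulation step can be illustrated with a short sketch that builds a Delaunay mesh over the (x, y) projection of a reconstructed point cloud. The synthetic surface below stands in for endoscope data, and the dual-pass filtering stage is omitted.

```python
# Delaunay surface triangulation sketch over a synthetic 3D point cloud.
import numpy as np
from scipy.spatial import Delaunay

rng = np.random.default_rng(1)

# Synthetic smooth surface: a Gaussian bump sampled at scattered (x, y) points.
xy = rng.uniform(-1.0, 1.0, size=(500, 2))
z = np.exp(-2.0 * (xy[:, 0] ** 2 + xy[:, 1] ** 2))
points_3d = np.c_[xy, z]

# Triangulate in the 2D projection; each simplex indexes three 3D vertices.
tri = Delaunay(xy)
triangles = points_3d[tri.simplices]          # shape: (n_triangles, 3, 3)
print(f"{len(tri.simplices)} triangles over {len(points_3d)} points")
```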

A tensor sparse dictionary learning-based dose image reconstruction method has been developed recently by researchers. Specifically, the tensor coding was combined with compressed sensing data, two-dimensional (2D) dictionary learning was extended to 3D using a tensor product, and the X-ray acoustic signal spatial information was utilized more efficiently.

The researchers designed an alternating iterative solution for the tensor dictionary and tensor sparse coefficients to reduce reconstruction image artifacts caused by sparse sampling. They also built an X-ray-induced acoustic dose image reconstruction system, simulated X-ray acoustic signals based on patient information from a hospital, and created simulated datasets. Experimental results demonstrated that the proposed method could significantly improve reconstructed image quality and dose distribution accuracy compared to state-of-the-art imaging methods.
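
As a conceptual stand-in, conventional 2D patch-based dictionary learning, which the study extends to 3D tensor dictionaries via the tensor product, can be sketched with scikit-learn. The synthetic image and patch settings below are illustrative, not the study's simulated dose data or tensor formulation.

```python
# 2D patch-based dictionary learning and sparse reconstruction sketch.
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning
from sklearn.feature_extraction.image import (extract_patches_2d,
                                              reconstruct_from_patches_2d)

rng = np.random.default_rng(2)
image = rng.random((64, 64))                       # synthetic stand-in image

# Learn a dictionary of 8x8 patch atoms from randomly sampled patches.
patches = extract_patches_2d(image, (8, 8), max_patches=500, random_state=0)
data = patches.reshape(len(patches), -1)
dico = MiniBatchDictionaryLearning(n_components=32, alpha=1.0,
                                   random_state=0).fit(data)

# Sparse-code all patches and rebuild the image from the coded patches.
all_patches = extract_patches_2d(image, (8, 8))
codes = dico.transform(all_patches.reshape(len(all_patches), -1))
recon_patches = (codes @ dico.components_).reshape(all_patches.shape)
recon = reconstruct_from_patches_2d(recon_patches, image.shape)
print("reconstruction error:", np.mean((recon - image) ** 2))
```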

