AI Enhances Visual Inspection in Remanufacturing

In an article published in the CIRP Journal of Manufacturing Science and Technology, researchers explored automating visual inspection in remanufacturing, traditionally done by humans. They framed the challenge as a view planning problem and applied supervised and reinforcement learning with neural networks in a simulation environment.

Study: AI Enhances Visual Inspection in Remanufacturing. Image Credit: Gorodenkoff/Shutterstock.com

The authors evaluated the effectiveness of these methods for determining inspection poses in real time, specifically for electric starter motor remanufacturing, and presented an open-source framework for this purpose.

Background

Artificial intelligence (AI) is increasingly utilized in production processes to enhance efficiency and address challenges like skilled labor shortages and high costs. One crucial application is automating visual inspection in remanufacturing, a task traditionally performed by humans. Current methods struggle with flexibility due to the unpredictable nature of defects and the lack of comprehensive product models, especially when the original design information is unavailable.

Previous research has addressed view planning through the next-best-view (NBV) problem and the view planning problem (VPP) using supervised and reinforcement learning techniques. However, these approaches often rely on simplified models or pre-existing geometric data, which are not always available in remanufacturing scenarios.

This paper filled these gaps by proposing a novel framework for visual acquisition planning using AI methods that did not depend on prior geometric models. It formalized the problem and applied supervised and reinforcement learning to manage real-world inspection tasks, specifically for starter motors in remanufacturing. This approach aimed to improve adaptability and accuracy in environments with incomplete information, providing a foundation for future advancements in automated visual inspection.

Methodology for Vision Planning and Inspection Simulation

In the study of vision planning for automated inspection in remanufacturing, two key problems were addressed: overall surface inspection and detailed inspection of specific components. The overall inspection involved covering an entire product's surface to identify defects such as corrosion or mechanical damage. This problem was modeled as a sequence of viewpoints to ensure complete coverage, optimized using reinforcement learning to minimize the number of acquisition steps and the robot's trajectory.

The detailed inspection focused on specific regions of interest (RoIs), like damaged drive shafts or corroded pulleys, which required targeted scrutiny. This problem could also be tackled with reinforcement or supervised learning. The approach involved first solving the overall inspection problem to create a geometric model of the object. This model helped identify RoIs for further examination.

The researchers introduced a simulation framework that integrated both reinforcement and supervised learning approaches. The scan simulation environment modeled a three-dimensional (3D) object and used a virtual sensor to perform acquisitions, creating point clouds that represented the object's surface. Reinforcement learning agents were trained to optimize acquisition poses based on these point clouds, while supervised learning frameworks were used to predict optimal poses for inspecting RoIs.
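To make the setup concrete, the sketch below outlines how such a scan-simulation environment could be structured as a reinforcement learning environment: the object's surface is approximated by a fixed point set (a sphere here, purely for illustration), each action selects the next viewing direction, and the observation is the set of surface points acquired so far. All class and method names are illustrative assumptions and do not correspond to the authors' open-source implementation.

```python
# Minimal, self-contained sketch of a scan-simulation environment in the spirit of
# the framework described above. The object is approximated by a fixed set of
# surface points; a "scan" from a chosen viewing direction marks all points facing
# the virtual sensor as seen. Names and details are illustrative, not the paper's.
import numpy as np
import gymnasium as gym
from gymnasium import spaces


class ScanEnv(gym.Env):
    def __init__(self, n_surface=2048, target_coverage=0.95, max_steps=10):
        rng = np.random.default_rng(0)
        pts = rng.normal(size=(n_surface, 3))
        self.surface = pts / np.linalg.norm(pts, axis=1, keepdims=True)  # unit sphere
        self.target_coverage = target_coverage
        self.max_steps = max_steps
        # Action: the viewing direction of the next acquisition pose.
        self.action_space = spaces.Box(-1.0, 1.0, shape=(3,), dtype=np.float32)
        # Observation: per-point "seen" mask standing in for the merged point cloud.
        self.observation_space = spaces.MultiBinary(n_surface)

    def reset(self, *, seed=None, options=None):
        super().reset(seed=seed)
        self.seen = np.zeros(len(self.surface), dtype=np.int8)
        self.steps = 0
        return self.seen.copy(), {}

    def step(self, action):
        self.steps += 1
        view = np.asarray(action, dtype=np.float64)
        view /= np.linalg.norm(view) + 1e-8
        prev = self.seen.mean()
        # Virtual sensor: surface points whose outward normal faces the sensor are acquired.
        self.seen |= (self.surface @ view > 0.3).astype(np.int8)
        coverage = self.seen.mean()
        reward = coverage - prev                       # dense reward: coverage gained this step
        done = coverage >= self.target_coverage
        if done:
            reward += 1.0                              # sparse bonus once coverage target is met
        truncated = self.steps >= self.max_steps
        return self.seen.copy(), float(reward), bool(done), bool(truncated), {"coverage": coverage}
```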

The reinforcement learning agent received dense and sparse rewards based on the coverage achieved and aimed to maximize surface inspection efficiency. The supervised learning framework predicted camera poses from a dataset of point clouds and associated poses, evaluated using metrics for position and rotation accuracy. Various neural network architectures, including PointNet and its variants, were utilized for processing and interpreting 3D point cloud data.
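For the supervised side, a minimal PointNet-style pose regressor might look like the following: a shared per-point MLP, permutation-invariant max pooling, and a head that outputs a camera pose. This is a simplified stand-in for illustration, not the PCN or PointNet++ architectures evaluated in the paper.

```python
# Illustrative PointNet-style regressor that maps a point cloud to a camera pose
# (3D position + quaternion). A hypothetical simplification, not the authors' model.
import torch
import torch.nn as nn


class PointNetPoseRegressor(nn.Module):
    def __init__(self, pose_dim=7):                # 3 position + 4 quaternion components
        super().__init__()
        self.point_mlp = nn.Sequential(            # shared MLP applied to every point
            nn.Linear(3, 64), nn.ReLU(),
            nn.Linear(64, 128), nn.ReLU(),
            nn.Linear(128, 256), nn.ReLU(),
        )
        self.head = nn.Sequential(                 # regresses the acquisition pose
            nn.Linear(256, 128), nn.ReLU(),
            nn.Linear(128, pose_dim),
        )

    def forward(self, points):                     # points: (batch, n_points, 3)
        features = self.point_mlp(points)          # per-point features
        global_feat = features.max(dim=1).values   # permutation-invariant pooling
        return self.head(global_feat)


# Example forward pass on a batch of two point clouds with 1024 points each.
model = PointNetPoseRegressor()
poses = model(torch.randn(2, 1024, 3))
print(poses.shape)                                 # torch.Size([2, 7])
```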

Results and Analysis

The authors employed a dataset of synthetically generated starter engines, which varied in geometric properties due to different motor variants and potential damage.

Dataset overview: The dataset included 100 starter engines with nine randomly generated components and 28 parameters, saved in standard triangle language (STL) format. This diversity ensured a realistic representation of different engine types and conditions.

Agent configuration: For reinforcement learning, default parameters included a learning rate of 0.000078 and a discount factor of 0.9. The soft actor-critic (SAC) algorithm and point completion network (PCN) encoder were chosen for optimal performance. For supervised learning, the models used various feature extractors (PCN, PointNet, PointNet++), with PCN showing the best balance between accuracy and training duration.
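As a rough illustration, the reported defaults could be reproduced with an off-the-shelf SAC implementation such as Stable-Baselines3; in the authors' setup, a PCN-based encoder would additionally serve as the state feature extractor, which is omitted here for brevity.

```python
# Hypothetical training configuration mirroring the reported defaults
# (learning rate 0.000078, discount factor 0.9) using Stable-Baselines3's SAC.
from stable_baselines3 import SAC

env = ScanEnv()                      # the toy environment sketched earlier (illustrative)
model = SAC(
    "MlpPolicy",
    env,
    learning_rate=7.8e-5,            # reported default learning rate
    gamma=0.9,                       # reported discount factor
    verbose=1,
)
model.learn(total_timesteps=50_000)
```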

Reinforcement learning results: The reinforcement learning agent demonstrated that using specific state encodings significantly improved performance, achieving the required surface coverage more efficiently than random actions. The agent learned to choose poses that maximized coverage and minimized the number of acquisitions, leading to high stability and effective coverage.

Supervised learning results: Supervised learning models were evaluated on their ability to regress acquisition system poses from point clouds. PCN achieved near-identical accuracy to PointNet but with a shorter training duration. PointNet++ had higher accuracy but was less efficient. The authors highlighted the need for metrics beyond position and angular deviations to fully evaluate regression quality.
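The position and angular deviations mentioned above can be illustrated with simple metric functions; the exact formulations used in the paper may differ.

```python
# Illustrative evaluation metrics for pose regression: Euclidean position error
# and geodesic angle between predicted and ground-truth orientations (unit
# quaternions assumed here). An assumption for illustration, not the paper's metrics.
import numpy as np


def position_error(p_pred, p_true):
    """Euclidean distance between predicted and true camera positions."""
    return np.linalg.norm(p_pred - p_true)


def rotation_error_deg(q_pred, q_true):
    """Angular deviation in degrees between two unit quaternions."""
    q_pred = q_pred / np.linalg.norm(q_pred)
    q_true = q_true / np.linalg.norm(q_true)
    dot = np.clip(abs(np.dot(q_pred, q_true)), 0.0, 1.0)
    return np.degrees(2.0 * np.arccos(dot))


# Example: a pose 5 mm off in position and roughly 10 degrees off in orientation.
print(position_error(np.array([0.105, 0.0, 0.2]), np.array([0.1, 0.0, 0.2])))
print(rotation_error_deg(np.array([0.996, 0.087, 0.0, 0.0]), np.array([1.0, 0.0, 0.0, 0.0])))
```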

Individual inspection with reinforcement learning: The reinforcement learning approach was further tested for inspecting RoIs with varying sizes and numbers. Results indicated that agents effectively reduced the number of acquisitions needed, with fewer RoIs leading to increased complexity and exploration costs. The size of the RoIs had a minimal impact on the number of required acquisitions.

Conclusion

In conclusion, the researchers introduced an innovative simulation framework for automating visual inspection in remanufacturing using both supervised and reinforcement learning methods. The framework addressed the challenge of view planning by integrating 3D simulation models and virtual sensors to optimize inspection poses.

Results showed that reinforcement learning effectively improved surface coverage and inspection efficiency, while supervised learning enhanced pose prediction accuracy. Future work will focus on adapting these methods for real-world applications, including implementing detection algorithms and evaluating the framework's practicality in actual inspection environments. This research laid a foundation for more advanced and adaptable automated inspection systems.

Journal reference:
  • Kaiser, J., Koch, D., Gäbele, J., May, M. C., & Lanza, G. (2024). View planning in the visual inspection for remanufacturing using supervised- and reinforcement learning approaches. CIRP Journal of Manufacturing Science and Technology, 53, 128–138. DOI: 10.1016/j.cirpj.2024.07.006, https://www.sciencedirect.com/science/article/pii/S1755581724001159

Written by Soham Nandi

Soham Nandi is a technical writer based in Memari, India. His academic background is in Computer Science Engineering, specializing in Artificial Intelligence and Machine learning. He has extensive experience in Data Analytics, Machine Learning, and Python. He has worked on group projects that required the implementation of Computer Vision, Image Classification, and App Development.
