In an article published in the CIRP Journal of Manufacturing Science and Technology, researchers explored automating visual inspection in remanufacturing, traditionally done by humans. They framed the challenge as a view planning problem and applied supervised and reinforcement learning with neural networks in a simulation environment.
The authors evaluated the effectiveness of these approaches for determining inspection poses in real time, specifically for electric starter motor remanufacturing, and presented an open-source framework for this purpose.
Background
Artificial intelligence (AI) is increasingly utilized in production processes to enhance efficiency and address challenges like skilled labor shortages and high costs. One crucial application is automating visual inspection in remanufacturing, a task traditionally performed by humans. Current methods struggle with flexibility due to the unpredictable nature of defects and the lack of comprehensive product models, especially when the original design information is unavailable.
Previous research has addressed view planning through the next-best-view (NBV) problem and the view planning problem (VPP) using supervised and reinforcement learning techniques. However, these approaches often rely on simplified models or pre-existing geometric data, which are not always available in remanufacturing scenarios.
This paper filled these gaps by proposing a novel framework for visual acquisition planning using AI methods that did not depend on prior geometric models. It formalized the problem and applied supervised and reinforcement learning to manage real-world inspection tasks, specifically for starter motors in remanufacturing. This approach aimed to improve adaptability and accuracy in environments with incomplete information, providing a foundation for future advancements in automated visual inspection.
Methodology for Vision Planning and Inspection Simulation
In the study of vision planning for automated inspection in remanufacturing, two key problems were addressed: overall surface inspection and detailed inspection of specific components. The overall inspection involved covering an entire product's surface to identify defects such as corrosion or mechanical damage. This problem was modeled as a sequence of viewpoints to ensure complete coverage, optimized using reinforcement learning to minimize the number of acquisition steps and the length of the robot's trajectory.
The detailed inspection focused on specific regions of interest (RoIs), such as damaged drive shafts or corroded pulleys, which required targeted scrutiny. This problem could be tackled with either reinforcement or supervised learning. The approach involved first solving the overall inspection problem to create a geometric model of the object, which then helped identify RoIs for further examination.
The researchers introduced a simulation framework that integrated both reinforcement and supervised learning approaches. The scan simulation environment modeled a three-dimensional (3D) object and used a virtual sensor to perform acquisitions, creating point clouds that represented the object's surface. Reinforcement learning agents were trained to optimize acquisition poses based on these point clouds, while supervised learning frameworks were used to predict optimal poses for inspecting RoIs.
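To make the setup concrete, the following is a minimal sketch of a scan-simulation environment of the kind described above, written against the gymnasium API. All class names, the hemisphere-based visibility proxy, and the parameter values are illustrative assumptions, not the authors' open-source framework.

```python
# Toy scan-simulation environment: the agent picks sensor poses around a simulated
# object and is rewarded for newly covered surface points (illustrative sketch only).
import numpy as np
import gymnasium as gym
from gymnasium import spaces


class ScanSimEnv(gym.Env):
    """Agent selects sensor directions; reward reflects newly covered surface."""

    def __init__(self, surface_points, coverage_target=0.95, max_steps=20):
        super().__init__()
        self.surface = np.asarray(surface_points, dtype=np.float64)  # (N, 3) surface samples
        self.coverage_target = coverage_target
        self.max_steps = max_steps
        # Action: sensor direction as (azimuth, elevation) angles.
        self.action_space = spaces.Box(
            low=np.array([-np.pi, -np.pi / 2], dtype=np.float32),
            high=np.array([np.pi, np.pi / 2], dtype=np.float32),
        )
        # Observation: per-point "covered" flags; a real framework would instead
        # encode the accumulated point cloud (e.g. with a PCN/PointNet encoder).
        self.observation_space = spaces.MultiBinary(len(self.surface))

    def reset(self, seed=None, options=None):
        super().reset(seed=seed)
        self.covered = np.zeros(len(self.surface), dtype=bool)
        self.steps = 0
        return self.covered.astype(np.int8), {}

    def _visible(self, action):
        az, el = float(action[0]), float(action[1])
        view_dir = np.array([np.cos(el) * np.cos(az), np.cos(el) * np.sin(az), np.sin(el)])
        # Crude visibility proxy: points on the hemisphere facing the sensor.
        unit = self.surface / (np.linalg.norm(self.surface, axis=1, keepdims=True) + 1e-9)
        return unit @ view_dir > 0.3

    def step(self, action):
        self.steps += 1
        newly = self._visible(action) & ~self.covered
        self.covered |= newly
        coverage = self.covered.mean()
        # Dense reward for newly covered surface, plus a sparse bonus at the target.
        reward = float(newly.mean()) + (5.0 if coverage >= self.coverage_target else 0.0)
        terminated = bool(coverage >= self.coverage_target)
        truncated = self.steps >= self.max_steps
        return self.covered.astype(np.int8), reward, terminated, truncated, {}
```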
The reinforcement learning agent received dense and sparse rewards based on the coverage achieved and aimed to maximize surface inspection efficiency. The supervised learning framework predicted camera poses from a dataset of point clouds and associated poses, evaluated using metrics for position and rotation accuracy. Various neural network architectures, including PointNet and its variants, were utilized for processing and interpreting 3D point cloud data.
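For the supervised branch, a pose regressor of the kind described above can be sketched as a PointNet-style network that maps a point cloud to a camera pose. The architecture below is a simplified assumption for illustration, not the exact networks evaluated in the paper.

```python
# Minimal PointNet-style pose regressor (PyTorch): point cloud -> 7-D pose
# (xyz position + unit quaternion). Sizes and layers are illustrative only.
import torch
import torch.nn as nn


class PosePointNet(nn.Module):
    def __init__(self, feat_dim: int = 256):
        super().__init__()
        # Shared per-point MLP (the core PointNet idea), then symmetric max-pooling.
        self.point_mlp = nn.Sequential(
            nn.Linear(3, 64), nn.ReLU(),
            nn.Linear(64, 128), nn.ReLU(),
            nn.Linear(128, feat_dim), nn.ReLU(),
        )
        self.head = nn.Sequential(
            nn.Linear(feat_dim, 128), nn.ReLU(),
            nn.Linear(128, 7),          # 3 for position, 4 for quaternion
        )

    def forward(self, points: torch.Tensor) -> torch.Tensor:
        feats = self.point_mlp(points)          # (B, N, feat_dim)
        global_feat = feats.max(dim=1).values   # order-invariant pooling over points
        out = self.head(global_feat)
        pos, quat = out[:, :3], out[:, 3:]
        quat = quat / quat.norm(dim=1, keepdim=True).clamp_min(1e-8)  # valid rotation
        return torch.cat([pos, quat], dim=1)


# Usage: regress acquisition poses from a batch of 1,024-point clouds.
model = PosePointNet()
pred = model(torch.randn(8, 1024, 3))   # -> (8, 7)
```

The max-pooling step is what makes the prediction invariant to point ordering, which is why PointNet-family encoders are a natural fit for raw scan data.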
Results and Analysis
The authors employed a dataset of synthetically generated starter engines, which varied in geometric properties due to different motor variants and potential damage.
Dataset overview: The dataset included 100 starter engines with nine randomly generated components and 28 parameters, saved in standard triangle language (STL) format. This diversity ensured a realistic representation of different engine types and conditions.
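As a hedged illustration of how such STL data could be turned into point clouds for training, the snippet below uses the trimesh library; the directory layout and library choice are assumptions, not part of the published dataset tooling.

```python
# Load generated starter-motor STL meshes and sample surface point clouds from them.
from pathlib import Path
import numpy as np
import trimesh


def load_point_clouds(stl_dir: str, n_points: int = 2048) -> list[np.ndarray]:
    clouds = []
    for stl_file in sorted(Path(stl_dir).glob("*.stl")):
        mesh = trimesh.load_mesh(stl_file)
        points, _ = trimesh.sample.sample_surface(mesh, n_points)  # (n_points, 3)
        clouds.append(np.asarray(points))
    return clouds


# e.g. clouds = load_point_clouds("data/starter_motors")  # hypothetical directory
```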
Agent configuration: For reinforcement learning, default parameters included a learning rate of 0.000078 and a discount factor of 0.9. The soft actor-critic (SAC) algorithm and point completion network (PCN) encoder were chosen for optimal performance. For supervised learning, the models used various feature extractors (PCN, PointNet, PointNet++), with PCN showing the best balance between accuracy and training duration.
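The reported hyperparameters can be illustrated with an off-the-shelf SAC implementation; the sketch below uses stable-baselines3 as a stand-in (the paper's own framework and PCN state encoder are not reproduced here), together with the toy ScanSimEnv sketched earlier.

```python
# Hedged sketch: SAC with the reported learning rate and discount factor,
# trained on the illustrative ScanSimEnv defined above.
import numpy as np
from stable_baselines3 import SAC

env = ScanSimEnv(surface_points=np.random.randn(512, 3))
model = SAC(
    "MlpPolicy",
    env,
    learning_rate=7.8e-5,   # reported default learning rate (0.000078)
    gamma=0.9,              # reported discount factor
    verbose=1,
)
model.learn(total_timesteps=10_000)
```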
Reinforcement learning results: The reinforcement learning agent demonstrated that using specific state encodings significantly improved performance, achieving the required surface coverage more efficiently than random actions. The agent learned to choose poses that maximized coverage and minimized the number of acquisitions, leading to high stability and effective coverage.
Supervised learning results: Supervised learning models were evaluated on their ability to regress acquisition system poses from point clouds. PCN achieved near-identical accuracy to PointNet but with a shorter training duration. PointNet++ had higher accuracy but was less efficient. The authors highlighted the need for metrics beyond position and angular deviations to fully evaluate regression quality.
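The two reported evaluation metrics, position and angular deviation, can be written down directly; the quaternion convention below is an assumption, as the paper may parameterize rotations differently.

```python
# Position and angular deviation between predicted and ground-truth poses.
import numpy as np


def position_error(p_pred: np.ndarray, p_true: np.ndarray) -> float:
    """Euclidean distance between predicted and ground-truth camera positions."""
    return float(np.linalg.norm(p_pred - p_true))


def angular_error_deg(q_pred: np.ndarray, q_true: np.ndarray) -> float:
    """Geodesic angle (degrees) between two unit quaternions."""
    dot = abs(float(np.dot(q_pred, q_true)))
    return float(np.degrees(2.0 * np.arccos(np.clip(dot, -1.0, 1.0))))
```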
Individual inspection with reinforcement learning: The reinforcement learning approach was further tested for inspecting RoIs with varying sizes and numbers. Results indicated that agents effectively reduced the number of acquisitions needed, with fewer RoIs leading to increased complexity and exploration costs. The size of the RoIs had a minimal impact on the number of required acquisitions.
Conclusion
The researchers introduced an innovative simulation framework for automating visual inspection in remanufacturing using both supervised and reinforcement learning methods. The framework addressed the challenge of view planning by integrating 3D simulation models and virtual sensors to optimize inspection poses.
Results showed that reinforcement learning effectively improved surface coverage and inspection efficiency, while supervised learning enhanced pose prediction accuracy. Future work will focus on adapting these methods for real-world applications, including implementing detection algorithms and evaluating the framework's practicality in actual inspection environments. This research laid a foundation for more advanced and adaptable automated inspection systems.
Journal reference:
- Kaiser, J., Koch, D., Gäbele, J., May, M. C., & Lanza, G. (2024). View planning in the visual inspection for remanufacturing using supervised- and reinforcement learning approaches. CIRP Journal of Manufacturing Science and Technology, 53, 128–138. DOI: 10.1016/j.cirpj.2024.07.006, https://www.sciencedirect.com/science/article/pii/S1755581724001159