Machine Learning Approach to Enhance Inkjet Print Head Monitoring

In a paper published in the journal Scientific Reports, researchers enhanced inkjet print head monitoring in digital manufacturing. To address the challenge of operating many nozzles simultaneously, they employed machine learning (ML) algorithms to classify nozzle jetting conditions from self-sensing signals, achieving an accuracy of over 99.6%. The study also introduced a hybrid monitoring approach that divides the feature space into different regions to improve monitoring accuracy and efficiency.

Study: Machine Learning Approach to Enhance Inkjet Print Head Monitoring. Image credit: Generated using DALL.E.3

Inkjet Printing and Artificial Intelligence (AI)/ML

Inkjet printing has widespread applications in digital fabrication, including three-dimensional (3D) bioprinting, sensor production, energy devices, and display manufacturing. The use of multi-nozzle printheads in industrial settings significantly boosts efficiency, but it also makes real-time monitoring of all nozzles crucial, since defective nozzles lead to flaws in the final printed products. Traditional monitoring approaches rely on time-consuming jet visualization or printed-pattern inspection.

In recent years, AI and ML have gained popularity in manufacturing, including additive manufacturing, where they aim to make processes more efficient and intelligent. In inkjet printing, prior AI and ML applications have focused on understanding jetting behavior, nozzle selection, and jetting detection from visualized jet images. However, these methods are unsuitable for real-time applications because of their long scanning times.

Setup and Monitoring

Monitoring System and Experiment Setup: The jet monitoring system was built on a printing system that integrates both self-sensing and drop-watching capabilities. The printhead contained 1024 nozzles, divided into 8 rows of 128 nozzles per row, each with an independent driver for nozzle control. The self-sensing module allowed scanning and collection of self-sensing signals from all nozzles.

The inkjet head driver's jetting signals triggered synchronized jet visualization and self-sensing data acquisition. In-house software managed the entire system, encompassing jetting, printing patterns, and monitoring, and handled data collection and storage for subsequent analysis. The experiments used a model fluid, with the printhead temperature maintained at 30 °C.

ML Modeling for Nozzle Jetting Status: The self-sensing signals were labeled (jetting or non-jetting) through streaming image analysis. Over 150,000 jetting samples were collected, and these data were processed and used for training ML models. Once trained, these models could monitor the jetting status and select suitable nozzles for printing.

Jet Visualization and Labeling: Researchers achieved jet visualization using a Charge-Coupled Device (CCD) camera and a telecentric lens. They employed an image analysis algorithm to determine jetting status and label the sensing signal for the modeling process. The team pre-processed the obtained images to create binary images, enhancing their uniformity for automated analysis. They used a pixel-gradient method to determine jetting status based on peaks in the gradients, allowing for efficient labeling of self-sensing signals. 
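
A minimal sketch of such a gradient-based check is shown below, assuming a grayscale drop image as input; the function name, binarization threshold, and peak threshold are illustrative placeholders rather than the authors' published parameters.

```python
import numpy as np

def is_jetting(gray_image: np.ndarray,
               binarize_threshold: float = 0.5,
               peak_threshold: float = 10.0) -> bool:
    """Illustrative jetting check: binarize the drop image, then look for
    strong gradient peaks along the jetting direction where a droplet edge
    would appear. Threshold values are placeholders, not published settings."""
    # Normalize to [0, 1] and binarize to make images uniform for analysis.
    img = gray_image.astype(float)
    img = (img - img.min()) / (np.ptp(img) + 1e-9)
    binary = (img > binarize_threshold).astype(float)

    # Collapse each row to a single value, then take the gradient of that
    # profile; a droplet in flight produces sharp peaks in the gradient.
    profile = binary.sum(axis=1)
    gradient = np.abs(np.diff(profile))

    # Label the nozzle as jetting if any peak exceeds the threshold.
    return bool(gradient.max() > peak_threshold)
```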

Self-Sensing and Feature Extraction: Self-sensing signals were acquired directly from the nozzles when the jetting waveform was applied, allowing high scanning frequencies. The driving signals were normalized and filtered to remove their influence and isolate the self-sensing component. Feature extraction compared each nozzle's normalized jetting signal with a reference signal, computing phase and amplitude scores for each nozzle. The objective was to detect significant deviations in jet behavior from the reference status.
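
A rough sketch of how the phase and amplitude scores might be computed against a reference signal is given below; using cross-correlation for the phase score and an RMS ratio for the amplitude score is an assumption for illustration, not the paper's exact formulation.

```python
import numpy as np

def extract_features(signal: np.ndarray, reference: np.ndarray, fs: float):
    """Illustrative phase and amplitude scores of a nozzle's self-sensing
    signal relative to a reference jetting signal. fs is the sampling rate."""
    # Remove the mean so any residual driving-waveform offset does not bias the scores.
    s = signal - signal.mean()
    r = reference - reference.mean()

    # Phase score: lag (in seconds) that maximizes the cross-correlation;
    # a large shift indicates jet behavior deviating from the reference.
    corr = np.correlate(s, r, mode="full")
    lag = np.argmax(corr) - (len(r) - 1)
    phase_score = lag / fs

    # Amplitude score: RMS ratio between nozzle and reference signals;
    # values far from 1 suggest a weak or missing acoustic response.
    amplitude_score = np.sqrt(np.mean(s**2)) / (np.sqrt(np.mean(r**2)) + 1e-12)

    return phase_score, amplitude_score
```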

ML Approaches: ML models classified nozzle jetting status from the extracted features, treating it as a two-dimensional (2D) classification problem. Three popular classification models were used: a linear support vector machine (SVM), a multilayer neural network, and Gaussian naïve Bayes. Each model learns a decision boundary in the feature space to distinguish jetting from non-jetting nozzles.
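
The three classifiers can be compared on the two extracted features with a few lines of scikit-learn; the sketch below uses synthetic data in place of the study's roughly 150,000 labeled samples, and the model hyperparameters are assumptions.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import LinearSVC
from sklearn.neural_network import MLPClassifier
from sklearn.naive_bayes import GaussianNB

# X: (n_samples, 2) array of [phase_score, amplitude_score]; y: 1 = jetting, 0 = non-jetting.
# Random placeholder data stands in for the labeled self-sensing samples.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, shuffle=True, random_state=0)

models = {
    "Linear SVM": LinearSVC(),
    "Multilayer NN": MLPClassifier(hidden_layer_sizes=(16, 16), max_iter=1000),
    "Gaussian NB": GaussianNB(),
}
for name, model in models.items():
    model.fit(X_train, y_train)
    print(f"{name}: test accuracy = {model.score(X_test, y_test):.4f}")
```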

Optimizing Jetting Status Modeling and Classification

The analysis begins with the acquisition and preprocessing of jetting images and self-sensing signals, using the developed algorithm to determine jetting status. The investigation centers on two primary jetting conditions, good jetting and non-jetting, with slow jetting grouped under non-jetting for simplicity. Notably, the self-sensing signals shift from passive to active states across different rows, requiring preprocessing before extracting the key features that serve as the foundation for subsequent modeling.

Significantly, the study adopts a standardized modeling approach that deliberately ignores row-dependent differences so that a single model covers a wide range of jetting conditions and printhead variations. The modeling phase examines training performance using shuffled data, with 80% designated for training and the remaining 20% for testing. All three models demonstrate impressive accuracy, exceeding 99.6% on the training data. The multilayer neural network performs best at around 99.8%, followed closely by the SVM and the naïve Bayes model at near 99.6%. The naïve Bayes model has the advantage of faster training and lower computational complexity.

Researchers examined the classification results on the test data, uncovering complexities caused by overlapping phase and amplitude scores between jetting and non-jetting nozzles. Misclassifications fall into two groups: the first involves incorrectly identifying jetting nozzles as non-jetting (case 1), while the second involves incorrectly classifying non-jetting nozzles as jetting (case 2). Case 2 receives particular attention because of its implications for print quality. The naïve Bayes model performs better at selecting only truly jetting nozzles, while the SVM emphasizes inclusivity, even at the risk of misclassifications.
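
These two error types map onto a confusion matrix: case 1 corresponds to false negatives (jetting nozzles dropped from the printable set, reducing efficiency), and case 2 to false positives (faulty nozzles allowed to print, causing defects). A self-contained sketch with hypothetical labels:

```python
import numpy as np
from sklearn.metrics import confusion_matrix

# Hypothetical labels: 1 = jetting, 0 = non-jetting.
y_true = np.array([1, 1, 1, 0, 0, 1, 0, 1])
y_pred = np.array([1, 0, 1, 0, 1, 1, 0, 1])  # predictions from any of the classifiers

tn, fp, fn, tp = confusion_matrix(y_true, y_pred, labels=[0, 1]).ravel()
print(f"case 1 (jetting flagged as non-jetting, lost throughput): {fn}")
print(f"case 2 (non-jetting passed as jetting, print defects):    {fp}")
```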

Given these insights, the naïve Bayes model is the preferred choice for quality assurance, especially considering its faster training, which suits real-time applications. Researchers also explored how the models respond to varying classification thresholds, aiming to optimize efficiency. A threshold of 0.5 is a reliable choice for the naïve Bayes model when selecting jetting nozzles, whereas the SVM emphasizes printing efficiency. The multilayer neural network strikes a balance by employing two thresholds to divide the feature space into three regions, enabling more nuanced classification.
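
The two-threshold idea can be sketched as follows for any classifier that outputs a jetting probability; the threshold values of 0.2 and 0.8, and the synthetic features, are illustrative assumptions rather than the study's settings.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

def classify_with_regions(model, X, low=0.2, high=0.8):
    """Split nozzles into three regions using two probability thresholds:
    confident jetting, confident non-jetting, and an uncertain middle region."""
    p_jet = model.predict_proba(X)[:, 1]  # probability of class 1 (jetting)
    return np.where(p_jet >= high, "jetting",
                    np.where(p_jet <= low, "non-jetting", "uncertain"))

# Example with synthetic [phase_score, amplitude_score] features.
rng = np.random.default_rng(1)
X_train = rng.normal(size=(500, 2))
y_train = (X_train.sum(axis=1) > 0).astype(int)
nn = MLPClassifier(hidden_layer_sizes=(16, 16), max_iter=1000).fit(X_train, y_train)
print(classify_with_regions(nn, rng.normal(size=(5, 2))))
```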

Conclusion

To sum up, this study uses ML for automated faulty-nozzle detection in multi-nozzle printheads. It extracts critical parameters from self-sensing signals and employs an improved verification algorithm. The researchers prefer the Gaussian naïve Bayes model for its faster training and reduced misclassification rate. Accuracy is enhanced by combining self-sensing with drop visualization, although the approach is sensitive to noise and pre-processing parameters. Looking forward, AI techniques such as convolutional neural networks (CNNs) or recurrent neural networks (RNNs) hold promise for improving real-time detection of faulty jet behaviors.

Journal reference:

Written by

Silpaja Chandrasekar

Dr. Silpaja Chandrasekar has a Ph.D. in Computer Science from Anna University, Chennai. Her research expertise lies in analyzing traffic parameters under challenging environmental conditions. Additionally, she has gained valuable exposure to diverse research areas, such as detection, tracking, classification, medical image analysis, cancer cell detection, chemistry, and Hamiltonian walks.


