HeinSight3.0 Uses Computer Vision for Liquid Extraction

In a recent article published in the journal Device, researchers introduced an innovative high-throughput automation platform called "HeinSight3.0", which integrates a computer vision (CV) system to monitor and analyze liquid-liquid extraction (LLE) processes in real time. The system uses machine learning and image analysis to detect and measure visual cues such as liquid levels, turbidity, homogeneity, volume, and color across multiple vials simultaneously. The goal is to speed up LLE optimization and create a self-driving lab for workup processes.

Study: HeinSight3.0 Uses Computer Vision for Liquid Extraction. Image Credit: Collagery/Shutterstock.com

Background

LLE is a simple and cost-effective method for separating chemicals based on their differential solubility in two immiscible liquid phases. It is commonly used in labs and industries, especially for compounds that are heat-sensitive or have high boiling points. However, optimizing LLE can be challenging due to factors such as solvent choice, volume ratio, pH, temperature, and concentration. Additionally, LLE can produce emulsions, rag layers, or solid dispersions, complicating separation and affecting yield and purity.
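To make the role of these parameters more concrete, the short Python sketch below works through the textbook relationship between a solute's distribution coefficient, the organic-to-aqueous volume ratio, and the fraction of solute recovered. The function names and numbers are illustrative and are not drawn from the study.

```python
# Illustrative only: textbook single-stage LLE recovery as a function of the
# distribution coefficient K = [solute]_org / [solute]_aq and phase volumes.

def fraction_extracted(K: float, v_org: float, v_aq: float) -> float:
    """Fraction of solute transferred to the organic phase after one
    equilibrium extraction, assuming ideal partitioning."""
    ratio = K * (v_org / v_aq)
    return ratio / (1.0 + ratio)

def fraction_after_n_extractions(K: float, v_org: float, v_aq: float, n: int) -> float:
    """Cumulative recovery after n successive extractions with fresh solvent."""
    remaining = (1.0 / (1.0 + K * (v_org / v_aq))) ** n
    return 1.0 - remaining

if __name__ == "__main__":
    K = 4.0  # assumed distribution coefficient, for demonstration only
    print(f"One 10 mL extraction of a 10 mL aqueous sample: {fraction_extracted(K, 10, 10):.1%}")
    print(f"Two 5 mL extractions of the same sample:        {fraction_after_n_extractions(K, 5, 10, 2):.1%}")
```

The second call reproduces the familiar result that several small extractions recover more material than a single large one, which is part of why the volume ratio appears among the parameters worth screening.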

To address these challenges, researchers have been developing automation technologies and high-throughput techniques to improve the efficiency of LLE condition screening and optimization. Traditional methods often rely on manual or destructive measurements, such as rulers, liquid chromatography, or other offline analytical techniques. These approaches are labor-intensive and time-consuming, and they may not provide real-time information on separation dynamics such as separation time, mechanism, and distribution coefficients.

About the Research

In this paper, the authors designed and developed HeinSight3.0, a system combining high-throughput hardware and CV technology to monitor and analyze LLE processes in real time using multiple visual cues. The hardware includes a commercial magnetic vertical tumble stirrer and a modified heat block that can screen up to 12 vials simultaneously. It also features electroluminescent materials that provide consistent bottom and back illumination, ensuring stable lighting for image capture. Additionally, the system is equipped with two webcams that record videos of the vials before, during, and after stirring.

HeinSight3.0's CV system relies on the You Only Look Once (YOLO) machine learning model, known for its fast and accurate object detection. The system first identifies the vials as regions of interest and then detects the liquid levels within each vial. It can classify and locate these regions, adapting to different vial types and configurations. The system also detects and quantifies visual cues such as liquid levels, turbidity, volume, homogeneity, and color. By integrating these cues with process parameters such as temperature and stir rate, the system enables real-time analysis of key outcomes, including separation time, emulsion presence, and the volume ratio of the layers, helping to optimize separation parameters.
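For readers unfamiliar with how such a pipeline fits together, the sketch below shows one plausible way to combine a YOLO-style detector with simple image statistics. It is not the authors' code: the weights file, class names, and cue formulas are assumptions made for illustration.

```python
# A minimal sketch (not the published implementation) of YOLO-based detection
# followed by per-region image statistics. Assumes the ultralytics and OpenCV
# packages are installed; "vial_liquid.pt" is a hypothetical custom-trained
# weights file with classes such as "vial" and "liquid_layer".
import cv2
from ultralytics import YOLO

model = YOLO("vial_liquid.pt")            # hypothetical custom weights
frame = cv2.imread("vials_after_settling.jpg")
results = model(frame)[0]                 # run detection on a single frame

for box in results.boxes:
    cls_name = results.names[int(box.cls)]
    x1, y1, x2, y2 = map(int, box.xyxy[0])
    crop = frame[y1:y2, x1:x2]

    # Simple image-statistics proxies for the visual cues described above.
    mean_bgr = crop.mean(axis=(0, 1))            # average color of the region
    gray = cv2.cvtColor(crop, cv2.COLOR_BGR2GRAY)
    turbidity_proxy = gray.std()                 # texture/scatter as a crude turbidity cue
    rel_height = (y2 - y1) / frame.shape[0]      # stand-in for relative layer volume

    print(f"{cls_name}: color(BGR)={mean_bgr.round(1)}, "
          f"turbidity~{turbidity_proxy:.1f}, rel_height={rel_height:.2f}")
```

In practice, a trained model of this kind would be run on every frame of the webcam videos so that the per-region statistics become time series that can be tied to stir rate and temperature.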

Research Findings

The outcomes demonstrated the performance and versatility of the newly designed system across three case studies: excess reagent removal, impurity recovery, and Grignard workup. In each case, the authors tested various conditions and evaluated how different parameters affected separation efficiency and quality using the system’s visual cues. The system successfully detected phase boundaries, volumes, colors, and turbidity in all scenarios, even with colorless, turbid, or non-linear phases. It also provided valuable insights into separation dynamics, such as separation time, mechanism, and stability, which are difficult to obtain with traditional methods.
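As one illustration of how a separation time might be read out of a video-derived signal, the snippet below applies a simple threshold-and-hold rule to a per-frame turbidity trace. The rule and thresholds are invented for demonstration and are not the criteria used in the paper.

```python
# Illustrative only: estimate a "separation time" from a per-frame turbidity
# signal sampled at a known frame rate. Thresholds here are made up.

def estimate_separation_time(turbidity, fps, threshold=0.1, hold_frames=30):
    """Return the time (s) at which turbidity first drops below `threshold`
    and stays there for `hold_frames` consecutive frames, or None."""
    run = 0
    for i, value in enumerate(turbidity):
        run = run + 1 if value < threshold else 0
        if run >= hold_frames:
            return (i - hold_frames + 1) / fps
    return None

# Example with a synthetic decaying signal sampled at 10 frames per second.
signal = [1.0 * 0.9 ** i for i in range(200)]
print(estimate_separation_time(signal, fps=10))
```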

The system was also tested against literature data to evaluate its generalizability and adaptability. It performed well with images and videos from diverse sources, lighting conditions, vessels, and chemical environments. The system effectively analyzed phase separation in various LLE examples from previous studies, including ionic liquids, deep eutectic solvents, and switchable solvents.

Applications

The proposed system is a powerful tool for screening and optimizing LLE processes using CV algorithms. It offers comprehensive and real-time data, facilitating quicker and more informed decision-making and feedback control. It can also handle different vial types and configurations, making it suitable for high-throughput experimentation and diverse applications. The system can be integrated with other automation technologies and artificial intelligence (AI) methods to create a self-driving lab for workup processes, capable of autonomously planning, executing, and analyzing experiments based on visual cues. It can also be extended to other chemical processes involving visual analysis, such as crystallization, distillation, filtration, mixing, and reaction monitoring.
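Conceptually, such a closed loop could resemble the sketch below, in which a screening script alternates between running an extraction and scoring the resulting video. Every function and scoring rule here is hypothetical and stands in for hardware-control and CV-analysis interfaces that the article does not specify.

```python
# Conceptual sketch of a screening loop; all functions are placeholders.
import itertools
import random

random.seed(0)  # reproducible stand-in scores

SOLVENTS = ["ethyl acetate", "MTBE", "2-MeTHF"]   # hypothetical candidates
VOLUME_RATIOS = [0.5, 1.0, 2.0]                   # organic:aqueous

def run_extraction(solvent: str, ratio: float) -> None:
    """Placeholder for dispensing, stirring, and settling on the hardware."""
    pass

def analyze_video(solvent: str, ratio: float) -> float:
    """Placeholder for the CV step: would combine cues such as separation
    time, layer clarity, and absence of emulsion into one score."""
    return random.random()  # stand-in score, higher is better

best_score, best_condition = -1.0, None
for solvent, ratio in itertools.product(SOLVENTS, VOLUME_RATIOS):
    run_extraction(solvent, ratio)
    score = analyze_video(solvent, ratio)
    if score > best_score:
        best_score, best_condition = score, (solvent, ratio)

print(f"Best screened condition: {best_condition} (score {best_score:.2f})")
```

A genuinely self-driving version would replace the exhaustive grid with an optimizer that proposes the next condition from previous scores, which is the kind of feedback loop the authors envision.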

Conclusion

In summary, the novel system proved effective and robust for monitoring and analyzing LLE processes in real time. It captured and evaluated multiple visual cues, enabling thorough data collection and early-stage optimization. Overall, this work represents a significant advancement toward autonomous LLE screening guided by visual cues and moves closer to a self-driving lab for workup processes. Future improvements should focus on integrating additional visual cues, such as fluorescence and polarization, and developing an autonomous feedback loop to optimize LLE processes.

Written by

Muhammad Osama

Muhammad Osama is a full-time data analytics consultant and freelance technical writer based in Delhi, India. He specializes in transforming complex technical concepts into accessible content. He has a Bachelor of Technology in Mechanical Engineering with specialization in AI & Robotics from Galgotias University, India, and he has extensive experience in technical content writing, data science and analytics, and artificial intelligence.
