In a recent article published in the journal Device, researchers introduced an innovative high-throughput automation platform called "HeinSight3.0", which integrates a computer vision (CV) system to monitor and analyze liquid-liquid extraction (LLE) processes in real time. The system uses machine learning and image analysis to detect and measure visual cues such as liquid levels, turbidity, homogeneity, volume, and color across multiple vials simultaneously. The goal is to speed up LLE optimization and create a self-driving lab for workup processes.
Background
LLE is a simple and cost-effective method for separating chemicals based on their solubility and immiscibility in two liquid phases. It is commonly used in labs and industries, especially for compounds that are heat-sensitive or have high boiling points. However, optimizing LLE can be challenging due to factors like solvent choice, volume ratio, pH, temperature, and concentration. Additionally, LLE can produce emulsions, rag layers, or solid dispersions, complicating separation and affecting yield and purity.
To address these challenges, researchers have been developing automation technologies and high-throughput techniques to improve the efficiency of LLE condition screening and optimization. Traditional methods often rely on manual or destructive measurements, such as rulers, liquid chromatography, or other analytical techniques. These approaches are labor-intensive and time-consuming, and they may not provide real-time information on separation dynamics, such as separation time, mechanism, and distribution coefficients.
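To make the distribution coefficient mentioned above concrete: it is the equilibrium ratio of a solute's concentration in the organic phase to its concentration in the aqueous phase. The function below is a minimal illustrative sketch of that textbook definition, not code from the paper; the function name and units are my own choices.

```python
def distribution_coefficient(conc_organic: float, conc_aqueous: float) -> float:
    """Distribution coefficient K_D = [solute]_org / [solute]_aq at equilibrium.

    K_D > 1 means the solute partitions preferentially into the organic
    phase; K_D < 1 means it stays mostly in the aqueous phase. Both
    concentrations must be in the same units (e.g., mg/mL).
    """
    if conc_aqueous <= 0:
        raise ValueError("aqueous concentration must be positive")
    return conc_organic / conc_aqueous


# Example: 8.0 mg/mL in the organic layer vs. 2.0 mg/mL in the aqueous
# layer gives K_D = 4.0, i.e., the solute favors the organic phase.
print(distribution_coefficient(8.0, 2.0))
```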
About the Research
In this paper, the authors designed and developed HeinSight3.0, a system combining high-throughput hardware and CV technology to monitor and analyze LLE processes in real time using multiple visual cues. The hardware includes a commercial magnetic vertical tumble stirrer and a modified heat block that can screen up to 12 vials simultaneously. It also features electroluminescent materials for consistent bottom and backlight illumination, ensuring stable lighting for image capture. Additionally, the system is equipped with two webcams that record videos of the vials before, during, and after stirring.
HeinSight3.0's CV system relies on the you-only-look-once (YOLO) machine learning model, known for its fast and accurate object detection. The system first identifies the vials as regions of interest and then detects the liquid levels within each vial. It can classify and locate these regions, adapting to different vial types and configurations. The system also detects and quantifies visual cues like liquid levels, turbidity, volume, homogeneity, and color. By integrating these cues with process parameters like temperature and stir rate, the system enables real-time analysis of key process outcomes, including separation time, emulsion presence, and the volume ratio of the layers, helping to optimize separation parameters.
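To illustrate the kind of post-processing that turns detected liquid levels into a volume ratio, here is a minimal sketch (my own illustration, not the authors' implementation). It assumes the detector has returned three pixel y-coordinates per vial (meniscus, phase interface, vial bottom, with image y increasing downward) and that the vial walls are vertical, so layer heights are proportional to layer volumes.

```python
def phase_volume_ratio(meniscus_y: float, interface_y: float, bottom_y: float) -> float:
    """Ratio of top-phase volume to bottom-phase volume in one vial.

    Coordinates are image-space pixels with y increasing downward, so
    meniscus_y <= interface_y < bottom_y. For a vial with vertical
    walls, pixel heights are proportional to volumes, so the ratio of
    heights equals the ratio of volumes.
    """
    top_height = interface_y - meniscus_y
    bottom_height = bottom_y - interface_y
    if top_height < 0 or bottom_height <= 0:
        raise ValueError("expected meniscus_y <= interface_y < bottom_y")
    return top_height / bottom_height


# Example: interface exactly midway between meniscus and bottom
# gives equal layer volumes (ratio 1.0).
print(phase_volume_ratio(100.0, 200.0, 300.0))
```

In a real pipeline these y-coordinates would come from the bounding boxes of the detected liquid-level regions, and the ratio could be tracked frame by frame to estimate separation time.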
Research Findings
The results demonstrated the performance and versatility of the newly designed system across three case studies: excess reagent removal, impurity recovery, and Grignard workup. In each case, the authors tested various conditions and evaluated how different parameters affected separation efficiency and quality using the system's visual cues. The system successfully detected phase boundaries, volumes, colors, and turbidity in all scenarios, even with colorless, turbid, or non-linear phases. It also provided valuable insights into separation dynamics, such as separation time, mechanism, and stability, which are difficult to obtain with traditional methods.
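One of the cues above, turbidity, can be approximated from a backlit image region with a very simple heuristic: a clear liquid transmits the backlight and appears bright, while a turbid one scatters it and appears darker. The sketch below shows this idea on synthetic data; it is an illustrative proxy of my own, not the metric used in the paper.

```python
import numpy as np


def turbidity_index(roi: np.ndarray) -> float:
    """Crude turbidity proxy for a backlit region of interest.

    roi is an 8-bit grayscale image patch (values 0-255). With uniform
    backlighting, lower mean brightness suggests more scattering, so
    1 - mean/255 rises with turbidity (0 = fully transparent patch,
    values near 1 = strongly scattering patch).
    """
    return 1.0 - float(roi.mean()) / 255.0


# Synthetic check: a bright "clear" patch vs. a dimmer "turbid" one.
clear = np.full((50, 50), 240, dtype=np.uint8)
turbid = np.full((50, 50), 120, dtype=np.uint8)
print(turbidity_index(clear), turbidity_index(turbid))
```

A real system would also need to normalize for lighting drift and restrict the measurement to the liquid region found by the detector, but the principle, quantifying a visual cue as a single number per frame, is the same.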
Additionally, the system was tested against literature data to evaluate its generalizability and adaptability. It performed well with images and videos from diverse sources, lighting conditions, vessels, and chemical environments. The system effectively analyzed phase separation in various LLE examples from previous studies, including ionic liquids, deep eutectic solvents, and switchable solvents.
Applications
The proposed system is a powerful tool for screening and optimizing LLE processes using CV algorithms. It offers comprehensive and real-time data, facilitating quicker and more informed decision-making and feedback control. It can also handle different vial types and configurations, making it suitable for high-throughput experimentation and diverse applications. The system can be integrated with other automation technologies and artificial intelligence (AI) methods to create a self-driving lab for workup processes, capable of autonomously planning, executing, and analyzing experiments based on visual cues. It can also be extended to other chemical processes involving visual analysis, such as crystallization, distillation, filtration, mixing, and reaction monitoring.
Conclusion
In summary, the novel system proved effective and robust for monitoring and analyzing LLE processes in real time. It captured and evaluated multiple visual cues, enabling thorough data collection and early-stage optimization. Overall, this work represents a significant advancement toward autonomous LLE screening guided by visual cues, moving closer to a self-driving lab for workup processes. Future improvements should focus on integrating additional visual cues, such as fluorescence and polarization, and developing an autonomous feedback loop to optimize LLE processes.