AI Unlocks High-Resolution Cellular Imaging Without Fluorescent Staining

Scientists introduce a groundbreaking AI-driven solution that overcomes the limitations of traditional imaging methods, providing high-resolution, label-free cellular visuals without compromising cell integrity.

Research: Unsupervised inter-domain transformation for virtually stained high-resolution mid-infrared photoacoustic microscopy using explainable deep learning. Image Credit: murat photographer / Shutterstock

A research team at POSTECH, led by Professors Chulhong Kim (Department of Electrical Engineering, Department of Convergence IT Engineering, Department of Mechanical Engineering, Department of Medical Science and Engineering, Graduate School of Artificial Intelligence) and Jinah Jang (Department of Mechanical Engineering, Department of Convergence IT Engineering, Department of Medical Science and Engineering), alongside doctoral candidate Eunwoo Park, Dr. Sampa Misra (Department of Convergence IT Engineering), and Dr. Dong Gyu Hwang (Center for 3D Organ Printing and Stem Cells), has developed a technology that surpasses the constraints of traditional imaging methods, providing stable and highly accurate cell visualization. Their findings were published in the journal Nature Communications and featured in Editors' Highlights.

Confocal fluorescence microscopy (CFM) is widely regarded for producing high-resolution cellular images in the life sciences. However, it requires fluorescent staining, which poses risks of photobleaching and phototoxicity, potentially damaging the cells under study. Conversely, mid-infrared photoacoustic microscopy (MIR-PAM) allows for label-free imaging, preserving cell integrity. Yet its reliance on longer wavelengths limits spatial resolution: the optical diffraction limit of MIR-PAM makes it difficult to resolve fine subcellular structures with sufficient clarity.

To bridge these gaps, the POSTECH team developed an innovative imaging method powered by explainable deep learning (XDL). This approach transforms low-resolution, label-free MIR-PAM images into high-resolution, virtually stained images resembling those generated by CFM. Unlike conventional black-box AI models, the XDL framework builds on a CycleGAN with a saliency-similarity term in its loss functions, offering enhanced transparency by visualizing how the transformation is carried out and thereby improving both reliability and accuracy.
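The idea of a saliency-similarity term can be sketched in a few lines of numpy. This is an illustrative reconstruction, not the authors' code: the gradient-magnitude saliency map, the Dice-style overlap, and the names `saliency_mask` and `saliency_similarity_loss` are assumptions chosen for clarity, and the actual masks and loss weights in the paper may differ.

```python
import numpy as np

def saliency_mask(img, thresh=0.5):
    """Illustrative saliency map: normalized gradient magnitude,
    thresholded to a binary mask of structurally salient pixels."""
    gy, gx = np.gradient(img.astype(float))
    mag = np.hypot(gx, gy)
    if mag.max() > 0:
        mag = mag / mag.max()
    return (mag > thresh).astype(float)

def saliency_similarity_loss(input_img, generated_img):
    """Penalty that grows as the salient regions of the input and the
    generated image diverge (1 minus the Dice overlap of the masks)."""
    m_in = saliency_mask(input_img)
    m_gen = saliency_mask(generated_img)
    inter = 2.0 * (m_in * m_gen).sum()
    union = m_in.sum() + m_gen.sum() + 1e-8
    return 1.0 - inter / union

# In a CycleGAN-style objective this term would be added to the usual
# adversarial and cycle-consistency losses, e.g.:
#   L_total = L_GAN + lambda_cyc * L_cyc + lambda_sal * L_saliency
```

Because the saliency masks are themselves images, they can be inspected directly, which is what gives the method its explainability: one can see which structures the network preserved during the domain transformation.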

(a) Workflow for UIDT in MIR-PAM images. Low-resolution MIR-PAM images and high-resolution CFM images of cultured cells are the inputs to UIDT. The two-step UIDT produces high-resolution, virtually fluorescence-stained images of label-free cells. (b) The network configuration for the XDL. By aligning the saliency masks between the input and generated images, the XDL model adds a saliency-similarity term to the loss functions of the existing network to achieve explainability. MIR-PAM: mid-infrared photoacoustic microscopy; CFM: confocal fluorescence microscopy; DA and DB denote the discriminators of each domain.

The team implemented a single-wavelength MIR-PAM system and designed a two-phase imaging process. In the first phase, Image Resolution Enhancement (IREN), deep learning converts low-resolution MIR-PAM images into high-resolution ones, improving structural detail and clearly distinguishing intricate cellular structures such as nuclei and filamentous actin. In the second phase, Virtual Staining, the network produces virtually stained images without fluorescent dyes, eliminating the risks associated with staining while maintaining CFM-quality imaging. This approach was validated on human cardiac fibroblasts, showing strong agreement with traditional CFM images.
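The two phases above can be sketched as a simple composition. The stand-in functions below (a nearest-neighbour upsampling and a crude intensity remapping) are placeholders for the trained networks, and the names `enhance_resolution`, `virtual_stain`, and `uidt_pipeline` are illustrative, not taken from the paper.

```python
import numpy as np

def enhance_resolution(img, factor=2):
    """Placeholder for the IREN phase: nearest-neighbour upsampling
    stands in for the learned low-to-high-resolution mapping."""
    return np.repeat(np.repeat(img, factor, axis=0), factor, axis=1)

def virtual_stain(img):
    """Placeholder for the virtual-staining phase: maps intensity to a
    two-channel pseudo-fluorescence image (e.g. nucleus vs. actin)."""
    nucleus = np.clip(img, 0.5, 1.0) - 0.5  # brighter structures
    actin = np.clip(img, 0.0, 0.5)          # dimmer filaments
    return np.stack([nucleus, actin], axis=-1)

def uidt_pipeline(mir_pam_img):
    """Two-step unsupervised inter-domain transformation (UIDT):
    resolution enhancement followed by virtual staining."""
    return virtual_stain(enhance_resolution(mir_pam_img))
```

The key design point carried over from the paper is the ordering: resolution is enhanced first, so the staining step operates on images with subcellular detail already recovered.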

Performance evaluations using key image quality metrics, such as Structural Similarity Index (SSIM) and Fréchet Inception Distance (FID), demonstrated that the XDL framework outperforms conventional deep learning methods in accuracy and reliability. This innovative technology delivers high-resolution, virtually stained cellular imaging without compromising cell health, offering a powerful new tool for live-cell analysis and advanced biological research.
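For reference, SSIM follows the standard Wang et al. formula. The sketch below computes a single global SSIM over whole images for simplicity, whereas practical evaluations (including, presumably, the paper's) use a windowed version; this is not the authors' evaluation code.

```python
import numpy as np

def global_ssim(x, y, data_range=1.0):
    """Global Structural Similarity Index (one window over the image):
    SSIM = (2*mu_x*mu_y + C1)(2*cov_xy + C2)
           / ((mu_x^2 + mu_y^2 + C1)(var_x + var_y + C2))."""
    c1 = (0.01 * data_range) ** 2
    c2 = (0.03 * data_range) ** 2
    mu_x, mu_y = x.mean(), y.mean()
    var_x, var_y = x.var(), y.var()
    cov_xy = ((x - mu_x) * (y - mu_y)).mean()
    return ((2 * mu_x * mu_y + c1) * (2 * cov_xy + c2)) / \
           ((mu_x**2 + mu_y**2 + c1) * (var_x + var_y + c2))
```

SSIM compares luminance, contrast, and structure pixel-wise (1.0 means identical images), while FID instead compares the distributions of deep feature embeddings of two image sets, so the two metrics capture complementary notions of fidelity.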

Professor Chulhong Kim remarked: "We have developed a cross-domain image transformation technology that bridges the physical limitations of different imaging modalities, offering complementary benefits. The XDL approach has significantly enhanced the stability and reliability of unsupervised learning." Professor Jinah Jang added, "This research unlocks new possibilities for multiplexed, high-resolution cellular imaging without labeling. It holds immense potential for applications in live-cell analysis and disease model studies." She also highlighted the potential for further advancements in optimizing the imaging platform to improve contrast and resolution for live-cell applications.

This research was made possible through support from the Ministry of Education, the Ministry of Science and ICT, the Korea Medical Device Development Fund, the Korean Fund for Regenerative Medicine, the Korea Institute for Advancement of Technology (KIAT), the Artificial Intelligence Graduate School Program (POSTECH), BK21 FOUR, and the Glocal University 30 Project.

Source:
  • Pohang University of Science & Technology (POSTECH)
Journal reference:
  • Park, E., Misra, S., Hwang, D. G., Yoon, C., Ahn, J., Kim, D., Jang, J., & Kim, C. (2024). Unsupervised inter-domain transformation for virtually stained high-resolution mid-infrared photoacoustic microscopy using explainable deep learning. Nature Communications, 15(1), 1-12. DOI: 10.1038/s41467-024-55262-2, https://www.nature.com/articles/s41467-024-55262-2
