AI-Driven Optical Tomography via Multi-Core Fiber-Optic Rotation

In an article published in the journal Nature Communications, researchers introduced a novel optical tomography method that uses a multi-core fiber-optic cell rotator (MCF-OCR) system to precisely manipulate cells in a microfluidic chip. This method overcomes limitations of conventional approaches, allowing full-angle projection tomography with isotropic resolution. The authors also presented an artificial intelligence (AI)-driven tomographic reconstruction workflow that replaces complex, computationally intensive pre-processing with automated computer vision steps.

Study: AI-Driven Single-Cell Imaging via Multi-Core Fiber-Optic Rotation. Image credit: HealthyCapture Studio/Shutterstock

Background

Optical tomography has become a crucial label-free microscopic technique, providing detailed insights into three-dimensional (3D) subcellular structures. This method has significantly influenced biomedical research by enabling a deeper understanding of cellular processes, disease mechanisms, and treatment responses. Conventional optical cell tomography relies on illumination scanning to acquire projections at various orientations and thereby improve resolution. However, the finite numerical aperture of microscope objectives restricts the accessible angular range, so the axial resolution remains inferior to the lateral resolution, an issue known as the missing cone problem.

Previous approaches to address the missing cone problem include iterative reconstruction algorithms and deep learning (DL)-based limited-angle tomography. However, these methods often require a full-angle tomographic scan to validate reconstructed images. Various cell rotation strategies have been explored to facilitate full-angle optical tomography, such as mechanical rotation and contactless rotation approaches using microflow, dielectrophoretic fields, or acoustic microstreaming.

Fiber-optic manipulation, particularly with MCFs, offers precise and non-invasive control over cell rotation. The challenge lies in accurately measuring the cell rotation angle, crucial for reliable tomographic reconstruction. Additionally, traditional reconstruction procedures demand complex and computationally intensive pre-processing of two-dimensional (2D) projections, posing difficulties for online processing.

In this study, researchers introduced an AI-driven optical projection tomography (OPT) system utilizing an MCF-OCR, aiming to bridge the gap between fiber-optic manipulation and optical tomography. The innovative system incorporated an autonomous tomography reconstruction workflow powered by computer vision technologies, addressing the limitations of previous methods. An object-detection convolutional neural network (CNN), a deep learning (DL)-based segmentation model, and optical flow tracking together enabled accurate 3D reconstruction.

Methods

The experimental setup comprised an MCF-OCR formed by an MCF and an opposing single-mode fiber (SMF). The MCF-OCR dynamically modulated the light field using a phase-only spatial light modulator (SLM), enabling precise optical control of cell rotation. The system included a brightfield microscope with a camera for imaging the cell rotation process. An in-situ calibration method compensated for inherent phase distortions in the MCF, enhancing the robustness of the tomography system. The MCF-OCR employed a physics-informed neural network named CoreNet for real-time wavefront shaping, improving the control of optical forces during cell rotation. The inverse Radon transform was then used for 3D tomographic reconstruction from the 2D projections acquired during cell rotation.
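The reconstruction step can be illustrated with a short sketch. The Python snippet below assumes the 2D projections have already been aligned, segmented, and tagged with their rotation angles, and applies scikit-image's filtered back-projection (`iradon`) slice by slice; it is a minimal illustration of the principle, not the authors' exact implementation.

```python
import numpy as np
from skimage.transform import iradon

def reconstruct_volume(projections, angles_deg):
    """Slice-wise filtered back-projection from full-angle 2D projections.

    projections: array of shape (n_angles, height, width), one aligned
    projection per recorded rotation angle.
    angles_deg: rotation angle in degrees associated with each projection.
    Returns a reconstructed volume of shape (height, width, width).
    """
    n_angles, height, width = projections.shape
    volume = np.zeros((height, width, width))
    for row in range(height):
        # Build the sinogram for this slice: detector position x angle
        sinogram = projections[:, row, :].T  # shape (width, n_angles)
        volume[row] = iradon(sinogram, theta=angles_deg, filter_name="ramp")
    return volume
```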

The reconstruction process compensated for phase distortions, improving both the efficiency and the accuracy of the reconstruction. The researchers also applied the proposed methods to live HL60 human leukemia cells, demonstrating the system's capability to accurately reconstruct the 3D intensity distribution of living cells.

Results

The MCF-OCR system employed a rotating elliptical beam profile generated by a phase-only spatial light modulator, inducing optical gradient forces that align cells with the rotation. The lab-on-a-chip system integrated this technology, allowing controlled cell delivery and stable trapping. A brightfield microscope with a camera recorded 2D projections during cell rotation for subsequent full-angle tomographic reconstruction. To streamline the reconstruction process and overcome challenges in precise rotation angle determination, an autonomous tomographic reconstruction workflow was introduced.
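As an illustration of the trapping beam geometry, the toy NumPy sketch below generates a rotated elliptical Gaussian intensity target; sweeping the rotation angle mimics the rotating trap. The grid size and beam waists are arbitrary assumptions, and the actual SLM phase computation performed by CoreNet is not shown.

```python
import numpy as np

def elliptical_beam(nx=256, ny=256, wx=30.0, wy=10.0, theta=0.0):
    """Illustrative rotated elliptical Gaussian intensity target.

    theta is the in-plane rotation angle in radians; wx and wy are the beam
    waists (in pixels) along the major and minor axes. This is only a toy
    target pattern, not the SLM phase mask itself.
    """
    y, x = np.mgrid[-ny // 2:ny // 2, -nx // 2:nx // 2].astype(float)
    # Rotate the coordinates into the ellipse frame
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    return np.exp(-2.0 * (xr**2 / wx**2 + yr**2 / wy**2))

# Sweeping theta over time yields the rotating trap that spins the cell.
frames = [elliptical_beam(theta=np.deg2rad(a)) for a in range(0, 360, 5)]
```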

Utilizing computer vision technologies, this workflow employed YOLOv5 for real-time cell detection, OpenCV for position alignment, and a deep learning model for cell segmentation. Rotation angle tracking combined the Harris corner detector with an optical flow method. The reconstructed 3D intensity distribution was then obtained through the inverse Radon transform.
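A simplified version of the rotation-angle tracking step can be sketched with OpenCV, assuming the cell centre is already known from the detection stage. The function below tracks corner features between consecutive 8-bit grayscale frames with pyramidal Lucas-Kanade optical flow and takes the median angular displacement about the centre; it is an illustrative approximation of multi-feature angle tracking, not the published code.

```python
import cv2
import numpy as np

def estimate_rotation_step(prev_gray, next_gray, center):
    """Estimate the in-plane rotation (degrees) between two consecutive frames.

    prev_gray, next_gray: 8-bit grayscale frames of the rotating cell.
    center: (x, y) coordinates of the cell centre from the detection stage.
    """
    # Detect Harris-style corner features in the previous frame
    pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=50, qualityLevel=0.01,
                                  minDistance=5, useHarrisDetector=True)
    if pts is None:
        return 0.0
    # Track the features into the next frame with Lucas-Kanade optical flow
    nxt, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, next_gray, pts, None)
    ok = status.ravel() == 1
    good_old = pts[ok].reshape(-1, 2) - np.asarray(center, dtype=float)
    good_new = nxt[ok].reshape(-1, 2) - np.asarray(center, dtype=float)
    # Angular displacement of each feature about the rotation centre
    d_theta = (np.arctan2(good_new[:, 1], good_new[:, 0])
               - np.arctan2(good_old[:, 1], good_old[:, 0]))
    # Wrap to [-pi, pi] before taking a robust average
    d_theta = (d_theta + np.pi) % (2 * np.pi) - np.pi
    return float(np.degrees(np.median(d_theta)))
```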

The proposed workflow was validated using a simulated cell phantom and experimentally with HL60 human leukemia cells. For the cell phantom, the multi-feature optical flow tracking approach achieved a mean measurement error of 1.49°. The autonomous tomographic reconstruction outperformed conventional limited-angle illumination scanning tomography, significantly enhancing axial resolution. Quantitative metrics demonstrated reduced reconstruction errors and improved accuracy in 3D reconstruction using the proposed workflow.

The results indicated superior performance, with reductions of 66.7% in mean squared error (MSE), 58.0% in mean absolute error (MAE), and 43.8% in root mean squared error (RMSE) compared with traditional methods. Multiscale structural similarity (MS-SSIM) and peak signal-to-noise ratio (PSNR) scores further supported the improved accuracy of the proposed approach. The workflow's effectiveness was highlighted in live HL60 cell reconstructions, showcasing precise 3D isotropic reconstructions and providing valuable insights into subcellular structures. The study demonstrated the potential of AI-driven tomographic reconstruction for advancing OPT methodologies, offering enhanced efficiency and accuracy in capturing detailed cellular morphology.
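For readers who wish to reproduce such comparisons, the hypothetical helper below computes the reported error metrics between a reconstructed volume and a ground-truth volume using NumPy and scikit-image. Note that single-scale SSIM is used here as a stand-in for MS-SSIM, and the volumes are assumed to share the same shape and intensity range.

```python
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def reconstruction_metrics(recon, ground_truth):
    """Error metrics between a reconstructed and a ground-truth volume."""
    err = recon - ground_truth
    mse = float(np.mean(err**2))
    mae = float(np.mean(np.abs(err)))
    rmse = float(np.sqrt(mse))
    data_range = float(ground_truth.max() - ground_truth.min())
    psnr = peak_signal_noise_ratio(ground_truth, recon, data_range=data_range)
    # Single-scale SSIM as a proxy for the multiscale variant reported
    ssim = structural_similarity(ground_truth, recon, data_range=data_range)
    return {"MSE": mse, "MAE": mae, "RMSE": rmse, "PSNR": psnr, "SSIM": ssim}
```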

Conclusion

In conclusion, the AI-driven fiber-optic cell rotation tomography system enabled efficient 3D single-cell imaging with full-angle tomography. Overcoming the limitations of conventional methods, it provided precise reconstructions and ground truth data for evaluating other algorithms. The system's versatility extended to diverse tomography modalities, showcasing its transformative potential in advancing cell-rotation-based tomography and integrating computer vision technologies across various imaging techniques.

Journal reference:
  • Sun, J., Yang, B., Koukourakis, N., Guck, J., & Czarske, J. W. (2024). AI-driven projection tomography with multicore fibre-optic cell rotation. Nature Communications, 15(1), 147. https://doi.org/10.1038/s41467-023-44280-1, https://www.nature.com/articles/s41467-023-44280-1

Written by

Soham Nandi

Soham Nandi is a technical writer based in Memari, India. His academic background is in Computer Science Engineering, specializing in Artificial Intelligence and Machine Learning. He has extensive experience in Data Analytics, Machine Learning, and Python, and has worked on group projects involving Computer Vision, Image Classification, and App Development.
