A new deep learning method delivers clear microscopy images throughout thick samples without costly adaptive optics hardware, putting high-quality deep imaging within reach of labs worldwide.
(a) Fixed and iDISCO-cleared E11.5 mouse embryos were immunostained for neurons (TuJ1, cyan) and blood vessels (CD31, magenta), imaged with confocal microscopy, and processed with a trained DeAbe model. (b) Axial view corresponding to the dotted rectangular region in (a), comparing raw data and depth-compensated, de-aberrated, and deconvolved data (DeAbe+). (c) Higher-magnification lateral view at an axial depth of 1689 μm, indicated by the orange double-headed arrow in (b). (d) Higher-magnification views of the white dotted region in (c), comparing raw (left) and DeAbe+ processing (right) for neuronal (top) and blood vessel (bottom) stains.
Biologists know the problem of depth degradation all too well: the deeper you look into a sample, the fuzzier the image becomes. A worm embryo or a piece of tissue may be only tens of microns thick, but optical aberrations, wavefront distortions that arise from refractive index mismatches within the sample and between the sample and its surroundings, cause microscopy images to lose their sharpness as the instrument peers beyond the top layer.
To address this problem, microscopists add corrective hardware to existing microscopes to cancel out these distortions. This technique, called adaptive optics, applies corrective wavefronts through specialized devices such as deformable mirrors or spatial light modulators. It demands time, money, and expertise, which keeps it out of reach of all but a few biology labs.
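To make the principle concrete, here is a minimal numerical sketch, not tied to any particular instrument or to this paper, of what adaptive optics does: the corrective element applies the negative of the measured aberration phase so that the two cancel. The specific Zernike-style modes and coefficients below are illustrative placeholders.

```python
# Illustrative sketch of phase conjugation in adaptive optics.
# The aberration modes and amplitudes are made-up examples.
import numpy as np

size = 128
y, x = np.mgrid[-1:1:size * 1j, -1:1:size * 1j]
r2, theta = x**2 + y**2, np.arctan2(y, x)
pupil = r2 <= 1.0  # circular pupil mask

# Sample-induced aberration: defocus plus astigmatism (Zernike-like modes)
aberration = 1.2 * (2 * r2 - 1) + 0.7 * r2 * np.cos(2 * theta)

# The deformable mirror (or spatial light modulator) is commanded to apply
# the conjugate phase; the residual wavefront error drops to (near) zero.
correction = -aberration
residual = aberration + correction
print("RMS wavefront error before:", np.std(aberration[pupil]))
print("RMS wavefront error after: ", np.std(residual[pupil]))
```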
Now, researchers at HHMI's Janelia Research Campus and collaborators have developed a way to make a similar correction without adaptive optics, extra hardware, or additional images. The team, from the Shroff Lab, created a new AI method that produces sharp microscopy images throughout a thick biological sample.
(a) C. elegans embryos expressing a GFP-labeled membrane marker (green) and an mCherry-labeled nuclear marker (magenta) were imaged with dual-view light-sheet microscopy (diSPIM); the raw single-view data (left) were processed through neural networks that progressively de-aberrated, deconvolved, and isotropized spatial resolution (3-step DL, right). Single planes from lateral (top) and axial (bottom) perspectives are shown. (b) Higher-magnification axial views deep into the embryo, corresponding to the dashed rectangle in (a). (c) Examples of automatic segmentation on raw data (left, 319 cells), the 3-step deep learning (DL) prediction (middle, 421 cells), and a manually corrected segmentation based on the DL prediction (right, 421 cells). Single planes corresponding to the upper planes in (a) are shown; red and blue ellipses highlight regions for visual comparison. (d) Number of cells detected by automatic segmentation of the membrane marker vs. time for raw data (purple) and after the first two DL steps (Steps 1, 2; blue and green curves). Means and standard deviations are derived from 3 embryos; manually derived ground truth (black) is also shown.
To create the new technique, the team first modeled how images degrade with depth in a relatively uniform sample. They introduced synthetic aberrations into near-diffraction-limited images taken from the shallow side of image stacks, distorting these clear planes to mimic what is observed at greater depths. They then trained a neural network on these matched pairs to reverse the distortion, yielding clear images throughout the entire depth of the sample.
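A minimal sketch of this training-data strategy is shown below. It is not the authors' released code: the PSF model, aberration ranges, and noise level are assumptions chosen for illustration. Clean shallow planes are blurred with randomly aberrated point spread functions to produce (aberrated, clean) pairs that a restoration network could then learn to invert.

```python
# Sketch of generating (aberrated, clean) training pairs from shallow,
# near-diffraction-limited planes. All parameters are illustrative.
import numpy as np
from numpy.fft import fft2, ifft2, fftshift

def aberrated_psf(size=64, defocus=1.5, astig=0.8):
    """Toy incoherent PSF from a circular pupil with defocus and astigmatism."""
    y, x = np.mgrid[-1:1:size * 1j, -1:1:size * 1j]
    r2, theta = x**2 + y**2, np.arctan2(y, x)
    pupil = (r2 <= 1.0).astype(float)
    # Zernike-style phase: defocus (2r^2 - 1) + astigmatism (r^2 cos 2θ)
    phase = defocus * (2 * r2 - 1) + astig * r2 * np.cos(2 * theta)
    field = pupil * np.exp(1j * phase)
    psf = np.abs(fftshift(fft2(field))) ** 2
    return psf / psf.sum()

def degrade(plane, rng):
    """Blur one clean plane with a randomly aberrated PSF and add mild noise."""
    psf = aberrated_psf(defocus=rng.uniform(0.5, 3.0), astig=rng.uniform(0.0, 1.5))
    # FFT-based convolution (circular boundaries are acceptable for a sketch)
    blurred = np.real(ifft2(fft2(plane) * fft2(psf, s=plane.shape)))
    return blurred + rng.normal(0.0, 0.01, plane.shape)

rng = np.random.default_rng(0)
clean = rng.random((8, 256, 256))              # stand-in for shallow, sharp planes
pairs = [(degrade(p, rng), p) for p in clean]  # (input, target) training pairs
# `pairs` would then feed a restoration network (e.g., a U-Net-style model)
# trained to map aberrated inputs back to their clean targets.
```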
The method produces better-looking images and enables more accurate quantitative analysis: the team could count cells in worm embryos more precisely, trace vessels and tracts in whole mouse embryos, examine mitochondria in samples of mouse liver and heart, and better segment membranes and nuclei in C. elegans embryos and measure the orientation of blood vessels in mouse tissues.
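The sketch below gives a flavor of this kind of downstream quantification, counting candidate cells by thresholding a volume and labeling connected components. It is illustrative only, not the paper's segmentation pipeline; the threshold rule and the synthetic test volume are placeholders. The point is that the same simple pipeline benefits from a sharper input, since restoration separates touching objects that blur would merge.

```python
# Hedged sketch of counting cells in a 3D stack via thresholding and
# connected-component labeling. Not the paper's pipeline.
import numpy as np
from scipy import ndimage

def count_cells(volume, threshold):
    """Count connected bright regions (candidate cells) in a 3D volume."""
    mask = volume > threshold
    _, n_objects = ndimage.label(mask)  # face-connectivity by default
    return n_objects

# Synthetic stand-in volume; in practice this would be the raw or the
# restored image stack of a nuclear or membrane marker.
rng = np.random.default_rng(1)
volume = ndimage.gaussian_filter(rng.random((32, 64, 64)), sigma=2)
print(count_cells(volume, threshold=volume.mean() + volume.std()))
```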
The new deep learning-based method requires nothing beyond a standard microscope, a computer with a graphics card, and a short tutorial on running the code, making it far more accessible than traditional adaptive optics. Unlike adaptive optics, it needs no wavefront sensing, no additional illumination dose, and no complex hardware.
The Shroff Lab is already using the new technique to image worm embryos. The method is not without limitations, however: it assumes the sample is relatively uniform, so highly heterogeneous specimens, in which aberrations also vary laterally, may yield artifacts. The team plans to develop the model further to handle such laterally varying aberrations.
This groundbreaking research is published in the journal Nature Communications.
Journal reference:
- Guo, M., Wu, Y., Hobson, C. M., Su, Y., Qian, S., Krueger, E., Christensen, R., Kroeschell, G., Bui, J., Chaw, M., Zhang, L., Liu, J., Hou, X., Han, X., Lu, Z., Ma, X., Zhovmer, A., Combs, C., Moyle, M., . . . Shroff, H. (2025). Deep learning-based aberration compensation improves contrast and resolution in fluorescence microscopy. Nature Communications, 16(1), 1-19. DOI: 10.1038/s41467-024-55267-x, https://www.nature.com/articles/s41467-024-55267-x