ILNet: Revolutionizing High-Quality Single-Pixel Imaging Using Deep Learning

In a paper published in the journal Optics & Laser Technology, researchers introduced the image-loop neural network (ILNet), a self-supervised deep learning technique for single-pixel imaging (SPI) that integrates a part-based model. The approach offers a versatile and efficient solution for SPI, enabling the reconstruction of high-quality images at low sampling rates, with promising results demonstrated in both free-space and underwater scenarios.

Study: ILNet: Revolutionizing High-Quality Single-Pixel Imaging Using Deep Learning. Image credit: Monster Ztudio /Shutterstock

Background

SPI is a powerful technique that reconstructs two-dimensional (2D) images from one-dimensional (1D) intensity signals by exploiting the correlation of classical or quantum light. Its applications span various domains, including microscopy, X-ray imaging, and lidar, primarily because of its high sensitivity under challenging conditions involving low light, long distances, and strong scattering. However, the large number of measurements required for high-quality image reconstruction limits SPI's real-time imaging capabilities.
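In standard ghost-imaging notation (paraphrased here for context; the paper's exact formulation may differ), each 1D measurement is the inner product of the unknown 2D object with one illumination pattern, and a basic correlation estimate of the object, the traditional ghost imaging reconstruction discussed below, is built from the measurement fluctuations:

$$ S_i = \sum_{x,y} P_i(x,y)\,O(x,y), \qquad \hat{O}(x,y) = \frac{1}{N}\sum_{i=1}^{N}\bigl(S_i - \langle S \rangle\bigr)\,P_i(x,y), $$

where \(P_i\) is the \(i\)-th illumination pattern, \(S_i\) the corresponding single-pixel intensity, \(O\) the object, and \(N\) the number of measurements. Achieving a faithful \(\hat{O}\) with small \(N\), that is, at a low sampling rate, is precisely the regime ILNet targets.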

To address this constraint, researchers have turned to deep learning (DL), leveraging neural networks' capacity to extract underlying rules and representations from data. DL-based SPI methods, such as underwater ghost imaging-based generative adversarial networks (UGI-GAN), multi-scale GAN (MsGAN), and ghost imaging using deep learning (GIDL), hold promise for achieving high imaging quality at low sampling rates. These methods employ GANs or convolutional neural networks (CNNs) to enhance image reconstruction quality.

However, integrating DL into SPI presents its own set of challenges. DL-based techniques necessitate extensive training data and do not guarantee optimal image quality in unfamiliar scenes. To overcome this, researchers have explored the incorporation of untrained neural networks into physical models. Methods such as variable generative networks (VGenNet) and untrained reconstruction networks (URNet) utilize untrained neural networks to achieve high-quality imaging without the need for extensive training data.

Study Methodology

The SPI setup comprises an expanded laser source, a single-pixel detector (SPD), an object, a digital micromirror device (DMD), and a computer. A 10-mW, 532-nm laser beam is expanded and modulated by the DMD using random speckle patterns stored on the computer. After the modulated light passes through the object, the SPD collects the intensity signals, which are then transmitted to the computer for image reconstruction.

ILNet leverages a part-based model for fine-grained learning, enhancing image details during reconstruction. The intensity signals calculated from the reconstructed image are compared with the measured signals, and this discrepancy serves as the loss function for training ILNet's parameters. The reconstruction is then refined iteratively: the 2D image produced by ILNet is fed back as the input for the next iteration, supplying prior information that enhances imaging quality at low sampling rates.
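The snippet below is a minimal, hypothetical sketch of how such a physics-guided image loop could be implemented. The tiny encoder-decoder, the omission of the part-based layers, the simulated patterns and signals, and all hyperparameters are placeholders for illustration, not the authors' actual design.

```python
# Hypothetical sketch of a physics-guided "image loop" for SPI; not the authors' code.
# Assumes `patterns` (N x H*W DMD patterns) and `signals` (N measured SPD values)
# are available; here they are simulated with a toy object so the script is self-contained.
import torch
import torch.nn as nn

H = W = 64
N = int(0.10 * H * W)                        # e.g. a 10% sampling rate
patterns = torch.rand(N, H * W)              # stand-in for the stored speckle patterns
true_obj = torch.zeros(H * W)
true_obj[1000:1500] = 1.0                    # toy object (unknown in a real experiment)
signals = patterns @ true_obj                # stand-in for the measured SPD intensities

class TinyEncoderDecoder(nn.Module):
    """Placeholder conv/deconv encoder-decoder; ILNet's part-based layers are not reproduced."""
    def __init__(self):
        super().__init__()
        self.encode = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decode = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, 1, 4, stride=2, padding=1), nn.Sigmoid(),
        )
    def forward(self, x):
        return self.decode(self.encode(x))

net = TinyEncoderDecoder()
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

# Coarse correlation image as the first input (differential/normalisation terms omitted).
img = (patterns.t() @ (signals - signals.mean())).reshape(1, 1, H, W)
img = (img - img.min()) / (img.max() - img.min() + 1e-8)

for loop in range(3):                        # outer image loop: feed the result back in
    for step in range(200):                  # inner self-supervised optimisation
        recon = net(img)
        est_signals = patterns @ recon.reshape(-1)   # SPD response predicted from the reconstruction
        loss = nn.functional.mse_loss(est_signals, signals)
        opt.zero_grad()
        loss.backward()
        opt.step()
    img = net(img).detach()                  # refined image becomes the next loop's input
```

The key point is that no ground-truth image enters the loss: the network is optimized solely against the measured 1D signals, which is what makes the approach self-supervised.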

ILNet's encoding-decoding structure employs convolutional and deconvolutional layers for feature extraction and restoration, respectively. Experiments in both free-space and underwater environments assess ILNet's effectiveness. Five laser-cut alphabet letters serve as the test objects, and quantitative evaluations are conducted using the contrast-to-noise ratio (CNR), peak signal-to-noise ratio (PSNR), and resolution.
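For reference, here is a brief sketch of how PSNR and one common form of CNR can be computed; the paper's exact CNR definition, in particular how object and background pixels are selected, may differ.

```python
# Sketch of the two scalar metrics named above; the paper's exact CNR definition
# (especially how object and background regions are chosen) may differ.
import numpy as np

def psnr(recon: np.ndarray, reference: np.ndarray, max_val: float = 1.0) -> float:
    """Peak signal-to-noise ratio in dB between a reconstruction and a reference image."""
    mse = np.mean((recon.astype(float) - reference.astype(float)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(max_val ** 2 / mse)

def cnr(recon: np.ndarray, object_mask: np.ndarray) -> float:
    """Contrast-to-noise ratio: (mean_object - mean_background) / sqrt(var_object + var_background)."""
    obj, bg = recon[object_mask], recon[~object_mask]
    return float((obj.mean() - bg.mean()) / np.sqrt(obj.var() + bg.var() + 1e-12))
```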

Major Findings 

Initial comparisons pit ILNet against conventional reconstruction techniques, traditional ghost imaging (TGI) and differential ghost imaging (DGI), in a previously unseen free-space scene. ILNet exhibits superior noise suppression and object reconstruction even with fewer iterations, and its part-based model enhances fine image details, yielding higher-quality images in fewer iterations than TGI and DGI.
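For orientation, below is a minimal NumPy sketch of these two classical baselines in their standard textbook forms (paraphrased, not taken from the paper's code).

```python
# Minimal NumPy sketch of the TGI and DGI baselines in their standard forms.
# `patterns` is (N, H*W), `signals` is (N,); the demo data are purely illustrative.
import numpy as np

def tgi(patterns: np.ndarray, signals: np.ndarray) -> np.ndarray:
    """Traditional ghost imaging: correlate signal fluctuations with the patterns."""
    return (signals - signals.mean()) @ patterns / len(signals)

def dgi(patterns: np.ndarray, signals: np.ndarray) -> np.ndarray:
    """Differential ghost imaging: weight each signal by the pattern's total flux
    to suppress illumination fluctuations relative to TGI."""
    flux = patterns.sum(axis=1)
    weights = signals - (signals.mean() / flux.mean()) * flux
    return weights @ patterns / len(signals)

if __name__ == "__main__":
    H = W = 32
    obj = np.zeros(H * W)
    obj[200:260] = 1.0                       # toy object
    patterns = np.random.rand(2048, H * W)
    signals = patterns @ obj
    img_tgi = tgi(patterns, signals).reshape(H, W)
    img_dgi = dgi(patterns, signals).reshape(H, W)
```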

Quantitative assessments confirm this advantage: ILNet achieves higher CNR and PSNR even with fewer iterations, and the resolution of its reconstructions significantly surpasses that of TGI and DGI.

Furthermore, different part configurations within ILNet are investigated. Partitioning both the encoder and decoder leads to incomplete images and degrades the CNR, PSNR, and resolution, whereas applying the partitioning only to the encoder yields remarkably clear images, indicating that this configuration captures fine-grained features most effectively.

ILNet's efficacy is then evaluated in a reflected SPI experiment through turbulent water. It outperforms TGI, DGI, and differential ghost imaging with enhanced positive signals (SDGI) in terms of CNR, PSNR, and resolution. Unlike SDGI's binarization-centric approach, whose binary pixel values diminish CNR and PSNR, ILNet improves image quality without relying solely on binarization. Its combination of the detected single-pixel values, the part-based model, and image looping enables continuous optimization and better overall image quality.

It is important to note that images retrieved in the turbulent water environment still show distortions and missing regions caused by light scattering and refraction, challenges that ILNet does not entirely resolve. Nevertheless, ILNet achieves its primary goal of reducing the sampling rate while enhancing image quality.

Conclusion

In summary, the study introduced ILNet-enhanced SPI by integrating a physical model into a part-based neural network that refines the reconstructed 2D object image. Feeding each reconstruction back into the network supplies prior information that continually improves quality, and using the 1D SPD signals for adaptive optimization allows ILNet to perform well in unknown environments, improving image quality and stability. This enables high-quality SPI at extremely low sampling rates and extends its applications to optically harsh conditions.

Written by Dr. Sampath Lonka
