Advancing Low-Light Image Enhancement: A Deep Learning Approach Based on Retinex Theory

In an article published in the journal Nature, researchers propose a deep learning approach to low-light image enhancement (LLIE) based on Retinex theory, alongside a review of existing traditional and learning-based methods.

Study: Advancing Low-Light Image Enhancement: A Deep Learning Approach Based on Retinex Theory. Image credit: tos.ak/Shutterstock

Background

LLIE is indispensable for restoring intrinsic color, enhancing details, and reducing noise in low-light images. This comprehensive review delves into traditional and learning-based LLIE approaches, elucidating their strengths and limitations. Traditional model-based methods encompass tone mapping, gamma correction, histogram equalization (HE), and Retinex-based methods. While tone mapping preserves details, its linear mapping can cause information loss. Gamma correction introduces a nonlinear brightening curve but struggles with local over- and under-exposure, and HE enhances global contrast but can neglect some local regions.
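To make these classical operations concrete, the short Python sketch below illustrates gamma correction and global histogram equalization on a single-channel image; the gamma value and the synthetic data are illustrative assumptions, not settings taken from the study.

import numpy as np

def gamma_correction(img, gamma=0.5):
    """Brighten a low-light image with a power-law (gamma) curve.
    img is a float array in [0, 1]; gamma < 1 brightens dark regions."""
    return np.clip(img ** gamma, 0.0, 1.0)

def histogram_equalization(img):
    """Global histogram equalization for a single-channel float image in [0, 1]."""
    hist, _ = np.histogram(img.ravel(), bins=256, range=(0.0, 1.0))
    cdf = hist.cumsum().astype(np.float64)
    cdf = (cdf - cdf.min()) / (cdf.max() - cdf.min())  # normalize CDF to [0, 1]
    indices = np.clip((img * 255).astype(int), 0, 255)  # map pixels through the CDF
    return cdf[indices]

# Example usage on a synthetic dark image
dark = np.random.rand(64, 64) * 0.2          # pixel values concentrated near 0
brightened = gamma_correction(dark, gamma=0.5)
equalized = histogram_equalization(dark)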

Retinex theory, inspired by human visual perception, has yielded various algorithms, but their effectiveness can vary owing to their reliance on manual design and scenario-specific tuning. In recent years, deep learning-based methods have gained prominence. For instance, LLNet pioneered joint contrast enhancement and denoising but leaves residual noise, while Retinex-Net decomposes images for enhancement yet suffers from poor smoothing and severe color distortion. MBLLEN employs a multi-branch network for end-to-end enhancement, whereas UTVNet and URetinex leverage adaptive unfolding networks.

Additionally, zero-shot learning approaches such as RUAS and Zero-DCE offer efficiency with fewer training images but may lack effectiveness in extreme cases. RRDNet uses an iterative approach for underexposed image restoration, while SCI introduces a self-calibrating illumination framework. To address these challenges, the present study proposes a data-driven deep network for the decomposition and enhancement of low-light images.

Methods

The researchers present an innovative approach for enhancing low-light images, unveiling a detailed framework for LLIE. The proposed framework comprises two key networks: the decomposition network and the enhancement network. The decomposition module dissects a low-illumination image (I) into its illumination map (L) and reflection map (R). The illumination enhancement module iteratively refines the illumination map, and the reflectance denoising module fine-tunes the reflection map, both steps being crucial for the final enhanced result.
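At a high level, the data flow described above can be summarized in the following Python sketch. The decomposition, illumination-enhancement, and reflectance-denoising modules are represented by placeholder callables (decom_net, illum_enhancer, reflect_denoiser) that stand in for the paper's networks; this is an assumption-laden outline, not the authors' implementation.

import torch

def enhance_low_light(image, decom_net, illum_enhancer, reflect_denoiser, num_stages=3):
    """Illustrative Retinex-style pipeline: decompose, refine both components, recombine.

    image: low-light input tensor of shape (B, 3, H, W) in [0, 1].
    The three network arguments are placeholders for the paper's modules.
    """
    # 1. Decompose the input into reflectance R (B, 3, H, W) and illumination L (B, 1, H, W)
    reflectance, illumination = decom_net(image)

    # 2. Iteratively refine the illumination map (the paper's IEM runs over several stages)
    for _ in range(num_stages):
        illumination = illum_enhancer(illumination)

    # 3. Denoise the reflectance map (the paper's RDM)
    reflectance = reflect_denoiser(reflectance)

    # 4. Retinex reconstruction: element-wise product of reflectance and illumination
    return torch.clamp(reflectance * illumination, 0.0, 1.0)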

As per the Retinex theory, the final enhancement result is obtained through element-wise multiplication of the reflection and illumination maps. Drawing inspiration from Retinex theory, the decomposition module presented in this study, named Decom-Net, employs a fully convolutional network to adaptively learn and simultaneously decompose the input image into illumination and reflection components. This module aims to adaptively reveal fundamental details while minimizing distortion, yielding informative initial components without introducing additional noise.
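A minimal fully convolutional decomposition network in this spirit might look like the PyTorch sketch below; the layer count, channel width, and sigmoid output are illustrative assumptions rather than the architecture reported in the paper.

import torch
import torch.nn as nn

class DecomNetSketch(nn.Module):
    """Toy decomposition network: maps a 3-channel low-light image to a
    3-channel reflectance map and a 1-channel illumination map."""

    def __init__(self, channels=64):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, channels, kernel_size=3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, kernel_size=3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, kernel_size=3, padding=1), nn.ReLU(inplace=True),
        )
        # Final layer predicts 4 channels: reflectance (3) + illumination (1)
        self.head = nn.Conv2d(channels, 4, kernel_size=3, padding=1)

    def forward(self, x):
        out = torch.sigmoid(self.head(self.features(x)))
        reflectance, illumination = out[:, :3], out[:, 3:4]
        return reflectance, illumination

# Example: decompose a random "low-light" batch
net = DecomNetSketch()
dummy = torch.rand(1, 3, 128, 128) * 0.2
R, L = net(dummy)   # R: (1, 3, 128, 128), L: (1, 1, 128, 128)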

  • Illumination Enhancement Module (IEM): The proposed IEM utilizes a fully convolutional network to dynamically learn the illumination estimation map iteratively. Comprising a correction unit and an enhancement unit, this module ensures an accurate illumination estimation map by sharing parameters throughout the entire training process. The enhancement unit refines the estimated illumination component, contributing to an iterative loop.
  • Correction Unit: The correction unit guarantees convergence of results across stages. It introduces a correction image to explore convergence behavior between stages, ensuring nearly identical results. The iterative convergence of illumination is expressed through a residual learning strategy.
  • Reflectance Denoising Module (RDM): To address noise in low-light images, an improved RDM is employed to denoise the reflection component. This module employs a learnable denoising network to transform the degraded reflection into a cleaner map, preserving detailed information.
  • Unsupervised Loss Function: In the training phase, the framework involves two subnets, each contributing to a specific loss function: Decom-Net Loss and Enhance-Net Loss. Decom-Net Loss measures reconstruction error and structure preservation, while Enhance-Net Loss comprises fidelity, smoothness, reflection consistency, and total variation losses to ensure accurate enhancement (a simplified sketch of such an objective is shown after this list).
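Since the article does not spell out the exact loss terms or weights, the following simplified Python sketch only conveys the flavor of such an unsupervised objective: an L1 reconstruction term enforcing that the reflectance-illumination product reproduces the input, plus a total-variation smoothness prior on the illumination map. The weights are illustrative assumptions.

import torch
import torch.nn.functional as F

def total_variation(x):
    """Anisotropic total variation, used here as a smoothness prior on the illumination map."""
    dh = torch.abs(x[..., 1:, :] - x[..., :-1, :]).mean()
    dw = torch.abs(x[..., :, 1:] - x[..., :, :-1]).mean()
    return dh + dw

def unsupervised_llie_loss(image, reflectance, illumination, w_recon=1.0, w_tv=0.1):
    """Simplified stand-in for the Decom-Net/Enhance-Net objectives.
    w_recon and w_tv are illustrative values, not weights from the study."""
    # Reconstruction fidelity: the Retinex product should reproduce the input
    recon = F.l1_loss(reflectance * illumination, image)
    # Smoothness prior on the illumination map
    smooth = total_variation(illumination)
    return w_recon * recon + w_tv * smooth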

The proposed method effectively preserves original image details, suppresses noise, and enhances contrast, showcasing its potential for robust low-light image enhancement.

Experimental results and analysis

The researchers present a comprehensive analysis of the experimental results, beginning with a description of the parameter settings and comparison methods. The experiments were conducted in a uniform configuration environment on an Ubuntu system, implemented in PyTorch and optimized by ADMM with specific parameters λ1, λ2, and ρ. A batch size of 16, a learning rate of 0.0003, and a training patch size of 320×320 were maintained, with 485 paired images from the LOL dataset used for training over 1000 epochs.
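As a rough illustration of that setup, the reported hyperparameters could be gathered as in the snippet below; the placeholder model and the Adam optimizer are assumptions made purely for this sketch (the article itself refers to an ADMM-based optimization with parameters λ1, λ2, and ρ, which are not reproduced here).

import torch

# Hyperparameters reported in the article; everything else in this snippet
# is an assumption for illustration only.
config = {
    "dataset": "LOL",          # 485 paired low/normal-light training images
    "patch_size": 320,         # random 320x320 training crops
    "batch_size": 16,
    "learning_rate": 3e-4,
    "epochs": 1000,
}

model = torch.nn.Conv2d(3, 3, kernel_size=3, padding=1)   # placeholder for the full network
# An Adam-style optimizer is used here purely as a stand-in; the article
# describes an ADMM-based optimization with parameters λ1, λ2, and ρ.
optimizer = torch.optim.Adam(model.parameters(), lr=config["learning_rate"])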

To evaluate the proposed network, they compared it with various state-of-the-art methods, encompassing traditional, supervised, and unsupervised approaches. Paired (LOL and LSRW) and unpaired (LIME, MEF, NPE) datasets were selected for verification experiments. Quantitative evaluation employed peak signal-to-noise ratio (PSNR), mean absolute error (MAE), structural similarity index (SSIM), learned perceptual image patch similarity (LPIPS), and natural image quality evaluator (NIQE). These metrics provided a comprehensive assessment of the algorithm's performance.
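For reference, the full-reference metrics among these can be computed with standard tooling, as in the sketch below; the image pair is synthetic, and LPIPS and NIQE are omitted because they require separate packages and pretrained models.

import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

# Synthetic stand-ins for an enhanced result and its ground-truth reference
reference = np.random.rand(256, 256, 3).astype(np.float64)
enhanced = np.clip(reference + 0.05 * np.random.randn(256, 256, 3), 0, 1)

psnr = peak_signal_noise_ratio(reference, enhanced, data_range=1.0)
ssim = structural_similarity(reference, enhanced, channel_axis=-1, data_range=1.0)
mae = np.mean(np.abs(reference - enhanced))

print(f"PSNR: {psnr:.2f} dB, SSIM: {ssim:.4f}, MAE: {mae:.4f}")
# LPIPS (learned perceptual metric) and NIQE (no-reference quality metric)
# need additional packages such as `lpips` and pretrained models.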

Subjective visual evaluation demonstrated the effectiveness of this method compared to various algorithms across the LOL and LSRW datasets. The approach consistently outperformed others by enhancing brightness while preserving colors and intricate details, which was evident in the improved overall image quality. Notably, the model's multi-stage strategy contributed to robust illumination map estimation and reflectance map optimization.

Conclusion

Overall, this innovative method tackles the challenge of enhancing low-light images by employing a decomposition network to obtain illumination and reflection components. The enhancement network, comprising the RDM and IEM, refines these components iteratively, culminating in image reconstruction. Comparative experiments on various benchmarks attest to the method's effectiveness, offering uniformly bright, detailed, and natural images. While exhibiting promising performance, this approach may encounter challenges with non-uniform illumination, prompting ongoing efforts to refine the network structure and integrate it into a seamless end-to-end architecture.


Written by

Soham Nandi

Soham Nandi is a technical writer based in Memari, India. His academic background is in Computer Science Engineering, specializing in Artificial Intelligence and Machine learning. He has extensive experience in Data Analytics, Machine Learning, and Python. He has worked on group projects that required the implementation of Computer Vision, Image Classification, and App Development.


