Liquid Lens-Based Camera and EEPMD-Net for 3D Scene Capture and Reconstruction

In a paper published in the journal Light: Science & Applications, researchers introduced a liquid lens-based holographic camera for accurate 3D scene hologram acquisition using an end-to-end physical model-driven network (EEPMD-Net). The camera featured a large aperture electrowetting-based liquid lens, enabling quick and high-quality imaging of multi-layered real 3D scenes.

Study: Liquid Lens-Based Camera and EEPMD-Net for 3D Scene Capture and Reconstruction. Image credit: Maryna Polonska/Shutterstock

The EEPMD-Net employed novel encoder and decoder networks to generate low-noise phases, optimizing hologram fidelity through a composite loss function. Experimental results demonstrated high-fidelity hologram generation with low noise, showcasing potential applications in 3D display, measurement, encryption, and beyond.

Related Work

Past work in holography has encountered challenges in quickly capturing and reconstructing natural 3D scenes with proper depth and minimizing speckle noise. Traditional methods involve cumbersome data collection and complex calculations. While digital holography offers direct capture, it relies on active illumination and suffers from system limitations. Recent advancements in neural networks have improved hologram generation but struggle with accurate depth reconstruction and speed.

Developing and Training EEPMD-Net

Researchers developed the EEPMD-Net and trained it using the PyTorch deep learning framework, with PyCharm as the integrated development environment. Training and validation drew on a hybrid dataset combining the complex realistic scenes stereo (CREStereo) dataset and the 2K-resolution image dataset DIV2K. The CREStereo dataset, generated through Blender's synthetic rendering, offers complex scenes with accurate depth maps; researchers selected 200 of its scenes for training and 50 for validation. The DIV2K dataset, consisting of 900 high-definition images, contributed 800 images for training and 100 for validation.

Researchers employed the MiDaS monocular depth estimation model to generate depth maps for the DIV2K dataset and used the Adam optimizer to train the EEPMD-Net. They kept the optimizer's default hyperparameters, adjusting only the learning rate. The training process comprised two stages: an initial pre-training phase on the CREStereo dataset with a learning rate of 0.0004, and a subsequent fine-tuning phase on the DIV2K dataset with a learning rate of 0.0001.

The pre-training stage spanned 80 epochs, while the fine-tuning stage consisted of 50 epochs, each with a mini-batch size of one. Researchers executed the training on a Windows 11 computer equipped with an Intel Core i9-10980XE CPU and an NVIDIA GeForce RTX 3090 graphics processing unit (GPU).
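The two-stage schedule described above can be sketched as follows. The dataset names, epoch counts, learning rates, and mini-batch size come from the article; the `train_one_epoch` callback is a hypothetical placeholder, since the paper's actual training loop is not reproduced here:

```python
# Two-stage training schedule as described: pre-training on CREStereo,
# then fine-tuning on DIV2K, each with Adam (default hyperparameters
# except the learning rate) and a mini-batch size of one.
SCHEDULE = [
    # (dataset, epochs, learning rate)
    ("CREStereo", 80, 4e-4),  # stage 1: pre-training
    ("DIV2K", 50, 1e-4),      # stage 2: fine-tuning
]

def run_training(train_one_epoch):
    """Drive both stages; train_one_epoch is a hypothetical callback
    that would run one epoch over the named dataset."""
    for dataset, epochs, lr in SCHEDULE:
        for _ in range(epochs):
            train_one_epoch(dataset, lr, batch_size=1)

total_epochs = sum(epochs for _, epochs, _ in SCHEDULE)
print(total_epochs)  # 130 epochs across both stages
```

The per-stage learning-rate drop (0.0004 → 0.0001) is a common fine-tuning pattern: the lower rate keeps the DIV2K adaptation from overwriting what was learned on CREStereo.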

Liquid Camera and Holographic Reconstruction

Fabrication and performance testing of the liquid camera involved integrating a self-fabricated liquid lens, a solid lens group, a custom-built liquid lens driver, and an image sensor. The solid lens group had a focal ratio of 2.8, a focal length of 12 mm, and a mechanical length of approximately 19 mm; the image sensor used a Sony IMX178 photosensitive chip.

Based on an STMicroelectronics STM32G070 microcontroller, the liquid lens driver provided sufficient driving voltage with a voltage adjustment step of ~0.2 V. The liquid lens comprised flexible aluminum electrodes with various coatings for effective operation, and its two liquids were chosen to maintain high transmission efficiency across the visible spectrum. After testing, the liquid lens exhibited a response time of 91 ms and an optical power range from -5 m⁻¹ to 7.03 m⁻¹.
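As a quick sanity check on the reported tuning range, optical power in m⁻¹ (diopters) is simply the reciprocal of focal length, so the range above corresponds to focal lengths from -0.2 m (diverging) to about 0.142 m (converging):

```python
def focal_length_m(optical_power):
    """Focal length (in meters) from optical power (in m^-1, i.e. diopters).
    Optical power is defined as the reciprocal of focal length."""
    return 1.0 / optical_power

# The article reports a liquid-lens tuning range of -5 m^-1 to 7.03 m^-1.
print(focal_length_m(-5.0))              # -0.2
print(round(focal_length_m(7.03), 3))    # 0.142
```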

Image capture and depth calculation involved using the liquid camera to capture a real 3D scene containing four signs, with the liquid lens allowing quick focusing depth adjustments. Researchers evaluated clarity by convolving the captured images with the Laplacian operator, yielding clarity evaluation values (CEVs) for marked regions under different driving voltages. The depth of target signs within the marked areas was obtained accurately, with depth values consistent with the actual settings. Researchers also developed a mask generation method to extract complete target sign shapes, facilitating the generation of fused all-in-focus scenes and depth maps.
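The Laplacian-based clarity measure can be sketched as below. The kernel is the standard discrete Laplacian; the aggregation (mean squared response) is an assumption, since the article does not specify how the paper pools the convolution output into a single CEV:

```python
import numpy as np

def clarity_evaluation_value(region):
    """Clarity evaluation value (CEV) for a grayscale image region:
    convolve with a 3x3 Laplacian kernel and pool the response.
    Higher values indicate a sharper (better focused) region."""
    kernel = np.array([[0,  1, 0],
                       [1, -4, 1],
                       [0,  1, 0]], dtype=float)
    h, w = region.shape
    out = np.zeros((h - 2, w - 2))
    for i in range(h - 2):
        for j in range(w - 2):
            out[i, j] = np.sum(region[i:i+3, j:j+3] * kernel)
    return float(np.mean(out ** 2))  # assumed pooling: mean squared response

# A sharp edge yields a higher CEV than a uniformly gray (defocused) region:
sharp = np.zeros((10, 10)); sharp[:, 5:] = 1.0
flat = np.full((10, 10), 0.5)
print(clarity_evaluation_value(sharp) > clarity_evaluation_value(flat))  # True
```

Scanning such a focus measure over the lens's driving voltages, and taking the voltage (hence focal depth) that maximizes the CEV in each marked region, is the depth-from-focus principle the passage describes.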

Holographic reconstruction verified the advantages of the proposed holographic camera, with holograms transmitted to the spatial light modulator (SLM) for optical reconstruction. The EEPMD-Net demonstrated superior reconstruction quality compared to alternative methods such as error diffusion (ED), double-phase (DP), and stochastic gradient descent (SGD), preserving edges, details, and texture information effectively. Researchers calculated red, green, and blue (RGB) holograms of the actual 3D scene using EEPMD-Net, achieving high-quality holographic reconstruction with short calculation times. Experimental results of color-reconstructed images demonstrated excellent texture and layered detail reconstruction.

Conclusion

To sum up, the development of the liquid camera and its integration with holographic reconstruction techniques marked significant advancements in capturing and rendering realistic 3D scenes. The liquid camera's successful fabrication, performance testing, and innovative depth calculation methods demonstrated its effectiveness in accurately capturing complex scenes and generating depth maps.

Additionally, using the EEPMD-Net for holographic reconstruction showcased superior quality compared to traditional methods, ensuring a faithful representation of details and textures. The generation of RGB holograms further enhanced the realism of reconstructed images, opening up possibilities for dynamic holographic 3D augmented reality applications. These achievements signified promising prospects for advancements in holography and its diverse applications in various fields.


Written by

Silpaja Chandrasekar

Dr. Silpaja Chandrasekar has a Ph.D. in Computer Science from Anna University, Chennai. Her research expertise lies in analyzing traffic parameters under challenging environmental conditions. Additionally, she has gained valuable exposure to diverse research areas, such as detection, tracking, classification, medical image analysis, cancer cell detection, chemistry, and Hamiltonian walks.

Citations

Please use one of the following formats to cite this article in your essay, paper or report:

  • APA

    Chandrasekar, Silpaja. (2024, March 06). Liquid Lens-Based Camera and EEPMD-Net for 3D Scene Capture and Reconstruction. AZoAi. Retrieved on July 01, 2024 from https://www.azoai.com/news/20240306/Liquid-Lens-Based-Camera-and-EEPMD-Net-for-3D-Scene-Capture-and-Reconstruction.aspx.

