ACCEL: Revolutionizing Vision Computing with an All-Analog Chip

In a paper published in the journal Nature, researchers introduced an All-Analog Chip for Combined Electronic and Light Computing (ACCEL) to address the challenges in photonic computing.

Study: ACCEL: Revolutionizing Vision Computing with an All-Analog Chip. Image credit: Generated using DALL.E.3

ACCEL achieved remarkable energy efficiency and computing speed, significantly outperforming state-of-the-art processors. By eliminating the need for most analog-to-digital converters (ADCs), it attained low latency while delivering competitive classification accuracy and excelling in low-light conditions. With applications ranging from wearables to autonomous driving, ACCEL represented a breakthrough in photonic computing.

Background

The background outlines the growing importance of computer vision in various applications and the limitations of traditional digital computing units regarding energy consumption and processing speed. Photonic computing has emerged as a promising solution, but it faces practical challenges, including complex optical implementation, power-hungry analog-to-digital converters, and sensitivity to noise.

In previous research, computer vision has demonstrated its extensive applicability, ranging from autonomous driving and robotics to medical diagnosis and wearable devices. Notably, deep learning has significantly enhanced the performance of vision tasks at the algorithmic level. However, the inherent limitations of traditional digital computing units, particularly in terms of energy consumption and processing speed, have hindered these advancements.

ACCEL: All-Analog Vision Computing

The architecture of ACCEL comprises a hybrid, all-analog approach that combines diffractive optical analog computing (OAC) and electronic analog computing (EAC) to process high-resolution images for various computer vision tasks efficiently. OAC extracts image features through a diffractive optical computing module, reducing the need for ADCs by performing dimension reduction optically.

EAC then converts optical signals into analog electronic signals through photodiode arrays without needing ADCs, thanks to binary-weighted connections determined by static random-access memory (SRAM). ACCEL accumulates photocurrents on the photodiode arrays for computation and reads out the result as the differential voltage between the computing lines V+ and V−. ACCEL can also be reconfigured for different tasks without modifying the OAC module, enhancing its versatility across applications.

The OAC module utilizes phase masks trained to process data encoded in light fields, performing operations equivalent to linear matrix multiplications. This approach enables data compression without sacrificing accuracy, reducing the number of ADCs required by 98%. EAC, on the other hand, implements a binary-weighted fully connected neural network, and its computing power consumption primarily comes from the discharging power of photocurrents.
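As a rough, runnable sketch of this two-stage pipeline (the dimensions and random weights below are illustrative assumptions, not the paper's trained values), the OAC stage can be modeled as a linear matrix multiplication that compresses the input, and the EAC stage as a binary-weighted fully connected layer whose output is the difference between the V+ and V− line voltages:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: a 28x28 input compressed optically onto a
# smaller photodiode plane (illustrative sizes, not from the paper).
n_in, n_photo, n_classes = 28 * 28, 64, 10

image = rng.random(n_in)  # light-field intensities (arbitrary units)

# OAC: the trained phase masks act, in effect, as a linear matrix
# multiplication that reduces dimensionality before any electronics.
W_oac = rng.standard_normal((n_photo, n_in)) * 0.01
photocurrents = np.clip(W_oac @ image, 0, None)  # photodiodes see non-negative intensity

# EAC: binary {+1, -1} weights stored in SRAM route each photocurrent
# to either the V+ or the V- computing line of each output neuron.
W_eac = rng.choice([-1, 1], size=(n_classes, n_photo))
v_plus = np.where(W_eac > 0, W_eac, 0) @ photocurrents
v_minus = -np.where(W_eac < 0, W_eac, 0) @ photocurrents

output = v_plus - v_minus        # differential voltage per class
prediction = int(np.argmax(output))
```

The differential readout is what lets binary weights express both signs: subtracting the V− line from the V+ line is algebraically identical to multiplying by a {+1, −1} weight matrix.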

ACCEL demonstrates remarkable noise robustness, making it suitable for high-speed vision tasks in which short exposure times result in low signal-to-noise ratios. An adaptive training method fine-tunes the EAC weights to mitigate systematic errors induced by manufacturing defects and misalignment. Experimental results show that ACCEL achieves high classification accuracy on datasets such as the Modified National Institute of Standards and Technology (MNIST), Fashion-MNIST, and KMNIST, even in low-light conditions.

Moreover, ACCEL is highly versatile and reconfigurable, allowing it to perform well on various tasks without significant accuracy loss. For example, it functions effectively on challenging tasks such as ImageNet classification, demonstrating its capabilities in high-resolution image processing. The partial reconfigurability of ACCEL's EAC enables it to achieve comparable performance on different tasks, making it a flexible solution for a wide range of computer vision applications.

Experimental Setup and Active Component Fabrication

The experimental setup for ACCEL encompassed a range of configurations, including both single-layer and two-layer diffractive OAC units with varying diffractive distances. The experiments were conducted under coherent light conditions with a 532-nm laser source and partial-coherent light conditions using a flashlight as the light source. Regarding material components, OAC employed phase-modulation-only spatial light modulators (SLMs) and SiO2 plates.

The fabrication of the SiO2 phase masks involved meticulous attention to detail, ensuring precise depth levels and line dimensions. Concurrently, the EAC chip was manufactured using a standard 180-nm CMOS process. The chip incorporated photodiode arrays with a 32 × 32 resolution, each pixel measuring 35 μm × 35 μm with a fill factor of 9.14%.

The EAC unit of ACCEL employed an SRAM-based weight storage mechanism, where each pixel contained 16 SRAM units to support binary fully connected networks. The operation pipeline of EAC involved sequential weight updates in SRAM, controlling switches for computing, and utilizing comparators to determine the maximum output voltage, which corresponded to the classification result in the all-analog mode.

To facilitate training, the entire analog physical process within ACCEL, encompassing both OAC and EAC, was modeled in TensorFlow. Training used end-to-end fusion training with stochastic gradient descent and backpropagation. To tackle the challenge of low-light conditions, the researchers incorporated noise sources into the model, including intrinsic shot noise, thermal noise, and readout noise, allowing them to assess the impact of each on ACCEL's performance. The experiments yielded measurements for several critical parameters.
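A minimal sketch of such noise injection is shown below; all noise magnitudes and the photon-count scale are assumed placeholders, not values from the paper. Shot noise is Poisson in the detected photon count, while thermal and readout noise are commonly modeled as additive Gaussians:

```python
import numpy as np

rng = np.random.default_rng(1)

def noisy_photocurrent(intensity, photons_per_unit=50.0,
                       thermal_sigma=0.02, readout_sigma=0.01):
    # All parameters here are illustrative assumptions, not measured values.
    photons = rng.poisson(intensity * photons_per_unit)   # shot noise
    signal = photons / photons_per_unit
    signal = signal + rng.normal(0.0, thermal_sigma, size=np.shape(signal))
    signal = signal + rng.normal(0.0, readout_sigma, size=np.shape(signal))
    return signal

clean = np.full(64, 0.5)           # ideal photodiode signal (a.u.)
noisy = noisy_photocurrent(clean)  # what noise-aware training would see
```

Injecting a model like this during training lets the learned weights absorb the statistics of low-light operation, rather than being tuned only on clean simulated signals.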

Researchers determined the reset time (tr) for each pixel and measured its upper limit at approximately 12.5 ns. Response times and accumulating times were also measured experimentally, with variations depending on the incident light intensity. The complete processing time for ACCEL, comprising the reset time (tr), response time (tp), and accumulating time (ta), was then determined, leading to a remarkable system-level computing speed of around 4.55 × 10³ tera-operations per second (TOPS) for 3-class ImageNet classification.
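The frame-time arithmetic behind such a throughput figure can be sketched as follows. Only the 12.5 ns reset-time upper limit comes from the measurements above; the response and accumulating times below are assumed placeholders, since they vary with incident light intensity:

```python
t_reset = 12.5e-9        # s, reported per-pixel upper limit
t_response = 50e-9       # s, assumed for illustration
t_accumulate = 100e-9    # s, assumed for illustration

# Total processing time per frame is the sum of the three phases.
t_frame = t_reset + t_response + t_accumulate
frames_per_second = 1.0 / t_frame

# Throughput then scales as (operations per frame) / t_frame, which is
# how a fixed per-frame workload translates into a TOPS figure.
```
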

Measurements of energy consumption and efficiency showed that ACCEL's systemic energy consumption for tasks like 10-class MNIST and 3-class ImageNet classification amounted to tens of nanojoules per frame. Notably, ACCEL exhibited remarkable systemic energy efficiency, with values of approximately 9.49 × 10³ TOPS W⁻¹ and 7.48 × 10⁴ TOPS W⁻¹ for 10-class MNIST and 3-class ImageNet, respectively. In direct comparisons with state-of-the-art GPUs, ACCEL achieved significantly lower latency and energy consumption while maintaining similar classification accuracy for complex vision tasks.
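To connect the reported efficiency to a per-inference operation budget, efficiency (operations per joule) times energy per inference gives operations per inference. The 30 nJ figure below is an assumed point within the reported "tens of nanojoules" range, not an exact value:

```python
# 9.49e3 TOPS/W = 9.49e3 * 1e12 operations per joule (reported efficiency).
ops_per_joule = 9.49e3 * 1e12

# Assumed per-inference energy inside the reported tens-of-nanojoules range.
energy_per_inference = 30e-9  # joules

ops_per_inference = energy_per_inference * ops_per_joule  # roughly 3e8 ops
```
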

Conclusion

In summary, ACCEL is a groundbreaking all-analog computing system for vision tasks. Its adaptability, precise fabrication, and careful modeling of noise sources make it a versatile and robust technology. Measurements reveal remarkable efficiency and speed, outperforming state-of-the-art GPUs on complex vision tasks. ACCEL has the potential to reshape analog computing and drive advances across a wide range of applications.


Written by

Silpaja Chandrasekar

Dr. Silpaja Chandrasekar has a Ph.D. in Computer Science from Anna University, Chennai. Her research expertise lies in analyzing traffic parameters under challenging environmental conditions. Additionally, she has gained valuable exposure to diverse research areas, such as detection, tracking, classification, medical image analysis, cancer cell detection, chemistry, and Hamiltonian walks.

Citations

Chandrasekar, Silpaja. (2023, October 29). ACCEL: Revolutionizing Vision Computing with an All-Analog Chip. AZoAi. https://www.azoai.com/news/20231029/ACCEL-Revolutionizing-Vision-Computing-with-an-All-Analog-Chip.aspx.
