Quantum computing and deep learning have both seen major breakthroughs over the past few decades. More recently, the convergence of these fields has led to the development of quantum-inspired deep learning and quantum deep learning techniques. This article discusses advances in quantum deep learning and new developments in the field.
Introduction to Quantum Deep Learning
Deep learning underpins most modern machine learning methods and is one of the most active areas of research in computer science, driven by the growing availability of computational resources and data. Deep learning based on neural networks has received significant attention because it efficiently solves complex practical tasks: a network's optimal parameters are obtained by training on substantial amounts of data.
Similarly, remarkable progress has been made in the quantum computing domain toward solving classically intractable problems with computationally cheaper techniques. Several studies have sought polynomial-time quantum alternatives to classical algorithms by exploiting the core concepts of quantum entanglement and superposition.
Quantum computation can employ unique entanglement characteristics to accelerate algorithms for many tasks, and quantum computing principles can improve the computational efficiency and representational power of classical machine learning approaches. Thus, the combination of quantum computation with machine learning, particularly deep learning, yields quantum learning algorithms, such as quantum deep learning, that inherit the advantages of both fields.
For instance, a quantum perceptron has been formalized in many studies since it was first proposed in 1995. The perceptron constitutes the fundamental unit of deep learning architectures and represents a single neuron. Recent studies have shown that quantum computing offers a more holistic framework for deep learning than classical computing and assists in optimizing the underlying objective function.
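As context for the quantum variants that follow, the classical perceptron can be sketched in a few lines. The weights, bias, and AND-gate example below are purely illustrative.

```python
import numpy as np

def perceptron(x, w, b):
    """Single neuron: weighted sum of inputs followed by a step activation."""
    return 1 if np.dot(w, x) + b > 0 else 0

# Illustrative choice of parameters implementing a logical AND.
w = np.array([1.0, 1.0])
b = -1.5
outputs = [perceptron(np.array(p), w, b) for p in [(0, 0), (0, 1), (1, 0), (1, 1)]]
# outputs is [0, 0, 0, 1]: the neuron fires only when both inputs are 1
```

Quantum perceptron proposals replace this weighted-sum-and-threshold step with unitary operations and measurements on qubits.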
Quantum Neural Network (QNN) Implementation
Although QNN modeling has remained the focus of the quantum deep learning domain, many algorithms are not practically implementable due to the poor representation capability of existing quantum computing devices. Many studies have therefore developed hybrid quantum-classical algorithms that perform computations efficiently using a small quantum random access memory (QRAM).
Early attempts to implement QNNs in practice primarily represented weights with optical phase shifters and beam splitters, and qubits with polarized optical modes. A study proposed a QNN implementation through the interaction between a quantum dot molecule, the phonons of a surrounding lattice, and an external field.
Another study utilized a central spin model as a practical QNN implementation, using a system of two coupled nodes with independent spin baths. A collisional spin model was also proposed for QNN representation, enabling analysis of both Markovian and non-Markovian dynamics of the system. Most recent studies on practical QNN implementation have focused on quantum circuit simulation on noisy intermediate-scale quantum (NISQ) devices.
For instance, a study proposed a neuromorphic hardware co-processor, designated the Darwin Neural Processing Unit, as a practical implementation of spiking neural networks. Another recent study compared the performance of deep learning architectures on three computing platforms: neuromorphic, high-performance, and D-Wave processors.
Quantum Convolutional Neural Networks (QCNNs)
A QCNN was proposed in a study that adapted the concepts of pooling and convolutional layers from classical CNNs. Although the proposed architecture was similarly layered, it applied one-dimensional (1D) convolutions to the input quantum state instead of the two-dimensional (2D) or three-dimensional (3D) convolutions applied to images.
The convolutional layer was modeled as a quasi-local unitary operation applied to the input state's density matrix. This unitary operator was applied to multiple successive sets of input qubits up to a predefined depth. The pooling layer was implemented by applying unitary rotations to nearby qubits and measuring some of the qubits, with the rotation operation determined by the measurement outcomes.
This step integrates nonlinearity, introduced by the partial qubit measurement, with dimensionality reduction, since the pooling output occupies fewer qubits. The fully connected layer was implemented as a unitary F applied after the required number of convolutional and pooling blocks. A final measurement of the F output yielded the network output.
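The measurement-based pooling idea can be illustrated with a minimal NumPy sketch, under simplifying assumptions: a two-qubit pure state, where measuring one qubit collapses the state and leaves a lower-dimensional (single-qubit) state for the next layer. The state and qubit ordering here are illustrative.

```python
import numpy as np

def measure_and_pool(state, outcome):
    """Measure the first qubit of a 2-qubit state; return the probability of
    the given outcome and the collapsed 1-qubit state (the pooled output)."""
    psi = state.reshape(2, 2)      # axis 0: measured qubit, axis 1: kept qubit
    branch = psi[outcome]          # unnormalized post-measurement amplitudes
    prob = np.sum(np.abs(branch) ** 2)
    return prob, branch / np.sqrt(prob)

# Illustrative input: equal superposition (|00> + |01> + |10> + |11>) / 2
state = np.full(4, 0.5)
p, pooled = measure_and_pool(state, outcome=0)
# p is 0.5; pooled is a normalized 2-amplitude (single-qubit) state
```

The dimensionality reduction is visible in the shapes: a 4-amplitude two-qubit state becomes a 2-amplitude single-qubit state after the measurement.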
As with classical CNNs, the overall QCNN architecture is user-defined, while the unitaries' parameters are learned. The parameters are optimized by minimizing a loss function via gradient descent, with gradients estimated using the finite-difference method. Researchers demonstrated the effectiveness of the proposed architecture on two problem classes: quantum phase recognition and quantum error correction.
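The finite-difference gradient estimate used to train such circuits can be sketched generically; the quadratic loss below is a stand-in for a loss measured from a parameterized quantum circuit.

```python
import numpy as np

def finite_difference_grad(loss, theta, eps=1e-5):
    """Estimate dL/dtheta_i by central differences, one parameter at a time."""
    grad = np.zeros_like(theta)
    for i in range(len(theta)):
        shift = np.zeros_like(theta)
        shift[i] = eps
        grad[i] = (loss(theta + shift) - loss(theta - shift)) / (2 * eps)
    return grad

# Illustrative quadratic loss standing in for a measured circuit loss.
loss = lambda t: np.sum((t - 1.0) ** 2)
theta = np.array([0.0, 2.0])
g = finite_difference_grad(loss, theta)   # analytic gradient is 2 * (theta - 1)
```

On hardware, each `loss(...)` evaluation corresponds to running the circuit with shifted parameters and averaging measurement outcomes, so the gradient costs two circuit evaluations per parameter.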
A recent study identified the relationship between convolutions and matrix multiplications and presented the first quantum algorithm for computing a CNN's forward pass as a convolutional product. The authors also provided a quantum backpropagation algorithm for learning network parameters through gradient descent. Special CNNs were proposed in several studies to extract features from graphs and to identify graphs that display a quantum advantage.
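The convolution–matrix-multiplication correspondence that such work builds on can be illustrated classically: unrolling input patches into rows (the im2col trick) turns a 1D convolution into a single matrix product. The array values here are illustrative.

```python
import numpy as np

def conv1d_as_matmul(x, k):
    """Valid-mode 1D convolution (correlation) expressed as a matrix product."""
    n = len(x) - len(k) + 1
    patches = np.stack([x[i:i + len(k)] for i in range(n)])  # im2col: one patch per row
    return patches @ k

x = np.array([1.0, 2.0, 3.0, 4.0])
k = np.array([1.0, -1.0])
out = conv1d_as_matmul(x, k)   # matches np.correlate(x, k, mode="valid")
```

Because the operation reduces to matrix multiplication, any quantum speedup for matrix products translates directly into a speedup for the convolutional forward pass.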
Quantum Recurrent Neural Networks (QRNNs)
A quantum variant of RNNs was proposed using variational wave functions to learn the approximate ground state of a quantum Hamiltonian. Another study presented an iterative retraining approach that uses RNNs to simulate bulk quantum systems by mapping lattice translations to the RNN time index. Many studies have also proposed quantum variants of Hopfield networks, an early and popular form of RNN.
Hybrid CNNs
The quanvolutional layer, a transformation based on random quantum circuits, was introduced as an additional component in a classical CNN to form a hybrid model architecture. Quanvolutional layers contain several quantum filters, each taking a 2D matrix of values as input and producing a single scalar value as output.
The operations are applied iteratively to subsections of the input, similar to convolutional filters. Every quantum filter consists of an encoder, a random circuit, and a decoder. The encoder converts the raw input data into an initialization state, which is fed to the random circuit. The random circuit's output is then fed to the decoder, which yields a scalar value.
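A minimal classical simulation of one quantum filter is sketched below, with the assumptions made explicit: a 2x2 patch is threshold-encoded into a 4-qubit basis state, a fixed random unitary (generated by QR decomposition) stands in for the random circuit, and the decoder returns the expected number of qubits measured as 1. The threshold, patch values, and decoding choice are all illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_unitary(dim):
    """Haar-like random unitary via QR decomposition of a Gaussian matrix."""
    q, r = np.linalg.qr(rng.normal(size=(dim, dim)))
    return q * np.sign(np.diag(r))   # fix column phases

U = random_unitary(16)               # stands in for a random 4-qubit circuit

def quantum_filter(patch, threshold=0.5):
    """Encoder -> random circuit -> decoder: maps a 2x2 patch to one scalar."""
    bits = (patch.flatten() > threshold).astype(int)   # threshold encoding
    index = int("".join(map(str, bits)), 2)            # basis-state index
    state = np.zeros(16)
    state[index] = 1.0                                 # initialization state
    probs = np.abs(U @ state) ** 2                     # measurement distribution
    ones = np.array([bin(i).count("1") for i in range(16)])
    return float(probs @ ones)                         # expected count of 1s

value = quantum_filter(np.array([[0.9, 0.1], [0.3, 0.8]]))
```

Sliding this filter over every 2x2 subsection of an image produces one scalar per position, exactly as a classical convolutional filter would.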
Quantum-inspired Classical Deep Learning
Quantum computing methods have been applied to classical deep learning techniques in many studies. For instance, a quantum sampling-based approach was proposed for the generative training of restricted Boltzmann machines and was significantly faster than Gibbs sampling.
Similarly, another study used quantum mechanical density functional theory (DFT) techniques to train deep neural networks for a molecular energy estimation engine. Quantum-based particle swarm optimization was utilized in a study to identify optimal CNN model architectures, while a quantum algorithm was proposed for the performance evaluation of neural network architectures.
New Developments
A study published in the New Journal of Physics proposed a general scheme of quantum deep learning based on multi-qubit entangled states, covering both the training and computation of neural networks in a fully quantum process. During training, the distance between a known unit vector and an unknown unit vector was calculated efficiently by a suitable measurement based on Greenberger–Horne–Zeilinger (GHZ) entangled states.
An exponential speedup was realized over classical deep learning algorithms using the proposed quantum deep learning scheme. In the computation process, a quantum scheme that corresponded to a multi-layer feedforward neural network was provided. Researchers successfully demonstrated the effectiveness of this scheme using the Iris dataset.
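The distance computation at the heart of such training rests on a simple identity: for unit vectors, the squared Euclidean distance is determined entirely by the inner product (overlap), which is the quantity a suitable quantum measurement estimates. A classical check of the identity, with illustrative random vectors:

```python
import numpy as np

def distance_from_overlap(a, b):
    """For unit vectors, ||a - b||^2 = 2 - 2<a, b>, so the distance follows
    directly from the overlap that a quantum measurement would estimate."""
    return np.sqrt(2.0 - 2.0 * np.dot(a, b))

rng = np.random.default_rng(1)
a = rng.normal(size=4); a /= np.linalg.norm(a)
b = rng.normal(size=4); b /= np.linalg.norm(b)
d = distance_from_overlap(a, b)   # equals np.linalg.norm(a - b)
```

A quantum scheme gains its advantage by estimating the overlap with measurements on entangled states rather than by computing the inner product term by term.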
Another study published in Scientific Reports demonstrated that a classically trained deep neural network can feasibly be framed as an energy-based model, which can then be processed in a single step on a quantum annealer to exploit its fast sampling times. Exploiting the strengths of quantum annealing indicated a potential classification speedup of at least one order of magnitude.
A study published in the 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition utilized quantum computing's information-processing advantages to improve defect learning defect review (DLDR) in semiconductor manufacturing.
A classical-quantum hybrid algorithm was proposed for deep learning on near-term quantum processors. A quantum circuit driven by the proposed framework learns a given DLDR task, such as hotspot detection, defect pattern classification, or wafer defect map classification, by tuning parameters implemented on quantum processors.
Overall, quantum deep learning offers promising avenues for enhanced machine learning, but practical applications require further development in quantum computing hardware.
References and Further Reading
Higham, C. F., Bedford, A. (2023). Quantum deep learning by sampling neural nets with a quantum annealer. Scientific Reports, 13(1), 1-9. https://doi.org/10.1038/s41598-023-30910-7
Garg, S., Ramakrishnan, G. (2020). Advances in Quantum Deep Learning: An Overview. ArXiv. https://doi.org/10.48550/arXiv.2005.04316
Yang, Y. F., Sun, M. (2022). Semiconductor defect detection by hybrid classical-quantum deep learning. 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2323-2332. https://doi.org/10.1109/CVPR52688.2022.00236
Yang, Z., Zhang, X. (2020). Entanglement-based quantum deep learning. New Journal of Physics, 22(3), 033041. https://doi.org/10.1088/1367-2630/ab7598
Cong, I., Choi, S., Lukin, M. D. (2019). Quantum convolutional neural networks. Nature Physics, 15(12), 1273-1278. https://doi.org/10.1038/s41567-019-0648-8