Paper Title
DeepQMLP: A Scalable Quantum-Classical Hybrid Deep Neural Network Architecture for Classification
Paper Authors
Paper Abstract
Quantum machine learning (QML) is promising for potential speedups and improvements in conventional machine learning (ML) tasks (e.g., classification/regression). The search for ideal QML models is an active research field. This includes identification of efficient classical-to-quantum data encoding schemes, construction of parametric quantum circuits (PQC) with optimal expressivity and entanglement capability, and efficient output decoding schemes to minimize the required number of measurements, to name a few. However, most empirical/numerical studies lack a clear path towards scalability. Any potential benefit observed in a simulated environment may diminish in practical applications due to the limitations of noisy quantum hardware (e.g., decoherence, gate errors, and crosstalk). We present a scalable quantum-classical hybrid deep neural network (DeepQMLP) architecture inspired by classical deep neural network architectures. In DeepQMLP, stacked shallow Quantum Neural Network (QNN) models mimic the hidden layers of a classical feed-forward multi-layer perceptron network. Each QNN layer produces a new and potentially richer representation of the input data for the next layer. This new representation can be tuned through the parameters of the circuit. Shallow QNN models experience less decoherence, fewer gate errors, etc., which makes them (and the network) more resilient to quantum noise. We present numerical studies on a variety of classification problems to show the trainability of DeepQMLP. We also show that DeepQMLP performs reasonably well on unseen data and exhibits greater resilience to noise than QNN models that use a deep quantum circuit. Under noise, DeepQMLP provided up to 25.3% lower loss and 7.92% higher accuracy during inference than QMLP.
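The stacked-layer idea described in the abstract lends itself to a short illustration. The following is a minimal sketch, not the authors' implementation: it assumes PennyLane, a 2-qubit width, angle encoding, and a single entangling layer per shallow QNN, with each layer's Pauli-Z expectation values fed as the classical inputs to the next layer. All names and hyperparameters here are illustrative assumptions.

```python
# Hypothetical sketch of a DeepQMLP-style stack of shallow QNNs (PennyLane).
# Qubit count, encoding, and gate choices are assumptions, not the paper's exact design.
import pennylane as qml
from pennylane import numpy as np

n_qubits = 2
dev = qml.device("default.qubit", wires=n_qubits)

@qml.qnode(dev)
def shallow_qnn(inputs, weights):
    # Angle-encode the classical inputs, then apply one shallow
    # trainable entangling layer (keeping circuit depth small).
    qml.AngleEmbedding(inputs, wires=range(n_qubits))
    qml.BasicEntanglerLayers(weights, wires=range(n_qubits))
    # Measured expectation values act as this layer's classical output.
    return [qml.expval(qml.PauliZ(w)) for w in range(n_qubits)]

def deep_qmlp(x, weight_list):
    # Stack shallow QNNs: each layer's expectations become the next
    # layer's inputs, mimicking the hidden layers of a classical MLP.
    h = x
    for weights in weight_list:
        h = np.stack(shallow_qnn(h, weights))
    return h

# Toy usage: a 3-layer stack on a 2-feature input.
weight_list = [np.array(np.random.uniform(0, np.pi, (1, n_qubits)),
                        requires_grad=True) for _ in range(3)]
print(deep_qmlp(np.array([0.1, 0.7]), weight_list))
```

Note the structural point this sketch makes concrete: no matter how many layers are stacked, each individual quantum execution stays shallow, which is the basis of the noise-resilience argument in the abstract.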