Paper Title
A Derivative-free Method for Quantum Perceptron Training in Multi-layered Neural Networks
Paper Authors
Paper Abstract
In this paper, we present a gradient-free approach for training multi-layered neural networks based upon quantum perceptrons. Here, we depart from the classical perceptron and the elemental operations on quantum bits, i.e. qubits, so as to formulate the problem in terms of quantum perceptrons. We then make use of measurable operators to define the states of the network in a manner consistent with a Markov process. This yields a Dirac-von Neumann formulation consistent with quantum mechanics. Moreover, the formulation presented here has the advantage of a computational cost that is independent of the number of layers in the network. This, paired with the natural efficiency of quantum computing, can yield a significant improvement in efficiency, particularly for deep networks. Last but not least, the developments here are quite general in nature, since the approach presented here can also be applied to quantum-inspired neural networks implemented on conventional computers.
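To make the layer-to-layer picture concrete, the following is a minimal illustrative sketch rather than the authors' exact formulation. It assumes the state associated with layer $l$ is a density operator $\rho_l$, that each layer acts as a completely positive trace-preserving map $\Phi_l$ (written here, as an assumption, in Kraus form with operators $K_{l,k}$), so that $\rho_{l+1}$ depends only on $\rho_l$ as in a Markov process, and that the network output is read out through a measurable (Hermitian) operator $M$ in the Dirac-von Neumann picture:

\[
  \rho_{l+1} \;=\; \Phi_l(\rho_l) \;=\; \sum_k K_{l,k}\,\rho_l\,K_{l,k}^{\dagger},
  \qquad
  \sum_k K_{l,k}^{\dagger} K_{l,k} \;=\; I,
\]
\[
  \rho_L \;=\; \bigl(\Phi_{L-1} \circ \cdots \circ \Phi_0\bigr)(\rho_0),
  \qquad
  \langle M \rangle \;=\; \operatorname{Tr}\!\bigl(M\,\rho_L\bigr).
\]

Under this reading, a gradient-free training procedure would adjust the parameters of the maps $\Phi_l$ using only measured expectations $\operatorname{Tr}(M\rho_L)$, with no backpropagation of derivatives through the layers; the specific update rule used in the paper is not reproduced here.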