Paper Title
Restricting to the chip architecture maintains the quantum neural network accuracy
Paper Authors
Paper Abstract
In the era of noisy intermediate-scale quantum devices, variational quantum algorithms (VQAs) are a prominent strategy for constructing quantum machine learning models. These models comprise a quantum and a classical component. The quantum part is characterized by a parametrization $U$, typically built from the composition of various quantum gates. The classical component is an optimizer that adjusts the parameters of $U$ so as to minimize a cost function $C$. Despite the extensive applications of VQAs, several critical questions persist, such as determining the optimal gate sequence, devising efficient parameter-optimization strategies, selecting appropriate cost functions, and understanding the influence of the quantum chip architecture on the final results. This article addresses the last question, emphasizing that, in general, the cost function concentrates around its average value as the parametrization used approaches a $2$-design. Consequently, when the parametrization closely approximates a $2$-design, the outcome of the quantum neural network model becomes less dependent on the specific parametrization. This insight makes it possible to use the inherent architecture of the quantum chip to define the parametrization of a VQA. Doing so avoids additional SWAP gates, thereby reducing the circuit depth of the VQA and minimizing the associated errors.
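
To make the concentration statement concrete, consider a cost of the form $C = \mathrm{Tr}[O\,U\rho U^{\dagger}]$ for an observable $O$ and input state $\rho$ on $n$ qubits, with $d = 2^{n}$; this form, and the symbols $O$, $\rho$, $d$, are our illustrative choices rather than notation taken from the paper. The standard $2$-design averaging argument then gives

$$\mathbb{E}_{U}[C] = \frac{\mathrm{Tr}[O]}{d}, \qquad \mathrm{Var}_{U}[C] = \frac{\left(\mathrm{Tr}[O^{2}] - \mathrm{Tr}[O]^{2}/d\right)\left(\mathrm{Tr}[\rho^{2}] - 1/d\right)}{d^{2}-1},$$

so for a traceless Pauli observable ($\mathrm{Tr}[O] = 0$, $\mathrm{Tr}[O^{2}] = d$) and a pure input state, $\mathrm{Var}_{U}[C] = 1/(d+1) = O(2^{-n})$: the cost concentrates exponentially fast around its mean, which is why the model's outcome depends only weakly on the specific parametrization once $U$ approximates a $2$-design.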
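
The following is a minimal, self-contained sketch in plain NumPy of the VQA loop described above; the ansatz, observable, and all names are our illustrative assumptions, not the paper's construction. The parametrization $U(\theta)$ uses entangling gates only between nearest-neighbor qubits, as on a linear chip topology, so no SWAP gates are needed, and a classical finite-difference gradient-descent optimizer minimizes the cost $C(\theta)$.

# Minimal VQA sketch (illustrative assumptions throughout, not the paper's method).
# The entangling layer uses only nearest-neighbor CNOTs, mimicking a linear
# chip topology so that no SWAP gates are required.
import numpy as np

n_qubits = 3
dim = 2 ** n_qubits

def ry(theta):
    """Single-qubit RY rotation."""
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]])

def kron_all(mats):
    """Kronecker product of a list of matrices, qubit 0 leftmost."""
    out = np.array([[1.0]])
    for m in mats:
        out = np.kron(out, m)
    return out

I2 = np.eye(2)
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=float)

def cnot_on(control, n):
    """CNOT on adjacent wires (control, control+1) of an n-qubit register."""
    return kron_all([I2] * control + [CNOT] + [I2] * (n - control - 2))

def ansatz(params):
    """Hardware-efficient layer: RY on every qubit, then a chain of CNOTs."""
    U = kron_all([ry(t) for t in params])
    for c in range(n_qubits - 1):   # nearest-neighbor entanglers only
        U = cnot_on(c, n_qubits) @ U
    return U

# Observable: Z on qubit 0; cost C(theta) = <0| U(theta)^dag O U(theta) |0>.
O = kron_all([np.diag([1.0, -1.0])] + [I2] * (n_qubits - 1))
psi0 = np.zeros(dim); psi0[0] = 1.0

def cost(params):
    psi = ansatz(params) @ psi0
    return float(psi @ O @ psi)

# Classical loop: finite-difference gradient descent on the parameters of U.
rng = np.random.default_rng(0)
theta = rng.uniform(0, 2 * np.pi, n_qubits)
eps, lr = 1e-4, 0.2
for step in range(200):
    grad = np.array([
        (cost(theta + eps * e) - cost(theta - eps * e)) / (2 * eps)
        for e in np.eye(n_qubits)
    ])
    theta -= lr * grad

print("final cost:", cost(theta))   # approaches -1, the minimum of <Z_0>

Because the entangler chain matches the assumed chip connectivity, this circuit would compile onto a linear-topology device without SWAP insertions; on a device whose coupling graph differed from the ansatz, a transpiler would add SWAPs and deepen the circuit, which is exactly the overhead the abstract's proposal avoids.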