Paper Title


A Depth-Progressive Initialization Strategy for Quantum Approximate Optimization Algorithm

Authors

Xinwei Lee, Ningyi Xie, Yoshiyuki Saito, Dongsheng Cai, Nobuyoshi Asai

Abstract


The quantum approximate optimization algorithm (QAOA) is known for its capability and universality in solving combinatorial optimization problems on near-term quantum devices. The results yielded by QAOA depend strongly on its initial variational parameters. Hence, parameter selection for QAOA has become an active area of research, as poor initialization can degrade the quality of the results, especially at large circuit depths. We first discuss the patterns of optimal parameters in QAOA along two directions: the angle index and the circuit depth. Then, we discuss the symmetries and periodicity of the expectation value, which are used to determine the bounds of the search space. Based on the patterns of the optimal parameters and the restricted bounds, we propose a strategy that predicts new initial parameters by taking the difference between previous optimal parameters. Unlike most other strategies, the proposed strategy does not require multiple trials to ensure success; it requires only one prediction when progressing to the next depth. We compare this strategy with our previously proposed strategy and with the layerwise strategy on the Max-cut problem, in terms of the approximation ratio and the optimization cost. We also address the non-optimality of previous parameters, which is seldom discussed in other works despite its importance in explaining the behavior of variational quantum algorithms.
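The abstract describes a depth-progressive scheme: the initial parameters for depth p+1 are predicted from the differences between the optimal parameters already found at depth p, so only one prediction (and one optimization run) is needed per depth. The sketch below illustrates one plausible difference-based extrapolation of this kind; the function name predict_next_depth, the linear continuation of the last angle difference, and the numerical values are assumptions for illustration, not the paper's exact prediction rule.

```python
import numpy as np

def predict_next_depth(gammas, betas):
    """Illustrative sketch (not the paper's exact rule): extend depth-p optimal
    angles to a depth-(p+1) initial guess by continuing the trend of the
    difference between the last two optimal angles."""
    def extend(angles):
        angles = np.asarray(angles, dtype=float)
        if len(angles) < 2:
            # With a single angle there is no difference to extrapolate;
            # repeat it as a neutral guess.
            return np.append(angles, angles[-1])
        # Append one new angle continuing the last difference linearly.
        return np.append(angles, 2 * angles[-1] - angles[-2])
    return extend(gammas), extend(betas)

# Hypothetical optimal angles found at depth p = 3
gammas_p = [0.21, 0.35, 0.47]
betas_p  = [0.58, 0.44, 0.31]
gammas_init, betas_init = predict_next_depth(gammas_p, betas_p)
print(gammas_init)  # single depth-4 initial guess for the gamma angles
print(betas_init)   # single depth-4 initial guess for the beta angles
```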
