Paper Title
Simulation Paths for Quantum Circuit Simulation with Decision Diagrams
Paper Authors
Paper Abstract
Simulating quantum circuits on classical computers is a notoriously hard, yet increasingly important task for the development and testing of quantum algorithms. In order to alleviate this inherent complexity, efficient data structures and methods such as tensor networks and decision diagrams have been proposed. However, their efficiency heavily depends on the order in which the individual computations are performed. For tensor networks, this order is defined by so-called contraction plans, and a plethora of methods has been developed to determine suitable plans. Simulation based on decision diagrams, on the other hand, has thus far mostly been conducted in a straightforward, i.e., sequential, fashion. In this work, we study the importance of the path that is chosen when simulating quantum circuits using decision diagrams and show, conceptually and experimentally, that choosing the right simulation path can make a vast difference in the efficiency of classical simulations with decision diagrams. We propose an open-source framework (available at github.com/cda-tum/ddsim) that not only allows one to investigate dedicated simulation paths, but also to re-use existing findings, e.g., those obtained from determining contraction plans for tensor networks. Experimental evaluations show that translating strategies from the domain of tensor networks may yield speedups of several factors compared to the state of the art. Furthermore, we design a dedicated simulation path heuristic that improves performance even further, frequently yielding speedups of several orders of magnitude. Finally, we provide an extensive discussion of what can and cannot be learned from tensor networks.
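The following is a minimal, illustrative sketch (not the authors' implementation and not the ddsim API) of what a "simulation path" means: the same toy circuit is simulated once along the straightforward sequential path (gate applied to the state one after another) and once along an alternative path that first combines the gate matrices. Plain NumPy matrices stand in for decision diagrams, the helper names (kron, simulate, gates) are made up for this example, and the number of nonzero entries of each intermediate result is only a crude stand-in for the size of the corresponding decision diagram. The point is merely that different paths yield the same final state but different intermediate objects, not that one particular path is generally better.

import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard gate
X = np.array([[0, 1], [1, 0]])                  # Pauli-X gate
I = np.eye(2)                                   # identity

def kron(*ops):
    # Kronecker product of single-qubit operators into a full-register matrix.
    result = np.array([[1.0]])
    for op in ops:
        result = np.kron(result, op)
    return result

# A toy 3-qubit circuit given as a list of full-register gate matrices.
gates = [kron(H, I, I), kron(I, H, I), kron(I, I, X)]
state = np.zeros(8)
state[0] = 1.0                                  # initial state |000>

def simulate(group_gates_first):
    # Two extreme simulation paths over [state, g1, g2, g3]:
    # sequential: ((g1*state), g2, g3) applied left to right,
    # grouped:    first multiply the gates, then apply the product to the state.
    sizes = []
    if group_gates_first:
        combined = gates[-1]
        for g in reversed(gates[:-1]):
            combined = combined @ g             # gate-gate multiplications first
            sizes.append(np.count_nonzero(combined))
        final = combined @ state
    else:
        final = state
        for g in gates:
            final = g @ final                   # straightforward sequential path
            sizes.append(np.count_nonzero(final))
    return final, sizes

seq, seq_sizes = simulate(group_gates_first=False)
grp, grp_sizes = simulate(group_gates_first=True)
assert np.allclose(seq, grp)                    # same result, different intermediates
print("sequential path, intermediate sizes:", seq_sizes)
print("gate-grouping path, intermediate sizes:", grp_sizes)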