Paper Title
Hardware-accelerated Simulation-based Inference of Stochastic Epidemiology Models for COVID-19
Paper Authors
Paper Abstract
Epidemiology models are central in understanding and controlling large-scale pandemics. Several epidemiology models require simulation-based inference, such as Approximate Bayesian Computation (ABC), to fit their parameters to observations. ABC inference is highly amenable to efficient hardware acceleration. In this work, we develop parallel ABC inference of a stochastic epidemiology model for COVID-19. The statistical inference framework is implemented and compared on an Intel Xeon CPU, an NVIDIA Tesla V100 GPU, and a Graphcore Mk1 IPU, and the results are discussed in the context of their computational architectures. Results show that GPUs are 4x and IPUs are 30x faster than Xeon CPUs. Extensive performance analysis indicates that the difference between IPU and GPU can be attributed to higher communication bandwidth, closeness of memory to compute, and higher compute power in the IPU. The proposed framework scales across 16 IPUs, with scaling overhead not exceeding 8% for the experiments performed. We present an example of our framework in practice, performing inference with the epidemiology model for three countries and giving a brief overview of the results.
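To make the abstract's core idea concrete, below is a minimal, illustrative sketch of ABC rejection sampling applied to a simple stochastic SIR simulator. It is not the paper's implementation: the SIR binomial-chain model, the uniform priors on `beta` and `gamma`, the Euclidean distance summary, and the tolerance `epsilon` are all assumptions chosen for demonstration; the paper's actual model, summary statistics, and parallel CPU/GPU/IPU execution differ.

```python
# Illustrative ABC rejection sampling for a stochastic SIR model.
# NOT the paper's code: model, priors, distance, and tolerance are assumptions.
import numpy as np

def simulate_sir(beta, gamma, n_pop=1000, n_days=60, i0=5, rng=None):
    """Stochastic binomial-chain SIR; returns daily new-infection counts."""
    if rng is None:
        rng = np.random.default_rng()
    s, i = n_pop - i0, i0
    new_cases = []
    for _ in range(n_days):
        p_inf = 1.0 - np.exp(-beta * i / n_pop)          # per-susceptible infection probability
        infections = rng.binomial(s, p_inf)              # new infections this day
        recoveries = rng.binomial(i, 1.0 - np.exp(-gamma))
        s -= infections
        i += infections - recoveries
        new_cases.append(infections)
    return np.array(new_cases)

def abc_rejection(observed, n_samples=10_000, epsilon=200.0, rng=None):
    """Draw (beta, gamma) from assumed priors; keep draws whose simulation is close to the data."""
    if rng is None:
        rng = np.random.default_rng(0)
    accepted = []
    for _ in range(n_samples):
        beta = rng.uniform(0.05, 1.0)                    # assumed uniform prior
        gamma = rng.uniform(0.02, 0.5)                   # assumed uniform prior
        sim = simulate_sir(beta, gamma, rng=rng)
        distance = np.linalg.norm(sim - observed)        # Euclidean distance as summary
        if distance < epsilon:
            accepted.append((beta, gamma))
    return np.array(accepted)

if __name__ == "__main__":
    rng = np.random.default_rng(42)
    observed = simulate_sir(beta=0.4, gamma=0.1, rng=rng)    # synthetic "observations"
    posterior = abc_rejection(observed, rng=rng)
    print(f"accepted {len(posterior)} parameter samples")
```

Each candidate parameter draw requires an independent simulation, which is why ABC inference of this kind parallelizes well across CPU cores, GPUs, and IPUs, as the abstract describes.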