Title
Automatic Creation of High-Bandwidth Memory Architectures from Domain-Specific Languages: The Case of Computational Fluid Dynamics
Authors
Abstract
Numerical simulations can help solve complex problems. Most of these algorithms are massively parallel and thus good candidates for FPGA acceleration thanks to spatial parallelism. Modern FPGA devices can leverage high-bandwidth memory (HBM) technologies, but when applications are memory-bound, designers must craft advanced communication and memory architectures for efficient data movement and on-chip storage. This development process requires hardware design skills that are uncommon among domain-specific experts. In this paper, we propose an automated tool flow from a domain-specific language (DSL) for tensor expressions to generate massively parallel accelerators on HBM-equipped FPGAs. Designers can use this flow to integrate and evaluate various compiler or hardware optimizations. We use computational fluid dynamics (CFD) as a paradigmatic example. Our flow starts from a high-level specification of tensor operations and combines an MLIR-based compiler with an in-house hardware generation flow to produce systems with parallel accelerators and a specialized memory architecture that moves data efficiently, aiming to fully exploit the available CPU-FPGA bandwidth. We simulated applications with millions of elements, achieving up to 103 GFLOPS with one compute unit and custom precision when targeting a Xilinx Alveo U280. Our FPGA implementation is up to 25x more energy-efficient than expert-crafted Intel CPU implementations.
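To make the abstract concrete, the sketch below shows the kind of tensor operation such a DSL flow typically starts from: a 2D Jacobi stencil, a core kernel in many CFD solvers whose neighbour-averaging access pattern is memory-bound and thus benefits from HBM bandwidth. The function and grid here are illustrative assumptions, not the paper's actual DSL syntax.

```python
# Hypothetical illustration: a 2D Jacobi relaxation sweep, representative
# of the memory-bound tensor expressions a CFD-oriented DSL might describe.
# All names and sizes are made up for this example.

def jacobi_step(grid):
    """One Jacobi sweep over the interior points of a 2D grid."""
    n, m = len(grid), len(grid[0])
    out = [row[:] for row in grid]  # boundary values are kept unchanged
    for i in range(1, n - 1):
        for j in range(1, m - 1):
            # Each interior point becomes the average of its 4 neighbours:
            # a streaming, low-reuse access pattern that is bandwidth-bound.
            out[i][j] = 0.25 * (grid[i - 1][j] + grid[i + 1][j]
                                + grid[i][j - 1] + grid[i][j + 1])
    return out

# Tiny usage example: a 3x3 grid with a hot boundary and a cold centre.
g = [[1.0, 1.0, 1.0],
     [1.0, 0.0, 1.0],
     [1.0, 1.0, 1.0]]
print(jacobi_step(g)[1][1])  # centre becomes the neighbour average: 1.0
```

In the toolflow described above, expressions like this would be lowered by the MLIR-based compiler into parallel hardware accelerators rather than executed as Python loops.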