Paper Title
Accelerating Sparse Matrix-Matrix Multiplication with GPU Tensor Cores
Paper Authors
Paper Abstract
Sparse general matrix-matrix multiplication (spGEMM) is an essential component in many scientific and data analytics applications. However, the sparsity pattern of the input matrices and the interaction of their patterns make spGEMM challenging. Modern GPUs include Tensor Core Units (TCUs), which specialize in dense matrix multiplication. Our aim is to re-purpose TCUs for sparse matrices. The key idea of our spGEMM algorithm, tSparse, is to multiply sparse rectangular blocks using the mixed precision mode of TCUs. tSparse partitions the input matrices into tiles and operates only on tiles which contain one or more elements. It creates a task list of the tiles, and performs matrix multiplication of these tiles using TCUs. To the best of our knowledge, this is the first time that TCUs are used in the context of spGEMM. We show that spGEMM, with our tiling approach, benefits from TCUs. Our approach significantly improves the performance of spGEMM in comparison to cuSPARSE, CUSP, RMerge2, Nsparse, AC-SpGEMM and spECK.
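The tiling idea in the abstract can be illustrated with a small sketch. This is not the paper's implementation: tile size (16×16), the dense-tile representation, and the task-list construction here are assumptions for illustration; on the GPU each tile-pair product would be issued to a Tensor Core Unit in mixed precision rather than computed with a host matrix multiply.

```python
import numpy as np

TILE = 16  # assumed tile size, matching a common TCU MMA shape


def to_tiles(M):
    """Partition a matrix into TILE x TILE blocks, keeping only nonempty ones.

    Tiles are keyed by their (tile_row, tile_col) coordinates, mirroring the
    idea that tSparse operates only on tiles that contain one or more elements.
    """
    rows, cols = M.shape
    tiles = {}
    for i in range(0, rows, TILE):
        for j in range(0, cols, TILE):
            blk = M[i:i + TILE, j:j + TILE]
            if np.any(blk):
                tiles[(i // TILE, j // TILE)] = blk
    return tiles


def tiled_spgemm(A, B):
    """Multiply A and B by pairing nonempty tiles of A with nonempty tiles of B.

    The task list pairs tile (i, k) of A with tile (k, j) of B; each entry is
    one small dense product, which is the unit of work a TCU would execute.
    """
    ta, tb = to_tiles(A), to_tiles(B)
    tasks = [((i, k), (k2, j)) for (i, k) in ta for (k2, j) in tb if k == k2]
    C = np.zeros((A.shape[0], B.shape[1]))
    for (i, k), (_, j) in tasks:
        # Stand-in for the mixed-precision TCU tile multiplication.
        C[i * TILE:(i + 1) * TILE, j * TILE:(j + 1) * TILE] += ta[(i, k)] @ tb[(k, j)]
    return C
```

Because empty tile pairs never enter the task list, the work grows with the number of nonempty tile interactions rather than with the full dense dimensions, which is what makes the dense-oriented TCUs usable for sparse inputs.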