Paper Title

FeatGraph: A Flexible and Efficient Backend for Graph Neural Network Systems

Authors

Yuwei Hu, Zihao Ye, Minjie Wang, Jiali Yu, Da Zheng, Mu Li, Zheng Zhang, Zhiru Zhang, Yida Wang

Abstract

Graph neural networks (GNNs) are gaining increasing popularity as a promising approach to machine learning on graphs. Unlike traditional graph workloads where each vertex/edge is associated with a scalar, GNNs attach a feature tensor to each vertex/edge. This additional feature dimension, along with consequently more complex vertex- and edge-wise computations, has enormous implications on locality and parallelism, which existing graph processing systems fail to exploit. This paper proposes FeatGraph to accelerate GNN workloads by co-optimizing graph traversal and feature dimension computation. FeatGraph provides a flexible programming interface to express diverse GNN models by composing coarse-grained sparse templates with fine-grained user-defined functions (UDFs) on each vertex/edge. FeatGraph incorporates optimizations for graph traversal into the sparse templates and allows users to specify optimizations for UDFs with a feature dimension schedule (FDS). FeatGraph speeds up end-to-end GNN training and inference by up to 32x on CPU and 7x on GPU.
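To make the programming model in the abstract concrete, the following is a minimal, hypothetical sketch of the "coarse-grained sparse template composed with fine-grained UDFs" idea. It is not FeatGraph's actual API; all function and parameter names here are illustrative. The template owns the graph traversal (the loop over edges and the per-destination aggregation), while the user supplies a per-edge message UDF and a reduce UDF that operate on feature vectors rather than scalars.

```python
# Illustrative sketch (hypothetical names, not FeatGraph's real API):
# a generalized gather-scatter template where a user-defined function
# (UDF) runs on each edge and results are reduced per destination vertex.

def gather_scatter(edges, vertex_feats, message_udf, reduce_udf):
    """edges: list of (src, dst) pairs.
    vertex_feats: dict mapping vertex id -> feature vector (list of floats).
    message_udf: computes a message from a source feature vector (per edge).
    reduce_udf: combines two messages (e.g. elementwise sum)."""
    out = {}
    for src, dst in edges:          # graph traversal: owned by the template
        msg = message_udf(vertex_feats[src])   # fine-grained UDF per edge
        if dst not in out:
            out[dst] = msg
        else:
            out[dst] = reduce_udf(out[dst], msg)
    return out

# Example: identity message + elementwise-sum reduce, which reduces to
# a plain SpMM with 0/1 edge weights.
edges = [(0, 2), (1, 2), (0, 1)]
feats = {0: [1.0, 2.0], 1: [3.0, 4.0], 2: [0.0, 0.0]}
result = gather_scatter(
    edges, feats,
    message_udf=lambda f: f,
    reduce_udf=lambda a, b: [x + y for x, y in zip(a, b)],
)
# result[2] == [4.0, 6.0]  (sum of features of vertices 0 and 1)
```

In this framing, the feature dimension schedule (FDS) described in the paper would control how the inner per-feature loops inside the UDFs are optimized (e.g. tiling or vectorization), independently of the traversal optimizations baked into the template.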
