Paper Title

Efficient Strategies for Graph Pattern Mining Algorithms on GPUs

Authors

Samuel Ferraz, Vinicius Dias, Carlos H. C. Teixeira, George Teodoro, and Wagner Meira Jr.

Abstract

Graph Pattern Mining (GPM) is an important, rapidly evolving, and computationally demanding area. GPM computation relies on subgraph enumeration, which consists of extracting subgraphs that match a given property from an input graph. Graphics Processing Units (GPUs) have been an effective platform to accelerate applications in many areas. However, the irregularity of subgraph enumeration makes efficient execution on GPUs challenging due to typically uncoalesced memory accesses, divergence, and load imbalance. Unfortunately, these aspects have not been fully addressed in previous work. Thus, this work proposes novel strategies to design and implement subgraph enumeration efficiently on GPUs. We support a depth-first-style search (DFS-wide) that maximizes memory performance while providing enough parallelism to be exploited by the GPU, along with a warp-centric design that minimizes execution divergence and improves utilization of the computing capabilities. We also propose a low-cost load-balancing layer to avoid idleness and redistribute work among thread warps in a GPU. Our strategies have been deployed in a system named DuMato, which provides a simple programming interface to allow efficient implementation of GPM algorithms. Our evaluation has shown that DuMato is often an order of magnitude faster than state-of-the-art GPM systems and can mine larger subgraphs (up to 12 vertices).
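To make the subgraph-enumeration primitive the abstract refers to concrete, here is a minimal, sequential sketch in Python. It uses the generic ESU (Wernicke) DFS-style enumeration scheme, which grows each partial subgraph one vertex at a time; this illustrates the kind of depth-first extension that DFS-wide traversal parallelizes, and is not DuMato's actual GPU implementation. The function name and graph representation are illustrative assumptions.

```python
def enumerate_connected_subgraphs(adj, k):
    """Enumerate all connected induced subgraphs with exactly k vertices.

    adj: dict mapping each integer vertex to the set of its neighbors.
    Returns a list of sorted vertex tuples, each subgraph reported once.
    (Generic ESU-style sketch, not DuMato's GPU code.)
    """
    results = []

    def extend(sub, ext, root):
        if len(sub) == k:
            results.append(tuple(sorted(sub)))
            return
        ext = set(ext)
        while ext:
            w = ext.pop()
            # Exclusive neighborhood of w: neighbors larger than the root,
            # not yet in the subgraph, and not adjacent to it. This rule
            # ensures each subgraph is enumerated exactly once.
            excl = {u for u in adj[w]
                    if u > root and u not in sub
                    and all(u not in adj[s] for s in sub)}
            extend(sub | {w}, ext | excl, root)

    for v in adj:
        extend({v}, {u for u in adj[v] if u > v}, v)
    return results

# Example: the triangle graph has exactly one 3-vertex connected subgraph.
triangle = {0: {1, 2}, 1: {0, 2}, 2: {0, 1}}
print(enumerate_connected_subgraphs(triangle, 3))  # [(0, 1, 2)]
```

On a GPU, each partial subgraph extension becomes an independent unit of work; the irregular neighbor-set sizes in the `excl` step are precisely what causes the divergence and load imbalance the paper's warp-centric design and load-balancing layer address.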
