Paper Title
Efficient, Out-of-Memory Sparse MTTKRP on Massively Parallel Architectures
Paper Authors
Paper Abstract
Tensor decomposition (TD) is an important method for extracting latent information from high-dimensional (multi-modal) sparse data. This study presents a novel framework for accelerating fundamental TD operations on massively parallel GPU architectures. In contrast to prior work, the proposed Blocked Linearized Coordinate (BLCO) format enables efficient out-of-memory computation of tensor algorithms using a unified implementation that works on a single tensor copy. Our adaptive blocking and linearization strategies not only meet the resource constraints of GPU devices, but also accelerate data indexing, eliminate control-flow and memory-access irregularities, and reduce kernel launch overhead. To address the substantial synchronization cost on GPUs, we introduce an opportunistic conflict-resolution algorithm, in which threads collaborate instead of contending for memory access to discover and resolve their conflicting updates on the fly, without keeping any auxiliary information or storing non-zero elements in specific mode orientations. As a result, our framework delivers superior in-memory performance compared to the prior state of the art, and is the only framework capable of processing out-of-memory tensors. On the latest Intel and NVIDIA GPUs, BLCO achieves a 2.12-2.6X geometric-mean speedup (up to 33.35X) over the state-of-the-art mixed-mode compressed sparse fiber (MM-CSF) format on a range of real-world sparse tensors.
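To make the two core ideas in the abstract concrete, the following is a minimal illustrative sketch (not the paper's GPU implementation): a mode-0 MTTKRP over a sparse 3-mode tensor whose nonzero coordinates are bit-packed into single integer keys, in the spirit of linearized-coordinate formats such as BLCO. All dimensions, data, and variable names here are invented for illustration.

```python
import numpy as np

# Hypothetical example tensor: 4 x 5 x 6, rank-3 factors.
I, J, K, R = 4, 5, 6, 3

# A few nonzeros in COO form: rows of (i, j, k) plus a value per row.
coords = np.array([(0, 1, 2), (1, 4, 0), (3, 2, 5), (0, 1, 3)])
vals = np.array([1.0, 2.0, -1.0, 0.5])

# Linearization: pack (i, j, k) into one integer key using fixed bit
# widths per mode, so a nonzero is identified by a single index.
jb = max(1, (J - 1).bit_length())   # bits needed for mode-1 indices
kb = max(1, (K - 1).bit_length())   # bits needed for mode-2 indices
keys = (coords[:, 0] << (jb + kb)) | (coords[:, 1] << kb) | coords[:, 2]

# De-linearization: recover per-mode indices with shifts and masks.
i_idx = keys >> (jb + kb)
j_idx = (keys >> kb) & ((1 << jb) - 1)
k_idx = keys & ((1 << kb) - 1)

rng = np.random.default_rng(0)
B = rng.standard_normal((J, R))     # mode-1 factor matrix
C = rng.standard_normal((K, R))     # mode-2 factor matrix

# Mode-0 MTTKRP: M[i, :] += val * (B[j, :] * C[k, :]) per nonzero.
# np.add.at accumulates correctly even when the same row i repeats,
# which is the (sequential) analogue of the conflicting updates that
# the paper's conflict-resolution algorithm handles on the GPU.
M = np.zeros((I, R))
np.add.at(M, i_idx, vals[:, None] * B[j_idx] * C[k_idx])
```

On a GPU, those repeated-row accumulations would normally require atomic operations; the abstract's contribution is resolving such conflicts cooperatively across threads rather than serializing them.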