Paper Title


Flexible Performant GEMM Kernels on GPUs

Authors

Thomas Faingnaert, Tim Besard, Bjorn De Sutter

Abstract


General Matrix Multiplication or GEMM kernels take centre place in high performance computing and machine learning. Recent NVIDIA GPUs include GEMM accelerators, such as NVIDIA's Tensor Cores. Their exploitation is hampered by the two-language problem: it requires either low-level programming, which implies low programmer productivity, or using libraries that only offer a limited set of components. Because rephrasing algorithms in terms of established components often introduces overhead, the libraries' lack of flexibility limits the freedom to explore new algorithms. Researchers using GEMMs hence cannot enjoy programming productivity, high performance, and research flexibility at once. In this paper we solve this problem. We present three sets of abstractions and interfaces to program GEMMs within the scientific Julia programming language. The interfaces and abstractions are co-designed for researchers' needs and Julia's features to achieve sufficient separation of concerns and flexibility to easily extend basic GEMMs in many different ways without paying a performance price. Comparing our GEMMs to the state-of-the-art libraries cuBLAS and CUTLASS, we demonstrate that our performance is in the same ballpark as the libraries, and in some cases even exceeds it, without having to write a single line of code in CUDA C++ or assembly, and without facing flexibility limitations.
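For readers unfamiliar with the operation the abstract builds on: GEMM computes C ← αAB + βC for matrices A, B, C and scalars α, β. A minimal NumPy sketch of the reference semantics follows; it illustrates only what the kernels in the paper compute, not the paper's Julia implementation or its GPU abstractions.

```python
import numpy as np

def gemm(alpha, A, B, beta, C):
    """Reference GEMM: return alpha * (A @ B) + beta * C."""
    return alpha * (A @ B) + beta * C

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])
B = np.array([[5.0, 6.0],
              [7.0, 8.0]])
C = np.ones((2, 2))

# C <- 2*A*B + 1*C
result = gemm(2.0, A, B, 1.0, C)
```

Optimized libraries (cuBLAS, CUTLASS) and the paper's Julia kernels implement exactly this contract, but tile and pipeline the computation for GPU memory hierarchies and Tensor Cores.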
