Paper Title
PERMDNN: Efficient Compressed DNN Architecture with Permuted Diagonal Matrices
Paper Authors
Paper Abstract
Deep neural networks (DNNs) have emerged as the most important and popular artificial intelligence (AI) technique. The growth of model size poses a key energy efficiency challenge for the underlying computing platform. Thus, model compression becomes a crucial problem. However, current approaches are limited by various drawbacks. Specifically, the network sparsification approach suffers from irregularity, a heuristic nature, and large indexing overhead. On the other hand, the recent structured matrix-based approach (i.e., CirCNN) is limited by its relatively complex arithmetic computation (i.e., FFT), less flexible compression ratio, and its inability to fully utilize input sparsity. To address these drawbacks, this paper proposes PermDNN, a novel approach to generate and execute hardware-friendly structured sparse DNN models using permuted diagonal matrices. Compared with the unstructured sparsification approach, PermDNN eliminates the drawbacks of indexing overhead, heuristic compression effects, and time-consuming retraining. Compared with the circulant structure-imposing approach, PermDNN enjoys the benefits of a greater reduction in computational complexity, flexible compression ratios, simple arithmetic computation, and full utilization of input sparsity. We propose the PermDNN architecture, a multi-processing-element (PE), fully-connected (FC) layer-targeted computing engine. The entire architecture is highly scalable and flexible, and hence can support the needs of different applications with different model configurations. We implement a 32-PE design in 28nm CMOS technology. Compared with EIE, PermDNN achieves 3.3x~4.8x higher throughput, 5.9x~8.5x better area efficiency, and 2.8x~4.0x better energy efficiency on different workloads. Compared with CirCNN, PermDNN achieves 11.51x higher throughput and 3.89x better energy efficiency.
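To make the permuted-diagonal idea in the abstract concrete, the following is a minimal NumPy sketch (not taken from the paper) of a forward matrix-vector product for an FC layer whose weight matrix is partitioned into p x p permuted diagonal blocks, each stored as only p non-zero values plus one permutation offset. The function names, storage layout, and example sizes are illustrative assumptions; the paper's actual training procedure and hardware dataflow are more involved.

```python
import numpy as np

def permdiag_block_matvec(weights, offset, x):
    # Multiply one p x p permuted diagonal block by a length-p vector.
    # The block is stored as its p non-zero values plus a single
    # permutation offset: entry (i, (i + offset) % p) holds weights[i].
    p = weights.shape[0]
    cols = (np.arange(p) + offset) % p
    return weights * x[cols]

def permdnn_fc_matvec(block_weights, block_offsets, x, p):
    # y = W @ x, where W consists of p x p permuted diagonal blocks.
    #   block_weights: (row_blocks, col_blocks, p) non-zero values
    #   block_offsets: (row_blocks, col_blocks) integer permutation offsets
    #   x:             input vector of length col_blocks * p
    rb, cb, _ = block_weights.shape
    y = np.zeros(rb * p)
    for i in range(rb):
        for j in range(cb):
            xj = x[j * p:(j + 1) * p]
            y[i * p:(i + 1) * p] += permdiag_block_matvec(
                block_weights[i, j], block_offsets[i, j], xj)
    return y

# Example: an 8x16 FC weight matrix stored as 2x4 permuted diagonal
# blocks with p = 4, keeping 32 of 128 weights (4x compression),
# with no per-weight index storage needed.
p = 4
rng = np.random.default_rng(0)
block_weights = rng.standard_normal((2, 4, p))
block_offsets = rng.integers(0, p, size=(2, 4))
x = rng.standard_normal(4 * p)
y = permdnn_fc_matvec(block_weights, block_offsets, x, p)
print(y.shape)  # (8,)
```

Note how each block needs only one offset instead of per-element indices, which is the source of the indexing-overhead savings claimed in the abstract; zero entries of x can also be skipped per block, which is how input sparsity can be exploited.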