Paper Title
PRUNIX: Non-Ideality Aware Convolutional Neural Network Pruning for Memristive Accelerators
Paper Authors
Abstract
In this work, PRUNIX, a framework for training and pruning convolutional neural networks, is proposed for deployment on memristor-crossbar-based accelerators. PRUNIX takes into account the numerous non-ideal effects of memristor crossbars, including weight quantization, state drift, aging, and stuck-at faults. PRUNIX utilises a novel Group Sawtooth Regularization intended to improve non-ideality tolerance as well as sparsity, and a novel Adaptive Pruning Algorithm (APA) intended to minimise accuracy loss by considering the sensitivity of different layers of a CNN to pruning. We compare our regularization and pruning methods with other standard approaches on multiple CNN architectures, and observe an improvement of 13% in test accuracy when quantization and other non-ideal effects are accounted for, at an overall sparsity of 85%, which is similar to other methods.
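To make the idea of a sawtooth-style regularizer concrete, the sketch below penalizes each weight by its distance to the nearest representable conductance level, so that minimizing the penalty pulls weights toward values a quantized memristor cell can store. This is an illustrative assumption only: the function name, level count, and uniform level spacing are hypothetical, and PRUNIX's actual Group Sawtooth Regularization (e.g. any group-sparsity component) may differ.

```python
import numpy as np

def sawtooth_penalty(weights, num_levels=16, w_max=1.0):
    """Illustrative sawtooth-shaped penalty (a sketch, not PRUNIX's
    exact formulation): the cost rises linearly with each weight's
    distance to its nearest of `num_levels` uniformly spaced levels
    in [-w_max, w_max], and drops to zero at every level -- tracing
    a sawtooth over the weight axis."""
    step = 2.0 * w_max / (num_levels - 1)            # spacing between adjacent levels
    nearest = np.round((weights + w_max) / step) * step - w_max
    return float(np.sum(np.abs(weights - nearest)))  # zero iff all weights sit on levels
```

In training, such a term would typically be added to the task loss with a small coefficient (e.g. `loss = ce_loss + lam * sawtooth_penalty(W)`), trading a little task accuracy for weights that survive quantization onto the crossbar with less drift in behaviour.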