Paper Title
SlimConv: Reducing Channel Redundancy in Convolutional Neural Networks by Weights Flipping
Paper Authors
Paper Abstract
The channel redundancy in feature maps of convolutional neural networks (CNNs) results in large consumption of memory and computational resources. In this work, we design a novel Slim Convolution (SlimConv) module to boost the performance of CNNs by reducing channel redundancy. Our SlimConv consists of three main steps: Reconstruct, Transform, and Fuse, through which the features are split and reorganized in a more efficient way, such that the learned weights can be compressed effectively. In particular, the core of our model is a weight flipping operation which can largely improve feature diversity, contributing crucially to performance. Our SlimConv is a plug-and-play architectural unit which can be used to directly replace convolutional layers in CNNs. We validate the effectiveness of SlimConv by conducting comprehensive experiments on the ImageNet, MS COCO2014, Pascal VOC2012 segmentation, and Pascal VOC2007 detection datasets. The experiments show that SlimConv-equipped models consistently achieve better performance with less memory and computation than their non-equipped counterparts. For example, ResNet-101 fitted with SlimConv achieves 77.84% top-1 classification accuracy on ImageNet with 4.87 GFLOPs and 27.96M parameters, almost 0.5% better than the baseline while reducing computation by about 3 GFLOPs and parameters by 38%.
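The abstract only names the three steps, so the following is a minimal PyTorch sketch of how a SlimConv-style block might be organized. The module name `SlimConvSketch`, the squeeze-and-excite style gate, and the exact path widths are illustrative assumptions made for this sketch, not the authors' released implementation; what it aims to show is the flipped channel-weight copy (Reconstruct), two lightweight convolutional paths (Transform), and the final concatenation (Fuse).

```python
import torch
import torch.nn as nn


class SlimConvSketch(nn.Module):
    """Hypothetical sketch of a SlimConv-style block.

    Assumes `channels` is divisible by 4. Output has 3/4 of the input
    channels, reflecting the channel compression described in the abstract.
    """

    def __init__(self, channels: int):
        super().__init__()
        # Channel gate producing one weight per channel (assumption:
        # a squeeze-and-excite style attention is used for the weighting).
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // 4, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // 4, channels, kernel_size=1),
            nn.Sigmoid(),
        )
        half = channels // 2
        # Transform: one path keeps a plain 3x3 conv, the other is
        # squeezed further with a 1x1 conv before its 3x3 conv.
        self.path_a = nn.Conv2d(half, half, kernel_size=3, padding=1, bias=False)
        self.path_b = nn.Sequential(
            nn.Conv2d(half, half // 2, kernel_size=1, bias=False),
            nn.Conv2d(half // 2, half // 2, kernel_size=3, padding=1, bias=False),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        w = self.gate(x)                    # per-channel weights, shape (N, C, 1, 1)
        w_flip = torch.flip(w, dims=[1])    # the core weight-flipping operation
        a = x * w                           # view weighted by the gate
        b = x * w_flip                      # view weighted by the flipped gate
        # Reconstruct: fold each view to half the channels by summing halves,
        # so the two paths see differently weighted, compressed features.
        c = a.size(1) // 2
        a = a[:, :c] + a[:, c:]
        b = b[:, :c] + b[:, c:]
        # Transform each half, then Fuse by concatenation along channels.
        return torch.cat([self.path_a(a), self.path_b(b)], dim=1)


if __name__ == "__main__":
    x = torch.randn(1, 64, 32, 32)
    y = SlimConvSketch(64)(x)
    print(y.shape)  # torch.Size([1, 48, 32, 32]) -> 3/4 of the input channels
```

Note that under these assumptions the block emits fewer channels than it consumes, which is where the parameter and FLOP savings would come from; using it as a drop-in replacement therefore requires adjusting the channel widths of subsequent layers accordingly.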