Paper Title
Revisiting Random Channel Pruning for Neural Network Compression
Paper Authors
Paper Abstract
Channel (or 3D filter) pruning serves as an effective way to accelerate the inference of neural networks. There has been a flurry of algorithms that try to solve this practical problem, each claimed to be effective in some respect. Yet, a benchmark for directly comparing those algorithms is lacking, mainly due to the complexity of the algorithms and their custom settings, such as particular network configurations or training procedures. A fair benchmark is important for the further development of channel pruning. Meanwhile, recent investigations reveal that the channel configurations discovered by pruning algorithms are at least as important as the pre-trained weights. This gives channel pruning a new role, namely searching for the optimal channel configuration. In this paper, we try to determine the channel configuration of the pruned models by random search. The proposed approach provides a new way to compare different methods, namely how well they behave compared with random pruning. We show that this simple strategy works quite well compared with other channel pruning methods. We also show that, under this setting, there is surprisingly no clear winner among different channel importance evaluation methods, which may tilt research efforts toward advanced channel configuration search methods.
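The core idea of the abstract, determining a pruned model's channel configuration by random search under a resource budget, can be sketched as follows. This is a minimal illustration, not the paper's actual method: the layer widths, the FLOPs proxy, and the sampling range are all hypothetical placeholders; in practice each candidate configuration would be fine-tuned and evaluated on a validation set.

```python
import random

# Hypothetical per-layer channel counts of an unpruned network (placeholder values).
BASE_CHANNELS = [64, 128, 256, 512]

def sample_configuration(base, rng, min_keep=0.1):
    """Sample a random channel configuration: for each layer, keep a
    uniformly random fraction of the original channels (at least one)."""
    return [max(1, int(c * rng.uniform(min_keep, 1.0))) for c in base]

def flops_proxy(channels):
    """Toy cost proxy: sum of products of consecutive layer widths,
    a stand-in for the C_in * C_out term in convolution cost."""
    return sum(a * b for a, b in zip(channels, channels[1:]))

def random_search(base, budget_ratio=0.5, n_samples=100, seed=0):
    """Draw n_samples random configurations and keep those within the
    cost budget; the survivors are candidates for short fine-tuning
    and validation, after which the best one is selected."""
    rng = random.Random(seed)
    budget = budget_ratio * flops_proxy(base)
    return [cfg for cfg in
            (sample_configuration(base, rng) for _ in range(n_samples))
            if flops_proxy(cfg) <= budget]

candidates = random_search(BASE_CHANNELS)
```

Because the search treats channel importance as irrelevant and only samples layer widths, comparing a pruning algorithm against this baseline isolates how much of its gain comes from the discovered configuration rather than from the weight-selection criterion.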