Paper Title
Is Each Layer Non-trivial in CNN?
Paper Authors
Paper Abstract
Convolutional neural network (CNN) models have achieved great success in many fields. With the advent of ResNet, the networks used in practice are getting deeper and wider. However, is each layer in a network non-trivial? To answer this question, we trained a network on the training set, replaced some of its convolution kernels with zeros, and tested the resulting models on the test set. We compared the experimental results with the baseline and showed that the ablated models can reach similar or even identical performance. Although convolution kernels are the core of a network, we demonstrate that some of them are trivial and regular in ResNet.
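
The ablation the abstract describes (train, zero out a layer's convolution kernels, re-evaluate) is straightforward to reproduce. Below is a minimal sketch assuming a PyTorch/torchvision setup; the pretrained ResNet-18, the layer name `layer3.0.conv2`, and the `test_loader` DataLoader are illustrative assumptions, not the paper's exact configuration.

```python
import torch
import torchvision.models as models


def zero_conv_kernels(model, layer_name):
    """Replace the convolution kernels of the named layer with zeros."""
    module = dict(model.named_modules())[layer_name]
    assert isinstance(module, torch.nn.Conv2d)
    with torch.no_grad():
        module.weight.zero_()  # kernel weights -> 0; bias (if any) untouched


@torch.no_grad()
def evaluate(model, test_loader, device="cpu"):
    """Top-1 accuracy of `model` on `test_loader`."""
    model.eval().to(device)
    correct = total = 0
    for images, labels in test_loader:
        preds = model(images.to(device)).argmax(dim=1)
        correct += (preds == labels.to(device)).sum().item()
        total += labels.numel()
    return correct / total


# Ablate one residual-branch convolution in a trained ResNet-18 and
# compare it against the unablated baseline on the same test set.
model = models.resnet18(weights="IMAGENET1K_V1")  # any trained network works
# baseline_acc = evaluate(model, test_loader)     # test_loader is hypothetical
zero_conv_kernels(model, "layer3.0.conv2")        # illustrative layer choice
# ablated_acc = evaluate(model, test_loader)
```

If the ablated accuracy matches the baseline, the zeroed kernels are "trivial" in the abstract's sense; repeating the loop over every convolution layer maps out which layers the network actually depends on.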