Paper Title

MEAL V2: Boosting Vanilla ResNet-50 to 80%+ Top-1 Accuracy on ImageNet without Tricks

Paper Authors

Zhiqiang Shen, Marios Savvides

Paper Abstract

We introduce a simple yet effective distillation framework that is able to boost the vanilla ResNet-50 to 80%+ Top-1 accuracy on ImageNet without tricks. We construct such a framework by analyzing the problems in the existing classification system and simplifying the base method, ensemble knowledge distillation via discriminators, by: (1) adopting the similarity loss and discriminator only on the final outputs and (2) using the average of softmax probabilities from all teacher ensembles as the stronger supervision. Intriguingly, three novel perspectives are presented for distillation: (1) weight decay can be weakened or even completely removed since the soft label also has a regularization effect; (2) using a good initialization for students is critical; and (3) the one-hot/hard label is not necessary in the distillation process if the weights are well initialized. We show that such a straightforward framework can achieve state-of-the-art results without involving any commonly-used techniques, such as architecture modification; outside training data beyond ImageNet; autoaug/randaug; cosine learning rate; mixup/cutmix training; label smoothing; etc. Our method obtains 80.67% top-1 accuracy on ImageNet using a single crop size of 224x224 with vanilla ResNet-50, outperforming previous state-of-the-art results by a significant margin under the same network structure. Our result can be regarded as a strong baseline using knowledge distillation, and to the best of our knowledge, this is also the first method that is able to boost vanilla ResNet-50 to surpass 80% on ImageNet without architecture modification or additional training data. On the smaller ResNet-18, our distillation framework consistently improves accuracy from 69.76% to 73.19%, which shows tremendous practical value in real-world applications. Our code and models are available at: https://github.com/szq0214/MEAL-V2.
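The abstract describes a compact recipe: average the softmax probabilities of all teachers to form the soft label, train a well-initialized student with a similarity (KL) loss plus a small discriminator applied only to the final outputs, remove weight decay, and drop the one-hot label entirely. Below is a minimal PyTorch sketch of one student update under these assumptions; the teacher choices, discriminator width, and 0.1 adversarial weight are illustrative placeholders rather than the authors' settings, and the discriminator's own update is omitted for brevity (see the linked repository for the official implementation).

```python
# Minimal sketch of the distillation objective described in the abstract.
# Not the authors' official code; teacher models, discriminator size, and the
# adversarial loss weight are assumptions for illustration only.
import torch
import torch.nn as nn
import torch.nn.functional as F
import torchvision.models as models

# Student: vanilla ResNet-50, starting from pretrained weights
# (the paper stresses that a good initialization is critical).
student = models.resnet50(pretrained=True)

# Teachers: any ensemble of pretrained classifiers; two ResNets used here
# purely as placeholders.
teachers = [models.resnet101(pretrained=True), models.resnet152(pretrained=True)]
for t in teachers:
    t.eval()

# Discriminator operating only on the final softmax outputs (1000-d for ImageNet).
discriminator = nn.Sequential(
    nn.Linear(1000, 128), nn.ReLU(inplace=True), nn.Linear(128, 1)
)

# Weight decay removed, following the observation that the soft label
# already acts as a regularizer.
optimizer = torch.optim.SGD(student.parameters(), lr=0.01,
                            momentum=0.9, weight_decay=0.0)

def distill_step(images):
    """One student update: KL similarity loss + adversarial loss, no hard labels."""
    with torch.no_grad():
        # Average of softmax probabilities over all teachers as the supervision.
        soft_label = torch.stack(
            [F.softmax(t(images), dim=1) for t in teachers]
        ).mean(dim=0)

    student_logits = student(images)
    student_prob = F.softmax(student_logits, dim=1)

    # Similarity loss on the final outputs: KL divergence to the averaged soft label.
    kl_loss = F.kl_div(F.log_softmax(student_logits, dim=1), soft_label,
                       reduction="batchmean")

    # The discriminator tries to tell teacher outputs from student outputs;
    # here the student is trained to fool it (its own update is not shown).
    adv_loss = F.binary_cross_entropy_with_logits(
        discriminator(student_prob), torch.ones(images.size(0), 1)
    )

    loss = kl_loss + 0.1 * adv_loss  # 0.1 is an assumed weighting
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```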
