Paper Title


ProSelfLC: Progressive Self Label Correction for Training Robust Deep Neural Networks

Authors

Wang, Xinshao, Hua, Yang, Kodirov, Elyor, Clifton, David A., Robertson, Neil M.

Abstract


To train robust deep neural networks (DNNs), we systematically study several target modification approaches, which include output regularisation, and self and non-self label correction (LC). Two key issues are discovered: (1) Self LC is the most appealing, as it exploits the learner's own knowledge and requires no extra models. However, how to automatically decide the trust degree of a learner as training proceeds is not well answered in the literature. (2) Some methods penalise while others reward low-entropy predictions, prompting us to ask which is better. To resolve the first issue, taking two well-accepted propositions--deep neural networks learn meaningful patterns before fitting noise [3], and the minimum entropy regularisation principle [10]--we propose a novel end-to-end method named ProSelfLC, which is designed according to learning time and entropy. Specifically, given a data point, we progressively increase trust in its predicted label distribution over its annotated one if the model has been trained for enough time and the prediction is of low entropy (high confidence). For the second issue, according to ProSelfLC, we empirically prove that it is better to redefine a meaningful low-entropy status and optimise the learner toward it. This serves as a defence of entropy minimisation. We demonstrate the effectiveness of ProSelfLC through extensive experiments in both clean and noisy settings. The source code is available at https://github.com/XinshaoAmosWang/ProSelfLC-CVPR2021. Keywords: entropy minimisation, maximum entropy, confidence penalty, self knowledge distillation, label correction, label noise, semi-supervised learning, output regularisation
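The core target-modification idea in the abstract (trusting the model's own prediction more as training time grows and prediction entropy drops) can be sketched as follows. This is an illustrative minimal sketch, not the paper's exact formulation: the function name `proselflc_target`, the linear time ramp, and the normalised-entropy confidence term are assumptions; the paper defines its own trust schedule.

```python
import numpy as np

def proselflc_target(annotated, predicted, epoch, total_epochs, num_classes):
    """Illustrative sketch of ProSelfLC-style label correction.

    Combines the annotated (e.g. one-hot) label distribution with the
    model's predicted distribution, trusting the prediction more when
    (a) training has progressed and (b) the prediction has low entropy
    (i.e. high confidence).
    """
    # Global trust: grows with training time (a simple linear ramp here;
    # the paper uses its own schedule).
    time_trust = epoch / total_epochs
    # Local trust: 1 minus the normalised entropy of the prediction,
    # so a confident (low-entropy) prediction earns more trust.
    entropy = -np.sum(predicted * np.log(predicted + 1e-12))
    confidence = 1.0 - entropy / np.log(num_classes)
    # Combined trust in the predicted distribution.
    g = time_trust * confidence
    # Convex combination: still a valid probability distribution.
    return (1.0 - g) * annotated + g * predicted
```

With a uniform (maximum-entropy) prediction the trust `g` collapses to nearly zero and the target stays at the annotation; with a confident prediction late in training, the target shifts toward the model's own distribution.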
