Paper Title

RUSH: Robust Contrastive Learning via Randomized Smoothing

Paper Authors

Pang, Yijiang; Liu, Boyang; Zhou, Jiayu

Paper Abstract

Recently, adversarial training has been incorporated into self-supervised contrastive pre-training to augment label efficiency with exciting adversarial robustness. However, this robustness comes at the cost of expensive adversarial training. In this paper, we show a surprising fact: contrastive pre-training has an interesting yet implicit connection with robustness, and this natural robustness in the pre-trained representation enables us to design a powerful algorithm against adversarial attacks, RUSH, which combines standard contrastive pre-training with randomized smoothing. RUSH boosts both standard accuracy and robust accuracy, and significantly reduces training cost compared with adversarial training. Extensive empirical studies show that RUSH outperforms robust classifiers obtained via adversarial training by a significant margin on common benchmarks (CIFAR-10, CIFAR-100, and STL-10) under first-order attacks. In particular, under a PGD attack with $\ell_{\infty}$-norm perturbations of size 8/255 on CIFAR-10, our model using ResNet-18 as the backbone achieves 77.8% robust accuracy and 87.9% standard accuracy. Compared with the state of the art, this is an improvement of over 15% in robust accuracy and a slight improvement in standard accuracy.
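For readers unfamiliar with the second ingredient named in the abstract, below is a minimal sketch of the standard randomized-smoothing prediction rule (Cohen et al., 2019): classify many Gaussian-noised copies of the input and take a majority vote over the predicted labels. This illustrates the general technique only, not the paper's actual implementation; the function name and the `sigma` and `n` values are illustrative assumptions.

```python
import torch

@torch.no_grad()
def smoothed_predict(model, x, sigma=0.25, n=100, num_classes=10):
    """Majority-vote prediction over Gaussian-noised copies of one input.

    `sigma` and `n` are illustrative placeholders, not the paper's settings.
    """
    # Replicate the single image x of shape (C, H, W) into a batch (n, C, H, W).
    batch = x.unsqueeze(0).repeat(n, 1, 1, 1)
    # Add i.i.d. Gaussian noise to each copy.
    batch = batch + sigma * torch.randn_like(batch)
    # Hard label for each noisy copy from the (pre-trained) classifier.
    preds = model(batch).argmax(dim=1)
    # Return the most frequent class among the n noisy predictions.
    counts = torch.bincount(preds, minlength=num_classes)
    return counts.argmax().item()
```

The smoothed classifier's robustness comes from this voting: a small adversarial perturbation of `x` barely changes the distribution of noisy inputs, so the majority vote tends to be stable.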
