Paper Title
Contrastive Learning with Adversarial Examples
Paper Authors
Abstract
Contrastive learning (CL) is a popular technique for self-supervised learning (SSL) of visual representations. It uses pairs of augmentations of unlabeled training examples to define a classification task for pretext learning of a deep embedding. Despite extensive work on augmentation procedures, prior work does not address the selection of challenging negative pairs, as images within a sampled batch are treated independently. This paper addresses that problem by introducing a new family of adversarial examples for contrastive learning and using these examples to define a new adversarial training algorithm for SSL, denoted CLAE. Compared to standard CL, the use of adversarial examples creates more challenging positive pairs, and adversarial training produces harder negative pairs by accounting for all images in a batch during the optimization. CLAE is compatible with many CL methods in the literature. Experiments show that it improves the performance of several existing CL baselines on multiple datasets.
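To make the contrastive setup concrete, below is a minimal numpy sketch of the standard InfoNCE-style objective the abstract refers to: each image's two augmentations form a positive pair, and the other images in the batch supply the negatives. The function name, the temperature default, and the numpy implementation are illustrative assumptions; the abstract does not give CLAE's exact formulation, which additionally perturbs inputs adversarially to maximize this loss.

```python
import numpy as np

def info_nce_loss(z_a, z_b, temperature=0.5):
    """Contrastive (InfoNCE-style) loss over a batch.

    z_a, z_b: (N, d) embeddings of two augmentations of the same N images.
    The diagonal of the similarity matrix holds positive pairs; every
    off-diagonal entry in a row is a negative drawn from the batch.
    """
    # Normalize so dot products are cosine similarities.
    z_a = z_a / np.linalg.norm(z_a, axis=1, keepdims=True)
    z_b = z_b / np.linalg.norm(z_b, axis=1, keepdims=True)
    sim = z_a @ z_b.T / temperature  # (N, N) scaled similarities
    # Cross-entropy of each row against its own index (log-softmax on diag).
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))
```

In CLAE's framing (per the abstract), an adversarial perturbation of the input that *increases* this loss yields a harder positive pair, and because the softmax denominator couples every image in the batch, the perturbation also depends on all the negatives rather than treating batch images independently.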