Paper title
Learning Representations Robust to Group Shifts and Adversarial Examples
Paper authors
Paper abstract
Despite the high performance achieved by deep neural networks on various tasks, extensive studies have demonstrated that small perturbations of the input can cause model predictions to fail. This weakness of deep neural networks has motivated a number of methods for improving model robustness, including adversarial training and distributionally robust optimization. Although both methods are geared towards learning robust models, they have essentially different motivations: adversarial training attempts to train deep neural networks to resist input perturbations, while distributionally robust optimization aims to improve model performance on the most difficult "uncertain distributions". In this work, we propose an algorithm that combines adversarial training and group distributionally robust optimization to improve robust representation learning. Experiments on three image benchmark datasets show that the proposed method achieves superior results on robustness metrics without sacrificing much on the standard measures.
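The abstract does not specify how the two objectives are combined. A minimal sketch of one natural combination, on a toy linear model with entirely hypothetical data, hyperparameters, and group structure (not the authors' actual algorithm): an FGSM-style inner step produces adversarial inputs, group-level losses on those inputs drive an exponentiated-gradient update of the group weights (as in group DRO), and the model takes a step on the weighted adversarial loss.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: two groups of different sizes (hypothetical setup).
n0, n1, d = 200, 40, 2
X = rng.normal(size=(n0 + n1, d))
y = np.sign(X[:, 0])                      # label = sign of first feature
g = np.array([0] * n0 + [1] * n1)         # group membership

def per_sample_loss(w, X, y):
    # Logistic loss with labels in {-1, +1}.
    return np.log1p(np.exp(-y * (X @ w)))

def input_grad(w, X, y):
    # d loss_i / d x_i for the logistic loss above.
    s = -y / (1.0 + np.exp(y * (X @ w)))
    return s[:, None] * w[None, :]

eps, lr, eta_q = 0.05, 0.5, 0.1           # illustrative hyperparameters
w = np.zeros(d)
q = np.array([0.5, 0.5])                  # group weights (DRO adversary)

for step in range(200):
    # Inner maximization: FGSM-style perturbation along the loss gradient sign.
    X_adv = X + eps * np.sign(input_grad(w, X, y))
    losses = per_sample_loss(w, X_adv, y)
    group_loss = np.array([losses[g == k].mean() for k in (0, 1)])
    # Group DRO: exponentiated-gradient update upweights the worst group.
    q = q * np.exp(eta_q * group_loss)
    q /= q.sum()
    # Outer minimization: gradient step on the q-weighted adversarial loss.
    grad = np.zeros(d)
    for k in (0, 1):
        mask = g == k
        s = -y[mask] / (1.0 + np.exp(y[mask] * (X_adv[mask] @ w)))
        grad += q[k] * (X_adv[mask] * s[:, None]).mean(axis=0)
    w -= lr * grad

# Worst-group loss under a fresh adversarial perturbation.
X_adv = X + eps * np.sign(input_grad(w, X, y))
worst = max(per_sample_loss(w, X_adv, y)[g == k].mean() for k in (0, 1))
```

The two robustness notions enter at different places: the perturbation hardens each example, while the group weights `q` shift training effort toward the group that remains hardest, so the model is pushed to do well on adversarial examples from every group rather than only on average.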