Paper Title
Fundamental Tradeoffs between Invariance and Sensitivity to Adversarial Perturbations
Paper Authors
Paper Abstract
Adversarial examples are malicious inputs crafted to induce misclassification. Commonly studied sensitivity-based adversarial examples introduce semantically-small changes to an input that result in a different model prediction. This paper studies a complementary failure mode, invariance-based adversarial examples, which introduce minimal semantic changes that modify an input's true label yet preserve the model's prediction. We demonstrate fundamental tradeoffs between these two types of adversarial examples. We show that defenses against sensitivity-based attacks actively harm a model's accuracy on invariance-based attacks, and that new approaches are needed to resist both attack types. In particular, we break state-of-the-art adversarially-trained and certifiably-robust models by generating small perturbations that the models are (provably) robust to, yet that change an input's class according to human labelers. Finally, we formally show that the existence of excessively invariant classifiers arises from the presence of overly-robust predictive features in standard datasets.