Paper Title
Calibrated Surrogate Losses for Adversarially Robust Classification
Paper Authors
Paper Abstract
Adversarially robust classification seeks a classifier that is insensitive to adversarial perturbations of test patterns. This problem is often formulated via a minimax objective, where the target loss is the worst-case value of the 0-1 loss subject to a bound on the size of perturbation. Recent work has proposed convex surrogates for the adversarial 0-1 loss, in an effort to make optimization more tractable. A primary question is that of consistency, that is, whether minimization of the surrogate risk implies minimization of the adversarial 0-1 risk. In this work, we analyze this question through the lens of calibration, which is a pointwise notion of consistency. We show that no convex surrogate loss is calibrated with respect to the adversarial 0-1 loss when restricted to the class of linear models. We further introduce a class of nonconvex losses and offer necessary and sufficient conditions for losses in this class to be calibrated. We also show that if the underlying distribution satisfies Massart's noise condition, convex losses can also be calibrated in the adversarial setting.
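As a rough formalization of the objective described in the abstract (notation is ours, not necessarily the paper's; assuming a linear predictor $f_w(x) = \langle w, x \rangle$, labels $y \in \{-1, +1\}$, and perturbation budget $\epsilon$), the adversarial 0-1 loss and the corresponding minimax-style risk can be written as

\[
\ell_\epsilon(w; x, y) = \sup_{\|\delta\| \le \epsilon} \mathbf{1}\{\, y \,\langle w, x + \delta \rangle \le 0 \,\},
\qquad
R_\epsilon(w) = \mathbb{E}_{(x,y) \sim \mathcal{D}}\big[\, \ell_\epsilon(w; x, y) \,\big].
\]

In this notation, a surrogate loss $\phi$ is calibrated with respect to $\ell_\epsilon$ if, for every conditional label probability, any sequence of predictors driving the conditional $\phi$-risk to its infimum also drives the conditional $\ell_\epsilon$-risk to its infimum; this is the pointwise notion of consistency the abstract refers to.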