Paper Title

A Novel Plug-and-Play Approach for Adversarially Robust Generalization

Authors

Deepak Maurya, Adarsh Barik, Jean Honorio

Abstract

In this work, we propose a robust framework that employs adversarially robust training to safeguard ML models against perturbed testing data. Our contributions can be seen from both computational and statistical perspectives. Firstly, from a computational/optimization point of view, we derive ready-to-use exact solutions for several widely used loss functions with a variety of norm constraints on the adversarial perturbation, covering various supervised and unsupervised ML problems, including regression, classification, two-layer neural networks, graphical models, and matrix completion. The solutions are either in closed form or take the form of an easily tractable optimization problem, such as 1-D convex optimization, semidefinite programming, difference-of-convex programming, or a sorting-based algorithm. Secondly, from a statistical/generalization viewpoint, using some of these results, we derive novel bounds on the adversarial Rademacher complexity for various problems, which entail new generalization bounds. Thirdly, we perform sanity-check experiments on real-world datasets for supervised problems such as regression and classification, as well as for unsupervised problems such as matrix completion and learning graphical models, with very little computational overhead.
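To make the idea of a closed-form inner maximization concrete, below is a minimal sketch (not the paper's implementation) for the special case of linear regression with per-sample l2-bounded perturbations. It uses the standard identity max_{||delta||_2 <= eps} (w^T(x + delta) - y)^2 = (|w^T x - y| + eps * ||w||_2)^2, which is the kind of exact, ready-to-use solution the abstract describes; all variable names and the subgradient-descent loop are illustrative assumptions.

```python
import numpy as np

def adversarial_sq_loss(w, X, y, eps):
    """Closed-form adversarial squared loss for linear regression with
    per-sample l2-bounded perturbations:
        max_{||delta||_2 <= eps} (w^T (x + delta) - y)^2
          = (|w^T x - y| + eps * ||w||_2)^2
    """
    residual = X @ w - y                      # clean residuals, shape (n,)
    return np.mean((np.abs(residual) + eps * np.linalg.norm(w)) ** 2)

# Hypothetical usage: subgradient descent on the robust objective.
rng = np.random.default_rng(0)
n, d, eps = 200, 5, 0.1
X = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
y = X @ w_true + 0.01 * rng.normal(size=n)

w = np.zeros(d)
lr = 0.05
for _ in range(500):
    residual = X @ w - y
    margin = np.abs(residual) + eps * np.linalg.norm(w)
    # subgradient of the mean of (|r_i| + eps * ||w||_2)^2 w.r.t. w
    grad = (2 / n) * (
        X.T @ (margin * np.sign(residual))
        + eps * margin.sum() * (w / (np.linalg.norm(w) + 1e-12))
    )
    w -= lr * grad

print("robust estimate:", np.round(w, 3))
print("ground truth:   ", np.round(w_true, 3))
```

Because the inner maximization is solved exactly in closed form, the robust objective can be optimized with the same training loop as the non-robust one, which is what makes this style of approach plug-and-play with little computational overhead.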
