Paper Title

Exploring Model Robustness with Adaptive Networks and Improved Adversarial Training

Authors

Zheng Xu, Ali Shafahi, Tom Goldstein

Abstract

Adversarial training has proven to be effective in hardening networks against adversarial examples. However, the gained robustness is limited by network capacity and the number of training samples. Consequently, to build more robust models, it is common practice to train widened networks with more parameters. To boost robustness, we instead propose a conditional normalization module that adapts the network when conditioned on input samples. Once adversarially trained, our adaptive networks outperform their non-adaptive counterparts on both clean validation accuracy and robustness. Our method is objective-agnostic and consistently improves both the conventional adversarial training objective and the TRADES objective. Our adaptive networks also outperform larger widened non-adaptive architectures that have 1.5 times more parameters. We further introduce several practical "tricks" for adversarial training that improve robustness, and we empirically verify their effectiveness.
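The core idea above is a normalization layer whose affine parameters are predicted from the input rather than fixed. The abstract does not specify the conditioning architecture, so the sketch below is a minimal, dependency-free illustration in which hypothetical callables `gamma_net` and `beta_net` stand in for whatever small network produces the per-feature scale and shift; it is not the paper's implementation.

```python
import math

def conditional_norm(x, gamma_net, beta_net, eps=1e-5):
    """Normalize a feature vector, then apply an input-conditioned
    affine transform.

    gamma_net / beta_net are hypothetical conditioning functions that
    map the raw input to per-feature scale and shift; in the paper's
    setting these would be small learned modules.
    """
    n = len(x)
    mean = sum(x) / n
    var = sum((v - mean) ** 2 for v in x) / n
    # standard normalization step
    x_hat = [(v - mean) / math.sqrt(var + eps) for v in x]
    # affine parameters depend on the input sample itself
    gamma = gamma_net(x)
    beta = beta_net(x)
    return [g * h + b for g, h, b in zip(gamma, x_hat, beta)]

# With constant conditioning functions this reduces to ordinary
# normalization with fixed affine parameters (gamma=1, beta=0).
features = [1.0, 2.0, 3.0, 4.0]
out = conditional_norm(features,
                       gamma_net=lambda x: [1.0] * len(x),
                       beta_net=lambda x: [0.0] * len(x))
```

The design point is that because `gamma` and `beta` vary per input, a single set of backbone weights can behave differently for clean and perturbed samples, which is what lets the adaptive network match wider fixed architectures.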
