Paper Title

On Connections between Regularizations for Improving DNN Robustness

Authors

Yiwen Guo, Long Chen, Yurong Chen, Changshui Zhang

Abstract

This paper analyzes regularization terms proposed recently for improving the adversarial robustness of deep neural networks (DNNs), from a theoretical point of view. Specifically, we study possible connections between several effective methods, including input-gradient regularization, Jacobian regularization, curvature regularization, and a cross-Lipschitz functional. We investigate them on DNNs with general rectified linear activations, which constitute one of the most prevalent families of models for image classification and a host of other machine learning applications. We shed light on essential ingredients of these regularizations and re-interpret their functionality. Through the lens of our study, more principled and efficient regularizations can possibly be invented in the near future.
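The abstract names input-gradient regularization as one of the methods studied. As a minimal illustration of the general idea (not the paper's formulation), the penalty adds the squared norm of the loss's gradient with respect to the input to the training objective. The sketch below, with hypothetical names (`input_gradient_regularized_loss`, `lam`), shows this on a one-layer logistic model, where the input gradient has a closed form:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def input_gradient_regularized_loss(w, x, y, lam=0.1):
    """Binary cross-entropy for a logistic model plus an input-gradient
    penalty lam * ||d loss / d x||^2 -- a sketch of input-gradient
    regularization on a one-layer model, not the paper's exact method."""
    z = float(np.dot(w, x))
    p = sigmoid(z)
    # binary cross-entropy for a label y in {0, 1}
    ce = -(y * np.log(p) + (1 - y) * np.log(1 - p))
    # closed-form input gradient of the loss: d ce / d x = (p - y) * w
    grad_x = (p - y) * w
    # penalizing this norm encourages the loss to change slowly
    # under small input perturbations
    return ce + lam * float(np.dot(grad_x, grad_x))
```

For deep networks the input gradient has no closed form, so in practice the penalty is computed by differentiating through the backward pass (double backpropagation); this toy model only illustrates the objective being regularized.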
