Paper Title

Adversarially Robust Models may not Transfer Better: Sufficient Conditions for Domain Transferability from the View of Regularization

Paper Authors

Xiaojun Xu, Jacky Yibo Zhang, Evelyn Ma, Danny Son, Oluwasanmi Koyejo, Bo Li

Paper Abstract

Machine learning (ML) robustness and domain generalization are fundamentally correlated: they essentially concern data distribution shifts under adversarial and natural settings, respectively. On one hand, recent studies show that more robust (adversarially trained) models are more generalizable. On the other hand, there is a lack of theoretical understanding of their fundamental connections. In this paper, we explore the relationship between regularization and domain transferability considering different factors such as norm regularization and data augmentations (DA). We propose a general theoretical framework proving that factors involving the model function class regularization are sufficient conditions for relative domain transferability. Our analysis implies that "robustness" is neither necessary nor sufficient for transferability; rather, regularization is a more fundamental perspective for understanding domain transferability. We then discuss popular DA protocols (including adversarial training) and show when they can be viewed as the function class regularization under certain conditions and therefore improve generalization. We conduct extensive experiments to verify our theoretical findings and show several counterexamples where robustness and generalization are negatively correlated on different datasets.
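The abstract frames adversarial training as one instance of function class regularization, alongside explicit norm penalties. As a minimal, hypothetical PyTorch sketch (not the authors' code), the snippet below illustrates both views in one training step: a one-step adversarial perturbation used as data augmentation, plus an explicit L2 norm penalty. The function name `fgsm_augment` and hyperparameters such as `eps` and `weight_decay` are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def fgsm_augment(model, x, y, eps=8 / 255):
    """One-step adversarial example (FGSM), used here as a data
    augmentation: the perturbed input stays in an L-inf ball of radius eps."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    grad, = torch.autograd.grad(loss, x)
    return (x + eps * grad.sign()).detach()

def train_step(model, optimizer, x, y, weight_decay=1e-4):
    # Adversarial training: fit the worst-case augmented inputs.
    x_adv = fgsm_augment(model, x, y)
    loss = F.cross_entropy(model(x_adv), y)
    # Explicit norm regularization: another way to constrain the
    # model's function class, the unifying perspective of the paper.
    loss = loss + weight_decay * sum(p.pow(2).sum() for p in model.parameters())
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

Under the paper's framing, both terms act by restricting the learned function class, which is the property the analysis ties to relative domain transferability, rather than robustness per se.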
