Paper Title
Understanding the Role of Adversarial Regularization in Supervised Learning
Paper Authors
Paper Abstract
Despite numerous attempts to provide empirical evidence of adversarial regularization outperforming sole supervision, the theoretical understanding of this phenomenon remains elusive. In this study, we aim to resolve whether adversarial regularization indeed performs better than sole supervision at a fundamental level. To bring this insight to fruition, we study the vanishing gradient issue, asymptotic iteration complexity, gradient flow, and provable convergence in the context of sole supervision and adversarial regularization. The key ingredient is a theoretical justification, supported by empirical evidence, of adversarial acceleration in gradient descent. In addition, motivated by a recently introduced unit-wise capacity based generalization bound, we analyze the generalization error in the adversarial framework. Guided by our observations, we cast doubt on the ability of this measure to explain generalization. We therefore leave it as an open question to explore new measures that can explain generalization behavior in adversarial learning. Furthermore, we observe an intriguing phenomenon in the neural embedded vector space while contrasting adversarial learning with sole supervision.
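To make the contrast between sole supervision and adversarial regularization concrete, the following is a minimal sketch of the two objectives on a toy least-squares problem. The specific form of the adversarial term (a fixed penalty direction `d` standing in for a trained critic) and the weight `lambda_adv` are illustrative assumptions, not the paper's actual formulation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy supervised data for a linear model y = X @ w_true.
X = rng.normal(size=(100, 5))
w_true = rng.normal(size=5)
y = X @ w_true

lambda_adv = 0.1  # illustrative weight of the adversarial term

def supervised_loss(w):
    # Sole supervision: a plain least-squares objective.
    return np.mean((X @ w - y) ** 2)

def adversarial_term(w):
    # Stand-in for an adversarial regularizer: a nonnegative penalty
    # along a fixed direction d (illustrative, not a trained critic).
    d = np.ones(5) / np.sqrt(5)
    return (d @ w) ** 2

def regularized_loss(w):
    # Adversarial regularization: supervised loss plus a weighted
    # adversarial penalty, optimized by the same gradient descent.
    return supervised_loss(w) + lambda_adv * adversarial_term(w)
```

Under this sketch, both settings minimize an objective with gradient descent; the regularized variant simply adds a nonnegative adversarial penalty, which is the setup in which questions like vanishing gradients and iteration complexity can be compared.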