Paper Title

How do SGD hyperparameters in natural training affect adversarial robustness?

Authors

Sandesh Kamath, Amit Deshpande, K. V. Subrahmanyam

Abstract

Learning rate, batch size, and momentum are three important hyperparameters of the SGD algorithm. It is known from the work of Jastrzebski et al. (arXiv:1711.04623) that large-batch training of neural networks yields models that do not generalize well. Yao et al. (arXiv:1802.08241) observe that large-batch training yields models with poor adversarial robustness. In the same paper, the authors train models with different batch sizes and compute the eigenvalues of the Hessian of the loss function. They observe that as the batch size increases, the dominant eigenvalues of the Hessian become larger. They also show that both adversarial training and small-batch training lead to a drop in the dominant eigenvalues of the Hessian, i.e., a lowering of its spectrum. They combine adversarial training and second-order information to propose a new large-batch training algorithm and obtain robust models with good generalization. In this paper, we empirically study the effect of the SGD hyperparameters on the accuracy and adversarial robustness of networks trained on unperturbed samples. Jastrzebski et al. considered training models with a fixed learning-rate-to-batch-size ratio. They observed that the higher the ratio, the better the generalization. We observe that networks trained with a constant learning-rate-to-batch-size ratio, as proposed by Jastrzebski et al., yield models that generalize well and also have almost constant adversarial robustness, independent of the batch size. We observe that momentum is more effective with varying batch sizes and a fixed learning rate than with SGD training based on a constant learning-rate-to-batch-size ratio.
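
To make the two training regimes compared in the abstract concrete, here is a minimal PyTorch sketch (not the authors' code) of how one might configure SGD so that either the learning-rate-to-batch-size ratio stays constant as the batch size changes, or the learning rate stays fixed while momentum is added. The specific ratio, the fixed learning rate, the toy model, and the random data are illustrative assumptions.

```python
# Minimal sketch, assuming a PyTorch setup; the constant RATIO value,
# the fixed learning rate, and the toy model/data are illustrative only.
import torch
import torch.nn as nn

RATIO = 0.1 / 128  # assumed constant learning-rate-to-batch-size ratio

def make_optimizer(model, batch_size, constant_ratio=True, momentum=0.0):
    """Return an SGD optimizer whose learning rate either scales with the
    batch size (constant lr/batch-size ratio) or stays fixed while the
    batch size varies."""
    lr = RATIO * batch_size if constant_ratio else 0.1  # 0.1 is an assumed fixed lr
    return torch.optim.SGD(model.parameters(), lr=lr, momentum=momentum)

def train_steps(model, optimizer, batch_size, steps=10):
    """Run a few SGD steps on random data; stands in for natural
    (unperturbed) training on a real dataset."""
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(steps):
        x = torch.randn(batch_size, 32)          # toy inputs
        y = torch.randint(0, 10, (batch_size,))  # toy labels
        optimizer.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()
        optimizer.step()

if __name__ == "__main__":
    for batch_size in (128, 512, 2048):
        model = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 10))
        # Regime 1: constant lr/batch-size ratio, no momentum.
        opt = make_optimizer(model, batch_size, constant_ratio=True)
        train_steps(model, opt, batch_size)
        # Regime 2: fixed lr with momentum, batch size varying.
        opt = make_optimizer(model, batch_size, constant_ratio=False, momentum=0.9)
        train_steps(model, opt, batch_size)
```

In a real experiment, the accuracy and adversarial robustness of the models trained under each regime would then be compared across batch sizes; the sketch only shows how the hyperparameters are coupled.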
