Paper Title


Newton-type Methods for Minimax Optimization

Authors

Guojun Zhang, Kaiwen Wu, Pascal Poupart, Yaoliang Yu

Abstract


Differential games, in particular two-player sequential zero-sum games (a.k.a. minimax optimization), have been an important modeling tool in applied science and received renewed interest in machine learning due to many recent applications, such as adversarial training, generative models and reinforcement learning. However, existing theory mostly focuses on convex-concave functions with few exceptions. In this work, we propose two novel Newton-type algorithms for nonconvex-nonconcave minimax optimization. We prove their local convergence at strict local minimax points, which are surrogates of global solutions. We argue that our Newton-type algorithms nicely complement existing ones in that (a) they converge faster to strict local minimax points; (b) they are much more effective when the problem is ill-conditioned; (c) their computational complexity remains similar. We verify the effectiveness of our Newton-type algorithms through experiments on training GANs which are intrinsically nonconvex and ill-conditioned. Our code is available at https://github.com/watml/min-max-2nd-order.
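As a minimal illustration of why second-order information helps in minimax optimization (this is only a sketch on a toy quadratic saddle, not the paper's specific algorithms; the coefficients `a`, `b`, `c` are made up for the example), a Newton-type step applied to the gradient field of the game reaches the saddle point of a quadratic problem in a single iteration, regardless of the coupling strength that would make plain gradient descent-ascent spiral:

```python
import numpy as np

# Toy quadratic minimax problem: f(x, y) = 0.5*a*x^2 - 0.5*b*y^2 + c*x*y.
# (x*, y*) = (0, 0) is its strict (global) minimax point.
a, b, c = 1.0, 1.0, 3.0  # large coupling c makes first-order dynamics rotate

def grad_field(z):
    """Vector field v(z) = (df/dx, -df/dy); its zero is the saddle point."""
    x, y = z
    return np.array([a * x + c * y, -(c * x - b * y)])

# Jacobian of v (constant here because f is quadratic).
J = np.array([[a, c], [-c, b]])

z0 = np.array([1.0, 1.0])
# Newton-type step on the gradient field: z <- z - J^{-1} v(z).
# For a quadratic game v(z) = J z, so one step lands exactly on the saddle.
z_newton = z0 - np.linalg.solve(J, grad_field(z0))
print(z_newton)  # -> [0. 0.]
```

Because `v(z) = J z` for a quadratic game, the Newton step is exact; for the nonconvex-nonconcave problems studied in the paper, such steps yield fast local convergence near strict local minimax points rather than a one-shot solution.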
