Paper Title


Gradient Free Minimax Optimization: Variance Reduction and Faster Convergence

Authors

Tengyu Xu, Zhe Wang, Yingbin Liang, H. Vincent Poor

Abstract


Many important machine learning applications amount to solving minimax optimization problems, and in many cases there is no access to the gradient information, but only to the function values. In this paper, we focus on such a gradient-free setting and consider the nonconvex-strongly-concave minimax stochastic optimization problem. In the literature, various zeroth-order (i.e., gradient-free) minimax methods have been proposed, but none of them achieve the potentially feasible computational complexity of $\mathcal{O}(\epsilon^{-3})$ suggested by the stochastic nonconvex minimization theorem. In this paper, we adopt the variance reduction technique to design a novel zeroth-order variance reduced gradient descent ascent (ZO-VRGDA) algorithm. We show that the ZO-VRGDA algorithm achieves the best known query complexity of $\mathcal{O}(\kappa(d_1 + d_2)\epsilon^{-3})$, which outperforms all previous complexity bounds by orders of magnitude, where $d_1$ and $d_2$ denote the dimensions of the optimization variables and $\kappa$ denotes the condition number. In particular, with a new analysis technique that we develop, our result does not rely on a diminishing or accuracy-dependent stepsize usually required in the existing methods. To the best of our knowledge, this is the first study of zeroth-order minimax optimization with variance reduction. Experimental results on the black-box distributional robust optimization problem demonstrate the advantageous performance of our new algorithm.
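The paper's exact ZO-VRGDA algorithm and its analysis are not reproduced here, but its two ingredients can be illustrated: (i) a two-point Gaussian-smoothing estimator that builds gradient estimates from function values only, and (ii) a SPIDER-style variance-reduced recursion that refreshes a large-batch "anchor" estimate only occasionally and otherwise corrects the previous estimate with a small batch evaluated at both the new and the old iterate using shared random directions. The sketch below is illustrative, not the paper's implementation: the function names (`zo_est`, `zo_vrgda_sketch`), the toy strongly-concave-in-`y` objective, and all step sizes and batch sizes are assumptions chosen for the demo.

```python
import numpy as np

def zo_est(f, x, y, U, V, mu=1e-4):
    """Two-point Gaussian-smoothing zeroth-order estimate of
    (grad_x f, grad_y f), using the given direction samples U and V."""
    fxy = f(x, y)
    gx = np.mean([(f(x + mu * u, y) - fxy) / mu * u for u in U], axis=0)
    gy = np.mean([(f(x, y + mu * v) - fxy) / mu * v for v in V], axis=0)
    return gx, gy

def zo_vrgda_sketch(f, x0, y0, eta_x=0.05, eta_y=0.1,
                    iters=400, q=20, big=50, small=5, mu=1e-4, seed=0):
    """Illustrative zeroth-order variance-reduced gradient descent ascent:
    descent on x, ascent on y, with a SPIDER-style recursive estimator."""
    rng = np.random.default_rng(seed)
    x, y = x0.astype(float), y0.astype(float)
    for t in range(iters):
        if t % q == 0:
            # Large-batch anchor estimate, refreshed every q iterations.
            U = rng.standard_normal((big,) + x.shape)
            V = rng.standard_normal((big,) + y.shape)
            gx, gy = zo_est(f, x, y, U, V, mu)
        else:
            # Variance-reduced recursion: correct the previous estimate
            # with small batches at the new and old iterates, reusing the
            # SAME directions so the difference has low variance.
            U = rng.standard_normal((small,) + x.shape)
            V = rng.standard_normal((small,) + y.shape)
            nx, ny = zo_est(f, x, y, U, V, mu)
            ox, oy = zo_est(f, xp, yp, U, V, mu)
            gx, gy = gx + nx - ox, gy + ny - oy
        xp, yp = x.copy(), y.copy()
        x = x - eta_x * gx  # descent step on x
        y = y + eta_y * gy  # ascent step on y
    return x, y

if __name__ == "__main__":
    # Toy saddle problem, strongly concave in y; saddle point at the origin.
    f = lambda x, y: 0.5 * x[0] ** 2 + x[0] * y[0] - y[0] ** 2
    x, y = zo_vrgda_sketch(f, np.array([1.0]), np.array([1.0]))
    print(x, y)  # both iterates approach the saddle at (0, 0)
```

Reusing the same random directions at the new and old iterates is what makes the correction term small: its magnitude scales with the distance between consecutive iterates rather than with the gradient itself, which is the mechanism behind the improved query complexity discussed in the abstract.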
