Paper Title
Large-scale nonconvex optimization: randomization, gap estimation, and numerical resolution
Paper Authors
Paper Abstract
We address a large-scale and nonconvex optimization problem involving an aggregative term. This term can be interpreted as the sum of the contributions of N agents to some common good, with N large. We investigate a relaxation of this problem, obtained by randomization. The relaxation gap is proved to converge to zero as N goes to infinity, independently of the dimension of the aggregate. We propose a stochastic method to construct an approximate minimizer of the original problem, given an approximate solution of the randomized problem. McDiarmid's concentration inequality is used to quantify the probability of success of the method. We consider the Frank-Wolfe (FW) algorithm for the resolution of the randomized problem. Each iteration of the algorithm requires solving a subproblem that can be decomposed into N independent optimization problems. A sublinear convergence rate is obtained for the FW algorithm. To handle the memory overflow problem possibly caused by the FW algorithm, we propose a stochastic Frank-Wolfe (SFW) algorithm, which ensures convergence both in expectation and in probability. Numerical experiments on a mixed-integer quadratic program illustrate the efficiency of the method.
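The decomposition property mentioned in the abstract (each Frank-Wolfe iteration reduces to N independent per-agent subproblems) can be illustrated with a minimal sketch. The code below is not the paper's algorithm: the quadratic aggregate cost f, the finite choice sets of size K, and the names G, target, and frank_wolfe are illustrative assumptions, used only to show how the linear minimization oracle separates across agents and how randomization leads to mixed strategies over each agent's choices.

```python
# Minimal sketch (illustrative assumptions, not the paper's exact method) of a
# Frank-Wolfe iteration for an aggregative objective
#     F(x_1, ..., x_N) = f( (1/N) * sum_i g_i(x_i) ),
# where each agent i picks one of K candidate contributions. The linearized
# subproblem separates into N independent minimizations, one per agent.
import numpy as np

rng = np.random.default_rng(0)

N, d = 50, 3                       # number of agents, dimension of the aggregate
K = 5                              # each agent chooses among K candidate contributions
G = rng.normal(size=(N, K, d))     # G[i, k] = contribution g_i(k) of choice k of agent i
target = rng.normal(size=d)        # hypothetical target for the aggregate

def f(y):                          # smooth cost of the aggregate (illustrative choice)
    return 0.5 * np.sum((y - target) ** 2)

def grad_f(y):
    return y - target

def frank_wolfe(num_iters=200):
    # Work with mixed (randomized) strategies mu[i] over each agent's K choices,
    # mirroring the relaxation-by-randomization idea described in the abstract.
    mu = np.full((N, K), 1.0 / K)
    for t in range(num_iters):
        agg = np.einsum('ik,ikd->d', mu, G) / N   # aggregate (1/N) sum_i E[g_i]
        grad = grad_f(agg)                        # gradient w.r.t. the aggregate
        # Linear minimization oracle: decomposes into N independent problems,
        # each agent picks the choice with the smallest linearized cost.
        scores = G @ grad                         # shape (N, K): <grad, g_i(k)>
        best = np.argmin(scores, axis=1)
        vertex = np.zeros_like(mu)
        vertex[np.arange(N), best] = 1.0
        gamma = 2.0 / (t + 2)                     # standard open-loop step size
        mu = (1 - gamma) * mu + gamma * vertex
    return mu, f(np.einsum('ik,ikd->d', mu, G) / N)

mu, value = frank_wolfe()
print("objective value of the relaxed (randomized) problem:", value)
```

The step size gamma = 2/(t + 2) is the standard open-loop choice, for which Frank-Wolfe attains a sublinear O(1/t) rate on smooth convex problems, the type of rate referred to in the abstract for the randomized problem.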