Paper Title
Consistent Approximations in Composite Optimization
Paper Authors
Paper Abstract
Approximations of optimization problems arise in computational procedures and sensitivity analysis. The resulting effect on solutions can be significant, with even small approximations of components of a problem translating into large errors in the solutions. We specify conditions under which approximations are well behaved in the sense of minimizers, stationary points, and level-sets, and this leads to a framework of consistent approximations. The framework is developed for a broad class of composite problems, which are neither convex nor smooth. We demonstrate the framework using examples from stochastic optimization, neural-network-based machine learning, distributionally robust optimization, penalty and augmented Lagrangian methods, interior-point methods, homotopy methods, smoothing methods, extended nonlinear programming, difference-of-convex programming, and multi-objective optimization. An enhanced proximal method illustrates the algorithmic possibilities. A quantitative analysis supplements the development by furnishing rates of convergence.
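Two points of the abstract, that small perturbations of a problem can move its solutions a lot, and that suitably constructed approximations (such as smoothing methods) can nevertheless be well behaved, admit a simple numerical illustration. The sketch below is not taken from the paper; it uses a hypothetical one-dimensional nonsmooth composite objective |x - 1| + 0.5 x^2 and the standard smoothing sqrt((x - 1)^2 + eps^2) of the absolute value, and shows the smoothed minimizers approaching the true minimizer x = 1 as eps shrinks, which is the kind of consistency the framework formalizes.

```python
# Minimal numerical sketch (illustrative only; not from the paper).
# (1) A tiny perturbation of an objective can move its minimizer a lot.
# (2) A smoothing approximation of a nonsmooth composite term can be
#     consistent: its minimizers converge to the true minimizer.
import numpy as np
from scipy.optimize import minimize_scalar

# (1) Instability: minimize eps * x over [-1, 1].
# For eps = +1e-9 the minimizer is -1; for eps = -1e-9 it jumps to +1.
for eps in (+1e-9, -1e-9):
    res = minimize_scalar(lambda x: eps * x, bounds=(-1.0, 1.0), method="bounded")
    print(f"perturbation eps = {eps:+.0e}  ->  minimizer {res.x:+.3f}")

# (2) Consistency: f(x) = |x - 1| + 0.5 * x**2 has minimizer x* = 1.
# Replace |x - 1| by the smooth surrogate sqrt((x - 1)**2 + eps**2).
def smoothed(eps):
    return lambda x: np.sqrt((x - 1.0) ** 2 + eps ** 2) + 0.5 * x ** 2

for eps in (1e-1, 1e-2, 1e-3):
    res = minimize_scalar(smoothed(eps), bounds=(-5.0, 5.0), method="bounded")
    print(f"smoothing eps = {eps:.0e}  ->  minimizer {res.x:.5f}  (true: 1.0)")
```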