Title
Vaidya's method for convex stochastic optimization in small dimension
Authors
Abstract
This paper considers a general problem of convex stochastic optimization in a relatively low-dimensional space (e.g., 100 variables). It is known that for deterministic convex optimization problems in small dimensions the fastest convergence is achieved by center-of-gravity-type methods, such as Vaidya's cutting-plane method. For stochastic optimization problems, whether Vaidya's method can be used comes down to how the method accumulates inexactness in the subgradient. A recent result of the authors shows that these errors do not accumulate over the iterations of Vaidya's method, which makes it possible to propose an analogue of the method for stochastic optimization problems. The primary technique is to replace the subgradient in Vaidya's method with its probabilistic counterpart: the arithmetic mean of stochastic subgradients. The present paper implements this plan, which ultimately yields an efficient method (provided that the batches can be computed in parallel) for solving convex stochastic optimization problems in relatively low-dimensional spaces.
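The batching technique described in the abstract admits a compact sketch. The Python snippet below is illustrative only: all names (`batched_subgradient`, `cutting_plane_minimize`, the parameters `batch_size` and `radius`) are ours, not the authors'. It replaces the exact subgradient with the arithmetic mean of stochastic subgradients and feeds the result to a cutting-plane loop; for brevity, the loop is the ellipsoid method, used here as a simple stand-in for Vaidya's volumetric-center method, into which the same batched oracle would be plugged in exactly the same way.

```python
# A minimal sketch of the batching idea, NOT the authors' implementation:
# the exact subgradient in a cutting-plane scheme is replaced by the
# arithmetic mean of `batch_size` stochastic subgradients. The ellipsoid
# method below is a stand-in for Vaidya's cutting-plane method.
import numpy as np

def batched_subgradient(stoch_subgrad, x, batch_size, rng):
    """Arithmetic mean of i.i.d. stochastic subgradients at x.

    In practice the batch would be computed in parallel, which is what
    makes the overall scheme efficient."""
    return np.mean([stoch_subgrad(x, rng) for _ in range(batch_size)], axis=0)

def cutting_plane_minimize(stoch_subgrad, x0, radius, n_iters=200,
                           batch_size=64, seed=0):
    """Cutting-plane loop (ellipsoid method) driven by the batched oracle."""
    rng = np.random.default_rng(seed)
    n = x0.size
    x = x0.astype(float)
    H = (radius ** 2) * np.eye(n)  # shape matrix of the starting ball
    for _ in range(n_iters):
        g = batched_subgradient(stoch_subgrad, x, batch_size, rng)
        gn = g / np.sqrt(g @ H @ g)          # normalized cut direction
        x = x - (1.0 / (n + 1)) * (H @ gn)   # move to the new center
        H = (n ** 2 / (n ** 2 - 1.0)) * (
            H - (2.0 / (n + 1)) * np.outer(H @ gn, H @ gn))
    return x

# Toy usage: minimize E[||x - a||^2 / 2] given noisy gradients x - a + noise.
if __name__ == "__main__":
    a = np.array([1.0, -2.0, 0.5])
    noisy_grad = lambda x, rng: (x - a) + rng.normal(scale=1.0, size=x.shape)
    x_hat = cutting_plane_minimize(noisy_grad, np.zeros(3), radius=10.0)
    print(x_hat)  # should land close to a
```

The design point the abstract emphasizes is visible in `batched_subgradient`: the cutting-plane loop itself is unchanged, and only the oracle is swapped for an averaged stochastic one, which is why error accumulation across iterations is the key question.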