Paper Title
Bounds for the tracking error of first-order online optimization methods
Paper Authors
Paper Abstract
This paper investigates online algorithms for smooth time-varying optimization problems, focusing first on methods with constant step-size, momentum, and extrapolation-length. Assuming strong convexity, precise results for the tracking iterate error (the limit supremum of the norm of the difference between the optimal solution and the iterates) for online gradient descent are derived. The paper then considers a general first-order framework, where a universal lower bound on the tracking iterate error is established. Furthermore, a method using "long-steps" is proposed and shown to achieve the lower bound up to a fixed constant. This method is then compared with online gradient descent for specific examples. Finally, the paper analyzes the effect of regularization when the cost is not strongly convex. With regularization, it is possible to achieve a non-regret bound. The paper ends by testing the accelerated and regularized methods on synthetic time-varying least-squares and logistic regression problems, respectively.
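For intuition only, the tracking iterate error described in the abstract is, informally, limsup_{k -> infinity} ||x_k - x_k*||, where x_k is the iterate and x_k* minimizes the cost at time k. The following is a minimal sketch (not the paper's code) of online gradient descent with a constant step size on a synthetic time-varying least-squares problem, in the spirit of the experiments mentioned above; the cost model, drift model, and step-size choice are illustrative assumptions.

# Illustrative sketch: online gradient descent with a constant step size
# tracking the minimizer of a time-varying least-squares cost
#   f_k(x) = 0.5 * ||A x - b_k||^2,  where b_k drifts slowly over time.
# All specifics (A, the drift, the step size alpha) are hypothetical choices.
import numpy as np

rng = np.random.default_rng(0)
n, d = 20, 5
A = rng.standard_normal((n, d))
x_true = rng.standard_normal(d)

L = np.linalg.norm(A.T @ A, 2)   # smoothness constant of f_k
alpha = 1.0 / L                  # constant step size
x = np.zeros(d)

tracking_errors = []
for k in range(200):
    # Slowly drifting target defines the time-varying cost f_k.
    x_true += 0.01 * rng.standard_normal(d)
    b_k = A @ x_true
    # One gradient step per time index (the "online" aspect).
    grad = A.T @ (A @ x - b_k)
    x = x - alpha * grad
    # Instantaneous minimizer x_k* of f_k, for reference.
    x_star, *_ = np.linalg.lstsq(A, b_k, rcond=None)
    tracking_errors.append(np.linalg.norm(x - x_star))

# The tracking iterate error corresponds, informally, to the limit supremum
# of ||x_k - x_k*||; here we simply report the tail of the trajectory.
print("max tracking error over last 50 steps:", max(tracking_errors[-50:]))

Under strong convexity, such a trajectory settles into a neighborhood of the moving minimizer whose radius depends on the drift and the step size, which is the kind of asymptotic bound the paper quantifies precisely.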