Paper Title
Stochastic Recursive Momentum for Policy Gradient Methods
Paper Authors
Paper Abstract
In this paper, we propose a novel algorithm named STOchastic Recursive Momentum for Policy Gradient (STORM-PG), which runs a SARAH-type stochastic recursive variance-reduced policy gradient estimator in an exponential moving average fashion. STORM-PG enjoys a provably sharp $O(1/\epsilon^3)$ sample complexity bound, matching the best-known convergence rate for policy gradient algorithms. Meanwhile, STORM-PG avoids the alternation between large and small batches that persists in comparable variance-reduced policy gradient methods, allowing considerably simpler parameter tuning. Numerical experiments demonstrate the superiority of our algorithm over competing policy gradient algorithms.
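To make the recursion concrete, here is a hedged sketch of a STORM-style recursive momentum estimator in the policy gradient setting. The notation is ours, not quoted from the paper: $d_t$ is the gradient estimate, $g(\tau_t;\theta_t)$ a stochastic policy gradient evaluated on trajectory $\tau_t$ sampled under policy $\theta_t$, $a \in (0,1]$ the momentum parameter, $\eta$ the step size, and $w_t$ an assumed importance weight correcting for the distribution shift between $\theta_{t-1}$ and $\theta_t$:

$$d_t = a\, g(\tau_t;\theta_t) + (1-a)\bigl(d_{t-1} + g(\tau_t;\theta_t) - w_t\, g(\tau_t;\theta_{t-1})\bigr), \qquad \theta_{t+1} = \theta_t + \eta\, d_t.$$

Under this reading, setting $a=1$ recovers the plain stochastic policy gradient estimator, while $a=0$ recovers a SARAH-type recursion; the exponential moving average interpolates between the two, which is what allows the method to dispense with the periodic large-batch restarts required by earlier variance-reduced policy gradient schemes.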