Paper Title
FISAR: Forward Invariant Safe Reinforcement Learning with a Deep Neural Network-Based Optimizer
Paper Authors
Paper Abstract
This paper investigates reinforcement learning with constraints, which are indispensable in safety-critical environments. To drive the constraint violation to decrease monotonically, we take the constraints as Lyapunov functions and impose new linear constraints on the updating dynamics of the policy parameters. As a result, the original safety set can be made forward-invariant. However, because the new guaranteed-feasible constraints are imposed on the updating dynamics rather than on the original policy parameters, classic optimization algorithms are no longer applicable. To address this, we propose to learn a generic deep neural network (DNN)-based optimizer that optimizes the objective while satisfying the linear constraints. Constraint satisfaction is achieved via projection onto a polytope formed by multiple linear inequality constraints, which can be solved analytically with our newly designed metric. To the best of our knowledge, this is the first DNN-based optimizer for constrained optimization with a forward-invariance guarantee. We show that our optimizer trains a policy to monotonically decrease the constraint violation while increasing the cumulative reward. Results on numerical constrained optimization and obstacle-avoidance navigation validate the theoretical findings.
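As a minimal illustration of the projection step described in the abstract, the sketch below performs a Euclidean projection of a nominal parameter update onto a single Lyapunov-descent half-space of the form ∇c(θ)ᵀv ≤ −α·c(θ). This is a hypothetical toy, not the paper's algorithm: FISAR projects onto a polytope of multiple such constraints and designs its own metric so that the projection stays analytic, whereas only the single-constraint Euclidean case shown here has a textbook closed form. All names (project_onto_halfspace, grad_c, alpha, v_nominal) are illustrative assumptions.

```python
import numpy as np

def project_onto_halfspace(d, a, b):
    """Euclidean projection of a nominal update d onto {v : a @ v <= b}.

    Closed form for a single half-space: if d violates the constraint,
    move it back along the constraint normal a until a @ v == b.
    (Hypothetical sketch; the paper projects onto a polytope of several
    linear constraints under its own metric, not the Euclidean one.)
    """
    violation = float(a @ d - b)
    if violation <= 0.0:
        return d  # already satisfies the descent condition
    return d - (violation / float(a @ a)) * a

# Toy usage: enforce the Lyapunov-descent condition grad_c @ v <= -alpha * c
# on an update v_nominal proposed by the learned optimizer.
grad_c = np.array([1.0, 2.0])     # gradient of one constraint c at theta
c_val = 0.3                       # current constraint violation c(theta) > 0
alpha = 0.1                       # prescribed decay rate of the violation
v_nominal = np.array([0.4, 0.1])  # update proposed by the DNN optimizer
v_safe = project_onto_halfspace(v_nominal, grad_c, -alpha * c_val)
print(grad_c @ v_safe)            # ~= -0.03 = -alpha * c_val, descent holds
```

The connection to forward invariance: if the projected update dynamics guarantee ċ(θ) ≤ −α·c(θ), then the violation c decays exponentially whenever it is positive and cannot become positive from inside the safe set {c ≤ 0}, which is exactly the forward-invariance and monotone-decrease behavior the abstract claims.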