Paper Title

Model-Based Reinforcement Learning with Multinomial Logistic Function Approximation

Paper Authors

Taehyun Hwang, Min-hwan Oh

Paper Abstract

We study model-based reinforcement learning (RL) for episodic Markov decision processes (MDPs) whose transition probability is parametrized by an unknown transition core with features of state and action. Despite much recent progress in analyzing algorithms in the linear MDP setting, the understanding of more general transition models remains very limited. In this paper, we establish a provably efficient RL algorithm for the MDP whose state transition is given by a multinomial logistic model. To balance the exploration-exploitation trade-off, we propose an upper confidence bound (UCB)-based algorithm. We show that our proposed algorithm achieves an $\tilde{O}(d \sqrt{H^3 T})$ regret bound, where $d$ is the dimension of the transition core, $H$ is the horizon, and $T$ is the total number of steps. To the best of our knowledge, this is the first model-based RL algorithm with multinomial logistic function approximation that has provable guarantees. We also evaluate our proposed algorithm comprehensively through numerical experiments and show that it consistently outperforms existing methods, achieving both provable efficiency and superior practical performance.
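
As a reading aid (our sketch based on the abstract's description, not a quotation from the paper), a multinomial logistic transition model of this kind places a softmax distribution over candidate next states, with the unknown transition core $\theta \in \mathbb{R}^d$ entering through features of the transition:

$$P(s' \mid s, a) = \frac{\exp\big(\phi(s, a, s')^\top \theta\big)}{\sum_{\tilde{s} \in \mathcal{S}_{s,a}} \exp\big(\phi(s, a, \tilde{s})^\top \theta\big)},$$

where $\phi(s, a, s') \in \mathbb{R}^d$ is the feature vector of a state-action-next-state triple and $\mathcal{S}_{s,a}$ denotes the set of candidate next states; the notation $\mathcal{S}_{s,a}$ and the exact feature construction are illustrative assumptions, with the precise definitions given in the paper.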
