Paper Title
Fast Reinforcement Learning with Incremental Gaussian Mixture Models
Paper Authors
Paper Abstract
This work presents a novel algorithm that integrates a data-efficient function approximator with reinforcement learning in continuous state spaces. An online and incremental algorithm capable of learning from a single pass through data, called Incremental Gaussian Mixture Network (IGMN), was employed as a sample-efficient function approximator for the joint state and Q-values space, all in a single model, resulting in a concise and data-efficient algorithm, i.e., a reinforcement learning algorithm that learns from very few interactions with the environment. Results are analyzed to explain the properties of the obtained algorithm, and it is observed that the use of the IGMN function approximator brings some important advantages to reinforcement learning in relation to conventional neural networks trained by gradient descent methods.
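The abstract describes modeling the joint state and Q-value space with a single Gaussian mixture, so that querying the model means conditioning the mixture on the current state to recover the expected Q-values. The sketch below is not the paper's IGMN (it omits the incremental update rules entirely); it only illustrates that conditioning step on a toy mixture whose component parameters are assumed for illustration.

```python
import numpy as np

def condition_gmm(priors, means, covs, x, dim_s):
    """Condition a joint GMM over [state, Q] on state x; return E[Q | s = x].

    Illustrative sketch only: a fixed mixture, not the paper's
    incrementally learned IGMN.
    """
    post = np.zeros(len(priors))
    cond_means = []
    for k, (p, mu, cov) in enumerate(zip(priors, means, covs)):
        mu_s, mu_q = mu[:dim_s], mu[dim_s:]
        S_ss = cov[:dim_s, :dim_s]          # state block of the covariance
        S_qs = cov[dim_s:, :dim_s]          # Q-vs-state cross-covariance
        diff = x - mu_s
        inv = np.linalg.inv(S_ss)
        # Density of the state marginal -> component responsibility
        dens = np.exp(-0.5 * diff @ inv @ diff) / np.sqrt(
            (2 * np.pi) ** dim_s * np.linalg.det(S_ss))
        post[k] = p * dens
        # Per-component conditional mean of Q given s = x
        cond_means.append(mu_q + S_qs @ inv @ diff)
    post /= post.sum()
    return sum(w * m for w, m in zip(post, cond_means))

# Toy setup: 1-D state, two actions (two Q outputs), two components.
priors = [0.5, 0.5]
means = [np.array([0.0, 1.0, 0.0]),   # component near s=0 prefers action 0
         np.array([1.0, 0.0, 1.0])]   # component near s=1 prefers action 1
covs = [np.eye(3) * 0.1, np.eye(3) * 0.1]

q = condition_gmm(priors, means, covs, np.array([0.0]), dim_s=1)
print(q)  # Q-estimates close to [1, 0] for a state near the first component
```

Because both Q-values and states live in one model, a single query yields all action values at once, which is the property the abstract credits for the algorithm's conciseness.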