Paper Title
Quantification before Selection: Active Dynamics Preference for Robust Reinforcement Learning
Paper Authors
Paper Abstract
Training a robust policy is critical for deploying policies in real-world systems or dealing with unknown dynamics mismatch across different dynamic systems. Domain Randomization~(DR) is a simple and elegant approach that trains a conservative policy to counter different dynamic systems without expert knowledge about the target system parameters. However, existing works reveal that the policy trained through DR tends to be over-conservative and performs poorly in target domains. Our key insight is that dynamic systems with different parameters provide different levels of difficulty for the policy, and the difficulty of behaving well in a system is constantly changing as the policy evolves. If we can actively sample systems with appropriate difficulty for the policy on the fly, this stabilizes the training process and prevents the policy from becoming over-conservative or over-optimistic. To operationalize this idea, we introduce Active Dynamics Preference~(ADP), which quantifies the informativeness and density of sampled system parameters. ADP actively selects system parameters with high informativeness and low density. We validate our approach in four robotic locomotion tasks with various discrepancies between the training and testing environments. Extensive results demonstrate that our approach has superior robustness to system inconsistency compared to several baselines.
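The selection rule described in the abstract can be illustrated with a minimal sketch. The code below is an assumption-laden toy, not the paper's implementation: it treats informativeness as an externally supplied score per candidate (the paper defines its own measure), approximates density with a simple Gaussian kernel over previously sampled parameters, and picks the top-k candidates by informativeness minus a weighted density penalty.

```python
import numpy as np

def kernel_density(candidates, history, bandwidth=0.1):
    """Density of each candidate parameter vector w.r.t. previously sampled
    parameters, via a Gaussian kernel density estimate (our assumption;
    the paper may use a different density measure)."""
    if len(history) == 0:
        return np.zeros(len(candidates))
    diffs = candidates[:, None, :] - history[None, :, :]   # (C, H, D)
    sq_dist = (diffs ** 2).sum(axis=-1)                    # (C, H)
    return np.exp(-sq_dist / (2 * bandwidth ** 2)).mean(axis=1)

def select_parameters(candidates, informativeness, history, k=4, beta=1.0):
    """ADP-style selection sketch: rank candidate system parameters by
    informativeness minus beta * density, return the top-k.

    candidates      : (C, D) array of candidate system parameters
    informativeness : (C,) per-candidate informativeness scores (assumed given)
    history         : (H, D) array of parameters sampled so far
    """
    density = kernel_density(candidates, history)
    scores = np.asarray(informativeness) - beta * density
    top = np.argsort(scores)[::-1][:k]
    return candidates[top]

# Toy usage: with equal informativeness, the candidate farthest from the
# sampling history (lowest density) wins.
cands = np.array([[0.0], [0.5], [1.0]])
info = np.array([1.0, 1.0, 1.0])
hist = np.array([[0.0], [0.05]])
selected = select_parameters(cands, info, hist, k=1)
```

Intuition for the design: the density penalty steers sampling toward under-explored regions of the system-parameter space, while the informativeness term keeps the sampled systems at a useful difficulty for the current policy.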