Paper Title
Joint Estimation of Expertise and Reward Preferences From Human Demonstrations
Paper Authors
Paper Abstract
When a robot learns from human examples, most approaches assume that the human partner provides examples of optimal behavior. However, there are applications in which the robot learns from non-expert humans. We argue that the robot should learn not only about the human's objectives, but also about their expertise level. The robot could then leverage this joint information to reduce or increase the frequency at which it provides assistance to its human partner, or to be more cautious when learning new skills from novice users. Similarly, by taking into account the human's expertise, the robot would also be able to infer a human's true objectives even when the human fails to properly demonstrate these objectives due to a lack of expertise. In this paper, we propose to jointly infer the expertise level and objective function of a human given observations of their (possibly) non-optimal demonstrations. Two inference approaches are proposed. In the first approach, inference is performed over a finite, discrete set of possible objective functions and expertise levels. In the second approach, the robot optimizes over the space of all possible hypotheses and finds the objective function and expertise level that best explain the observed human behavior. We demonstrate our proposed approaches both in simulation and with real user data.
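As a rough illustration of the first, discrete approach, the sketch below performs joint Bayesian inference over a finite grid of candidate reward functions and expertise levels. It assumes a Boltzmann-rational demonstrator whose rationality coefficient plays the role of the expertise level; the abstract does not specify the paper's actual noise model, hypothesis sets, or planner, so the likelihood form and the hypothetical q_value_fn argument are illustrative assumptions, not the authors' method.

```python
import numpy as np

# Illustrative sketch only: joint inference over a discrete set of
# (reward function, expertise level) hypotheses from demonstrations.
# Assumption: the demonstrator is Boltzmann-rational, with the expertise
# level acting as the softmax rationality coefficient beta.

def action_likelihood(q_values, action, beta):
    """P(action | state) for a Boltzmann-rational demonstrator.

    q_values: array of action values under one candidate reward.
    beta: expertise level (higher means closer to optimal behavior).
    """
    logits = beta * q_values
    logits -= logits.max()                      # numerical stability
    probs = np.exp(logits) / np.exp(logits).sum()
    return probs[action]

def joint_posterior(demos, q_value_fn, rewards, betas):
    """Posterior over (reward, expertise) pairs given demonstrations.

    demos: list of (state, action) pairs observed from the human.
    q_value_fn: hypothetical helper, q_value_fn(reward, state) -> array of
        action values (e.g. obtained by planning with known dynamics).
    rewards, betas: discrete hypothesis sets for objectives and expertise.
    """
    log_post = np.zeros((len(rewards), len(betas)))   # uniform prior
    for i, reward in enumerate(rewards):
        for j, beta in enumerate(betas):
            for state, action in demos:
                q = q_value_fn(reward, state)
                log_post[i, j] += np.log(action_likelihood(q, action, beta) + 1e-12)
    log_post -= log_post.max()                         # avoid underflow
    post = np.exp(log_post)
    return post / post.sum()
```

Under this kind of model, a low inferred expertise level discounts suboptimal actions when estimating the objective, while a high one treats the demonstrations as near-optimal; the second approach described in the abstract would instead optimize over a continuous hypothesis space rather than a fixed grid.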