Paper Title
Continuous ErrP detections during multimodal human-robot interaction
Paper Authors
Paper Abstract
Human-in-the-loop approaches are of great importance for robot applications. In the presented study, we implemented a multimodal human-robot interaction (HRI) scenario in which a simulated robot communicates with its human partner through speech and gestures. The robot announces its intention verbally and selects the appropriate action using pointing gestures. The human partner, in turn, evaluates whether the robot's verbal announcement (intention) matches the action (pointing gesture) chosen by the robot. For cases where the robot's verbal announcement does not match its chosen action, we expect error-related potentials (ErrPs) in the human electroencephalogram (EEG). These intrinsic human evaluations of robot actions, evident in the EEG, were recorded in real time, continuously segmented online, and classified asynchronously. For feature selection, we propose an approach that combines forward and backward sliding windows to train a classifier. We achieved an average classification performance of 91% across 9 subjects. As expected, we also observed relatively high variability between subjects. In the future, the proposed approach will be extended to allow for customized feature selection. To this end, the best combinations of forward and backward sliding windows will be selected automatically to account for inter-subject variability in classification performance. In addition, we plan to use the intrinsic human error evaluation, evident as ErrPs in the error case, in interactive reinforcement learning to improve multimodal human-robot interaction.
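The abstract does not give implementation details of the feature selection procedure. The sketch below is a minimal, illustrative reading of it: per-channel mean amplitudes are extracted from forward sliding windows (anchored at the epoch onset) and backward sliding windows (anchored at the epoch end), concatenated, and fed to a classifier. The window lengths, step size, mean-amplitude features, LDA classifier, and synthetic data are all assumptions, not the authors' actual pipeline.

```python
"""Sketch: forward + backward sliding-window features for ErrP classification.

All parameter choices here (window length, step, mean-amplitude features,
LDA classifier) are illustrative assumptions; the abstract does not
specify them.
"""
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler


def sliding_window_features(segment, win_len, step, direction="forward"):
    """Mean amplitude per channel in each sliding window.

    segment: array of shape (n_channels, n_samples), one EEG epoch.
    direction: "forward" slides from the epoch onset, "backward" mirrors
    the same window positions from the epoch end.
    Returns a 1-D feature vector of length n_windows * n_channels.
    """
    n_channels, n_samples = segment.shape
    feats = []
    for start in range(0, n_samples - win_len + 1, step):
        if direction == "forward":
            window = segment[:, start:start + win_len]
        else:
            stop = n_samples - start
            window = segment[:, stop - win_len:stop]
        feats.append(window.mean(axis=1))
    return np.concatenate(feats)


def extract_features(epochs, win_len=20, step=10):
    """Concatenate forward and backward sliding-window features per epoch."""
    return np.array([
        np.concatenate([
            sliding_window_features(ep, win_len, step, "forward"),
            sliding_window_features(ep, win_len, step, "backward"),
        ])
        for ep in epochs
    ])


if __name__ == "__main__":
    # Synthetic stand-in data: 100 epochs, 8 channels, 1 s at 100 Hz.
    rng = np.random.default_rng(0)
    epochs = rng.standard_normal((100, 8, 100))
    labels = rng.integers(0, 2, size=100)  # 0 = correct action, 1 = error (ErrP)

    X = extract_features(epochs)
    clf = make_pipeline(StandardScaler(), LinearDiscriminantAnalysis())
    scores = cross_val_score(clf, X, labels, cv=5)
    print("mean CV accuracy:", scores.mean())
```

A subject-specific extension, as the abstract suggests, could cross-validate over several (win_len, step) combinations per subject and keep the best-performing one.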