Paper Title
Toward Adaptive Trust Calibration for Level 2 Driving Automation
Paper Authors
Paper Abstract
Properly calibrated human trust is essential for successful interaction between humans and automation. However, while human trust calibration can be improved by increased automation transparency, too much transparency can overwhelm the human and inflate workload. To address this tradeoff, we present a probabilistic framework using a partially observable Markov decision process (POMDP) for modeling the coupled trust-workload dynamics of human behavior in an action-automation context. We specifically consider hands-off Level 2 driving automation in a city environment involving multiple intersections, where the human chooses whether or not to rely on the automation. We consider automation reliability, automation transparency, and scene complexity, along with human reliance and eye-gaze behavior, to model the dynamics of human trust and workload. We demonstrate that our model framework can appropriately vary automation transparency based on real-time human trust and workload belief estimates to achieve trust calibration.
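The core mechanism the abstract describes, maintaining a real-time belief over coupled trust-workload states from observed human behavior and selecting a transparency level from that belief, can be illustrated with a standard discrete POMDP belief update. The following is a minimal sketch only: the four-state discretization, the transition and observation matrices, and the transparency rule are all hypothetical toy values, not the paper's fitted model.

```python
import numpy as np

# Hypothetical coupled trust-workload states: (trust, workload) pairs.
STATES = ["loTrust_loWL", "loTrust_hiWL", "hiTrust_loWL", "hiTrust_hiWL"]

# Toy transition model T[a, s, s'] under transparency action a (0=low, 1=high).
# Assumed dynamics: high transparency tends to build trust but raise workload.
T = np.array([
    # a = 0 (low transparency): states mostly persist
    [[0.80, 0.10, 0.05, 0.05],
     [0.10, 0.80, 0.05, 0.05],
     [0.10, 0.05, 0.80, 0.05],
     [0.05, 0.10, 0.05, 0.80]],
    # a = 1 (high transparency): probability mass shifts toward high trust
    # and high workload
    [[0.20, 0.30, 0.20, 0.30],
     [0.05, 0.45, 0.05, 0.45],
     [0.05, 0.10, 0.35, 0.50],
     [0.02, 0.18, 0.10, 0.70]],
])

# Toy observation model O[a, s', o]: P(reliance o | next state s'), where
# o = 1 means the human relied on the automation. Reliance is assumed more
# likely under high trust; here the model is taken identical for both actions.
O = np.array([
    [[0.7, 0.3], [0.8, 0.2], [0.2, 0.8], [0.3, 0.7]],
    [[0.7, 0.3], [0.8, 0.2], [0.2, 0.8], [0.3, 0.7]],
])

def belief_update(belief, T, O, action, obs):
    """One Bayesian POMDP belief update: predict, then correct, then normalize."""
    predicted = belief @ T[action]           # prior over next state
    posterior = predicted * O[action][:, obs]  # weight by observation likelihood
    return posterior / posterior.sum()

def choose_transparency(belief):
    """Toy policy: raise transparency when trust is likely low
    and workload is not already likely high."""
    p_low_trust = belief[0] + belief[1]
    p_high_wl = belief[1] + belief[3]
    return 1 if (p_low_trust > 0.5 and p_high_wl < 0.5) else 0

# Usage: start from a uniform prior and filter a short reliance sequence.
belief = np.full(4, 0.25)
for obs in [0, 0, 1]:  # observed reliance behavior at successive intersections
    action = choose_transparency(belief)
    belief = belief_update(belief, T, O, action, obs)
```

In the paper's framework the transparency choice comes from solving the POMDP for an optimal policy rather than the threshold heuristic used here; the sketch only shows how reliance observations drive the belief that such a policy conditions on.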