Paper Title
Attentional Bottleneck: Towards an Interpretable Deep Driving Network
Paper Authors
Paper Abstract
Deep neural networks are a key component of behavior prediction and motion generation for self-driving cars. One of their main drawbacks is a lack of transparency: they should provide easy-to-interpret rationales for what triggers certain behaviors. We propose an architecture called Attentional Bottleneck with the goal of improving transparency. Our key idea is to combine visual attention, which identifies which aspects of the input the model is using, with an information bottleneck that restricts the model to using only the aspects of the input that are important. This not only provides sparse and interpretable attention maps (e.g., focusing only on specific vehicles in the scene), but also adds this transparency at no cost to model accuracy. In fact, we find slight improvements in accuracy when applying Attentional Bottleneck to the ChauffeurNet model, whereas accuracy deteriorates with a traditional visual attention model.
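
As a rough illustration of the mechanism the abstract describes, the following is a minimal PyTorch sketch in which a learned spatial attention map gates an encoder feature map before the downstream driving head, with a simple sparsity penalty standing in for the information-bottleneck term. The module name AttentionalBottleneck, the layer sizes, and the sigmoid-plus-L1 gating are illustrative assumptions, not the paper's exact formulation.

# Illustrative sketch only: a spatial attention map gates encoder features,
# and an L1-style penalty stands in for the information-bottleneck term.
import torch
import torch.nn as nn


class AttentionalBottleneck(nn.Module):
    """Gates a feature map with a learned, sparse spatial attention map."""

    def __init__(self, channels: int):
        super().__init__()
        # 1x1 conv produces one attention logit per spatial location (assumed design).
        self.attn_logits = nn.Conv2d(channels, 1, kernel_size=1)

    def forward(self, feats: torch.Tensor):
        # feats: (B, C, H, W) features from a perception backbone.
        attn = torch.sigmoid(self.attn_logits(feats))  # (B, 1, H, W), values in (0, 1)
        gated = feats * attn                           # only attended locations carry information forward
        # Sparsity penalty pushes most gates toward zero, yielding the
        # sparse, interpretable attention maps described in the abstract.
        sparsity_loss = attn.mean()
        return gated, attn, sparsity_loss


if __name__ == "__main__":
    feats = torch.randn(2, 64, 50, 50)                 # dummy encoder features
    bottleneck = AttentionalBottleneck(channels=64)
    gated, attn, loss = bottleneck(feats)
    print(gated.shape, attn.shape, loss.item())

In a full model, the sparsity (bottleneck) term would be added to the driving loss with a weighting coefficient, trading off how much of the input the attention map is allowed to pass through against task accuracy.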