Paper Title
Towards Best Practice of Interpreting Deep Learning Models for EEG-based Brain Computer Interfaces
Paper Authors
Paper Abstract
As deep learning has achieved state-of-the-art performance on many tasks of EEG-based BCI, many efforts have been made in recent years to understand what the models have learned. This is commonly done by generating a heatmap indicating to what extent each pixel of the input contributes to the final classification of a trained model. Despite their wide use, it is not yet understood to what extent the obtained interpretation results can be trusted and how accurately they reflect the model's decisions. To fill this research gap, we conduct a study to quantitatively evaluate different deep interpretation techniques on EEG datasets. The results reveal the importance of selecting a proper interpretation technique as the first step. In addition, we find that the quality of the interpretation results is inconsistent across individual samples, even when a method with overall good performance is used. Many factors, including model structure and dataset type, can affect the quality of the interpretation results. Based on these observations, we propose a set of procedures that allow the interpretation results to be presented in an understandable and trustworthy way. We illustrate the usefulness of our method for EEG-based BCI with examples selected from different scenarios.
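As a concrete illustration of the heatmap idea described in the abstract, the sketch below computes a simple gradient-based saliency map for an EEG classifier. This is a minimal sketch, assuming a trained PyTorch model that maps an input of shape (batch, channels, time) to class logits; the plain input-gradient attribution and the names `saliency_heatmap`, `trained_model`, and `eeg_batch` are illustrative assumptions, not the specific interpretation techniques evaluated in the paper.

```python
import torch

def saliency_heatmap(model: torch.nn.Module, x: torch.Tensor) -> torch.Tensor:
    """Return |d(logit of predicted class)/d(input)| as an attribution map.

    A basic gradient-based interpretation: the magnitude of the gradient of the
    predicted-class logit with respect to each input value is taken as that
    value's contribution to the classification.
    """
    model.eval()
    x = x.clone().detach().requires_grad_(True)     # track gradients w.r.t. the input
    logits = model(x)                                # shape: (batch, num_classes)
    pred = logits.argmax(dim=1)                      # predicted class per sample
    score = logits.gather(1, pred.unsqueeze(1)).sum()
    score.backward()                                 # gradients flow back to x
    return x.grad.abs()                              # same shape as x, e.g. (batch, channels, time)

# Example usage (hypothetical model and data):
# heat = saliency_heatmap(trained_model, eeg_batch)  # heatmap per channel and time point
```

Plain input gradients are only one of many attribution methods; the paper's point is that such methods differ in reliability, so the choice of technique and a per-sample quality check both matter.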