Paper Title
Explaining Regression Based Neural Network Model
Paper Authors
Paper Abstract
Several methods have been proposed to explain Deep Neural Networks (DNNs). However, to our knowledge, only classification networks have been studied to determine which input dimensions motivated the decision. Furthermore, as there is no ground truth for this problem, results are only assessed qualitatively with regard to what would be meaningful to a human. In this work, we design an experimental setting in which a ground truth can be established: we generate ideal signals and disrupted signals containing errors, and train a neural network that determines the quality of the signals. This quality is simply a score based on the distance between a disrupted signal and the corresponding ideal signal. We then try to find out how the network estimated this score, hoping to identify the time-steps and dimensions of the signal where errors are present. This experimental setting enables us to compare several network explanation methods and to propose a new method, named AGRA for Accurate Gradient, based on several training runs that decrease the noise present in most state-of-the-art results. Comparative results show that the proposed method outperforms state-of-the-art methods at locating the time-steps where errors occur in the signal.
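The abstract does not specify how the signals are generated or how the quality score is computed; the following is a minimal illustrative sketch of such an experimental setting, with all function names (generate_ideal_signal, disrupt, quality_score) and signal parameters being assumptions rather than the authors' actual implementation.

```python
import numpy as np

def generate_ideal_signal(n_steps=128, n_dims=3, rng=None):
    """Hypothetical 'ideal' signal: a smooth multivariate sum of sinusoids."""
    rng = rng or np.random.default_rng()
    t = np.linspace(0.0, 1.0, n_steps)
    freqs = rng.uniform(1.0, 5.0, size=n_dims)
    phases = rng.uniform(0.0, 2 * np.pi, size=n_dims)
    return np.sin(2 * np.pi * freqs[None, :] * t[:, None] + phases[None, :])

def disrupt(ideal, n_errors=5, amplitude=1.0, rng=None):
    """Corrupt the ideal signal at a few random (time-step, dimension) positions.

    The boolean mask records where errors were injected and serves as the
    ground truth that explanation methods are later compared against.
    """
    rng = rng or np.random.default_rng()
    disrupted = ideal.copy()
    mask = np.zeros_like(ideal, dtype=bool)
    steps = rng.integers(0, ideal.shape[0], size=n_errors)
    dims = rng.integers(0, ideal.shape[1], size=n_errors)
    disrupted[steps, dims] += amplitude * rng.standard_normal(n_errors)
    mask[steps, dims] = True
    return disrupted, mask

def quality_score(disrupted, ideal):
    """Quality target: a score derived from the distance to the ideal signal."""
    return float(np.linalg.norm(disrupted - ideal))

# A regression network would then be trained to predict quality_score(disrupted, ideal)
# from the disrupted signal alone, and explanation methods would be evaluated on how
# well they recover the error locations stored in the mask.
```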