Paper Title

Sequential Interpretability: Methods, Applications, and Future Direction for Understanding Deep Learning Models in the Context of Sequential Data

Authors

Benjamin Shickel, Parisa Rashidi

Abstract

Deep learning continues to revolutionize an ever-growing number of critical application areas including healthcare, transportation, finance, and basic sciences. Despite their increased predictive power, model transparency and human explainability remain a significant challenge due to the "black box" nature of modern deep learning models. In many cases the desired balance between interpretability and performance is predominately task specific. Human-centric domains such as healthcare necessitate a renewed focus on understanding how and why these frameworks are arriving at critical and potentially life-or-death decisions. Given the quantity of research and empirical successes of deep learning for computer vision, most of the existing interpretability research has focused on image processing techniques. Comparatively, less attention has been paid to interpreting deep learning frameworks using sequential data. Given recent deep learning advancements in highly sequential domains such as natural language processing and physiological signal processing, the need for deep sequential explanations is at an all-time high. In this paper, we review current techniques for interpreting deep learning techniques involving sequential data, identify similarities to non-sequential methods, and discuss current limitations and future avenues of sequential interpretability research.
