Paper Title

Semantics of the Black-Box: Can knowledge graphs help make deep learning systems more interpretable and explainable?

Paper Authors

Manas Gaur, Keyur Faldu, Amit Sheth

Paper Abstract

The recent series of innovations in deep learning (DL) has shown enormous potential to impact individuals and society, both positively and negatively. DL models utilizing massive computing power and enormous datasets have significantly outperformed prior benchmarks on increasingly difficult, well-defined research tasks across technology domains such as computer vision, natural language processing, signal processing, and human-computer interaction. However, the Black-Box nature of DL models and their over-reliance on massive amounts of data condensed into labels and dense representations pose challenges for the interpretability and explainability of such systems. Furthermore, DL models have not yet demonstrated the ability to effectively utilize relevant domain knowledge and experience critical to human understanding. This aspect was missing from early data-focused approaches and has necessitated knowledge-infused learning and other strategies for incorporating computational knowledge. This article demonstrates how knowledge, provided as a knowledge graph, can be incorporated into DL methods using knowledge-infused learning, one such strategy. We then discuss how this makes a fundamental difference to the interpretability and explainability of current approaches, and illustrate it with examples from natural language processing for healthcare and education applications.
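
The abstract describes knowledge-infused learning only at a high level, so the sketch below is a hypothetical illustration rather than the authors' method: a minimal PyTorch example of one shallow form of knowledge infusion, in which embeddings of knowledge-graph entities linked to the input text are pooled and fused with the text representation before a prediction is made. All names (`KnowledgeInfusedClassifier`, `text_vec`, `entity_ids`) and dimensions are illustrative assumptions.

```python
import torch
import torch.nn as nn

class KnowledgeInfusedClassifier(nn.Module):
    """Hypothetical sketch: fuse a text representation with pooled
    embeddings of knowledge-graph (KG) entities linked to the input,
    then classify over the combined vector."""

    def __init__(self, text_dim, num_kg_entities, kg_dim, num_classes):
        super().__init__()
        # Entity embedding table; in practice this could be initialized
        # from pretrained graph embeddings (e.g., TransE or node2vec).
        self.kg_embedding = nn.Embedding(num_kg_entities, kg_dim)
        self.classifier = nn.Linear(text_dim + kg_dim, num_classes)

    def forward(self, text_vec, entity_ids):
        # text_vec: (batch, text_dim), from any text encoder.
        # entity_ids: (batch, k), KG entities linked to each input.
        kg_vec = self.kg_embedding(entity_ids).mean(dim=1)  # pool the k entities
        fused = torch.cat([text_vec, kg_vec], dim=-1)
        return self.classifier(fused)

# Toy usage with random tensors standing in for a real encoder and KG linker.
model = KnowledgeInfusedClassifier(text_dim=768, num_kg_entities=1000,
                                   kg_dim=64, num_classes=3)
text_vec = torch.randn(2, 768)               # e.g., sentence vectors
entity_ids = torch.randint(0, 1000, (2, 5))  # 5 linked entities per input
logits = model(text_vec, entity_ids)
print(logits.shape)  # torch.Size([2, 3])
```

One appeal of this kind of fusion for interpretability is that the KG entities involved in a prediction are explicit, so they can be reported back to a user as human-readable evidence; deeper infusion strategies could instead inject knowledge into intermediate layers of the model.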
