Paper Title
Neuro-symbolic Architectures for Context Understanding
Paper Authors
Paper Abstract
Computational context understanding refers to an agent's ability to fuse disparate sources of information for decision-making and is, therefore, generally regarded as a prerequisite for sophisticated machine reasoning capabilities, such as in artificial intelligence (AI). Data-driven and knowledge-driven methods are two classical techniques in the pursuit of such machine sense-making capability. However, while data-driven methods seek to model the statistical regularities of events by making observations in the real world, they remain difficult to interpret and they lack mechanisms for naturally incorporating external knowledge. Conversely, knowledge-driven methods combine structured knowledge bases, perform symbolic reasoning based on axiomatic principles, and are more interpretable in their inferential processing; however, they often lack the ability to estimate the statistical salience of an inference. To combat these issues, we propose the use of hybrid AI methodology as a general framework for combining the strengths of both approaches. Specifically, we inherit the concept of neuro-symbolism as a way of using knowledge bases to guide the learning process of deep neural networks. We further ground our discussion in two applications of neuro-symbolism and, in both cases, show that our systems maintain interpretability while achieving comparable performance relative to the state-of-the-art.
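One common way a knowledge base can guide neural learning, as the abstract describes in general terms, is to add a knowledge-consistency penalty to the usual data-driven loss. The sketch below is a minimal illustration of that idea, not the authors' actual architecture: the function names (`hybrid_loss`), the boolean `allowed` mask standing in for knowledge-base constraints, and the weighting parameter `lam` are all hypothetical choices made for this example.

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax over a 1-D logit vector."""
    e = np.exp(z - z.max())
    return e / e.sum()

def hybrid_loss(logits, target, allowed, lam=1.0):
    """Cross-entropy (data-driven term) plus a penalty on the
    probability mass assigned to classes the knowledge base rules
    out in the current context (knowledge-driven term)."""
    p = softmax(logits)
    ce = -np.log(p[target] + 1e-12)          # standard supervised loss
    kb_penalty = p[~allowed].sum()           # mass on KB-inconsistent classes
    return ce + lam * kb_penalty

# Example: the KB says class 2 is impossible in this context,
# so any probability the network puts on it is penalized.
logits = np.array([2.0, 0.5, -1.0])
allowed = np.array([True, True, False])
loss = hybrid_loss(logits, target=0, allowed=allowed)
```

Because the penalty is differentiable in the logits, it can be minimized jointly with the task loss by ordinary gradient descent, which is the sense in which the symbolic constraints "guide" the network's training.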