Paper Title

NegatER: Unsupervised Discovery of Negatives in Commonsense Knowledge Bases

Paper Authors

Tara Safavi, Jing Zhu, Danai Koutra

Abstract

Codifying commonsense knowledge in machines is a longstanding goal of artificial intelligence. Recently, much progress toward this goal has been made with automatic knowledge base (KB) construction techniques. However, such techniques focus primarily on the acquisition of positive (true) KB statements, even though negative (false) statements are often also important for discriminative reasoning over commonsense KBs. As a first step toward the latter, this paper proposes NegatER, a framework that ranks potential negatives in commonsense KBs using a contextual language model (LM). Importantly, as most KBs do not contain negatives, NegatER relies only on the positive knowledge in the LM and does not require ground-truth negative examples. Experiments demonstrate that, compared to multiple contrastive data augmentation approaches, NegatER yields negatives that are more grammatical, coherent, and informative -- leading to statistically significant accuracy improvements in a challenging KB completion task and confirming that the positive knowledge in LMs can be "re-purposed" to generate negative knowledge.
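
To make the ranking idea concrete, here is a minimal sketch of one way the abstract's description could be realized: fine-tune a contextual LM classifier on positive triples only, corrupt positives to generate candidate negatives, and rank candidates by the classifier's truth score. The backbone model, the triple-to-text serialization, and the corruption and ranking heuristics below are illustrative assumptions, not the authors' exact pipeline.

```python
# Illustrative sketch only: score candidate negatives with a contextual
# LM and rank them. Assumes the classification head has already been
# fine-tuned on positive KB statements serialized as text; out of the
# box, bert-base-uncased's classification head is randomly initialized.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

MODEL_NAME = "bert-base-uncased"  # assumed backbone, not necessarily the paper's
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME, num_labels=2)
model.eval()


def truth_score(head: str, relation: str, tail: str) -> float:
    """Probability the LM assigns to a triple being a true statement."""
    text = f"{head} {relation.replace('_', ' ')} {tail}"
    inputs = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits
    return torch.softmax(logits, dim=-1)[0, 1].item()


def rank_candidate_negatives(positives, entities):
    """Corrupt the tail of each positive triple and rank the corruptions
    by ascending truth score, so the statements the LM finds least
    plausible surface first as candidate negatives."""
    known = set(positives)
    candidates = {
        (h, r, e)
        for (h, r, t) in positives
        for e in entities
        if e != t and (h, r, e) not in known
    }
    return sorted(candidates, key=lambda triple: truth_score(*triple))


# Hypothetical toy KB, for illustration only.
positives = [("bird", "capable_of", "fly"), ("fish", "capable_of", "swim")]
entities = ["fly", "swim", "read"]
for h, r, t in rank_candidate_negatives(positives, entities)[:3]:
    print(f"candidate negative: ({h}, {r}, {t})")
```

Note that, as in the abstract's setting, the sketch uses no ground-truth negative examples: the only supervision comes from the positive triples the classifier was fine-tuned on, and negatives are inferred from what that positively-trained model scores as implausible.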
