Paper Title

Zero-Resource Knowledge-Grounded Dialogue Generation

Authors

Linxiao Li, Can Xu, Wei Wu, Yufan Zhao, Xueliang Zhao, Chongyang Tao

Abstract

While neural conversation models have shown great potential for generating informative and engaging responses by introducing external knowledge, learning such a model often requires knowledge-grounded dialogues that are difficult to obtain. To overcome the data challenge and reduce the cost of building a knowledge-grounded dialogue system, we explore the problem under a zero-resource setting by assuming that no context-knowledge-response triples are needed for training. To this end, we propose representing both the knowledge that bridges a context and a response and the way that knowledge is expressed as latent variables, and devise a variational approach that can effectively estimate a generation model from a dialogue corpus and a knowledge corpus that are independent of each other. Evaluation results on three benchmarks of knowledge-grounded dialogue generation indicate that our model achieves performance comparable to state-of-the-art methods that rely on knowledge-grounded dialogues for training, and exhibits good generalization ability across different topics and different datasets.
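The variational approach mentioned in the abstract optimizes an evidence lower bound (ELBO) in which the grounding knowledge is a latent variable. The following is a minimal toy sketch of such an objective for a discrete latent knowledge variable z; the function name, the toy distributions, and the three-snippet setup are illustrative assumptions, not the paper's actual formulation.

```python
import numpy as np

def elbo_with_latent_knowledge(log_p_r_given_cz, q_z, p_z):
    """Toy ELBO for a response model with a discrete latent knowledge variable z.

    ELBO = E_{q(z|c,r)}[log p(r|c,z)] - KL(q(z|c,r) || p(z|c))
    This lower-bounds log p(r|c) = log sum_z p(z|c) p(r|c,z).
    """
    q_z = np.asarray(q_z, dtype=float)
    p_z = np.asarray(p_z, dtype=float)
    # Expected reconstruction term under the approximate posterior q(z|c,r).
    reconstruction = float(np.sum(q_z * log_p_r_given_cz))
    # KL divergence between the posterior q(z|c,r) and the prior p(z|c).
    kl = float(np.sum(q_z * (np.log(q_z) - np.log(p_z))))
    return reconstruction - kl

# Hypothetical example: three candidate knowledge snippets for one (context, response) pair.
log_lik = np.log(np.array([0.6, 0.3, 0.1]))  # log p(r|c,z) for each snippet z
q = np.array([0.7, 0.2, 0.1])                # approximate posterior q(z|c,r)
prior = np.array([1 / 3, 1 / 3, 1 / 3])      # uniform prior p(z|c)
elbo = elbo_with_latent_knowledge(log_lik, q, prior)
```

Maximizing this bound jointly tightens the posterior over which knowledge snippet grounds the response and improves the response likelihood, which is how training can proceed without observed context-knowledge-response triples.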
