Paper Title

Low-Resource Knowledge-Grounded Dialogue Generation

Authors

Xueliang Zhao, Wei Wu, Chongyang Tao, Can Xu, Dongyan Zhao, Rui Yan

Abstract

Responding with knowledge has been recognized as an important capability for an intelligent conversational agent. Yet knowledge-grounded dialogues, as training data for learning such a response generation model, are difficult to obtain. Motivated by the challenge in practice, we consider knowledge-grounded dialogue generation under a natural assumption that only limited training examples are available. In such a low-resource setting, we devise a disentangled response decoder in order to isolate parameters that depend on knowledge-grounded dialogues from the entire generation model. By this means, the major part of the model can be learned from a large number of ungrounded dialogues and unstructured documents, while the remaining small parameters can be well fitted using the limited training examples. Evaluation results on two benchmarks indicate that with only 1/8 training data, our model can achieve the state-of-the-art performance and generalize well on out-of-domain knowledge.
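The abstract's "disentangled response decoder" can be read as a decoder whose next-token distribution is composed from separately trained components, so that only a small set of parameters depends on knowledge-grounded data. The following is a minimal numerical sketch of that idea, not the authors' implementation: three stand-in component distributions (a plain language model, a context processor trained on ungrounded dialogues, and a knowledge processor trained on documents) are combined by a small set of mixture weights, which are the only part that would need the limited grounded examples. All names and values here are hypothetical.

```python
import numpy as np

VOCAB = 6  # toy vocabulary size
rng = np.random.default_rng(0)

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

# Stand-in next-token distributions from three component decoders.
p_lm = softmax(rng.normal(size=VOCAB))         # language model: learned from plain text
p_context = softmax(rng.normal(size=VOCAB))    # context processor: ungrounded dialogues
p_knowledge = softmax(rng.normal(size=VOCAB))  # knowledge processor: documents

# Small "decoding manager": mixture weights over the components.
# Hypothetical fixed values; per-step predicted weights would play this
# role in a full model, and they are the part fitted on grounded data.
w = softmax(np.array([0.2, 0.5, 0.3]))

p_next = w[0] * p_lm + w[1] * p_context + w[2] * p_knowledge
assert abs(p_next.sum() - 1.0) < 1e-9  # convex mixture is still a distribution
```

The point of the decomposition is the data requirement: `p_lm` and `p_context` can be pretrained on abundant ungrounded corpora, leaving only the knowledge component and the mixing weights to be fitted on the scarce knowledge-grounded dialogues.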
