Paper Title

DialogVED: A Pre-trained Latent Variable Encoder-Decoder Model for Dialog Response Generation

Paper Authors

Wei Chen, Yeyun Gong, Song Wang, Bolun Yao, Weizhen Qi, Zhongyu Wei, Xiaowu Hu, Bartuer Zhou, Yi Mao, Weizhu Chen, Biao Cheng, Nan Duan

Paper Abstract

Dialog response generation in open domain is an important research topic where the main challenge is to generate relevant and diverse responses. In this paper, we propose a new dialog pre-training framework called DialogVED, which introduces continuous latent variables into the enhanced encoder-decoder pre-training framework to increase the relevance and diversity of responses. With the help of a large dialog corpus (Reddit), we pre-train the model using the following 4 tasks adopted in language models (LMs) and variational autoencoders (VAEs): 1) masked language model; 2) response generation; 3) bag-of-words prediction; and 4) KL divergence reduction. We also add additional parameters to model the turn structure in dialogs to improve the performance of the pre-trained model. We conduct experiments on PersonaChat, DailyDialog, and DSTC7-AVSD benchmarks for response generation. Experimental results show that our model achieves new state-of-the-art results on all these datasets.
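
The abstract lists four pre-training signals: masked language modeling, response generation, bag-of-words prediction, and KL divergence reduction. Below is a minimal PyTorch sketch of how such a combined objective could be computed. The function name `dialogved_style_loss`, all argument names, and the tensor shapes are illustrative assumptions, not the paper's released implementation; `kl_weight` stands in for whatever KL weighting or annealing schedule the actual training uses.

```python
import torch
import torch.nn.functional as F

def dialogved_style_loss(mlm_logits, mlm_labels,
                         resp_logits, resp_labels,
                         bow_logits, bow_targets,
                         mu, logvar, kl_weight=1.0):
    """Combine the four pre-training signals listed in the abstract.

    mlm_logits:  (B, src_len, V)  masked-LM predictions over the dialog context
    resp_logits: (B, tgt_len, V)  decoder predictions for the response tokens
    bow_logits:  (B, V)           order-free word predictions made from the latent z
    bow_targets: (B, V)           multi-hot indicator of words in the response
    mu, logvar:  (B, D)           parameters of the approximate posterior q(z | dialog)
    Label positions equal to -100 are ignored (standard PyTorch convention).
    """
    vocab = mlm_logits.size(-1)

    # 1) Masked language model loss on the encoder side.
    mlm_loss = F.cross_entropy(mlm_logits.reshape(-1, vocab),
                               mlm_labels.reshape(-1), ignore_index=-100)

    # 2) Response generation loss: token-level NLL on the target response.
    rg_loss = F.cross_entropy(resp_logits.reshape(-1, vocab),
                              resp_labels.reshape(-1), ignore_index=-100)

    # 3) Bag-of-words loss: the latent variable must predict which words
    #    occur in the response, ignoring word order.
    log_probs = F.log_softmax(bow_logits, dim=-1)
    bow_loss = -(log_probs * bow_targets).sum(dim=1)
    bow_loss = (bow_loss / bow_targets.sum(dim=1).clamp(min=1.0)).mean()

    # 4) KL term of the VAE: divergence between q(z | dialog) and a
    #    standard normal prior N(0, I).
    kl_loss = -0.5 * (1 + logvar - mu.pow(2) - logvar.exp()).sum(dim=1).mean()

    return mlm_loss + rg_loss + bow_loss + kl_weight * kl_loss

# Smoke test with toy shapes; all tensors are random.
B, S, T, V, D = 2, 12, 8, 100, 32
loss = dialogved_style_loss(
    torch.randn(B, S, V), torch.randint(0, V, (B, S)),
    torch.randn(B, T, V), torch.randint(0, V, (B, T)),
    torch.randn(B, V), (torch.rand(B, V) > 0.9).float(),
    torch.randn(B, D), torch.randn(B, D))
print(loss.item())
```

In VAE-style training, the KL weight is commonly annealed up from zero to avoid posterior collapse; the abstract's "KL divergence reduction" corresponds to term 4 above. The turn-structure parameters mentioned in the abstract (e.g., speaker or turn embeddings added to the encoder input) are omitted from this sketch.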
