Paper Title

Speaker Sensitive Response Evaluation Model

Paper Authors

JinYeong Bak, Alice Oh

Paper Abstract

Automatic evaluation of open-domain dialogue response generation is very challenging because there are many appropriate responses for a given context. Existing evaluation models merely compare the generated response with the ground truth response and rate many of the appropriate responses as inappropriate if they deviate from the ground truth. One approach to resolve this problem is to consider the similarity of the generated response with the conversational context. In this paper, we propose an automatic evaluation model based on that idea and learn the model parameters from an unlabeled conversation corpus. Our approach considers the speakers in defining the different levels of similar context. We use a Twitter conversation corpus that contains many speakers and conversations to test our evaluation model. Experiments show that our model outperforms the other existing evaluation metrics in terms of high correlation with human annotation scores. We also show that our model trained on Twitter can be applied to movie dialogues without any additional training. We provide our code and the learned parameters so that they can be used for automatic evaluation of dialogue response generation models.
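The abstract's core idea is to score a generated response by its similarity to the conversational context rather than to a single ground-truth reply. The minimal sketch below only illustrates that scoring idea and the correlation check against human annotations; it is not the paper's model. `embed` and `context_similarity_score` are hypothetical names, and the hash-based pseudo-embedding merely stands in for the learned parameters the authors release.

```python
import numpy as np

# Toy stand-in for a sentence encoder. The paper learns real model
# parameters from an unlabeled conversation corpus; this hash-based
# pseudo-embedding only makes the sketch self-contained and runnable.
def embed(text: str, dim: int = 8) -> np.ndarray:
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    v = rng.standard_normal(dim)
    return v / np.linalg.norm(v)

def context_similarity_score(context_turns: list[str], response: str) -> float:
    """Rate a response by cosine similarity to the mean context embedding,
    instead of comparing it to a single ground-truth reply."""
    ctx = np.mean([embed(t) for t in context_turns], axis=0)
    ctx = ctx / np.linalg.norm(ctx)
    return float(ctx @ embed(response))

context = ["Any plans for the weekend?", "Thinking about going hiking."]
print(context_similarity_score(context, "Nice, which trail are you taking?"))

# The paper validates its metric by correlation with human annotation
# scores; with toy numbers, that check looks like:
human_scores = np.array([4.0, 2.0, 5.0, 1.0])
model_scores = np.array([0.8, 0.3, 0.9, 0.1])
print(np.corrcoef(human_scores, model_scores)[0, 1])
```

A speaker-sensitive variant would additionally vary how "similar context" is defined, for instance by treating turns from the same conversation or the same speaker differently when learning the model, but the abstract does not give enough detail to sketch that part faithfully.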
