Paper Title
Dialogue Response Ranking Training with Large-Scale Human Feedback Data
Paper Authors
Paper Abstract
Existing open-domain dialog models are generally trained to minimize the perplexity of target human responses. However, some human replies are more engaging than others, spawning more follow-up interactions. Current conversational models are increasingly capable of producing turns that are context-relevant, but in order to produce compelling agents, these models need to be able to predict and optimize for turns that are genuinely engaging. We leverage social media feedback data (number of replies and upvotes) to build a large-scale training dataset for feedback prediction. To alleviate possible distortion between the feedback and engagingness, we convert the ranking problem into a comparison of response pairs, which involves fewer confounding factors. We trained DialogRPT, a set of GPT-2-based models, on 133M pairs of human feedback data, and the resulting ranker outperformed several baselines. In particular, our ranker outperforms the conventional dialog perplexity baseline by a large margin on predicting Reddit feedback. We finally combine the feedback prediction models and a human-like scoring model to rank machine-generated dialog responses. Crowd-sourced human evaluation shows that our ranking method correlates better with real human preferences than baseline models.
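The training objective described in the abstract, scoring a response pair so that the response that attracted more feedback ranks higher, can be sketched as follows. This is a minimal illustration assuming a GPT-2 encoder with a scalar scoring head and a binary pairwise ranking loss; the names `ResponseScorer`, `pairwise_loss`, and `encode` are hypothetical and are not taken from the DialogRPT release.

```python
# Minimal sketch of pairwise feedback-ranking training, assuming a GPT-2
# encoder with a scalar scoring head. Hyperparameters are illustrative.
import torch
import torch.nn as nn
from transformers import GPT2Model, GPT2Tokenizer

class ResponseScorer(nn.Module):
    """Scores a (context, response) pair with a single scalar."""
    def __init__(self, model_name="gpt2"):
        super().__init__()
        self.gpt2 = GPT2Model.from_pretrained(model_name)
        self.head = nn.Linear(self.gpt2.config.n_embd, 1)

    def forward(self, input_ids, attention_mask):
        hidden = self.gpt2(
            input_ids=input_ids, attention_mask=attention_mask
        ).last_hidden_state
        # Use the hidden state of the last non-padding token as the summary.
        last_idx = attention_mask.sum(dim=1) - 1
        summary = hidden[torch.arange(hidden.size(0)), last_idx]
        return self.head(summary).squeeze(-1)  # shape: (batch,)

def pairwise_loss(score_pos, score_neg):
    """Binary ranking loss: push the higher-feedback response above the other."""
    return -torch.nn.functional.logsigmoid(score_pos - score_neg).mean()

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token
scorer = ResponseScorer()

def encode(context, response):
    # Join context and response with the EOS token, a common GPT-2 convention.
    return tokenizer(
        context + tokenizer.eos_token + response,
        return_tensors="pt", truncation=True,
        max_length=256, padding="max_length",
    )

# One training step on a (context, higher-feedback, lower-feedback) triple.
ctx = "What's the best way to learn piano?"
pos = encode(ctx, "Start with simple pieces and practice daily.")
neg = encode(ctx, "idk")
loss = pairwise_loss(
    scorer(pos["input_ids"], pos["attention_mask"]),
    scorer(neg["input_ids"], neg["attention_mask"]),
)
loss.backward()
```

For the final step the abstract mentions, combining the feedback prediction models with a human-like scoring model, one plausible realization is a weighted blend of the per-model sigmoid scores for each candidate response; the abstract does not specify the exact combination.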