Paper Title
Offline and Online Satisfaction Prediction in Open-Domain Conversational Systems
Paper Authors
Abstract
Predicting user satisfaction in conversational systems has become critical, as spoken conversational assistants operate in increasingly complex domains. Online satisfaction prediction (i.e., predicting satisfaction of the user with the system after each turn) could be used as a new proxy for implicit user feedback, and offers promising opportunities to create more responsive and effective conversational agents, which adapt to the user's engagement with the agent. To accomplish this goal, we propose a conversational satisfaction prediction model specifically designed for open-domain spoken conversational agents, called ConvSAT. To operate robustly across domains, ConvSAT aggregates multiple representations of the conversation, namely the conversation history, utterance and response content, and system- and user-oriented behavioral signals. We first calibrate ConvSAT performance against state-of-the-art methods on a standard dataset (Dialogue Breakdown Detection Challenge) in an online regime, and then evaluate ConvSAT on a large dataset of conversations with real users, collected as part of the Alexa Prize competition. Our experimental results show that ConvSAT significantly improves satisfaction prediction in both offline and online settings on both datasets, compared to the previously reported state-of-the-art approaches. The insights from our study can enable more intelligent conversational systems, which could adapt in real-time to the inferred user satisfaction and engagement.
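The abstract describes ConvSAT as aggregating three groups of signals per turn: conversation history, utterance and response content, and behavioral signals. The sketch below illustrates that aggregation idea only; the paper's actual encoders, feature sets, and classifier are not specified here, so the toy encoder, dimensions, and logistic output layer are all illustrative assumptions.

```python
import numpy as np

def encode(text, dim=8):
    """Stand-in text encoder: map a string to a fixed-size vector.
    A real system would use a trained utterance/response encoder."""
    vec = np.zeros(dim)
    for i, byte in enumerate(text.encode("utf-8")):
        vec[i % dim] += byte
    norm = np.linalg.norm(vec)
    return vec / norm if norm else vec

def aggregate_turn(history, utterance, response, behavior_signals, dim=8):
    """Concatenate the abstract's three signal groups into one feature vector:
    (1) conversation history, (2) current utterance/response content,
    (3) numeric behavioral signals (e.g., latency, barge-in flags)."""
    hist_vec = (np.mean([encode(h, dim) for h in history], axis=0)
                if history else np.zeros(dim))
    content_vec = np.concatenate([encode(utterance, dim), encode(response, dim)])
    behavior_vec = np.asarray(behavior_signals, dtype=float)
    return np.concatenate([hist_vec, content_vec, behavior_vec])

def predict_satisfaction(features, weights, bias=0.0):
    """Per-turn (online) satisfaction score via a logistic output layer."""
    return 1.0 / (1.0 + np.exp(-(features @ weights + bias)))

if __name__ == "__main__":
    features = aggregate_turn(
        history=["hi there", "hello! how can I help?"],
        utterance="play some jazz",
        response="sure, playing jazz for you",
        behavior_signals=[0.3, 1.0],  # hypothetical: latency, repeat-request flag
    )
    weights = np.random.default_rng(0).normal(size=features.shape)
    print(predict_satisfaction(features, weights))  # a score in (0, 1)
```

Scoring after every turn, as in this sketch, is what distinguishes the online regime from offline evaluation, where the full conversation is available before prediction.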