Paper title
Guider l'attention dans les modèles de séquence à séquence pour la prédiction des actes de dialogue
Paper authors
Abstract
The task of predicting dialog acts (DA) from conversational dialog is a key component in the development of conversational agents. Accurately predicting DAs requires precise modeling of both the conversation and the global tag dependencies. We leverage seq2seq approaches widely adopted in Neural Machine Translation (NMT) to improve the modeling of tag sequentiality. Seq2seq models are known to learn complex global dependencies, whereas currently proposed approaches using linear conditional random fields (CRF) model only local tag dependencies. In this work, we introduce a seq2seq model tailored for DA classification using a hierarchical encoder, a novel guided attention mechanism, and beam search applied to both training and inference. Compared to the state of the art, our model does not require handcrafted features and is trained end-to-end. Furthermore, the proposed approach achieves an unmatched accuracy score of 85% on SwDA, and a state-of-the-art accuracy score of 91.6% on MRDA.
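The abstract mentions beam search applied during decoding of the tag sequence. As a minimal illustration of that component only, the sketch below implements generic left-to-right beam search over dialog-act tags. The scoring function `step_log_probs` is a hypothetical stand-in for the seq2seq decoder's softmax; the paper's actual model is neural, and the names and toy probabilities here are assumptions for illustration, not the authors' implementation.

```python
import math
from typing import Callable, List, Tuple

def beam_search(
    step_log_probs: Callable[[List[int]], List[Tuple[int, float]]],
    max_len: int,
    beam_width: int = 3,
    eos: int = 0,
) -> List[int]:
    """Return the highest-scoring tag sequence under a left-to-right model.

    `step_log_probs(prefix)` yields (tag, log_prob) candidates for the next
    position given the decoded prefix -- a stand-in for a seq2seq decoder's
    distribution over dialog-act tags.
    """
    beams: List[Tuple[List[int], float]] = [([], 0.0)]
    for _ in range(max_len):
        candidates: List[Tuple[List[int], float]] = []
        for prefix, score in beams:
            if prefix and prefix[-1] == eos:
                # finished hypothesis: carry it forward unchanged
                candidates.append((prefix, score))
                continue
            for tag, lp in step_log_probs(prefix):
                candidates.append((prefix + [tag], score + lp))
        # keep only the `beam_width` best (partial) hypotheses
        beams = sorted(candidates, key=lambda c: c[1], reverse=True)[:beam_width]
    return beams[0][0]

def toy_model(prefix: List[int]) -> List[Tuple[int, float]]:
    # Hypothetical bigram-style tag scores over tags {0 (eos), 1, 2}.
    last = prefix[-1] if prefix else None
    if last is None:
        return [(1, math.log(0.6)), (2, math.log(0.4))]
    if last == 1:
        return [(2, math.log(0.7)), (0, math.log(0.3))]
    return [(0, math.log(0.9))]

# Greedy decoding would also find [1, 2, 0] here; beam search matters when
# a locally weaker tag leads to a globally better sequence.
best = beam_search(toy_model, max_len=4, beam_width=2)
# best == [1, 2, 0]
```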