Paper Title

Structured Attention for Unsupervised Dialogue Structure Induction

Authors

Liang Qiu, Yizhou Zhao, Weiyan Shi, Yuan Liang, Feng Shi, Tao Yuan, Zhou Yu, Song-Chun Zhu

Abstract

Inducing a meaningful structural representation from one or a set of dialogues is a crucial but challenging task in computational linguistics. Advances in this area are critical for dialogue system design and discourse analysis, and can also be extended to solve grammatical inference. In this work, we propose to incorporate structured attention layers into a Variational Recurrent Neural Network (VRNN) model with discrete latent states to learn dialogue structure in an unsupervised fashion. Compared to a vanilla VRNN, structured attention enables the model to focus on different parts of the source sentence embeddings while enforcing a structural inductive bias. Experiments show that on two-party dialogue datasets, the VRNN with structured attention learns semantic structures similar to the templates used to generate the dialogue corpus. On multi-party dialogue datasets, our model learns an interactive structure that demonstrates its capability of distinguishing speakers or addressees, automatically disentangling dialogues without explicit human annotation.
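To make the recurrence the abstract describes concrete, here is a minimal, illustrative sketch of one step of an attention-augmented recurrence with a discrete latent dialogue state. This is not the paper's implementation: all weights are random stand-ins, the dimensions and function names are invented for illustration, and the structured attention layer is simplified to plain softmax attention over token embeddings.

```python
import math
import random

def softmax(xs):
    # Numerically stable softmax over a list of scores.
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def attend(query, token_embs):
    # Simplified stand-in for structured attention: score each token
    # embedding against the query, then return the weighted average
    # as the utterance context vector.
    scores = [sum(q * t for q, t in zip(query, tok)) for tok in token_embs]
    weights = softmax(scores)
    dim = len(token_embs[0])
    return [sum(w * tok[d] for w, tok in zip(weights, token_embs))
            for d in range(dim)]

def vrnn_step(hidden, token_embs, num_states=4, seed=0):
    # One recurrence step: attend over the current utterance, score the
    # discrete latent dialogue states from (hidden, context), pick the
    # MAP state, and update the hidden vector. Weights are random
    # placeholders, not learned parameters.
    rng = random.Random(seed)
    context = attend(hidden, token_embs)
    feats = hidden + context  # concatenate hidden state and context
    logits = [sum(rng.uniform(-1, 1) * f for f in feats)
              for _ in range(num_states)]
    probs = softmax(logits)
    state = max(range(num_states), key=lambda k: probs[k])
    new_hidden = [math.tanh(h + c) for h, c in zip(hidden, context)]
    return state, new_hidden, probs

# Toy usage: a 3-d hidden state and a two-token utterance.
state, hidden, probs = vrnn_step(
    [0.1, 0.2, 0.3],
    [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]],
)
```

In the paper's actual model the attention weights carry a structural inductive bias and the latent state posterior is trained variationally; this sketch only shows the shape of the per-utterance update.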
