Paper Title
Generating Narrative Text in a Switching Dynamical System
Paper Authors
Paper Abstract
Early work on narrative modeling used explicit plans and goals to generate stories, but the language generation itself was restricted and inflexible. Modern methods use language models for more robust generation, but often lack an explicit representation of the scaffolding and dynamics that guide a coherent narrative. This paper introduces a new model that integrates explicit narrative structure with neural language models, formalizing narrative modeling as a Switching Linear Dynamical System (SLDS). An SLDS is a dynamical system in which the latent dynamics of the system (i.e., how the state vector evolves over time) are controlled by top-level discrete switching variables. The switching variables represent narrative structure (e.g., sentiment or discourse states), while the latent state vector encodes information about the current state of the narrative. This probabilistic formulation allows us to control generation, and the model can be learned in a semi-supervised fashion using both labeled and unlabeled data. Additionally, we derive a Gibbs sampler for our model that can fill in arbitrary parts of the narrative, guided by the switching variables. Our filled-in (English-language) narratives outperform several baselines on both automatic and human evaluations.
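The generative process the abstract describes can be illustrated with a minimal sketch: a discrete switching variable evolves under a Markov chain, and at each step it selects which set of linear dynamics advances the continuous latent state. All dimensions, parameter values, and variable names below are illustrative assumptions, not the paper's actual configuration; the final decoding of each latent state into a sentence by a neural language model is only indicated in a comment.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes -- assumptions for illustration only.
K = 3   # number of discrete switching states (e.g., sentiment labels)
D = 4   # latent state dimension
T = 5   # number of timesteps (e.g., sentences in the narrative)

# Markov transition matrix over switching states z_t (uniform here).
Pi = np.full((K, K), 1.0 / K)

# Per-state linear dynamics: x_t = A[z_t] @ x_{t-1} + b[z_t] + noise.
A = rng.normal(scale=0.3, size=(K, D, D))
b = rng.normal(scale=0.1, size=(K, D))

z = rng.integers(K)       # initial switching state
x = rng.normal(size=D)    # initial latent state
trajectory = []
for t in range(T):
    z = rng.choice(K, p=Pi[z])      # switch variable picks the narrative state
    x = A[z] @ x + b[z] + rng.normal(scale=0.1, size=D)  # state-specific dynamics
    trajectory.append((int(z), x.copy()))
    # In the full model, a neural language model would decode x into a sentence.
```

Conditioning or clamping the `z` sequence is what makes generation controllable: fixing the switching states to a desired narrative arc constrains which dynamics act at each step.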