Paper Title
Improving Disentangled Text Representation Learning with Information-Theoretic Guidance
Paper Authors
Paper Abstract
Learning disentangled representations of natural language is essential for many NLP tasks, e.g., conditional text generation, style transfer, personalized dialogue systems, etc. Similar problems have been studied extensively for other forms of data, such as images and videos. However, the discrete nature of natural language makes the disentangling of textual representations more challenging (e.g., the manipulation over the data space cannot be easily achieved). Inspired by information theory, we propose a novel method that effectively manifests disentangled representations of text, without any supervision on semantics. A new mutual information upper bound is derived and leveraged to measure dependence between style and content. By minimizing this upper bound, the proposed method induces style and content embeddings into two independent low-dimensional spaces. Experiments on both conditional text generation and text-style transfer demonstrate the high quality of our disentangled representation in terms of content and style preservation.
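The abstract's central mechanism is a sample-based mutual information (MI) upper bound between style and content embeddings, which is minimized to push the two into independent spaces. As a rough illustration (not the paper's actual implementation), the sketch below estimates a CLUB-style upper bound, E[log q(c|s)] over matched pairs minus the same quantity over shuffled pairs; the variational network q(c|s) is replaced here by a fixed illustrative Gaussian, and all names (`club_mi_upper_bound`, `log_q`, `W`) are hypothetical.

```python
import numpy as np

def club_mi_upper_bound(style, content, log_q):
    """CLUB-style sample-based MI upper bound:
    E_{p(s,c)}[log q(c|s)] - E_{p(s)p(c)}[log q(c|s)].
    style, content: (N, d) arrays of paired embeddings.
    log_q: callable returning log q(c | s) for one (s, c) pair.
    """
    n = style.shape[0]
    # Positive term: matched (style, content) pairs from the joint.
    positive = np.mean([log_q(style[i], content[i]) for i in range(n)])
    # Negative term: all cross pairs approximate the product of marginals.
    negative = np.mean([log_q(style[i], content[j])
                        for i in range(n) for j in range(n)])
    return positive - negative

# Illustrative variational q(c|s): unit-variance Gaussian centered at a
# linear map of the style embedding (a stand-in for a learned network).
rng = np.random.default_rng(0)
W = rng.normal(size=(4, 4))

def log_q(s, c):
    diff = c - s @ W
    return -0.5 * np.sum(diff ** 2)  # log-density up to an additive constant

s = rng.normal(size=(64, 4))
c_dep = s @ W + 0.1 * rng.normal(size=(64, 4))   # content entangled with style
c_ind = rng.normal(size=(64, 4))                 # content independent of style

print(club_mi_upper_bound(s, c_dep, log_q))  # large: strong dependence
print(club_mi_upper_bound(s, c_ind, log_q))  # near zero: independence
```

For entangled pairs the estimate is large, while for independent pairs it hovers near zero, which is what makes it usable as a minimization target: driving this quantity down during training discourages the content embedding from carrying style information.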