Paper Title
Polarized-VAE: Proximity Based Disentangled Representation Learning for Text Generation
Paper Authors
Paper Abstract
Learning disentangled representations of real-world data is a challenging open problem. Most previous methods have focused on either supervised approaches that use attribute labels or unsupervised approaches that manipulate the factorization in the latent space of models such as the variational autoencoder (VAE) by training with task-specific losses. In this work, we propose polarized-VAE, an approach that disentangles select attributes in the latent space based on proximity measures reflecting the similarity between data points with respect to these attributes. We apply our method to disentangle the semantics and syntax of sentences and carry out transfer experiments. Polarized-VAE outperforms the VAE baseline and is competitive with state-of-the-art approaches, while being a more general framework that is applicable to other attribute disentanglement tasks.
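The abstract describes the idea only at a high level. The sketch below is a minimal, hedged illustration (not the authors' released code) of how a proximity-based term could be combined with a standard VAE objective: latent codes of sentence pairs that are similar with respect to an attribute (e.g., syntax) are pulled together, and dissimilar pairs are pushed apart by a margin. The split of the latent code into an attribute subspace, the hinge-style contrastive formulation, and the weights `beta` and `gamma` are all assumptions made for this illustration.

```python
# Minimal sketch of a proximity-based disentanglement term added to a VAE loss.
# All names and the specific formulation are illustrative assumptions, not the
# paper's exact objective.
import torch
import torch.nn.functional as F

def proximity_loss(z_a, z_b, similar, margin=1.0):
    """Contrastive-style proximity term on one attribute subspace.

    z_a, z_b : (batch, dim) latent codes of paired sentences for one attribute
               (e.g., the syntax part of the latent space)
    similar  : (batch,) float tensor, 1.0 if the pair is similar w.r.t. the
               attribute, 0.0 otherwise (hypothetical supervision signal)
    """
    dist = torch.norm(z_a - z_b, dim=-1)                 # distance in the attribute subspace
    pull = similar * dist.pow(2)                         # similar pairs: shrink the distance
    push = (1 - similar) * F.relu(margin - dist).pow(2)  # dissimilar pairs: enforce a margin
    return (pull + push).mean()

def total_loss(recon_loss, kl_loss, z_syn_a, z_syn_b, similar, beta=1.0, gamma=1.0):
    # Standard VAE ELBO terms plus the assumed proximity term.
    return recon_loss + beta * kl_loss + gamma * proximity_loss(z_syn_a, z_syn_b, similar)
```

In this framing, pairs judged similar with respect to an attribute are drawn together in that attribute's subspace while dissimilar pairs are kept at least `margin` apart, which is one plausible way to realize "disentangling select attributes based on proximity measures" described in the abstract.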