Paper Title
Learning latent representations across multiple data domains using Lifelong VAEGAN
Paper Authors
Abstract
The problem of catastrophic forgetting occurs in deep learning models trained sequentially on multiple databases. Recently, generative replay mechanisms (GRMs) have been proposed to reproduce previously learned knowledge, aiming to reduce forgetting. However, such approaches lack an appropriate inference model and therefore cannot provide latent representations of the data. In this paper, we propose a novel lifelong learning approach, the Lifelong VAEGAN (L-VAEGAN), which not only induces a powerful generative replay network but also learns meaningful latent representations, benefiting representation learning. L-VAEGAN automatically embeds the information associated with different domains into several clusters in the latent space, while also capturing semantically meaningful latent variables shared across different data domains. The proposed model supports many downstream tasks that traditional generative replay methods cannot, including interpolation and inference across different data domains.
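The generative replay idea the abstract builds on can be illustrated with a minimal toy sketch (all names here, such as `ToyGenerator` and `lifelong_train`, are hypothetical and not from the paper; a 1-D Gaussian stands in for a real generative network): when a new data domain arrives, samples drawn from the current generator are mixed into the training batch so that knowledge of earlier domains is rehearsed rather than overwritten.

```python
import random

random.seed(0)


class ToyGenerator:
    """Stand-in for a generative replay network: fits a 1-D Gaussian
    to the data it is 'trained' on and can replay pseudo-samples."""

    def __init__(self):
        self.mu, self.sigma = 0.0, 1.0

    def fit(self, data):
        n = len(data)
        self.mu = sum(data) / n
        self.sigma = (sum((x - self.mu) ** 2 for x in data) / n) ** 0.5

    def sample(self, k):
        return [random.gauss(self.mu, self.sigma) for _ in range(k)]


def lifelong_train(domains, replay_size=100):
    """Train sequentially on each domain; from the second domain on,
    mix generator replays into the batch (the core GRM idea)."""
    gen = ToyGenerator()
    for i, domain in enumerate(domains):
        batch = list(domain)
        if i > 0:
            # replay pseudo-samples of previously seen domains
            batch += gen.sample(replay_size)
        gen.fit(batch)
    return gen


# two sequential 'domains' with clearly different statistics
domains = [[random.gauss(0, 1) for _ in range(100)],
           [random.gauss(5, 1) for _ in range(100)]]
gen = lifelong_train(domains)
```

Without the replayed samples, the final fit would reflect only the second domain (mean near 5); with replay, the generator's mean settles between the two domain means, i.e. knowledge of the first domain is partially retained.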