Paper Title
Multimodal Generative Learning Utilizing Jensen-Shannon-Divergence
Paper Authors
Paper Abstract
Learning from different data types is a long-standing goal in machine learning research, as multiple information sources co-occur when describing natural phenomena. However, existing generative models that approximate a multimodal ELBO rely on difficult or inefficient training schemes to learn a joint distribution and the dependencies between modalities. In this work, we propose a novel, efficient objective function that utilizes the Jensen-Shannon divergence for multiple distributions. It simultaneously approximates the unimodal and joint multimodal posteriors directly via a dynamic prior. In addition, we theoretically prove that the new multimodal JS-divergence (mmJSD) objective optimizes an ELBO. In extensive experiments, we demonstrate the advantage of the proposed mmJSD model compared to previous work in unsupervised, generative learning tasks.
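As context for the objective named above: the Jensen-Shannon divergence generalized to $M$ distributions is commonly defined via a weighted mixture, as sketched below. This is the standard textbook generalization with fixed mixture weights $\pi_m$; the paper's mmJSD objective replaces the fixed mixture with a dynamic prior, so the exact form used there may differ.

```latex
\mathrm{JS}_{\boldsymbol{\pi}}(p_1, \dots, p_M)
  \;=\; \sum_{m=1}^{M} \pi_m \,
  \mathrm{KL}\!\left( p_m \,\middle\|\, \sum_{n=1}^{M} \pi_n \, p_n \right),
\qquad \pi_m \ge 0, \;\; \sum_{m=1}^{M} \pi_m = 1.
```

For $M = 2$ and $\boldsymbol{\pi} = (\tfrac{1}{2}, \tfrac{1}{2})$ this reduces to the familiar two-distribution Jensen-Shannon divergence.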