Paper Title
Coupling-based Invertible Neural Networks Are Universal Diffeomorphism Approximators
Paper Authors
Paper Abstract
Invertible neural networks based on coupling flows (CF-INNs) have various machine learning applications such as image synthesis and representation learning. However, their desirable characteristics, such as analytic invertibility, come at the cost of restricting their functional forms. This poses a question on their representation power: are CF-INNs universal approximators for invertible functions? Without universality, there could be a well-behaved invertible transformation that a CF-INN can never approximate, rendering the model class unreliable. We answer this question by showing a convenient criterion: a CF-INN is universal if its layers contain affine coupling and invertible linear functions as special cases. As a corollary, we can affirmatively resolve a previously unsolved problem: whether normalizing flow models based on affine coupling can be universal distributional approximators. In the course of proving the universality, we prove a general theorem showing the equivalence of universality for certain diffeomorphism classes, a theoretical insight that is of interest by itself.
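To make the abstract's central object concrete, below is a minimal sketch of a single affine coupling layer and its analytic inverse. The scale and translation networks `s` and `t` are hypothetical stand-ins (one random linear map each, not trained models), and the 4-dimensional input and even split are illustrative assumptions; the point is that inversion requires no iterative solve, which is the "analytic invertibility" the abstract refers to.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical conditioner networks s (log-scale) and t (translation).
# Any functions of x1 would do; here each is a single random linear map.
W_s, b_s = rng.normal(size=(2, 2)), rng.normal(size=2)
W_t, b_t = rng.normal(size=(2, 2)), rng.normal(size=2)

def s(x1):
    # tanh keeps the log-scale bounded, so exp(s) is well conditioned
    return np.tanh(x1 @ W_s + b_s)

def t(x1):
    return x1 @ W_t + b_t

def affine_coupling_forward(x):
    # Split the input; pass x1 through unchanged, and transform x2
    # elementwise, conditioned on x1.
    x1, x2 = x[..., :2], x[..., 2:]
    y2 = x2 * np.exp(s(x1)) + t(x1)
    return np.concatenate([x1, y2], axis=-1)

def affine_coupling_inverse(y):
    # Closed-form inverse: y1 equals x1, so s(y1) and t(y1) can be
    # recomputed exactly and the affine map undone elementwise.
    y1, y2 = y[..., :2], y[..., 2:]
    x2 = (y2 - t(y1)) * np.exp(-s(y1))
    return np.concatenate([y1, x2], axis=-1)

x = rng.normal(size=(5, 4))
x_rec = affine_coupling_inverse(affine_coupling_forward(x))
print(np.allclose(x, x_rec))
```

Note how the restriction the abstract mentions is visible here: each layer leaves half of the coordinates untouched, which is exactly why the universality of compositions of such layers is a nontrivial question.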