Paper Title
Probing the Robustness of Independent Mechanism Analysis for Representation Learning
Paper Authors
Paper Abstract
One aim of representation learning is to recover the original latent code that generated the data, a task which requires additional information or inductive biases. A recently proposed approach termed Independent Mechanism Analysis (IMA) postulates that each latent source should influence the observed mixtures independently, complementing standard nonlinear independent component analysis and taking inspiration from the principle of independent causal mechanisms. While it was shown in theory and experiments that IMA helps to recover the true latents, the method's performance has so far only been characterized when the modeling assumptions are exactly satisfied. Here, we test the method's robustness to violations of its underlying assumptions. We find that the benefits of IMA-based regularization for recovering the true sources extend to mixing functions that violate the IMA principle to varying degrees, whereas standard regularizers do not provide the same benefits. Moreover, we show that unregularized maximum likelihood recovers mixing functions that systematically deviate from the IMA principle, and we provide an argument elucidating the benefits of IMA-based regularization.
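For concreteness, the IMA principle can be quantified by the IMA contrast introduced in the original IMA work: the gap between the sum of the log column norms of the mixing function's Jacobian and its log absolute Jacobian determinant, which is nonnegative by Hadamard's inequality and vanishes exactly when the Jacobian columns are orthogonal. Below is a minimal sketch in JAX under this definition; the function name `ima_contrast` and the toy mixing `f` are illustrative, not from the paper:

```python
import jax
import jax.numpy as jnp

def ima_contrast(f, z):
    """Illustrative IMA contrast:
        c_IMA(f, z) = sum_i log ||df/dz_i(z)|| - log |det J_f(z)|.
    Nonnegative by Hadamard's inequality; zero iff the Jacobian columns
    are orthogonal, i.e. each source influences the mixture
    "independently" in the IMA sense.
    """
    J = jax.jacobian(f)(z)                  # square Jacobian of the mixing at z
    col_norms = jnp.linalg.norm(J, axis=0)  # ||df/dz_i|| for each source i
    _, logabsdet = jnp.linalg.slogdet(J)    # log |det J_f(z)|
    return jnp.sum(jnp.log(col_norms)) - logabsdet

# Toy example (hypothetical): an orthogonal mixing has zero contrast.
f = lambda z: jnp.array([z[0] + z[1], z[0] - z[1]]) / jnp.sqrt(2.0)
print(ima_contrast(f, jnp.array([0.3, -0.7])))  # ~0.0
```

On this reading, IMA-based regularization adds a penalty of this form to the maximum-likelihood objective, steering the learned mixing toward orthogonal-column Jacobians; the paper's robustness question is how much this helps when the ground-truth mixing itself has a nonzero contrast.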