Paper Title

Neural PCA for Flow-Based Representation Learning

Paper Authors

Shen Li, Bryan Hooi

Paper Abstract

Of particular interest is to discover useful representations solely from observations in an unsupervised generative manner. However, the question of whether existing normalizing flows provide effective representations for downstream tasks remains mostly unanswered despite their strong ability for sample generation and density estimation. This paper investigates this problem for such a family of generative models that admits exact invertibility. We propose Neural Principal Component Analysis (Neural-PCA) that operates in full dimensionality while capturing principal components in \emph{descending} order. Without exploiting any label information, the principal components recovered store the most informative elements in their \emph{leading} dimensions and leave the negligible in the \emph{trailing} ones, allowing for clear performance improvements of $5\%$-$10\%$ in downstream tasks. Such improvements are empirically found consistent irrespective of the number of latent trailing dimensions dropped. Our work suggests that necessary inductive bias be introduced into generative modelling when representation quality is of interest.
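
The abstract describes the mechanism only at a high level. As a rough illustration, the sketch below applies a full-dimensional, variance-ordered orthogonal rotation to the latents of a pretrained invertible flow and keeps only the leading dimensions for a downstream task. This is a simplified post-hoc stand-in, not the paper's Neural-PCA (which builds the ordering into the generative model itself as an inductive bias); `flow.encode` and `downstream_clf` are hypothetical placeholders.

```python
# Minimal sketch (not the authors' implementation): order the latent
# dimensions of an exactly invertible flow by explained variance, in
# descending order, then drop the trailing dimensions.
import numpy as np

def fit_pca_rotation(z: np.ndarray):
    """Fit an orthogonal rotation on flow latents z of shape (n_samples, dim).

    Returns the latent mean and a rotation matrix whose rows are principal
    directions sorted by variance in descending order. Because the rotation
    is orthogonal (invertible, |det| = 1), composing it with an exactly
    invertible flow keeps the overall model exactly invertible.
    """
    mean = z.mean(axis=0)
    zc = z - mean
    cov = zc.T @ zc / (len(zc) - 1)
    # eigh returns eigenvalues in ascending order; flip to descending.
    eigvals, eigvecs = np.linalg.eigh(cov)
    order = np.argsort(eigvals)[::-1]
    rotation = eigvecs[:, order].T  # (dim, dim), rows = principal directions
    return mean, rotation

def project(z: np.ndarray, mean: np.ndarray, rotation: np.ndarray, keep: int):
    """Rotate latents into descending-variance order and keep the leading
    `keep` dimensions; the trailing ones are the negligible components the
    abstract says can be dropped."""
    return (z - mean) @ rotation.T[:, :keep]

# Hypothetical usage with a pretrained flow:
# z = flow.encode(x)                    # x: raw observations
# mean, R = fit_pca_rotation(z)
# feats = project(z, mean, R, keep=z.shape[1] // 2)
# downstream_clf.fit(feats, labels)     # e.g., a linear probe
```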
