Paper Title
Learning the Space of Deep Models
Paper Authors
Paper Abstract
Embedding of large but redundant data, such as images or text, in a hierarchy of lower-dimensional spaces is one of the key features of representation learning approaches, which nowadays provide state-of-the-art solutions to problems once believed hard or impossible to solve. In this work, in a plot twist with a strong meta aftertaste, we show how trained deep models are as redundant as the data they are optimized to process, and how it is therefore possible to use deep learning models to embed deep learning models. In particular, we show that it is possible to use representation learning to learn a fixed-size, low-dimensional embedding space of trained deep models and that such space can be explored by interpolation or optimization to attain ready-to-use models. We find that it is possible to learn an embedding space of multiple instances of the same architecture and of multiple architectures. We address image classification and neural representation of signals, showing how our embedding space can be learnt so as to capture the notions of performance and 3D shape, respectively. In the Multi-Architecture setting we also show how an embedding trained only on a subset of architectures can learn to generate already-trained instances of architectures it never sees instantiated at training time.
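To make the core idea more concrete, below is a minimal, hypothetical sketch of how one might embed trained deep models and explore the resulting latent space. It assumes a simple autoencoder over flattened parameter vectors and plain latent interpolation; the encoder/decoder design, dimensions, and training objective are illustrative assumptions, not the architecture or losses used in the paper.

```python
import torch
import torch.nn as nn

# Illustrative sketch (not the paper's method): an autoencoder that maps the
# flattened weight vector of a trained network to a fixed-size latent code,
# and decodes latent codes back into ready-to-use weight vectors.
class WeightAutoencoder(nn.Module):
    def __init__(self, n_weights: int, latent_dim: int = 64):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(n_weights, 512), nn.ReLU(),
            nn.Linear(512, latent_dim),
        )
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 512), nn.ReLU(),
            nn.Linear(512, n_weights),
        )

    def forward(self, flat_weights: torch.Tensor) -> torch.Tensor:
        # Reconstruct a weight vector from its latent embedding.
        return self.decoder(self.encoder(flat_weights))


# Toy usage: embed two trained weight vectors (stand-in random tensors here)
# and decode an interpolated point of the latent space into a new weight vector.
n_weights = 10_000                       # size of a flattened parameter vector (assumed)
ae = WeightAutoencoder(n_weights)
w_a = torch.randn(1, n_weights)          # placeholder for trained model A's weights
w_b = torch.randn(1, n_weights)          # placeholder for trained model B's weights
z = 0.5 * (ae.encoder(w_a) + ae.encoder(w_b))   # interpolate in the embedding space
w_new = ae.decoder(z)                           # decoded weights for a "new" model
```

In this toy setup the decoded vector would then be reshaped back into the target architecture's layers; capturing notions such as classification performance or 3D shape, as described in the abstract, would additionally require training the embedding with objectives beyond plain weight reconstruction.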