Paper Title

Topologically Densified Distributions

Paper Authors

Hofer, Christoph D., Graf, Florian, Niethammer, Marc, Kwitt, Roland

Paper Abstract

We study regularization in the context of small sample-size learning with over-parameterized neural networks. Specifically, we shift focus from architectural properties, such as norms on the network weights, to properties of the internal representations before a linear classifier. In particular, we impose a topological constraint on samples drawn from the probability measure induced in that space. This provably leads to a mass-concentration effect around the representations of training instances, a property beneficial for generalization. By leveraging previous work on imposing topological constraints in a neural network setting, we provide empirical evidence (across various vision benchmarks) to support our claim of better generalization.
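
The topological constraint can be made concrete via 0-dimensional persistent homology: for the representations of same-class instances in a mini-batch, the 0-dim death times of a Vietoris-Rips filtration equal the edge lengths of a Euclidean minimum spanning tree, and the regularizer pushes these death times toward a target scale β. The sketch below is a minimal PyTorch illustration of that idea, not the authors' implementation; the function names, the ℓ1 form of the penalty, and the default β are our assumptions.

```python
import torch

def zero_dim_deaths(z):
    """Death times of 0-dim persistent homology for the Vietoris-Rips
    filtration of the point cloud z: these equal the edge weights of a
    Euclidean minimum spanning tree, built here with Prim's algorithm."""
    n = z.size(0)
    d = torch.cdist(z, z)                               # pairwise distances
    inf = torch.full((n,), float('inf'), device=z.device)
    in_tree = torch.zeros(n, dtype=torch.bool, device=z.device)
    in_tree[0] = True
    best = torch.where(in_tree, inf, d[0])              # distance of each vertex to the tree
    deaths = []
    for _ in range(n - 1):
        j = int(torch.argmin(best))                     # closest vertex outside the tree
        deaths.append(best[j])                          # its MST edge length = a death time
        in_tree[j] = True
        best = torch.where(in_tree, inf, torch.minimum(best, d[j]))
    return torch.stack(deaths)

def topological_regularizer(features, labels, beta=0.5):
    """Hypothetical l1 penalty pulling the per-class 0-dim death times
    of the batch representations toward the target scale beta."""
    loss = features.new_zeros(())
    for c in labels.unique():
        zc = features[labels == c]
        if zc.size(0) > 1:                              # persistence needs >= 2 points
            loss = loss + (zero_dim_deaths(zc) - beta).abs().sum()
    return loss
```

In training, such a term would be added to the usual classification loss on the pre-classifier representations, e.g. loss = criterion(classifier(z), y) + lam * topological_regularizer(z, y), with lam and beta treated as hyperparameters.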
