Paper Title
Maximum Class Separation as Inductive Bias in One Matrix
Paper Authors
Paper Abstract
Maximizing the separation between classes constitutes a well-known inductive bias in machine learning and a pillar of many traditional algorithms. By default, deep networks are not equipped with this inductive bias and therefore many alternative solutions have been proposed through differentiable optimization. Current approaches tend to optimize classification and separation jointly: aligning inputs with class vectors and separating class vectors angularly. This paper proposes a simple alternative: encoding maximum separation as an inductive bias in the network by adding one fixed matrix multiplication before computing the softmax activations. The main observation behind our approach is that separation does not require optimization but can be solved in closed form prior to training and plugged into a network. We outline a recursive approach to obtain the matrix consisting of maximally separable vectors for any number of classes, which can be added with negligible engineering effort and computational overhead. Despite its simple nature, this one matrix multiplication provides real impact. We show that our proposal directly boosts classification, long-tailed recognition, out-of-distribution detection, and open-set recognition, from CIFAR to ImageNet. We find empirically that maximum separation works best as a fixed bias; making the matrix learnable adds nothing to the performance. The closed-form implementation and code to reproduce the experiments are available on GitHub.
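The matrix of maximally separable class vectors described in the abstract is a regular simplex: C unit vectors in R^(C-1) whose pairwise cosine similarity is -1/(C-1), the minimum achievable. The sketch below is one standard recursive construction of such a simplex in NumPy; the function name and the exact recursion are illustrative and may differ in detail from the authors' released implementation.

```python
import numpy as np

def simplex_matrix(num_classes: int) -> np.ndarray:
    """Recursively build a (C-1) x C matrix whose columns are unit
    vectors with pairwise cosine -1/(C-1): C maximally separated
    class vectors on the (C-2)-sphere."""
    if num_classes == 2:
        # Base case: two classes sit at opposite ends of a line.
        return np.array([[1.0, -1.0]])
    k = num_classes - 1
    prev = simplex_matrix(num_classes - 1)  # (k-1) x k simplex block
    # New first vector is e_1; the remaining vectors tilt toward -e_1
    # by 1/k and reuse the smaller simplex, rescaled to stay unit-norm.
    top = np.concatenate(([1.0], -np.full(k, 1.0 / k)))
    bottom = np.concatenate(
        (np.zeros((k - 1, 1)), np.sqrt(1.0 - 1.0 / k**2) * prev), axis=1
    )
    return np.vstack((top, bottom))
```

In use, a network would produce (C-1)-dimensional features and compute logits as `simplex_matrix(C).T @ features` before the softmax; since the matrix is fixed and computed once before training, the overhead is a single extra matrix multiplication.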