Paper Title
Implicit Bias in Deep Linear Classification: Initialization Scale vs Training Accuracy
Paper Authors
Paper Abstract
We provide a detailed asymptotic study of gradient flow trajectories and their implicit optimization bias when minimizing the exponential loss over "diagonal linear networks". This is the simplest model displaying a transition between "kernel" and non-kernel ("rich" or "active") regimes. We show how the transition is controlled by the relationship between the initialization scale and how accurately we minimize the training loss. Our results indicate that some limit behaviors of gradient descent only kick in at ridiculous training accuracies (well beyond $10^{-100}$). Moreover, the implicit bias at reasonable initialization scales and training accuracies is more complex and not captured by these limits.
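To make the setting concrete, below is a minimal NumPy sketch (not the authors' code) of gradient descent on the standard depth-2 diagonal linear network, where the effective linear predictor is beta = u*u - v*v and training minimizes the exponential loss; the parameter `alpha` plays the role of the initialization scale discussed in the abstract. Function and variable names are illustrative assumptions.

```python
import numpy as np

def diagonal_linear_net_gd(X, y, alpha=0.1, lr=1e-2, steps=10000):
    """Gradient descent on a depth-2 diagonal linear network.

    Effective predictor: beta = u*u - v*v (elementwise squares).
    Loss: L(beta) = sum_i exp(-y_i <x_i, beta>).
    `alpha` sets the initialization scale of u and v.
    """
    n, d = X.shape
    u = alpha * np.ones(d)
    v = alpha * np.ones(d)
    for _ in range(steps):
        beta = u**2 - v**2
        margins = y * (X @ beta)                       # y_i <x_i, beta>
        r = np.exp(-margins)                           # per-example loss terms
        g_beta = -(X * (y * r)[:, None]).sum(axis=0)   # dL/dbeta
        # chain rule through the diagonal parameterization:
        # dbeta/du = 2u, dbeta/dv = -2v (elementwise)
        u -= lr * (2 * u * g_beta)
        v -= lr * (-2 * v * g_beta)
    return u**2 - v**2
```

In this toy setup, small `alpha` and long training loosely correspond to the "rich" regime, while large `alpha` (or stopping at modest training loss) stays closer to the "kernel" regime; the paper's point is that the interplay between `alpha` and how small the training loss is driven governs which bias appears.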