Paper Title
On the optimization and generalization of overparameterized implicit neural networks
Paper Authors
Paper Abstract
Implicit neural networks have become increasingly attractive in the machine learning community because they achieve competitive performance while using far fewer computational resources. Recently, a line of theoretical works established global convergence of first-order methods such as gradient descent when the implicit network is over-parameterized. However, because these works train all layers jointly, their analyses are equivalent to studying only the evolution of the output layer, and it remains unclear how the implicit layer itself contributes to training. In this paper, we therefore restrict ourselves to training only the implicit layer, and we show that global convergence is still guaranteed in this setting. On the other hand, a theoretical understanding of when and how the training performance of an implicit neural network generalizes to unseen data is still under-explored. Although this question has been studied for standard feed-forward networks, the implicit case remains intriguing because implicit networks can be viewed as having infinitely many layers. This paper therefore investigates the generalization error of implicit neural networks. Specifically, we study the generalization of an implicit network activated by the ReLU function under random initialization, and we provide an initialization-sensitive generalization bound. As a consequence, we show that gradient flow with proper random initialization can train a sufficiently over-parameterized implicit network to achieve an arbitrarily small generalization error.
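For concreteness, the implicit layer described in the abstract can be read as the usual equilibrium formulation: a hidden state z satisfying z = ReLU(A z + B x), followed by a linear read-out. The sketch below is a minimal, illustrative forward pass under random initialization; the names A, B, a, the spectral-norm rescaling, and the fixed-point solver are assumptions made for illustration, not details taken from the paper, whose analysis trains only the implicit-layer weights (A here) by gradient flow.

# Illustrative sketch (not from the paper): forward pass of a ReLU implicit network.
import numpy as np

rng = np.random.default_rng(0)

d, m = 10, 512                     # input dimension and implicit-layer width (arbitrary choices)
x = rng.normal(size=d)             # a single input example

# Random initialization. A is rescaled to spectral norm 0.5 so that the ReLU
# fixed-point equation z = relu(A z + B x) is a contraction and hence well-posed.
G = rng.normal(size=(m, m))
A = 0.5 * G / np.linalg.norm(G, 2)                 # implicit-layer weights (the only trained parameters)
B = rng.normal(size=(m, d)) / np.sqrt(d)           # input weights (kept fixed)
a = rng.choice([-1.0, 1.0], size=m) / np.sqrt(m)   # output weights (kept fixed)

def relu(z):
    return np.maximum(z, 0.0)

def implicit_layer(A, B, x, tol=1e-8, max_iter=1000):
    """Solve the equilibrium equation z = relu(A z + B x) by fixed-point iteration."""
    z = np.zeros(A.shape[0])
    for _ in range(max_iter):
        z_next = relu(A @ z + B @ x)
        if np.linalg.norm(z_next - z) < tol:
            return z_next
        z = z_next
    return z

z_star = implicit_layer(A, B, x)   # equilibrium hidden state (an "infinitely deep" forward pass)
y = a @ z_star                     # network prediction; gradient flow would update A only
print(y)

The equilibrium state plays the role of the output of infinitely many weight-tied layers, which is why the generalization analysis of such networks differs from the standard finite-depth feed-forward case.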