Paper Title
Is the Skip Connection Provable to Reform the Neural Network Loss Landscape?
Paper Authors
Paper Abstract
The residual network is now one of the most effective structures in deep learning, which utilizes the skip connections to ``guarantee" the performance will not get worse. However, the non-convexity of the neural network makes it unclear whether the skip connections do provably improve the learning ability since the nonlinearity may create many local minima. In some previous works \cite{freeman2016topology}, it is shown that despite the non-convexity, the loss landscape of the two-layer ReLU network has good properties when the number $m$ of hidden nodes is very large. In this paper, we follow this line to study the topology (sub-level sets) of the loss landscape of deep ReLU neural networks with a skip connection and theoretically prove that the skip connection network inherits the good properties of the two-layer network and skip connections can help to control the子级集合的连接性,使得与某些两层Relu网络的全球最小值更糟糕的任何局部最小值都将非常``浅''。这些本地最小值的“深度”最多是$ o(m^{(η-1)/n})$,其中$ n $是输入尺寸,$η<1 $。这为Skip连接在深度学习中的有效性提供了理论解释。
The residual network is now one of the most effective structures in deep learning, which utilizes the skip connections to ``guarantee" the performance will not get worse. However, the non-convexity of the neural network makes it unclear whether the skip connections do provably improve the learning ability since the nonlinearity may create many local minima. In some previous works \cite{freeman2016topology}, it is shown that despite the non-convexity, the loss landscape of the two-layer ReLU network has good properties when the number $m$ of hidden nodes is very large. In this paper, we follow this line to study the topology (sub-level sets) of the loss landscape of deep ReLU neural networks with a skip connection and theoretically prove that the skip connection network inherits the good properties of the two-layer network and skip connections can help to control the connectedness of the sub-level sets, such that any local minima worse than the global minima of some two-layer ReLU network will be very ``shallow". The ``depth" of these local minima are at most $O(m^{(η-1)/n})$, where $n$ is the input dimension, $η<1$. This provides a theoretical explanation for the effectiveness of the skip connection in deep learning.
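To make the abstract's topological claim concrete, the LaTeX sketch below records one standard way, following the sub-level-set viewpoint of \cite{freeman2016topology}, to formalize the objects mentioned above. The notation $f_\theta$, $g_\theta$, $W_{\mathrm{skip}}$, $\Omega_L(\lambda)$, and the particular definition of ``depth'' are illustrative assumptions introduced here for exposition, not taken from the paper itself.

% A minimal formalization sketch; the notation is illustrative, not the paper's own.
\documentclass{article}
\usepackage{amsmath,amssymb}
\begin{document}
% Deep ReLU network $g_\theta$ augmented with a skip connection from the input:
\[
  f_\theta(x) \;=\; W_{\mathrm{skip}}\, x \;+\; g_\theta(x),
  \qquad x \in \mathbb{R}^n .
\]
% Sub-level sets of the empirical loss $L(\theta)$, whose connectedness is the
% topological property studied in this line of work:
\[
  \Omega_L(\lambda) \;=\; \{\, \theta \;:\; L(\theta) \le \lambda \,\}.
\]
% One common way to quantify how ``shallow'' a local minimum $\theta^\ast$ is:
% its depth is at most $\epsilon$ if a continuous path joins $\theta^\ast$ to a
% parameter of strictly lower loss while the loss never exceeds
% $L(\theta^\ast) + \epsilon$; the abstract's bound reads
% $\epsilon = O\bigl(m^{(\eta-1)/n}\bigr)$ with $\eta < 1$.
\end{document}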