Title
A General Computational Framework to Measure the Expressiveness of Complex Networks Using a Tighter Upper Bound of Linear Regions
Authors
Abstract
The expressiveness of a deep neural network (DNN) is one perspective for understanding the surprising performance of DNNs. The number of linear regions, i.e. the number of pieces of the piecewise-linear function represented by a DNN, is generally used to measure expressiveness. An upper bound on the number of regions partitioned by a rectifier network, rather than the exact number itself, is a more practical measure of the expressiveness of a rectifier DNN. In this work, we propose a new, tighter upper bound on the number of regions. Inspired by the proof of this upper bound and the matrix-computation framework of Hinz & Van de Geer (2019), we propose a general computational approach to compute a tight upper bound on the number of regions for theoretically any network structure (e.g. DNNs with all kinds of skip connections and residual structures). Our experiments show that our upper bound is tighter than existing ones, and explain why skip connections and residual structures can improve network performance.
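To make the central quantity concrete, a minimal sketch (not the paper's method) of what "number of linear regions" means for a rectifier network: on each region the network computes a single affine function, and the regions correspond to distinct ReLU activation patterns. The toy network, weights, and grid below are all hypothetical; the sketch empirically counts regions of a one-hidden-layer ReLU network on a 1-D input by enumerating activation patterns over a dense grid.

```python
import numpy as np

# Hypothetical toy setup: a 1-hidden-layer ReLU network with random weights.
rng = np.random.default_rng(0)
n_units = 5
W = rng.normal(size=(n_units, 1))  # hidden-layer weights, 1-D input
b = rng.normal(size=n_units)       # hidden-layer biases

# Each input x activates a subset of ReLU units; within one linear region the
# activation pattern is constant, so counting distinct patterns over a dense
# grid approximates the number of linear regions the grid passes through.
x = np.linspace(-10.0, 10.0, 100_001).reshape(-1, 1)
patterns = (x @ W.T + b > 0)
n_regions = np.unique(patterns, axis=0).shape[0]

# Classical upper bound for this case: n hidden units on a 1-D input can
# split the line into at most n + 1 pieces.
print(n_regions, "<=", n_units + 1)
```

The bounds studied in the paper generalize this idea: rather than enumerating regions (which is exponential in depth and width), they bound the attainable count from the architecture alone.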