Paper Title

Nonclosedness of Sets of Neural Networks in Sobolev Spaces

Paper Authors

Scott Mahan, Emily King, Alex Cloninger

Paper Abstract

We examine the closedness of sets of realized neural networks of a fixed architecture in Sobolev spaces. For an exactly $m$-times differentiable activation function $\rho$, we construct a sequence of neural networks $(\Phi_n)_{n \in \mathbb{N}}$ whose realizations converge in order-$(m-1)$ Sobolev norm to a function that cannot be realized exactly by a neural network. Thus, sets of realized neural networks are not closed in order-$(m-1)$ Sobolev spaces $W^{m-1,p}$ for $p \in [1,\infty]$. We further show that these sets are not closed in $W^{m,p}$ under slightly stronger conditions on the $m$-th derivative of $\rho$. For a real analytic activation function, we show that sets of realized neural networks are not closed in $W^{k,p}$ for any $k \in \mathbb{N}$. The nonclosedness allows for approximation of non-network target functions with unbounded parameter growth. We partially characterize the rate of parameter growth for most activation functions by showing that a specific sequence of realized neural networks can approximate the activation function's derivative with weights increasing inversely proportional to the $L^p$ approximation error. Finally, we present experimental results showing that networks are capable of closely approximating non-network target functions with increasing parameters via training.
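
To make the parameter-growth phenomenon concrete, here is a minimal numerical sketch of the difference-quotient flavor of construction the abstract describes; it is an illustration under our own assumptions (the softplus activation, the interval, and all function names are our choices, not the paper's exact construction). The width-two networks $f_n(x) = n\,(\rho(x + 1/n) - \rho(x))$ converge to $\rho'$, while the magnitude of their weights grows like $n$, i.e., inversely proportional to the achieved approximation error.

```python
import numpy as np

def rho(x):
    # Illustrative activation: softplus, which is real analytic.
    return np.log1p(np.exp(x))

def rho_prime(x):
    # Derivative of softplus is the logistic sigmoid (the non-network limit target here).
    return 1.0 / (1.0 + np.exp(-x))

def f_n(x, n):
    # Realization of a width-2 network: outer weights {n, -n}, inner shift 1/n.
    # These weights are unbounded in n even though the limit function is bounded.
    return n * (rho(x + 1.0 / n) - rho(x))

x = np.linspace(-5.0, 5.0, 1001)
for n in [1, 10, 100, 1000]:
    # Sup-norm error on [-5, 5], used as a simple proxy for the L^p error.
    err = np.max(np.abs(f_n(x, n) - rho_prime(x)))
    print(f"n = {n:5d}   weight magnitude = {n:5d}   sup error = {err:.2e}")
# By Taylor expansion, the error decays like O(1/n) while the weight magnitude
# is n, matching the "weights inversely proportional to the approximation
# error" statement in the abstract.
```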
