Paper Title

Approximation bounds for norm constrained neural networks with applications to regression and GANs

Authors

Yuling Jiao, Yang Wang, Yunfei Yang

Abstract

This paper studies the approximation capacity of ReLU neural networks with norm constraints on the weights. We prove upper and lower bounds on the approximation error of these networks for smooth function classes. The lower bound is derived through the Rademacher complexity of neural networks, which may be of independent interest. We apply these approximation bounds to analyze the convergence of regression using norm constrained neural networks and of distribution estimation by GANs. In particular, we obtain convergence rates for over-parameterized neural networks. It is also shown that GANs can achieve the optimal rate of learning probability distributions, when the discriminator is a properly chosen norm constrained neural network.
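To make the setting concrete, the sketch below illustrates one common way to formalize a norm constraint: bounding the product of the spectral norms of a fully connected ReLU network's weight matrices. The class name NormConstrainedReLUNet, the norm_product quantity, and the rescaling step are illustrative assumptions for this sketch, not the paper's exact construction.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

class NormConstrainedReLUNet:
    """Fully connected ReLU network whose weight matrices are kept under
    a bound K on the product of their spectral norms (one common reading
    of a "norm constrained" network; illustrative only)."""

    def __init__(self, widths, seed=0):
        rng = np.random.default_rng(seed)
        # widths = [input_dim, hidden_1, ..., hidden_{L-1}, output_dim]
        self.weights = [rng.standard_normal((m, n)) / np.sqrt(n)
                        for n, m in zip(widths[:-1], widths[1:])]
        self.biases = [np.zeros(m) for m in widths[1:]]

    def norm_product(self):
        # Product of spectral norms ||W_1|| * ... * ||W_L||: the quantity
        # the constraint bounds under the assumption stated above.
        return float(np.prod([np.linalg.norm(W, 2) for W in self.weights]))

    def project(self, K):
        # Uniformly rescale all weight matrices so norm_product() <= K.
        # (A crude projection for illustration; it changes the function
        # the network computes, since biases are left untouched.)
        p = self.norm_product()
        if p > K:
            s = (K / p) ** (1.0 / len(self.weights))
            self.weights = [s * W for W in self.weights]

    def forward(self, x):
        for W, b in zip(self.weights[:-1], self.biases[:-1]):
            x = relu(W @ x + b)
        return self.weights[-1] @ x + self.biases[-1]

net = NormConstrainedReLUNet([2, 32, 32, 1])
net.project(K=4.0)
print(net.norm_product())                  # <= 4.0 after projection
print(net.forward(np.array([0.5, -0.3])))  # network output, shape (1,)
```

Under this convention the norm product also upper-bounds the network's Lipschitz constant (ReLU is 1-Lipschitz and biases do not affect it), which is why constraints of this kind control the Rademacher complexity of the network class, the tool used for the lower bound in the abstract above.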
