Paper Title

A Convergence Rate for Manifold Neural Networks

Authors

Joyce Chew, Deanna Needell, Michael Perlmutter

Abstract


High-dimensional data arises in numerous applications, and the rapidly developing field of geometric deep learning seeks to develop neural network architectures to analyze such data in non-Euclidean domains, such as graphs and manifolds. Recent work by Z. Wang, L. Ruiz, and A. Ribeiro has introduced a method for constructing manifold neural networks using the spectral decomposition of the Laplace Beltrami operator. Moreover, in this work, the authors provide a numerical scheme for implementing such neural networks when the manifold is unknown and one only has access to finitely many sample points. The authors show that this scheme, which relies upon building a data-driven graph, converges to the continuum limit as the number of sample points tends to infinity. Here, we build upon this result by establishing a rate of convergence that depends on the intrinsic dimension of the manifold but is independent of the ambient dimension. We also discuss how the rate of convergence depends on the depth of the network and the number of filters used in each layer.
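The abstract describes a numerical scheme that approximates a manifold neural network from finitely many sample points by building a data-driven graph and filtering spectrally. The sketch below illustrates that general idea, not the paper's exact construction: the function names (`graph_laplacian`, `spectral_filter`, `mnn_layer`), the Gaussian-kernel affinity, the unnormalized Laplacian `L = D - W`, and the `tanh` nonlinearity are all illustrative assumptions; the paper's scheme may use a different kernel, normalization, or filter parameterization.

```python
import numpy as np

def graph_laplacian(points, epsilon):
    """Data-driven graph Laplacian from sample points (one common choice).

    Uses a Gaussian kernel with bandwidth epsilon and the unnormalized
    Laplacian L = D - W; other normalizations are possible.
    """
    d2 = ((points[:, None, :] - points[None, :, :]) ** 2).sum(axis=-1)
    W = np.exp(-d2 / epsilon)
    np.fill_diagonal(W, 0.0)  # no self-loops
    D = W.sum(axis=1)
    return np.diag(D) - W

def spectral_filter(L, x, h):
    """Apply a spectrally defined filter: h(L) x = sum_k h(lam_k) <phi_k, x> phi_k."""
    lam, phi = np.linalg.eigh(L)          # eigendecomposition of the symmetric Laplacian
    return phi @ (h(lam) * (phi.T @ x))   # scale each spectral coefficient by h(lam_k)

def mnn_layer(L, x, h, sigma=np.tanh):
    """One layer of a (discretized) manifold neural network: filter, then pointwise nonlinearity."""
    return sigma(spectral_filter(L, x, h))

# Example: signal on points sampled from a circle embedded in R^2,
# filtered with a heat-kernel-style response h(lam) = exp(-lam).
t = np.linspace(0.0, 2.0 * np.pi, 50, endpoint=False)
pts = np.stack([np.cos(t), np.sin(t)], axis=1)
L = graph_laplacian(pts, epsilon=0.5)
y = mnn_layer(L, np.sin(3 * t), h=lambda lam: np.exp(-lam))
```

As the number of sample points grows, the convergence result summarized above says the output of such a discretized layer approaches its continuum counterpart built from the Laplace-Beltrami operator, at a rate governed by the manifold's intrinsic dimension.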
