Paper Title
Understanding Sinusoidal Neural Networks
Paper Authors
Paper Abstract
In this work, we investigate the structure and representation capacity of sinusoidal MLPs: multilayer perceptron networks that use the sine as their activation function. These neural networks (known as neural fields) have become fundamental in representing common signals in computer graphics, such as images, signed distance functions, and radiance fields. This success is primarily attributed to two key properties of sinusoidal MLPs: smoothness and compactness. These functions are smooth because they arise from composing affine maps with the sine function. This work provides theoretical results that justify the compactness property of sinusoidal MLPs and yield control mechanisms for the definition and training of these networks. We propose to study a sinusoidal MLP by expanding it as a harmonic sum. First, we observe that its first layer can be seen as a harmonic dictionary, whose elements we call the input sinusoidal neurons. A hidden layer then combines this dictionary through an affine map and modulates the result with the sine, producing a special dictionary of sinusoidal neurons. We prove that each of these sinusoidal neurons expands as a harmonic sum that produces a large number of new frequencies, expressed as integer linear combinations of the input frequencies. Thus, all hidden neurons produce the same frequencies, and the corresponding amplitudes are completely determined by the hidden affine map. We also provide an upper bound for these amplitudes and a way of sorting them, which gives control over the resulting approximation and allows us to truncate the corresponding series. Finally, we present applications to the training and initialization of sinusoidal MLPs. Additionally, we show that if the input neurons are periodic, then the entire network is periodic with the same period. We relate these periodic networks to the Fourier series representation.
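As a worked illustration of the expansion the abstract describes (a sketch assuming the standard Jacobi-Anger identity; the paper's exact notation and indexing may differ), consider one hidden neuron acting on input sinusoidal neurons s_j(x) = sin(omega_j x + phi_j) with hidden affine coefficients a_j and bias b:

```latex
\[
  h(x) \;=\; \sin\Bigl(b + \sum_{j=1}^{m} a_j \sin(\omega_j x + \varphi_j)\Bigr)
  \;=\; \sum_{\mathbf{k}\in\mathbb{Z}^m}
        \Bigl(\prod_{j=1}^{m} J_{k_j}(a_j)\Bigr)
        \sin\Bigl(b + \sum_{j=1}^{m} k_j(\omega_j x + \varphi_j)\Bigr),
\]
```

which follows from applying the Jacobi-Anger expansion e^{i a sin(t)} = sum_{k in Z} J_k(a) e^{i k t} to each term and collecting harmonics. The new frequencies sum_j k_j omega_j are integer linear combinations of the input frequencies, and the amplitudes prod_j J_{k_j}(a_j) (products of Bessel functions of the first kind, each bounded by 1 in absolute value) depend only on the hidden affine map, matching the abstract's claims.

Below is a minimal runnable sketch of a sinusoidal MLP of the kind discussed above. The layer sizes, the large first-layer frequency scale, and the initialization are hypothetical choices for illustration, not the paper's prescription:

```python
import numpy as np

rng = np.random.default_rng(0)

def init_layer(fan_in, fan_out, scale):
    # Weights uniform in [-scale, scale]; the first-layer weights play the role of the
    # input frequencies of the harmonic dictionary (illustrative initialization only).
    W = rng.uniform(-scale, scale, size=(fan_out, fan_in))
    b = rng.uniform(-np.pi, np.pi, size=fan_out)
    return W, b

def sinusoidal_mlp(x, layers):
    """Evaluate an MLP with sine activations on every layer except the last (linear) one."""
    h = x
    for i, (W, b) in enumerate(layers):
        z = h @ W.T + b
        h = np.sin(z) if i < len(layers) - 1 else z
    return h

# Hypothetical 1D network: 1 -> 32 -> 32 -> 1.
sizes = [1, 32, 32, 1]
scales = [30.0, 1.0, 1.0]  # large first-layer scale = wide dictionary of input frequencies
layers = [init_layer(m, n, s) for (m, n), s in zip(zip(sizes[:-1], sizes[1:]), scales)]

x = np.linspace(-1.0, 1.0, 256).reshape(-1, 1)
y = sinusoidal_mlp(x, layers)
print(y.shape)  # (256, 1)
```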