Paper title
4Ward: a Relayering Strategy for Efficient Training of Arbitrarily Complex Directed Acyclic Graphs
Paper authors
Paper abstract
Thanks to their ease of implementation, multilayer perceptrons (MLPs) have become ubiquitous in deep learning applications. The graph underlying an MLP is indeed multipartite, i.e., each layer of neurons only connects to neurons belonging to the adjacent layer. In contrast, in vivo brain connectomes at the level of individual synapses suggest that biological neuronal networks are characterized by scale-free degree distributions or exponentially truncated power law strength distributions, hinting at potentially novel avenues for the exploitation of evolution-derived neuronal networks. In this paper, we present "4Ward", a method and Python library capable of generating flexible and efficient neural networks (NNs) from arbitrarily complex directed acyclic graphs. 4Ward is inspired by layering algorithms drawn from the graph drawing discipline to implement efficient forward passes, and provides significant time gains in computational experiments with various Erdős-Rényi graphs. 4Ward not only overcomes the sequential nature of the learning matrix method by parallelizing the computation of activations, but also addresses the scalability issues encountered in the current state of the art and gives the designer freedom to customize weight initialization and activation functions. Our algorithm can be of aid to any investigator seeking to exploit complex topologies in an NN design framework at the microscale.
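To make the layering idea concrete, below is a minimal sketch, not the 4Ward library's actual API, of how a directed acyclic graph can be partitioned into layers (here by longest path from the sources) so that all activations within a layer can be computed independently and hence in parallel. The function names (`longest_path_layers`, `forward`), the toy DAG, the weights, and the choice of `tanh` as activation are illustrative assumptions, not taken from the paper.

```python
# Minimal sketch (not the 4Ward API): layer a DAG by longest path from its
# sources, then evaluate node activations one layer at a time; all nodes in
# the same layer depend only on earlier layers, so they can run in parallel.
import math
from collections import defaultdict

def longest_path_layers(nodes, edges):
    """Assign each node the length of the longest path reaching it (Kahn-style)."""
    preds, succs, indeg = defaultdict(list), defaultdict(list), {n: 0 for n in nodes}
    for u, v in edges:
        preds[v].append(u)
        succs[u].append(v)
        indeg[v] += 1
    layer = {n: 0 for n in nodes}
    queue = [n for n in nodes if indeg[n] == 0]        # source nodes
    while queue:
        u = queue.pop()
        for v in succs[u]:
            layer[v] = max(layer[v], layer[u] + 1)
            indeg[v] -= 1
            if indeg[v] == 0:
                queue.append(v)
    grouped = defaultdict(list)
    for n, l in layer.items():
        grouped[l].append(n)
    return [grouped[l] for l in sorted(grouped)], preds

def forward(nodes, edges, weights, inputs, act=math.tanh):
    """Evaluate the DAG layer by layer; `inputs` provides the source activations."""
    layers, preds = longest_path_layers(nodes, edges)
    values = dict(inputs)                              # layer 0 = input nodes
    for group in layers[1:]:
        for v in group:                                # parallelizable across the layer
            values[v] = act(sum(weights[(u, v)] * values[u] for u in preds[v]))
    return values

# Toy DAG with a skip connection: a -> c, b -> c, a -> d, c -> d
nodes = ["a", "b", "c", "d"]
edges = [("a", "c"), ("b", "c"), ("a", "d"), ("c", "d")]
weights = {e: 0.5 for e in edges}
print(forward(nodes, edges, weights, {"a": 1.0, "b": -1.0}))
```

In this sketch the inner loop over a layer is written sequentially for clarity; in a tensor framework the same per-layer step would typically be expressed as a single batched operation, which is what makes the relayering profitable compared with evaluating nodes one by one in topological order.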