Paper Title

Learning Connectivity of Neural Networks from a Topological Perspective

Authors

Kun Yuan, Quanquan Li, Jing Shao, Junjie Yan

Abstract

Seeking effective neural networks is a critical and practical field in deep learning. Besides designing the depth, type of convolution, normalization, and nonlinearities, the topological connectivity of neural networks is also important. Previous principles of rule-based modular design simplify the difficulty of building an effective architecture, but constrain the possible topologies to limited spaces. In this paper, we attempt to optimize the connectivity in neural networks. We propose a topological perspective that represents a network as a complete graph for analysis, where nodes carry out aggregation and transformation of features, and edges determine the flow of information. By assigning learnable parameters to the edges, which reflect the magnitude of connections, the learning process can be performed in a differentiable manner. We further attach an auxiliary sparsity constraint to the distribution of connectedness, which promotes the learned topology to focus on critical connections. This learning process is compatible with existing networks and adapts to larger search spaces and different tasks. Quantitative experimental results show that the learned connectivity is superior to traditional rule-based ones, such as random, residual, and complete. In addition, it obtains significant improvements in image classification and object detection without introducing an excessive computation burden.
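The mechanism the abstract describes (a complete graph whose edges carry learnable gates that weight feature aggregation, plus a sparsity penalty on the gates) can be sketched in a minimal, framework-free form. This is an illustrative NumPy sketch, not the paper's implementation: the class name `TopologyLayer`, the sigmoid gating, and the identity-plus-ReLU node transform are all assumptions standing in for the paper's convolutional node operations.

```python
import numpy as np


def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))


class TopologyLayer:
    """Sketch of learnable connectivity over a complete DAG of n_nodes.

    Node j aggregates the outputs of every earlier node i < j, each
    weighted by a gate sigmoid(alpha[i, j]); alpha plays the role of the
    learnable edge parameters in the abstract. The node transform here
    is a placeholder ReLU, not the paper's conv/norm blocks.
    """

    def __init__(self, n_nodes, seed=0):
        rng = np.random.default_rng(seed)
        self.n_nodes = n_nodes
        # Learnable edge logits; only entries with i < j are used.
        self.alpha = rng.normal(size=(n_nodes, n_nodes))

    def forward(self, x):
        outs = [x]  # node 0 holds the layer input
        for j in range(1, self.n_nodes):
            gates = sigmoid(self.alpha[:j, j])               # edge magnitudes
            agg = sum(g * o for g, o in zip(gates, outs))    # weighted aggregation
            outs.append(np.maximum(agg, 0.0))                # placeholder transform
        return outs[-1]

    def sparsity_penalty(self):
        # L1 penalty on the edge gates, encouraging the learned topology
        # to concentrate on a few critical connections.
        gates = sigmoid(self.alpha[np.triu_indices(self.n_nodes, k=1)])
        return float(np.abs(gates).sum())
```

In a differentiable framework the `alpha` logits would be optimized jointly with the network weights, with `sparsity_penalty()` added to the task loss.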
