Paper Title

MathNet: Haar-Like Wavelet Multiresolution-Analysis for Graph Representation and Learning

Paper Authors

Xuebin Zheng, Bingxin Zhou, Ming Li, Yu Guang Wang, Junbin Gao

Abstract

Graph Neural Networks (GNNs) have recently attracted great attention and achieved significant progress in graph-level applications. In this paper, we propose a framework for graph neural networks with multiresolution Haar-like wavelets, or MathNet, with interrelated convolution and pooling strategies. The underlying method takes graphs with different structures as input and assembles consistent graph representations for readout layers, which then accomplish label prediction. To achieve this, multiresolution graph representations are first constructed and fed into graph convolutional layers for processing. Hierarchical graph pooling layers are then applied to downsample the graph resolution while simultaneously removing redundancy within the graph signals. The whole workflow can be formed by a multi-level graph analysis, which not only helps embed the intrinsic topological information of each graph into the GNN, but also supports fast computation of the forward and adjoint graph transforms. We show by extensive experiments that the proposed framework obtains notable accuracy gains on graph classification and regression tasks with performance stability. The proposed MathNet outperforms various existing GNN models, especially on large data sets.
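To make the multiresolution idea concrete, here is a minimal, hypothetical sketch of a one-level Haar-like analysis/synthesis pair on a graph signal. It assumes the graph nodes have already been grouped into clusters by some coarsening tree; the cluster assignment and the average/difference split below are illustrative simplifications, not the paper's exact wavelet construction. It does show the two properties the abstract emphasizes: a coarse (downsampled) representation plus detail coefficients, and a fast, exact adjoint/inverse transform.

```python
import numpy as np

def haar_like_forward(x, clusters):
    """One-level Haar-like analysis: per-cluster averages form the coarse
    approximation; within-cluster deviations are the detail coefficients."""
    approx = np.array([x[c].mean() for c in clusters])
    details = [x[c] - x[c].mean() for c in clusters]
    return approx, details

def haar_like_inverse(approx, details, clusters, n):
    """Synthesis (adjoint/inverse): broadcast each cluster average back to
    its nodes and add the stored details, recovering the signal exactly."""
    x = np.zeros(n)
    for a, d, c in zip(approx, details, clusters):
        x[c] = a + d
    return x

# Toy graph signal on 6 nodes, coarsened into 3 clusters of 2 nodes each
# (a hypothetical coarsening, for illustration only).
x = np.array([1.0, 3.0, 2.0, 2.0, 5.0, 1.0])
clusters = [np.array([0, 1]), np.array([2, 3]), np.array([4, 5])]

approx, details = haar_like_forward(x, clusters)
x_rec = haar_like_inverse(approx, details, clusters, len(x))
print(approx)                  # coarse signal: [2. 2. 3.]
print(np.allclose(x, x_rec))   # exact reconstruction: True
```

In the framework described above, a convolution layer would operate on such coarse/detail coefficients, and the pooling layer would keep the approximation while discarding redundant details — each step costing only a pass over the coarsening tree.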
