Paper Title

Neural Architecture Search in Graph Neural Networks

Paper Authors

Matheus Nunes, Gisele L. Pappa

Paper Abstract

Performing analytical tasks over graph data has become increasingly interesting due to the ubiquity and large availability of relational information. However, unlike images or sentences, there is no notion of sequence in networks. Nodes (and edges) follow no absolute order, and it is hard for traditional machine learning (ML) algorithms to recognize patterns and generalize their predictions on this type of data. Graph Neural Networks (GNNs) successfully tackle this problem. They became popular after the generalization of the convolution concept to the graph domain. However, they possess a large number of hyperparameters, and their design and optimization are currently handcrafted, based on heuristics or empirical intuition. Neural Architecture Search (NAS) methods appear as an interesting solution to this problem. In this direction, this paper compares two NAS methods for optimizing GNNs: one based on reinforcement learning and a second based on evolutionary algorithms. Results consider 7 datasets over two search spaces and show that both methods obtain accuracies similar to those of a random search, raising the question of how many of the search space dimensions are actually relevant to the problem.
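To make the random-search baseline mentioned in the abstract concrete, below is a minimal Python sketch of random search over a GraphNAS-style per-layer GNN search space. The dimension names (attention, aggregation, activation, heads, hidden_dim) and the evaluate() stub are illustrative assumptions, not the paper's exact search spaces; in a real run, evaluate() would build and train the candidate GNN (e.g. with PyTorch Geometric) and return validation accuracy.

```python
import random

# Hypothetical search-space dimensions, loosely modeled on GraphNAS-style
# GNN design choices; the paper's actual search spaces may differ.
SEARCH_SPACE = {
    "attention": ["const", "gcn", "gat", "sym-gat", "cos", "linear"],
    "aggregation": ["sum", "mean", "max"],
    "activation": ["relu", "elu", "tanh", "sigmoid"],
    "heads": [1, 2, 4, 8],
    "hidden_dim": [8, 16, 32, 64, 128],
}

def sample_architecture(rng, num_layers=2):
    """Draw one candidate: an independent choice per dimension, per layer."""
    return [
        {dim: rng.choice(options) for dim, options in SEARCH_SPACE.items()}
        for _ in range(num_layers)
    ]

def evaluate(architecture):
    """Placeholder for training the GNN and returning validation accuracy.

    A random score keeps this sketch self-contained; a real implementation
    would train the sampled architecture on one of the benchmark datasets.
    """
    return random.random()

def random_search(budget=100, seed=0):
    """Sample `budget` architectures and keep the best-scoring one."""
    rng = random.Random(seed)
    best_arch, best_score = None, float("-inf")
    for _ in range(budget):
        arch = sample_architecture(rng)
        score = evaluate(arch)
        if score > best_score:
            best_arch, best_score = arch, score
    return best_arch, best_score

if __name__ == "__main__":
    arch, score = random_search(budget=50)
    print(f"best score: {score:.3f}")
    print(f"best architecture: {arch}")
```

Since every dimension is sampled independently, this baseline needs no controller or population; the paper's finding that RL-based and evolutionary NAS match such a baseline suggests that only a few of these dimensions meaningfully affect accuracy.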
