Paper Title
NAS-Bench-Graph: Benchmarking Graph Neural Architecture Search
Paper Authors
Paper Abstract
Graph neural architecture search (GraphNAS) has recently attracted considerable attention in both academia and industry. However, two key challenges seriously hinder further research on GraphNAS. First, since there is no consensus on the experimental setting, the empirical results in different research papers are often not comparable, or even not reproducible, leading to unfair comparisons. Second, GraphNAS often requires extensive computation, which makes it highly inefficient and inaccessible to researchers without large-scale computational resources. To address these challenges, we propose NAS-Bench-Graph, a tailored benchmark that supports unified, reproducible, and efficient evaluations for GraphNAS. Specifically, we construct a unified, expressive yet compact search space covering 26,206 unique graph neural network (GNN) architectures, and propose a principled evaluation protocol. To avoid unnecessary repetitive training, we have trained and evaluated all of these architectures on nine representative graph datasets, recording detailed metrics including the training, validation, and test performance in each epoch, the latency, the number of parameters, etc. Based on our proposed benchmark, the performance of GNN architectures can be obtained directly from a look-up table without any further computation, which enables fair, fully reproducible, and efficient comparisons. To demonstrate its usage, we conduct in-depth analyses of our proposed NAS-Bench-Graph, revealing several interesting findings for GraphNAS. We also showcase how the benchmark can easily be made compatible with GraphNAS open libraries such as AutoGL and NNI. To the best of our knowledge, our work is the first benchmark for graph neural architecture search.
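To make the look-up-table idea concrete, below is a minimal Python sketch of how a tabular NAS benchmark of this kind replaces training with a dictionary query. It is a hypothetical illustration, not the released NAS-Bench-Graph API: the architecture encoding (a wiring topology plus per-node operations), the metric names, and all numeric values are assumptions introduced for this example.

```python
# Minimal sketch (hypothetical, not the released NAS-Bench-Graph API) of a
# tabular NAS benchmark: every architecture's metrics are pre-computed once,
# so evaluating a candidate is a dictionary look-up instead of training.
from typing import Dict, Tuple

# Each architecture is encoded by its macro topology (which earlier node each
# computing node reads from) plus the operation chosen at each node,
# e.g. GCN / GAT / SAGE layers. This encoding is illustrative only.
ArchKey = Tuple[Tuple[int, ...], Tuple[str, ...]]

# Pre-computed records: one entry per (dataset, architecture) pair, filled
# offline by exhaustively training the whole search space. Values are made up.
benchmark: Dict[str, Dict[ArchKey, Dict[str, float]]] = {
    "cora": {
        ((0, 0, 1, 2), ("gcn", "gat", "gcn", "skip")): {
            "valid_acc": 0.815,    # illustrative number, not a real record
            "test_acc": 0.803,
            "latency_ms": 4.2,
            "num_params": 92_103,
        },
    },
}

def query(dataset: str, arch: ArchKey) -> Dict[str, float]:
    """Return all recorded metrics for an architecture with zero training."""
    return benchmark[dataset][arch]

# A GraphNAS algorithm under evaluation calls `query` in place of the usual
# train-and-validate loop, making every comparison cheap and reproducible.
metrics = query("cora", ((0, 0, 1, 2), ("gcn", "gat", "gcn", "skip")))
print(metrics["valid_acc"], metrics["test_acc"])
```

Because every search strategy reads from the same frozen table, differences in reported results can only come from the search algorithm itself, which is what makes the comparisons fair and fully reproducible.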