Paper Title

Lifelong Learning of Graph Neural Networks for Open-World Node Classification

Authors

Lukas Galke, Benedikt Franke, Tobias Zielke, Ansgar Scherp

Abstract


Graph neural networks (GNNs) have emerged as the standard method for numerous tasks on graph-structured data such as node classification. However, real-world graphs often evolve over time, and even new classes may arise. We model these challenges as an instance of lifelong learning, in which a learner faces a sequence of tasks and may draw on knowledge acquired in past tasks. Such knowledge may be stored explicitly as historic data or implicitly within model parameters. In this work, we systematically analyze the influence of implicit and explicit knowledge. To this end, we present an incremental training method for lifelong learning on graphs and introduce a new measure based on $k$-neighborhood time differences to address variances in the historic data. We apply our training method to five representative GNN architectures and evaluate them on three new lifelong node classification datasets. Our results show that no more than 50% of the GNN's receptive field is necessary to retain at least 95% accuracy compared to training over the complete history of the graph data. Furthermore, our experiments confirm that implicit knowledge becomes more important when less explicit knowledge is available.
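The abstract names a measure based on $k$-neighborhood time differences but does not define it. Below is a minimal sketch of one plausible reading, assuming each node carries a timestamp (e.g. its publication year) and the measure aggregates timestamp differences between a node and its $k$-hop neighbors; the function names, toy graph, and exact aggregation are illustrative assumptions, not the paper's definition.

```python
from collections import deque

def k_hop_neighborhood(adj, start, k):
    """Nodes reachable from `start` within k hops (excluding start), via BFS."""
    seen = {start}
    frontier = deque([(start, 0)])
    result = set()
    while frontier:
        node, depth = frontier.popleft()
        if depth == k:
            continue  # do not expand beyond k hops
        for nb in adj.get(node, ()):
            if nb not in seen:
                seen.add(nb)
                result.add(nb)
                frontier.append((nb, depth + 1))
    return result

def k_neighborhood_time_differences(adj, timestamps, node, k):
    """Timestamp differences between `node` and each node in its k-hop
    neighborhood -- a proxy for how 'old' the receptive field is."""
    return [timestamps[node] - timestamps[u]
            for u in k_hop_neighborhood(adj, node, k)]

# Toy path graph 0-1-2-3; nodes appeared at different times (years).
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
timestamps = {0: 2015, 1: 2016, 2: 2018, 3: 2019}
diffs = k_neighborhood_time_differences(adj, timestamps, 2, k=2)
# sorted(diffs) == [-1, 2, 3]: node 2's 2-hop neighborhood spans 4 years.
```

Large positive differences would indicate that much of a node's receptive field lies far in the past, which is the kind of variance in historic data the abstract says the measure is meant to capture.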
