Paper Title

Toward Improved Generalization: Meta Transfer of Self-supervised Knowledge on Graphs

Authors

Wenhui Cui, Haleh Akrami, Anand A. Joshi, Richard M. Leahy

Abstract

Despite the remarkable success achieved by graph convolutional networks for functional brain activity analysis, the heterogeneity of functional patterns and the scarcity of imaging data still pose challenges in many tasks. Transferring knowledge from a source domain with abundant training data to a target domain is effective for improving representation learning on scarce training data. However, traditional transfer learning methods often fail to generalize the pre-trained knowledge to the target task due to domain discrepancy. Self-supervised learning on graphs can increase the generalizability of graph features since self-supervision concentrates on inherent graph properties that are not limited to a particular supervised task. We propose a novel knowledge transfer strategy by integrating meta-learning with self-supervised learning to deal with the heterogeneity and scarcity of fMRI data. Specifically, we perform a self-supervised task on the source domain and apply meta-learning, which strongly improves the generalizability of the model using bi-level optimization, to transfer the self-supervised knowledge to the target domain. Through experiments on a neurological disorder classification task, we demonstrate that the proposed strategy significantly improves target task performance by increasing the generalizability and transferability of graph-based knowledge.
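The abstract describes transferring self-supervised knowledge through meta-learning with bi-level optimization. As a rough illustration of what such a bi-level update can look like, below is a minimal MAML-style sketch in PyTorch (2.x): the inner loop adapts the encoder on a support batch with an illustrative self-supervised reconstruction loss, and the outer loop back-propagates through that adaptation on a query batch. The `Encoder` MLP, the reconstruction pretext task, and all hyperparameters are placeholders; the paper's actual GCN backbone and self-supervised objective are not specified in the abstract.

```python
# Minimal MAML-style bi-level optimization sketch (assumes PyTorch >= 2.0).
# Everything here is illustrative: the encoder, pretext loss, and learning
# rates stand in for the paper's unspecified GCN and self-supervised task.
import torch
import torch.nn as nn
from torch.func import functional_call

class Encoder(nn.Module):
    """Stand-in for the graph encoder (e.g., a GCN); here a small MLP."""
    def __init__(self, in_dim=32, hid_dim=64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, hid_dim), nn.ReLU(),
                                 nn.Linear(hid_dim, in_dim))

    def forward(self, x):
        return self.net(x)

def self_supervised_loss(model_out, x):
    # Illustrative pretext objective: feature reconstruction.
    return ((model_out - x) ** 2).mean()

def meta_step(model, meta_opt, support_x, query_x, inner_lr=0.01):
    """One bi-level update: inner adaptation on the support split,
    outer (meta) update evaluated on the query split."""
    params = dict(model.named_parameters())

    # Inner loop: one gradient step on the support data, keeping the
    # computation graph so the outer update can differentiate through it.
    support_loss = self_supervised_loss(
        functional_call(model, params, (support_x,)), support_x)
    grads = torch.autograd.grad(support_loss, list(params.values()),
                                create_graph=True)
    adapted = {k: p - inner_lr * g
               for (k, p), g in zip(params.items(), grads)}

    # Outer loop: evaluate adapted parameters on held-out query data and
    # back-propagate through the inner step (second-order gradients).
    query_loss = self_supervised_loss(
        functional_call(model, adapted, (query_x,)), query_x)
    meta_opt.zero_grad()
    query_loss.backward()
    meta_opt.step()
    return query_loss.item()

if __name__ == "__main__":
    torch.manual_seed(0)
    model = Encoder()
    meta_opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    for step in range(5):
        support_x = torch.randn(16, 32)  # synthetic "source-domain" batch
        query_x = torch.randn(16, 32)    # synthetic held-out batch
        print(step, meta_step(model, meta_opt, support_x, query_x))
```

The point of the second-order term (`create_graph=True`) is that the meta-update optimizes for parameters that generalize well after adaptation, which is the sense in which bi-level optimization improves generalizability in this family of methods.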
