Paper Title
Self-Supervised Hypergraph Transformer for Recommender Systems
Paper Authors
Paper Abstract
Graph Neural Networks (GNNs) have emerged as promising solutions for collaborative filtering (CF) through their modeling of user-item interaction graphs. The key idea of existing GNN-based recommender systems is to recursively perform message passing along user-item interaction edges to refine the encoded embeddings. Despite their effectiveness, however, most current recommendation models rely on sufficient and high-quality training data, such that the learned representations can accurately capture user preferences. User behavior data in many practical recommendation scenarios is often noisy and exhibits a skewed distribution, which may result in suboptimal representation performance in GNN-based models. In this paper, we propose SHT, a novel Self-Supervised Hypergraph Transformer framework, which augments user representations by explicitly exploring global collaborative relationships. Specifically, we first empower the graph neural CF paradigm to maintain global collaborative effects among users and items with a hypergraph transformer network. With the distilled global context, a cross-view generative self-supervised learning component is proposed for data augmentation over the user-item interaction graph, so as to enhance the robustness of recommender systems. Extensive experiments demonstrate that SHT significantly improves performance over various state-of-the-art baselines. Further ablation studies show the superior representation ability of our SHT framework in alleviating data sparsity and noise issues. The source code and evaluation datasets are available at: https://github.com/akaxlh/SHT.
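To make the node-to-hyperedge-to-node propagation described in the abstract concrete, the following is a minimal sketch, not the authors' implementation: user (or item) embeddings are softly assigned to a small set of learnable hyperedges that act as distilled global context, and that context is propagated back to refine each node. The class name HypergraphLayer, the embedding and hyperedge dimensions, the softmax assignment, and the residual update are illustrative assumptions; the actual SHT architecture (including its transformer-style attention and the cross-view self-supervised augmentation) is in the official repository linked above.

import torch
import torch.nn as nn
import torch.nn.functional as F


class HypergraphLayer(nn.Module):
    """Sketch of hypergraph-style message passing for collaborative filtering."""

    def __init__(self, embed_dim: int, num_hyperedges: int):
        super().__init__()
        # Learnable hyperedge prototypes serving as a global "memory" shared by all nodes.
        self.hyperedges = nn.Parameter(torch.randn(num_hyperedges, embed_dim) * 0.02)

    def forward(self, node_embeds: torch.Tensor) -> torch.Tensor:
        # Node -> hyperedge: soft assignment of every node to the global hyperedges.
        assign = torch.softmax(node_embeds @ self.hyperedges.t(), dim=-1)   # (N, H)
        edge_msgs = assign.t() @ node_embeds                                # (H, d)
        # Hyperedge -> node: propagate the distilled global context back to each node.
        refined = assign @ edge_msgs                                        # (N, d)
        # Residual update plus normalization keeps the refined embeddings well-scaled.
        return F.normalize(node_embeds + refined, dim=-1)


if __name__ == "__main__":
    user_embeds = torch.randn(1024, 64)                     # toy user embeddings
    layer = HypergraphLayer(embed_dim=64, num_hyperedges=128)
    print(layer(user_embeds).shape)                         # torch.Size([1024, 64])

In a full model, one such layer would typically be applied to user embeddings and another to item embeddings, with the refined representations feeding both the recommendation objective and the self-supervised augmentation signal the abstract mentions.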