Paper Title


Transition Information Enhanced Disentangled Graph Neural Networks for Session-based Recommendation

Authors

Li, Ansong

Abstract


Session-based recommendation (SBR) is a practical recommendation task that predicts the next item from an anonymous behavior sequence, and its performance relies heavily on the transition information between items in the sequence. The state-of-the-art (SOTA) methods in SBR employ GNNs to model neighboring item transitions from global (i.e., other sessions) and local (i.e., current session) contexts. However, most existing methods treat neighbors from different sessions equally, without considering that neighbor items from different sessions may share similar features with the target item in different aspects and may contribute differently. In other words, they have not explored finer-grained transition information between items in the global context, leading to sub-optimal performance. In this paper, we fill this gap by proposing a novel Transition Information Enhanced Disentangled Graph Neural Network (TIE-DGNN) model that captures finer-grained transition information between items and attempts to interpret the reasons for transitions by modeling the various factors of an item. Specifically, we propose a position-aware global graph, which utilizes relative position information to model neighboring item transitions. We then slice item embeddings into blocks, each of which represents a factor, and use a disentangling module to separately learn the factor embeddings over the global graph. For the local context, we train item embeddings with an attention mechanism that captures transition information from the current session. In this way, our model considers transition information at two levels. In particular, in the global context, we consider not only finer-grained transition information between items but also factor-level user intents, in order to interpret the key reasons for transitions. Extensive experiments on three datasets demonstrate the superiority of our method over the SOTA methods.
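The core disentangling idea above can be illustrated with a minimal sketch: slice each d-dimensional item embedding into K equal factor blocks, then weight each neighbor per factor with a softmax over block-wise similarities, instead of treating all neighbors equally. This is not the authors' implementation; the function names, shapes, and the plain dot-product scoring are illustrative assumptions.

```python
import numpy as np

def slice_into_factors(emb, num_factors):
    """Slice the last dimension of an embedding into num_factors equal
    blocks, each block standing for one latent factor (hypothetical helper)."""
    d = emb.shape[-1]
    assert d % num_factors == 0, "embedding dim must be divisible by K"
    return emb.reshape(*emb.shape[:-1], num_factors, d // num_factors)

def factor_level_aggregation(target_emb, neighbor_embs, num_factors):
    """Aggregate neighbor embeddings with per-factor attention weights.

    target_emb:    (d,)   embedding of the target item
    neighbor_embs: (N, d) embeddings of N global-graph neighbors
    Returns a (d,) vector in which each factor block is a weighted sum
    of the neighbors' corresponding blocks.
    """
    t = slice_into_factors(target_emb, num_factors)     # (K, d/K)
    n = slice_into_factors(neighbor_embs, num_factors)  # (N, K, d/K)
    # Block-wise dot products: how similar each neighbor is on each factor.
    scores = np.einsum('kd,nkd->nk', t, n)              # (N, K)
    # Softmax over neighbors, separately for every factor (numerically stable).
    w = np.exp(scores - scores.max(axis=0, keepdims=True))
    w /= w.sum(axis=0, keepdims=True)                   # (N, K)
    # Factor-wise weighted sum of neighbor blocks, then flatten back to (d,).
    agg = np.einsum('nk,nkd->kd', w, n)
    return agg.reshape(-1)
```

Because the softmax is taken per factor, two neighbors can dominate different blocks of the same aggregation, which is what lets the model attribute a transition to a specific factor rather than to the item as a whole.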
