Paper Title
Knowledge-aware Neural Networks with Personalized Feature Referencing for Cold-start Recommendation
Paper Authors
Paper Abstract
Incorporating knowledge graphs (KGs) as side information in recommendation has recently attracted considerable attention. Despite their success in general recommendation scenarios, prior methods may fall short on the cold-start problem, in which users are associated with very limited interaction information. Since conventional methods rely on exploring the interaction topology, they may fail to capture sufficient information in cold-start scenarios. To mitigate this problem, we propose a novel Knowledge-aware Neural Network with a Personalized Feature Referencing mechanism, namely KPER. Unlike most prior methods, which simply enrich the targets' semantics from KGs (e.g., with product attributes), KPER utilizes KGs as a "semantic bridge" to extract feature references for cold-start users or items. Specifically, given a cold-start target, KPER first probes semantically relevant, but not necessarily structurally close, users or items as adaptive seeds for referencing features. A Gated Information Aggregation module is then introduced to learn combinatorial latent features for cold-start users and items. Our extensive experiments over four real-world datasets show that KPER consistently outperforms all competing methods in cold-start scenarios, while maintaining superiority in general scenarios without compromising overall performance, e.g., achieving 0.81%-16.08% and 1.01%-14.49% performance improvements across all datasets in Top-10 recommendation.
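The gated aggregation idea in the abstract can be pictured as a learned, per-dimension blend between a cold-start target's own embedding and features pooled from its adaptive seeds. Below is a minimal sketch in PyTorch; the class name GatedAggregation, the mean-pooling of seed embeddings, and the sigmoid gating form are illustrative assumptions on our part, not the paper's actual KPER formulation.

    # Minimal sketch of a gated information aggregation step (assumed form).
    import torch
    import torch.nn as nn

    class GatedAggregation(nn.Module):
        """Fuse a cold-start target's embedding with features referenced
        from semantically relevant seed users/items (hypothetical design)."""

        def __init__(self, dim: int):
            super().__init__()
            # The gate decides, per dimension, how much referenced
            # information to blend into the target representation.
            self.gate = nn.Linear(2 * dim, dim)

        def forward(self, target: torch.Tensor, seeds: torch.Tensor) -> torch.Tensor:
            # target: (batch, dim) embedding of the cold-start user/item
            # seeds:  (batch, k, dim) embeddings of k adaptive seeds
            reference = seeds.mean(dim=1)  # pool seed features into one reference
            g = torch.sigmoid(self.gate(torch.cat([target, reference], dim=-1)))
            return g * target + (1.0 - g) * reference  # gated blend

    # Usage: blend a batch of 8 cold-start embeddings with 5 seeds each.
    layer = GatedAggregation(dim=64)
    out = layer(torch.randn(8, 64), torch.randn(8, 5, 64))
    print(out.shape)  # torch.Size([8, 64])

The sigmoid gate lets the model rely mostly on referenced features when the target's own interactions are sparse, and mostly on the target's embedding otherwise; any attention-style or multi-hop variant the paper actually uses would replace the simple mean pooling here.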