Paper Title
Personalized Prompt Learning for Explainable Recommendation
Paper Authors
Paper Abstract
Providing user-understandable explanations to justify recommendations could help users better understand the recommended items, increase the system's ease of use, and gain users' trust. A typical approach to realizing this is natural language generation. However, previous works mostly adopted recurrent neural networks for this purpose, leaving the potentially more effective pre-trained Transformer models under-explored. In fact, user and item IDs, as important identifiers in recommender systems, inherently lie in a different semantic space from the words on which pre-trained models were trained. Thus, how to effectively fuse IDs into such models becomes a critical issue. Inspired by recent advances in prompt learning, we come up with two solutions: finding alternative words to represent IDs (called discrete prompt learning), and directly inputting ID vectors into a pre-trained model (termed continuous prompt learning). In the latter case, the ID vectors are randomly initialized while the model has already been pre-trained on large corpora, so the two are actually at different learning stages. To bridge this gap, we further propose two training strategies: sequential tuning and recommendation as regularization. Extensive experiments show that our continuous prompt learning approach, equipped with these training strategies, consistently outperforms strong baselines on three datasets for explainable recommendation.
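To make the continuous prompt learning idea concrete, the following is a minimal sketch (not the authors' implementation; all names, dimensions, and the toy embedding tables are illustrative assumptions): randomly initialized user and item ID vectors are prepended as a "soft prompt" to the word-embedding sequence that a pre-trained Transformer would normally consume.

```python
import random

EMBED_DIM = 8  # illustrative; real pre-trained models use e.g. 768

def random_vector(dim=EMBED_DIM):
    # Randomly initialized parameters, unlike the pre-trained word embeddings
    return [random.uniform(-0.1, 0.1) for _ in range(dim)]

# ID embedding tables: these start from random initialization, which is why
# they lag behind the pre-trained model (motivating the training strategies).
user_embeddings = {"u42": random_vector()}
item_embeddings = {"i7": random_vector()}

# Stand-in for the Transformer's pre-trained word-embedding table.
word_embeddings = {w: random_vector() for w in ["great", "battery", "life"]}

def build_input(user_id, item_id, words):
    """Prepend the user/item ID vectors to the word-embedding sequence,
    forming the continuous prompt fed to the pre-trained model."""
    prompt = [user_embeddings[user_id], item_embeddings[item_id]]
    return prompt + [word_embeddings[w] for w in words]

seq = build_input("u42", "i7", ["great", "battery", "life"])
print(len(seq))  # 2 prompt vectors + 3 word vectors = 5
```

In a real system, the concatenated sequence would be passed through the Transformer, and the discrete variant would instead replace the two ID vectors with embeddings of ordinary words chosen to represent the user and item.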