Paper Title
Explainable Recommendation via Interpretable Feature Mapping and Evaluation of Explainability
Paper Authors
Paper Abstract
Latent factor collaborative filtering (CF) has been a widely used technique for recommender systems, learning semantic representations of users and items. Recently, explainable recommendation has attracted much attention from the research community. However, a trade-off exists between the explainability and the performance of a recommendation, and metadata is often needed to alleviate this dilemma. We present a novel feature mapping approach that maps uninterpretable general features onto interpretable aspect features, achieving both satisfactory accuracy and explainability by simultaneously minimizing the rating prediction loss and the interpretation loss. To evaluate explainability, we propose two new evaluation metrics specifically designed for aspect-level explanations using surrogate ground truth. Experimental results demonstrate strong performance in both recommendation and explanation, eliminating the need for metadata. Code is available at https://github.com/pd90506/AMCF.
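The abstract's central mechanism, mapping general latent features onto aspect features and jointly minimizing a rating prediction loss and an interpretation loss, can be illustrated with a minimal PyTorch-style sketch. This is not the authors' AMCF implementation; the class and function names (FeatureMappingRecommender, aspect_map, joint_loss), the weighting parameter lam, and the choice of MSE and binary cross-entropy losses with sigmoid aspect scores are all illustrative assumptions.

```python
import torch
import torch.nn as nn

class FeatureMappingRecommender(nn.Module):
    """Hypothetical sketch: latent-factor CF plus a mapping from
    uninterpretable general features to interpretable aspect features."""

    def __init__(self, n_users, n_items, n_latent=32, n_aspects=8):
        super().__init__()
        self.user_emb = nn.Embedding(n_users, n_latent)  # general user factors
        self.item_emb = nn.Embedding(n_items, n_latent)  # general item factors
        # Assumed linear feature mapping from latent space to aspect space.
        self.aspect_map = nn.Linear(n_latent, n_aspects)

    def forward(self, users, items):
        u = self.user_emb(users)
        v = self.item_emb(items)
        rating = (u * v).sum(dim=-1)                  # dot-product rating prediction
        aspects = torch.sigmoid(self.aspect_map(v))   # aspect-level scores in [0, 1]
        return rating, aspects

def joint_loss(pred_rating, true_rating, pred_aspects, true_aspects, lam=0.5):
    """Simultaneous minimization of rating prediction loss and
    interpretation loss, weighted by an assumed hyperparameter lam."""
    rating_loss = nn.functional.mse_loss(pred_rating, true_rating)
    # Surrogate aspect labels (e.g., normalized genre indicators) stand in
    # for ground-truth explanations, mirroring the abstract's evaluation idea.
    interp_loss = nn.functional.binary_cross_entropy(pred_aspects, true_aspects)
    return rating_loss + lam * interp_loss

if __name__ == "__main__":
    model = FeatureMappingRecommender(n_users=1000, n_items=500)
    users, items = torch.tensor([0, 1]), torch.tensor([10, 20])
    pred_r, pred_a = model(users, items)
    # Random surrogate aspect labels, purely for demonstration.
    loss = joint_loss(pred_r, torch.tensor([4.0, 3.0]), pred_a, torch.rand(2, 8))
    loss.backward()
    print(loss.item())
```

Under these assumptions, one optimizer step over the combined objective trains the rating predictor and the aspect mapping together, which is how a single model could yield both recommendations and aspect-level explanations without item metadata at inference time.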