Paper Title
Explain and Conquer: Personalised Text-based Reviews to Achieve Transparency
Paper Authors
Paper Abstract
There are many contexts in which dyadic data are present; social networks are a well-known example. In these contexts, pairs of elements are linked, building a network that reflects their interactions. Explaining why these relationships are established is essential to achieve transparency, an increasingly important notion. These explanations are often presented as text, thanks to the spread of natural language understanding tasks. Our aim is to represent and explain pairs established by any agent (e.g., a recommender system or a paid promotion mechanism), so that text-based personalisation is taken into account. We focus on the TripAdvisor platform, bearing in mind the applicability to other dyadic data contexts. The items are a subset of users and restaurants, and the interactions are the reviews posted by these users. We propose the PTER (Personalised TExt-based Reviews) model: from the reviews available for a given restaurant, we predict those that fit a specific user's interactions. PTER leverages BERT (Bidirectional Encoder Representations from Transformers), a transformer-encoder model. Following the feature-based approach, we customised a deep neural network and presented an LTR (Learning To Rank) downstream task. We carried out several comparisons of our proposal against a random baseline and other state-of-the-art models, following the EXTRA (EXplanaTion RAnking) benchmark. Our method outperforms other collaborative filtering proposals.
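To make the pipeline the abstract describes more concrete, the sketch below illustrates (and is not the authors' implementation of) a feature-based use of BERT combined with a Learning-To-Rank scoring head: a frozen BERT encoder produces review embeddings, and a small neural network scores user-review pairs so a restaurant's reviews can be ranked for a given user. The model name, embedding dimensions, the `PairwiseRanker` head, and the stand-in user vector are all illustrative assumptions.

```python
# A minimal sketch, assuming the feature-based approach: BERT is frozen and
# used only as a feature extractor; a separate head ranks reviews per user.
import torch
from transformers import BertTokenizer, BertModel

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
bert = BertModel.from_pretrained("bert-base-uncased")
bert.eval()  # feature-based: BERT weights stay frozen

@torch.no_grad()
def review_embedding(text: str) -> torch.Tensor:
    """Encode a review and return its [CLS] token representation (768-d)."""
    inputs = tokenizer(text, truncation=True, max_length=128, return_tensors="pt")
    outputs = bert(**inputs)
    return outputs.last_hidden_state[:, 0, :]  # [CLS] embedding

class PairwiseRanker(torch.nn.Module):
    """Hypothetical LTR scoring head: maps a (user vector, review embedding)
    pair to a relevance score used to order a restaurant's reviews."""
    def __init__(self, user_dim: int = 768, text_dim: int = 768):
        super().__init__()
        self.mlp = torch.nn.Sequential(
            torch.nn.Linear(user_dim + text_dim, 256),
            torch.nn.ReLU(),
            torch.nn.Linear(256, 1),
        )

    def forward(self, user_vec: torch.Tensor, review_vec: torch.Tensor) -> torch.Tensor:
        return self.mlp(torch.cat([user_vec, review_vec], dim=-1)).squeeze(-1)

# Rank candidate reviews of one restaurant for one user (illustrative only).
ranker = PairwiseRanker()
user_vec = torch.randn(1, 768)  # stand-in for a learned user representation
reviews = ["Great tapas, friendly staff.", "Too noisy, slow service."]
scores = torch.stack([ranker(user_vec, review_embedding(r)) for r in reviews])
order = scores.squeeze(-1).argsort(descending=True).tolist()
ranking = [reviews[i] for i in order]
```

In an actual system the ranking head would be trained with an LTR objective (e.g., a pairwise loss over relevant versus non-relevant reviews) rather than used with random weights as in this toy example.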