Paper Title
Decentralized Collaborative Learning Framework for Next POI Recommendation
Paper Authors
Abstract
Next Point-of-Interest (POI) recommendation has become an indispensable functionality in Location-Based Social Networks (LBSNs) due to its effectiveness in helping people decide the next POI to visit. However, accurate recommendation requires a vast amount of historical check-in data, which threatens user privacy because the location-sensitive data must be handled by cloud servers. Although several on-device frameworks exist for privacy-preserving POI recommendation, they remain resource-intensive in terms of storage and computation, and show limited robustness to the high sparsity of user-POI interactions. On this basis, we propose a novel Decentralized Collaborative Learning framework for POI Recommendation (DCLR), which allows users to train their personalized models locally in a collaborative manner. DCLR significantly reduces the local models' dependence on the cloud for training and can be used to extend arbitrary centralized recommendation models. To counteract the sparsity of on-device user data when learning each local model, we design two self-supervision signals that pretrain the POI representations on the server using the geographical and categorical correlations of POIs. To facilitate collaborative learning, we propose to incorporate knowledge from geographically or semantically similar users into each local model via attentive aggregation and mutual information maximization. The collaborative learning process relies on device-to-device communication, requires only minor engagement from the central server to identify user groups, and is compatible with common privacy-preservation mechanisms such as differential privacy. We evaluate DCLR on two real-world datasets; the results show that DCLR outperforms state-of-the-art on-device frameworks and yields competitive results compared with centralized counterparts.
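To make the "attentive aggregation" idea in the abstract concrete, the sketch below shows one plausible reading: each local model weights representations received from similar users by a softmax attention score over their similarity to its own representation. This is only an illustration under our own assumptions (dot-product similarity, a hypothetical `attentive_aggregate` helper), not the paper's exact formulation, which the abstract does not specify.

```python
import numpy as np

def attentive_aggregate(local_vec: np.ndarray, neighbor_vecs: np.ndarray) -> np.ndarray:
    """Attention-weighted aggregation of neighbor representations.

    Each neighbor's score is its dot-product similarity with the local
    user's representation; scores are normalized with a softmax, and the
    result is the attention-weighted sum of neighbor vectors.
    Shapes: local_vec (d,), neighbor_vecs (n, d) -> returns (d,).
    """
    scores = neighbor_vecs @ local_vec              # (n,) similarity scores
    weights = np.exp(scores - scores.max())         # numerically stable softmax
    weights /= weights.sum()
    return weights @ neighbor_vecs                  # weighted sum over neighbors

# Toy check: the neighbor most similar to the local user should
# dominate the aggregated representation.
local = np.array([1.0, 0.0])
neighbors = np.array([[1.0, 0.0],   # aligned with the local user
                      [0.0, 1.0]])  # orthogonal neighbor
agg = attentive_aggregate(local, neighbors)
```

In a decentralized setting like the one the abstract describes, such an aggregate would be combined with the local model's own parameters, and a mutual-information term (e.g., a contrastive objective) would pull the local and aggregated representations together; both details are left unspecified here.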