Paper Title

Are Big Recommendation Models Fair to Cold Users?

Authors

Chuhan Wu, Fangzhao Wu, Tao Qi, Yongfeng Huang

Abstract

Big models are widely used by online recommender systems to boost recommendation performance. They are usually learned on historical user behavior data to infer user interest and predict future user behaviors (e.g., clicks). In fact, the behaviors of heavy users with more historical behaviors can usually provide richer clues than cold users in interest modeling and future behavior prediction. Big models may favor heavy users by learning more from their behavior patterns and bring unfairness to cold users. In this paper, we study whether big recommendation models are fair to cold users. We empirically demonstrate that optimizing the overall performance of big recommendation models may lead to unfairness to cold users in terms of performance degradation. To solve this problem, we propose a BigFair method based on self-distillation, which uses the model predictions on original user data as a teacher to regularize predictions on augmented data with randomly dropped user behaviors, which can encourage the model to fairly capture interest distributions of heavy and cold users. Experiments on two datasets show that BigFair can effectively improve the performance fairness of big recommendation models on cold users without harming the performance on heavy users.
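To make the self-distillation idea concrete, below is a minimal sketch in PyTorch. It is an illustration under stated assumptions, not the paper's implementation: the function names (`drop_behaviors`, `bigfair_loss`), the 0.2 drop ratio, the MSE distillation term, and the weight `lam` are all hypothetical choices; the paper only specifies that predictions on the original behaviors act as a teacher that regularizes predictions on behavior-dropped augmentations.

```python
# Minimal sketch of the self-distillation regularizer described in the abstract.
# All names and hyperparameters here are illustrative assumptions.
import torch
import torch.nn.functional as F


def drop_behaviors(behaviors, mask, drop_ratio=0.2):
    """Randomly drop a fraction of a user's historical behaviors.

    behaviors: LongTensor [batch, seq_len] of behavior (item) ids.
    mask:      FloatTensor [batch, seq_len], 1 for real behaviors, 0 for padding.
    """
    keep = (torch.rand_like(mask) > drop_ratio).float() * mask
    return behaviors * keep.long(), keep


def bigfair_loss(model, behaviors, mask, candidates, labels, lam=1.0):
    """Task loss on augmented data plus a self-distillation regularizer.

    Predictions on the original behaviors serve as the teacher (gradients
    stopped); predictions on the behavior-dropped augmentation serve as the
    student, imitating a "colder" version of the same user.
    """
    # Teacher: click scores predicted from the full behavior history.
    with torch.no_grad():
        teacher_scores = model(behaviors, mask, candidates)

    # Student: click scores predicted from randomly dropped behaviors.
    aug_behaviors, aug_mask = drop_behaviors(behaviors, mask)
    student_scores = model(aug_behaviors, aug_mask, candidates)

    # Main recommendation objective (e.g., click prediction).
    task_loss = F.binary_cross_entropy_with_logits(student_scores, labels)

    # Self-distillation: keep the student's interest estimates close to the
    # teacher's, so predictions stay consistent across heavy and cold views.
    distill_loss = F.mse_loss(student_scores, teacher_scores)

    return task_loss + lam * distill_loss
```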
