Paper Title

PrivNet: Safeguarding Private Attributes in Transfer Learning for Recommendation

Paper Authors

Guangneng Hu, Qiang Yang

Paper Abstract

Transfer learning is an effective technique for improving a target recommender system with knowledge from a source domain. Existing research focuses on the recommendation performance of the target domain while ignoring the privacy leakage of the source domain. The transferred knowledge, however, may unintentionally leak private information from the source domain. For example, an attacker can accurately infer user demographics from the purchase histories provided by a source-domain data owner. This paper addresses this privacy-preserving issue by learning a privacy-aware neural representation that improves target performance while protecting source privacy. The key idea is to simulate attacks during training, modeled as an adversarial game, so that the transfer learning model becomes robust to attacks and protects the privacy of unseen users in the future. Experiments show that the proposed PrivNet model can successfully disentangle the knowledge that benefits the transfer from the knowledge that leaks privacy.
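To make the adversarial game in the abstract concrete, below is a minimal, hypothetical PyTorch sketch, not the authors' PrivNet implementation: a source encoder produces a transferable user representation, a target head uses it for recommendation, and a simulated attacker head tries to infer a private attribute from the same representation. The min-max game is realized here with a gradient reversal layer, which is one common way to implement such an objective; all module names, dimensions, and the choice of gradient reversal are illustrative assumptions.

```python
import torch
import torch.nn as nn


class GradientReversal(torch.autograd.Function):
    """Identity in the forward pass; flips (and scales) the gradient in backward."""

    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lam * grad_output, None


class PrivacyAwareTransfer(nn.Module):
    """Hypothetical sketch: a source encoder yields a transferable user vector;
    a target head scores target-domain items; an attacker head simulates an
    adversary inferring a private attribute from that same vector."""

    def __init__(self, n_source_items, n_target_items, dim=32, n_private_classes=2, lam=1.0):
        super().__init__()
        self.lam = lam
        self.source_enc = nn.EmbeddingBag(n_source_items, dim, mode="mean")  # encodes source history
        self.target_head = nn.Linear(dim, n_target_items)        # recommendation scores
        self.attacker_head = nn.Linear(dim, n_private_classes)   # simulated privacy attacker

    def forward(self, source_history):
        z = self.source_enc(source_history)                       # transferred representation
        rec_scores = self.target_head(z)
        attack_logits = self.attacker_head(GradientReversal.apply(z, self.lam))
        return rec_scores, attack_logits


def train_step(model, optimizer, source_history, target_item, private_label):
    """One adversarial step: the attacker head learns to predict the private
    attribute, while the reversed gradient pushes the encoder to make that
    prediction hard, i.e. to strip private information from the representation."""
    rec_scores, attack_logits = model(source_history)
    rec_loss = nn.functional.cross_entropy(rec_scores, target_item)
    attack_loss = nn.functional.cross_entropy(attack_logits, private_label)
    loss = rec_loss + attack_loss  # the reversal layer already flips the encoder's sign
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return rec_loss.item(), attack_loss.item()


# Toy usage with random data (shapes only, not a real experiment).
model = PrivacyAwareTransfer(n_source_items=1000, n_target_items=500)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
history = torch.randint(0, 1000, (8, 20))   # 8 users, 20 source-domain interactions each
target = torch.randint(0, 500, (8,))        # target-domain item to predict
gender = torch.randint(0, 2, (8,))          # private attribute label (e.g. a binary demographic)
print(train_step(model, opt, history, target, gender))
```

The actual paper may optimize the game differently (e.g., alternating updates between the attacker and the transfer model), and the weighting between the recommendation and privacy terms is a tunable trade-off; treat this only as an illustration of the adversarial idea stated in the abstract.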
