Paper Title
A survey on Adversarial Recommender Systems: from Attack/Defense strategies to Generative Adversarial Networks

Authors

Deldjoo, Yashar, Di Noia, Tommaso, Merra, Felice Antonio

Abstract
Latent-factor models (LFM) based on collaborative filtering (CF), such as matrix factorization (MF) and deep CF methods, are widely used in modern recommender systems (RS) due to their excellent performance and recommendation accuracy. However, this success has been accompanied by a major new challenge: many applications of machine learning (ML) are adversarial in nature. In recent years, it has been shown that these methods are vulnerable to adversarial examples, i.e., subtle but non-random perturbations designed to force recommendation models to produce erroneous outputs. The goal of this survey is two-fold: (i) to present recent advances in adversarial machine learning (AML) for the security of RS (i.e., attacking and defending recommendation models), and (ii) to show another successful application of AML in generative adversarial networks (GANs) for generative tasks, thanks to their ability to learn (high-dimensional) data distributions. In this survey, we provide an exhaustive literature review of 74 articles published in major RS and ML journals and conferences. This review serves as a reference for the RS community working on the security of RS or on generative models using GANs to improve their quality.
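To make the notion of "subtle but non-random perturbations" concrete, the following is a minimal, hypothetical sketch (not from the paper) of a fast-gradient-sign-style perturbation applied to the item embedding of a toy matrix-factorization scorer. All names, dimensions, and the squared-error loss are illustrative assumptions.

```python
import numpy as np

# Toy matrix-factorization scorer: score = u . v (dimensions are illustrative)
rng = np.random.default_rng(0)
d = 16
u = rng.normal(size=d)   # user latent factors
v = rng.normal(size=d)   # item latent factors
r = 1.0                  # observed rating for this (user, item) pair

score = u @ v
loss = (score - r) ** 2  # squared-error training loss (an assumption)

# Gradient of the loss w.r.t. the item embedding
grad_v = 2.0 * (score - r) * u

# FGSM-style adversarial perturbation: small, deterministic, loss-increasing
eps = 0.05
v_adv = v + eps * np.sign(grad_v)

adv_loss = ((u @ v_adv) - r) ** 2
# The perturbed embedding drives the model to a larger prediction error
assert adv_loss >= loss
```

Because the perturbation follows the sign of the loss gradient, a tiny `eps` already moves the prediction away from the observed rating, which is the basic mechanism the survey's attack strategies build on.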
