Paper Title

Adversarial Item Promotion: Vulnerabilities at the Core of Top-N Recommenders that Use Images to Address Cold Start

Paper Authors

Zhuoran Liu, Martha Larson

Paper Abstract

E-commerce platforms provide their customers with ranked lists of recommended items matching the customers' preferences. Merchants on e-commerce platforms would like their items to appear as high as possible in the top-N of these ranked lists. In this paper, we demonstrate how unscrupulous merchants can create item images that artificially promote their products, improving their rankings. Recommender systems that use images to address the cold start problem are vulnerable to this security risk. We describe a new type of attack, Adversarial Item Promotion (AIP), that strikes directly at the core of Top-N recommenders: the ranking mechanism itself. Existing work on adversarial images in recommender systems investigates the implications of conventional attacks, which target deep learning classifiers. In contrast, our AIP attacks are embedding attacks that seek to push feature representations in a way that fools the ranker (not a classifier) and directly leads to item promotion. We introduce three AIP attacks: insider attack, expert attack, and semantic attack, which are defined with respect to three successively more realistic attack models. Our experiments evaluate the danger of these attacks when mounted against three representative visually-aware recommender algorithms in a framework that uses images to address cold start. We also evaluate potential defenses, including adversarial training, and find that common, currently existing techniques do not eliminate the danger of AIP attacks. In sum, we show that using images to address cold start opens recommender systems to potential threats with clear practical implications.
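To make the idea of an embedding attack concrete, the following is a minimal sketch, not the paper's actual method: it assumes a toy linear visual encoder `W`, a user preference vector `user`, and an inner-product ranker, all invented for illustration. An FGSM-style step perturbs the item image so its feature embedding moves in a direction the ranker scores more highly.

```python
import numpy as np

# Hedged toy sketch of an embedding ("item promotion") attack.
# W, user, and the linear encoder are illustrative assumptions,
# not components of the paper's AIP attacks.

rng = np.random.default_rng(0)
D_IMG, D_EMB = 64, 8                  # toy image and embedding dimensions
W = rng.normal(size=(D_EMB, D_IMG))   # stand-in visual encoder: f(x) = W @ x
user = rng.normal(size=D_EMB)         # a user's preference vector
x = rng.normal(size=D_IMG)            # the merchant's item image (flattened)

def score(img):
    """Ranker score: inner product of user preferences and item embedding."""
    return user @ (W @ img)

# For this linear encoder, the gradient of the score w.r.t. the image
# is W.T @ user. A small signed-gradient step raises the ranking score
# while keeping the per-pixel change bounded by eps.
eps = 0.05
grad = W.T @ user
x_adv = x + eps * np.sign(grad)

print(score(x), score(x_adv))  # the perturbed image scores strictly higher
```

The key contrast with a conventional classifier attack is the objective: the gradient step here targets the ranking score directly, rather than a class label, which is what makes the perturbed item climb the top-N list.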
