Paper Title
A black-box adversarial attack for poisoning clustering
Paper Authors
Paper Abstract
Clustering algorithms play a fundamental role as tools in decision-making and sensitive automation processes. Due to the widespread use of these applications, a robustness analysis of this family of algorithms against adversarial noise has become imperative. To the best of our knowledge, however, only a few works have addressed this problem so far. In an attempt to fill this gap, in this work we propose a black-box adversarial attack for crafting adversarial samples to test the robustness of clustering algorithms. We formulate the problem as a constrained minimization program, general in its structure and customizable by the attacker according to her capability constraints. We do not assume any information about the internal structure of the victim clustering algorithm, and we allow the attacker to query it only as a service. In the absence of any derivative information, we perform the optimization with a custom approach inspired by the Abstract Genetic Algorithm (AGA). In the experimental section, we demonstrate the sensitivity of different single and ensemble clustering algorithms to our crafted adversarial samples in different scenarios. Furthermore, we compare our attack with a state-of-the-art approach, showing that we are able to match or even outperform its performance. Finally, to highlight the general nature of the generated noise, we show that our attacks transfer even to supervised algorithms such as SVMs, random forests, and neural networks.
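The abstract leaves the attack's details to the paper, but the overall recipe it describes (query-only access to the victim clustering algorithm, a constrained minimization objective of the rough form "minimize similarity(C(X), C(X + delta)) subject to a norm bound on delta", and a genetic search in place of gradient information) can be illustrated with a short sketch. The snippet below is an assumption-laden toy, not the authors' AGA-based method: it assumes scikit-learn's KMeans as the victim service, an L-infinity budget eps as the capability constraint, and the adjusted Rand index (ARI) against the clean clustering as the similarity objective minimized by a basic genetic loop.

# Minimal sketch (NOT the paper's exact algorithm) of a black-box poisoning
# attack on a clustering "service", optimized with a simple genetic algorithm.
# Assumed here: scikit-learn KMeans as the victim, L-inf budget eps, ARI as
# the similarity measure to minimize.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs
from sklearn.metrics import adjusted_rand_score

rng = np.random.default_rng(0)
X, _ = make_blobs(n_samples=200, centers=3, random_state=0)

def cluster_service(data):
    # Query-only access: we observe label assignments, not model internals.
    return KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(data)

clean_labels = cluster_service(X)
eps = 0.5                                   # attacker's capability constraint
pop_size, n_gens, n_parents = 20, 30, 5

def fitness(delta):
    # Lower ARI = poisoned clustering diverges more from the clean one.
    return adjusted_rand_score(clean_labels, cluster_service(X + delta))

# Initial population of feasible perturbations (inside the eps-ball).
population = [rng.uniform(-eps, eps, size=X.shape) for _ in range(pop_size)]
for _ in range(n_gens):
    ranked = sorted(population, key=fitness)       # minimize ARI
    parents = ranked[:n_parents]
    children = []
    while len(children) < pop_size - n_parents:
        a, b = rng.choice(n_parents, size=2, replace=False)
        mask = rng.random(X.shape) < 0.5           # uniform crossover
        child = np.where(mask, parents[a], parents[b])
        child += rng.normal(0, 0.05, size=X.shape) # mutation
        children.append(np.clip(child, -eps, eps)) # project into eps-ball
    population = parents + children

best = min(population, key=fitness)
print("clean vs poisoned ARI:", fitness(best))

A realistic attack would typically perturb only a small subset of samples and use operators closer to the paper's AGA-inspired scheme; the point of the sketch is only that nothing about the victim is used beyond the labels it returns when queried as a service, which is also why the resulting noise can transfer to other (even supervised) learners.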