Paper Title

Robust Collective Classification against Structural Attacks

Authors

Kai Zhou, Yevgeniy Vorobeychik

Abstract

Collective learning methods exploit relations among data points to enhance classification performance. However, such relations, represented as edges in the underlying graphical model, expose an extra attack surface to the adversaries. We study adversarial robustness of an important class of such graphical models, Associative Markov Networks (AMN), to structural attacks, where an attacker can modify the graph structure at test time. We formulate the task of learning a robust AMN classifier as a bi-level program, where the inner problem is a challenging non-linear integer program that computes optimal structural changes to the AMN. To address this technical challenge, we first relax the attacker problem, and then use duality to obtain a convex quadratic upper bound for the robust AMN problem. We then prove a bound on the quality of the resulting approximately optimal solutions, and experimentally demonstrate the efficacy of our approach. Finally, we apply our approach in a transductive learning setting, and show that robust AMN is much more robust than state-of-the-art deep learning methods, while sacrificing little in accuracy on non-adversarial data.
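The relax-then-dualize step in the abstract can be illustrated on a toy instance. The sketch below is a hypothetical, simplified stand-in for the paper's inner attacker problem (the real AMN attack is a non-linear integer program): the attacker picks at most `k` binary modifications with additive gains `c`, we relax the binaries to [0, 1], and by weak duality any feasible point of the LP dual gives an upper bound on the attacker's integer optimum, which is exactly the kind of bound that can be folded into a single-level convex defender problem. All variable names here (`c`, `k`, `lam`, `mu`) are illustrative assumptions, not the paper's notation.

```python
import itertools
import numpy as np

# Hypothetical toy attacker problem: choose at most k binary changes,
# change i yields gain c[i].  Inner problem: max c.x s.t. sum(x) <= k.
c = np.array([3.0, -1.0, 2.0, 0.5])
k = 2

# Exact integer optimum by brute force (exponential; fine for n = 4).
best = max(c @ np.array(x)
           for x in itertools.product([0, 1], repeat=len(c))
           if sum(x) <= k)

# Relax x to [0, 1]; the LP dual is
#   min  k*lam + sum(mu)   s.t.  lam + mu_i >= c_i,  lam >= 0, mu >= 0.
# For any fixed lam >= 0, the optimal mu is mu_i = max(0, c_i - lam),
# so scanning a few candidate lam values suffices for this toy instance.
def dual_value(lam):
    return k * lam + np.maximum(c - lam, 0.0).sum()

ub = min(dual_value(lam) for lam in [0.0] + list(np.maximum(c, 0.0)))

# Weak duality: every dual value upper-bounds the integer optimum.
assert ub >= best - 1e-9
```

In this example the budget polytope is integral, so the dual bound is tight (both equal 5.0); for the AMN attack the relaxation generally only yields an upper bound, which is why the paper also proves a bound on the quality of the resulting approximate solutions.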
