Paper Title

LGV: Boosting Adversarial Example Transferability from Large Geometric Vicinity

Authors

Martin Gubri, Maxime Cordy, Mike Papadakis, Yves Le Traon, Koushik Sen

Abstract

We propose transferability from Large Geometric Vicinity (LGV), a new technique to increase the transferability of black-box adversarial attacks. LGV starts from a pretrained surrogate model and collects multiple weight sets from a few additional training epochs with a constant and high learning rate. LGV exploits two geometric properties that we relate to transferability. First, models that belong to a wider weight optimum are better surrogates. Second, we identify a subspace able to generate an effective surrogate ensemble among this wider optimum. Through extensive experiments, we show that LGV alone outperforms all (combinations of) four established test-time transformations by 1.8 to 59.9 percentage points. Our findings shed new light on the importance of the geometry of the weight space to explain the transferability of adversarial examples.
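The weight-collection step described in the abstract can be illustrated with a minimal sketch, assuming PyTorch. The function below fine-tunes a pretrained surrogate for a few extra epochs of SGD with a constant, high learning rate and saves several weight sets per epoch; the names `surrogate`, `train_loader`, `loss_fn` and the hyperparameter values are illustrative placeholders, not the authors' exact configuration.

```python
# Sketch of LGV-style weight collection (assumed PyTorch setup; hyperparameters are illustrative).
import copy
import torch

def collect_lgv_weights(surrogate, train_loader, loss_fn,
                        epochs=10, lr=0.05, saves_per_epoch=4,
                        device="cpu"):
    """Run a few additional epochs of SGD with a constant, high learning rate
    starting from a pretrained surrogate, saving weight sets along the way."""
    model = copy.deepcopy(surrogate).to(device)
    optimizer = torch.optim.SGD(model.parameters(), lr=lr, momentum=0.9)
    collected = []  # weight sets sampled in the vicinity of the pretrained optimum

    save_every = max(1, len(train_loader) // saves_per_epoch)
    model.train()
    for _ in range(epochs):
        for step, (x, y) in enumerate(train_loader):
            x, y = x.to(device), y.to(device)
            optimizer.zero_grad()
            loss_fn(model(x), y).backward()
            optimizer.step()
            # Periodically snapshot the weights to build the surrogate collection.
            if (step + 1) % save_every == 0:
                collected.append(copy.deepcopy(model.state_dict()))
    return collected
```

At attack time, the collected weight sets can be used as an ensemble surrogate, for example by loading one sampled set into the model at each attack iteration; the attack loop itself is not shown here.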
