Paper Title
Adversarially Training for Audio Classifiers
Paper Authors
Paper Abstract
In this paper, we investigate the potential effect of adversarial training on the robustness of six advanced deep neural networks against a variety of targeted and non-targeted adversarial attacks. We first show that the ResNet-56 model trained on the 2D representation of the discrete wavelet transform appended with the tonnetz chromagram outperforms the other models in terms of recognition accuracy. We then demonstrate the positive impact of adversarial training on this model, as well as on other deep architectures, against six types of attack algorithms (white-box and black-box), at the cost of reduced recognition accuracy and limited adversarial perturbation. We run our experiments on two benchmark environmental sound datasets and show that, without any limitation imposed on the adversary's budget, the fooling rate of adversarially trained models can exceed 90\%. In other words, adversarial attacks exist at any scale, but against adversarially trained models they may require higher adversarial perturbations than against non-adversarially trained models.
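The abstract does not spell out the feature-extraction pipeline, so the following is only a hypothetical sketch of how a 2D discrete wavelet transform representation appended with a tonnetz chromagram might be assembled. The wavelet family ('db4'), framing parameters, log scaling, and the specific librosa/pywt calls are assumptions for illustration, not the paper's exact procedure.

```python
import numpy as np
import librosa
import pywt

def dwt_tonnetz_representation(path, sr=22050, frame_len=1024, hop=512, level=4):
    """Hypothetical 2D input: frame-wise DWT coefficients stacked as an image,
    with the 6-row tonnetz chromagram appended below."""
    y, _ = librosa.load(path, sr=sr)
    # Frame the waveform and take the DWT of each frame; concatenating the
    # magnitudes of all coefficient bands gives one column per frame.
    frames = librosa.util.frame(y, frame_length=frame_len, hop_length=hop)
    cols = [np.abs(np.concatenate(pywt.wavedec(frames[:, i], 'db4', level=level)))
            for i in range(frames.shape[1])]
    dwt_image = np.log1p(np.stack(cols, axis=1))   # (coeff_bins, n_frames)
    # Tonnetz chromagram (6 x n_frames); librosa's default hop of 512 samples
    # keeps it roughly aligned with the DWT frames above.
    tonnetz = librosa.feature.tonnetz(y=y, sr=sr)
    n = min(dwt_image.shape[1], tonnetz.shape[1])
    return np.vstack([dwt_image[:, :n], tonnetz[:, :n]])
```

Likewise, the abstract does not state which adversarial-training procedure or hyperparameters are used; the snippet below is a minimal sketch of a generic PGD-based adversarial training loop in PyTorch, assuming a classifier `model` (e.g., a ResNet-56 over features like the ones above) and a DataLoader `loader`. The attack budget `eps`, step size `alpha`, and step count are illustrative placeholders.

```python
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=0.03, alpha=0.007, steps=10):
    """Craft L-infinity-bounded adversarial examples with PGD (illustrative values)."""
    x_adv = x.detach() + torch.empty_like(x).uniform_(-eps, eps)
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad, = torch.autograd.grad(loss, x_adv)
        # Ascend the loss, then project back into the eps-ball around x.
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps)
    return x_adv.detach()

def adversarial_training_epoch(model, loader, optimizer, device="cuda"):
    """One epoch of training on adversarial examples only (a common, simple variant)."""
    model.train()
    for x, y in loader:
        x, y = x.to(device), y.to(device)
        x_adv = pgd_attack(model, x, y)
        optimizer.zero_grad()
        loss = F.cross_entropy(model(x_adv), y)
        loss.backward()
        optimizer.step()
```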