Paper Title

Exposing the Robustness and Vulnerability of Hybrid 8T-6T SRAM Memory Architectures to Adversarial Attacks in Deep Neural Networks

Paper Authors

Abhishek Moitra, Priyadarshini Panda

Paper Abstract

Deep Learning is able to solve a plethora of once impossible problems. However, deep learning models are vulnerable to input adversarial attacks, preventing them from being autonomously deployed in critical applications. Several algorithm-centered works have discussed methods to cause adversarial attacks and to improve the adversarial robustness of a Deep Neural Network (DNN). In this work, we elicit the advantages and vulnerabilities of hybrid 6T-8T memories to improve adversarial robustness and to cause adversarial attacks on DNNs. We show that bit-error noise in hybrid memories due to erroneous 6T-SRAM cells has deterministic behaviour based on the hybrid memory configuration (V_DD, 8T-6T ratio). This controlled noise (surgical noise) can be strategically introduced into specific DNN layers to improve the adversarial accuracy of DNNs. At the same time, surgical noise can be carefully injected into the DNN parameters stored in hybrid memory to cause adversarial attacks. To improve the adversarial robustness of DNNs using surgical noise, we propose a methodology to select appropriate DNN layers and their corresponding hybrid memory configurations to introduce the required surgical noise. Using this, we achieve 2-8% higher adversarial accuracy against white-box attacks such as FGSM, without re-training, than the baseline models (with no surgical noise introduced). To demonstrate adversarial attacks using surgical noise, we design a novel white-box attack on DNN parameters stored in hybrid memory banks that causes the DNN inference accuracy to drop by more than 60% with over 90% confidence. We support our claims with experiments performed using the benchmark datasets CIFAR10 and CIFAR100 on VGG19 and ResNet18 networks.
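The abstract describes injecting controlled bit-error noise ("surgical noise") into DNN parameters held in hybrid 8T-6T SRAM, where bits mapped to 6T cells become error-prone as the supply voltage is scaled. Below is a minimal, hypothetical sketch of that idea in Python/NumPy; the function name `inject_surgical_noise`, the 8-bit quantization scheme, the MSB-in-8T / LSB-in-6T mapping, and the `n_8t_bits` / `bit_error_rate` parameters are illustrative assumptions standing in for the paper's (V_DD, 8T-6T ratio) configuration, not the authors' actual error model or code.

```python
# Hypothetical sketch (not the authors' code): emulating "surgical noise" as
# random bit-flips in the low-order bits of 8-bit quantized DNN weights,
# assuming MSBs sit in robust 8T cells and LSBs in error-prone 6T cells.
import numpy as np

def inject_surgical_noise(weights, n_8t_bits=4, bit_error_rate=1e-3,
                          n_bits=8, rng=None):
    """Return a copy of `weights` with bit-errors in the assumed 6T bits.

    weights        : float array holding one DNN layer's parameters
    n_8t_bits      : assumed number of MSBs per weight kept in 8T cells
    bit_error_rate : assumed per-bit flip probability of 6T cells at the
                     chosen scaled V_DD (the paper ties this to the
                     (V_DD, 8T-6T ratio) memory configuration)
    """
    rng = rng if rng is not None else np.random.default_rng()
    w = weights.ravel()

    # Simple symmetric linear quantization to signed n-bit integers,
    # re-interpreted as unsigned two's-complement codes for bit twiddling.
    scale = np.max(np.abs(w)) / (2 ** (n_bits - 1) - 1) + 1e-12
    q = np.round(w / scale).astype(np.int64) & ((1 << n_bits) - 1)

    # Only the low-order (6T-mapped) bit positions can flip.
    for bit in range(n_bits - n_8t_bits):
        flips = rng.random(q.shape) < bit_error_rate
        q = np.where(flips, q ^ (1 << bit), q)

    # De-quantize back to floats (undo the two's-complement re-interpretation).
    q_signed = np.where(q >= 2 ** (n_bits - 1), q - 2 ** n_bits, q)
    return (q_signed * scale).reshape(weights.shape).astype(weights.dtype)

# Example: perturb one random "layer" and inspect the induced weight noise.
layer = np.random.randn(64, 128).astype(np.float32)
noisy = inject_surgical_noise(layer, n_8t_bits=4, bit_error_rate=5e-3)
print("mean |delta_w|:", np.abs(noisy - layer).mean())
```

In a defence-style experiment one would apply such a perturbation to selected layers and measure adversarial accuracy; in an attack-style experiment one would instead search for the layers and memory configurations whose perturbation maximally degrades inference accuracy, mirroring the two uses of surgical noise described in the abstract.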
