Paper Title

Rethinking Non-idealities in Memristive Crossbars for Adversarial Robustness in Neural Networks

Paper Authors

Abhiroop Bhattacharjee, Priyadarshini Panda

Abstract

Deep Neural Networks (DNNs) have been shown to be prone to adversarial attacks. Memristive crossbars, being able to perform Matrix-Vector-Multiplications (MVMs) efficiently, are used to realize DNNs on hardware. However, crossbar non-idealities have always been devalued since they cause errors in performing MVMs, leading to computational accuracy losses in DNNs. Several software-based defenses have been proposed to make DNNs adversarially robust. However, no previous work has demonstrated the advantage conferred by the crossbar non-idealities in unleashing adversarial robustness. We show that the intrinsic hardware non-idealities yield adversarial robustness to the mapped DNNs without any additional optimization. We evaluate the adversarial resilience of state-of-the-art DNNs (VGG8 & VGG16 networks) using benchmark datasets (CIFAR-10, CIFAR-100 & Tiny Imagenet) across various crossbar sizes. We find that crossbar non-idealities unleash significantly greater adversarial robustness (>10-20%) in crossbar-mapped DNNs than baseline software DNNs. We further assess the performance of our approach with other state-of-the-art efficiency-driven adversarial defenses and find that our approach performs significantly well in terms of reducing adversarial loss.
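The abstract's core claim is that the analog MVM performed by a memristive crossbar deviates from the ideal software computation, and that this deviation can blunt adversarial perturbations crafted against the ideal weights. A minimal sketch of the idea, modeling one non-ideality (device-level conductance variation) as multiplicative noise on the weights — a hypothetical toy model (`nonideal_mvm`, `sigma`), not the paper's actual hardware simulator:

```python
import numpy as np

def nonideal_mvm(W, x, sigma=0.1, seed=0):
    """Toy model of a memristive-crossbar MVM.

    Each weight is perturbed by conductance variation (one source of
    crossbar non-ideality) before the analog dot-product.  `sigma`
    controls the relative device variation; seed fixed for repeatability.
    """
    rng = np.random.default_rng(seed)
    W_dev = W * (1.0 + sigma * rng.standard_normal(W.shape))
    return W_dev @ x

# A DNN mapped onto crossbars effectively computes with W_dev, not W.
# Adversarial inputs are typically crafted against the ideal W, so the
# intrinsic hardware perturbation can degrade the attack's effectiveness.
W = np.array([[0.5, -0.2], [0.1, 0.8]])
x = np.array([1.0, -1.0])
print(nonideal_mvm(W, x))   # noisy crossbar output
print(W @ x)                # ideal software output: [0.7, -0.7]
```

With `sigma=0` the model reduces to the exact software MVM, which is a useful sanity check when experimenting with larger noise levels or crossbar sizes.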
