Paper Title

Sponge Examples: Energy-Latency Attacks on Neural Networks

Authors

Ilia Shumailov, Yiren Zhao, Daniel Bates, Nicolas Papernot, Robert Mullins, Ross Anderson

Abstract

The high energy costs of neural network training and inference led to the use of acceleration hardware such as GPUs and TPUs. While this enabled us to train large-scale neural networks in datacenters and deploy them on edge devices, the focus so far is on average-case performance. In this work, we introduce a novel threat vector against neural networks whose energy consumption or decision latency are critical. We show how adversaries can exploit carefully crafted $\boldsymbol{sponge}~\boldsymbol{examples}$, which are inputs designed to maximise energy consumption and latency. We mount two variants of this attack on established vision and language models, increasing energy consumption by a factor of 10 to 200. Our attacks can also be used to delay decisions where a network has critical real-time performance, such as in perception for autonomous vehicles. We demonstrate the portability of our malicious inputs across CPUs and a variety of hardware accelerator chips including GPUs, and an ASIC simulator. We conclude by proposing a defense strategy which mitigates our attack by shifting the analysis of energy consumption in hardware from an average-case to a worst-case perspective.
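To make the idea concrete, below is a minimal, hypothetical sketch of the gradient-based variant of a sponge-example search, written in PyTorch. The victim model (an untrained torchvision ResNet-18 stand-in), the optimiser settings, and the energy proxy (total ReLU activation magnitude, so that denser activations leave less sparsity for the hardware to skip) are illustrative assumptions, not the paper's exact setup.

```python
# Illustrative sketch: gradient ascent on an input to maximise an energy proxy.
# Assumptions: PyTorch, a stand-in torchvision ResNet-18 as the "victim", and the
# total magnitude of ReLU activations as a differentiable proxy for energy cost.
import torch
import torchvision.models as models

# Untrained weights keep the sketch self-contained; an attacker would target the
# deployed model instead.
model = models.resnet18(weights=None).eval()

activations = []

def record(module, inputs, output):
    # Store each ReLU output so we can measure how "dense" the forward pass is.
    activations.append(output)

for m in model.modules():
    if isinstance(m, torch.nn.ReLU):
        m.register_forward_hook(record)

x = torch.rand(1, 3, 224, 224, requires_grad=True)  # start from a random image
opt = torch.optim.Adam([x], lr=0.05)

for step in range(100):
    activations.clear()
    opt.zero_grad()
    model(x)
    # Energy proxy: total activation magnitude across all ReLU layers.
    energy_proxy = sum(a.abs().sum() for a in activations)
    (-energy_proxy).backward()   # ascend on the proxy
    opt.step()
    x.data.clamp_(0.0, 1.0)      # keep the input a valid image
```

The same search could instead be driven by a black-box genetic algorithm when gradients are unavailable; the choice of proxy and optimiser here is only one plausible instantiation.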
