Paper Title

Preventing Distillation-based Attacks on Neural Network IP

Paper Authors

Mahdieh Grailoo, Zain Ul Abideen, Mairo Leier, Samuel Pagliarini

Paper Abstract


Neural networks (NNs) are already deployed in hardware today, becoming valuable intellectual property (IP) as many hours are invested in their training and optimization. Therefore, attackers may be interested in copying, reverse engineering, or even modifying this IP. The current practices in hardware obfuscation, including the widely studied logic locking technique, are insufficient to protect the actual IP of a well-trained NN: its weights. Simply hiding the weights behind a key-based scheme is inefficient (resource-hungry) and inadequate (attackers can exploit knowledge distillation). This paper proposes an intuitive method to poison the predictions, which prevents distillation-based attacks; this is the first work to consider such a poisoning approach in hardware-implemented NNs. The proposed technique obfuscates an NN so that an attacker cannot train the NN entirely or accurately. We elaborate a threat model that highlights the difference between random logic obfuscation and the obfuscation of NN IP. Based on this threat model, our security analysis shows that the poisoning successfully and significantly reduces the accuracy of the stolen NN model on various representative datasets. Moreover, the accuracy and prediction distributions are maintained, no functionality is disturbed, and no high overheads are incurred. Finally, we highlight that our proposed approach is flexible and does not require manipulation of the NN toolchain.
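To make the abstract's core idea concrete, below is a minimal, hypothetical sketch (in NumPy) of what prediction poisoning against distillation could look like at the output layer. The function name poison_logits, the additive-noise mechanism, and all parameter values are illustrative assumptions, not the paper's actual hardware technique. The sketch preserves the hard (argmax) prediction, so the returned class and hence classification accuracy are untouched, while corrupting the soft probabilities that a distillation attacker would use as training targets for a surrogate model.

```python
import numpy as np

def poison_logits(logits, noise_scale=2.0, seed=None):
    """Hypothetical prediction-poisoning sketch (not the paper's mechanism):
    perturb every non-top logit so the hard prediction (argmax) is preserved,
    while the soft probability distribution seen by an attacker is misleading."""
    rng = np.random.default_rng(seed)
    poisoned = logits.copy()
    top = np.argmax(logits)
    # Add random noise to every class except the predicted one ...
    noise = rng.normal(0.0, noise_scale, size=logits.shape)
    noise[top] = 0.0
    poisoned += noise
    # ... and clamp so no poisoned logit overtakes the true top class.
    margin = 1e-3
    poisoned = np.minimum(poisoned, logits[top] - margin)
    poisoned[top] = logits[top]
    return poisoned

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

# Example: the hard label is unchanged, but the soft labels are scrambled.
clean = np.array([2.0, 0.5, 1.2, -0.3])
dirty = poison_logits(clean, seed=0)
assert np.argmax(dirty) == np.argmax(clean)
print(softmax(clean))  # the teacher's true soft outputs
print(softmax(dirty))  # what a distillation attacker would observe
```

A distillation attacker relies on these soft outputs to train a surrogate, so the scrambled probabilities degrade the stolen model, while a legitimate user who reads only the top-1 class sees no difference. Note that the paper realizes its poisoning inside hardware-implemented NNs without touching the NN toolchain; the snippet above is only a software analogy of the general idea.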
