Paper Title

Semantic Robustness of Models of Source Code

Paper Authors

Goutham Ramakrishnan, Jordan Henkel, Zi Wang, Aws Albarghouthi, Somesh Jha, Thomas Reps

Paper Abstract

Deep neural networks are vulnerable to adversarial examples: small input perturbations that result in incorrect predictions. We study this problem for models of source code, where we want the network to be robust to source-code modifications that preserve code functionality. (1) We define a powerful adversary that can employ sequences of parametric, semantics-preserving program transformations; (2) we show how to perform adversarial training to learn models robust to such adversaries; (3) we conduct an evaluation on different languages and architectures, demonstrating significant quantitative gains in robustness.
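To make "semantics-preserving program transformation" concrete, the sketch below applies one common example of such a transformation, variable renaming, to a Python snippet and checks that the program's behavior is unchanged. This is a hypothetical illustration using Python's standard `ast` module, not the paper's implementation; the class name `RenameVariable` and the sample function `add` are made up for this example.

```python
import ast


class RenameVariable(ast.NodeTransformer):
    """Rename one identifier everywhere it occurs.

    A minimal semantics-preserving transformation: the program text
    changes, but the function it computes does not.
    """

    def __init__(self, old: str, new: str):
        self.old, self.new = old, new

    def visit_Name(self, node: ast.Name) -> ast.Name:
        # Variable uses and assignments.
        if node.id == self.old:
            node.id = self.new
        return node

    def visit_arg(self, node: ast.arg) -> ast.arg:
        # Function parameters.
        if node.arg == self.old:
            node.arg = self.new
        return node


source = "def add(x, y):\n    return x + y\n"
tree = RenameVariable("x", "z").visit(ast.parse(source))
transformed = ast.unparse(tree)  # requires Python 3.9+

# Original and transformed programs compute the same function.
ns1, ns2 = {}, {}
exec(source, ns1)
exec(transformed, ns2)
assert ns1["add"](2, 3) == ns2["add"](2, 3)
print(transformed)
```

An adversary in the spirit of the abstract would search over sequences of such transformations (and their parameters, e.g. which name to substitute) for the variant that maximizes the model's loss, and adversarial training would then train on those worst-case variants.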
