Paper Title

Adversarial Robustness for Code

Authors

Pavol Bielik, Martin Vechev

Abstract

Machine learning, and deep learning in particular, has recently been used to successfully address many tasks in the domain of code, such as finding and fixing bugs, code completion, decompilation, type inference, and many others. However, the issue of adversarial robustness of models for code has gone largely unnoticed. In this work, we explore this issue by: (i) instantiating adversarial attacks for code (a domain with discrete and highly structured inputs), (ii) showing that, similar to other domains, neural models for code are vulnerable to adversarial attacks, and (iii) combining existing and novel techniques to improve robustness while preserving high accuracy.
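To make point (i) concrete, the sketch below shows one common way an adversarial attack for code can be instantiated: a semantics-preserving perturbation that renames identifiers until the model's prediction changes. This is a minimal illustration under assumptions, not the paper's actual attack procedure; the model interface `predict` and the candidate names are hypothetical stand-ins.

```python
import re

# Hypothetical candidate replacement names; any valid identifiers would do.
CANDIDATE_NAMES = ["tmp", "data", "val", "res", "obj"]

def rename_identifier(source: str, old: str, new: str) -> str:
    """Alpha-rename one identifier (whole-word match only).

    Renaming preserves program semantics, so if the model's output
    changes, the change is purely adversarial. (A robust implementation
    would rename via the AST to avoid touching string literals.)
    """
    return re.sub(rf"\b{re.escape(old)}\b", new, source)

def find_adversarial_rename(source: str, identifiers, predict):
    """Search single-identifier renames for one that flips the prediction.

    `predict` is a hypothetical stand-in for any model of code, e.g. a
    type-inference or bug-detection model mapping source -> label.
    """
    original = predict(source)
    for ident in identifiers:
        for name in CANDIDATE_NAMES:
            if name == ident:
                continue
            candidate = rename_identifier(source, ident, name)
            if predict(candidate) != original:
                return candidate, predict(candidate)  # attack succeeded
    return None, original  # no single rename changed the prediction
```

Searches of this kind often succeed because models for code latch onto superficial naming cues rather than program semantics, which is the vulnerability that point (ii) refers to.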
