Paper Title

A survey on practical adversarial examples for malware classifiers

Authors

Daniel Park and Bülent Yener

Abstract

Machine learning based solutions have been very helpful in solving problems that deal with immense amounts of data, such as malware detection and classification. However, deep neural networks have been found to be vulnerable to adversarial examples, or inputs that have been purposefully perturbed to result in an incorrect label. Researchers have shown that this vulnerability can be exploited to create evasive malware samples. However, many proposed attacks do not generate an executable and instead generate a feature vector. To fully understand the impact of adversarial examples on malware detection, we review practical attacks against malware classifiers that generate executable adversarial malware examples. We also discuss current challenges in this area of research, as well as suggestions for improvement and future research directions.
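
For readers unfamiliar with the attack family the survey builds on, below is a minimal, illustrative sketch of the fast gradient sign method (FGSM, Goodfellow et al., 2015), the canonical way to perturb an input toward an incorrect label. It is not from the paper; `model`, `x`, and `y` are assumed to be a PyTorch classifier, an input feature tensor, and its true label.

```python
import torch
import torch.nn.functional as F

def fgsm(model, x, y, epsilon=0.01):
    # Untargeted FGSM: nudge every input feature in the direction
    # that increases the classifier's loss, bounded by epsilon.
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    x_adv = x + epsilon * x.grad.sign()
    return x_adv.detach()
```

As the abstract stresses, a perturbation like this lives in feature space: applied to a malware feature vector it does not by itself yield a valid, functional executable. The practical attacks the survey reviews are those that modify the binary itself while preserving its behavior.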
