Title

Deep ahead-of-threat virtual patching

Authors

Fady Copty, Andre Kassis, Sharon Keidar-Barner, Dov Murik

Abstract


Many applications have security vulnerabilities that can be exploited. It is practically impossible to find all of them due to the NP-complete nature of the testing problem. Security solutions provide defenses against these attacks through continuous application testing, fast patching of vulnerabilities, automatic deployment of patches, and virtual-patching detection techniques deployed in network and endpoint security tools. These techniques are limited by the need to find vulnerabilities before the black-hats do. We propose an innovative technique to virtually patch vulnerabilities before they are found. We leverage testing techniques for supervised-learning data generation, and show how artificial intelligence techniques can use this data to create predictive deep neural-network models that read an application's input and predict in real time whether it is a potentially malicious input. We set up an ahead-of-threat experiment in which we generated data on old versions of an application, and then evaluated the predictive model's accuracy on vulnerabilities found years later. Our experiments show ahead-of-threat detection on LibXML2 and LibTIFF vulnerabilities with 91.3% and 93.7% accuracy, respectively. We expect to continue work in this field of research and provide ahead-of-threat virtual patching for more libraries. Success in this research can change the current state of endless racing after application vulnerabilities and put the defenders one step ahead of the attackers.
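The core idea described in the abstract — a learned model that reads an application's raw input and scores it as potentially malicious before any specific vulnerability is known — can be illustrated with a minimal sketch. The paper trains deep neural networks on data generated by testing (fuzzing) techniques; here, as a stand-in under stated assumptions, a hand-rolled logistic regression over byte-frequency histograms plays the role of the DNN, and small synthetic samples play the role of fuzzer-generated training data. All names, data, and thresholds below are illustrative, not the paper's implementation.

```python
import numpy as np

def byte_histogram(data: bytes) -> np.ndarray:
    """Map a raw input to a normalized 256-bin byte-frequency vector."""
    hist = np.bincount(np.frombuffer(data, dtype=np.uint8), minlength=256)
    return hist / max(len(data), 1)

def train(X, y, lr=1.0, epochs=500):
    """Plain gradient-descent logistic regression (toy stand-in for the
    deep neural-network classifier described in the paper)."""
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # predicted P(malicious)
        grad = p - y                              # cross-entropy gradient
        w -= lr * (X.T @ grad) / len(y)
        b -= lr * grad.mean()
    return w, b

def predict(w, b, data: bytes) -> float:
    """Score a raw application input: probability it is malicious."""
    x = byte_histogram(data)
    return float(1.0 / (1.0 + np.exp(-(x @ w + b))))

# Synthetic "fuzzer" corpus: benign inputs are short ASCII, XML-like
# strings; "malicious" inputs are long runs of high-valued bytes, a
# crude caricature of overflow-style payloads.
rng = np.random.default_rng(0)
benign = [b"<doc><item>%d</item></doc>" % i for i in range(50)]
malicious = [rng.integers(0x80, 0x100, size=int(rng.integers(200, 400)),
                          dtype=np.uint8).tobytes() for _ in range(50)]
X = np.array([byte_histogram(s) for s in benign + malicious])
y = np.array([0] * len(benign) + [1] * len(malicious))
w, b = train(X, y)

print("benign-looking score: ", predict(w, b, b"<doc><item>7</item></doc>"))
print("payload-looking score:", predict(w, b, b"\xff" * 300))
```

The sketch captures the deployment shape the abstract implies: the model sits in front of the application as a virtual patch, scoring each input in real time with no knowledge of any concrete CVE. The actual work replaces the toy featurization and linear model with deep networks trained on inputs generated by systematic testing of LibXML2 and LibTIFF.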
