Paper Title

Omni: Automated Ensemble with Unexpected Models against Adversarial Evasion Attack

Paper Authors

Rui Shu, Tianpei Xia, Laurie Williams, Tim Menzies

Paper Abstract

Background: Machine learning-based security detection models have become prevalent in modern malware and intrusion detection systems. However, previous studies show that such models are susceptible to adversarial evasion attacks. In this type of attack, inputs (i.e., adversarial examples) are specially crafted by intelligent malicious adversaries, with the aim of being misclassified by existing state-of-the-art models (e.g., deep neural networks). Once the attackers can fool a classifier into thinking that a malicious input is actually benign, they can render a machine learning-based malware or intrusion detection system ineffective. Goal: To help security practitioners and researchers build a more robust model against non-adaptive, white-box, and non-targeted adversarial evasion attacks through the idea of an ensemble model. Method: We propose an approach called Omni, the main idea of which is to explore methods that create an ensemble of "unexpected models"; i.e., models whose control hyperparameters have a large distance to the hyperparameters of an adversary's target model, with which we then make an optimized weighted ensemble prediction. Result: In studies with five types of adversarial evasion attacks (FGSM, BIM, JSMA, DeepFool, and Carlini-Wagner) on five security datasets (NSL-KDD, CIC-IDS-2017, CSE-CIC-IDS2018, CICAndMal2017, and the Contagio PDF dataset), we show Omni is a promising approach as a defense strategy against adversarial attacks when compared with other baseline treatments. Conclusion: When employing ensemble defense against adversarial evasion attacks, we suggest creating an ensemble with unexpected models that are distant from the attacker's expected model (i.e., target model) through methods such as hyperparameter optimization.
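To make the "unexpected models" idea in the Method section concrete, the sketch below (Python, not the authors' implementation) illustrates the two ingredients described in the abstract: selecting candidate models whose hyperparameters lie far, under a normalized distance, from a presumed attacker target model, and combining the selected models with a weighted ensemble vote. The decision-tree model family, the specific hyperparameters, the random search, and the validation-accuracy weights are illustrative assumptions; in the paper the unexpected models come from hyperparameter optimization, for which the random search here is only a stand-in.

```python
# Minimal sketch of an "unexpected models" ensemble (assumptions noted above).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_val, y_train, y_val = train_test_split(X, y, random_state=0)

# Hypothetical hyperparameters of the attacker's expected (target) model.
target_hp = {"max_depth": 5, "min_samples_leaf": 10}

# Candidate hyperparameter settings drawn by a simple random search.
rng = np.random.default_rng(0)
candidates = [{"max_depth": int(d), "min_samples_leaf": int(l)}
              for d, l in zip(rng.integers(2, 30, 40), rng.integers(1, 50, 40))]

SCALE = {"max_depth": 30, "min_samples_leaf": 50}  # per-hyperparameter normalization

def hp_distance(a, b):
    """Normalized Euclidean distance between two hyperparameter settings."""
    return np.sqrt(sum(((a[k] - b[k]) / SCALE[k]) ** 2 for k in a))

# Keep the settings most distant from the target model: the "unexpected" models.
candidates.sort(key=lambda hp: hp_distance(hp, target_hp), reverse=True)
ensemble = []
for hp in candidates[:10]:
    clf = DecisionTreeClassifier(random_state=0, **hp).fit(X_train, y_train)
    ensemble.append((clf, clf.score(X_val, y_val)))  # weight = validation accuracy

def predict(X_new):
    """Weighted vote over the ensemble members' class probabilities."""
    weights = np.array([w for _, w in ensemble])
    probs = np.array([clf.predict_proba(X_new) for clf, _ in ensemble])
    return np.argmax(np.average(probs, axis=0, weights=weights), axis=1)

print("ensemble accuracy:", (predict(X_val) == y_val).mean())
```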
