Paper Title

Generative Adversarial Network-Driven Detection of Adversarial Tasks in Mobile Crowdsensing

Authors

Zhiyan Chen, Burak Kantarci

Abstract

Mobile Crowdsensing (MCS) systems are vulnerable to various attacks as they build on non-dedicated and ubiquitous properties. Machine learning (ML)-based approaches have been widely investigated to build attack detection systems and ensure MCS security. However, adversaries that aim to clog the sensing front-end and MCS back-end leverage intelligent techniques, making it challenging for MCS platforms and service providers to develop appropriate detection frameworks against these attacks. Generative Adversarial Networks (GANs) have been applied to generate synthetic samples that are extremely similar to real ones, deceiving classifiers such that the synthetic samples are indistinguishable from the originals. Previous works suggest that GAN-based attacks cause more severe damage than empirically designed attack samples and result in a low detection rate at the MCS platform. With this in mind, this paper aims to detect intelligently designed illegitimate sensing service requests by integrating a GAN-based model. To this end, we propose a two-level cascading classifier that combines a GAN discriminator with a binary classifier to prevent adversarial fake tasks. Through simulations, we compare our results to a single-level binary classifier; the numeric results show that the proposed approach raises the Adversarial Attack Detection Rate (AADR) from $0\%$ to $97.5\%$ with KNN/NB, and from $45.9\%$ to $100\%$ with Decision Tree. Meanwhile, with the two-level classifier, the Original Attack Detection Rate (OADR) improves for all three binary classifiers, e.g., for NB from $26.1\%$ to $61.5\%$.
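The decision flow of the two-level cascade described in the abstract can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: the function names, score-based interface, and thresholds are all hypothetical, standing in for a trained GAN discriminator (level 1) and a trained binary classifier such as KNN, NB, or Decision Tree (level 2), each assumed to emit a score in [0, 1].

```python
# Hypothetical stand-ins for the two cascade stages. In the paper, level 1 is
# a GAN discriminator that screens for GAN-generated (adversarial) task
# requests, and level 2 is a binary classifier (e.g., KNN/NB/Decision Tree)
# that separates remaining fake tasks from legitimate ones.

DISC_THRESHOLD = 0.5  # illustrative level-1 cutoff
CLF_THRESHOLD = 0.5   # illustrative level-2 cutoff


def cascade_detect(disc_score: float, clf_score: float) -> str:
    """Classify a sensing task request with the two-level cascade.

    disc_score: level-1 discriminator's belief the request is GAN-generated.
    clf_score:  level-2 binary classifier's belief the request is a fake task.
    """
    if disc_score >= DISC_THRESHOLD:
        return "adversarial"   # rejected at level 1 (GAN-generated)
    if clf_score >= CLF_THRESHOLD:
        return "fake"          # rejected at level 2 (ordinary fake task)
    return "legitimate"        # passed both stages
```

Only requests that survive the discriminator reach the second stage, which is what lets the cascade recover detection of GAN-crafted tasks that a single-level binary classifier misses.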
