Paper Title
Deep Learning Defenses Against Adversarial Examples for Dynamic Risk Assessment
Paper Authors
Paper Abstract
Deep Neural Networks were first developed decades ago, but it was not until recently that they started being extensively used, owing to their computing power requirements. Since then, they have been increasingly applied to many fields and have undergone far-reaching advances. More importantly, they have been utilized for critical matters, such as decision-making in healthcare procedures or autonomous driving, where risk management is crucial. Any mistake in diagnostics or decision-making in these fields could lead to grave accidents, and even death. This is worrying, because it has been repeatedly reported that attacking models of this type is straightforward. Thus, these attacks must be studied so that their risk can be assessed, and defenses need to be developed to make models more robust. For this work, the most widely known attack (the adversarial attack) was selected, and several defenses were implemented against it (namely adversarial training, dimensionality reduction, and prediction similarity). The obtained results make the model more robust while maintaining a similar accuracy. The idea was developed using a breast cancer dataset with VGG16 and dense neural network models, but the solution could be applied to datasets from other areas and to different convolutional and dense deep neural network models.
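To make the "adversarial attack" named in the abstract concrete, the following is a minimal, self-contained sketch of the Fast Gradient Sign Method (FGSM), a standard way to craft adversarial examples. It is an illustration only, not the paper's implementation: it uses a toy logistic-regression model with hand-picked weights (all values below are assumptions), for which the loss gradient with respect to the input can be written in closed form.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def fgsm(x, y, w, b, eps):
    """Fast Gradient Sign Method on a logistic model p = sigmoid(w.x + b).

    Each feature is nudged by eps in the sign of the loss gradient,
    which for the cross-entropy loss of this model is (p - y) * w_i.
    """
    p = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
    return [xi + eps * math.copysign(1.0, (p - y) * wi)
            for wi, xi in zip(w, x)]

# Toy model and input (illustrative values, not from the paper).
w, b = [2.0, -1.0], 0.0
x, y = [0.4, -0.3], 1.0          # clean input, true label 1

clean_pred = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
x_adv = fgsm(x, y, w, b, eps=0.6)
adv_pred = sigmoid(sum(wi * xi for wi, xi in zip(w, x_adv)) + b)
```

Here `clean_pred` is above 0.5 (class 1) while `adv_pred` falls below 0.5: a small, bounded perturbation flips the prediction. Adversarial training, one of the defenses listed above, simply adds such perturbed inputs (with their correct labels) back into the training set.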