Paper Title
The Security of Deep Learning Defences for Medical Imaging
Paper Authors
Paper Abstract
Deep learning has shown great promise in the domain of medical image analysis. Medical professionals and healthcare providers have been adopting the technology to speed up and enhance their work. These systems use deep neural networks (DNNs), which are vulnerable to adversarial samples: images with imperceptible changes that can alter the model's prediction. Researchers have proposed defences that either make a DNN more robust or detect adversarial samples before they can do harm. However, none of these works considers an informed attacker who can adapt to the defence mechanism. We show that an informed attacker can evade five current state-of-the-art defences while still successfully fooling the victim's deep learning model, rendering these defences useless. We then suggest better alternatives for securing healthcare DNNs against such attacks: (1) hardening the system's security and (2) using digital signatures.
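As an illustration of the threat described in the abstract, below is a minimal sketch of one standard way to craft an adversarial sample, the Fast Gradient Sign Method (FGSM). The abstract does not name a specific attack; FGSM, PyTorch, and the names model, image, and label (standing in for the victim DNN and a medical image/label pair) are assumptions for illustration only.

import torch
import torch.nn.functional as F

def fgsm_attack(model, image, label, epsilon=0.01):
    # Perturb the image so the model's prediction changes while the
    # change stays imperceptible (bounded elementwise by epsilon).
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # Step each pixel in the direction that most increases the loss.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0.0, 1.0).detach()  # assumes pixels in [0, 1]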
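Suggestion (2) can be sketched in the same spirit: the imaging device signs each scan at acquisition time, and the inference service verifies the signature before running the DNN, rejecting any image whose bytes were altered in transit. The abstract does not specify a signature scheme; Ed25519 via the Python cryptography package and the placeholder payload below are assumptions.

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

private_key = Ed25519PrivateKey.generate()    # held by the imaging device
public_key = private_key.public_key()         # distributed to the DNN service

scan_bytes = b"...raw scan bytes..."          # placeholder image payload
signature = private_key.sign(scan_bytes)      # attached when the scan is taken

try:
    public_key.verify(signature, scan_bytes)  # raises if any byte changed
    prediction_allowed = True                 # safe to run inference
except InvalidSignature:
    prediction_allowed = False                # tampering (e.g. adversarial noise) detected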