Title
EEG-Based Brain-Computer Interfaces Are Vulnerable to Backdoor Attacks
Authors
Abstract
Research and development of electroencephalogram (EEG) based brain-computer interfaces (BCIs) have advanced rapidly, partly due to a deeper understanding of the brain and the wide adoption of sophisticated machine learning approaches for decoding EEG signals. However, recent studies have shown that machine learning algorithms are vulnerable to adversarial attacks. This article proposes using narrow period pulses for poisoning attacks on EEG-based BCIs, which is implementable in practice and has never been considered before. By injecting poisoning samples into the training set, an attacker can create dangerous backdoors in the machine learning model. Test samples carrying the backdoor key will then be classified into the target class specified by the attacker. What most distinguishes our approach from previous ones is that the backdoor key does not need to be synchronized with the EEG trials, making the attack very easy to implement. The effectiveness and robustness of the backdoor attack approach are demonstrated, highlighting a critical security concern for EEG-based BCIs and calling for urgent attention to address it.
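To make the attack described above concrete, the following is a minimal sketch (not the paper's implementation) of the poisoning step: a narrow period pulse is added to a small fraction of training trials, which are then relabeled with the attacker's target class. All function names and parameter values (pulse period, width, amplitude, poisoning rate) are illustrative assumptions; because the pulse is periodic, it needs no synchronization with trial onsets, matching the property the abstract emphasizes.

```python
import numpy as np

def narrow_period_pulse(n_samples, fs, period=0.1, width=0.01, amplitude=1.0):
    """Generate a narrow period pulse (NPP): short rectangular pulses
    repeated at a fixed period. Parameter values are illustrative."""
    t = np.arange(n_samples) / fs
    # A sample is inside a pulse when its phase within the period < width.
    return amplitude * ((t % period) < width).astype(float)

def poison_trials(trials, labels, fs, rate=0.1, target_label=0, seed=None):
    """Inject the NPP key into a random fraction of EEG trials and
    relabel them as the attacker-specified target class (sketch).

    trials: array of shape (n_trials, n_channels, n_samples)
    labels: integer array of shape (n_trials,)
    """
    rng = np.random.default_rng(seed)
    trials, labels = trials.copy(), labels.copy()
    n_trials, _, n_samples = trials.shape
    idx = rng.choice(n_trials, size=int(rate * n_trials), replace=False)
    key = narrow_period_pulse(n_samples, fs)
    trials[idx] += key          # broadcasts over channels
    labels[idx] = target_label  # flip poisoned labels to the target class
    return trials, labels, idx
```

A model trained on the poisoned set would behave normally on clean trials, but adding the same periodic pulse to any test trial, at any phase offset, steers its prediction toward `target_label`.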