Paper Title
Suppressing Poisoning Attacks on Federated Learning for Medical Imaging
Paper Authors
Paper Abstract
Collaboration among multiple data-owning entities (e.g., hospitals) can accelerate the training process and yield better machine learning models due to the availability and diversity of data. However, privacy concerns make it challenging to exchange data while preserving confidentiality. Federated Learning (FL) is a promising solution that enables collaborative training through the exchange of model parameters instead of raw data. However, most existing FL solutions work under the assumption that participating clients are \emph{honest} and thus can fail against poisoning attacks from malicious parties, whose goal is to deteriorate the global model performance. In this work, we propose a robust aggregation rule called Distance-based Outlier Suppression (DOS) that is resilient to Byzantine failures. The proposed method computes the distances between the local parameter updates of different clients and obtains an outlier score for each client using Copula-based Outlier Detection (COPOD). The resulting outlier scores are converted into normalized weights using a softmax function, and a weighted average of the local parameters is used to update the global model. DOS aggregation can effectively suppress parameter updates from malicious clients without the need for any hyperparameter selection, even when the data distributions are heterogeneous. Evaluation on two medical imaging datasets (CheXpert and HAM10000) demonstrates the higher robustness of the DOS method against a variety of poisoning attacks in comparison to other state-of-the-art methods. The code can be found at https://github.com/Naiftt/SPAFD.
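The aggregation pipeline described in the abstract (pairwise distances between client updates, per-client outlier scores, softmax weighting, weighted averaging) can be sketched as below. This is a minimal illustration, not the authors' implementation: the paper derives outlier scores with COPOD on the distance matrix, whereas this sketch substitutes a simple proxy (each client's mean distance to the others) to stay self-contained; the function name `dos_aggregate` is hypothetical.

```python
import numpy as np

def dos_aggregate(updates):
    """Sketch of distance-based outlier suppression.

    updates: (n_clients, n_params) array of flattened local parameter updates.
    Returns the aggregated global update as an (n_params,) array.
    """
    n = len(updates)
    # Pairwise Euclidean distances between the clients' parameter updates.
    dist = np.linalg.norm(updates[:, None, :] - updates[None, :, :], axis=-1)
    # Proxy outlier score: mean distance of each client to all other clients.
    # (The paper instead applies COPOD to the distance matrix at this step.)
    scores = dist.sum(axis=1) / (n - 1)
    # Softmax over negated scores: a higher outlier score yields a lower weight,
    # suppressing the contribution of anomalous (potentially malicious) clients.
    weights = np.exp(-scores)
    weights /= weights.sum()
    # Weighted average of the local updates forms the new global update.
    return weights @ updates
```

With three mutually close "honest" updates and one far-off "malicious" update, the malicious client receives a near-zero weight and the aggregate stays close to the honest clients' mean.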