Paper Title
Federated Zero-Shot Learning for Visual Recognition
Paper Authors
Paper Abstract
Zero-shot learning (ZSL) is a learning regime that recognizes unseen classes by generalizing the visual-semantic relationships learned from seen classes. To obtain an effective ZSL model, one may resort to curating training samples from multiple sources, which inevitably raises privacy concerns about data sharing across organizations. In this paper, we propose a novel Federated Zero-Shot Learning (FedZSL) framework, which learns a central model from decentralized data residing on edge devices. To generalize better to previously unseen classes, FedZSL allows the training data on each device to be sampled from non-overlapping classes, a setting far from the i.i.d. distribution that traditional federated learning commonly assumes. We identify two key challenges in the FedZSL protocol: 1) the trained models are prone to bias toward the locally observed classes, and thus fail to generalize to unseen classes and/or seen classes that appear only on other devices; 2) because each category in the training data comes from a single source, the central model is highly vulnerable to model-replacement (backdoor) attacks. To address these issues, we propose three local objectives for visual-semantic alignment and cross-device alignment through relation distillation, which leverages the normalized class-wise covariance to regularize the consistency of the prediction logits across devices. To defend against backdoor attacks, we propose a feature-magnitude defense: since malicious samples are less correlated with the given semantic attributes, visual features of low magnitude are discarded to stabilize model updates. The effectiveness and robustness of FedZSL are demonstrated by extensive experiments on three zero-shot benchmark datasets.
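The abstract does not spell out the relation-distillation objective, so the following is a minimal PyTorch sketch of one plausible reading: compute a normalized class-wise covariance (correlation) matrix over the prediction logits, and penalize the gap between the local and central relation matrices. The function names (`class_relation`, `relation_distillation_loss`) and the MSE matching loss are illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def class_relation(logits: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    """Normalized class-wise covariance of a batch of prediction logits.

    logits: (batch, num_classes). Returns a (num_classes, num_classes)
    correlation-style matrix capturing inter-class relations.
    """
    # Center each class's logits over the batch dimension.
    centered = logits - logits.mean(dim=0, keepdim=True)
    # Class-wise covariance matrix.
    cov = centered.t() @ centered / max(logits.shape[0] - 1, 1)
    # Normalize by per-class standard deviations (correlation form).
    std = torch.sqrt(torch.diagonal(cov).clamp_min(eps))
    return cov / torch.outer(std, std)

def relation_distillation_loss(local_logits: torch.Tensor,
                               central_logits: torch.Tensor) -> torch.Tensor:
    """Match the local relation matrix to the frozen central one, so that
    prediction logits stay consistent across devices."""
    return F.mse_loss(class_relation(local_logits),
                      class_relation(central_logits).detach())
```

For example, `relation_distillation_loss(local_model(x), central_model(x))` could be added to a device's local training loss; detaching the central relation keeps the distillation one-directional.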
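Likewise, a minimal sketch of the feature-magnitude defense, under the assumption that a sample's "magnitude" is the L2 norm of its visual feature vector and that a quantile cutoff (`keep_ratio`, a hypothetical hyperparameter) decides which low-magnitude, presumably malicious samples are dropped before the local update:

```python
import torch

def magnitude_filter(features: torch.Tensor, keep_ratio: float = 0.8):
    """Keep only the highest-magnitude visual features in a batch.

    features: (batch, dim) visual embeddings. Samples whose L2 norm falls
    in the bottom (1 - keep_ratio) fraction are treated as weakly
    correlated with the semantic attributes and are discarded.
    """
    norms = features.norm(dim=1)                         # per-sample L2 magnitude
    threshold = torch.quantile(norms, 1.0 - keep_ratio)  # magnitude cutoff
    keep_mask = norms >= threshold
    return features[keep_mask], keep_mask
```

The returned mask can also be applied to the corresponding labels and attributes so that only the retained samples contribute to the model update.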