Paper Title
Improving Federated Learning Face Recognition via Privacy-Agnostic Clusters
Paper Authors
Paper Abstract
The growing public concerns about data privacy in face recognition can be greatly addressed by the federated learning (FL) paradigm. However, conventional FL methods perform poorly due to the uniqueness of the task: broadcasting class centers among clients is crucial for recognition performance but leads to privacy leakage. To resolve this privacy-utility paradox, this work proposes PrivacyFace, a framework that largely improves federated learning face recognition by communicating auxiliary, privacy-agnostic information among clients. PrivacyFace mainly consists of two components: first, a practical Differentially Private Local Clustering (DPLC) mechanism is proposed to distill sanitized clusters from local class centers; second, a consensus-aware recognition loss subsequently encourages global consensus among clients, which results in more discriminative features. The proposed framework is mathematically proved to be differentially private, introduces only a lightweight overhead, and yields prominent performance boosts (e.g., +9.63% and +10.26% TAR@FAR=1e-4 on IJB-B and IJB-C, respectively). Extensive experiments and ablation studies on a large-scale dataset demonstrate the efficacy and practicability of our method.
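The abstract does not detail how DPLC is realized, so the following is only a minimal illustrative sketch of the general idea of releasing sanitized cluster summaries instead of raw class centers. The function name sanitized_clusters, the k-means clustering step, the Gaussian mechanism, and all parameters (k, epsilon, delta) are assumptions for illustration and are not claimed to be the paper's actual algorithm.

```python
# Illustrative sketch (NOT the paper's exact DPLC): a client clusters its local
# class centers and broadcasts only noised cluster means, so raw per-identity
# centers never leave the client. Cluster count and privacy budget are assumed.
import numpy as np

def sanitized_clusters(class_centers, k=4, epsilon=1.0, delta=1e-5, seed=0):
    """class_centers: (C, d) array of L2-normalized local class centers."""
    rng = np.random.default_rng(seed)
    C, d = class_centers.shape

    # Plain k-means over the local class centers (illustrative clustering step).
    means = class_centers[rng.choice(C, size=k, replace=False)].copy()
    for _ in range(10):
        dists = ((class_centers[:, None, :] - means[None]) ** 2).sum(-1)
        assign = np.argmin(dists, axis=1)
        for j in range(k):
            members = class_centers[assign == j]
            if len(members):
                means[j] = members.mean(axis=0)

    # Gaussian mechanism (assumed): with unit-norm centers, replacing one center
    # shifts a cluster mean by at most 2 / n_j in L2, so noise scales with 1/n_j.
    sigma = np.sqrt(2.0 * np.log(1.25 / delta)) / epsilon
    noisy_means = []
    for j in range(k):
        n_j = max(int((assign == j).sum()), 1)
        noise = rng.normal(0.0, sigma * (2.0 / n_j), size=d)
        noisy_means.append(means[j] + noise)

    # Only these sanitized cluster means would be shared with other clients,
    # e.g. to build a consensus-aware term in the recognition loss.
    return np.stack(noisy_means)
```

In this reading, the privacy cost is paid once per released cluster mean rather than per class center, and the downstream consensus-aware loss only ever sees the noised summaries; how the paper actually calibrates sensitivity and composes the privacy budget is specified in the full text, not here.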