Paper Title


PrivacyFL: A simulator for privacy-preserving and secure federated learning

Paper Authors

Mugunthan, Vaikkunth, Peraire-Bueno, Anton, Kagal, Lalana

Paper Abstract


Federated learning is a technique that enables distributed clients to collaboratively learn a shared machine learning model while keeping their training data localized. This reduces data privacy risks; however, privacy concerns remain, since information about the training dataset can still be leaked from the trained model's weights or parameters. Setting up a federated learning environment, especially one with security and privacy guarantees, is a time-consuming process with numerous configurations and parameters that can be manipulated. To help clients ensure that collaboration is feasible and to check that it improves their model accuracy, a realistic simulator for privacy-preserving and secure federated learning is required. In this paper, we introduce PrivacyFL, an extensible, easily configurable, and scalable simulator for federated learning environments. Its key features include latency simulation, robustness to client departure, support for both centralized and decentralized learning, and configurable privacy and security mechanisms based on differential privacy and secure multiparty computation. We motivate our research, describe the architecture of the simulator and its associated protocols, and discuss its evaluation in numerous scenarios that highlight its wide range of functionality and its advantages. Our paper addresses a significant real-world problem: checking the feasibility of participating in a federated learning environment under a variety of circumstances. It also has strong practical impact, because organizations such as hospitals, banks, and research institutes, which hold large amounts of sensitive data and would like to collaborate, would greatly benefit from a system that enables them to do so in a privacy-preserving and secure manner.
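The differentially private aggregation mentioned in the abstract can be illustrated with a minimal sketch. This is not PrivacyFL's actual API; the function name `dp_average`, the clipping threshold, and the noise scale are illustrative assumptions. The idea is that each client's model update is norm-clipped to bound its influence, and Gaussian noise is added to the average before it is shared:

```python
import numpy as np

def dp_average(client_weights, clip_norm=1.0, noise_std=0.1, rng=None):
    """Average client weight vectors with clipping and Gaussian noise,
    a common building block of differentially private federated averaging."""
    rng = np.random.default_rng() if rng is None else rng
    clipped = []
    for w in client_weights:
        norm = np.linalg.norm(w)
        scale = min(1.0, clip_norm / (norm + 1e-12))
        clipped.append(w * scale)  # bound each client's contribution
    avg = np.mean(clipped, axis=0)
    # Noise calibrated to the clipped sensitivity masks any single client's data.
    noise = rng.normal(0.0, noise_std / len(client_weights), size=avg.shape)
    return avg + noise
```

In a secure-multiparty-computation variant, clients would instead exchange masked shares so that only the sum, not individual updates, is ever revealed to the aggregator.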
