Paper Title
Model Segmentation for Storage Efficient Private Federated Learning with Top $r$ Sparsification
Paper Authors
Paper Abstract
In federated learning (FL) with top $r$ sparsification, millions of users collectively train a machine learning (ML) model locally on their personal data, communicating only the most significant $r$ fraction of updates to reduce the communication cost. It has been shown that the values as well as the indices of these selected (sparse) updates leak information about the users' personal data. In this work, we investigate different methods to carry out user-database communications efficiently in FL with top $r$ sparsification, while guaranteeing information theoretic privacy of the users' personal data. These methods incur a considerable storage cost. As a solution, we present two schemes with different properties that use MDS coded storage along with a model segmentation mechanism to perform private FL with top $r$ sparsification at a reduced storage cost, at the expense of a controllable amount of information leakage.
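To make the top $r$ sparsification step concrete, the following is a minimal sketch, assuming a NumPy-based setting; the function name `top_r_sparsify` and the toy gradient are illustrative assumptions, not the paper's private scheme (which additionally involves MDS coded storage and model segmentation).

```python
import numpy as np

def top_r_sparsify(update: np.ndarray, r: float):
    """Keep only the fraction r of entries with the largest magnitude.

    Returns the selected indices and their values; all other entries
    are dropped (treated as zero by the receiver).
    NOTE: illustrative sketch only, not the paper's privacy mechanism.
    """
    flat = update.ravel()
    k = max(1, int(r * flat.size))                # number of entries to keep
    idx = np.argpartition(np.abs(flat), -k)[-k:]  # indices of the top-k magnitudes
    return idx, flat[idx]

# Example: sparsify a toy gradient, keeping the top 10% of entries.
grad = np.random.randn(100)
indices, values = top_r_sparsify(grad, r=0.10)
```

As the abstract notes, both the `indices` and the `values` returned here can leak information about a user's personal data, which is what motivates the information theoretic privacy guarantees studied in the paper.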