Paper Title
Shielding Federated Learning Systems against Inference Attacks with ARM TrustZone
Paper Authors
Paper Abstract
Federated Learning (FL) opens new perspectives for training machine learning models while keeping personal data on the users' premises. Specifically, in FL, models are trained on the users' devices and only model updates (i.e., gradients) are sent to a central server for aggregation purposes. However, the long list of inference attacks that leak private data from gradients, published in recent years, has emphasized the need to devise effective protection mechanisms to incentivize the adoption of FL at scale. While there exist solutions to mitigate these attacks on the server side, little has been done to protect users from attacks performed on the client side. In this context, the use of Trusted Execution Environments (TEEs) on the client side is among the most promising solutions. However, existing frameworks (e.g., DarkneTZ) require statically placing a large portion of the machine learning model into the TEE to effectively protect against complex attacks or combinations of attacks. We present GradSec, a solution that protects in a TEE only the sensitive layers of a machine learning model, either statically or dynamically, hence reducing the TCB size and the overall training time by up to 30% and 56%, respectively, compared to state-of-the-art competitors.
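To make the FL workflow the abstract refers to concrete, the following is a minimal FedSGD-style sketch: each client computes gradients on its private data locally and only those gradients reach the server for aggregation. This is purely illustrative of the setting under attack; it is not GradSec's implementation, and all function names (`local_gradient`, `server_aggregate`) are hypothetical.

```python
import numpy as np

def local_gradient(weights, X, y):
    """Least-squares gradient computed on a client's private data,
    which never leaves the device; only this gradient is shared."""
    preds = X @ weights
    return X.T @ (preds - y) / len(y)

def server_aggregate(gradients):
    """The central server sees only gradients, not the raw data,
    and averages them into a single model update."""
    return np.mean(gradients, axis=0)

rng = np.random.default_rng(0)
weights = np.zeros(3)
# Four clients, each holding its own private dataset (synthetic here).
clients = [(rng.normal(size=(20, 3)), rng.normal(size=20)) for _ in range(4)]

for _ in range(10):  # federated training rounds
    grads = [local_gradient(weights, X, y) for X, y in clients]
    weights -= 0.1 * server_aggregate(grads)  # FedSGD-style update
```

The inference attacks mentioned above exploit precisely the `grads` values exchanged in such a loop, which is why GradSec shields the layers whose gradients are most revealing inside a client-side TEE.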