Paper Title

BLAZE: Blazing Fast Privacy-Preserving Machine Learning

Authors

Arpita Patra and Ajith Suresh

Abstract

Machine learning tools have demonstrated their potential in many significant sectors such as healthcare and finance, aiding in the derivation of useful inferences. The sensitive and confidential nature of the data in such sectors raises natural concerns about data privacy. This motivated the area of Privacy-preserving Machine Learning (PPML), where privacy of the data is guaranteed. Typically, ML techniques require large computing power, which leads clients with limited infrastructure to rely on the method of Secure Outsourced Computation (SOC). In the SOC setting, the computation is outsourced to a set of specialized and powerful cloud servers, and the service is availed on a pay-per-use basis. In this work, we explore PPML techniques in the SOC setting for widely used ML algorithms: Linear Regression, Logistic Regression, and Neural Networks. We propose BLAZE, a blazing fast PPML framework in the three-server setting tolerating one malicious corruption, operating over the ring Z_{2^ℓ}. BLAZE achieves the stronger security guarantee of fairness (all honest servers get the output whenever the corrupt server obtains it). Leveraging an input-independent preprocessing phase, BLAZE has a fast input-dependent online phase relying on efficient PPML primitives such as: (i) a dot product protocol whose online communication is independent of the vector size, the first of its kind in the three-server setting; (ii) a truncation method that avoids evaluating the expensive Ripple Carry Adder (RCA) circuit and achieves constant round complexity. This improves over the truncation method of ABY3 (Mohassel et al., CCS 2018), which uses an RCA and incurs a round complexity of the order of the RCA's depth. Extensive benchmarking of BLAZE for the aforementioned ML algorithms over a 64-bit ring in both WAN and LAN settings shows massive improvements over ABY3.
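To see why a three-party dot product can have online communication independent of the vector length, the sketch below uses plain semi-honest replicated 2-out-of-3 secret sharing over Z_{2^64}: each party computes an additive share of the entire inner product locally, so only a single ring element per party would need to be reshared online. This illustrates the general aggregation idea only; it is not BLAZE's actual (maliciously secure, fair) protocol, and all names are illustrative.

```python
# Sketch only: semi-honest replicated sharing, not BLAZE's protocol.
import random

MOD = 1 << 64  # the ring Z_{2^64}

def share(x):
    """Replicated 2-out-of-3 sharing: x = s0 + s1 + s2 (mod 2^64);
    party i holds the pair (s_i, s_{i+1})."""
    s0, s1 = random.randrange(MOD), random.randrange(MOD)
    s2 = (x - s0 - s1) % MOD
    s = [s0, s1, s2]
    return [(s[i], s[(i + 1) % 3]) for i in range(3)]

def local_dot_share(x_pairs, y_pairs):
    """One party's additive share of <x, y>, computed with zero
    communication: every product term available from the replicated
    pairs is accumulated locally, so only the single final ring
    element would be exchanged online, regardless of vector length."""
    acc = 0
    for (xi, xj), (yi, yj) in zip(x_pairs, y_pairs):
        acc = (acc + xi * yi + xi * yj + xj * yi) % MOD
    return acc
```

Summing the three parties' local accumulators modulo 2^64 reconstructs the true dot product, since the three cross-term patterns together cover all nine products s_i(x) · s_j(y).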
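The constant-round truncation mentioned in (ii) can be illustrated with the classic truncation-pair idea: preprocessing supplies a random r together with its truncation r >> F, and online the parties only open the masked value z = x − r and shift that public value, so no ripple-carry adder circuit is ever evaluated. The sketch below simulates this with plain values over Z_{2^64}; the parameters (ELL, F) and the simulation are illustrative assumptions, not BLAZE's concrete protocol, and the result carries the usual off-by-one error in the last fixed-point bit.

```python
# Sketch only: truncation-pair idea simulated in the clear.
import random

ELL = 64                  # ring size: Z_{2^64}
F = 13                    # fractional bits of the fixed-point encoding
MASK = (1 << ELL) - 1

def encode(x):
    """Fixed-point encode a real number with scale 2^F."""
    return int(round(x * (1 << F))) & MASK

def decode(v):
    """Decode a ring element, treating the top half as negative."""
    if v >= 1 << (ELL - 1):
        v -= 1 << ELL
    return v / (1 << F)

def truncate_with_pair(x):
    """Bring x (scale 2^(2F), e.g. after a product) back to scale 2^F.

    In the real protocol r would be secret-shared in preprocessing and
    only z = x - r opened online; here we simulate with plain values."""
    r = random.randrange(1 << (ELL - 1))  # keep r in the non-negative half
    r_t = r >> F                          # preprocessed truncated value
    z = (x - r) & MASK                    # the value opened online
    # arithmetic (signed) shift of the public value z
    z_signed = z - (1 << ELL) if z >= 1 << (ELL - 1) else z
    return ((z_signed >> F) + r_t) & MASK
```

Because (z >> F) + (r >> F) differs from (z + r) >> F by at most one, the truncated result is accurate to within 2^(−F) of the true fixed-point product.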
