Paper Title

Fusion: Efficient and Secure Inference Resilient to Malicious Servers

Authors

Caiqin Dong, Jian Weng, Jia-Nan Liu, Yue Zhang, Yao Tong, Anjia Yang, Yudan Cheng, Shun Hu

Abstract

In secure machine learning inference, most of the schemes assume that the server is semi-honest (honestly following the protocol but attempting to infer additional information). However, the server may be malicious (e.g., using a low-quality model or deviating from the protocol) in the real world. Although a few studies have considered a malicious server that deviates from the protocol, they ignore the verification of model accuracy (where the malicious server uses a low-quality model) meanwhile preserving the privacy of both the server's model and the client's inputs. To address these issues, we propose \textit{Fusion}, where the client mixes the public samples (which have known query results) with their own samples to be queried as the inputs of multi-party computation to jointly perform the secure inference. Since a server that uses a low-quality model or deviates from the protocol can only produce results that can be easily identified by the client, \textit{Fusion} forces the server to behave honestly, thereby addressing all those aforementioned issues without leveraging expensive cryptographic techniques. Our evaluation indicates that \textit{Fusion} is 48.06$\times$ faster and uses 30.90$\times$ less communication than the existing maliciously secure inference protocol (which currently does not support the verification of the model accuracy). In addition, to show the scalability, we conduct ImageNet-scale inference on the practical ResNet50 model and it costs 8.678 minutes and 10.117 GiB of communication in a WAN setting, which is 1.18$\times$ faster and has 2.64$\times$ less communication than those of the semi-honest protocol.
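The client-side mix-and-verify idea can be illustrated in plain Python. This is a minimal sketch under assumptions, not the paper's protocol: the helper names `mix_and_track` and `verify` are hypothetical, and the actual inference in \textit{Fusion} runs under multi-party computation rather than in the clear. The sketch only shows why planted public samples with known results let the client catch a low-quality or deviating server.

```python
import random

def mix_and_track(public, private):
    """Shuffle public samples (with known labels) among private queries.

    public:  list of (sample, known_label) pairs
    private: list of samples whose labels the client wants
    Returns the combined input batch and the (position, expected_label)
    checks for the planted public samples.
    """
    batch = [(s, lbl) for s, lbl in public] + [(s, None) for s in private]
    random.shuffle(batch)  # server cannot tell which inputs are planted
    inputs = [s for s, _ in batch]
    checks = [(i, lbl) for i, (_, lbl) in enumerate(batch) if lbl is not None]
    return inputs, checks

def verify(results, checks):
    """Accept the batch only if every planted sample got its known result."""
    return all(results[i] == lbl for i, lbl in checks)
```

A server answering the whole batch honestly passes `verify`; one that uses a poor model or deviates will, with high probability, get some planted sample wrong and be rejected, which is the deterrent the abstract describes.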
