Title
Prive-HD: Privacy-Preserved Hyperdimensional Computing
Authors
Abstract
The privacy of data is a major challenge in machine learning, as a trained model may expose sensitive information of the enclosed dataset. Moreover, the limited computation capability and capacity of edge devices have made cloud-hosted inference inevitable. Sending private information to remote servers also makes the privacy of inference vulnerable, due to susceptible communication channels or even untrustworthy hosts. In this paper, we target privacy-preserving training and inference of brain-inspired Hyperdimensional (HD) computing, a new learning algorithm that is gaining traction due to its lightweight computation and robustness, which are particularly appealing for edge devices with tight constraints. Indeed, despite its promising attributes, HD computing has virtually no privacy due to its reversible computation. We present an accuracy-privacy trade-off method through meticulous quantization and pruning of hypervectors, the building blocks of HD, to realize a differentially private model as well as to obfuscate the information sent for cloud-hosted inference. Finally, we show how the proposed techniques can also be leveraged for efficient hardware implementation.
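The abstract's claim that plain HD encoding is reversible, and that quantization and pruning obfuscate the transmitted hypervector, can be illustrated with a minimal sketch. This is not the paper's actual method: it assumes a simple random-projection encoder (a common HD encoding scheme), a hypothetical sign quantizer, and magnitude-based pruning of half the dimensions, chosen here purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

D, F = 10000, 64                     # hypervector dimensionality, input features
proj = rng.standard_normal((D, F))   # random-projection base matrix (illustrative encoder)

x = rng.standard_normal(F)           # a private input sample
h = proj @ x                         # HD encoding: dense real-valued hypervector

# Without protection the encoding is invertible: with D >> F the linear
# system is overdetermined and least squares recovers x almost exactly.
x_rec = np.linalg.lstsq(proj, h, rcond=None)[0]
rec_err = np.linalg.norm(x_rec - x) / np.linalg.norm(x)   # near zero: the input leaks

# Obfuscation sketch: quantize each dimension to its sign and prune the 50%
# of dimensions with the smallest magnitude (thresholds are assumptions).
keep = np.abs(h) >= np.quantile(np.abs(h), 0.5)
h_obf = np.where(keep, np.sign(h), 0.0)

# The same inversion attempt now reconstructs the input far less accurately,
# since magnitudes and half the coordinates are discarded.
x_leak = np.linalg.lstsq(proj, h_obf, rcond=None)[0]
obf_err = np.linalg.norm(x_leak - x) / np.linalg.norm(x)
print(rec_err, obf_err)
```

Tightening the quantization (fewer levels) and the pruning ratio trades reconstruction leakage against classification accuracy, which is the accuracy-privacy trade-off the abstract refers to.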