Paper Title
Bounds for Privacy-Utility Trade-off with Per-letter Privacy Constraints and Non-zero Leakage
Paper Authors
Paper Abstract
An information theoretic privacy mechanism design problem is studied for two scenarios, in which the private data is either hidden or observable. In each scenario, privacy leakage constraints are considered using two different measures. In the first scenario, an agent observes useful data $Y$ that is correlated with private data $X$ and wishes to disclose the useful information to a user. A privacy mechanism is designed to generate disclosed data $U$ that maximizes the information revealed about $Y$ while satisfying a per-letter privacy constraint. In the second scenario, the agent additionally has access to the private data. First, the Functional Representation Lemma and the Strong Functional Representation Lemma are extended by relaxing the independence condition, which yields a lower bound for the second scenario. Next, lower and upper bounds on the privacy-utility trade-off are derived for both scenarios. In particular, for the case where $X$ is a deterministic function of $Y$, we show that our upper and lower bounds are asymptotically optimal in the first scenario.
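The first-scenario design problem described in the abstract can be sketched as the following optimization; this is a plausible formalization under assumed notation (the leakage measure $\mathcal{L}(\cdot)$ and the symbol $h(\epsilon)$ are illustrative, not taken verbatim from the paper):

```latex
% First scenario: the agent observes only Y, so X -- Y -- U forms a Markov
% chain. Utility is the information revealed about Y; privacy is enforced
% per letter, i.e. for every realization u of the disclosed data U.
\begin{align}
  h(\epsilon) \;=\;
  \sup_{\substack{P_{U\mid Y}\,:\; X - Y - U \\[2pt]
                  \mathcal{L}(u)\,\le\,\epsilon,\ \forall u \in \mathcal{U}}}
  I(U;Y),
\end{align}
% where \mathcal{L}(u) denotes a per-letter leakage measure, for instance a
% distance between the posterior P_{X|U=u} and the prior P_X. A non-zero
% leakage budget \epsilon > 0 trades privacy for utility; the paper derives
% lower and upper bounds on h(\epsilon) under two such measures.
```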