Paper Title
Optimal Approximation -- Smoothness Tradeoffs for Soft-Max Functions
Paper Authors
Paper Abstract
A soft-max function has two main efficiency measures: (1) approximation, which captures how well it approximates the maximum function, and (2) smoothness, which captures how sensitive it is to changes in its input. Our goal is to identify the optimal approximation-smoothness tradeoffs for different measures of approximation and smoothness. This leads to novel soft-max functions, each of which is optimal for a different application. The most commonly used soft-max function, called the exponential mechanism, has the optimal tradeoff between approximation, measured in terms of expected additive approximation, and smoothness, measured with respect to Rényi divergence. We introduce a soft-max function, called "piecewise linear soft-max", with the optimal tradeoff between approximation, measured in terms of worst-case additive approximation, and smoothness, measured with respect to the $\ell_q$-norm. The worst-case approximation guarantee of the piecewise linear mechanism enforces sparsity in the output of our soft-max function, a property that is known to be important in Machine Learning applications [Martins et al. '16, Laha et al. '18] and is not satisfied by the exponential mechanism. Moreover, $\ell_q$-smoothness is suitable for applications in Mechanism Design and Game Theory, where the piecewise linear mechanism outperforms the exponential mechanism. Finally, we investigate another soft-max function, called the power mechanism, with the optimal tradeoff between expected \textit{multiplicative} approximation and smoothness with respect to the Rényi divergence, which yields improved theoretical and practical results in differentially private submodular optimization.
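To make the contrast in the abstract concrete, below is a minimal NumPy sketch of (a) the classical temperature-scaled softmax that underlies the exponential mechanism, which assigns strictly positive probability to every coordinate, and (b) the sparsemax projection of Martins et al. '16, a well-known piecewise linear soft-max that produces exactly sparse outputs. This is an illustrative stand-in, not the paper's construction: the temperature parameter `eps` and the choice of sparsemax as the sparse example are assumptions, and the paper's piecewise linear and power mechanisms are defined only in the full text.

```python
import numpy as np

def exponential_mechanism(x, eps):
    """Temperature-scaled softmax: exp(x_i/eps) / sum_j exp(x_j/eps).
    Every coordinate receives strictly positive probability, so the
    output is never sparse."""
    z = (x - x.max()) / eps   # shift by the max for numerical stability
    w = np.exp(z)
    return w / w.sum()

def sparsemax(x):
    """Euclidean projection of x onto the probability simplex
    (sparsemax, Martins et al. '16): output is max(x - tau, 0)."""
    z = np.sort(x)[::-1]                  # scores in decreasing order
    cssv = np.cumsum(z)                   # cumulative sums of sorted scores
    k = np.arange(1, len(x) + 1)
    support = z * k > cssv - 1            # prefix of coordinates kept in the support
    k_max = k[support][-1]                # largest index satisfying the condition
    tau = (cssv[k_max - 1] - 1) / k_max   # threshold making the output sum to 1
    return np.maximum(x - tau, 0.0)

x = np.array([1.0, 0.5, 0.1])
print(exponential_mechanism(x, eps=0.1))  # all three entries strictly positive
print(sparsemax(x))                       # [0.75, 0.25, 0.0] -- an exact zero
```

On this input, sparsemax zeroes out the lowest score while redistributing mass over the top two, whereas the exponential mechanism keeps every coordinate positive for any `eps > 0`; this is the sparsity property the abstract attributes to worst-case additive approximation guarantees.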