Paper Title
On the Convergence of the Shapley Value in Parametric Bayesian Learning Games
Paper Authors
Paper Abstract
Measuring contributions is a classical problem in cooperative game theory where the Shapley value is the most well-known solution concept. In this paper, we establish the convergence property of the Shapley value in parametric Bayesian learning games where players perform a Bayesian inference using their combined data, and the posterior-prior KL divergence is used as the characteristic function. We show that for any two players, under some regularity conditions, their difference in Shapley value converges in probability to the difference in Shapley value of a limiting game whose characteristic function is proportional to the log-determinant of the joint Fisher information. As an application, we present an online collaborative learning framework that is asymptotically Shapley-fair. Our result enables this to be achieved without any costly computations of posterior-prior KL divergences. Only a consistent estimator of the Fisher information is needed. The effectiveness of our framework is demonstrated with experiments using real-world data.
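The limiting game described in the abstract can be sketched numerically. The following is a minimal illustration (not the authors' implementation), assuming each player's Fisher information is summarized by a positive-definite matrix and taking the characteristic function `v(S)` to be the log-determinant of the coalition's summed Fisher information; Shapley values are then computed exactly by enumerating all coalitions, which is feasible only for small numbers of players.

```python
from itertools import combinations
from math import factorial

import numpy as np

def log_det_value(coalition, fisher_infos):
    """Characteristic function of the limiting game: the log-determinant
    of the coalition's joint (summed) Fisher information; v(empty) = 0."""
    if not coalition:
        return 0.0
    joint = sum(fisher_infos[i] for i in coalition)
    _, logdet = np.linalg.slogdet(joint)
    return logdet

def shapley_values(fisher_infos):
    """Exact Shapley values via the standard weighted-marginal formula,
    enumerating every coalition of the other players."""
    n = len(fisher_infos)
    players = range(n)
    phi = np.zeros(n)
    for i in players:
        others = [p for p in players if p != i]
        for k in range(n):
            weight = factorial(k) * factorial(n - k - 1) / factorial(n)
            for S in combinations(others, k):
                marginal = (log_det_value(S + (i,), fisher_infos)
                            - log_det_value(S, fisher_infos))
                phi[i] += weight * marginal
    return phi
```

By the efficiency axiom, the Shapley values sum to `v` of the grand coalition, which gives a quick sanity check; a consistent estimator of each player's Fisher information could be plugged in for `fisher_infos`, as the abstract suggests.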