Paper Title
Inaccuracy rates for distributed inference over random networks with applications to social learning
Paper Authors
Paper Abstract
This paper studies probabilistic rates of convergence for consensus+innovations-type algorithms in random, generic networks. For each node, we find a lower bound and a family of upper bounds on the large deviations rate function, thus enabling the computation of exponential convergence rates for events of interest on the iterates. Relevant applications include error exponents in distributed hypothesis testing, rates of convergence of beliefs in social learning, and inaccuracy rates in distributed estimation. The bounds on the rate function have a very particular form at each node: they are constructed as the convex envelope of the rate function of a hypothetical fusion center and the rate function corresponding to a certain topological mode of the node's presence. We further show tightness of the discovered bounds in several cases, such as pendant nodes and regular networks, thus establishing the first proof of the large deviations principle for consensus+innovations and social learning in random networks.
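To make the object of study concrete, the following is a minimal simulation sketch of a consensus+innovations recursion of the kind the abstract refers to, under illustrative assumptions not taken from the paper: scalar parameter estimation, Erdős–Rényi random link activations each round, and a 1/(t+1) innovation step size. The function name and all parameters are hypothetical.

```python
import random

def consensus_innovations(n_nodes, n_iter, theta=1.0, noise=0.5,
                          p_link=0.5, seed=0):
    """Sketch of consensus+innovations iterates on a random network.

    Each node i keeps an estimate x[i] of the unknown parameter theta.
    At step t it averages with its randomly activated neighbors
    (consensus) and then moves toward a fresh noisy observation of
    theta (innovation) with a decaying step size 1/(t+1).
    """
    rng = random.Random(seed)
    x = [0.0] * n_nodes
    for t in range(n_iter):
        # Random symmetric link activations for this round (Erdos-Renyi).
        active = [[i == j for j in range(n_nodes)] for i in range(n_nodes)]
        for i in range(n_nodes):
            for j in range(i + 1, n_nodes):
                if rng.random() < p_link:
                    active[i][j] = active[j][i] = True
        # Consensus step: average over each node's active neighborhood
        # (the node itself is always included).
        mixed = []
        for i in range(n_nodes):
            nbrs = [j for j in range(n_nodes) if active[i][j]]
            mixed.append(sum(x[j] for j in nbrs) / len(nbrs))
        # Innovation step: pull toward a noisy observation of theta.
        step = 1.0 / (t + 1)
        x = [m + step * (theta + rng.gauss(0.0, noise) - m) for m in mixed]
    return x
```

With many iterations, every node's estimate concentrates around `theta`, and the probability that an iterate stays outside a neighborhood of `theta` decays exponentially; the paper's rate functions quantify the exponent of that decay at each node.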