Paper Title
Average Consensus: A Little Learning Goes A Long Way
Paper Authors
Paper Abstract
When networked systems of autonomous agents carry out complex tasks, the control and coordination sought after generally depend on a few fundamental control primitives. Chief among these primitives is consensus, where agents are to converge to a common estimate within the range of initial values, which becomes average consensus when the joint limit should be the average of the initial values. To provide reliable services that are easy to deploy, these primitives should operate even when the network is subject to frequent and unpredictable changes. Moreover, they should mobilize few computational resources so that low-powered, deterministic, and anonymous agents can partake in the network. In this stringent adversarial context, we investigate the distributed implementation of these primitives over networks with bidirectional, but potentially short-lived, communication links. Inspired by the classic EqualNeighbor and Metropolis agreement rules for multi-agent systems, we design distributed algorithms for consensus and average consensus, which we show to operate in polynomial time in a synchronous temporal model. These algorithms are fully distributed, requiring neither symmetry-breaking devices such as unique identifiers, nor global control or knowledge of the network. Our strategy consists in making agents learn simple structural parameters of the network -- namely, their largest degrees -- which constitutes enough information to build simple update rules, implementable locally with little computational and memory overhead.
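To make the degree-based strategy concrete, below is a minimal sketch of one synchronous round of the classic Metropolis rule that the abstract cites as inspiration, in which each agent weighs neighbor j by 1 / (1 + max(deg_i, deg_j)). This is an illustration of the standard rule on a fixed graph, not the authors' algorithm for time-varying networks; the names (metropolis_round, neighbors) and the adjacency-list representation are assumptions made for the example.

def metropolis_round(values, neighbors):
    """One synchronous Metropolis update (illustrative sketch).

    values:    list of floats, values[i] is agent i's current estimate
    neighbors: list of lists, neighbors[i] are the agents adjacent to i
    """
    deg = [len(n) for n in neighbors]
    new_values = []
    for i, x_i in enumerate(values):
        # Metropolis weight on edge {i, j}: 1 / (1 + max(deg_i, deg_j)).
        # The implicit self-weight 1 - sum_j w_ij stays nonnegative, since
        # each of the deg_i edge weights is at most 1 / (1 + deg_i).
        update = x_i
        for j in neighbors[i]:
            w_ij = 1.0 / (1.0 + max(deg[i], deg[j]))
            update += w_ij * (values[j] - x_i)
        new_values.append(update)
    return new_values


if __name__ == "__main__":
    # Path graph on four agents; the average of the initial values is 2.5.
    neighbors = [[1], [0, 2], [1, 3], [2]]
    values = [1.0, 2.0, 3.0, 4.0]
    for _ in range(200):
        values = metropolis_round(values, neighbors)
    print(values)  # every estimate is now close to 2.5

The Metropolis weights make the update matrix symmetric and doubly stochastic, so the sum (hence the average) of the estimates is preserved at every round; this is what lets the common limit be the exact average of the initial values rather than an arbitrary point in their range.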