Paper Title
Cohesive Networks using Delayed Self Reinforcement
Paper Authors
Paper Abstract
How a network gets to the goal (a consensus value) can be as important as reaching the consensus value. While prior methods focus on rapidly getting to a new consensus value, maintaining cohesion during the transition between consensus values, or during tracking, remains a challenge that has not been addressed. The main contributions of this work are to address the problem of maintaining cohesion by: (i) proposing a new delayed self-reinforcement (DSR) approach; (ii) extending it for use with agents that have higher-order, heterogeneous dynamics; and (iii) developing stability conditions for the DSR-based method. With DSR, each agent uses current and past information from neighbors to infer the overall goal and modifies the update law to improve cohesion. The advantages of the proposed DSR approach are that it requires only information already available in a given network to improve cohesion, and it does not require network-connectivity modifications (which might not always be feasible) nor increases in the system's overall response speed (which can require larger inputs). Moreover, illustrative simulation examples are used to comparatively evaluate the performance with and without DSR. The simulation results show substantial improvement in cohesion with DSR.
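The abstract describes simulations that compare a network's response with and without DSR. The following is a minimal sketch of that kind of comparison, not the paper's actual update law: it assumes first-order integrator agents on a path graph with a source agent pinned to the new target value, and a simplified DSR-style term in which each agent reinforces its update with a scaled version of its own change over a delay window. The graph, the gains alpha and beta, the delay tau, and the spread metric are all illustrative assumptions; the paper's exact DSR law, its higher-order heterogeneous extension, and its stability conditions should be taken from the paper itself.

```python
# Illustrative comparison of a diffusive consensus update with and without a
# simplified DSR-style (delayed self-reinforcement) term. Assumptions, not from
# the paper: path graph, first-order agents, pinned source, alpha, beta, tau.
import numpy as np

n = 10                      # number of agents; agent 0 acts as the source
dt, T = 0.01, 15.0          # integration step and simulation horizon (seconds)
steps = int(T / dt)
alpha = 1.0                 # neighbor-update gain (assumed)
beta = 0.6                  # DSR gain (assumed)
tau = 0.2                   # self-reinforcement delay in seconds (assumed)
d = int(tau / dt)           # delay expressed in time steps

# Graph Laplacian of an undirected path (line) graph
L = np.zeros((n, n))
for i in range(n - 1):
    L[i, i] += 1; L[i + 1, i + 1] += 1
    L[i, i + 1] -= 1; L[i + 1, i] -= 1

def simulate(use_dsr):
    x = np.zeros(n)
    x[0] = 1.0                           # source steps to the new target at t = 0
    hist = np.zeros((steps + 1, n))      # state history, also used for the delay
    hist[0] = x
    for k in range(steps):
        u = -alpha * (L @ x)             # standard neighbor-based (diffusive) update
        if use_dsr and k >= d:
            # DSR-style term: each agent adds a scaled version of its own
            # change over the last tau seconds (delayed self information).
            u += (beta / tau) * (x - hist[k - d])
        x = x + dt * u
        x[0] = 1.0                       # the source holds the target value
        hist[k + 1] = x
    return hist

base = simulate(use_dsr=False)
dsr = simulate(use_dsr=True)

# One possible cohesion measure: the largest instantaneous spread among the
# follower states (agents 1..n-1) during the transition.
print("max follower spread without DSR:", np.ptp(base[:, 1:], axis=1).max())
print("max follower spread with DSR:   ", np.ptp(dsr[:, 1:], axis=1).max())
```

Plotting the rows of `hist` for both cases shows how the follower trajectories spread (or stay together) during the transition, which is the qualitative comparison the abstract refers to.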