Paper Title


Message-Aware Graph Attention Networks for Large-Scale Multi-Robot Path Planning

Authors

Qingbiao Li, Weizhe Lin, Zhe Liu, Amanda Prorok

Abstract


The domains of transport and logistics are increasingly relying on autonomous mobile robots for the handling and distribution of passengers or resources. At large system scales, finding decentralized path planning and coordination solutions is key to efficient system performance. Recently, Graph Neural Networks (GNNs) have become popular due to their ability to learn communication policies in decentralized multi-agent systems. Yet, vanilla GNNs rely on simplistic message aggregation mechanisms that prevent agents from prioritizing important information. To tackle this challenge, in this paper, we extend our previous work that utilizes GNNs in multi-agent path planning by incorporating a novel mechanism that allows for message-dependent attention. Our Message-Aware Graph Attention neTwork (MAGAT) is based on a key-query-like mechanism that determines the relative importance of features in the messages received from various neighboring robots. We show that MAGAT is able to achieve a performance close to that of a coupled centralized expert algorithm. Further, ablation studies and comparisons to several benchmark models show that our attention mechanism is very effective across different robot densities and performs stably under different communication-bandwidth constraints. Experiments demonstrate that our model generalizes well to previously unseen problem instances, and that it achieves a 47\% improvement over the benchmark success rate, even in very large-scale instances that are 100$\times$ larger than the training instances.
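The core idea of the key-query-like aggregation described above can be illustrated with a minimal sketch: each robot projects its own features into a query and its neighbors' messages into keys, then takes an attention-weighted sum of those messages instead of a plain average. Note this is an illustrative reconstruction, not the authors' MAGAT implementation; the function and parameter names (`attention_aggregate`, `W_k`, `W_q`) are hypothetical, and details such as scaling and multi-head attention are omitted.

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over a 1-D score vector.
    e = np.exp(x - x.max())
    return e / e.sum()

def attention_aggregate(own_features, neighbor_msgs, W_q, W_k):
    """Attention-weighted aggregation of neighbor messages (sketch).

    own_features:  (d,)   the robot's own feature vector
    neighbor_msgs: (n, d) messages received from n neighboring robots
    W_q, W_k:      (d, dk) learned query/key projections (here: given)
    """
    q = own_features @ W_q                # (dk,) query projection
    K = neighbor_msgs @ W_k               # (n, dk) key projections
    scores = K @ q / np.sqrt(K.shape[1])  # scaled dot-product scores
    alpha = softmax(scores)               # importance weight per neighbor
    return alpha @ neighbor_msgs          # (d,) weighted message sum
```

In contrast, a vanilla GNN layer would aggregate with uniform weights (a mean or sum over `neighbor_msgs`), which is exactly the limitation the abstract attributes to simplistic message aggregation.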
