Paper Title

Multi-Agent Determinantal Q-Learning

Authors

Yaodong Yang, Ying Wen, Liheng Chen, Jun Wang, Kun Shao, David Mguni, Weinan Zhang

Abstract

Centralized training with decentralized execution has become an important paradigm in multi-agent learning. Though practical, current methods rely on restrictive assumptions to decompose the centralized value function across agents for execution. In this paper, we eliminate this restriction by proposing multi-agent determinantal Q-learning. Our method is built on Q-DPP, an extension of the determinantal point process (DPP) with a partition-matroid constraint to the multi-agent setting. Q-DPP encourages agents to acquire diverse behavioral models; this allows a natural factorization of the joint Q-function with no need for \emph{a priori} structural constraints on the value function or special network architectures. We demonstrate that Q-DPP generalizes major solutions, including VDN, QMIX, and QTRAN, on decentralizable cooperative tasks. To draw samples from Q-DPP efficiently, we adopt an existing sample-by-projection sampler with a theoretical approximation guarantee. The sampler also benefits exploration by coordinating agents to cover orthogonal directions in the state space during multi-agent training. We evaluate our algorithm on various cooperative benchmarks; its effectiveness is demonstrated in comparison with the state of the art.
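To make the core idea concrete, here is a minimal NumPy sketch of the determinantal machinery the abstract refers to: a DPP scores a subset S of items by det(L_S), the determinant of a positive semi-definite kernel restricted to S, and the partition-matroid constraint restricts subsets to exactly one (observation, action) item per agent. This is an illustrative toy with randomly generated features, not the authors' implementation; the quality-diversity decomposition and the `joint_value` helper are assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

n_agents, n_actions, d = 3, 4, 8

# Quality-diversity decomposition L = B B^T: each row b_i is a unit
# "diversity" direction scaled by a positive "quality" score.
B = rng.normal(size=(n_agents * n_actions, d))
B /= np.linalg.norm(B, axis=1, keepdims=True)          # unit diversity features
quality = rng.uniform(0.5, 2.0, size=n_agents * n_actions)
B *= quality[:, None]
L = B @ B.T                                            # PSD DPP kernel

def joint_value(actions):
    """det(L_S) for the subset that picks one action per agent,
    i.e. a subset satisfying the partition-matroid constraint."""
    idx = [agent * n_actions + a for agent, a in enumerate(actions)]
    return float(np.linalg.det(L[np.ix_(idx, idx)]))

# Subsets combining high-quality rows in near-orthogonal directions score
# high; nearly linearly dependent rows drive the determinant toward zero,
# which is what rewards behavioral diversity across agents.
score = joint_value((0, 1, 2))
print(score)
```

Because L is positive semi-definite, every principal minor is nonnegative, so `joint_value` is always a valid (unnormalized) score; the factorization and sampling machinery of the actual Q-DPP method is considerably more involved than this sketch.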
