Paper Title
QVMix and QVMix-Max: Extending the Deep Quality-Value Family of Algorithms to Cooperative Multi-Agent Reinforcement Learning
Paper Authors
Paper Abstract
This paper introduces four new algorithms for tackling multi-agent reinforcement learning (MARL) problems that arise in cooperative settings. All of them are based on the Deep Quality-Value (DQV) family of algorithms, a set of techniques that has proven successful on single-agent reinforcement learning (SARL) problems. The key idea of DQV algorithms is to jointly learn an approximation of the state-value function $V$ alongside an approximation of the state-action value function $Q$. We follow this principle and generalise these algorithms by introducing two fully decentralised MARL algorithms (IQV and IQV-Max) and two algorithms based on the centralised training with decentralised execution paradigm (QVMix and QVMix-Max). We compare our algorithms with state-of-the-art MARL techniques on the popular StarCraft Multi-Agent Challenge (SMAC) environment. QVMix and QVMix-Max achieve results that are competitive with well-known MARL techniques such as QMIX and MAVEN, and QVMix even outperforms them on some of the tested environments, making it the best-performing algorithm overall. We hypothesise that this is because QVMix suffers less from the overestimation bias of the $Q$ function.
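For concreteness, the joint learning of $V$ and $Q$ mentioned above can be sketched with the single-agent update targets of the original DQV and DQV-Max formulations; this is a minimal sketch, and the parameter symbols $\theta$, $\Phi$ (online networks) and $\theta^{-}$, $\Phi^{-}$ (periodically copied target networks) are notation introduced here for illustration rather than taken from this abstract. In DQV, both networks regress towards the same target bootstrapped from the state-value estimate:
$$y_t^{\text{DQV}} = r_t + \gamma\, V(s_{t+1}; \Phi^{-}),$$
with $V(s_t; \Phi)$ and $Q(s_t, a_t; \theta)$ each trained to minimise the squared error with respect to $y_t^{\text{DQV}}$. In DQV-Max, the two functions bootstrap from each other:
$$y_t^{V} = r_t + \gamma \max_{a} Q(s_{t+1}, a; \theta^{-}), \qquad y_t^{Q} = r_t + \gamma\, V(s_{t+1}; \Phi),$$
used as the targets for $V$ and $Q$ respectively. Presumably, QVMix and QVMix-Max apply these targets during centralised training over a joint value built from per-agent utilities, while each agent executes its own policy in a decentralised way, as the abstract states.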