Paper Title
Communication-Efficient Decentralized Online Continuous DR-Submodular Maximization
Paper Authors
Paper Abstract
Maximizing a monotone submodular function is a fundamental task in machine learning, economics, and statistics. In this paper, we present two communication-efficient decentralized online algorithms for the monotone continuous DR-submodular maximization problem, both of which reduce the number of per-function gradient evaluations and the per-round communication complexity from $T^{3/2}$ to $1$. The first one, One-shot Decentralized Meta-Frank-Wolfe (Mono-DMFW), achieves a $(1-1/e)$-regret bound of $O(T^{4/5})$. As far as we know, this is the first one-shot and projection-free decentralized online algorithm for monotone continuous DR-submodular maximization. Next, inspired by the non-oblivious boosting function \citep{zhang2022boosting}, we propose the Decentralized Online Boosting Gradient Ascent (DOBGA) algorithm, which attains a $(1-1/e)$-regret of $O(\sqrt{T})$. To the best of our knowledge, this is the first result to obtain the optimal $O(\sqrt{T})$ regret against a $(1-1/e)$-approximation with only one gradient query for each local objective function per step. Finally, various experimental results confirm the effectiveness of the proposed methods.
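
To make the "one gradient query and one communication round per step" claim concrete, below is a minimal, illustrative Python sketch of one synchronous round of decentralized online projected gradient ascent: each node performs a single gossip-averaging exchange with its neighbors and a single gradient query on its own local objective. This is not the paper's Mono-DMFW or DOBGA update (in particular, the non-oblivious boosting of the gradient and the Frank-Wolfe machinery are omitted); the gossip matrix `W`, the local gradient oracles, the box constraint, and the toy quadratic objectives are assumptions made purely for illustration.

```python
# Illustrative sketch only (not the authors' exact algorithm): one round of
# decentralized online gradient ascent with a single communication exchange
# and a single local gradient query per node.
import numpy as np

def decentralized_online_step(X, W, grad_oracles, step_size, lower=0.0, upper=1.0):
    """One synchronous round for n nodes with d-dimensional decisions.

    X            : (n, d) array; row i is node i's current decision x_i.
    W            : (n, n) doubly stochastic gossip matrix matching the network.
    grad_oracles : list of n callables; grad_oracles[i](x) returns one
                   (possibly stochastic) gradient of node i's current local
                   objective -- exactly one query per node per round.
    step_size    : scalar step size for this round.
    """
    # 1) One round of communication: each node averages its neighbors' iterates.
    X_mixed = W @ X
    # 2) One gradient query per node on its own local objective.
    G = np.stack([grad_oracles[i](X_mixed[i]) for i in range(X.shape[0])])
    # 3) Local ascent step, then projection onto the (assumed) box constraint.
    return np.clip(X_mixed + step_size * G, lower, upper)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n, d = 4, 3
    # Gossip matrix for a ring network (doubly stochastic).
    W = np.array([[0.5, 0.25, 0.0, 0.25],
                  [0.25, 0.5, 0.25, 0.0],
                  [0.0, 0.25, 0.5, 0.25],
                  [0.25, 0.0, 0.25, 0.5]])
    # Toy concave local objectives f_i(x) = -||x - c_i||^2 stand in for the
    # local functions revealed online.
    centers = rng.uniform(0.0, 1.0, size=(n, d))
    grads = [lambda x, c=c: -2.0 * (x - c) for c in centers]
    X = np.full((n, d), 0.5)
    for t in range(1, 51):
        X = decentralized_online_step(X, W, grads, step_size=1.0 / np.sqrt(t))
    print(X.round(3))
```

Roughly speaking, a boosted variant in the spirit of DOBGA would replace the raw local gradient in step 2 with a gradient of the non-oblivious boosting function of \citep{zhang2022boosting}, which is what lifts the guarantee to a $(1-1/e)$-approximation while keeping one gradient query and one communication round per step.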