Paper Title
Black or White? How to Develop an AutoTuner for Memory-based Analytics [Extended Version]
Paper Authors
Paper Abstract
There is a lot of interest today in building autonomous (or, self-driving) data processing systems. An emerging school of thought is to leverage AI-driven "black box" algorithms for this purpose. In this paper, we present a contrarian view. We study the problem of autotuning the memory allocation for applications running on modern distributed data processing systems. For this problem, we show that an empirically-driven "white-box" algorithm that we have developed, called RelM, provides a close-to-optimal tuning at a fraction of the overheads of state-of-the-art AI-driven "black box" algorithms, namely, Bayesian Optimization (BO) and Deep Distributed Policy Gradient (DDPG). The main reason for RelM's superior performance is that memory management in modern memory-based data analytics systems is an interplay of algorithms at multiple levels: (i) at the resource-management level, across the containers allocated by resource managers like Kubernetes and YARN; (ii) at the container level, among the OS, pods, and processes such as the Java Virtual Machine (JVM); (iii) at the application level, for caching, aggregation, data shuffles, and application data structures; and (iv) at the JVM level, across pools such as the Young and Old Generation. RelM understands these interactions and uses them to build an analytical solution that autotunes the memory-management knobs. In another contribution, called GBO, we use RelM's analytical models to speed up Bayesian Optimization. Through an evaluation based on Apache Spark, we show that RelM's recommendations are significantly better than those of commonly-used Spark deployments and close to the ones obtained by brute-force exploration, while GBO provides stronger optimality guarantees at a cost overhead that is higher than RelM's but still significantly lower than that of the state-of-the-art AI-driven policies.
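
The four-level interplay described in the abstract maps onto concrete configuration knobs. The following Python sketch is purely illustrative (it is not RelM's actual model): it derives one plausible setting per level from a single container budget, using Spark's documented defaults (spark.memory.fraction = 0.6, spark.memory.storageFraction = 0.5, a 384 MB overhead floor) and an assumed 10% overhead cushion.

```python
# Illustrative sketch, not RelM's model: derive Spark/JVM memory knobs
# from one container budget, touching each level named in the abstract.

def memory_knobs(container_mb: int,
                 overhead_frac: float = 0.10,  # assumed OS/off-heap cushion
                 unified_frac: float = 0.6,    # spark.memory.fraction default
                 storage_frac: float = 0.5,    # spark.memory.storageFraction default
                 new_ratio: int = 2):          # JVM Old:Young generation ratio
    """Map one resource-manager budget to knobs at every level."""
    overhead_mb = max(384, int(container_mb * overhead_frac))  # Spark's 384 MB floor
    heap_mb = container_mb - overhead_mb                       # JVM heap (-Xmx)
    return {
        # (i) resource-manager level: total container request (YARN/Kubernetes)
        "container_mb": container_mb,
        # (ii) container level: split between JVM heap and OS/off-heap overhead
        "spark.executor.memory": f"{heap_mb}m",
        "spark.executor.memoryOverhead": f"{overhead_mb}m",
        # (iii) application level: unified region shared by caching and
        # execution (aggregation, shuffles)
        "spark.memory.fraction": unified_frac,
        "spark.memory.storageFraction": storage_frac,
        # (iv) JVM level: Young vs. Old Generation sizing
        "spark.executor.extraJavaOptions": f"-XX:NewRatio={new_ratio}",
    }

if __name__ == "__main__":
    for knob, value in memory_knobs(container_mb=8192).items():
        print(f"{knob} = {value}")
```

The point of the sketch is the dependency chain: the resource-manager budget bounds the JVM heap, which bounds the unified application region, which the JVM further splits into generations. A white-box tuner can exploit exactly these dependencies instead of searching the knob space blindly.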
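Similarly, one way analytical models can speed up Bayesian Optimization, in the spirit of GBO (though not necessarily the paper's exact mechanism), is to screen out configurations that a white-box model predicts will fail before spending an expensive benchmark run on them. A minimal, hypothetical sketch, where predicted_oom stands in for a RelM-style feasibility model:

```python
# Illustrative sketch, not the paper's GBO: use a white-box analytical model
# to shrink the search space before handing it to a Bayesian Optimization loop.
import random

def predicted_oom(cfg: dict) -> bool:
    """Assumed analytical check: flags configs whose modeled working set
    exceeds the unified-memory region (stand-in for RelM-style models)."""
    unified_mb = cfg["heap_mb"] * cfg["memory_fraction"]
    return cfg["working_set_mb"] > unified_mb

def sample_candidates(n: int, working_set_mb: int = 3000) -> list:
    """Draw random memory configurations over a hypothetical knob space."""
    return [{"heap_mb": random.randint(1024, 16384),
             "memory_fraction": random.uniform(0.3, 0.9),
             "working_set_mb": working_set_mb}
            for _ in range(n)]

# Only model-feasible configs are ever benchmarked by the (expensive) BO loop,
# which is one way analytical models can cut BO's cost overhead.
candidates = [c for c in sample_candidates(200) if not predicted_oom(c)]
print(f"{len(candidates)} of 200 sampled configs pass the analytical screen")
```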