Paper Title
Transferable Graph Optimizers for ML Compilers
Paper Authors
Paper Abstract
Most compilers for machine learning (ML) frameworks need to solve many correlated optimization problems to generate efficient machine code. Current ML compilers rely on heuristics-based algorithms to solve these optimization problems one at a time. However, this approach is not only hard to maintain but often leads to sub-optimal solutions, especially for newer model architectures. Existing learning-based approaches in the literature are sample inefficient, tackle a single optimization problem, and do not generalize to unseen graphs, making them infeasible to deploy in practice. To address these limitations, we propose an end-to-end, transferable deep reinforcement learning method for computational graph optimization (GO), based on a scalable sequential attention mechanism over an inductive graph neural network. GO generates decisions on the entire graph rather than on each individual node autoregressively, drastically speeding up the search compared to prior methods. Moreover, we propose recurrent attention layers to jointly optimize dependent graph optimization tasks and demonstrate 33%-60% speedup on three graph optimization tasks compared to TensorFlow default optimization. On a diverse set of representative graphs consisting of up to 80,000 nodes, including Inception-v3, Transformer-XL, and WaveNet, GO achieves on average 21% improvement over human experts and 18% improvement over the prior state of the art with 15x faster convergence, on a device placement task evaluated in real systems.
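To make the high-level idea in the abstract concrete, below is a minimal, hypothetical sketch (not the authors' implementation): an inductive, GraphSAGE-style encoder produces per-node embeddings for a computation graph, and an attention-style policy head scores every (node, device) pair in one pass, so a placement decision is emitted for the entire graph at once rather than node by node autoregressively. All function names, sizes, and the toy graph are assumptions made for illustration.

```python
# Illustrative sketch only: whole-graph decision making via a GNN encoder
# plus an attention-style policy head. Names and shapes are hypothetical.
import numpy as np

rng = np.random.default_rng(0)


def gnn_encode(node_feats, adjacency, num_layers=2):
    """Mean-aggregation message passing (GraphSAGE-style, inductive)."""
    h = node_feats
    for _ in range(num_layers):
        # Average neighbor features, then mix with the node's own features.
        deg = adjacency.sum(axis=1, keepdims=True) + 1e-6
        neigh = (adjacency @ h) / deg
        weights = rng.normal(scale=0.1, size=(h.shape[1] * 2, h.shape[1]))
        h = np.tanh(np.concatenate([h, neigh], axis=1) @ weights)
    return h  # shape: [num_nodes, hidden]


def placement_policy(node_emb, num_devices=4):
    """Score every (node, device) pair at once; one decision per node."""
    device_emb = rng.normal(scale=0.1, size=(num_devices, node_emb.shape[1]))
    logits = node_emb @ device_emb.T                  # [num_nodes, num_devices]
    probs = np.exp(logits - logits.max(axis=1, keepdims=True))
    probs /= probs.sum(axis=1, keepdims=True)
    return probs.argmax(axis=1)                       # device id per op


# Toy computation graph: 6 ops with random features and a sparse adjacency.
feats = rng.normal(size=(6, 8))
adj = (rng.random((6, 6)) < 0.3).astype(float)
print(placement_policy(gnn_encode(feats, adj)))
```

In the actual method, the policy output would be sampled and trained with reinforcement learning against measured runtime, and additional recurrent attention layers would couple related tasks (e.g., placement, scheduling, fusion); this sketch only illustrates the one-shot, whole-graph decision structure described in the abstract.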