Paper Title

PrefixRL: Optimization of Parallel Prefix Circuits using Deep Reinforcement Learning

Authors

Rajarshi Roy, Jonathan Raiman, Neel Kant, Ilyas Elkin, Robert Kirby, Michael Siu, Stuart Oberman, Saad Godil, Bryan Catanzaro

Abstract

In this work, we present a reinforcement learning (RL) based approach to designing parallel prefix circuits such as adders or priority encoders that are fundamental to high-performance digital design. Unlike prior methods, our approach designs solutions tabula rasa purely through learning with synthesis in the loop. We design a grid-based state-action representation and an RL environment for constructing legal prefix circuits. Deep Convolutional RL agents trained on this environment produce prefix adder circuits that Pareto-dominate existing baselines with up to 16.0% and 30.2% lower area for the same delay in the 32b and 64b settings respectively. We observe that agents trained with open-source synthesis tools and cell library can design adder circuits that achieve lower area and delay than commercial tool adders in an industrial cell library.
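To make the object of optimization concrete: a parallel prefix adder computes per-bit generate/propagate signals and combines them through a prefix network whose topology determines the area/delay trade-off the RL agent explores. Below is a minimal Python sketch of one classic fixed topology, the Kogge-Stone network. This is an illustrative reference model only, not the paper's method or any circuit the agent produces; the function name and structure are my own.

```python
def kogge_stone_add(a: int, b: int, width: int = 32) -> int:
    """Model a Kogge-Stone parallel prefix adder at the bit level.

    One classic prefix topology; PrefixRL searches over the space of
    such topologies rather than using this fixed one.
    """
    mask = (1 << width) - 1
    a &= mask
    b &= mask
    # Per-bit propagate (XOR) and generate (AND) signals.
    p0 = [((a >> i) & 1) ^ ((b >> i) & 1) for i in range(width)]
    g = [((a >> i) & 1) & ((b >> i) & 1) for i in range(width)]
    p = p0[:]
    # Kogge-Stone prefix tree: log2(width) levels, doubling span each level.
    d = 1
    while d < width:
        ng, np_ = g[:], p[:]
        for i in range(d, width):
            # Prefix combine: (g, p) o (g', p') = (g | (p & g'), p & p')
            ng[i] = g[i] | (p[i] & g[i - d])
            np_[i] = p[i] & p[i - d]
        g, p = ng, np_
        d *= 2
    # After the full prefix pass, g[i] is the carry out of bit i.
    s, carry = 0, 0
    for i in range(width):
        s |= (p0[i] ^ carry) << i
        carry = g[i]
    return s
```

The prefix combine operator is associative, which is what allows the generate/propagate pairs to be merged in many different tree shapes; each legal tree is a valid adder with a different area/delay profile, and that design space is what the grid-based RL environment enumerates.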
