Title
Predicting Memory Compiler Performance Outputs using Feed-Forward Neural Networks
Authors
Abstract
Typical semiconductor chips include thousands of mostly small memories. As memories contribute an estimated 25% to 40% to the overall power, performance, and area (PPA) of a chip, memories must be designed carefully to meet the system's requirements. Memory arrays are highly uniform and can be described by approximately 10 parameters, depending mostly on the complexity of the periphery. Thus, to improve PPA utilization, memories are typically generated by memory compilers. A key task in the design flow of a chip is to find optimal memory compiler parametrizations which, on the one hand, fulfill system requirements and, on the other hand, optimize PPA. Although most compiler vendors also provide optimizers for this task, these are often slow or inaccurate. To enable efficient optimization despite long compiler run times, we propose training fully connected feed-forward neural networks to predict PPA outputs given a memory compiler parametrization. Using an exhaustive search-based optimizer framework which obtains neural network predictions, PPA-optimal parametrizations are found within seconds after chip designers have specified their requirements. Average model prediction errors of less than 3%, a decision reliability of over 99%, and productive usage of the optimizer in successful, large-volume chip design projects illustrate the effectiveness of the approach.
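To make the described flow concrete, the following is a minimal sketch (not the authors' code) of the two components the abstract names: a small fully connected feed-forward network mapping a memory parametrization to predicted PPA outputs, and an exhaustive search over the parameter space that keeps only parametrizations meeting a designer-specified requirement and returns the predicted-area-optimal one. The three compiler parameters (word count, bits per word, column mux), the constraint value, and the random weights standing in for a trained model are all hypothetical placeholders.

```python
import itertools
import numpy as np

rng = np.random.default_rng(0)

def init_mlp(sizes):
    """Random weights as a stand-in for a trained feed-forward model."""
    return [(rng.normal(scale=0.3, size=(m, n)), np.zeros(n))
            for m, n in zip(sizes[:-1], sizes[1:])]

# Hypothetical normalization constants for the 3 example parameters.
PARAM_SCALE = np.array([1024.0, 64.0, 8.0])

def predict_ppa(params, layers):
    """Forward pass: parametrization -> (power, access_time, area)."""
    x = np.asarray(params, dtype=float) / PARAM_SCALE  # normalize inputs
    for i, (W, b) in enumerate(layers):
        x = x @ W + b
        if i < len(layers) - 1:       # ReLU on hidden layers only
            x = np.maximum(x, 0.0)
    return np.abs(x)                  # PPA outputs are non-negative

# Hypothetical 3-parameter compiler: words, bits per word, column mux.
space = list(itertools.product([256, 512, 1024], [16, 32, 64], [2, 4, 8]))
model = init_mlp([3, 32, 32, 3])      # 3 parameters -> 3 PPA outputs

# Exhaustive search: evaluate every parametrization via the model,
# keep those meeting the (arbitrary) access-time requirement, and
# pick the one with minimal predicted area.
max_access_time = 2.0
feasible = [(p, predict_ppa(p, model)) for p in space]
feasible = [(p, y) for p, y in feasible if y[1] <= max_access_time]
if feasible:
    best, ppa = min(feasible, key=lambda t: t[1][2])
```

Because the model evaluates in microseconds rather than the minutes a compiler run takes, enumerating even a few thousand parametrizations completes within seconds, which is what makes the exhaustive search practical.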