Title
Fast and Memory-Efficient Neural Code Completion
Authors
Abstract
Code completion is one of the most widely used features of modern integrated development environments (IDEs). While deep learning has made significant progress in the statistical prediction of source code, state-of-the-art neural network models consume hundreds of megabytes of memory, bloating the development environment. We address this in two steps: first, we present a modular neural framework for code completion, which allows us to explore the design space and evaluate different techniques. Second, within this framework, we design a novel reranking neural completion model that combines static analysis with granular token encodings. The best neural reranking model consumes just 6 MB of RAM (19x less than previous models), computes a single completion in 8 ms, and achieves 90% accuracy in its top five suggestions.
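The reranking idea in the abstract can be illustrated with a minimal sketch: a static analyzer proposes completion candidates that are valid at the cursor, and a compact neural model orders them. The function names, the hard-coded candidate list, and the character-overlap "scorer" below are all illustrative assumptions, not the paper's actual components.

```python
def static_analysis_candidates(prefix):
    # Stand-in for an IDE's static analyzer: returns identifiers that are
    # valid at the cursor position (hard-coded here for illustration).
    return ["append", "add", "appendleft"]

def neural_score(prefix, candidate):
    # Stand-in for a compact neural scorer over granular (subtoken)
    # encodings; here, a trivial prefix-overlap heuristic instead of a model.
    typed = prefix.split(".")[-1]
    return sum(1 for a, b in zip(typed, candidate) if a == b)

def complete(prefix, top_k=5):
    # Rerank the statically valid candidates by the score; the static
    # analysis guarantees validity, the scorer supplies the ordering.
    candidates = static_analysis_candidates(prefix)
    return sorted(candidates, key=lambda c: -neural_score(prefix, c))[:top_k]

print(complete("mylist.app"))  # ranks "append"/"appendleft" above "add"
```

Keeping the candidate set small via static analysis is what lets the scoring model itself stay tiny, which is consistent with the abstract's memory and latency claims.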