Paper Title

Automated Source Code Generation and Auto-completion Using Deep Learning: Comparing and Discussing Current Language-Model-Related Approaches

Paper Authors

Juan Cruz-Benito, Sanjay Vishwakarma, Francisco Martin-Fernandez, Ismael Faro

Paper Abstract

In recent years, the use of deep learning in language models has gained much attention. Some research projects claim that they can generate text that can be interpreted as human writing, enabling new possibilities in many application areas. Among the different areas related to language processing, one of the most notable applications of this type of modeling is programming languages. For years, the machine learning community has been researching this software engineering area, pursuing goals like applying different approaches to auto-complete, generate, fix, or evaluate code programmed by humans. Considering the increasing popularity of the deep-learning-enabled language model approach, we detected a lack of empirical papers that compare different deep learning architectures for creating and using language models based on programming code. This paper compares different neural network architectures, such as AWD-LSTM, AWD-QRNN, and Transformer, while using transfer learning and different tokenizations, to see how they behave in building language models over a Python dataset for code generation and fill-in-the-mask tasks. Considering the results, we discuss each approach's strengths and weaknesses, as well as the gaps we found in evaluating these language models or applying them in a real programming context.
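To make the two evaluated tasks concrete, the sketch below shows what code generation and fill-in-the-mask look like on a Python snippet, using the Hugging Face `transformers` pipelines. The checkpoints named here (`gpt2`, `huggingface/CodeBERTa-small-v1`) are illustrative stand-ins, not the models trained in the paper.

```python
# Minimal sketch of the paper's two tasks with off-the-shelf checkpoints.
# The model names below are illustrative assumptions, not the paper's own
# AWD-LSTM / AWD-QRNN / Transformer models.
from transformers import pipeline

# Code generation: continue a Python prompt with predicted tokens.
generator = pipeline("text-generation", model="gpt2")
print(generator("def add(a, b):", max_new_tokens=10, num_return_sequences=1))

# Fill-in-the-mask: predict a masked-out token inside a Python snippet.
fill_mask = pipeline("fill-mask", model="huggingface/CodeBERTa-small-v1")
print(fill_mask("def add(a, b): return a <mask> b"))
```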
