Title
Improving Automated Program Repair with Domain Adaptation
Authors
Abstract
Automated Program Repair (APR) is the process of fixing a bug or defect in source code with an automated tool. APR tools have recently achieved promising results by leveraging state-of-the-art Natural Language Processing (NLP) techniques. Tools such as TFix and CodeXGLUE, which combine text-to-text transformers with software-specific techniques, currently outperform the alternatives. However, in most APR studies the train and test sets are drawn from the same set of projects, whereas in practice APR models are meant to generalize to new and different projects. There is therefore a potential threat that APR models reported as highly effective perform poorly when the characteristics of a new project or its bugs differ from those of the training set (domain shift). In this study, we first define and measure the domain shift problem in automated program repair. We then propose a domain adaptation framework that can adapt an APR model to a given target project. We conduct an empirical study with three domain adaptation methods (FullFineTuning, TuningWithLightWeightAdapterLayers, and CurriculumLearning) using two state-of-the-art APR tools (TFix and CodeXGLUE) on 611 bugs from 19 projects. The results show that our proposed framework can improve the effectiveness of TFix by 13.05% and CodeXGLUE by 23.4%. Another contribution of this study is a data synthesis method that addresses the lack of labelled data in APR. We leverage transformers to create a bug generator model and use the generated synthetic data to domain-adapt TFix and CodeXGLUE on projects with no data (zero-shot learning), which results in average improvements of 5.76% and 24.42% for TFix and CodeXGLUE, respectively.
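To make the CurriculumLearning idea mentioned above concrete, here is a minimal, hypothetical sketch of one way to order a target project's bug-fix pairs from "easy" to "hard" before fine-tuning. The difficulty heuristic (edit distance between the buggy and fixed code) and all names below are illustrative assumptions, not the paper's actual implementation.

```python
# Hypothetical curriculum-ordering step for domain adaptation of an APR model:
# present the target project's (buggy, fixed) pairs in order of increasing
# "difficulty", approximated here by how much the fix changes the code.
# This heuristic is an assumption for illustration only.
from difflib import SequenceMatcher


def difficulty(buggy: str, fixed: str) -> float:
    """Proxy for repair difficulty: 1 - textual similarity of buggy vs. fixed."""
    return 1.0 - SequenceMatcher(None, buggy, fixed).ratio()


def curriculum_order(pairs):
    """Sort (buggy, fixed) pairs so the smallest edits come first."""
    return sorted(pairs, key=lambda p: difficulty(*p))


# Toy target-project data: one-line JavaScript-style fixes.
pairs = [
    ("if (x = 1) {", "if (x == 1) {"),             # small token fix
    ("return foo(a, b)", "return bar(b, a) + 1"),  # larger rewrite
    ("i =+ 1", "i += 1"),                          # tiny operator fix
]
ordered = curriculum_order(pairs)
```

A fine-tuning loop would then feed `ordered` to the model batch by batch, so early updates come from near-trivial edits and later ones from larger rewrites.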