Paper Title
Unveiling Project-Specific Bias in Neural Code Models
Paper Authors
Paper Abstract
Deep learning has introduced significant improvements in many software analysis tasks. Although neural code models based on Large Language Models (LLMs) demonstrate commendable performance when trained and tested within the intra-project independent and identically distributed (IID) setting, they often struggle to generalize effectively to real-world inter-project out-of-distribution (OOD) data. In this work, we show that this phenomenon is caused by a heavy reliance on project-specific shortcuts for prediction instead of ground-truth evidence. We propose a Cond-Idf measurement to interpret this behavior, which quantifies a token's relatedness to a label and its project-specificness. The strong correlation between model behavior and the proposed measurement indicates that, without proper regularization, models tend to leverage spurious statistical cues for prediction. Equipped with these observations, we propose a novel bias mitigation mechanism that regularizes the model's learning behavior by leveraging latent logic relations among samples. Experimental results on two representative program analysis tasks indicate that our mitigation framework can improve both inter-project OOD generalization and adversarial robustness, while not sacrificing accuracy on intra-project IID data.
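The abstract does not spell out how Cond-Idf is computed. The following is a minimal sketch, assuming the measurement combines a token's conditional co-occurrence with a label (how often the label appears given the token) and an inverse document frequency taken at project granularity (how concentrated the token is in a few projects); the function name `cond_idf` and its parameters are illustrative, not the authors' implementation.

```python
from collections import defaultdict
import math

def cond_idf(samples, projects, labels, target_label):
    """Illustrative (hypothetical) Cond-Idf score per token.

    samples: list of token lists, one per code sample
    projects: project id of each sample
    labels: label of each sample
    target_label: the label whose association with tokens is measured

    score(t) = P(target_label | t) * log(#projects / #projects containing t)
    High scores flag tokens that strongly co-occur with the label yet appear
    in only a few projects, i.e. candidate project-specific shortcuts.
    """
    token_total = defaultdict(int)        # samples in which the token occurs
    token_with_label = defaultdict(int)   # of those, samples with target_label
    token_projects = defaultdict(set)     # projects in which the token occurs
    all_projects = set(projects)

    for tokens, proj, label in zip(samples, projects, labels):
        for tok in set(tokens):
            token_total[tok] += 1
            token_projects[tok].add(proj)
            if label == target_label:
                token_with_label[tok] += 1

    scores = {}
    for tok, total in token_total.items():
        cond = token_with_label[tok] / total                          # P(label | token)
        idf = math.log(len(all_projects) / len(token_projects[tok]))  # project-level IDF
        scores[tok] = cond * idf
    return scores
```

Under this reading, a high-scoring token is exactly the kind of spurious statistical cue the abstract describes: predictive within the projects it comes from, but unlikely to transfer to inter-project OOD data.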