Paper Title

Studying the explanations for the automated prediction of bug and non-bug issues using LIME and SHAP

Paper Authors

Benjamin Ledel, Steffen Herbold

Paper Abstract

Context: The identification of bugs within the reported issues in an issue tracker is crucial for the triage of issues. Machine learning models have shown promising results regarding the performance of automated issue type prediction. However, beyond our assumptions, we have only limited knowledge of how such models identify bugs. LIME and SHAP are popular techniques for explaining the predictions of classifiers. Objective: We want to understand whether machine learning models provide explanations for the classification that are reasonable to us as humans and align with our assumptions about what the models should learn. We also want to know whether the prediction quality is correlated with the quality of the explanations. Method: We conduct a study in which we rate LIME and SHAP explanations based on how well they explain the outcome of an issue type prediction model. For this, we rate the quality of the explanations themselves, i.e., whether they align with our expectations and whether they help us to understand the underlying machine learning model.
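The setup described in the abstract can be illustrated with a small sketch. The example below is not the authors' actual pipeline: it trains a hypothetical TF-IDF plus logistic regression issue type classifier on a few made-up issue texts and then produces a LIME and a SHAP explanation for a single issue, listing per-token contributions toward the "bug" class. All issue texts, labels, class names, and parameter values are illustrative assumptions.

```python
# Minimal sketch (assumed setup, not the authors' pipeline): explaining an
# issue type classifier with LIME and SHAP.
import shap
from lime.lime_text import LimeTextExplainer
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical training data: issue texts with bug / non-bug labels.
texts = [
    "App crashes with NullPointerException when saving a file",
    "Add dark mode support to the settings page",
    "Error 500 returned after submitting the login form",
    "Update the user guide for the new export feature",
]
labels = [1, 0, 1, 0]  # 1 = bug, 0 = non-bug
class_names = ["non-bug", "bug"]

# Simple text classification pipeline standing in for an issue type predictor.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(texts, labels)

issue = "The application throws an exception and crashes on startup"

# LIME: perturbs the input text and fits a local surrogate model; the
# returned (token, weight) pairs are each word's local contribution.
lime_explainer = LimeTextExplainer(class_names=class_names)
lime_exp = lime_explainer.explain_instance(issue, model.predict_proba, num_features=5)
print("LIME:", lime_exp.as_list())

# SHAP: a permutation-based explainer over a regex token masker, attributing
# the predicted probability to individual tokens.
masker = shap.maskers.Text(r"\W+")
shap_explainer = shap.Explainer(model.predict_proba, masker, output_names=class_names)
shap_values = shap_explainer([issue])
# Token contributions toward the "bug" class (index 1).
print("SHAP:", list(zip(shap_values[0].data, shap_values[0].values[:, 1])))
```

Rating the resulting (token, weight) lists against one's expectations, as the study does, then amounts to judging whether the highly weighted tokens are terms a human would also consider indicative of a bug report.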
