Paper Title

Repairing Group-Level Errors for DNNs Using Weighted Regularization

Paper Authors

Ziyuan Zhong, Yuchi Tian, Conor J. Sweeney, Vicente Ordonez, Baishakhi Ray

Abstract

Deep Neural Networks (DNNs) have been widely used in software that makes decisions impacting people's lives. However, they have been found to exhibit severe erroneous behaviors that may lead to unfortunate outcomes. Previous work shows that such misbehaviors often occur due to class property violations rather than errors on a single image. Although methods for detecting such errors have been proposed, fixing them has not been studied so far. Here, we propose a generic method called Weighted Regularization (WR), consisting of five concrete methods targeting the error-producing classes to fix the DNNs. In particular, it can repair confusion errors and bias errors of DNN models for both single-label and multi-label image classification. A confusion error happens when a given DNN model tends to confuse two classes. Each method in WR assigns more weight, at some stage of DNN retraining or inference, to mitigate the confusion between the target pair; a bias error can be fixed similarly. We evaluate and compare the proposed methods, along with baselines, on six widely-used dataset and architecture combinations. The results suggest that the WR methods have different trade-offs, but under each setting at least one WR method can greatly reduce confusion/bias errors at a very limited cost to overall performance.
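The abstract does not give WR's exact formulation, but the core idea of "assigning more weight to mitigate the confusion between the target pair" can be illustrated with a class-weighted cross-entropy loss. The sketch below is an assumption-laden illustration (the weight values, class indices, and function names are hypothetical, not the paper's), showing how upweighting an error-producing class pair during retraining makes their mistakes dominate the loss:

```python
import math

def weighted_cross_entropy(probs, label, class_weights):
    """Cross-entropy for one example, scaled by the weight of its true class.

    probs:         predicted probability for each class (assumed to sum to 1)
    label:         index of the true class
    class_weights: per-class weights; members of the confused pair get
                   weight > 1 so their errors count more during retraining
    """
    return -class_weights[label] * math.log(probs[label])

# Hypothetical 3-class task where classes 1 and 2 are the confused pair:
# both members of the pair are upweighted relative to the other classes.
wr_weights = [1.0, 2.0, 2.0]

probs = [0.2, 0.5, 0.3]  # model output for one example with true class 1
loss_plain = weighted_cross_entropy(probs, 1, [1.0, 1.0, 1.0])
loss_wr = weighted_cross_entropy(probs, 1, wr_weights)
# loss_wr is exactly twice loss_plain: errors on the confused pair
# are penalized more heavily, pushing retraining to separate the pair.
```

In practice such weighting can also be applied at other stages (e.g., reweighting sampled training examples, or rescaling output scores at inference), which is presumably why WR comprises several concrete variants with different trade-offs.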
