Paper Title
DIRA: Dynamic Domain Incremental Regularised Adaptation
Paper Authors
Paper Abstract
Autonomous systems (AS) often use Deep Neural Network (DNN) classifiers to allow them to operate in complex, high-dimensional, non-linear, and dynamically changing environments. Due to the complexity of these environments, DNN classifiers may output misclassifications during operation when they face domains not identified during development. Removing a system from operation for retraining becomes impractical as the number of such AS increases. To increase AS reliability and overcome this limitation, DNN classifiers need the ability to adapt during operation when faced with different operational domains, using only a few samples (e.g. 2 to 100 samples). However, retraining DNNs on a few samples is known to cause catastrophic forgetting and poor generalisation. In this paper, we introduce Dynamic Incremental Regularised Adaptation (DIRA), an approach for dynamic operational domain adaptation of DNNs using regularisation techniques. We show that DIRA mitigates the problem of forgetting and achieves strong gains in performance when retraining using a few samples from the target domain. Our approach shows improvements on different image classification benchmarks aimed at evaluating robustness to distribution shifts (e.g. CIFAR-10C/100C, ImageNet-C), and produces state-of-the-art performance in comparison with other methods from the literature.
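To make the idea of regularised few-shot adaptation concrete, below is a minimal PyTorch sketch. It fine-tunes a source-trained classifier on a handful of target-domain samples while penalising drift from the source weights with a plain L2 penalty; the penalty, the function name `adapt_to_target_domain`, and the hyperparameters are illustrative assumptions, not the paper's exact regularisation scheme.

```python
# Minimal sketch of regularisation-based few-shot domain adaptation.
# Assumption: an L2 penalty pulling the adapted weights back towards the
# source-trained weights stands in for DIRA's regularisation term, which
# may differ (e.g. importance-weighted penalties).
import torch
import torch.nn.functional as F


def adapt_to_target_domain(model, target_batch, target_labels,
                           reg_strength=100.0, lr=1e-3, steps=10):
    """Fine-tune `model` on a few target-domain samples (e.g. 2 to 100)
    while penalising drift from the source weights to limit forgetting."""
    # Snapshot the source-domain weights before adaptation.
    source_params = [p.detach().clone() for p in model.parameters()]
    optimiser = torch.optim.SGD(model.parameters(), lr=lr)

    for _ in range(steps):
        optimiser.zero_grad()
        logits = model(target_batch)
        task_loss = F.cross_entropy(logits, target_labels)
        # Regulariser: keep adapted weights close to the source weights.
        reg_loss = sum(((p - p0) ** 2).sum()
                       for p, p0 in zip(model.parameters(), source_params))
        loss = task_loss + reg_strength * reg_loss
        loss.backward()
        optimiser.step()
    return model
```

In this sketch, `reg_strength` controls the trade-off between fitting the few target-domain samples and retaining source-domain performance; setting it too low recovers plain fine-tuning and its catastrophic forgetting.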