Paper Title
Technical Report for ICCV 2021 Challenge SSLAD-Track3B: Transformers Are Better Continual Learners
Paper Authors
Paper Abstract
In the SSLAD-Track 3B challenge on continual learning, we propose the method of COntinual Learning with Transformer (COLT). We find that transformers suffer less from catastrophic forgetting than convolutional neural networks. The main principle of our method is to equip the transformer-based feature extractor with old-knowledge distillation and head-expanding strategies to combat catastrophic forgetting. In this report, we first introduce the overall framework of continual learning for object detection. Then, we analyze the effect of the key elements of our solution on withstanding catastrophic forgetting. Our method achieves 70.78 mAP on the SSLAD-Track 3B challenge test set.
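The abstract names two anti-forgetting strategies: distilling knowledge from the old model and expanding the prediction head for new tasks. Below is a minimal PyTorch-style sketch of these two generic ideas, not the paper's actual implementation; all names (`expand_head`, `distillation_loss`, `old_model`, `new_model`, `lambda_kd`) are illustrative assumptions.

```python
# Sketch of two generic continual-learning ingredients mentioned in the
# abstract: old-knowledge distillation and head expansion. Assumed names,
# not the authors' code.
import copy
import torch
import torch.nn as nn
import torch.nn.functional as F


def expand_head(head: nn.Linear, num_new_classes: int) -> nn.Linear:
    """Grow a linear classification head to cover newly seen classes,
    copying the old weights so previous-task logits are preserved."""
    old_out, in_features = head.out_features, head.in_features
    new_head = nn.Linear(in_features, old_out + num_new_classes)
    with torch.no_grad():
        new_head.weight[:old_out] = head.weight
        new_head.bias[:old_out] = head.bias
    return new_head


def distillation_loss(new_logits: torch.Tensor,
                      old_logits: torch.Tensor,
                      T: float = 2.0) -> torch.Tensor:
    """Keep the new model's predictions on old classes close to the frozen
    old model's predictions (standard soft-target knowledge distillation)."""
    n_old = old_logits.size(1)  # only the old-class slice is distilled
    return F.kl_div(
        F.log_softmax(new_logits[:, :n_old] / T, dim=1),
        F.softmax(old_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)


# Typical usage before training on a new task (hypothetical variable names):
#   old_model = copy.deepcopy(model).eval()          # frozen snapshot
#   model.head = expand_head(model.head, n_new)      # room for new classes
#   loss = task_loss + lambda_kd * distillation_loss(model(x), old_model(x))
```

The design choice here is the usual trade-off: the distillation term anchors the new model to its old behavior, while the expanded head gives new classes their own parameters so learning them does not overwrite old-class weights.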