Paper Title
Saliency-Augmented Memory Completion for Continual Learning
Paper Authors
Paper Abstract
Continual Learning is considered a key step toward next-generation Artificial Intelligence. Among various methods, replay-based approaches that maintain and replay a small episodic memory of previous samples are among the most successful strategies against catastrophic forgetting. However, since forgetting is inevitable given bounded memory and unbounded tasks, how to forget is a problem continual learning must address. Therefore, beyond simply avoiding catastrophic forgetting, an under-explored issue is how to forget reasonably while preserving the merits of human memory, including (1) storage efficiency, (2) generalizability, and (3) a degree of interpretability. To achieve these goals simultaneously, our paper proposes a new saliency-augmented memory completion framework for continual learning, inspired by recent discoveries on memory completion separation in cognitive neuroscience. Specifically, we propose to store in episodic memory only the part of each image most important to the task, via saliency-map extraction and memory encoding. When learning new tasks, previous data from memory are inpainted by an adaptive data generation module, inspired by how humans complete episodic memories. The module's parameters are shared across all tasks, and it can be trained jointly with the continual learning classifier as a bilevel optimization problem. Extensive experiments on several continual learning and image classification benchmarks demonstrate the proposed method's effectiveness and efficiency.
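Below is a minimal Python/PyTorch sketch of the two mechanisms the abstract describes: storing only the salient region of each image in episodic memory, and completing (inpainting) that region before replay. This is not the authors' implementation: a plain input-gradient map stands in for the paper's saliency extraction, a mean-fill stands in for the learned adaptive data generation module, and the bilevel objective is reduced to an ordinary joint replay loss. All names here (saliency_mask, complete, keep_ratio, replay_step) are illustrative assumptions.

```python
# Illustrative sketch only -- not the paper's implementation (see lead-in).
import torch
import torch.nn.functional as F

def saliency_mask(model, x, y, keep_ratio=0.25):
    """Binary mask keeping the `keep_ratio` most salient pixels per image
    (input-gradient saliency as a stand-in for the paper's extraction)."""
    x = x.clone().requires_grad_(True)
    F.cross_entropy(model(x), y).backward()
    sal = x.grad.abs().sum(dim=1, keepdim=True)            # (B, 1, H, W)
    k = max(1, int(keep_ratio * sal[0].numel()))
    thresh = sal.flatten(1).topk(k, dim=1).values[:, -1]   # k-th largest value
    return (sal >= thresh.view(-1, 1, 1, 1)).float()

def store(memory, model, x, y, keep_ratio=0.25):
    """Keep only the salient pixels plus the mask -- the storage saving."""
    m = saliency_mask(model, x, y, keep_ratio)
    memory.append((x * m, y, m))

def complete(x_masked, mask):
    """Stand-in for the learned inpainting module: fill the discarded
    region with the per-channel mean of the stored salient pixels."""
    denom = mask.sum(dim=(2, 3), keepdim=True).clamp(min=1.0)
    fill = x_masked.sum(dim=(2, 3), keepdim=True) / denom
    return x_masked + (1.0 - mask) * fill

def replay_step(model, opt, new_x, new_y, memory):
    """One training step on new-task data plus a completed memory sample."""
    loss = F.cross_entropy(model(new_x), new_y)
    if memory:
        mx, my, mm = memory[torch.randint(len(memory), (1,)).item()]
        loss = loss + F.cross_entropy(model(complete(mx, mm)), my)
    opt.zero_grad()
    loss.backward()
    opt.step()
```

In the paper itself the completion module is learned and trained jointly with the classifier via the bilevel objective; this sketch freezes the completion step to a fixed heuristic purely to keep the replay pipeline readable.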