Paper Title
SCALE: Online Self-Supervised Lifelong Learning without Prior Knowledge
Paper Authors
Paper Abstract
Unsupervised lifelong learning refers to the ability to learn over time while memorizing previous patterns without supervision. Although great progress has been made in this direction, existing work often assumes strong prior knowledge about the incoming data (e.g., knowing the class boundaries), which can be impossible to obtain in complex and unpredictable environments. In this paper, motivated by real-world scenarios, we propose a more practical problem setting called online self-supervised lifelong learning without prior knowledge. The proposed setting is challenging due to the non-iid and single-pass data, the absence of external supervision, and no prior knowledge. To address the challenges, we propose Self-Supervised ContrAstive Lifelong LEarning without Prior Knowledge (SCALE) which can extract and memorize representations on the fly purely from the data continuum. SCALE is designed around three major components: a pseudo-supervised contrastive loss, a self-supervised forgetting loss, and an online memory update for uniform subset selection. All three components are designed to work collaboratively to maximize learning performance. We perform comprehensive experiments of SCALE under iid and four non-iid data streams. The results show that SCALE outperforms the state-of-the-art algorithm in all settings with improvements up to 3.83%, 2.77% and 5.86% in terms of kNN accuracy on CIFAR-10, CIFAR-100, and TinyImageNet datasets.
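To make the flavor of the three components named in the abstract more concrete, below is a minimal, hypothetical PyTorch sketch. The function names, the clustering-based pseudo labels, the distillation-style reading of the forgetting loss, and the reservoir-style uniform memory update are illustrative assumptions, not SCALE's actual implementation.

```python
# Illustrative sketch only: names, hyper-parameters, and the specific form of
# each term are assumptions for exposition, not taken from the SCALE paper.
import random
import torch
import torch.nn.functional as F


def pseudo_supcon_loss(features, pseudo_labels, temperature=0.1):
    """Supervised-contrastive-style loss driven by pseudo labels.

    features: (N, D) L2-normalized embeddings of the current batch.
    pseudo_labels: (N,) integer cluster assignments obtained without
    ground-truth supervision (e.g., from online clustering).
    """
    n = features.size(0)
    sim = features @ features.t() / temperature            # (N, N) similarities
    mask_self = torch.eye(n, dtype=torch.bool, device=features.device)
    sim = sim.masked_fill(mask_self, float("-inf"))        # exclude self-pairs
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
    log_prob = log_prob.masked_fill(mask_self, 0.0)        # avoid -inf * 0 = nan
    pos_mask = (pseudo_labels.unsqueeze(0) == pseudo_labels.unsqueeze(1)) & ~mask_self
    pos_count = pos_mask.sum(dim=1).clamp(min=1)
    # negative mean log-likelihood of pseudo-positive pairs
    return -(log_prob * pos_mask).sum(dim=1).div(pos_count).mean()


def forgetting_loss(current_feats, past_feats, temperature=0.1):
    """Distillation-style term: preserve the pairwise similarity structure
    produced by a frozen copy of the past model (one plausible reading of a
    self-supervised 'forgetting loss')."""
    p_cur = F.log_softmax(current_feats @ current_feats.t() / temperature, dim=1)
    p_old = F.softmax(past_feats @ past_feats.t() / temperature, dim=1)
    return F.kl_div(p_cur, p_old, reduction="batchmean")


def uniform_memory_update(memory, batch, capacity, seen):
    """Reservoir-style update keeping a uniform subset of the single-pass stream."""
    for x in batch:
        if len(memory) < capacity:
            memory.append(x)
        else:
            j = random.randint(0, seen)   # uniform over all items seen so far
            if j < capacity:
                memory[j] = x
        seen += 1
    return memory, seen
```

In such a setup, the total objective would combine the pseudo-supervised contrastive term on the current batch plus replayed memory samples with the forgetting term computed against features from a frozen past encoder, while the memory buffer is refreshed online without any knowledge of class or task boundaries.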