Paper Title

Online Class-Incremental Continual Learning with Adversarial Shapley Value

Paper Authors

Dongsub Shim, Zheda Mai, Jihwan Jeong, Scott Sanner, Hyunwoo Kim, Jongseong Jang

Paper Abstract

As image-based deep learning becomes pervasive on every device, from cell phones to smart watches, there is a growing need to develop methods that continually learn from data while minimizing memory footprint and power consumption. While memory replay techniques have shown exceptional promise for this task of continual learning, the best method for selecting which buffered images to replay is still an open question. In this paper, we specifically focus on the online class-incremental setting where a model needs to learn new classes continually from an online data stream. To this end, we contribute a novel Adversarial Shapley value scoring method that scores memory data samples according to their ability to preserve latent decision boundaries for previously observed classes (to maintain learning stability and avoid forgetting) while interfering with latent decision boundaries of current classes being learned (to encourage plasticity and optimal learning of new class boundaries). Overall, we observe that our proposed ASER method provides competitive or improved performance compared to state-of-the-art replay-based continual learning methods on a variety of datasets.
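
As a rough illustration of the scoring idea described above, the sketch below computes KNN-based Shapley values for memory samples (using the closed-form recursion of Jia et al., 2019, which ASER builds on) and combines them into an ASER-style score: cooperative value with respect to evaluation samples drawn from memory (preserving old decision boundaries) minus adversarial value with respect to the current input batch (interfering with new-class boundaries). The raw-feature distance, the mean-based combination, and the function names `knn_shapley` and `aser_scores` are illustrative assumptions rather than the paper's exact procedure; in practice the features would come from the model's latent space.

```python
import numpy as np

def knn_shapley(X_train, y_train, x_eval, y_eval, K=3):
    """Closed-form KNN Shapley value of each training (memory) point
    for a single evaluation point (Jia et al., 2019)."""
    N = len(y_train)
    # Sort memory points by distance to the evaluation point (closest first).
    order = np.argsort(np.linalg.norm(X_train - x_eval, axis=1))
    sv = np.zeros(N)
    # Base case: the farthest point.
    sv[order[-1]] = float(y_train[order[-1]] == y_eval) / N
    # Recursive case, filling in from second-farthest to closest.
    for i in range(N - 2, -1, -1):              # i + 1 = rank (1-indexed) by distance
        j, j_next = order[i], order[i + 1]
        match_diff = float(y_train[j] == y_eval) - float(y_train[j_next] == y_eval)
        sv[j] = sv[j_next] + match_diff / K * min(K, i + 1) / (i + 1)
    return sv

def aser_scores(mem_X, mem_y, eval_X, eval_y, cur_X, cur_y, K=3):
    """ASER-style score for each memory sample: high Shapley value w.r.t.
    evaluation samples from memory, low (adversarial) value w.r.t. the
    current input batch. Higher score = better candidate to replay."""
    sv_eval = np.stack([knn_shapley(mem_X, mem_y, x, y, K)
                        for x, y in zip(eval_X, eval_y)])   # (n_eval, n_mem)
    sv_cur = np.stack([knn_shapley(mem_X, mem_y, x, y, K)
                       for x, y in zip(cur_X, cur_y)])      # (n_cur, n_mem)
    # Mean-based combination (illustrative; the paper also considers other variants).
    return sv_eval.mean(axis=0) - sv_cur.mean(axis=0)
```

For example, with a memory buffer of old-class feature vectors `mem_X, mem_y`, a small evaluation subset sampled from that buffer, and the incoming new-class mini-batch `cur_X, cur_y`, `np.argsort(-aser_scores(...))` would rank memory samples for replay, favoring those that both anchor previously learned boundaries and interfere with the new class.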
