Paper Title
Dynamic Channel Selection in Self-Supervised Learning
Paper Authors
Paper Abstract
Whilst computer vision models built using self-supervised approaches are now commonplace, some important questions remain. Do self-supervised models learn highly redundant channel features? What if a self-supervised network could dynamically select the important channels and discard the unnecessary ones? Currently, convnets pre-trained with self-supervision obtain performance on downstream tasks comparable to their supervised counterparts in computer vision. However, self-supervised models have drawbacks, including their large numbers of parameters, computationally expensive training strategies, and a clear need for faster inference on downstream tasks. In this work, our goal is to address the latter by studying how a standard channel selection method developed for supervised learning can be applied to networks trained with self-supervision. We validate our findings over a range of target budgets $t_{d}$ for channel computation on image classification tasks across different datasets, specifically CIFAR-10, CIFAR-100, and ImageNet-100, obtaining performance comparable to that of the original network when selecting all channels, but with a significant reduction in computation reported in terms of FLOPs.
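To make the idea of dynamic channel selection under a target budget $t_{d}$ concrete, the sketch below shows one generic way such a mechanism can be realized in PyTorch: a lightweight gate scores the channels of a feature map and keeps only the top fraction $t_{d}$ of them per input. This is an illustrative sketch only; the module name `ChannelGate`, the scoring head, and the way the budget is enforced are assumptions, not the method evaluated in the paper.

```python
# Minimal sketch of dynamic channel selection with a target budget t_d (assumed design).
import torch
import torch.nn as nn

class ChannelGate(nn.Module):
    """Scores channels and keeps only a fraction t_d of them per input (illustrative)."""
    def __init__(self, num_channels: int, t_d: float = 0.5):
        super().__init__()
        self.t_d = t_d                      # target budget: fraction of channels kept
        self.scorer = nn.Sequential(        # lightweight per-channel importance head
            nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
            nn.Linear(num_channels, num_channels),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        scores = self.scorer(x)                         # (B, C) channel importance scores
        k = max(1, int(self.t_d * x.shape[1]))          # number of channels to keep
        topk = scores.topk(k, dim=1).indices            # indices of selected channels
        mask = torch.zeros_like(scores)
        mask.scatter_(1, topk, 1.0)                     # binary keep/drop mask per sample
        return x * mask[:, :, None, None]               # zero out unselected channels

# Usage: gate the output of a conv block so only ~t_d of its channels are computed on.
feat = torch.randn(4, 64, 32, 32)                       # dummy feature map
gated = ChannelGate(num_channels=64, t_d=0.25)(feat)    # keep roughly 25% of channels
```

In practice, the FLOP savings reported in the abstract come from skipping the computation of the dropped channels rather than merely masking them as done here; the masking form is used only to keep the sketch short and self-contained.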