Paper Title

Survey on Self-supervised Representation Learning Using Image Transformations

Authors

Muhammad Ali, Sayed Hashim

Abstract

Deep neural networks need a huge amount of training data, while in the real world there is a scarcity of data available for training purposes. To address this issue, self-supervised learning (SSL) methods are used. SSL using geometric transformations (GT) is a simple yet powerful technique for unsupervised representation learning. Although multiple survey papers have reviewed SSL techniques, none focuses exclusively on those that use geometric transformations. Furthermore, such methods have not been covered in depth in the papers that do review them. Our motivation for presenting this work is that geometric transformations have been shown to be powerful supervisory signals in unsupervised representation learning. Moreover, many such works have found tremendous success but have not gained much attention. We present a concise survey of SSL approaches that use geometric transformations. We shortlist six representative models that use image transformations, including those based on predicting and autoencoding transformations. We review their architectures as well as their learning methodologies. We also compare the performance of these models on the object recognition task on the CIFAR-10 and ImageNet datasets. Our analysis indicates that AETv2 performs best in most settings. Rotation with feature decoupling also performs well in some settings. We then derive insights from the observed results. Finally, we conclude with a summary of the results and insights, highlight open problems to be addressed, and indicate various future directions.
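The surveyed methods use the applied image transformation itself as the supervisory signal. As a minimal sketch of the transformation-prediction idea (in the spirit of the rotation-prediction family covered here, not code from the paper), the following PyTorch snippet trains a network to classify which of four rotations was applied to an unlabeled image; the backbone, hyperparameters, and helper names are illustrative assumptions.

```python
# Minimal sketch of a rotation-prediction pretext task (illustrative only;
# not the authors' code). Assumes PyTorch and torchvision are available.
import torch
import torch.nn as nn
import torchvision

def make_rotation_batch(images):
    """Return 4 rotated copies (0, 90, 180, 270 degrees) of each image in an
    NCHW batch, plus the rotation index used as the self-supervised label."""
    rotated = torch.cat([torch.rot90(images, k, dims=(2, 3)) for k in range(4)], dim=0)
    labels = torch.arange(4).repeat_interleave(images.size(0))
    return rotated, labels

# Illustrative backbone: ResNet-18 with a 4-way head for the rotation classes.
model = torchvision.models.resnet18(num_classes=4)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9)
criterion = nn.CrossEntropyLoss()

# One training step on a random batch standing in for unlabeled CIFAR-10 images.
images = torch.randn(8, 3, 32, 32)
rotated, rot_labels = make_rotation_batch(images)
optimizer.zero_grad()
loss = criterion(model(rotated), rot_labels)
loss.backward()
optimizer.step()
```

Autoencoding-transformation methods such as AETv2, also surveyed here, instead regress the applied transformation from the representations of the original and transformed images rather than classifying a discrete transformation label.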
