Title

Self-supervised Learning: Generative or Contrastive

Authors

Xiao Liu, Fanjin Zhang, Zhenyu Hou, Zhaoyu Wang, Li Mian, Jing Zhang, Jie Tang

Abstract

Deep supervised learning has achieved great success in the last decade. However, its dependence on manual labels and its vulnerability to adversarial attacks have driven researchers to explore better solutions. As an alternative, self-supervised learning has attracted many researchers over the last several years for its soaring performance on representation learning. Self-supervised representation learning leverages the input data itself as supervision and benefits almost all types of downstream tasks. In this survey, we examine new self-supervised learning methods for representation learning in computer vision, natural language processing, and graph learning. We comprehensively review the existing empirical methods and summarize them into three main categories according to their objectives: generative, contrastive, and generative-contrastive (adversarial). We further investigate related theoretical analysis work to provide deeper insight into how self-supervised learning works. Finally, we briefly discuss open problems and future directions for self-supervised learning. An outline slide deck for the survey is provided.
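
The taxonomy above turns on the pre-training objective: generative methods reconstruct the input, while contrastive methods learn to tell positive pairs apart from negatives. As a purely illustrative sketch of the contrastive category (not code from this survey), below is a minimal InfoNCE-style loss in PyTorch; the function name `info_nce_loss`, the temperature value, and the toy batch are assumptions made for the example.

```python
import torch
import torch.nn.functional as F

def info_nce_loss(z1, z2, temperature=0.1):
    """Minimal InfoNCE contrastive loss (illustrative, not from the survey).

    z1, z2: (batch, dim) embeddings where z1[i] and z2[i] are two
    augmented views of the same input (the positive pair); every other
    row in the batch serves as a negative.
    """
    z1 = F.normalize(z1, dim=1)         # cosine similarity via L2-normalized dot products
    z2 = F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / temperature  # (batch, batch) similarity matrix
    targets = torch.arange(z1.size(0))  # positives sit on the diagonal
    return F.cross_entropy(logits, targets)

# Toy usage with random tensors standing in for encoder outputs.
z1, z2 = torch.randn(8, 128), torch.randn(8, 128)
print(info_nce_loss(z1, z2))
```

Contrastive methods such as SimCLR and MoCo optimize variants of this objective, differing mainly in how negative samples are gathered.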
