Paper Title
Pre-trained Models for Natural Language Processing: A Survey
Paper Authors
Abstract
Recently, the emergence of pre-trained models (PTMs) has brought natural language processing (NLP) into a new era. In this survey, we provide a comprehensive review of PTMs for NLP. We first briefly introduce language representation learning and its research progress. Then we systematically categorize existing PTMs according to a taxonomy with four perspectives. Next, we describe how to adapt the knowledge of PTMs to downstream tasks. Finally, we outline some potential directions for PTMs in future research. This survey is intended to be a hands-on guide for understanding, using, and developing PTMs for various NLP tasks.