Paper Title

Multi-task Active Learning for Pre-trained Transformer-based Models

Paper Authors

Guy Rotman, Roi Reichart

Paper Abstract

Multi-task learning, in which several tasks are jointly learned by a single model, allows NLP models to share information from multiple annotations and may facilitate better predictions when the tasks are inter-related. This technique, however, requires annotating the same text with multiple annotation schemes which may be costly and laborious. Active learning (AL) has been demonstrated to optimize annotation processes by iteratively selecting unlabeled examples whose annotation is most valuable for the NLP model. Yet, multi-task active learning (MT-AL) has not been applied to state-of-the-art pre-trained Transformer-based NLP models. This paper aims to close this gap. We explore various multi-task selection criteria in three realistic multi-task scenarios, reflecting different relations between the participating tasks, and demonstrate the effectiveness of multi-task compared to single-task selection. Our results suggest that MT-AL can be effectively used in order to minimize annotation efforts for multi-task NLP models.
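The abstract describes selecting unlabeled examples whose annotation is most valuable across several jointly learned tasks. The snippet below is a minimal sketch of one such multi-task selection criterion (normalized predictive entropy averaged over tasks), written under our own assumptions for illustration; the function names, the aggregation rule, and the synthetic pool are not taken from the paper's actual criteria.

```python
# Illustrative sketch: pick unlabeled examples that are uncertain on average
# across tasks. All names and the averaging rule are assumptions, not the
# paper's method.
import numpy as np

def normalized_entropy(probs, eps=1e-12):
    """Predictive entropy of each row, scaled to [0, 1] by log(n_classes)
    so tasks with different label-set sizes are comparable."""
    ent = -np.sum(probs * np.log(probs + eps), axis=-1)
    return ent / np.log(probs.shape[-1])

def select_multi_task(task_probs, k):
    """Pick the k pool examples whose mean per-task uncertainty is highest.

    task_probs: dict mapping task name -> array of shape (n_examples, n_classes)
                holding the joint model's predictive distributions on the
                unlabeled pool.
    Returns the indices of the selected examples.
    """
    per_task = np.stack([normalized_entropy(p) for p in task_probs.values()])
    joint_score = per_task.mean(axis=0)   # simple average over tasks
    return np.argsort(-joint_score)[:k]   # k most uncertain examples

if __name__ == "__main__":
    # Synthetic unlabeled pool with two hypothetical tasks of different sizes.
    rng = np.random.default_rng(0)
    pool = {
        "task_a": rng.dirichlet(np.ones(5), size=200),   # e.g., a 5-label task
        "task_b": rng.dirichlet(np.ones(40), size=200),  # e.g., a 40-label task
    }
    # Indices of examples to send for annotation under both schemes.
    print(select_multi_task(pool, k=10))
```

Normalizing each task's entropy by log of its label-set size keeps tasks with very different output spaces on a comparable scale before averaging; this is one simple design choice among the many multi-task selection criteria the paper compares.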
