Paper Title

An Empirical Study of Using Pre-trained BERT Models for Vietnamese Relation Extraction Task at VLSP 2020

Paper Authors

Minh, Pham Quang Nhat

Paper Abstract

In this paper, we present an empirical study of using pre-trained BERT models for the relation extraction task at the VLSP 2020 Evaluation Campaign. We applied two state-of-the-art BERT-based models: R-BERT and a BERT model with entity starts. For each model, we compared two pre-trained BERT models: FPTAI/vibert and NlpHUST/vibert4news. We found that the NlpHUST/vibert4news model significantly outperforms FPTAI/vibert on the Vietnamese relation extraction task. Finally, we proposed an ensemble model that combines R-BERT and BERT with entity starts. Our proposed ensemble model slightly improves over the two single models on both the development data and the test data provided by the task organizers.
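
The "BERT with entity starts" formulation mentioned in the abstract can be illustrated in a few lines: the input sentence is wrapped with entity marker tokens, the marked sentence is encoded with a pre-trained Vietnamese BERT, and the relation label is predicted from the encoder states at the two entity-start markers. The sketch below is a minimal illustration under assumptions not stated in the abstract: it uses the Hugging Face transformers library, the checkpoint IDs, the [E1]/[E2] marker tokens, the label count, and the example sentence are all placeholders, and it is not the authors' implementation.

```python
# Minimal sketch of a "BERT with entity starts" relation classifier.
# Assumptions (not from the paper): Hugging Face `transformers` is used,
# the checkpoint ID below is a guess at the public NlpHUST/vibert4news
# model, and the marker tokens / number of relation labels are placeholders.
import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

MODEL_ID = "NlpHUST/vibert4news-base-cased"      # assumed checkpoint ID
ENTITY_MARKERS = ["[E1]", "[/E1]", "[E2]", "[/E2]"]  # hypothetical marker tokens
NUM_RELATIONS = 5                                 # placeholder label count

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
tokenizer.add_special_tokens({"additional_special_tokens": ENTITY_MARKERS})


class EntityStartClassifier(nn.Module):
    """Predicts the relation from the hidden states at the [E1] and [E2] positions."""

    def __init__(self, model_id: str, num_labels: int):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(model_id)
        # Account for the newly added marker tokens.
        self.encoder.resize_token_embeddings(len(tokenizer))
        hidden = self.encoder.config.hidden_size
        self.classifier = nn.Linear(2 * hidden, num_labels)

    def forward(self, input_ids, attention_mask, e1_pos, e2_pos):
        out = self.encoder(input_ids=input_ids, attention_mask=attention_mask)
        hidden_states = out.last_hidden_state          # (batch, seq_len, hidden)
        batch_idx = torch.arange(input_ids.size(0))
        # Representations of the start-marker token of each entity.
        e1_repr = hidden_states[batch_idx, e1_pos]     # (batch, hidden)
        e2_repr = hidden_states[batch_idx, e2_pos]     # (batch, hidden)
        return self.classifier(torch.cat([e1_repr, e2_repr], dim=-1))


# Usage example with one marked Vietnamese sentence (illustrative only).
text = "[E1] Hà Nội [/E1] là thủ đô của [E2] Việt Nam [/E2] ."
enc = tokenizer(text, return_tensors="pt")
tokens = enc.input_ids[0].tolist()
e1_pos = torch.tensor([tokens.index(tokenizer.convert_tokens_to_ids("[E1]"))])
e2_pos = torch.tensor([tokens.index(tokenizer.convert_tokens_to_ids("[E2]"))])

model = EntityStartClassifier(MODEL_ID, NUM_RELATIONS)
logits = model(enc.input_ids, enc.attention_mask, e1_pos, e2_pos)  # (1, NUM_RELATIONS)
```

R-BERT differs from this sketch mainly in how the entity representations are pooled (averaging over each entity span and combining them with the [CLS] state), and an ensemble such as the one described in the abstract could, for example, average the predicted probabilities of the two single models.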
