Paper Title
An Empirical Study of Pre-trained Transformers for Arabic Information Extraction
Paper Authors
Paper Abstract
Multilingual pre-trained Transformers, such as mBERT (Devlin et al., 2019) and XLM-RoBERTa (Conneau et al., 2020a), have been shown to enable effective cross-lingual zero-shot transfer. However, their performance on Arabic information extraction (IE) tasks is not very well studied. In this paper, we pre-train a customized bilingual BERT, dubbed GigaBERT, that is designed specifically for Arabic NLP and English-to-Arabic zero-shot transfer learning. We study GigaBERT's effectiveness on zero-shot transfer across four IE tasks: named entity recognition, part-of-speech tagging, argument role labeling, and relation extraction. Our best model significantly outperforms mBERT, XLM-RoBERTa, and AraBERT (Antoun et al., 2020) in both the supervised and zero-shot transfer settings. We have made our pre-trained models publicly available at https://github.com/lanwuwei/GigaBERT.