Paper Title
Encoder-Decoder Models Can Benefit from Pre-trained Masked Language Models in Grammatical Error Correction
Paper Authors
Paper Abstract
This paper investigates how to effectively incorporate a pre-trained masked language model (MLM), such as BERT, into an encoder-decoder (EncDec) model for grammatical error correction (GEC). The answer to this question is not as straightforward as one might expect, because the common previous methods for incorporating an MLM into an EncDec model have potential drawbacks when applied to GEC. For example, the distribution of the inputs to a GEC model can differ considerably (erroneous, clumsy, etc.) from that of the corpora used for pre-training MLMs; however, this issue is not addressed by the previous methods. Our experiments show that our proposed method, in which we first fine-tune an MLM on a given GEC corpus and then use the output of the fine-tuned MLM as additional features in the GEC model, maximizes the benefit of the MLM. The best-performing model achieves state-of-the-art performance on the BEA-2019 and CoNLL-2014 benchmarks. Our code is publicly available at: https://github.com/kanekomasahiro/bert-gec.
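A minimal sketch, assuming PyTorch and the Hugging Face transformers library, of the general idea the abstract describes: a BERT-style MLM is first fine-tuned on the GEC data, then frozen, and its hidden states are passed to the EncDec GEC model as additional features. The fusion module, model names, and dimensions below are illustrative placeholders rather than the authors' released implementation (see the repository linked above for that).

```python
# Sketch only: fuse hidden states of a (fine-tuned) masked LM with the
# representations of a GEC encoder as extra features.
# Assumes `torch` and `transformers`; names like MLMFusedEncoderLayer are hypothetical.
import torch
import torch.nn as nn
from transformers import BertModel, BertTokenizerFast

class MLMFusedEncoderLayer(nn.Module):
    """Gated fusion of EncDec encoder states with frozen MLM hidden states."""
    def __init__(self, d_model: int, mlm_dim: int = 768):
        super().__init__()
        self.proj = nn.Linear(mlm_dim, d_model)      # map MLM states into the EncDec space
        self.gate = nn.Linear(2 * d_model, d_model)  # simple gating; one of several possible fusions

    def forward(self, enc_states: torch.Tensor, mlm_states: torch.Tensor) -> torch.Tensor:
        mlm_feat = self.proj(mlm_states)
        gate = torch.sigmoid(self.gate(torch.cat([enc_states, mlm_feat], dim=-1)))
        return enc_states + gate * mlm_feat

# In the proposed setup, the MLM would first be fine-tuned on the GEC corpus
# (erroneous source sentences) with the masked-LM objective; here we simply
# load a base checkpoint as a stand-in and keep it frozen.
tokenizer = BertTokenizerFast.from_pretrained("bert-base-cased")
mlm = BertModel.from_pretrained("bert-base-cased")
mlm.eval()

src = "She go to school yesterday ."
batch = tokenizer(src, return_tensors="pt")
with torch.no_grad():
    mlm_states = mlm(**batch).last_hidden_state      # (1, seq_len, 768)

# `enc_states` stands in for the GEC encoder's representations of the same
# tokens; in practice the MLM subwords must be aligned with the EncDec tokens.
enc_states = torch.zeros(1, mlm_states.size(1), 512)
fusion = MLMFusedEncoderLayer(d_model=512)
features = fusion(enc_states, mlm_states)            # additional features for the EncDec model
print(features.shape)
```

The fine-tuning step matters because the MLM's pre-training corpora contain mostly well-formed text, whereas GEC inputs are erroneous; adapting the MLM to the GEC source side before extracting features is the core point the abstract makes.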