Paper Title
Neural Grapheme-to-Phoneme Conversion with Pre-trained Grapheme Models
Paper Authors
Paper Abstract
Neural network models have achieved state-of-the-art performance on grapheme-to-phoneme (G2P) conversion. However, their performance relies on large-scale pronunciation dictionaries, which may not be available for many languages. Inspired by the success of the pre-trained language model BERT, this paper proposes a pre-trained grapheme model called grapheme BERT (GBERT), which is built by self-supervised training on a large, language-specific word list with only grapheme information. Furthermore, two approaches are developed to incorporate GBERT into the state-of-the-art Transformer-based G2P model, i.e., fine-tuning GBERT or fusing GBERT into the Transformer model by attention. Experimental results on the Dutch, Serbo-Croatian, Bulgarian and Korean datasets of the SIGMORPHON 2021 G2P task confirm the effectiveness of our GBERT-based G2P models under both medium-resource and low-resource data conditions.
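The "fusing GBERT into the Transformer model by attention" idea can be illustrated with a minimal sketch. This is not the paper's released implementation; the module name, dimensions, and placement of the fusion layer are assumptions made only for illustration, with GBERT hidden states treated as an extra memory that the G2P encoder attends over.

    # Minimal sketch (assumed, not the authors' code): fusing pre-trained
    # grapheme representations into a Transformer G2P encoder via cross-attention.
    import torch
    import torch.nn as nn


    class GraphemeFusionLayer(nn.Module):
        """Fuses GBERT hidden states into G2P encoder states with multi-head attention."""

        def __init__(self, d_model: int = 256, d_gbert: int = 768, n_heads: int = 4):
            super().__init__()
            # Project GBERT's hidden size (assumed 768) down to the G2P model dimension.
            self.project = nn.Linear(d_gbert, d_model)
            self.cross_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
            self.norm = nn.LayerNorm(d_model)

        def forward(self, enc_states: torch.Tensor, gbert_states: torch.Tensor) -> torch.Tensor:
            # enc_states:   (batch, src_len, d_model)  -- Transformer G2P encoder output
            # gbert_states: (batch, src_len, d_gbert)  -- hidden states from pre-trained GBERT
            kv = self.project(gbert_states)
            fused, _ = self.cross_attn(query=enc_states, key=kv, value=kv)
            # Residual connection plus layer normalization, as in standard Transformer sub-layers.
            return self.norm(enc_states + fused)

Under these assumptions, the layer lets each grapheme position in the G2P encoder attend to the contextual grapheme representations produced by GBERT, so pronunciation prediction can benefit from representations pre-trained on a plain word list without phoneme labels.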