Paper Title
Towards Fine-tuning Pre-trained Language Models with Integer Forward and Backward Propagation
Paper Authors
Paper Abstract
The large number of parameters of some prominent language models, such as BERT, makes their fine-tuning on downstream tasks computationally intensive and energy hungry. Previously, researchers focused on lower bit-width integer data types for the forward propagation of language models to save memory and computation. For the backward propagation, however, only the 16-bit floating-point data type has been used in the fine-tuning of BERT. In this work, we use integer arithmetic for both forward and backward propagation in the fine-tuning of BERT. We study the effect of varying the integer bit-width on the model's metric performance. Our integer fine-tuning uses integer arithmetic to perform the forward propagation and gradient computation of the linear, layer-norm, and embedding layers of BERT. We fine-tune BERT with our integer training method on SQuAD v1.1, SQuAD v2.0, and the GLUE benchmark. We demonstrate that the metric performance of fine-tuning BERT with 16-bit integers matches both the 16-bit and 32-bit floating-point baselines. Furthermore, with the faster and more memory-efficient 8-bit integer data type, integer fine-tuning of BERT loses an average of 3.1 points compared to the FP32 baseline.
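To illustrate the idea of integer forward and backward propagation for a linear layer, here is a minimal NumPy sketch. It is not the paper's implementation: the symmetric per-tensor quantization, the `quantize`, `linear_forward_int`, and `linear_backward_int` names, the omission of a bias term, and the scale handling are all assumptions made for illustration; the paper additionally covers layer-norm and embedding layers and configurable bit-widths.

```python
# Sketch only: integer-arithmetic forward and backward passes for a linear layer.
# Symmetric per-tensor quantization and the chosen bit-width are assumptions,
# not details taken from the paper.
import numpy as np

def quantize(x, bits=8):
    """Quantize a float tensor to signed integers with a per-tensor scale (assumed scheme)."""
    qmax = 2 ** (bits - 1) - 1
    scale = np.max(np.abs(x)) / qmax + 1e-12          # avoid division by zero
    q = np.clip(np.round(x / scale), -qmax, qmax).astype(np.int32)
    return q, scale

def linear_forward_int(x, w, bits=8):
    """Compute y = x @ w.T with an integer matmul, then dequantize the accumulator."""
    qx, sx = quantize(x, bits)
    qw, sw = quantize(w, bits)
    acc = qx @ qw.T                                   # integer accumulation
    return acc * (sx * sw), (qx, sx, qw, sw)

def linear_backward_int(grad_y, cache, bits=8):
    """Compute dL/dx and dL/dw, again performing the matmuls on quantized operands."""
    qx, sx, qw, sw = cache
    qg, sg = quantize(grad_y, bits)
    grad_x = (qg @ qw) * (sg * sw)                    # dL/dx = dL/dy @ w
    grad_w = (qg.T @ qx) * (sg * sx)                  # dL/dw = dL/dy.T @ x
    return grad_x, grad_w

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    x = rng.standard_normal((4, 16)).astype(np.float32)   # batch of activations
    w = rng.standard_normal((8, 16)).astype(np.float32)   # weight matrix
    y, cache = linear_forward_int(x, w, bits=8)
    gx, gw = linear_backward_int(np.ones_like(y), cache, bits=8)
    print(y.shape, gx.shape, gw.shape)                # (4, 8) (4, 16) (8, 16)
```

Raising `bits` from 8 to 16 in this sketch mirrors the trade-off studied in the paper: wider integers track the floating-point result more closely, while 8-bit integers are faster and more memory efficient at some cost in accuracy.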