Paper Title

TwinBERT: Distilling Knowledge to Twin-Structured BERT Models for Efficient Retrieval

Authors

Wenhao Lu, Jian Jiao, Ruofei Zhang

Abstract

Pre-trained language models like BERT have achieved great success in a wide variety of NLP tasks, but their superior performance comes with a high demand for computational resources, which hinders their application in low-latency IR systems. We present the TwinBERT model for effective and efficient retrieval, which has twin-structured BERT-like encoders to represent the query and document respectively, and a crossing layer to combine the two embeddings and produce a similarity score. Unlike BERT, where the two input sentences are concatenated and encoded together, TwinBERT decouples them during encoding and produces the query and document embeddings independently, which allows document embeddings to be pre-computed offline and cached in memory. Consequently, the only computation left at run time is the query encoding and the query-document crossing. This single change saves a large amount of computation time and resources, and therefore significantly improves serving efficiency. Moreover, a few well-designed network layers and training strategies are proposed to further reduce computational cost while keeping performance as remarkable as that of the BERT model. Lastly, we develop two versions of TwinBERT for retrieval and relevance tasks respectively, and both achieve performance close to or on par with the BERT-Base model. The model was trained following the teacher-student framework and evaluated with data from one of the major search engines. Experimental results showed that inference time was significantly reduced and, for the first time, kept around 20 ms on CPU, while the performance gain from the fine-tuned BERT-Base model was mostly retained. Integrating the models into production systems also demonstrated remarkable improvements on relevance metrics with negligible impact on latency.
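To make the serving flow concrete, below is a minimal PyTorch sketch of the twin-encoder design the abstract describes: two independent BERT-like encoders, document embeddings pre-computed once offline and cached, and a crossing layer applied to the query embedding at run time. The names `TwinEncoderScorer` and `MeanPoolEncoder`, the mean-pooling stand-in encoder, and the cosine crossing layer are illustrative assumptions for this sketch, not the authors' released implementation.

```python
# Minimal sketch, assuming a PyTorch setup. TwinEncoderScorer, MeanPoolEncoder,
# and the cosine crossing layer are illustrative stand-ins, not the paper's code.
import torch
import torch.nn as nn
import torch.nn.functional as F


class MeanPoolEncoder(nn.Module):
    """Stand-in for a BERT-like encoder: token embeddings + mean pooling."""

    def __init__(self, vocab_size: int = 30522, dim: int = 128):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, dim)

    def forward(self, token_ids: torch.Tensor) -> torch.Tensor:
        # (batch, seq_len) -> (batch, dim)
        return self.emb(token_ids).mean(dim=1)


class TwinEncoderScorer(nn.Module):
    """Twin encoders with a cosine crossing layer.

    Documents are encoded once offline and cached; only the query encoder
    and the crossing layer run at serving time.
    """

    def __init__(self, query_encoder: nn.Module, doc_encoder: nn.Module):
        super().__init__()
        self.query_encoder = query_encoder
        self.doc_encoder = doc_encoder  # same architecture, separate weights

    @torch.no_grad()
    def encode_docs(self, doc_token_ids: torch.Tensor) -> torch.Tensor:
        # Offline step: pre-compute document embeddings to cache in memory.
        return self.doc_encoder(doc_token_ids)

    def forward(self, query_token_ids: torch.Tensor,
                cached_doc_emb: torch.Tensor) -> torch.Tensor:
        # Run-time step: encode the query, then cross with cached embeddings.
        q = self.query_encoder(query_token_ids)
        return F.cosine_similarity(q, cached_doc_emb, dim=-1)


model = TwinEncoderScorer(MeanPoolEncoder(), MeanPoolEncoder())
doc_emb = model.encode_docs(torch.randint(0, 30522, (4, 32)))  # offline
scores = model(torch.randint(0, 30522, (4, 8)), doc_emb)       # serving time
print(scores.shape)  # torch.Size([4])
```

Caching `doc_emb` for the whole corpus is what removes the document side from the serving path: at query time, only the lightweight query forward pass and the crossing remain.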
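The abstract also notes that the model is trained under the teacher-student framework, with a fine-tuned BERT model as the teacher. A common way to realize this is soft-label distillation; the sketch below assumes that form, and the `temperature` parameter and the t² gradient rescaling are standard distillation conventions rather than details taken from the paper.

```python
# A hedged sketch of a soft-label distillation loss; details are assumptions,
# not the paper's exact training objective.
import torch
import torch.nn.functional as F


def distillation_loss(student_logits: torch.Tensor,
                      teacher_logits: torch.Tensor,
                      temperature: float = 2.0) -> torch.Tensor:
    """Soft cross entropy between teacher and student score distributions."""
    t = temperature
    soft_targets = F.softmax(teacher_logits / t, dim=-1)   # teacher "soft labels"
    log_probs = F.log_softmax(student_logits / t, dim=-1)
    # Multiply by t^2 so gradient magnitudes stay comparable across temperatures.
    return -(soft_targets * log_probs).sum(dim=-1).mean() * (t * t)


# Example: binary relevant / not-relevant logits for a batch of 4 query-doc pairs.
teacher = torch.randn(4, 2)                        # from the fine-tuned BERT teacher
student = torch.randn(4, 2, requires_grad=True)    # from the twin-structured student
loss = distillation_loss(student, teacher)
loss.backward()
```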
