Paper Title

Pre-train a Discriminative Text Encoder for Dense Retrieval via Contrastive Span Prediction

Paper Authors

Xinyu Ma, Jiafeng Guo, Ruqing Zhang, Yixing Fan, Xueqi Cheng

Paper Abstract

Dense retrieval has shown promising results in many information retrieval (IR) related tasks, whose foundation is high-quality text representation learning for effective search. Some recent studies have shown that autoencoder-based language models are able to boost the dense retrieval performance using a weak decoder. However, we argue that 1) it is not discriminative to decode all the input texts and, 2) even a weak decoder has the bypass effect on the encoder. Therefore, in this work, we introduce a novel contrastive span prediction task to pre-train the encoder alone, but still retain the bottleneck ability of the autoencoder. The key idea is to force the encoder to generate the text representation close to its own random spans while far away from others using a group-wise contrastive loss. In this way, we can 1) learn discriminative text representations efficiently with the group-wise contrastive learning over spans and, 2) avoid the bypass effect of the decoder thoroughly. Comprehensive experiments over publicly available retrieval benchmark datasets show that our approach can outperform existing pre-training methods for dense retrieval significantly.
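
The core mechanism described in the abstract is a group-wise contrastive loss that pulls a text's representation toward the representations of its own random spans and pushes it away from spans sampled from other texts. The sketch below is a minimal, hypothetical PyTorch illustration of such a loss; the function name, tensor shapes, span sampling, and temperature are assumptions for illustration and do not reproduce the paper's exact implementation.

```python
import torch
import torch.nn.functional as F

def group_wise_contrastive_loss(text_emb, span_emb, group_ids, temperature=0.07):
    """Illustrative group-wise contrastive loss over spans (not the paper's exact code).

    text_emb:  (B, d) float tensor, one representation per input text.
    span_emb:  (S, d) float tensor, representations of random spans.
    group_ids: (S,)   long tensor, index of the source text of each span.
    """
    text_emb = F.normalize(text_emb, dim=-1)
    span_emb = F.normalize(span_emb, dim=-1)

    # Similarity between every text representation and every span.
    logits = text_emb @ span_emb.t() / temperature  # (B, S)

    # A span is a positive for the text it was sampled from,
    # and an in-batch negative for every other text.
    positives = group_ids.unsqueeze(0) == torch.arange(
        text_emb.size(0), device=text_emb.device).unsqueeze(1)  # (B, S)

    log_prob = F.log_softmax(logits, dim=-1)
    # Average the log-likelihood over each text's own group of spans.
    loss = -(log_prob * positives).sum(-1) / positives.sum(-1).clamp(min=1)
    return loss.mean()

if __name__ == "__main__":
    # Toy example: 2 texts, 3 random spans per text, 16-dim embeddings.
    torch.manual_seed(0)
    texts = torch.randn(2, 16)
    spans = torch.randn(6, 16)
    groups = torch.tensor([0, 0, 0, 1, 1, 1])
    print(group_wise_contrastive_loss(texts, spans, groups))
```

In this sketch, each span serves as a positive for its source text and as an in-batch negative for every other text; averaging the log-likelihood over a text's own span group is one simple way to realize the "group-wise" aggregation the abstract refers to.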
