Paper Title

Approximate Nearest Neighbor Negative Contrastive Learning for Dense Text Retrieval

Authors

Lee Xiong, Chenyan Xiong, Ye Li, Kwok-Fung Tang, Jialin Liu, Paul Bennett, Junaid Ahmed, Arnold Overwijk

Abstract

Conducting text retrieval in a dense learned representation space has many intriguing advantages over sparse retrieval. Yet the effectiveness of dense retrieval (DR) often requires combination with sparse retrieval. In this paper, we identify that the main bottleneck is in the training mechanisms, where the negative instances used in training are not representative of the irrelevant documents in testing. This paper presents Approximate nearest neighbor Negative Contrastive Estimation (ANCE), a training mechanism that constructs negatives from an Approximate Nearest Neighbor (ANN) index of the corpus, which is parallelly updated with the learning process to select more realistic negative training instances. This fundamentally resolves the discrepancy between the data distribution used in the training and testing of DR. In our experiments, ANCE boosts the BERT-Siamese DR model to outperform all competitive dense and sparse retrieval baselines. It nearly matches the accuracy of sparse-retrieval-and-BERT-reranking using dot-product in the ANCE-learned representation space and provides almost 100x speed-up.
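
The abstract's key mechanism, mining negatives from an ANN index of the corpus that is rebuilt as the encoder evolves, can be illustrated with a small self-contained sketch. Everything below is an illustrative assumption rather than the authors' implementation: a toy mean-pooling linear encoder stands in for BERT-Siamese, brute-force dot-product search stands in for a real ANN index (e.g., FAISS), and all names and hyperparameters are made up.

```python
# Toy sketch of the ANCE training loop: dot-product retrieval, negatives mined
# from the current index (the hardest documents under the *current* encoder),
# and a periodic index refresh so negatives track the learned space. The paper
# performs this refresh asynchronously, in parallel with training.
import numpy as np

rng = np.random.default_rng(0)
VOCAB, DIM, N_DOCS, STEPS, REFRESH, K_NEG, LR = 50, 16, 200, 500, 100, 4, 0.05

W = rng.normal(scale=0.1, size=(VOCAB, DIM))     # toy encoder: token embeddings
docs = rng.integers(0, VOCAB, size=(N_DOCS, 8))  # documents as token-id arrays

def encode(tokens, W):
    """Mean-pool token embeddings and L2-normalize: a minimal dual encoder."""
    v = W[tokens].mean(axis=0)
    return v / (np.linalg.norm(v) + 1e-9)

def build_index(W):
    """Re-encode the whole corpus; stands in for the (async) ANN index refresh."""
    return np.stack([encode(d, W) for d in docs])

index = build_index(W)
for step in range(STEPS):
    # Stand-in for a (query, relevant document) training pair: use the
    # positive document's own encoding as the query.
    pos_id = int(rng.integers(N_DOCS))
    q = encode(docs[pos_id], W)

    # ANCE's core idea: negatives are the top-ranked *irrelevant* documents
    # under the current representation space, not random or BM25 negatives.
    ranked = np.argsort(-(index @ q))                 # dot-product retrieval
    negs = [int(i) for i in ranked if i != pos_id][:K_NEG]

    # Softmax contrastive loss over the positive and the ANN-mined negatives;
    # the gradient w.r.t. q is applied to the token embeddings composing q
    # (a crude step that ignores the normalization Jacobian).
    cand = [pos_id] + negs
    logits = np.array([index[i] @ q for i in cand])
    p = np.exp(logits - logits.max())
    p /= p.sum()
    grad = -(index[pos_id] - sum(p[j] * index[c] for j, c in enumerate(cand)))
    W[docs[pos_id]] -= LR * grad / docs.shape[1]

    if (step + 1) % REFRESH == 0:   # keep negatives in sync with the encoder
        index = build_index(W)
```

The refresh interval matters: rebuilding the index too rarely lets the mined negatives go stale relative to the encoder, which is exactly the train/test distribution gap the abstract says ANCE is designed to close.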
