Paper Title
Out-of-Domain Semantics to the Rescue! Zero-Shot Hybrid Retrieval Models
Paper Authors
Paper Abstract
Pre-trained language model (e.g., BERT) based deep retrieval models have achieved superior performance over lexical retrieval models (e.g., BM25) on many passage retrieval tasks. However, limited work has been done on generalizing a deep retrieval model to other tasks and domains. In this work, we carefully select five datasets, including two in-domain datasets and three out-of-domain datasets with different levels of domain shift, and study the generalization of a deep model in a zero-shot setting. Our findings show that the performance of a deep retrieval model deteriorates significantly when the target domain is very different from the source domain the model was trained on. In contrast, lexical models are more robust across domains. We thus propose a simple yet effective framework to integrate lexical and deep retrieval models. Our experiments demonstrate that these two models are complementary, even when the deep model is weaker in the out-of-domain setting. The hybrid model obtains an average relative gain of 20.4% over the deep retrieval model, and an average of 9.54% over the lexical model, on the three out-of-domain datasets.
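The abstract describes fusing a lexical ranker and a deep ranker but does not specify the fusion mechanism. Below is a minimal sketch of one common way to build such a hybrid: min-max normalize each model's scores so they are on a comparable scale, then linearly interpolate them. The weight `lam`, the normalization choice, and all function names here are illustrative assumptions, not the paper's actual implementation.

```python
# A minimal sketch of hybrid lexical + deep retrieval via linear score
# interpolation. All names and the default weight are hypothetical; the
# paper may use a different fusion scheme.
from typing import Dict


def min_max_normalize(scores: Dict[str, float]) -> Dict[str, float]:
    """Rescale scores to [0, 1] so lexical and deep scores are comparable."""
    lo, hi = min(scores.values()), max(scores.values())
    if hi == lo:
        return {doc_id: 0.0 for doc_id in scores}
    return {doc_id: (s - lo) / (hi - lo) for doc_id, s in scores.items()}


def hybrid_scores(
    bm25_scores: Dict[str, float],  # lexical scores, e.g. from BM25
    deep_scores: Dict[str, float],  # deep scores, e.g. dual-encoder dot products
    lam: float = 0.5,               # interpolation weight (assumed default)
) -> Dict[str, float]:
    """Fuse the two rankers: lam * lexical + (1 - lam) * deep.

    A document retrieved by only one model receives 0 from the other,
    a common convention when merging two candidate lists.
    """
    bm25 = min_max_normalize(bm25_scores)
    deep = min_max_normalize(deep_scores)
    candidates = set(bm25) | set(deep)
    return {
        doc_id: lam * bm25.get(doc_id, 0.0) + (1 - lam) * deep.get(doc_id, 0.0)
        for doc_id in candidates
    }


if __name__ == "__main__":
    # Toy candidate lists from each retriever, ranked by the fused score.
    bm25 = {"d1": 12.3, "d2": 8.1, "d3": 5.0}
    deep = {"d2": 0.91, "d3": 0.88, "d4": 0.75}
    fused = hybrid_scores(bm25, deep, lam=0.5)
    for doc_id, score in sorted(fused.items(), key=lambda kv: -kv[1]):
        print(doc_id, round(score, 3))
```

In a zero-shot, out-of-domain setting, `lam` cannot be tuned on target-domain labels; one would either fix it on the source domain or use a label-free combination, which is consistent with the abstract's claim that the two models remain complementary even when the deep model is the weaker one.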