Paper Title
Guided Transformer: Leveraging Multiple External Sources for Representation Learning in Conversational Search
Paper Authors
Paper Abstract
Asking clarifying questions in response to ambiguous or faceted queries has been recognized as a useful technique for various information retrieval systems, especially conversational search systems with limited bandwidth interfaces. Analyzing and generating clarifying questions have been studied recently but the accurate utilization of user responses to clarifying questions has been relatively less explored. In this paper, we enrich the representations learned by Transformer networks using a novel attention mechanism from external information sources that weights each term in the conversation. We evaluate this Guided Transformer model in a conversational search scenario that includes clarifying questions. In our experiments, we use two separate external sources, including the top retrieved documents and a set of different possible clarifying questions for the query. We implement the proposed representation learning model for two downstream tasks in conversational search: document retrieval and next clarifying question selection. Our experiments use a public dataset for search clarification and demonstrate significant improvements compared to competitive baselines.
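To make the high-level idea in the abstract concrete, the following is a minimal PyTorch sketch of conversation representations being enriched by cross-attention over encoded external sources (e.g., top retrieved documents or candidate clarifying questions). All module names, the gating scheme, and the shapes are illustrative assumptions for exposition, not the paper's actual Guided Transformer architecture.

```python
# Hypothetical sketch: attention from conversation tokens to external sources.
# Names (ExternalSourceAttention, GuidedRepresentationSketch, mix) are invented
# for illustration and do not come from the paper.
import torch
import torch.nn as nn


class ExternalSourceAttention(nn.Module):
    """Cross-attention from conversation token representations to one external source."""

    def __init__(self, d_model: int, n_heads: int = 8):
        super().__init__()
        self.cross_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.norm = nn.LayerNorm(d_model)

    def forward(self, conv_repr: torch.Tensor, source_repr: torch.Tensor) -> torch.Tensor:
        # conv_repr:   (batch, conv_len, d_model) Transformer outputs for the conversation
        # source_repr: (batch, src_len, d_model)  encoded external source text
        attended, _ = self.cross_attn(query=conv_repr, key=source_repr, value=source_repr)
        # Residual connection + layer norm, as in standard Transformer blocks
        return self.norm(conv_repr + attended)


class GuidedRepresentationSketch(nn.Module):
    """Mixes the per-source enriched representations with learned scalar weights."""

    def __init__(self, d_model: int, n_sources: int):
        super().__init__()
        self.source_attns = nn.ModuleList(
            [ExternalSourceAttention(d_model) for _ in range(n_sources)]
        )
        # One scalar mixing weight per external source (assumed gating scheme)
        self.mix = nn.Parameter(torch.zeros(n_sources))

    def forward(self, conv_repr: torch.Tensor, source_reprs: list) -> torch.Tensor:
        enriched = torch.stack(
            [attn(conv_repr, src) for attn, src in zip(self.source_attns, source_reprs)],
            dim=0,
        )  # (n_sources, batch, conv_len, d_model)
        weights = torch.softmax(self.mix, dim=0).view(-1, 1, 1, 1)
        return (weights * enriched).sum(dim=0)


if __name__ == "__main__":
    batch, conv_len, src_len, d_model = 2, 16, 32, 64
    conv = torch.randn(batch, conv_len, d_model)
    docs = torch.randn(batch, src_len, d_model)       # e.g., top retrieved documents
    questions = torch.randn(batch, src_len, d_model)  # e.g., candidate clarifying questions
    model = GuidedRepresentationSketch(d_model, n_sources=2)
    out = model(conv, [docs, questions])
    print(out.shape)  # torch.Size([2, 16, 64])
```

The enriched conversation representation produced this way could then feed a downstream scorer for document retrieval or next clarifying question selection; how the sources are encoded and combined in the actual model is described in the paper itself.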