Paper Title

Reducing the Vision and Language Bias for Temporal Sentence Grounding

Authors

Daizong Liu, Xiaoye Qu, Wei Hu

Abstract

Temporal sentence grounding (TSG) is an important yet challenging task in multimedia information retrieval. Although previous TSG methods have achieved decent performance, they tend to capture the selection biases of frequently appearing video-query pairs in the dataset rather than exhibit robust multimodal reasoning abilities, especially for rarely appearing pairs. In this paper, we study the above issue of selection biases and accordingly propose a Debiasing-TSG (D-TSG) model to filter and remove the negative biases in both the vision and language modalities, enhancing the model's generalization ability. Specifically, we propose to alleviate the issue from two perspectives: 1) Feature distillation. We build a multi-modal debiasing branch to first capture the vision and language biases, and then apply a bias identification module to explicitly recognize the true negative biases and remove them from the benign multi-modal representations. 2) Contrastive sample generation. We construct two types of negative samples to force the model to accurately learn the aligned multi-modal semantics and perform complete semantic reasoning. We apply the proposed model to both commonly and rarely appearing TSG cases, and demonstrate its effectiveness by achieving state-of-the-art performance on three benchmark datasets (ActivityNet Caption, TACoS, and Charades-STA).
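The feature-distillation idea above can be illustrated with a minimal numerical sketch. This is not the paper's implementation: the sigmoid gate, the threshold, and the function name `remove_negative_bias` are all illustrative assumptions; it only shows the general pattern of capturing bias features in a separate branch, identifying the "true negative" dimensions, and removing them from the fused representation.

```python
import numpy as np

def remove_negative_bias(fused, bias, threshold=0.5):
    """Hypothetical sketch of a bias identification module: dimensions
    where the captured bias is strong (sigmoid gate > threshold) are
    treated as true negative biases and subtracted from the benign
    multimodal representation; weak-bias dimensions are left intact."""
    gate = 1.0 / (1.0 + np.exp(-bias))       # sigmoid gate over captured bias features
    mask = (gate > threshold).astype(float)  # identify the "true negative" dimensions
    return fused - mask * bias               # remove them from the fused representation

# toy fused multimodal features and bias features from a debiasing branch
fused = np.array([0.9, 0.2, 0.7, 0.1])
bias = np.array([0.8, -2.0, 1.5, -1.0])
debiased = remove_negative_bias(fused, bias)  # only dims 0 and 2 are debiased
```

In a real model the gate and threshold would be learned jointly with the debiasing branch rather than fixed by hand; the sketch only conveys the filter-then-remove structure.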
