Paper Title


History for Visual Dialog: Do we really need it?

Paper Authors

Shubham Agarwal, Trung Bui, Joon-Young Lee, Ioannis Konstas, Verena Rieser

Paper Abstract


Visual Dialog involves "understanding" the dialog history (what has been discussed previously) and the current question (what is asked), in addition to grounding information in the image, to generate the correct response. In this paper, we show that co-attention models which explicitly encode dialog history outperform models that don't, achieving state-of-the-art performance (72% NDCG on val set). However, we also expose shortcomings of the crowd-sourcing dataset collection procedure by showing that history is indeed only required for a small amount of the data and that the current evaluation metric encourages generic replies. To that end, we propose a challenging subset (VisDialConv) of the VisDial val set and provide a benchmark of 63% NDCG.
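For context on the metric the abstract reports, below is a minimal sketch of how NDCG (Normalized Discounted Cumulative Gain) can be computed over a ranked candidate-answer list with dense relevance scores. This is an illustrative implementation, not the authors' evaluation code; the choice of cutoff `k` as the number of candidates with nonzero relevance is an assumption modeled on VisDial-style dense-annotation evaluation.

```python
import math

def ndcg(ranked_relevances, k=None):
    """NDCG over candidate answers, in the order the model ranked them.

    ranked_relevances: relevance score of each candidate (higher = better
        answer), listed in the model's ranking order.
    k: rank cutoff; if None, we assume (as in VisDial-style dense
        evaluation) the number of candidates with nonzero relevance.
    """
    if k is None:
        k = sum(1 for r in ranked_relevances if r > 0)

    def dcg(rels):
        # Discounted cumulative gain: relevance discounted by log2 of rank.
        return sum(r / math.log2(i + 2) for i, r in enumerate(rels[:k]))

    # Normalize by the DCG of the ideal (relevance-sorted) ranking.
    ideal = dcg(sorted(ranked_relevances, reverse=True))
    return dcg(ranked_relevances) / ideal if ideal > 0 else 0.0
```

Because the gain is divided by the ideal ranking's DCG, a model that places all relevant answers first scores 1.0; generic but plausible answers can still receive partial relevance credit, which is the behavior the abstract argues the metric encourages.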
