Paper Title
Video Dialog as Conversation about Objects Living in Space-Time
Paper Authors
Paper Abstract
It would be a technological feat to be able to create a system that can hold a meaningful conversation with humans about what they watch. A setup toward that goal is presented as the video dialog task, where the system is asked to generate natural utterances in response to a question in an ongoing dialog. The task poses great visual, linguistic, and reasoning challenges that cannot be easily overcome without an appropriate representation scheme over video and dialog that supports high-level reasoning. To tackle these challenges, we present a new object-centric framework for video dialog that supports neural reasoning, dubbed COST, which stands for Conversation about Objects in Space-Time. Here, the dynamic space-time visual content in videos is first parsed into object trajectories. Given this video abstraction, COST maintains and tracks object-associated dialog states, which are updated upon receiving new questions. Object interactions are dynamically and conditionally inferred for each question, and these interactions serve as the basis for relational reasoning among the objects. COST also maintains a history of previous answers, which allows retrieval of relevant object-centric information to enrich the answer-forming process. Language production then proceeds in a step-wise manner, taking into account the context of the current utterance, the existing dialog, and the current question. We evaluate COST on the DSTC7 and DSTC8 benchmarks, demonstrating its competitiveness against state-of-the-art methods.
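The abstract describes a pipeline in which each object trajectory carries its own dialog state, updated at every turn and queried by the current question. The following is a minimal sketch of that idea, not the authors' implementation: all module names, feature sizes, and the choice of a GRU cell and dot-product attention are assumptions made only for illustration.

```python
# Minimal sketch (assumed design, not the COST paper's code) of per-object dialog
# states that are updated when a new question arrives, with question-conditioned
# attention over objects standing in for relational reasoning / retrieval.
import torch
import torch.nn as nn
import torch.nn.functional as F


class ObjectDialogStateTracker(nn.Module):
    def __init__(self, obj_dim=256, q_dim=256, state_dim=256):
        super().__init__()
        # One shared recurrent cell updates every object's dialog state
        # from (its trajectory feature, the current question encoding).
        self.update = nn.GRUCell(obj_dim + q_dim, state_dim)
        self.query = nn.Linear(q_dim, state_dim)  # question -> attention query

    def forward(self, obj_feats, question, states):
        # obj_feats: (num_objects, obj_dim)   pooled object-trajectory features
        # question:  (q_dim,)                 encoding of the current question
        # states:    (num_objects, state_dim) dialog states carried across turns
        n = obj_feats.size(0)
        q = question.unsqueeze(0).expand(n, -1)
        new_states = self.update(torch.cat([obj_feats, q], dim=-1), states)

        # Question-conditioned attention over object states, summarized into a
        # context vector that would condition the step-wise answer decoder.
        attn = F.softmax(new_states @ self.query(question), dim=0)   # (num_objects,)
        context = (attn.unsqueeze(-1) * new_states).sum(dim=0)       # (state_dim,)
        return new_states, context


if __name__ == "__main__":
    tracker = ObjectDialogStateTracker()
    states = torch.zeros(5, 256)             # 5 tracked objects, zero-initialized states
    for turn in range(3):                     # three dialog turns
        obj_feats = torch.randn(5, 256)       # stand-in object-trajectory features
        question = torch.randn(256)           # stand-in encoded question
        states, context = tracker(obj_feats, question)
        print(turn, context.shape)
```

In this sketch, the recurrence over turns is what lets object-centric information from earlier questions persist into later answers; the real model additionally reasons over inferred object interactions and a history of previous answers, which are omitted here.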