Title
Towards Complex Document Understanding By Discrete Reasoning
Authors
Abstract
Document Visual Question Answering (VQA) aims to understand visually-rich documents in order to answer questions in natural language, and is an emerging research topic in both Natural Language Processing and Computer Vision. In this work, we introduce a new Document VQA dataset, named TAT-DQA, which extends the TAT-QA dataset and consists of 3,067 document pages comprising semi-structured tables and unstructured text, together with 16,558 question-answer pairs. These documents are sampled from real-world financial reports and contain a large amount of numerical content, meaning that discrete reasoning capability is required to answer questions on this dataset. Based on TAT-DQA, we further develop a novel model named MHST that takes into account information from multiple modalities, including text, layout, and visual image, to intelligently address different types of questions with corresponding strategies, i.e., extraction or reasoning. Extensive experiments show that the MHST model significantly outperforms the baseline methods, demonstrating its effectiveness. However, its performance still lags far behind that of expert humans. We expect that our new TAT-DQA dataset will facilitate research on deep understanding of visually-rich documents combining vision and language, especially for scenarios that require discrete reasoning. We also hope the proposed model will inspire researchers to design more advanced Document VQA models in the future. Our dataset will be publicly available for non-commercial use at https://nextplusplus.github.io/TAT-DQA/.