Paper Title
Understanding Long Documents with Different Position-Aware Attentions
Paper Authors
Paper Abstract
Despite several successes in document understanding, the practical task of long document understanding remains largely under-explored, owing to the computational challenges involved and the difficulty of efficiently absorbing long multimodal inputs. Most current transformer-based approaches handle only short documents and rely solely on textual information for attention because of prohibitive computation and memory limits. To address these issues in long document understanding, we explore different approaches to handling 1D and new 2D position-aware attention with an essentially shortened context. Experimental results show that our proposed models have advantages on this task across various evaluation metrics. Furthermore, our model changes only the attention mechanism and can therefore be easily adapted to any transformer-based architecture.
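The abstract does not give the exact formulation of the 2D position-aware attention, so the following is only a minimal sketch of what such a mechanism could look like: a hypothetical `Position2DAttentionBias` module that turns relative x/y distances between token bounding boxes (e.g. from OCR) into learned per-head biases added to the raw attention scores, using T5/LayoutLMv2-style distance bucketing. The module name, bucketing scheme, and coordinate inputs are all assumptions, not the paper's method.

```python
import math

import torch
import torch.nn as nn


class Position2DAttentionBias(nn.Module):
    """Hypothetical 2D position-aware attention bias (sketch, not the paper's method).

    Maps relative x/y distances between tokens to learned per-head biases
    that are added to the attention scores before the softmax, so only the
    attention computation changes, as the abstract describes.
    """

    def __init__(self, num_heads: int, num_buckets: int = 32, max_distance: int = 128):
        super().__init__()
        self.num_buckets = num_buckets
        self.max_distance = max_distance
        # Separate learned bias tables for the horizontal and vertical axes.
        self.bias_x = nn.Embedding(num_buckets, num_heads)
        self.bias_y = nn.Embedding(num_buckets, num_heads)

    def _bucket(self, rel: torch.Tensor) -> torch.Tensor:
        # Map signed relative distances to buckets: half the buckets per sign,
        # exact for small distances, log-spaced up to max_distance beyond that.
        half = self.num_buckets // 2
        sign_offset = (rel > 0).long() * half
        rel = rel.abs().clamp(min=1)
        exact = half // 2
        is_small = rel < exact
        log_ratio = torch.log(rel.float() / exact) / math.log(self.max_distance / exact)
        large = (exact + (log_ratio * (half - exact)).long()).clamp(max=half - 1)
        return sign_offset + torch.where(is_small, rel, large)

    def forward(self, x_pos: torch.Tensor, y_pos: torch.Tensor) -> torch.Tensor:
        # x_pos, y_pos: (batch, seq_len) integer coordinates, e.g. bounding-box centers.
        rel_x = x_pos[:, None, :] - x_pos[:, :, None]  # (B, L, L) pairwise x distances
        rel_y = y_pos[:, None, :] - y_pos[:, :, None]  # (B, L, L) pairwise y distances
        bias = self.bias_x(self._bucket(rel_x)) + self.bias_y(self._bucket(rel_y))
        return bias.permute(0, 3, 1, 2)  # (B, num_heads, L, L), added to QK^T scores
```

Under these assumptions, usage inside any transformer layer would be `scores = scores + Position2DAttentionBias(num_heads)(x_pos, y_pos)` before the softmax, which is consistent with the claim that the model can be dropped into any transformer-based architecture.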