Title
Incremental Neural Coreference Resolution in Constant Memory
Authors
Abstract
We investigate modeling coreference resolution under a fixed memory constraint by extending an incremental clustering algorithm to utilize contextualized encoders and neural components. Given a new sentence, our end-to-end algorithm proposes and scores each mention span against explicit entity representations created from the earlier document context (if any). These spans are then used to update the entities' representations before being forgotten; we only retain a fixed set of salient entities throughout the document. In this work, we successfully convert a high-performing model (Joshi et al., 2020), asymptotically reducing its memory usage to constant space with only a 0.3% relative loss in F1 on OntoNotes 5.0.
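The incremental loop the abstract describes (score each new span against a bounded set of entity representations, merge or create, evict to stay within budget) can be sketched as below. This is a minimal illustration under stated assumptions, not the paper's actual model: `MAX_ENTITIES`, `SIM_THRESHOLD`, cosine similarity in place of the learned pairwise scorer, a running-mean entity update, and least-mentioned eviction are all hypothetical stand-ins.

```python
# Sketch of constant-memory incremental coreference clustering.
# All names and thresholds here are illustrative assumptions,
# not the components of the Joshi et al. (2020)-based model.
import numpy as np

MAX_ENTITIES = 4       # fixed budget of salient entities (assumption)
SIM_THRESHOLD = 0.5    # score above which a span joins an entity (assumption)

class IncrementalClusterer:
    def __init__(self, dim):
        self.dim = dim
        # Bounded list of (entity embedding, mention count) pairs;
        # memory stays O(MAX_ENTITIES * dim) regardless of document length.
        self.entities = []

    def score(self, span_vec, entity_vec):
        # Cosine similarity as a stand-in for the learned span-entity scorer.
        denom = np.linalg.norm(span_vec) * np.linalg.norm(entity_vec)
        return float(span_vec @ entity_vec / denom) if denom else 0.0

    def observe(self, span_vec):
        """Attach a mention span to its best-scoring entity or start a new one.

        The span itself is discarded afterward; only entity state persists.
        """
        if self.entities:
            scores = [self.score(span_vec, emb) for emb, _ in self.entities]
            best = int(np.argmax(scores))
            if scores[best] >= SIM_THRESHOLD:
                emb, n = self.entities[best]
                # Running-mean update of the entity representation.
                self.entities[best] = ((emb * n + span_vec) / (n + 1), n + 1)
                return best
        # No sufficiently similar entity: start a new one, then evict the
        # least-mentioned entity if the fixed budget is exceeded.
        self.entities.append((span_vec.copy(), 1))
        if len(self.entities) > MAX_ENTITIES:
            evict = min(range(len(self.entities)),
                        key=lambda i: self.entities[i][1])
            self.entities.pop(evict)
        return len(self.entities) - 1
```

Processing a document sentence-by-sentence through `observe` keeps memory constant in document length, which is the asymptotic property the abstract claims for the full neural model.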