Paper Title
Generating Fact Checking Explanations
Paper Authors
Paper Abstract
Most existing work on automated fact checking is concerned with predicting the veracity of claims based on metadata, social network spread, language used in claims, and, more recently, evidence supporting or denying claims. A crucial piece of the puzzle that is still missing is to understand how to automate the most elaborate part of the process -- generating justifications for verdicts on claims. This paper provides the first study of how these explanations can be generated automatically based on available claim context, and how this task can be modelled jointly with veracity prediction. Our results indicate that optimising both objectives at the same time, rather than training them separately, improves the performance of a fact checking system. The results of a manual evaluation further suggest that the informativeness, coverage and overall quality of the generated explanations are also improved in the multi-task model.
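To make the idea of "optimising both objectives at the same time" concrete, below is a minimal, hypothetical sketch of joint multi-task training for veracity prediction and extractive explanation generation: a shared encoder feeds two task heads, and the two losses are summed before a single backward pass. The abstract does not specify an architecture; the GRU encoder, the 6-way verdict space, and all names (JointFactChecker, veracity_head, explanation_head) are illustrative assumptions, not the authors' implementation.

```python
# Sketch of joint (multi-task) training: one shared encoder, two task heads.
# All module names, dimensions, and the encoder choice are illustrative.
import torch
import torch.nn as nn

class JointFactChecker(nn.Module):
    def __init__(self, emb_dim=300, hidden_size=256, num_labels=6):
        super().__init__()
        # Stand-in for a pretrained encoder over the claim and its context sentences.
        self.encoder = nn.GRU(input_size=emb_dim, hidden_size=hidden_size,
                              batch_first=True, bidirectional=True)
        # Veracity head: classifies the claim into one of `num_labels` verdicts.
        self.veracity_head = nn.Linear(2 * hidden_size, num_labels)
        # Explanation head: scores each context sentence for inclusion
        # in the extractive justification.
        self.explanation_head = nn.Linear(2 * hidden_size, 1)

    def forward(self, sentence_embeddings):
        # sentence_embeddings: (batch, num_sentences, emb_dim)
        states, _ = self.encoder(sentence_embeddings)                 # (B, S, 2H)
        pooled = states.mean(dim=1)                                   # (B, 2H)
        veracity_logits = self.veracity_head(pooled)                  # (B, num_labels)
        sentence_scores = self.explanation_head(states).squeeze(-1)   # (B, S)
        return veracity_logits, sentence_scores

model = JointFactChecker()
optimiser = torch.optim.Adam(model.parameters(), lr=1e-3)
veracity_loss_fn = nn.CrossEntropyLoss()
explanation_loss_fn = nn.BCEWithLogitsLoss()

# Toy batch: 4 claims, each with 10 context sentences as 300-dim embeddings.
x = torch.randn(4, 10, 300)
veracity_gold = torch.randint(0, 6, (4,))                 # verdict label per claim
explanation_gold = torch.randint(0, 2, (4, 10)).float()   # sentences in the gold justification

veracity_logits, sentence_scores = model(x)
# Joint optimisation: a single combined loss instead of two separate training runs.
loss = (veracity_loss_fn(veracity_logits, veracity_gold)
        + explanation_loss_fn(sentence_scores, explanation_gold))
loss.backward()
optimiser.step()
```

The design choice the abstract argues for is the combined loss in the last few lines: gradients from the explanation objective and the veracity objective update the shared encoder together, which is what distinguishes the multi-task model from training the two systems separately.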