Paper Title
On the State of the Art in Authorship Attribution and Authorship Verification
Authors
Abstract
Despite decades of research on authorship attribution (AA) and authorship verification (AV), inconsistent dataset splits/filtering and mismatched evaluation methods make it difficult to assess the state of the art. In this paper, we survey the field, resolve points of confusion, introduce Valla, which standardizes and benchmarks AA/AV datasets and metrics, and conduct a large-scale empirical evaluation that enables apples-to-apples comparisons between existing methods. We evaluate eight promising methods on fifteen datasets (including distribution-shifted challenge sets) and introduce a new large-scale dataset based on texts archived by Project Gutenberg. Surprisingly, we find that a traditional n-gram-based model performs best on 5 of 7 AA tasks, achieving an average macro-accuracy of $76.50\%$ (compared to $66.71\%$ for a BERT-based model). However, BERT-based models perform best on the two AA datasets with the greatest number of words per author, as well as on the AV datasets. While AV methods are easily applied to AA, they are seldom included as baselines in AA papers. We show that, through the application of hard-negative mining, AV methods are competitive alternatives to AA methods. Valla and all experiment code can be found here: https://github.com/JacobTyo/Valla
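The headline numbers above are reported as macro-accuracy, i.e. per-author accuracy averaged uniformly over authors, so prolific authors do not dominate the score. A minimal sketch of that metric (an illustration of the standard definition, not Valla's actual implementation):

```python
from collections import defaultdict

def macro_accuracy(y_true, y_pred):
    """Per-class (per-author) accuracy, averaged uniformly over classes.

    Unlike plain (micro) accuracy, each author contributes equally
    regardless of how many test documents they have.
    """
    correct = defaultdict(int)
    total = defaultdict(int)
    for t, p in zip(y_true, y_pred):
        total[t] += 1
        correct[t] += int(t == p)
    return sum(correct[a] / total[a] for a in total) / len(total)

# Toy example: author "A" has 3 test docs, author "B" has 1.
y_true = ["A", "A", "A", "B"]
y_pred = ["A", "A", "B", "B"]
print(macro_accuracy(y_true, y_pred))  # (2/3 + 1/1) / 2 ≈ 0.833
```

Note that plain accuracy on the same toy example would be $3/4$; the gap between the two grows as the per-author document counts become more imbalanced.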
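The claim that AV methods transfer to AA rests on a simple reduction: score the query document against each candidate author's reference texts with a verification model and attribute it to the highest-scoring author. A hedged sketch of that reduction follows; `verify` is a placeholder for any pairwise same-author scorer, not a specific model from the paper, and `attribute_via_verification` is a hypothetical helper name.

```python
def attribute_via_verification(query_doc, author_refs, verify):
    """Reduce attribution (AA) to verification (AV).

    query_doc   : the document to attribute.
    author_refs : dict mapping author name -> list of reference documents.
    verify(a, b): any AV scorer returning a higher value when a and b
                  are likely written by the same author.

    Returns the candidate author with the highest mean verification score.
    """
    best_author, best_score = None, float("-inf")
    for author, refs in author_refs.items():
        score = sum(verify(query_doc, r) for r in refs) / len(refs)
        if score > best_score:
            best_author, best_score = author, score
    return best_author

# Toy stand-in scorer: Jaccard overlap of word sets (a real AV model
# would be a learned similarity, e.g. the BERT-based models evaluated above).
def verify(a, b):
    sa, sb = set(a.split()), set(b.split())
    return len(sa & sb) / len(sa | sb)

author_refs = {"A": ["the cat sat"], "B": ["dogs run fast"]}
print(attribute_via_verification("the cat ran", author_refs, verify))  # -> A
```

Hard-negative mining, as used in the paper, would then train `verify` with an emphasis on different-author pairs that the model currently scores as similar, which is what makes the reduction competitive with dedicated AA methods.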