Paper Title

LRG at SemEval-2020 Task 7: Assessing the Ability of BERT and Derivative Models to Perform Short-Edits based Humor Grading

Paper Authors

Siddhant Mahurkar, Rajaswa Patil

Paper Abstract

In this paper, we assess the ability of BERT and its derivative models (RoBERTa, DistilBERT, and ALBERT) for short-edits based humor grading. We test these models on humor grading and classification tasks over the Humicroedit and FunLines datasets. We perform extensive experiments with these models to test their language modeling and generalization abilities via zero-shot inference and cross-dataset inference based approaches. Further, we inspect the role of self-attention layers in humor grading by performing a qualitative analysis of the self-attention weights from the final layer of the trained BERT model. Our experiments show that all the pre-trained BERT derivative models exhibit significant generalization capabilities for humor-grading related tasks.
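To make the setup the abstract describes concrete, below is a minimal sketch of grading a single edited headline with a BERT-style regression head and reading out the final-layer self-attention weights, using the HuggingFace transformers library. The model name, the single-logit regression head, and the [CLS]-row attention readout are illustrative assumptions; the paper's exact architecture, hyperparameters, and training procedure are not specified in the abstract.

```python
# Minimal sketch: scoring an edited headline with a BERT-style regression head
# and inspecting final-layer self-attention weights. The model name, the
# single-logit regression setup, and the [CLS]-row readout are illustrative
# assumptions, not the authors' exact configuration.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_name = "bert-base-uncased"  # swap for roberta-base, distilbert-base-uncased, albert-base-v2
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(
    model_name,
    num_labels=1,            # single scalar output for humor-grade regression
    output_attentions=True,  # expose self-attention weights for analysis
)
model.eval()

headline = "Scientists discover that coffee is secretly plotting against Mondays"
inputs = tokenizer(headline, return_tensors="pt", truncation=True)

with torch.no_grad():
    outputs = model(**inputs)

# The regression head is freshly initialized here, so the score is meaningless
# until the model is fine-tuned on a humor-grading dataset.
humor_grade = outputs.logits.squeeze().item()
final_layer_attn = outputs.attentions[-1]  # shape: (batch, num_heads, seq_len, seq_len)

# Average attention paid by the [CLS] position to each token, across heads,
# as one simple view of which words the final layer attends to.
tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])
cls_attention = final_layer_attn[0, :, 0, :].mean(dim=0)
for tok, weight in zip(tokens, cls_attention.tolist()):
    print(f"{tok:>15s}  {weight:.3f}")
```

Under this setup, the cross-dataset inference the abstract mentions would amount to fine-tuning on one of Humicroedit or FunLines and evaluating on the other, while zero-shot inference would skip fine-tuning altogether.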
