Title

Discriminatively-Tuned Generative Classifiers for Robust Natural Language Inference

Authors

Xiaoan Ding, Tianyu Liu, Baobao Chang, Zhifang Sui, Kevin Gimpel

Abstract

While discriminative neural network classifiers are generally preferred, recent work has shown advantages of generative classifiers in terms of data efficiency and robustness. In this paper, we focus on natural language inference (NLI). We propose GenNLI, a generative classifier for NLI tasks, and empirically characterize its performance by comparing it to five baselines, including discriminative models and large-scale pretrained language representation models like BERT. We explore training objectives for discriminative fine-tuning of our generative classifiers, showing improvements over log loss fine-tuning from prior work. In particular, we find strong results with a simple unbounded modification to log loss, which we call the "infinilog loss". Our experiments show that GenNLI outperforms both discriminative and pretrained baselines across several challenging NLI experimental settings, including small training sets, imbalanced label distributions, and label noise.
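The abstract does not spell out the infinilog loss, so the sketch below is illustrative only. It contrasts the standard log loss for a generative classifier, where `scores[y]` stands for log p(x, y) = log p(x | y) + log p(y), with one plausible "unbounded" variant that drops the gold label from the normalizer. The function names and the exact form of the variant are assumptions, not taken from the paper.

```python
import math

def log_loss(scores, gold):
    """Standard log loss -log p(y | x) for a generative classifier.

    scores[y] is log p(x, y); this loss is bounded below by 0.
    """
    log_norm = math.log(sum(math.exp(s) for s in scores))
    return -(scores[gold] - log_norm)

def infinilog_loss(scores, gold):
    """Hypothetical unbounded variant (an assumption, not necessarily
    the paper's definition): exclude the gold label from the normalizer,
    so the loss keeps decreasing as the gold score grows.
    """
    others = [s for y, s in enumerate(scores) if y != gold]
    log_norm = math.log(sum(math.exp(s) for s in others))
    return -(scores[gold] - log_norm)

# Example with the three NLI labels (entailment, neutral, contradiction).
scores = [2.0, 0.5, -1.0]  # per-label log p(x, y)
print(log_loss(scores, gold=0))        # ~0.24, never below 0
print(infinilog_loss(scores, gold=0))  # ~-1.30, unbounded below
```

Because the gold label no longer appears in the normalizer, the variant keeps providing gradient signal even once the classifier is confidently correct, which is one way a loss can be "unbounded" relative to log loss.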
