Paper Title
What Makes Pre-trained Language Models Better Zero-shot Learners?
Authors
Abstract
Current methods for prompt learning in zero-shot scenarios widely rely on a development set with sufficient human-annotated data to select the best-performing prompt template a posteriori. This is not ideal because in a real-world zero-shot scenario of practical relevance, no labelled data is available. Thus, we propose a simple yet effective method for screening reasonable prompt templates in zero-shot text classification: Perplexity Selection (Perplection). We hypothesize that language discrepancy can be used to measure the efficacy of prompt templates, and thereby develop a substantiated perplexity-based scheme allowing for forecasting the performance of prompt templates in advance. Experiments show that our method leads to improved prediction performance in a realistic zero-shot setting, eliminating the need for any labelled examples.
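The core idea of the abstract can be sketched as follows: fill each candidate template with the input text, score the filled prompt with a language model, and pick the template whose prompt has the lowest perplexity. This is a minimal illustration, not the paper's implementation; the scoring function `lm_token_log_probs` is a hypothetical stand-in for per-token log-probabilities that would in practice come from a pretrained language model such as GPT-2.

```python
import math

def perplexity(log_probs):
    # Perplexity = exp of the negative mean per-token log-probability;
    # lower values mean the text looks more natural to the model.
    return math.exp(-sum(log_probs) / len(log_probs))

def select_template(templates, sentence, lm_token_log_probs):
    # Fill each candidate template with the input sentence, score the
    # resulting prompt, and return the template whose filled prompt
    # has the lowest perplexity (no labelled data required).
    scored = []
    for tpl in templates:
        prompt = tpl.format(sentence=sentence)
        scored.append((perplexity(lm_token_log_probs(prompt)), tpl))
    return min(scored)[1]
```

With a real model, `lm_token_log_probs` would tokenize the prompt and return the log-probability the model assigns to each token; the ranking step itself is model-agnostic.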