Paper Title
Out of Thin Air: Is Zero-Shot Cross-Lingual Keyword Detection Better Than Unsupervised?
Paper Authors
Paper Abstract
Keyword extraction is the task of retrieving words that are essential to the content of a given document. Researchers have proposed various approaches to tackle this problem. At the top-most level, approaches are divided into those that require training (supervised) and those that do not (unsupervised). In this study, we are interested in the setting where no training data is available for the language under investigation. More specifically, we explore whether pretrained multilingual language models can be employed for zero-shot cross-lingual keyword extraction on low-resource languages with limited or no available labeled training data, and whether they outperform state-of-the-art unsupervised keyword extractors. The comparison is conducted on six news article datasets covering two high-resource languages, English and Russian, and four low-resource languages, Croatian, Estonian, Latvian, and Slovenian. We find that pretrained models fine-tuned on a multilingual corpus covering languages that do not appear in the test set (i.e., in a zero-shot setting) consistently outperform unsupervised models in all six languages.
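The snippet below is a minimal sketch of the zero-shot setup the abstract describes, assuming keyword extraction is framed as token classification (O / B-KEY / I-KEY span tagging) with a pretrained multilingual encoder; the xlm-roberta-base backbone, the tag scheme, and the toy training example are illustrative assumptions, not the authors' exact pipeline.

```python
# Sketch: fine-tune a multilingual encoder for keyword tagging on
# source-language data only, then apply it unchanged to a language
# it never saw during fine-tuning (the zero-shot cross-lingual setting).
import torch
from transformers import AutoTokenizer, AutoModelForTokenClassification

MODEL = "xlm-roberta-base"          # assumed multilingual backbone
LABELS = ["O", "B-KEY", "I-KEY"]    # keywords framed as span tagging

tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForTokenClassification.from_pretrained(
    MODEL, num_labels=len(LABELS))

# Toy source-language (English) example; real fine-tuning would use full
# news-article corpora in the source languages, never the target language.
words = ["central", "bank", "raises", "interest", "rates"]
tags = [1, 2, 0, 1, 2]              # "central bank", "interest rates"

enc = tokenizer(words, is_split_into_words=True, return_tensors="pt")
# Align word-level tags to subword tokens; special tokens get -100 (ignored).
labels = [-100 if wid is None else tags[wid] for wid in enc.word_ids()]
enc["labels"] = torch.tensor([labels])

model.train()
loss = model(**enc).loss            # one illustrative forward/backward pass
loss.backward()

# Zero-shot inference on a target-language document (e.g. Slovenian):
model.eval()
doc = tokenizer("Vlada je potrdila nov proračun.", return_tensors="pt")
pred = doc_logits = model(**doc).logits.argmax(-1)
# Tokens tagged B-KEY / I-KEY are merged into keyword candidates.
```

For the unsupervised side of the comparison, a statistical extractor such as YAKE needs no training data at all; the parameters below are illustrative defaults, not the exact baseline configuration used in the paper.

```python
import yake

# Unsupervised baseline: score candidate phrases from document statistics.
extractor = yake.KeywordExtractor(lan="en", n=2, top=5)
text = "The central bank raised interest rates amid rising inflation."
for keyword, score in extractor.extract_keywords(text):
    print(keyword, score)           # in YAKE, lower score = more relevant
```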