Title


Meta-Embeddings for Natural Language Inference and Semantic Similarity tasks

Authors

Shree Charran R, Rahul Kumar Dubey

Abstract


Word representations form the core component of almost all advanced Natural Language Processing (NLP) applications, such as text mining, question answering, and text summarization. Over the last two decades, immense research has been conducted to come up with one single model that solves all major NLP tasks. The major problem currently is that there is a plethora of choices for different NLP tasks. Thus, for NLP practitioners, the task of choosing the right model itself becomes a challenge. Combining multiple pre-trained word embeddings to form meta-embeddings has therefore become a viable approach to tackling NLP tasks. Meta-embedding learning is the process of producing a single word embedding from a given set of pre-trained input word embeddings. In this paper, we propose to use meta-embeddings derived from a few State-of-the-Art (SOTA) models to efficiently tackle mainstream NLP tasks like classification, semantic relatedness, and text similarity. We have compared both ensemble and dynamic variants to identify an efficient approach. The results obtained show that even the best State-of-the-Art models can be improved upon, demonstrating that meta-embeddings can serve several NLP tasks by harnessing the power of several individual representations.
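The abstract defines meta-embedding learning as producing a single word embedding from a set of pre-trained input embeddings. As a minimal sketch of the idea (not the paper's actual ensemble or dynamic methods), two common baseline combiners are concatenation and dimension-padded averaging; the embedding dimensions below are illustrative placeholders:

```python
import numpy as np

# Hypothetical pre-trained embeddings for the same word from two
# different source models, with different dimensionalities
# (e.g. a 300-d GloVe-style vector and a 768-d BERT-style vector).
emb_a = np.random.rand(300)
emb_b = np.random.rand(768)

def concat_meta(embeddings):
    """Concatenation meta-embedding: stack source vectors end to end."""
    return np.concatenate(embeddings)

def avg_meta(embeddings):
    """Averaging meta-embedding: zero-pad each source vector to the
    largest dimension, then take the element-wise mean."""
    dim = max(e.shape[0] for e in embeddings)
    padded = [np.pad(e, (0, dim - e.shape[0])) for e in embeddings]
    return np.mean(padded, axis=0)

meta_c = concat_meta([emb_a, emb_b])  # shape (1068,)
meta_p = avg_meta([emb_a, emb_b])     # shape (768,)
```

The averaged variant keeps the meta-embedding compact at the cost of mixing dimensions that were learned independently, while concatenation preserves every source dimension but grows linearly with the number of source models.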
