Paper title
Extending Text Informativeness Measures to Passage Interestingness Evaluation (Language Model vs. Word Embedding)
Paper authors
Paper abstract
Standard informativeness measures used to evaluate Automatic Text Summarization mostly rely on n-gram overlap between the automatic summary and the reference summaries. These measures differ in the metric they use (cosine, ROUGE, Kullback-Leibler, Logarithm Similarity, etc.) and in the bag of terms they consider (single words, word n-grams, entities, nuggets, etc.). Recent word embedding approaches offer a continuous alternative to discrete approaches based on the presence or absence of a text unit. Informativeness measures have been extended to Focused Information Retrieval evaluation, which involves a user's information need represented by short queries. In particular, for the CLEF-INEX Tweet Contextualization task, tweet contents have been considered as queries. In this paper we define the concept of Interestingness as a generalization of Informativeness, whereby the information need is diverse and formalized as an unknown set of implicit queries. We then study the ability of state-of-the-art informativeness measures to cope with this generalization. We show that within this new framework, standard word embeddings outperform discrete measures only on uni-grams, whereas bi-grams appear to be a key point of interestingness evaluation. Lastly, we show that the CLEF-INEX Tweet Contextualization 2012 Logarithm Similarity measure provides the best results.
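The discrete, n-gram-overlap family of measures the abstract refers to can be sketched minimally as a cosine similarity over bags of word n-grams. This is an illustrative reconstruction of the general idea, not the paper's exact measure; the function names are hypothetical.

```python
from collections import Counter
from math import sqrt

def ngram_bag(tokens, n):
    """Bag (multiset) of word n-grams in a token list."""
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def cosine_overlap(summary, reference, n=1):
    """Cosine similarity between the n-gram bags of a candidate
    summary and a reference text (a generic discrete measure)."""
    a = ngram_bag(summary.lower().split(), n)
    b = ngram_bag(reference.lower().split(), n)
    dot = sum(count * b[gram] for gram, count in a.items())
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0
```

Switching `n` from 1 to 2 moves the measure from uni-gram to bi-gram overlap, the distinction the abstract highlights as decisive for interestingness evaluation.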