Paper Title

Exploiting Rich Textual User-Product Context for Improving Sentiment Analysis

Paper Authors

Chenyang Lyu, Linyi Yang, Yue Zhang, Yvette Graham, Jennifer Foster

Paper Abstract

User and product information associated with a review is useful for sentiment polarity prediction. Typical approaches incorporating such information focus on modeling users and products as implicitly learned representation vectors. Most do not exploit the potential of historical reviews, or those that currently do require unnecessary modifications to model architecture or do not make full use of user/product associations. The contribution of this work is twofold: i) a method to explicitly employ historical reviews belonging to the same user/product to initialize representations, and ii) efficient incorporation of textual associations between users and products via a user-product cross-context module. Experiments on IMDb, Yelp-2013 and Yelp-2014 benchmarks show that our approach substantially outperforms previous state-of-the-art. Since we employ BERT-base as the encoder, we additionally provide experiments in which our approach performs well with Span-BERT and Longformer. Furthermore, experiments where the reviews of each user/product in the training data are downsampled demonstrate the effectiveness of our approach under a low-resource setting.
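For illustration only, below is a minimal PyTorch sketch of how the two contributions described in the abstract might be wired together: user/product representations built from encodings of historical reviews, and a "user-product cross-context" module, assumed here to be bidirectional cross-attention. The class names (`UserProductCrossContext`, `ReviewClassifier`), the pooling strategy, and the 10-way output (IMDb-style ratings) are assumptions for the sketch, not the authors' implementation.

```python
# Hypothetical sketch of the abstract's two ideas (not the paper's exact architecture):
# i)  user/product context built from encodings of that user's/product's historical reviews,
# ii) a "user-product cross-context" module, assumed here to be cross-attention.
import torch
import torch.nn as nn


class UserProductCrossContext(nn.Module):
    """Assumed cross-context module: user context attends to product context and vice versa."""

    def __init__(self, hidden: int = 768, heads: int = 8):
        super().__init__()
        self.u2p = nn.MultiheadAttention(hidden, heads, batch_first=True)
        self.p2u = nn.MultiheadAttention(hidden, heads, batch_first=True)

    def forward(self, user_ctx: torch.Tensor, prod_ctx: torch.Tensor):
        # user_ctx: (B, Lu, H) encodings of the user's historical reviews
        # prod_ctx: (B, Lp, H) encodings of the product's historical reviews
        u, _ = self.u2p(user_ctx, prod_ctx, prod_ctx)  # user queries product context
        p, _ = self.p2u(prod_ctx, user_ctx, user_ctx)  # product queries user context
        return u, p


class ReviewClassifier(nn.Module):
    """Assumed classifier head combining the current review with user/product context."""

    def __init__(self, hidden: int = 768, num_classes: int = 10):
        super().__init__()
        self.cross = UserProductCrossContext(hidden)
        self.head = nn.Linear(hidden * 3, num_classes)

    def forward(self, review_vec, user_ctx, prod_ctx):
        u, p = self.cross(user_ctx, prod_ctx)
        # Mean-pool the cross-contextualized user/product representations and
        # concatenate them with the current review's vector.
        feats = torch.cat([review_vec, u.mean(dim=1), p.mean(dim=1)], dim=-1)
        return self.head(feats)


if __name__ == "__main__":
    B, Lu, Lp, H = 2, 5, 7, 768       # batch, #user reviews, #product reviews, hidden size
    review_vec = torch.randn(B, H)    # e.g. the [CLS] vector of the current review from BERT-base
    user_ctx = torch.randn(B, Lu, H)  # encodings of the user's historical reviews
    prod_ctx = torch.randn(B, Lp, H)  # encodings of the product's historical reviews
    logits = ReviewClassifier()(review_vec, user_ctx, prod_ctx)
    print(logits.shape)               # torch.Size([2, 10])
```

In this sketch the historical-review encodings stand in for the explicit initialization described in contribution i); any encoder compatible with the hidden size (BERT-base, Span-BERT, Longformer) could produce them.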
