Paper Title

CODE: Contrastive Pre-training with Adversarial Fine-tuning for Zero-shot Expert Linking

Authors

Bo Chen, Jing Zhang, Xiaokang Zhang, Xiaobin Tang, Lingfan Cai, Hong Chen, Cuiping Li, Peng Zhang, Jie Tang

Abstract

Expert finding, a popular service provided by many online websites such as Expertise Finder, LinkedIn, and AMiner, is beneficial for seeking candidate qualifications, consultants, and collaborators. However, its quality suffers from a lack of ample sources of expert information. This paper employs AMiner as the basis, with the aim of linking any external experts to their counterparts on AMiner. As it is infeasible to acquire sufficient linkages from arbitrary external sources, we explore the problem of zero-shot expert linking. In this paper, we propose CODE, which first pre-trains an expert linking model by contrastive learning on AMiner such that it can capture the representation and matching patterns of experts without supervised signals; it is then fine-tuned between AMiner and external sources in an adversarial manner to enhance the model's transferability. For evaluation, we first design two intrinsic tasks, author identification and paper clustering, to validate the representation and matching capability endowed by contrastive learning. The final external expert linking performance on two genres of external sources also implies the superiority of the adversarial fine-tuning method. Additionally, we show the online deployment of CODE and continuously improve its online performance via active learning.
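The abstract describes pre-training the expert linking model with contrastive learning, so that matched expert pairs are pulled together in embedding space while mismatched pairs are pushed apart. Below is a minimal NumPy sketch of an InfoNCE-style contrastive objective of the kind commonly used for such pre-training; the function name and batch-construction details are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def info_nce_loss(anchor, positive, temperature=0.1):
    """InfoNCE-style contrastive loss (illustrative sketch).

    anchor, positive: (B, D) arrays; row i of `positive` is the
    matching view of row i of `anchor`, while all other rows in the
    batch serve as in-batch negatives.
    """
    # L2-normalize embeddings so dot products are cosine similarities
    a = anchor / np.linalg.norm(anchor, axis=1, keepdims=True)
    p = positive / np.linalg.norm(positive, axis=1, keepdims=True)
    logits = a @ p.T / temperature                     # (B, B) similarity matrix
    logits -= logits.max(axis=1, keepdims=True)        # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    # true pairs lie on the diagonal; minimize their negative log-probability
    return -np.mean(np.diag(log_probs))
```

With this objective, a batch where each anchor's positive is its true counterpart yields a lower loss than a batch with shuffled (mismatched) pairs, which is the signal that lets the model learn matching patterns without supervised linkage labels.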
