CODE: Contrastive Pre-Training with Adversarial Fine-Tuning for Zero-Shot Expert Linking

Bo Chen, Jing Zhang, Xiaokang Zhang, Xiaobin Tang, Lingfan Cai, Hong Chen, Cuiping Li, Peng Zhang, Jie Tang

[AAAI-22] AI for Social Impact Track
Abstract: Expert finding, a popular service provided by many online websites such as Expertise Finder, LinkedIn, and AMiner, is beneficial for seeking qualified candidates, consultants, and collaborators. However, its quality suffers from a lack of ample sources of expert information. This paper employs AMiner as the basis, with the aim of linking any external experts to their counterparts on AMiner. As it is infeasible to acquire sufficient linkages from arbitrary external sources, we explore the problem of zero-shot expert linking. In this paper, we propose CODE, which first pre-trains an expert linking model by contrastive learning on AMiner so that it can capture the representation and matching patterns of experts without supervised signals; the model is then fine-tuned between AMiner and external sources in an adversarial manner to enhance its transferability. For evaluation, we first design two intrinsic tasks, author identification and paper clustering, to validate the representation and matching capability endowed by contrastive learning. The final expert linking performance on two genres of external sources also demonstrates the superiority of the adversarial fine-tuning method. Additionally, we describe the online deployment of CODE and continuously improve its online performance via active learning.
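The two training stages described in the abstract can be sketched in code. The snippet below is a minimal, hypothetical illustration rather than the authors' released implementation: an InfoNCE-style contrastive loss stands in for the self-supervised pre-training stage, and a gradient-reversal domain discriminator stands in for the adversarial fine-tuning between AMiner and an external source. All names (`info_nce_loss`, `DomainDiscriminator`, `adversarial_step`, the encoder) are illustrative assumptions.

```python
# Hypothetical sketch of CODE's two training stages (illustrative only;
# function and module names are assumptions, not the authors' code).
import torch
import torch.nn as nn
import torch.nn.functional as F

def info_nce_loss(anchor, positive, temperature=0.07):
    """Contrastive pre-training: pull two views of the same expert
    together and push apart other experts in the batch (InfoNCE)."""
    anchor = F.normalize(anchor, dim=-1)
    positive = F.normalize(positive, dim=-1)
    logits = anchor @ positive.t() / temperature   # (B, B) similarity matrix
    labels = torch.arange(anchor.size(0), device=anchor.device)
    return F.cross_entropy(logits, labels)         # diagonal entries = positives

class GradReverse(torch.autograd.Function):
    """Gradient reversal: identity in the forward pass, negated gradient in
    the backward pass, so the encoder learns to fool the discriminator."""
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lam * grad_output, None

class DomainDiscriminator(nn.Module):
    """Predicts whether an embedding comes from AMiner or an external source."""
    def __init__(self, dim=256):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, 2))

    def forward(self, z, lam=1.0):
        return self.net(GradReverse.apply(z, lam))

def adversarial_step(encoder, disc, aminer_batch, external_batch, lam=0.1):
    """One adversarial fine-tuning step (sketch): the reversed discriminator
    loss pushes AMiner and external embeddings toward a shared space."""
    z = torch.cat([encoder(aminer_batch), encoder(external_batch)])
    domain = torch.cat([torch.zeros(len(aminer_batch), dtype=torch.long),
                        torch.ones(len(external_batch), dtype=torch.long)]).to(z.device)
    return F.cross_entropy(disc(z, lam), domain)
```

In this framing, pre-training needs no labels because each expert's own augmented views serve as positives, which matches the zero-shot setting the abstract describes.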

Introduction Video

Sessions where this paper appears

  • Poster Session 2

    Fri, February 25 12:45 AM - 2:30 AM (+00:00)
    Red 6

  • Poster Session 5

    Sat, February 26 12:45 AM - 2:30 AM (+00:00)
    Red 6