AutoGCL: Automated Graph Contrastive Learning via Learnable View Generators

Yihang Yin, Qingzhong Wang, Siyu Huang, Haoyi Xiong, Xiang Zhang

[AAAI-22] Main Track
Abstract: Contrastive learning has been widely applied to graph representation learning, where view generators play a vital role in producing effective contrastive samples. Most existing contrastive learning methods employ pre-defined view generation operations, e.g., node dropping or edge perturbation, which are independent of the input data and cannot preserve the original semantic structures well. To address this issue, we propose a novel framework named Automated Graph Contrastive Learning (AutoGCL). The core of AutoGCL is a set of learnable graph view generators that learn a probability distribution over contrastive samples conditioned on the input graphs, preserving the most discriminative structures of the original graphs while providing sufficient augmentation variance for contrastive learning. Moreover, we propose a joint training strategy that trains the learnable view generators, the graph encoder, and the classifier in an end-to-end manner, forcing the generated views to be topologically different but semantically similar. Extensive experiments on semi-supervised, unsupervised, and transfer learning tasks demonstrate that AutoGCL outperforms state-of-the-art graph contrastive learning methods. In addition, visualization results show that the learnable view generators deliver more compact and semantically meaningful contrastive samples than existing view generation methods.
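To make the core idea concrete, below is a minimal PyTorch sketch of a learnable view generator in the spirit described by the abstract: a small network predicts per-node augmentation choices conditioned on the input graph, sampled with Gumbel-Softmax so the sampling stays differentiable, paired with a standard NT-Xent contrastive loss. This is an illustrative reconstruction under simplifying assumptions (dense adjacency matrices, a single mean-aggregation layer standing in for the GNN); the names `ViewGenerator` and `nt_xent` are hypothetical and do not come from the authors' released code.

```python
# Illustrative sketch only: not the authors' implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ViewGenerator(nn.Module):
    """Predicts per-node augmentation choices (keep / drop / mask)
    conditioned on the input graph; Gumbel-Softmax keeps the discrete
    choice differentiable for end-to-end training."""
    def __init__(self, in_dim, hid_dim):
        super().__init__()
        self.lin1 = nn.Linear(in_dim, hid_dim)
        self.lin2 = nn.Linear(hid_dim, 3)  # logits for keep / drop / mask

    def forward(self, x, adj, tau=1.0):
        # One round of mean-style message passing (stand-in for a deeper GNN).
        h = F.relu(self.lin1(adj @ x + x))
        logits = self.lin2(h)
        # Differentiable one-hot sample per node: columns = keep, drop, mask.
        choice = F.gumbel_softmax(logits, tau=tau, hard=True)
        keep = choice[:, 0:1] + choice[:, 2:3]   # masked nodes stay in the graph
        x_view = x * choice[:, 0:1]              # dropped/masked features zeroed
        adj_view = adj * (keep @ keep.t())       # remove edges of dropped nodes
        return x_view, adj_view

def nt_xent(z1, z2, tau=0.5):
    """Standard NT-Xent contrastive loss between two batches of view embeddings."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    sim = (z1 @ z2.t()) / tau                    # (B, B) similarity matrix
    targets = torch.arange(z1.size(0), device=z1.device)
    return F.cross_entropy(sim, targets)
```

In a training loop matching the abstract's description, two such generators would produce the two views of each input graph, a shared graph encoder would embed them, and `nt_xent` would pull matched views together while the classifier head (in the semi-supervised setting) is trained jointly with the generators and encoder.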

Introduction Video

Sessions where this paper appears

  • Poster Session 1

    Thu, February 24 4:45 PM - 6:30 PM (+00:00)
    Blue 1

  • Poster Session 8

    Sun, February 27 12:45 AM - 2:30 AM (+00:00)
    Blue 1