A graph contrastive learning framework based on a negative-sample-free loss and adaptive augmentation was proposed to address two problems of existing graph contrastive learning methods: the random augmentation of the input graph and the need to construct losses with negative samples. In the framework, the degree centrality of nodes in the input graph was used to generate two views through adaptive augmentation, which avoided the deletion of important nodes and edges that random augmentation may cause and thus improved the robustness of the framework. The embedding matrices of the two views were obtained with a single shared-weight encoder network, without any specially designed components. A cross-correlation-based loss function, which did not rely on asymmetric neural network architectures, was used to guide the learning of the framework. This loss function required no negative samples, avoiding both the difficulty of defining negative samples on graphs and the additional computational and storage burden that negative samples impose when constructing the loss. Results showed that the proposed framework outperformed many baseline methods in classification accuracy in node classification experiments on three citation datasets.
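To make the cross-correlation-based loss concrete, the following is a minimal sketch of a Barlow Twins-style, negative-sample-free loss on two view embeddings; the function name, the standardization step, and the lambd weight are illustrative assumptions, not the paper's exact formulation.

```python
import torch


def cross_correlation_loss(z1: torch.Tensor, z2: torch.Tensor, lambd: float = 5e-3) -> torch.Tensor:
    """Negative-sample-free loss on two view embeddings of shape (N, D)."""
    n, _ = z1.shape
    # Standardize each embedding dimension over the batch of nodes.
    z1 = (z1 - z1.mean(dim=0)) / (z1.std(dim=0) + 1e-12)
    z2 = (z2 - z2.mean(dim=0)) / (z2.std(dim=0) + 1e-12)
    # Empirical cross-correlation matrix between the two views, shape (D, D).
    c = (z1.T @ z2) / n
    # Invariance term: diagonal entries should be close to 1.
    on_diag = (torch.diagonal(c) - 1).pow(2).sum()
    # Redundancy-reduction term: off-diagonal entries should be close to 0.
    off_diag = c.pow(2).sum() - torch.diagonal(c).pow(2).sum()
    return on_diag + lambd * off_diag
```

In this sketch, the two embedding matrices produced by the shared-weight encoder for the two augmented views would be passed in directly; minimizing the loss requires no negative pairs.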
ZHOU Tian-qi, YANG Yan, ZHANG Ji-jie, YIN Shao-wei, GUO Zeng-qiang. Graph contrastive learning based on negative-sample-free loss and adaptive augmentation [J]. Journal of Zhejiang University (Engineering Science), 2023, 57(2): 259-266. DOI: 10.3785/j.issn.1008-973X.2023.02.006
At present, researchers in computer vision have already addressed the problem of requiring negative samples, for example with BGRL[11], Barlow Twins[12] and Siamese network architectures. To further reduce the computational and storage burden caused by constructing contrastive losses with negative samples, this study proposes a simple yet effective contrastive framework: the graph contrastive learning framework based on negative-sample-free loss and adaptive augmentation (GNSA), which combines adaptive data augmentation with the Barlow Twins loss function. The framework computes the cross-correlation matrix of the embeddings of two augmented views of a graph; the network architecture is fully symmetric and requires no special techniques to construct special embedding vectors; the same encoder is shared when propagating both views.
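To illustrate the adaptive augmentation step, below is a hedged sketch of degree-centrality-guided edge dropping in the spirit of the augmentation described above; the function name, the log-degree centrality, and the p_mean/p_max parameters are assumptions for illustration, not the paper's exact scheme. Calling it twice with different parameters would yield the two views that the framework contrasts.

```python
import torch


def adaptive_edge_drop(edge_index: torch.Tensor, num_nodes: int,
                       p_mean: float = 0.3, p_max: float = 0.7) -> torch.Tensor:
    """Drop edges with a probability that decreases with degree centrality,
    so that edges attached to important (high-degree) nodes are kept more often."""
    num_edges = edge_index.size(1)
    # Node degrees computed from the (2, E) edge list.
    deg = torch.zeros(num_nodes)
    ones = torch.ones(num_edges)
    deg.scatter_add_(0, edge_index[0], ones)
    deg.scatter_add_(0, edge_index[1], ones)
    # Edge centrality: mean log-degree of the two endpoints.
    cent = (torch.log1p(deg[edge_index[0]]) + torch.log1p(deg[edge_index[1]])) / 2
    # Map low centrality to high drop probability, capped at p_max.
    drop_prob = (cent.max() - cent) / (cent.max() - cent.mean() + 1e-12) * p_mean
    drop_prob = drop_prob.clamp(max=p_max)
    # Sample a keep mask and return the augmented edge list (one view).
    keep = torch.rand(num_edges) >= drop_prob
    return edge_index[:, keep]
```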
XU K, HU W, LESKOVEC J, et al. How powerful are graph neural networks [C]// Proceedings of the 7th International Conference on Learning Representations. New Orleans: [s.n.], 2019: 1-17.
ABU-EL-HAIJA S, PEROZZI B, KAPOOR A, et al. Mixhop: higher-order graph convolutional architectures via sparsified neighborhood mixing [C]// Proceedings of the 36th International Conference on Machine Learning. Long Beach: PMLR, 2019: 21-29.
YOU J, YING R, LESKOVEC J. Position-aware graph neural networks [C]// Proceedings of the 36th International Conference on Machine Learning. Long Beach: PMLR, 2019: 7134-7143.
KIPF T N, WELLING M. Semi-supervised classification with graph convolutional networks [C]// Proceedings of the 5th International Conference on Learning Representations. Toulon: [s.n.], 2017: 1-14.
VELICKOVIC P, FEDUS W, HAMILTON W L, et al. Deep graph Infomax [C]// Proceedings of the 7th International Conference on Learning Representations. New Orleans: [s.n.], 2019: 1-17.
YOU Y, CHEN T L, SUI Y D, et al. Graph contrastive learning with augmentations [C]// Advances in Neural Information Processing Systems. [s.l.]: MIT Press, 2020: 1-12.
HASSANI K, AHMADI A H K. Contrastive multi-view representation learning on graphs [C]// Proceedings of the 37th International Conference on Machine Learning. [s.l.]: PMLR, 2020: 4116-4126.
ZHU Y Q, XU Y C, LIU Q, et al. Graph contrastive learning with adaptive augmentation [C]// Proceedings of the 2021 World Wide Web Conference. [s.l.]: ACM, 2021: 2069-2080.
BIELAK P, KAJDANOWICZ T, CHAWLA N V. Graph Barlow Twins: a self-supervised representation learning framework for graphs [EB/OL]. [2021-06-10]. https://arxiv.org/abs/2106.02466.
PEROZZI B, AL-RFOU R, SKIENA S. Deepwalk: online learning of social representations [C]// Proceedings of the 20th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. New York: ACM, 2014: 701-710.
GROVER A, LESKOVEC J. Node2vec: scalable feature learning for networks [C]// Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. San Francisco: ACM, 2016: 855-864.
GRILL J B, ALTCHE F, TALLEC C, et al. Bootstrap Your Own Latent: a new approach to self-supervised learning [C]// Advances in Neural Information Processing Systems. [s.l.]: MIT Press, 2020: 1-35.
HJELM R, FEDOROV A, LAVOIE-MARCHILDON S, et al. Learning deep representations by mutual information estimation and maximization [C]// Proceedings of the 7th International Conference on Learning Representations. New Orleans: [s.n.], 2019: 1-24.
VELICKOVIC P, CUCURULL G, CASANOVA A, et al. Graph attention networks [C]// Proceedings of the 6th International Conference on Learning Representations. Vancouver: [s.n.], 2018: 1-12.
TSAI Y H, BAI S J, MORENCY L P, et al. A note on connecting Barlow Twins with negative-sample-free contrastive learning [EB/OL]. [2021-05-04]. https://arxiv.org/abs/2104.13712.
HAMILTON W L, YING Z, LESKOVEC J. Inductive representation learning on large graphs [C]// Advances in Neural Information Processing Systems. Long Beach: MIT Press, 2017: 1024-1034.
PENG Z, HUANG W, LUO M, et al. Graph representation learning via graphical mutual information maximization [C]// Proceedings of the 2020 World Wide Web Conference. Taipei: ACM, 2020: 259-270.
WAN S, PAN S, YANG J, et al. Contrastive and generative graph convolutional networks for graph-based semi-supervised learning [C]// Proceedings of the AAAI Conference on Artificial Intelligence. [s.l.]: AAAI, 2021: 10049-10057.
YANG Z W, COHEN W, SALAKHUTDINOV R. Revisiting semi-supervised learning with graph embeddings [C]// Proceedings of the 33rd International Conference on Machine Learning. New York: [s.n.], 2016: 40-48.
DEFFERRARD M, BRESSON X, VANDERGHEYNST P. Convolutional neural networks on graphs with fast localized spectral filtering [C]// Advances in Neural Information Processing Systems. Barcelona: MIT Press, 2016: 3837-3845.