Computer Technology and Control Engineering
Aggregate graph embedding method based on non-uniform neighbor nodes sampling |
Si CHEN( ),Xiao-dong CAI*( ),Zhen-zhen HOU,Bo LI |
School of Information and Communication, Guilin University of Electronic Technology, Guilin 541004, China |
Abstract Aiming at the problem that aggregate graph embedding methods widely use a uniform sampling function to construct node neighborhoods, i.e., neighbor nodes are sampled randomly while the differences in their own properties are neglected, a degree-based non-uniform neighbor node sampling method was proposed. Neighbors of the target node with larger degrees are sampled preferentially. Some neighbors with smaller degrees are hidden so that they do not appear during sampling. The remaining nodes in the neighbor set are sampled randomly to preserve a degree of sampling randomness; these randomly sampled nodes together with the preferentially sampled nodes form the neighborhood of the target node. The proposed non-uniform sampling method was applied to the graph embedding process. Experimental results showed that the proposed method improves the classification F1 score of graph embedding to 91.7% on the Reddit dataset, which is superior to the results of several well-known graph embedding methods. Experiments on the overlapping-community dataset PPI also confirmed that the proposed method generates higher-quality embeddings for graph data.
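The sampling strategy described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the split between preferential and random picks (`top_frac`) and the fraction of low-degree neighbors hidden (`hide_frac`) are assumed parameters, since the abstract does not specify the exact thresholds.

```python
import random

def non_uniform_sample(adj, target, k, top_frac=0.5, hide_frac=0.2):
    """Degree-biased neighbor sampling (sketch; top_frac/hide_frac are assumptions).

    adj: dict mapping each node to a list of its neighbors
    target: node whose neighborhood is being constructed
    k: total number of neighbors to sample
    """
    # Rank neighbors by degree, largest first.
    neighbors = sorted(adj[target], key=lambda v: len(adj[v]), reverse=True)

    # Preferentially take the highest-degree neighbors.
    n_top = min(int(k * top_frac), len(neighbors))
    preferred = neighbors[:n_top]

    # Hide a fraction of the lowest-degree neighbors from sampling.
    n_hidden = int(len(neighbors) * hide_frac)
    visible = neighbors[n_top:len(neighbors) - n_hidden or None]

    # Randomly sample among the remaining visible neighbors
    # to preserve some sampling randomness.
    n_rand = min(k - n_top, len(visible))
    randoms = random.sample(visible, n_rand)

    return preferred + randoms
```

In an aggregate (GraphSAGE-style) pipeline, the returned node set would replace the uniformly sampled neighborhood fed to the aggregator at each layer.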
Received: 19 September 2018
Published: 21 November 2019
Corresponding Authors:
Xiao-dong CAI
E-mail: 646299680@qq.com;caixiaodong@guet.edu.cn
Keywords: graph embedding, network embedding, non-uniform sampling, graph convolutional network, neighborhood aggregation