1. School of Information Engineering, Zhejiang University of Technology, Hangzhou 310012, China; 2. Hangzhou Fengjing Technology Company, Hangzhou 310000, China
A static-historical network (Sta-HisNet) method combining static facts with repeating historical facts was proposed to address the problem that existing dynamic knowledge graph reasoning methods tend to overlook the large amount of static information and repeating historical facts present in dynamic knowledge graphs. The hidden static connections between entities in the dynamic knowledge graph were extracted to form static facts, which assist reasoning over the dynamic knowledge graph. Historical facts were used to construct a historical vocabulary, which was queried when predicting the future: facts that had never occurred in history were penalized, raising the probability of predicting repeating historical facts. Experiments were conducted on two public datasets for dynamic knowledge graph reasoning, with five mainstream models as baselines for comparison. In the entity prediction experiments, the mean reciprocal rank (MRR) reached 0.489 1 and 0.530 5, and Hits@10 reached 0.588 7 and 0.616 5 on the two datasets respectively, demonstrating the effectiveness of the proposed method.
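The historical-vocabulary mechanism summarized above lends itself to a short illustration. The sketch below is a minimal reading of the description, not the authors' implementation: the helper names `build_history_vocab` and `apply_history_penalty`, the additive `penalty` constant, and the integer-indexed (s, r, o, t) quadruples are assumptions introduced here; the underlying idea, down-weighting candidate objects that have never co-occurred with the queried (subject, relation) pair, follows the abstract and is close in spirit to the copy mechanism of CyGNET [16].

```python
import torch

def build_history_vocab(quads):
    """Build a historical vocabulary: for every (subject, relation) pair,
    record which object entities have appeared with it in past snapshots.
    `quads` is an iterable of integer (s, r, o, t) tuples."""
    vocab = {}
    for s, r, o, t in quads:
        vocab.setdefault((s, r), set()).add(o)
    return vocab

def apply_history_penalty(scores, queries, vocab, penalty=-100.0):
    """Push down the scores of candidate objects that never occurred with
    the queried (subject, relation) pair, so repeating historical facts
    are ranked higher. `scores` has shape (batch, num_entities)."""
    mask = torch.zeros_like(scores)
    for i, (s, r) in enumerate(queries):
        seen = vocab.get((s, r), set())
        for o in range(scores.size(1)):
            if o not in seen:
                mask[i, o] = penalty
    return scores + mask

# Toy usage: objects 2 and 3 are historical for (s=0, r=1); all others
# are penalized and therefore ranked lower.
vocab = build_history_vocab([(0, 1, 2, 0), (0, 1, 3, 5)])
scores = torch.randn(1, 5)
adjusted = apply_history_penalty(scores, [(0, 1)], vocab)
```

In practice the vocabulary would be rebuilt, or incrementally updated, as new snapshots arrive, so that all facts with timestamps earlier than the query time count as history.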
Fig.2 Partial static knowledge graph of the ICEWS18 dataset
Fig.3 Historical pattern flow chart of Sta-HisNet
| Dataset | Entities | Relations | Training facts | Validation facts | Test facts |
| --- | --- | --- | --- | --- | --- |
| ICEWS18 | 23 033 | 256 | 373 018 | 45 995 | 69 514 |
| GDELT | 7 691 | 240 | 1 734 399 | 238 765 | 305 241 |

Tab.1 Statistics of the two public knowledge graph datasets
| Model | MRR | Hits@1 | Hits@3 | Hits@10 |
| --- | --- | --- | --- | --- |
| ConvE | 0.366 7 | 0.285 1 | 0.398 0 | 0.506 9 |
| RE-NET | 0.429 3 | 0.361 9 | 0.454 7 | 0.558 0 |
| RE-GCN | 0.463 1 | 0.391 2 | 0.497 3 | 0.569 3 |
| CyGNET | 0.466 9 | 0.405 8 | 0.498 2 | 0.571 4 |
| CEN | 0.472 6 | 0.418 6 | 0.506 1 | 0.579 1 |
| Sta-HisNet | 0.489 1 | 0.429 4 | 0.515 3 | 0.588 7 |

Tab.2 Entity prediction results of different models on the ICEWS18 dataset
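For reference, the metrics reported in Tab.2 through Tab.5 are the standard ranking measures for knowledge graph completion. A minimal statement of their definitions, with notation introduced here (the paper does not fix one): let $Q$ be the set of test queries and $\mathrm{rank}_q$ the rank of the ground-truth entity for query $q$; then

$$\mathrm{MRR}=\frac{1}{|Q|}\sum_{q\in Q}\frac{1}{\mathrm{rank}_q},\qquad \mathrm{Hits@}k=\frac{1}{|Q|}\sum_{q\in Q}\mathbb{1}\!\left[\mathrm{rank}_q\le k\right].$$

Note that $\mathrm{Hits@}1\le\mathrm{MRR}$ always holds, since $1/\mathrm{rank}_q\ge\mathbb{1}[\mathrm{rank}_q=1]$ for every query.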
| Model | MRR | Hits@1 | Hits@3 | Hits@10 |
| --- | --- | --- | --- | --- |
| ConvE | 0.359 9 | 0.270 5 | 0.393 2 | 0.494 4 |
| RE-NET | 0.401 2 | 0.324 3 | 0.434 0 | 0.538 0 |
| RE-GCN | 0.481 4 | 0.421 6 | 0.523 7 | 0.583 5 |
| CyGNET | 0.509 2 | 0.445 3 | 0.546 9 | 0.609 9 |
| CEN | 0.516 8 | 0.457 6 | 0.549 7 | 0.612 3 |
| Sta-HisNet | 0.530 5 | 0.475 5 | 0.560 1 | 0.616 5 |

Tab.3 Entity prediction results of different models on the GDELT dataset
| Model | MRR | Hits@1 | Hits@3 | Hits@10 |
| --- | --- | --- | --- | --- |
| Sta-HisNet-NON-EMB | 0.479 4 | 0.421 2 | 0.506 3 | 0.573 5 |
| Sta-HisNet-NON-STA | 0.482 2 | 0.494 5 | 0.509 8 | 0.578 1 |
| Sta-HisNet-NON-CONV | 0.473 1 | 0.411 3 | 0.503 2 | 0.571 3 |
| Sta-HisNet-NON-LSTM | 0.463 9 | 0.402 1 | 0.498 6 | 0.574 2 |
| Sta-HisNet-NON-PUN | 0.458 9 | 0.396 5 | 0.483 8 | 0.559 2 |
| Sta-HisNet | 0.489 1 | 0.429 4 | 0.515 3 | 0.588 7 |

Tab.4 Ablation experiment results of different modules on the ICEWS18 dataset
| Model | MRR | Hits@1 | Hits@3 | Hits@10 |
| --- | --- | --- | --- | --- |
| Sta-HisNet-NON-EMB | 0.526 3 | 0.471 0 | 0.555 2 | 0.611 2 |
| Sta-HisNet-NON-STA | 0.518 6 | 0.463 6 | 0.546 9 | 0.603 9 |
| Sta-HisNet-NON-CONV | 0.523 2 | 0.469 1 | 0.556 6 | 0.610 5 |
| Sta-HisNet-NON-LSTM | 0.514 5 | 0.459 8 | 0.546 3 | 0.601 2 |
| Sta-HisNet-NON-PUN | 0.509 2 | 0.445 3 | 0.536 9 | 0.589 9 |
| Sta-HisNet | 0.530 5 | 0.475 5 | 0.560 1 | 0.616 5 |

Tab.5 Ablation experiment results of different modules on the GDELT dataset
Fig.4 Comparison of the optimal number of training epochs between two methods on two datasets
Fig.5 Time required for three methods to run one epoch on two datasets
[1]
LIU Z, XIONG C, SUN M, et al. Entity-duet neural ranking: Understanding the role of knowledge graph semantics in neural information retrieval [C]// Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics. Melbourne: ACL, 2018: 2395-2405.
[2]
JIANG T, LIU T, GE T, et al. Encoding temporal information for time-aware link prediction [C]// Proceedings of the Conference on Empirical Methods in Natural Language Processing. Austin: ACL, 2016: 2350-2354.
[3]
BORDES A, USUNIER N, GARCIADURAN A, et al. Translating embeddings for modeling multi-relational data [C]// Proceedings of the Neural Information Processing Systems. Lake Tahoe: NIPS, 2013: 2787-2795.
[4]
WANG Z, ZHANG J, FENG J, et al. Knowledge graph embedding by translating on hyperplanes [C]// Proceedings of the AAAI Conference on Artificial Intelligence. Quebec: AAAI, 2014, 28(1): 1112-1119.
[5]
DAI S, LIANG Y, LIU S, et al. Learning entity and relation embeddings with entity description for knowledge graph completion [C]// Proceedings of the 4th International Conference on Artificial Intelligence Technologies and Applications. Chengdu: JPCS, 2018: 202-205.
[6]
TROUILLON T, WELBL J, RIEDEL S, et al. Complex embeddings for simple link prediction [C]// Proceedings of the International Conference on Machine Learning. New York: PMLR, 2016: 2071-2080.
[7]
SOCHER R, CHEN D, MANNING C D, et al. Reasoning with neural tensor networks for knowledge base completion [C]// Proceedings of the Neural Information Processing Systems. Lake Tahoe: NIPS, 2013: 926-934.
[8]
SCHLICHTKRULL M, KIPF T N, BLOEM P, et al. Modeling relational data with graph convolutional networks [C]// Proceedings of the European Semantic Web Conference. Heraklion: ESWC, 2018: 593-607.
[9]
KIPF T N, WELLING M. Semi-supervised classification with graph convolutional networks [EB/OL]. [2022-09-01]. https://arxiv.org/abs/1609.02907.
[10]
LEBLAY J, CHEKOL M W. Deriving validity time in knowledge graph [C]// Proceedings of the 27th International Conference on World Wide Web. Lyon: ACM, 2018: 1771-1776.
[11]
DASGUPTA S S, RAY S N, TALUKDAR P. HyTE: Hyperplane-based temporally aware knowledge graph embedding [C]// Proceedings of the Conference on Empirical Methods in Natural Language Processing. Brussels: ACL, 2018: 2001-2011.
[12]
TRIVEDI R, DAI H, WANG Y, et al. Know-evolve: Deep temporal reasoning for dynamic knowledge graphs [C]// Proceedings of the 34th International Conference on Machine Learning - Volume 70. Sydney: PMLR, 2017: 3462-3471.
[13]
JIN W, ZHANG C, SZEKELY P, et al. Recurrent event network for reasoning over temporal knowledge graphs [C]// Proceedings of the Conference on Empirical Methods in Natural Language Processing. Hong Kong: ACL, 2019: 8352-8364.
[14]
LI Z, GUAN S, JIN X, et al. Complex evolutional pattern learning for temporal knowledge graph reasoning [C]// Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics. Dublin: ACL, 2022: 290-296.
[15]
LI Z X, JIN X L, LI W, et al. Temporal knowledge graph reasoning based on evolutional representation learning [C]// Proceedings of the 44th International ACM SIGIR Conference on Research and Development in Information Retrieval. Montréal: ACM, 2021: 408-417.
[16]
ZHU C, CHEN M, FAN C, et al. Learning from history: Modeling temporal knowledge graphs with sequential copy-generation networks [EB/OL]. [2022-09-01]. https://arxiv.org/abs/2012.08492.
[17]
WARD M D, BEGER A, CUTLER J, et al. Comparing GDELT and ICEWS event data [C]// Proceedings of the ISA Annual Convention. San Francisco: ISA, 2013: 1-49.
[18]
DETTMERS T, MINERVINI P, STENETORP P, et al. Convolutional 2D knowledge graph embeddings [C]// Proceedings of the 32nd AAAI Conference on Artificial Intelligence. New Orleans: AAAI, 2018: 1811-1818.
[19]
HOCHREITER S, SCHMIDHUBER J. Long short-term memory[J]. Neural Computation, 1997, 9(8): 1735-1780.