Journal of Zhejiang University (Engineering Science), 2025, Vol. 59, Issue 7: 1394-1402    DOI: 10.3785/j.issn.1008-973X.2025.07.007
Computer Technology and Control Engineering
Structure-aware model for few-shot knowledge completion
Rongtai YANG, Yubin SHAO*, Qingzhi DU
Faculty of Information Engineering and Automation, Kunming University of Science and Technology, Kunming 650500, China
Abstract:

A new few-shot knowledge completion model was proposed to address the problem that existing knowledge completion models cannot adequately perceive the neighborhood topological structure during knowledge representation. A structure-aware encoder based on an attention mechanism was employed to encode triples in the process of knowledge representation. The encoder partitioned the nodes in the neighborhood into clusters according to the topological structure and fused the structural information of the clusters to strengthen the model's structure-awareness. An adaptive prediction network was adopted to compute the scores of the triples to be predicted, so as to improve the stability of the model's predictions. Comparative experiments were carried out on the NELL-One and Wiki-One datasets. Results show that, compared with the baseline models, the proposed model improved the mean reciprocal rank (MRR) and the proportions of correct results ranked in the top 10, top 5 and top 1 (Hits@10, Hits@5 and Hits@1) by 0.018, 0.021, 0.024 and 0.016 on NELL-One, and by 0.019, 0.055, 0.039 and 0.038 on Wiki-One, respectively. The proposed model effectively exploits the topological information of the neighborhood and improves the accuracy of knowledge completion.

Key words: few-shot knowledge completion; knowledge representation; structure-aware; topological structure; attention mechanism
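
The abstract describes two components: a structure-aware encoder that partitions an entity's neighborhood into clusters by topology and fuses cluster-level information through attention, and an adaptive prediction network that scores candidate triples. The paper's exact formulation is not given on this page; the block below is only a minimal PyTorch-style sketch of the cluster-then-attend idea, and every name in it (StructureAwareEncoder, cluster_ids, and so on) is illustrative rather than taken from the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class StructureAwareEncoder(nn.Module):
    """Illustrative sketch only: summarize each topological cluster of neighbors,
    then fuse the cluster summaries into the center entity with attention."""

    def __init__(self, dim: int):
        super().__init__()
        self.query = nn.Linear(dim, dim)  # projects the center entity
        self.key = nn.Linear(dim, dim)    # projects each cluster summary
        self.value = nn.Linear(dim, dim)

    def forward(self, center: torch.Tensor, neighbors: torch.Tensor,
                cluster_ids: torch.Tensor) -> torch.Tensor:
        # center: (dim,); neighbors: (n, dim); cluster_ids: (n,) integer labels
        # produced by some topology-based partition of the neighborhood.
        summaries = [neighbors[cluster_ids == c].mean(dim=0)
                     for c in cluster_ids.unique()]
        clusters = torch.stack(summaries)                    # (k, dim)

        q = self.query(center)                               # (dim,)
        k = self.key(clusters)                               # (k, dim)
        v = self.value(clusters)                             # (k, dim)
        att = F.softmax(k @ q / q.shape[-1] ** 0.5, dim=0)   # attention over clusters
        return center + att @ v                              # structure-enhanced embedding


# Example: a center entity with six neighbors grouped into three clusters.
enc = StructureAwareEncoder(dim=64)
h = enc(torch.randn(64), torch.randn(6, 64), torch.tensor([0, 0, 1, 1, 2, 2]))
```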
Received: 2024-05-04    Published: 2025-07-25
CLC:  TP 391  
Fund program: Yunnan Provincial Key Laboratory of Media Convergence Project (220235205).
Corresponding author: Yubin SHAO    E-mail: rongtaiyangmse@163.com; shaoyubin999@qq.com
About the author: Rongtai YANG (born 1999), male, master's degree candidate, engaged in research on natural language and knowledge graphs. orcid.org/0009-0004-7321-8358. E-mail: rongtaiyangmse@163.com

Cite this article:

Rongtai YANG, Yubin SHAO, Qingzhi DU. Structure-aware model for few-shot knowledge completion [J]. Journal of Zhejiang University (Engineering Science), 2025, 59(7): 1394-1402.

Link to this article:

https://www.zjujournals.com/eng/CN/10.3785/j.issn.1008-973X.2025.07.007        https://www.zjujournals.com/eng/CN/Y2025/V59/I7/1394

Fig. 1  Semantic distribution and structural characteristics of neighborhood nodes of an entity
Fig. 2  Framework of the structure-aware few-shot knowledge completion model
Dataset      $N_{\mathrm{S}}$
             Entities      Relations   Triples
NELL-One     68 545        358         181 109
Wiki-One     4 838 244     822         5 859 240
Table 1  Statistics of samples in the experimental datasets
Model type              Model      NELL-One                            Wiki-One
                                   MRR    Hits@10  Hits@5  Hits@1      MRR    Hits@10  Hits@5  Hits@1
Metric learning-based   MFEN       0.226  0.401    0.287   0.179       0.268  0.372    0.322   0.201
                        TransAM    0.215  0.348    0.245   0.164       0.302  0.346    0.341   0.264
                        APINet     0.305  0.496    0.405   0.208       0.342  0.473    0.419   0.283
Meta learning-based     GANA       0.311  0.481    0.413   0.221       0.322  0.418    0.379   0.276
                        ADK-KG     0.302  0.379    0.301   0.224       0.265  0.346    0.283   0.231
                        DARL       0.213  0.374    0.326   0.138       0.345  0.446    0.400   0.290
Metric learning-based   SAM        0.329  0.517    0.437   0.240       0.364  0.528    0.458   0.328
Table 2  Few-shot link prediction results of different models on the two datasets
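
For reference, the gains quoted in the abstract can be reproduced from Table 2 by subtracting, for each metric, the best baseline score from the proposed model's (SAM) score; the short check below uses only the values in the table.

```python
# MRR, Hits@10, Hits@5, Hits@1 on (NELL-One, Wiki-One), copied from Table 2.
baselines = {
    "MFEN":    [(0.226, 0.401, 0.287, 0.179), (0.268, 0.372, 0.322, 0.201)],
    "TransAM": [(0.215, 0.348, 0.245, 0.164), (0.302, 0.346, 0.341, 0.264)],
    "APINet":  [(0.305, 0.496, 0.405, 0.208), (0.342, 0.473, 0.419, 0.283)],
    "GANA":    [(0.311, 0.481, 0.413, 0.221), (0.322, 0.418, 0.379, 0.276)],
    "ADK-KG":  [(0.302, 0.379, 0.301, 0.224), (0.265, 0.346, 0.283, 0.231)],
    "DARL":    [(0.213, 0.374, 0.326, 0.138), (0.345, 0.446, 0.400, 0.290)],
}
sam = [(0.329, 0.517, 0.437, 0.240), (0.364, 0.528, 0.458, 0.328)]

for d, name in enumerate(["NELL-One", "Wiki-One"]):
    # Best baseline per metric on this dataset, then SAM's margin over it.
    best = [max(scores[d][m] for scores in baselines.values()) for m in range(4)]
    print(name, [round(sam[d][m] - best[m], 3) for m in range(4)])
# NELL-One [0.018, 0.021, 0.024, 0.016]
# Wiki-One [0.019, 0.055, 0.039, 0.038]
```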
M1    M2    MRR     Hits@10   Hits@5   Hits@1
            0.364   0.528     0.458    0.328
            0.338   0.442     0.406    0.281
            0.354   0.489     0.443    0.317
Table 3  Influence of core modules on knowledge completion
Fig. 3  Density of the topological graph under different maximum numbers of sampled neighbors
Fig. 4  Knowledge completion performance under different maximum numbers of sampled neighbors
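
Figures 3 and 4 examine how the maximum number of sampled neighbors affects the density of the neighborhood graph and the completion results. The sampling procedure itself is not described on this page, so the following is only a hypothetical Python sketch of such a cap; the function and parameter names are illustrative.

```python
import random


def cap_neighbors(neighbors: list, max_neighbors: int, seed: int = 0) -> list:
    """Keep the whole neighborhood when it is small; otherwise sample
    max_neighbors of them without replacement (the quantity varied in Fig. 3/4)."""
    if len(neighbors) <= max_neighbors:
        return list(neighbors)
    return random.Random(seed).sample(neighbors, max_neighbors)


# Example: an entity with eight neighbors, capped at five.
print(cap_neighbors(list(range(8)), max_neighbors=5))
```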
1 YANG Donghua, HE Tao, WANG Hongzhi, et al. Survey on knowledge graph embedding learning [J]. Journal of Software, 2022, 33(9): 3370-3390.
2 ZHANG Tiancheng, TIAN Xue, SUN Xianghui, et al. Overview on knowledge graph embedding technology research [J]. Journal of Software, 2023, 34(1): 277-311.
3 ZHONG L, WU J, LI Q, et al. A comprehensive survey on automatic knowledge graph construction [J]. ACM Computing Surveys, 2024, 56(4): 1-62.
4 ZHANG N, DENG S, SUN Z, et al. Long-tail relation extraction via knowledge graph embeddings and graph convolution networks [C]// Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics. [S.l.]: ACL, 2019: 3016-3025.
5 WU T, MA H, WANG C, et al. Heterogeneous representation learning and matching for few-shot relation prediction [J]. Pattern Recognition, 2022, 131: 108830. doi: 10.1016/j.patcog.2022.108830
6 ZHAO Kailin, JIN Xiaolong, WANG Yuanzhuo. Survey on few-shot learning [J]. Journal of Software, 2021, 32(2): 349-369.
7 LU J, GONG P, YE J, et al. A survey on machine learning from few samples [J]. Pattern Recognition, 2023, 139: 109480. doi: 10.1016/j.patcog.2023.109480
8 HUANG Q, REN H, LESKOVEC J. Few-shot relational reasoning via connection subgraph pretraining [C]// Proceedings of the 36th International Conference on Neural Information Processing Systems. New York: Curran Associates, Inc., 2022: 6397-6409.
9 YUAN X, XU C, LI P, et al. Relational learning with hierarchical attention encoder and recoding validator for few-shot knowledge graph completion [C]// Proceedings of the 37th ACM/SIGAPP Symposium on Applied Computing. [S.l.]: ACM, 2022: 786-794.
10 LIANG Y, ZHAO S, CHENG B, et al. TransAM: transformer appending matcher for few-shot knowledge graph completion [J]. Neurocomputing, 2023, 537: 61-72. doi: 10.1016/j.neucom.2023.03.049
11 LI Y, YU K, ZHANG Y, et al. Adaptive prototype interaction network for few-shot knowledge graph completion [J]. IEEE Transactions on Neural Networks and Learning Systems, 2024, 35(11): 15237-15250. doi: 10.1109/TNNLS.2023.3283545
12 NIU G, LI Y, TANG C, et al. Relational learning with gated and attentive neighbor aggregator for few-shot knowledge graph completion [C]// Proceedings of the 44th International ACM SIGIR Conference on Research and Development in Information Retrieval. [S.l.]: ACM, 2021: 213-222.
13 ZHANG Y, QIAN Y, YE Y, et al. Adapting distilled knowledge for few-shot relation reasoning over knowledge graphs [C]// Proceedings of the 2022 SIAM International Conference on Data Mining. Philadelphia: Society for Industrial and Applied Mathematics, 2022: 666-674.
14 CAI L, WANG L, YUAN R, et al. Meta-learning based dynamic adaptive relation learning for few-shot knowledge graph completion [J]. Big Data Research, 2023, 33: 100394. doi: 10.1016/j.bdr.2023.100394
15 DWIVEDI V P, BRESSON X. A generalization of transformer networks to graphs [EB/OL]. (2021-01-24) [2024-05-03]. https://arxiv.org/pdf/2012.09699.
16 YING C, CAI T, LUO S, et al. Do transformers really perform badly for graph representation [C]// 35th Conference on Neural Information Processing Systems. Sydney: [s.n.], 2021: 1-19.
17 CHEN D, O’BRAY L, BORGWARDT K. Structure-aware transformer for graph representation learning [C]// Proceedings of the 39th International Conference on Machine Learning. Baltimore: PMLR, 2022: 3469-3489.
18 LUO Y, THOST V, SHI L. Transformers over directed acyclic graphs [C]// 37th Conference on Neural Information Processing Systems. [S.l.]: Curran Associates, Inc., 2023: 47764-47782.
19 LI Y, YU K, HUANG X, et al. Learning inter-entity-interaction for few-shot knowledge graph completion [C]// Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing. [S.l.]: ACL, 2022: 7691-7700.
20 SHENG J, GUO S, CHEN Z, et al. Adaptive attentional network for few-shot knowledge graph completion [C]// Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing. [S.l.]: ACL, 2020: 1681-1691.
21 CHEN Cheng, ZHANG Hao, LI Yongqiang, et al. Knowledge graph link prediction based on relational generative graph attention network [J]. Journal of Zhejiang University (Engineering Science), 2022, 56(5): 1025-1034.
22 VELIČKOVIĆ P, CUCURULL G, CASANOVA A, et al. Graph attention networks [C]// International Conference on Learning Representations. Vancouver: [s.n.], 2018: 1-12.
23 WANG X, HE X, CAO Y, et al. KGAT: knowledge graph attention network for recommendation [C]// Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. [S.l.]: ACM, 2019: 950-958.
24 YE Y, JI S. Sparse graph attention networks [J]. IEEE Transactions on Knowledge and Data Engineering, 2023, 35(1): 905-916.
25 SU G, WANG H, ZHANG Y, et al. Simple and deep graph attention networks [J]. Knowledge-Based Systems, 2024, 293: 111649. doi: 10.1016/j.knosys.2024.111649
26 YUAN H, JI S. StructPool: structured graph pooling via conditional random fields [C]// International Conference on Learning Representations. Addis Ababa: [s.n.], 2020: 1-12.
27 BIANCHI F M, GRATTAROLA D, ALIPPI C. Spectral clustering with graph neural networks for graph pooling [C]// Proceedings of the 37th International Conference on Machine Learning. Vienna: [s.n.], 2020: 1-13.
28 BAEK J, KANG M, HWANG S J. Accurate learning of graph representations with graph multiset pooling [EB/OL]. (2021-06-28) [2024-05-03]. https://arxiv.org/pdf/2102.11533.