Large model knowledge-guided composite multi-attention method for document-level relation extraction

Zhichao ZHU, Jianqiang LI, Hongzhi QI, Qing ZHAO, Qi GAO, Siying LI, Jiayi CAI, Jinyan SHEN
Tab. 1 Performance comparison results with advanced models
|
| Category | Model | P /% | R /% | F1 /% |
| --- | --- | --- | --- | --- |
| Sequence-based | RoBERTa-base | 79.92±0.72 | 78.60±1.02 | 79.25±0.75 |
| Sequence-based | SSAN | 80.61±1.22 | 81.06±0.81 | 80.83±0.99 |
| Graph-based | GCGCN | 80.98±0.46 | 81.54±0.77 | 81.26±0.51 |
| Graph-based | GLRE | 82.42±0.80 | 82.27±0.63 | 82.34±0.70 |
| Graph-based | GRACR | 83.01±1.34 | 83.13±0.61 | 83.57±0.68 |
| Knowledge-based | DISCO | 83.66±0.25 | 84.28±0.46 | 83.97±0.42 |
| Knowledge-based | K-BiOnt | 85.14±0.39 | 84.36±0.50 | 84.75±0.39 |
| Knowledge-based | KIRE | 84.90±0.44 | 85.23±0.38 | 85.06±0.41 |
| Knowledge-based | KRC | 85.06±0.22 | 86.00±0.08 | 85.53±0.09 |
| Knowledge-based | GECANet | 86.31±0.15 | 85.55±0.14 | 85.93±0.14 |
| LLM-based | ChatGLM2-6B | 37.42±4.32 | 41.21±3.48 | 39.22±3.90 |
| LLM-based | LLaMA3-8B | 40.56±2.79 | 43.75±3.50 | 42.09±2.83 |
| LLM-based | Qwen-32B | 45.76±3.68 | 48.20±4.61 | 46.95±4.46 |
| — | LKCM | 87.26±0.16 | 87.69±0.09 | 87.47±0.12 |
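For reference, the F1 values reported in Tab. 1 are the standard harmonic mean of precision (P) and recall (R). A minimal sketch (not part of the paper's code; the function name is our own) reproducing the LKCM row:

```python
def f1_score(p: float, r: float) -> float:
    """Harmonic mean of precision and recall, both given in %."""
    return 2 * p * r / (p + r)

# LKCM row from Tab. 1: P = 87.26, R = 87.69
print(round(f1_score(87.26, 87.69), 2))  # → 87.47
```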