Journal of ZheJiang University (Engineering Science)  2021, Vol. 55 Issue (12): 2352-2358    DOI: 10.3785/j.issn.1008-973X.2021.12.015
    
Text matching model based on dense connection network and multi-dimensional feature fusion
Yue-lin CHEN1, Wen-jing TIAN1, Xiao-dong CAI2,*, Shu-ting ZHENG2
1. School of Mechanical and Electrical Engineering, Guilin University of Electronic Technology, Guilin 541004, China
2. School of Information and Communication, Guilin University of Electronic Technology, Guilin 541004, China

Abstract  

A text matching method based on a dense connection network and multi-dimensional feature fusion was proposed to address the problems of semantic loss and insufficient interaction information between sentence pairs in text matching. At the encoding end of the model, a BiLSTM network was used to encode each sentence and obtain its contextual semantic features. The dense connection network connected the word embedding features at the bottom layer with the dense module features at the top layer, enriching the semantic features of the sentences. Based on word-level information interaction through the attention mechanism, the similarity features, difference features and key features of the sentence pairs were combined by multi-dimensional feature fusion, allowing the model to capture more of the semantic relationships between sentence pairs. The model was evaluated on four benchmark datasets. Compared with other strong baseline models, the text matching accuracy of the proposed model improved by 0.3%, 0.3%, 0.6% and 1.81%, respectively. Validation experiments on the Quora paraphrase identification dataset showed that the proposed method matched the semantic similarity of sentences accurately.
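To make the pipeline described above concrete, the following is a minimal PyTorch sketch of the same ideas: a BiLSTM encoder whose input word embeddings (bottom-level features) are densely connected to its contextual outputs (top-level features), word-level co-attention between the two sentences, and a multi-dimensional fusion of similarity, difference and key features before classification. All names, dimensions and fusion details (DenseBiLSTMEncoder, multi_dimensional_fusion, the 300/150 sizes, and the choice of element-wise product, absolute difference and max pooling for the three feature types) are illustrative assumptions, not the authors' exact DCN-MDFF implementation.

import torch
import torch.nn as nn
import torch.nn.functional as F


class DenseBiLSTMEncoder(nn.Module):
    """BiLSTM encoder with a dense (concatenation) connection to the word embeddings."""

    def __init__(self, vocab_size: int, embed_dim: int = 300, hidden_dim: int = 150):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.bilstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True, bidirectional=True)

    def forward(self, tokens: torch.Tensor) -> torch.Tensor:
        emb = self.embed(tokens)              # (B, L, E): bottom-level word features
        ctx, _ = self.bilstm(emb)             # (B, L, 2H): top-level contextual features
        return torch.cat([emb, ctx], dim=-1)  # dense connection: concatenate bottom and top


def word_level_attention(p: torch.Tensor, q: torch.Tensor):
    """Soft-align every word of p with q and vice versa (dot-product attention)."""
    scores = torch.bmm(p, q.transpose(1, 2))                            # (B, Lp, Lq)
    p_aligned = torch.bmm(F.softmax(scores, dim=2), q)                  # q summarized for each word of p
    q_aligned = torch.bmm(F.softmax(scores, dim=1).transpose(1, 2), p)  # p summarized for each word of q
    return p_aligned, q_aligned


def multi_dimensional_fusion(x: torch.Tensor, x_aligned: torch.Tensor) -> torch.Tensor:
    """Fuse similarity (element-wise product), difference (absolute difference)
    and key (max-pooled) features of a sentence and its aligned counterpart."""
    similarity = x * x_aligned
    difference = torch.abs(x - x_aligned)
    fused = torch.cat([x, x_aligned, similarity, difference], dim=-1)   # (B, L, 4D)
    key_features, _ = fused.max(dim=1)                                  # (B, 4D): keep most salient positions
    return key_features


class SentenceMatcher(nn.Module):
    """End-to-end sketch: encode both sentences, interact, fuse, classify."""

    def __init__(self, vocab_size: int, num_classes: int = 2):
        super().__init__()
        self.encoder = DenseBiLSTMEncoder(vocab_size)
        feat_dim = 4 * (300 + 2 * 150)  # four fused blocks of encoder output size
        self.classifier = nn.Sequential(
            nn.Linear(2 * feat_dim, 300), nn.ReLU(), nn.Linear(300, num_classes))

    def forward(self, p_tokens: torch.Tensor, q_tokens: torch.Tensor) -> torch.Tensor:
        p, q = self.encoder(p_tokens), self.encoder(q_tokens)
        p_aligned, q_aligned = word_level_attention(p, q)
        p_vec = multi_dimensional_fusion(p, p_aligned)
        q_vec = multi_dimensional_fusion(q, q_aligned)
        return self.classifier(torch.cat([p_vec, q_vec], dim=-1))


# Usage: score a batch of padded token-id sequences, e.g. 3-way SNLI-style labels.
model = SentenceMatcher(vocab_size=30000, num_classes=3)
p_ids = torch.randint(0, 30000, (8, 20))
q_ids = torch.randint(0, 30000, (8, 18))
logits = model(p_ids, q_ids)  # (8, 3)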



Key words: semantic loss; information interaction; BiLSTM network; dense connection network; attention mechanism; multi-dimensional feature fusion
Received: 22 March 2021      Published: 31 December 2021
CLC:  TP 391.1  
Fund: Guangxi Science and Technology Major Project (AA20302001); Guilin Scientific Research and Technology Development Project (20190412)
Corresponding Authors: Xiao-dong CAI     E-mail: 370883566@qq.com;caixiaodong@guet.edu.cn
Cite this article:

Yue-lin CHEN, Wen-jing TIAN, Xiao-dong CAI, Shu-ting ZHENG. Text matching model based on dense connection network and multi-dimensional feature fusion. Journal of ZheJiang University (Engineering Science), 2021, 55(12): 2352-2358.

URL:

https://www.zjujournals.com/eng/10.3785/j.issn.1008-973X.2021.12.015     OR     https://www.zjujournals.com/eng/Y2021/V55/I12/2352


Fig.1 DCN-MDFF model frame structure diagram
Dataset        Split   Size     Example sentence pair                                                        Labels
SNLI           train   549367   p: a man playing an electric guitar on stage.                                entailment, neutral, contradiction
               dev     9842     q: a man playing guitar on stage.
               test    9824
SciTail        train   23596    p: He grabs at the wheel to turn the car.                                    entailment, neutral
               dev     1304     q: The turning driveshaft causes the wheels of the car to turn.
               test    2126
Quora          train   384348   p: What is the best way of living life?                                      paraphrase, non-paraphrase
               dev     10000    q: What is the best way to live a life?
               test    10000
Ant Financial  train   92500    p: 蚂蚁借呗多长时间可以审核通过? (How long does it take for an Ant Jiebei review to be approved?)    yes, no
               dev     4000     q: 借呗申请多久可以审核通过? (How long after applying for Jiebei is the review approved?)
               test    4000
Tab.1 Size and examples of different datasets
Fig.2 Comparison results of matching accuracy with different models on SNLI dataset
Fig.3 Comparison results of matching accuracy with different models on SciTail dataset
Fig.4 Comparison results of matching accuracy with different models on Quora dataset
Fig.5 Comparison results of matching accuracy with different models on Ant Financial dataset
Model    Acc/%    Model       Acc/%
KFF      89.6     SRC         89.3
DF       89.5     ARC         89.4
SimiF    89.2     DCN-MDFF    90.0
SF       89.2
Tab.2 Results of ablation experiments on Quora dataset
Fig.6 Robustness experimental performance comparison on each verification set
[1]   ZHANG Peng-fei, LI Guan-yu, JIA Cai-yan. Truncated Gaussian distance-based self-attention mechanism for natural language inference [J]. Computer Science, 2020, 47(4): 178-183. doi: 10.11896/jsjkx.190600149
[2]   BOWMAN S R, ANGELI G, POTTS C, et al. A large annotated corpus for learning natural language inference [C]// 2015 Conference on Empirical Methods in Natural Language Processing. Lisbon: EMNLP, 2015: 632-642.
[3]   KHOT T, SABHARWAL A, CLARK P. SCITAIL: a textual entailment dataset from science question answering [C]// The Thirty-Second AAAI Conference on Artificial Intelligence. New Orleans: AAAI, 2018: 5189-5197.
[4]   WANG S, JIANG J. A compare-aggregate model for matching text sequences [C]// 5th International Conference on Learning Representations. Toulon: ICLR, 2017: 1-11.
[5]   YANG Y, YIH W T, MEEK C. WikiQA: a challenge dataset for open-domain question answering [C]// 2015 Conference on Empirical Methods in Natural Language Processing. Lisbon: EMNLP, 2015: 2013-2018.
[6]   RAO J, YANG W, ZHANG Y, et al. Multi-perspective relevance matching with hierarchical ConvNets for social media search [C]// The 33rd AAAI Conference on Artificial Intelligence. Hawaii: AAAI, 2019: 232-240.
[7]   DUAN C Q, CUI L, CHEN X C, et al. Attention-fused deep matching network for natural language inference [C]// 2018 27th International Joint Conference on Artificial Intelligence. Stockholm: IJCAI, 2018: 4033–4040.
[8]   WANG Z G, HAMZA W, FLORIAN R. Bilateral multi-perspective matching for natural language sentences [C]// 2017 26th International Joint Conference on Artificial Intelligence. Melbourne: IJCAI, 2017: 4144-4150.
[9]   CONNEAU A, KIELA D, SCHWENK H, et al. Supervised learning of universal sentence representations from natural language inference data [C]// 2017 Conference on Empirical Methods in Natural Language Processing. Copenhagen: EMNLP, 2017: 670-680.
[10]   NIE Y, BANSAL M. Shortcut-stacked sentence encoders for multi-domain inference [C]// 2017 2nd Workshop on Evaluating Vector Space Representations for NLP. Copenhagen: EMNLP, 2017: 41-45.
[11]   SHEN T, ZHOU T, LONG G, et al. Reinforced self-attention network: a hybrid of hard and soft attention for sequence modeling [C]// 2018 27th International Joint Conference on Artificial Intelligence. Stockholm: IJCAI, 2018: 4345-4352.
[12]   WANG B, LIU K, ZHAO J. Inner attention based recurrent neural networks for answer selection [C]// 2016 54th Annual Meeting of the Association for Computational Linguistics. Berlin: ACL, 2016: 1288-1297.
[13]   TAY Y, LUU A, HUI S C. Hermitian co-attention networks for text matching in asymmetrical domains [C]// 2018 27th International Joint Conference on Artificial Intelligence. Stockholm: IJCAI, 2018: 4425–4431.
[14]   YANG R, ZHANG J, GAO X, et al. Simple and effective text matching with richer alignment features [C]// 2019 57th Conference of the Association for Computational Linguistics. Florence: ACL, 2019: 4699-4709.
[15]   HUANG G, LIU Z, MAATEN L V D, et al. Densely connected convolutional networks [C]// 2017 IEEE Conference on Computer Vision and Pattern Recognition. Honolulu: IEEE, 2017: 2261-2269.
[16]   GERS F A, SCHMIDHUBER E. LSTM recurrent networks learn simple context-free and context-sensitive languages [J]. IEEE Transactions on Neural Networks, 2001, 12(6): 1333-1340. doi: 10.1109/72.963769
[17]   PENNINGTON J, SOCHER R, MANNING C. GloVe: global vectors for word representation [C]// 2014 Conference on Empirical Methods in Natural Language Processing. Doha: EMNLP, 2014: 1532-1543.
[18]   COLLOBERT R, WESTON J, BOTTOU L, et al. Natural language processing (almost) from scratch [J]. Journal of Machine Learning Research, 2011, 12(1): 2493-2537.
[19]   PARIKH A P, TÄCKSTRÖM O, DAS D, et al. A decomposable attention model for natural language inference [C]// 2016 Conference on Empirical Methods in Natural Language Processing. Austin: EMNLP, 2016: 2249-2255.
[20]   SRIVASTAVA N, HINTON G, KRIZHEVSKY A, et al. Dropout: a simple way to prevent neural networks from overfitting [J]. Journal of Machine Learning Research, 2014, 15(1): 1929-1958.
[21]   GAO Y, CHANG H J, DEMIRIS Y. Iterative path optimisation for personalised dressing assistance using vision and force information [C]// 2016 IEEE/RSJ International Conference on Intelligent Robots and Systems. Daejeon: IEEE, 2016: 4398-4403.
[22]   LIU X, DUH K, GAO J. Stochastic answer networks for natural language inference [EB/OL]. [2021-03-13]. https://arxiv.org/abs/1804.07888.
[23]   PETERS M, NEUMANN M, IYYER M, et al. Deep contextualized word representations [C]// 2018 Conference of the North American Chapter of the Association for Computational Linguistics. [S.l.]: NAACL-HLT, 2018: 2227-2237.
[24]   TAY Y, LUU A T, HUI S C. Compare, compress and propagate: enhancing neural architectures with alignment factorization for natural language inference [C]// 2018 Conference on Empirical Methods in Natural Language Processing. Brussels: EMNLP, 2018: 1565-1575.
[25]   LIU M, ZHANG Y, XU J, et al. Original semantics-oriented attention and deep fusion network for sentence matching [C]// 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing. Hong Kong: EMNLP-IJCNLP, 2019: 2652-2661.
[26]   TAY Y, TUAN L A, HUI S C. Co-stack residual affinity networks with multi-level attention refinement for matching text sequences [C]// 2018 Conference on Empirical Methods in Natural Language Processing. Brussels: EMNLP, 2018: 4492-4502.