Text matching model based on dense connection network and multi-dimensional feature fusion
Yue-lin CHEN1, Wen-jing TIAN1, Xiao-dong CAI2,*, Shu-ting ZHENG2
1. School of Mechanical and Electrical Engineering, Guilin University of Electronic Technology, Guilin 541004, China
2. School of Information and Communication, Guilin University of Electronic Technology, Guilin 541004, China
Abstract A text matching method based on a dense connection network and multi-dimensional feature fusion was proposed to address the problems of semantic loss and insufficient information interaction between sentence pairs in text matching. At the encoding end of the model, a BiLSTM network was used to encode each sentence and obtain its contextual semantic features. The dense connection network connected the word-embedding features at the bottom with the dense-module features at the top, enriching the semantic features of the sentences. Based on word-level information interaction through the attention mechanism, the similarity features, difference features and key features of sentence pairs were fused, so that the model captured more of the semantic relationships between sentence pairs. The model was evaluated on four benchmark datasets. Compared with other strong baseline models, the text matching accuracy of the proposed model improved significantly, by 0.3%, 0.3%, 0.6% and 1.81%, respectively. A validation experiment on the Quora paraphrase identification dataset showed that the proposed method matched the semantic similarity of sentences accurately.
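The fusion step described in the abstract (similarity, difference and key features computed from word-level attention alignments) can be illustrated with a minimal PyTorch sketch. This is an illustrative reconstruction in the style of ESIM-like matchers, not the authors' released code: the module name, layer widths, and the operator chosen for each view (element-wise product for similarity, subtraction for difference, plain concatenation for the key view) are all assumptions.

```python
import torch
import torch.nn as nn

class FusionSketch(nn.Module):
    """Hedged sketch of multi-dimensional feature fusion: combines
    similarity, difference and (assumed) key views of a sentence
    representation `a` and its attention-aligned counterpart `a_align`.
    Layer sizes and view operators are assumptions, not the paper's
    published configuration."""

    def __init__(self, hidden: int):
        super().__init__()
        # One projection per feature view, as in ESIM-style matchers.
        self.sim = nn.Linear(2 * hidden, hidden)   # similarity: element-wise product
        self.diff = nn.Linear(2 * hidden, hidden)  # difference: element-wise subtraction
        self.key = nn.Linear(2 * hidden, hidden)   # key view: raw concatenation (assumption)
        self.out = nn.Linear(3 * hidden, hidden)   # merge the three views

    def forward(self, a: torch.Tensor, a_align: torch.Tensor) -> torch.Tensor:
        # a, a_align: (batch, seq_len, hidden); a_align is produced by
        # word-level attention over the other sentence in the pair.
        sim = self.sim(torch.cat([a, a * a_align], dim=-1))
        diff = self.diff(torch.cat([a, a - a_align], dim=-1))
        key = self.key(torch.cat([a, a_align], dim=-1))
        return torch.relu(self.out(torch.cat([sim, diff, key], dim=-1)))
```

Concatenating complementary views before a shared projection is the standard design this sketch assumes; the paper's actual fusion may differ in gating and pooling details.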
Received: 22 March 2021
Published: 31 December 2021
Fund: Guangxi Science and Technology Major Project (AA20302001); Guilin Scientific Research and Technology Development Program (20190412)
Corresponding Author:
Xiao-dong CAI
E-mail: 370883566@qq.com; caixiaodong@guet.edu.cn
Keywords:
semantic loss,
information interaction,
BiLSTM network,
dense connection network,
attention mechanism,
multi-dimensional feature fusion