Computer Technology and Automation Technology

Target recognition based on gated feature fusion and center loss

Jian-wen MO, Jin LI, Xiao-dong CAI*, Jin-wei CHEN

School of Information and Communication, Guilin University of Electronic Technology, Guilin 541004, China
Cite this article:
Jian-wen MO, Jin LI, Xiao-dong CAI, Jin-wei CHEN. Target recognition based on gated feature fusion and center loss [J]. Journal of Zhejiang University (Engineering Science), 2023, 57(10): 2011-2017.

Link to this article:
https://www.zjujournals.com/eng/CN/10.3785/j.issn.1008-973X.2023.10.010
or
https://www.zjujournals.com/eng/CN/Y2023/V57/I10/2011
            
									            
									                
																																															
             
												
											    	