Journal of ZheJiang University (Engineering Science)  2026, Vol. 60 Issue (1): 71-80    DOI: 10.3785/j.issn.1008-973X.2026.01.007
    
Pose-guided dual-branch network for clothing-changing person re-identification
Siyao ZHOU, Nan XIA*, Jiahong JIANG
School of Information Science and Engineering, Dalian Polytechnic University, Dalian 116034, China

Abstract  

A pose-guided dual-branch clothing-changing person re-identification network (PGNet) was proposed to address the drop in recognition accuracy caused by complex environments and clothing variations in clothing-changing person re-identification tasks. The network adopted a dual-branch architecture built on appearance features and guided by pose features. A multi-level feature fusion module was designed to effectively remove the interference of clothing-related information, reduce its impact on model performance, and preserve deep representational features. An action-related adjacency matrix and a natural-topology adjacency matrix were constructed and combined into a dual adjacency matrix, which was fed into the graph convolutional network, and an adjacency matrix weighting mechanism was introduced to strengthen the model’s ability to capture pose features. A bilinear multi-feature pooling method was adopted to enhance the complementarity between the pose and appearance features, thereby improving recognition accuracy. Experimental results demonstrated that PGNet achieved mAP values of 60.5%, 84.7%, 15.7%, and 22.6% and Rank-1 accuracies of 63.7%, 93.3%, 59.5%, and 41.2% on the clothing-changing datasets PRCC, VC-Clothes, Celeb-reID, and Celeb-reID-light, respectively, outperforming comparative methods such as SirNet. The proposed method can effectively reduce the impact of clothing variations and significantly improve recognition accuracy.
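The pose branch described in the abstract can be sketched as a small PyTorch example. This is a minimal, hypothetical illustration of building a natural-topology adjacency matrix and an action-related one, combining them into a dual adjacency matrix, applying a learnable weighting, and running a single graph-convolution layer; the 17-joint COCO-style keypoint layout, both edge lists, the elementwise weighting used for the AMWM, and the layer sizes are assumptions made for this sketch, not the paper's released configuration.

```python
# Minimal PyTorch sketch (illustrative only): a pose branch that combines a
# natural-topology adjacency matrix with an action-related one, applies a
# learnable weighting, and runs one graph-convolution layer.
import torch
import torch.nn as nn

NUM_JOINTS = 17  # assumed COCO-style keypoint layout


def natural_topology_adjacency() -> torch.Tensor:
    """Adjacency built from the fixed skeleton (bone) connections."""
    bones = [(0, 1), (0, 2), (1, 3), (2, 4), (5, 6), (5, 7), (7, 9),
             (6, 8), (8, 10), (5, 11), (6, 12), (11, 12), (11, 13),
             (13, 15), (12, 14), (14, 16)]
    A = torch.eye(NUM_JOINTS)
    for i, j in bones:
        A[i, j] = A[j, i] = 1.0
    return A


def action_related_adjacency() -> torch.Tensor:
    """Adjacency linking joints that tend to move together during walking;
    this edge set is a placeholder, not the paper's action statistics."""
    action_pairs = [(9, 15), (10, 16), (9, 10), (15, 16), (0, 11), (0, 12)]
    A = torch.eye(NUM_JOINTS)
    for i, j in action_pairs:
        A[i, j] = A[j, i] = 1.0
    return A


class WeightedDualGCNLayer(nn.Module):
    """One graph-convolution layer over a weighted dual adjacency matrix."""

    def __init__(self, in_dim: int, out_dim: int):
        super().__init__()
        dual = (natural_topology_adjacency() + action_related_adjacency()).clamp(max=1.0)
        self.register_buffer("A_dual", dual)
        # The adjacency matrix weighting mechanism is modelled here simply as
        # an elementwise learnable mask over the dual adjacency matrix.
        self.edge_weight = nn.Parameter(torch.ones(NUM_JOINTS, NUM_JOINTS))
        self.fc = nn.Linear(in_dim, out_dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:  # x: (B, NUM_JOINTS, in_dim)
        A = self.A_dual * self.edge_weight                    # weighted dual adjacency
        A = A / A.sum(dim=-1, keepdim=True).clamp(min=1e-6)   # row normalisation
        x = torch.einsum("ij,bjc->bic", A, x)                 # aggregate neighbour features
        return torch.relu(self.fc(x))


if __name__ == "__main__":
    joints = torch.randn(4, NUM_JOINTS, 64)   # a batch of per-joint pose features
    layer = WeightedDualGCNLayer(64, 128)
    print(layer(joints).shape)                # torch.Size([4, 17, 128])
```

Bilinear multi-feature pooling between the appearance and pose branches can be illustrated in the same spirit; the feature dimensions, the linear projections, and the signed-square-root/L2 normalisation below follow common bilinear-pooling practice and are not taken from the paper.

```python
# Hypothetical sketch of bilinear fusion of an appearance vector and a pose
# vector; dimensions and normalisation are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class BilinearFusion(nn.Module):
    """Fuses an appearance vector and a pose vector via an outer product."""

    def __init__(self, app_dim: int = 2048, pose_dim: int = 128, proj_dim: int = 64):
        super().__init__()
        self.proj_app = nn.Linear(app_dim, proj_dim)
        self.proj_pose = nn.Linear(pose_dim, proj_dim)

    def forward(self, f_app: torch.Tensor, f_pose: torch.Tensor) -> torch.Tensor:
        a = self.proj_app(f_app)                  # (B, proj_dim)
        p = self.proj_pose(f_pose)                # (B, proj_dim)
        # Outer product captures pairwise interactions between the two branches.
        b = torch.einsum("bi,bj->bij", a, p).flatten(1)       # (B, proj_dim**2)
        # Signed square root and L2 normalisation, a common post-processing
        # step for bilinear pooling that stabilises the feature scale.
        b = torch.sign(b) * torch.sqrt(b.abs() + 1e-12)
        return F.normalize(b, dim=1)


fusion = BilinearFusion()
fused = fusion(torch.randn(4, 2048), torch.randn(4, 128))
print(fused.shape)    # torch.Size([4, 4096])
```

In a complete pipeline, the joint features produced by a layer such as WeightedDualGCNLayer would typically be pooled over joints to obtain the pose vector before this fusion step; the two sketches are kept independent here for clarity.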



Key words: clothing-changing person re-identification; pose guidance; feature fusion; graph convolutional network; attention mechanism
Received: 24 January 2025      Published: 15 December 2025
CLC:  TP 183  
Fund: Industry-University Cooperative Education Program of the Ministry of Education of China (220603231024713).
Corresponding Authors: Nan XIA     E-mail: 220520854000543@xy.dlpu.edu.cn;xianan@dlpu.edu.cn
Cite this article:

Siyao ZHOU, Nan XIA, Jiahong JIANG. Pose-guided dual-branch network for clothing-changing person re-identification. Journal of ZheJiang University (Engineering Science), 2026, 60(1): 71-80.

URL:

https://www.zjujournals.com/eng/10.3785/j.issn.1008-973X.2026.01.007     OR     https://www.zjujournals.com/eng/Y2026/V60/I1/71


Fig.1 Overall structure diagram of pose-guided dual-branch network (PGNet) for clothing-changing person re-identification
Fig.2 Schematic diagram of adjacency matrix construction
Fig.3 Structure diagram of adjacency matrix weighting module
Fig.4 Structure diagram of aggregated feature fusion module
Method | PRCC standard (mAP/%, Rank-1/%) | PRCC clothes-changing (mAP/%, Rank-1/%) | VC-Clothes standard (mAP/%, Rank-1/%) | VC-Clothes clothes-changing (mAP/%, Rank-1/%)
PCB[2] | 97.0, 99.8 | 38.7, 41.8 | 74.6, 87.4 | 62.2, 62.0
AGW[5] | 89.0, 97.8 | 37.1, 39.7 | 89.7, 91.1 | 82.1, 92.0
CAL[12] | 99.2, 100.0 | 55.8, 55.2 | 95.3, 95.1 | 87.2, 92.9
TransReID[6] | 97.0, 98.2 | 45.0, 42.9 | 93.8, 92.4 | 81.0, 90.4
CRE+BSGA[18] | 97.3, 99.6 | 58.7, 61.8 | 88.2, 94.4 | 84.3, 84.5
SCNet[15] | 97.8, 100.0 | 59.9, 61.3 | 89.6, 94.9 | 84.4, 90.1
IMS-GEP[1] | 99.8, 99.7 | 65.8, 57.3 | 94.9, 94.7 | 81.7, 81.8
CDM+GCA[7] | 94.3, 99.3 | 61.3, 64.8 | 92.8, 93.1 | 82.7, 83.7
IRM[4] | — | 52.3, 54.2 | — | 80.1, 90.1
PGAL[14] | — | 58.7, 59.5 | — | —
PGNet | 99.2, 99.8 | 60.5, 63.7 | 91.3, 95.4 | 84.7, 93.3
Tab.1 Performance comparison of different methods on PRCC and VC-Clothes datasets
Method | Celeb-reID (mAP/%, Rank-1/%) | Celeb-reID-light (mAP/%, Rank-1/%)
PCB[2] | 8.7, 45.1 | 12.7, 23.9
AGW[5] | 11.2, 47.1 | 13.8, 22.0
TransReID[6] | 9.3, 45.7 | 12.9, 21.2
RCSANet[8] | 11.9, 55.6 | 16.7, 29.5
CAL[12] | 13.7, 59.2 | 18.5, 33.6
ACID[9] | 11.4, 52.5 | 15.8, 27.9
MBUNet[19] | 12.8, 55.5 | 21.5, 35.5
SirNet[11] | 14.2, 56.0 | 20.0, 36.0
PGAL[14] | 15.3, 60.9 | 23.3, 40.4
PGNet | 15.7, 59.5 | 22.6, 41.2
Tab.2 Performance comparison of different methods on Celeb-reID and Celeb-reID-light datasets
Method | Rank-1/% | Rank-5/% | Rank-10/%
AGW[5] | 39.7 | 45.8 | 48.5
TransReID[6] | 42.9 | 47.7 | 50.4
SCNet[15] | 61.3 | 68.1 | 70.2
IRM[4] | 54.2 | 60.5 | 64.2
PGNet | 63.7 | 70.8 | 73.7
Tab.3 Comparison of Rank-n metrics of different methods on PRCC dataset
Fig.5 Visualization of testing results of baseline model, IRM and PGNet on PRCC dataset
AFFM | Matrix A_m + AMWM | BMFP | PRCC (mAP/%, Rank-1/%) | VC-Clothes (mAP/%, Rank-1/%)
 | | | 58.3, 60.9 | 83.2, 92.6
 | | | 56.8, 58.6 | 82.4, 93.0
 | | | 59.7, 61.2 | 83.8, 93.3
 | | | 60.5, 63.7 | 84.7, 93.3
Tab.4 Ablation experimental results of PGNet modules
Method | PRCC (mAP/%, Rank-1/%) | VC-Clothes (mAP/%, Rank-1/%)
Matrix A_b | 57.6, 59.2 | 82.9, 93.0
Matrix A_m | 56.8, 58.4 | 82.6, 92.8
Matrix A_m + AMWM | 60.5, 63.7 | 84.7, 93.3
Tab.5 Ablation study results of dual adjacency matrix weighting module and two adjacency matrices
[1] ZHAO Z, LIU B, LU Y, et al. Joint identity-aware mixstyle and graph-enhanced prototype for clothes-changing person re-identification [J]. IEEE Transactions on Multimedia, 2023, 26: 3457-3468.
[2] SUN Y, ZHENG L, YANG Y, et al. Beyond part models: person retrieval with refined part pooling (and a strong convolutional baseline) [C]// Proceedings of the European Conference on Computer Vision. Munich: Springer, 2018: 501-518.
[3] YAN Yuming, HE Jianfeng, LI Shuzhao, et al. Identity recognition under real scenes [J]. Journal of Zhejiang University: Engineering Science, 2021, 55(11): 2022-2032. (in Chinese)
[4] HE W, DENG Y, TANG S, et al. Instruct-ReID: a multi-purpose person re-identification task with instructions [C]// Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. Seattle: IEEE, 2024: 17521-17531.
[5] YE M, SHEN J, LIN G, et al. Deep learning for person re-identification: a survey and outlook [J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2022, 44(6): 2872-2893. doi: 10.1109/TPAMI.2021.3054775
[6] HE S, LUO H, WANG P, et al. TransReID: Transformer-based object re-identification [C]// Proceedings of the IEEE/CVF International Conference on Computer Vision. Montreal: IEEE, 2021: 14993-15002.
[7] DING Y, MAO R, ZHU H, et al. Discriminative pedestrian features and gated channel attention for clothes-changing person re-identification [C]// Proceedings of the IEEE International Conference on Multimedia and Expo. Niagara Falls: IEEE, 2024: 1-6.
[8] HUANG Y, WU Q, XU J, et al. Clothing status awareness for long-term person re-identification [C]// Proceedings of the IEEE/CVF International Conference on Computer Vision. Montreal: IEEE, 2021: 11875-11884.
[9] YANG Z, ZHONG X, ZHONG Z, et al. Win-win by competition: auxiliary-free cloth-changing person re-identification [J]. IEEE Transactions on Image Processing, 2023, 32: 2985-2999. doi: 10.1109/TIP.2023.3277389
[10] WANG L, ZHANG Y, LU T, et al. Multi feature fusion attention learning for clothing-changing person re-identification [J]. IEICE Transactions on Fundamentals of Electronics, Communications and Computer Sciences, 2022, 105(8): 1170-1174.
[11] YANG S, KANG B, LEE Y. Sampling agnostic feature representation for long-term person re-identification [J]. IEEE Transactions on Image Processing, 2022, 31: 6412-6423. doi: 10.1109/TIP.2022.3207024
[12] GU X, CHANG H, MA B, et al. Clothes-changing person re-identification with RGB modality only [C]// Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. New Orleans: IEEE, 2022: 1060-1069.
[13] WANG Q, QIAN X, FU Y, et al. Co-attention aligned mutual cross-attention for cloth-changing person re-identification [C]// Proceedings of the Asian Conference on Computer Vision. Macau: Springer, 2022: 2270-2288.
[14] LIU X, LIU K, GUO J, et al. Pose-guided attention learning for cloth-changing person re-identification [J]. IEEE Transactions on Multimedia, 2023, 26(8): 5490-5498.
[15] GUO P, LIU H, WU J, et al. Semantic-aware consistency network for cloth-changing person re-identification [C]// Proceedings of the 31st ACM International Conference on Multimedia. Ottawa: ACM, 2023: 8730-8739.
[16] ZHU K, GUO H, LIU Z, et al. Identity-guided human semantic parsing for person re-identification [C]// Proceedings of the European Conference on Computer Vision. Glasgow: Springer, 2020: 346-363.
[17] LIU F, YE M, DU B. Dual level adaptive weighting for cloth-changing person re-identification [J]. IEEE Transactions on Image Processing, 2023, 32: 5075-5086. doi: 10.1109/TIP.2023.3310307
[18] MU J, LI Y, LI J, et al. Learning clothes-irrelevant cues for clothes-changing person re-identification [C]// Proceedings of the 33rd British Machine Vision Conference. London: BMVA, 2022: 337.
[19] ZHANG G, LIU J, CHEN Y, et al. Multi-biometric unified network for cloth-changing person re-identification [C]// Proceedings of the IEEE International Conference on Multimedia and Expo. Taipei: IEEE, 2022: 1-6.
[20] LI Y. Utilizing silhouette and head information for improved cloth-changing person re-identification [J]. Advances in Engineering Technology Research, 2024, 11(1): 614. doi: 10.56028/aetr.11.1.614.2024
[21] NGUYEN V D, KHALDI K, NGUYEN D, et al. Contrastive viewpoint-aware shape learning for long-term person re-identification [C]// Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision. Waikoloa: IEEE, 2024: 1030-1038.
[22] JIANG J, XIA N. A dual-channel network based on occlusion feature compensation for human pose estimation [J]. Image and Vision Computing, 2024, 151: 105290. doi: 10.1016/j.imavis.2024.105290
[23] JIANG J, XIA N, YU X. A feature matching and compensation method based on importance weighting for occluded human pose estimation [J]. Journal of King Saud University: Computer and Information Sciences, 2024, 36(5): 102061. doi: 10.1016/j.jksuci.2024.102061
[24] TU Z, ZHANG J, LI H, et al. Joint-bone fusion graph convolutional network for semi-supervised skeleton action recognition [J]. IEEE Transactions on Multimedia, 2022, 25: 1819-1831.
[25] SUN R, CHEN L, ZHANG L, et al. Robust visible-infrared person re-identification based on polymorphic mask and wavelet graph convolutional network [J]. IEEE Transactions on Information Forensics and Security, 2024, 19: 2800-2813. doi: 10.1109/TIFS.2024.3354377
[26] HUANG M, HOU C, YANG Q, et al. Reasoning and tuning: graph attention network for occluded person re-identification [J]. IEEE Transactions on Image Processing, 2023, 32: 1568-1582. doi: 10.1109/TIP.2023.3247159
[27] LIAN Y, HUANG W, LIU S, et al. Person re-identification using local relation-aware graph convolutional network [J]. Sensors, 2023, 23(19): 8138. doi: 10.3390/s23198138
[28] WANG J, SUN K, CHENG T, et al. Deep high-resolution representation learning for visual recognition [J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2021, 43(10): 3349-3364. doi: 10.1109/TPAMI.2020.2983686
[29] DENG J, DONG W, SOCHER R, et al. ImageNet: a large-scale hierarchical image database [C]// Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. Miami: IEEE, 2009: 248-255.
[30] LI P, XU Y, WEI Y, et al. Self-correction for human parsing [J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2022, 44(6): 3260-3271. doi: 10.1109/TPAMI.2020.3048039
[31] HOSSAIN S, UMER S, ROUT R K, et al. Fine-grained image analysis for facial expression recognition using deep convolutional neural networks with bilinear pooling [J]. Applied Soft Computing, 2023, 134: 109997. doi: 10.1016/j.asoc.2023.109997
[32] YANG Q, WU A, ZHENG W S. Person re-identification by contour sketch under moderate clothing change [J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2021, 43(6): 2029-2046. doi: 10.1109/TPAMI.2019.2960509
[33] SHU X, WANG X, ZANG X, et al. Large-scale spatio-temporal person re-identification: algorithms and benchmark [J]. IEEE Transactions on Circuits and Systems for Video Technology, 2022, 32(7): 4390-4403.