Journal of Zhejiang University (Science Edition), 2019, Vol. 46, Issue 4: 431-438    DOI: 10.3785/j.issn.1008-9497.2019.04.008
Mathematics and Computer Science
Cross-scenario clothing retrieval based on multi-level features
LI Zongmin, BIAN Lingyan, LIU Yujie
College of Computer and Communication Engineering, China University of Petroleum (East China), Qingdao 266580, Shandong Province, China
Abstract: To extract more expressive clothing features for cross-scenario clothing retrieval, a new similarity measure based on a high-level category constraint is proposed. First, the category information of the different scene domains is extracted through category-space learning. This category information is then used to constrain the traditional contrastive loss function in the scene-domain network, increasing the penalty on inter-class negative sample pairs and thereby alleviating over-fitting. Finally, the common category features are fused with the domain-specific features, and category prediction is used to assist retrieval. Analysis and experimental results show that the new algorithm outperforms state-of-the-art methods on cross-scenario clothing retrieval.
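The category-constrained contrastive loss sketched in the abstract can be illustrated as follows. This is a minimal sketch, not the authors' implementation: the function name, the margin value, and the `inter_class_weight` factor (the extra penalty on negative pairs from different categories) are assumptions for illustration.

```python
import numpy as np

def contrastive_loss(f1, f2, same_pair, same_category,
                     margin=1.0, inter_class_weight=2.0):
    """Contrastive loss with a category constraint (illustrative sketch).

    f1, f2        : feature vectors of an image pair
    same_pair     : True if the two images show the same clothing item
    same_category : True if the two images share a clothing category
    inter_class_weight (assumed): extra penalty on inter-class negative
        pairs, following the paper's idea of using category information
        to increase the penalty on negatives from different classes.
    """
    d = np.linalg.norm(f1 - f2)            # Euclidean distance in feature space
    if same_pair:                          # positive pair: pull features together
        return 0.5 * d ** 2
    hinge = max(margin - d, 0.0)           # negative pair: push apart up to margin
    w = 1.0 if same_category else inter_class_weight
    return 0.5 * w * hinge ** 2
```

Under this formulation a matching pair close in feature space incurs near-zero loss, while a negative pair inside the margin is penalized more heavily when the two images come from different categories than when they merely depict different items of the same category.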
Key words: similarity measure    cross-scenario clothing retrieval    multi-level feature extraction    content-based image retrieval
Received: 2018-12-25    Published: 2019-07-25
CLC:  TP301.6  
Funding: National Natural Science Foundation of China (61379106); Natural Science Foundation of Shandong Province (ZR2009GL014, ZR2013FM036, ZR2015FM011); Open Project of the State Key Laboratory of CAD&CG, Zhejiang University (A1315).
About the author: LI Zongmin (b. 1965), male, Ph.D., professor; research interests: computer graphics and image processing, pattern recognition. ORCID: http://orcid.org/0000-0003-4785-791X. E-mail: lizongmin@upc.edu.cn.
Cite this article:

LI Zongmin, BIAN Lingyan, LIU Yujie. Cross-scenario clothing retrieval based on multi-level features[J]. Journal of Zhejiang University (Science Edition), 2019, 46(4): 431-438.

Link to this article:

https://www.zjujournals.com/sci/CN/10.3785/j.issn.1008-9497.2019.04.008        https://www.zjujournals.com/sci/CN/Y2019/V46/I4/431
