Multi-source unsupervised domain adaptation method based on self-supervised tasks
Lan WU1, Han WANG1, Bin-quan LI1, Chong-yang LI1, Fan-shi KONG2
1. School of Electrical Engineering, Henan University of Technology, Zhengzhou 450001, China
2. Zhengzhou Railway Vocational and Technical College, Zhengzhou 450001, China
Abstract A multi-source unsupervised domain adaptation method based on self-supervised tasks was proposed to address the low classification accuracy caused by the difficulty of simultaneously aligning domain-invariant features under multi-source aggregation. The method introduced three self-supervised auxiliary tasks, namely rotation, horizontal flip and position prediction, and performed adaptive alignment optimization on unlabeled data through pseudo-labeling and the consistency of semantic information. A new optimization loss function was constructed to reduce the classification discrepancy among the common classes of multiple domains. To handle class imbalance, dynamic weight parameters were defined following the principle that classes with fewer samples receive larger weights, improving the classification performance of the model. The method was compared with existing mainstream methods on two benchmark datasets, Office-31 and Office-Caltech10. The experimental results show that the classification accuracy can be improved by up to 6.8% under both class-balanced and class-imbalanced settings.
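The abstract only names the ingredients of the method; the Python (PyTorch) sketch below illustrates one plausible reading of two of them: the rotation-prediction auxiliary task, and dynamic class weights built on the "few samples, large weights" principle. The inverse-frequency weighting, the aux_weight coefficient and all function names are assumptions made for illustration, not the paper's actual formulas.

```python
import torch
import torch.nn.functional as F


def dynamic_class_weights(class_counts, eps=1e-8):
    """Assign larger weights to classes with fewer samples.

    Hypothetical inverse-frequency weighting; the paper's exact
    definition of the dynamic weight parameters is not given in
    the abstract.
    """
    counts = torch.as_tensor(class_counts, dtype=torch.float32)
    inv = 1.0 / (counts + eps)               # fewer samples -> larger raw weight
    return inv * len(counts) / inv.sum()     # normalize so weights average to 1


def rotation_self_supervision(images):
    """Build the rotation-prediction auxiliary task.

    Each image (NCHW batch) is rotated by 0/90/180/270 degrees and the
    rotation index is used as a free self-supervised label.
    """
    rotated, labels = [], []
    for k in range(4):
        rotated.append(torch.rot90(images, k, dims=(2, 3)))
        labels.append(torch.full((images.size(0),), k, dtype=torch.long))
    return torch.cat(rotated), torch.cat(labels)


def total_loss(cls_logits, cls_labels, rot_logits, rot_labels,
               class_weights, aux_weight=0.1):
    """Class-weighted classification loss plus a self-supervised auxiliary loss."""
    cls_loss = F.cross_entropy(cls_logits, cls_labels, weight=class_weights)
    aux_loss = F.cross_entropy(rot_logits, rot_labels)
    return cls_loss + aux_weight * aux_loss
```

In this reading, the same pattern would extend to the horizontal-flip and position-prediction tasks by adding analogous auxiliary heads and loss terms.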
Received: 01 June 2021
Published: 24 April 2022
Fund: National Natural Science Foundation of China (61973103); Science Fund for Excellent Young Scholars of Henan Province; Zhengzhou Collaborative Innovation Special Project (21ZZXTCX01)
Keywords:
self-supervised task,
class imbalance,
semantic information,
weight,
domain adaptation