Automation technology, computer technology
Semi-supervised learning method based on distance metric loss framework
Ban-teng LIU1,2, Zan-ting YE2, Hai-long QIN3, Ke WANG1,4,*, Qi-hang ZHENG1, Zhang-quan WANG1,2
1. College of Information Science and Technology, Zhejiang Shuren University, Hangzhou 310015, China
2. College of Computer Science and Artificial Intelligence, Changzhou University, Changzhou 213164, China
3. Zhejiang Lvcheng Future Digital Intelligence Technology Limited Company, Hangzhou 311121, China
4. State Key Laboratory of Industrial Control Technology, Zhejiang University, Hangzhou 310027, China
Cite this article:
Ban-teng LIU, Zan-ting YE, Hai-long QIN, Ke WANG, Qi-hang ZHENG, Zhang-quan WANG. Semi-supervised learning method based on distance metric loss framework. Journal of Zhejiang University (Engineering Science), 2023, 57(4): 744-752.
Link to this article:
https://www.zjujournals.com/eng/CN/10.3785/j.issn.1008-973X.2023.04.012
or
https://www.zjujournals.com/eng/CN/Y2023/V57/I4/744
References:
[1] KORNBLITH S, SHLENS J, LE Q V. Do better ImageNet models transfer better? [C]// Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. Long Beach: IEEE, 2019: 2661-2671.
[2] YANG S, LUO P, LOY C C, et al. WIDER FACE: a face detection benchmark [C]// Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. Las Vegas: IEEE, 2016: 5525-5533.
[3] XU Jia-hui, WANG Jing-chang, CHEN Ling, et al. Surface water quality prediction model based on graph neural network [J]. Journal of Zhejiang University: Engineering Science, 2021, 55(4): 601-607.
[4] LEE D H. Pseudo-label: the simple and efficient semi-supervised learning method for deep neural networks [C]// ICML 2013 Workshop on Challenges in Representation Learning. Atlanta: PMLR, 2013: 896.
[5] TARVAINEN A, VALPOLA H. Mean teachers are better role models: weight-averaged consistency targets improve semi-supervised deep learning results [J]. Advances in Neural Information Processing Systems, 2017, 30: 1195-1204.
[6] XIE Q, DAI Z, HOVY E, et al. Unsupervised data augmentation for consistency training [J]. Advances in Neural Information Processing Systems, 2020, 33: 6256-6268.
[7] MIYATO T, MAEDA S, KOYAMA M, et al. Virtual adversarial training: a regularization method for supervised and semi-supervised learning [J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2018, 41(8): 1979-1993.
[8] WANG F, CHENG J, LIU W, et al. Additive margin softmax for face verification [J]. IEEE Signal Processing Letters, 2018, 25(7): 926-930. doi: 10.1109/LSP.2018.2822810
[9] LAINE S, AILA T. Temporal ensembling for semi-supervised learning [C]// International Conference on Learning Representations. Toulon: [s. n.], 2017: 1-13.
[10] SAJJADI M, JAVANMARDI M, TASDIZEN T. Regularization with stochastic transformations and perturbations for deep semi-supervised learning [J]. Advances in Neural Information Processing Systems, 2016, 29: 1163-1171.
[11] LIU W, WEN Y, YU Z, et al. Large-margin softmax loss for convolutional neural networks [C]// Proceedings of the 33rd International Conference on Machine Learning. New York: PMLR, 2016: 507-516.
[12] LI Y, GAO F, OU Z, et al. Angular softmax loss for end-to-end speaker verification [C]// 2018 11th International Symposium on Chinese Spoken Language Processing. Taipei: IEEE, 2018: 190-194.
[13] GRANDVALET Y, BENGIO Y. Semi-supervised learning by entropy minimization [J]. Advances in Neural Information Processing Systems, 2004, 17: 529-536.
[14] VERMA V, KAWAGUCHI K, LAMB A, et al. Interpolation consistency training for semi-supervised learning [J]. Neural Networks, 2022, 145: 90-106. doi: 10.1016/j.neunet.2021.10.008
[15] HENDRYCKS D, MU N, CUBUK E D, et al. AugMix: a simple method to improve robustness and uncertainty under data shift [C]// International Conference on Learning Representations. Addis Ababa: [s. n.], 2020: 1-15.
[16] KENDALL A, GAL Y, CIPOLLA R. Multi-task learning using uncertainty to weigh losses for scene geometry and semantics [C]// Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. Salt Lake City: IEEE, 2018: 7482-7491.
[17] AZIERE N, TODOROVIC S. Ensemble deep manifold similarity learning using hard proxies [C]// Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. Long Beach: IEEE, 2019: 7299-7307.
[18] QIAN Q, SHANG L, SUN B, et al. SoftTriple loss: deep metric learning without triplet sampling [C]// Proceedings of the IEEE/CVF International Conference on Computer Vision. Seoul: IEEE, 2019: 6450-6458.
[19] KIM S, KIM D, CHO M, et al. Proxy anchor loss for deep metric learning [C]// Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. Seattle: IEEE, 2020: 3238-3247.
[20] SUN Y, CHENG C, ZHANG Y, et al. Circle loss: a unified perspective of pair similarity optimization [C]// Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. Seattle: IEEE, 2020: 6398-6407.
[21] KERMANY D, ZHANG K, GOLDBAUM M. Labeled optical coherence tomography (OCT) and chest X-ray images for classification [J]. Mendeley Data, 2018, 2(2): 255-265.
[22] BERTHELOT D, CARLINI N, CUBUK E D, et al. ReMixMatch: semi-supervised learning with distribution matching and augmentation anchoring [C]// International Conference on Learning Representations. Addis Ababa: [s. n.], 2020: 1-13.
[23] BERTHELOT D, CARLINI N, GOODFELLOW I, et al. MixMatch: a holistic approach to semi-supervised learning [J]. Advances in Neural Information Processing Systems, 2019, 32: 155-166.
[24] SOHN K, BERTHELOT D, CARLINI N, et al. FixMatch: simplifying semi-supervised learning with consistency and confidence [J]. Advances in Neural Information Processing Systems, 2020, 33: 596-608.