Journal of Zhejiang University (Engineering Science)  2021, Vol. 55, Issue (11): 2033-2044    DOI: 10.3785/j.issn.1008-973X.2021.11.003
Biomedical Engineering
Semi-supervised learning segmentation method of liver CT images based on 3D scSE-UNet
Qing-qing LIU1,2(),Zhi-yong ZHOU2,Guo-hua FAN3,Xu-sheng QIAN1,2,Ji-su HU1,2,Guang-qiang CHEN3,Ya-kang DAI2,4,*()
1. School of Biomedical Engineering (Suzhou), Division of Life Sciences and Medicine, University of Science and Technology of China, Suzhou 215163, China
2. Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Science, Suzhou 215163, China
3. The Second Affiliated Hospital of Suzhou University, Suzhou 215000, China
4. Jinan Guoke Medical Engineering Technology Development Limited Company, Jinan 250000, China
Abstract:

A semi-supervised learning segmentation method based on 3D scSE-UNet was proposed to address the problem that segmentation networks require large numbers of high-quality labels, which are difficult to obtain. The method uses a self-training semi-supervised learning framework, with a 3D scSE-UNet containing an improved concurrent spatial and channel squeeze-and-excitation module (scSE-block+) as the segmentation network. The scSE-block+ automatically learns effective image features from two aspects, image space and feature channels, suppresses useless redundant features, and thus better preserves image edge information. During self-training, a dense conditional random field (CRF) refines the pseudo-labels generated by the segmentation network, improving their accuracy. The effectiveness of the proposed method was verified on the LiTS17 Challenge and SLIVER07 datasets. When labeled images accounted for 30% of the training set, the Dice score of the proposed method was 0.941. Results show that, using only a small amount of annotated data, the proposed semi-supervised method achieves segmentation results comparable to those of the fully supervised 3D UNet, effectively reducing the dependence of liver CT image segmentation on expert-annotated data.

Key words: semi-supervised learning    self-training    3D UNet    attention module    dense conditional random field
Received: 2020-12-11    Published: 2021-11-05
CLC:  R 318.14  
Fund: National Key Research and Development Program of China (2018YFA0703101); Youth Innovation Promotion Association of the Chinese Academy of Sciences (2021324); Science and Technology Program of Suzhou (SS201854); Key Research and Development Program of Lishui (2019ZDYF17); Quancheng 5150 Talent Program; Jinan Innovation Team Program (2018GXRC017); Jiangsu Provincial Medical Device Joint Fund (SYC2020002)
Corresponding author: Ya-kang DAI    E-mail: 17865198623@163.com; daiyk@sibet.ac.cn
About the author: LIU Qing-qing (1995—), female, master's student, engaged in medical image analysis research. orcid.org/0000-0001-7745-2003. E-mail: 17865198623@163.com

Cite this article:


Qing-qing LIU,Zhi-yong ZHOU,Guo-hua FAN,Xu-sheng QIAN,Ji-su HU,Guang-qiang CHEN,Ya-kang DAI. Semi-supervised learning segmentation method of liver CT images based on 3D scSE-UNet. Journal of ZheJiang University (Engineering Science), 2021, 55(11): 2033-2044.

Link to this article:

https://www.zjujournals.com/eng/CN/10.3785/j.issn.1008-973X.2021.11.003        https://www.zjujournals.com/eng/CN/Y2021/V55/I11/2033

Fig. 1  Schematic of the self-training semi-supervised method based on 3D scSE-UNet
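The self-training loop of Fig. 1 (train on labeled data, pseudo-label the unlabeled pool, refine, absorb, retrain) can be sketched with a toy one-dimensional classifier standing in for the 3D scSE-UNet. The threshold "model", the confidence measure, and the batch of 20 samples per round are all illustrative assumptions, and the dense-CRF refinement step is reduced to a comment:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in data: scalar "voxels" whose intensity separates the classes.
labeled_x = rng.normal([0.2] * 20 + [0.8] * 20, 0.05)
labeled_y = np.array([0] * 20 + [1] * 20)
unlabeled_x = rng.normal(rng.choice([0.2, 0.8], 60), 0.05)

def fit(x, y):
    # Stand-in for supervised training of the segmentation network:
    # a decision threshold halfway between the class means.
    return (x[y == 0].mean() + x[y == 1].mean()) / 2

def predict(th, x):
    return (x > th).astype(int)

# Self-training: repeatedly absorb the most confident pseudo-labels.
# In the paper, pseudo-labels would additionally be edge-refined by a
# dense CRF before being added to the labeled pool.
while unlabeled_x.size > 0:
    th = fit(labeled_x, labeled_y)
    conf = np.abs(unlabeled_x - th)      # stand-in for softmax confidence
    pick = conf.argsort()[-20:]          # 20 most confident samples
    labeled_x = np.concatenate([labeled_x, unlabeled_x[pick]])
    labeled_y = np.concatenate([labeled_y, predict(th, unlabeled_x[pick])])
    unlabeled_x = np.delete(unlabeled_x, pick)

print(labeled_x.size, unlabeled_x.size)  # 100 0
```

The loop terminates once the unlabeled pool is exhausted; in practice a maximum number of self-training rounds or a confidence floor would stop it earlier.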
Fig. 2  Network architecture of 3D scSE-UNet
Layer            Output feature map 1)              Convolution parameters
input            128×128×64×1                       —
Conv3D_1         128×128×64×8 → 128×128×64×16       [3×3×3, 8; 3×3×3, 16]
MaxPooling3D_1   64×64×32×16                        2×2×2
Conv3D_2         64×64×32×16 → 64×64×32×32          [3×3×3, 16; 3×3×3, 32]
MaxPooling3D_2   32×32×16×32                        2×2×2
Conv3D_3         32×32×16×32 → 32×32×16×64          [3×3×3, 32; 3×3×3, 64]
MaxPooling3D_3   16×16×8×64                         2×2×2
Conv3D_4         16×16×8×64 → 16×16×8×128           [3×3×3, 64; 3×3×3, 128]
Dropout_1        16×16×8×128                        0.5
MaxPooling3D_4   8×8×4×128                          2×2×2
Conv3D_5         8×8×4×128 → 8×8×4×256              [3×3×3, 128; 3×3×3, 256]
Dropout_2        8×8×4×256                          0.5
UpSampling3D_1   16×16×8×256                        2×2×2
Concatenate_1    16×16×8×384                        Dropout_1 + UpSampling3D_1
Conv3D_6         16×16×8×128 → 16×16×8×128          [3×3×3, 128; 3×3×3, 128]
scSE_block_1     16×16×8×128                        —
UpSampling3D_2   32×32×16×128                       2×2×2
Concatenate_2    32×32×16×192                       Conv3D_3 + UpSampling3D_2
Conv3D_7         32×32×16×64 → 32×32×16×64          [3×3×3, 64; 3×3×3, 64]
scSE_block_2     32×32×16×64                        —
UpSampling3D_3   64×64×32×64                        2×2×2
Concatenate_3    64×64×32×96                        Conv3D_2 + UpSampling3D_3
Conv3D_8         64×64×32×32                        [3×3×3, 32; 3×3×3, 32]
scSE_block_3     64×64×32×32                        —
UpSampling3D_4   128×128×64×32                      2×2×2
Concatenate_4    128×128×64×48                      Conv3D_1 + UpSampling3D_4
Conv3D_9         128×128×64×16                      [3×3×3, 16; 3×3×3, 16]
scSE_block_4     128×128×64×16                      —
Conv3D_10        128×128×64×1                       [1×1×1, 1]
1) Note: the second column gives the output feature-map size and channel count of each layer (two sizes joined by "→" are the outputs of the two stacked convolutions); in the third column, [ ] denotes convolution operations, e.g. "3×3×3, 8" means a convolutional layer with kernel size 3×3×3 and 8 channels; "Dropout_1 + UpSampling3D_1" means Concatenate_1 skip-connects Dropout_1 with UpSampling3D_1.
Table 1  Network architecture of 3D scSE-UNet
Fig. 3  Structural details of the additional scSE-block+ layer
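The concurrent spatial/channel squeeze-and-excitation computation that Fig. 3 builds on can be sketched in NumPy. The sketch follows the original scSE formulation of Roy et al. (channel SE via global pooling and two fully connected layers, spatial SE via a 1×1×1 convolution, fused by element-wise maximum), not the paper's scSE-block+ modifications; all weights are random stand-ins for learned parameters:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def scse_block(x, w_sq, w_ex, w_sp):
    """Concurrent spatial and channel squeeze-and-excitation on a
    (D, H, W, C) feature map; weights are stand-ins for learned layers."""
    # Channel SE: global average pool -> FC (C -> C/r) -> ReLU -> FC -> sigmoid.
    z = x.mean(axis=(0, 1, 2))                     # (C,) channel descriptor
    s = sigmoid(np.maximum(z @ w_sq, 0.0) @ w_ex)  # (C,) channel gates
    cse = x * s                                    # recalibrate channels
    # Spatial SE: 1x1x1 convolution across channels -> sigmoid.
    q = sigmoid(x @ w_sp)                          # (D, H, W) spatial gates
    sse = x * q[..., None]                         # recalibrate locations
    # Fuse the two recalibrations (element-wise maximum, as in scSE).
    return np.maximum(cse, sse)

rng = np.random.default_rng(0)
x = rng.normal(size=(4, 4, 2, 8))                  # toy feature map, C = 8
out = scse_block(x,
                 w_sq=rng.normal(size=(8, 4)),     # reduction ratio r = 2
                 w_ex=rng.normal(size=(4, 8)),
                 w_sp=rng.normal(size=(8,)))
print(out.shape)  # (4, 4, 2, 8)
```

The output shape equals the input shape, so the block can be dropped after any convolution stage of the 3D UNet, as in Table 1.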
Method  L  U  LP/%  DSC  SEN  PPV  VOE/%  RVD/%  ASD/mm  RMSD/mm  MSD/mm
Fully supervised 3D UNet  100  —  —  0.937±0.034  0.958±0.021  0.920±0.059  5.613±6.744  6.034±7.788  4.168±5.688  10.133±13.870  88.078±61.122
Fully supervised 3D scSE-UNet  100  —  —  0.950±0.032  0.964±0.047  0.938±0.041  4.532±6.572  4.689±6.443  1.974±2.032  4.257±4.660  38.767±22.557
Semi-supervised 3D scSE-UNet  50  50  50  0.948±0.023  0.960±0.024  0.937±0.041  4.162±4.696  4.262±5.078  2.037±1.123  4.227±2.571  37.608±16.748
Semi-supervised 3D scSE-UNet  40  60  40  0.946±0.025  0.959±0.026  0.934±0.048  5.079±5.358  4.970±6.004  2.143±1.213  4.295±2.801  36.659±16.682
Semi-supervised 3D scSE-UNet  30  70  30  0.941±0.032  0.959±0.047  0.925±0.045  5.130±6.832  5.176±6.853  2.431±1.702  5.014±3.798  40.554±19.559
Semi-supervised 3D scSE-UNet  20  80  20  0.930±0.040  0.945±0.048  0.919±0.060  6.326±8.065  6.598±8.684  2.972±2.176  5.758±4.541  43.552±21.964
Semi-supervised 3D scSE-UNet  10  90  10  0.902±0.049  0.943±0.052  0.874±0.078  11.337±10.201  12.419±12.079  4.403±2.736  8.000±4.958  49.131±21.866
Table 2  Comparison of segmentation performance under different proportions of labeled images
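The overlap metrics reported in Tables 2–6 have standard definitions; a minimal NumPy version (omitting the surface-distance metrics ASD/RMSD/MSD, which require a distance transform over boundary voxels) might look like:

```python
import numpy as np

def overlap_metrics(pred, gt):
    """DSC, SEN, PPV, VOE and RVD for binary segmentation volumes."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    tp = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    dsc = 2 * tp / (pred.sum() + gt.sum())          # Dice similarity coefficient
    sen = tp / gt.sum()                             # sensitivity (recall)
    ppv = tp / pred.sum()                           # positive predictive value
    voe = 100 * (1 - tp / union)                    # volumetric overlap error, %
    rvd = 100 * (pred.sum() - gt.sum()) / gt.sum()  # relative volume difference, %
    return dsc, sen, ppv, voe, rvd

# Example: prediction covers 3 of 4 ground-truth voxels plus 1 false positive.
gt = np.zeros((4, 4), bool); gt[0, :4] = True
pred = np.zeros((4, 4), bool); pred[0, :3] = True; pred[1, 0] = True
dsc, sen, ppv, voe, rvd = overlap_metrics(pred, gt)
print(round(dsc, 3))  # 0.75
```

With equal prediction and ground-truth volumes, RVD is 0 even though the overlap is imperfect, which is why the tables report RVD alongside DSC and VOE rather than alone.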
Fig. 4  Dice score curves of 3D scSE-UNet under different proportions of labeled images
Fig. 5  Comparison between segmentation results of baseline methods and of the proposed semi-supervised 3D scSE-UNet
Supervision  L  U  LP/%  DSC  SEN  PPV  VOE/%  RVD/%  ASD/mm  RMSD/mm  MSD/mm
Fully supervised  10  —  —  0.896±0.053  0.934±0.054  0.867±0.082  11.283±10.339  12.333±12.343  9.203±7.892  21.924±17.558  140.408±63.219
Semi-supervised  10  90  10  0.902±0.049  0.943±0.052  0.874±0.078  11.337±10.201  12.419±12.079  4.403±2.736  8.000±4.958  49.131±21.866
Fully supervised  20  —  —  0.914±0.065  0.941±0.060  0.893±0.086  7.860±10.219  8.721±12.834  5.882±8.069  13.767±15.446  115.778±69.480
Semi-supervised  20  80  20  0.930±0.040  0.945±0.048  0.919±0.060  6.326±8.065  6.598±8.684  2.972±2.176  5.758±4.541  43.552±21.964
Fully supervised  30  —  —  0.925±0.045  0.953±0.057  0.903±0.067  7.786±9.618  8.356±10.889  4.301±2.755  10.548±5.728  81.865±25.524
Semi-supervised  30  70  30  0.941±0.032  0.959±0.047  0.925±0.045  5.130±6.832  5.176±6.853  2.431±1.702  5.014±3.798  40.554±19.559
Fully supervised  40  —  —  0.937±0.039  0.950±0.036  0.928±0.064  5.549±7.023  5.847±7.830  3.821±3.949  6.258±10.666  59.050±67.107
Semi-supervised  40  60  40  0.946±0.025  0.959±0.026  0.934±0.048  5.079±5.358  4.970±6.004  2.143±1.213  4.295±2.801  36.659±16.682
Fully supervised  50  —  —  0.943±0.026  0.960±0.028  0.929±0.044  4.717±5.280  4.912±5.702  2.230±1.212  4.452±2.651  36.582±14.887
Semi-supervised  50  50  50  0.948±0.023  0.960±0.024  0.937±0.041  4.162±4.696  4.262±5.078  2.037±1.123  4.227±2.571  37.608±16.748
Table 3  Comparison of fully supervised and semi-supervised segmentation performance of 3D scSE-UNet under different proportions of labeled images
Method  DSC  SEN  PPV  VOE/%  RVD/%  ASD/mm  RMSD/mm  MSD/mm
Semi-supervised 3D UNet  0.935±0.032  0.955±0.045  0.921±0.049  6.458±7.041  6.740±7.681  3.493±1.596  8.476±3.617  66.090±19.594
Semi-supervised 3D scSE-UNet  0.941±0.032  0.959±0.047  0.925±0.045  5.130±6.832  5.176±6.853  2.431±1.702  5.014±3.798  40.554±19.559
Table 4  Performance comparison between 3D UNet and 3D scSE-UNet
Method  DSC  SEN  PPV  VOE/%  RVD/%  ASD/mm  RMSD/mm  MSD/mm
scSE-block 0.939±0.030 0.951±0.044 0.928±0.049 5.464±6.775 5.583±7.124 2.535±1.693 5.162±3.824 40.689±18.774
scSE-block+ 0.941±0.032 0.959±0.047 0.925±0.045 5.130±6.832 5.176±6.853 2.431±1.702 5.014±3.798 40.554±19.559
Table 5  Performance comparison before and after the scSE-block improvement
Dense CRF DSC SEN PPV VOE/% RVD/% ASD/mm RMSD/mm MSD/mm
Without  0.940±0.036  0.956±0.045  0.922±0.047  4.544±6.371  4.625±6.580  2.417±3.606  5.097±10.050  48.937±32.456
With  0.941±0.032  0.959±0.047  0.925±0.045  5.130±6.832  5.176±6.853  2.431±1.702  5.014±3.798  40.554±19.559
Table 6  Comparison of segmentation performance with and without dense CRF refinement
Fig. 6  Comparison of dense-CRF-refined results with ground-truth annotations and raw network segmentations
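For reference, the fully connected CRF with Gaussian edge potentials (Krähenbühl and Koltun) used for pseudo-label refinement minimizes the Gibbs energy below; the unary term comes from the network's softmax output, and the symbols follow that work rather than this paper (a standard formulation, not the authors' exact parameterization):

```latex
E(\mathbf{x}) = \sum_i \psi_u(x_i) + \sum_{i<j} \psi_p(x_i, x_j),
\qquad \psi_u(x_i) = -\log P(x_i),
```
```latex
\psi_p(x_i, x_j) = \mu(x_i, x_j)\left[
  w^{(1)} \exp\!\left(-\frac{\lVert p_i - p_j\rVert^2}{2\theta_\alpha^2}
                      -\frac{\lVert I_i - I_j\rVert^2}{2\theta_\beta^2}\right)
+ w^{(2)} \exp\!\left(-\frac{\lVert p_i - p_j\rVert^2}{2\theta_\gamma^2}\right)\right]
```

Here $p_i$ are voxel positions, $I_i$ voxel intensities, $\mu$ the Potts label-compatibility function; the appearance kernel sharpens edges at intensity boundaries while the smoothness kernel removes isolated false positives, which matches the qualitative effect seen in Fig. 6 and Table 6.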
Model  T/ms
Fully supervised 3D UNet  747
Semi-supervised 3D UNet  769
Fully supervised 3D scSE-UNet  895
Semi-supervised 3D scSE-UNet  893
Table 7  Average image processing time of different models
1 LEO J Computer-aided surgery meets predictive, preventive and personalized medicine[J]. EPMA Journal, 2017, 8 (1): 1- 4
doi: 10.1007/s13167-017-0084-8
2 ZHOU S, CHENG Y, TAMURA S Automated lung segmentation and smoothing techniques for inclusion of juxtapleural nodules and pulmonary vessels on chest CT images[J]. Biomedical Signal Processing and Control, 2014, 13: 62- 70
doi: 10.1016/j.bspc.2014.03.010
3 SHI C, CHENG Y, WANG J, et al Low-rank and sparse decomposition based shape model and probabilistic atlas for automatic pathological organ segmentation[J]. Medical Image Analysis, 2017, 38: 30- 49
doi: 10.1016/j.media.2017.02.008
4 SHI C, CHENG Y, LIU F, et al A hierarchical local region-based sparse shape composition for liver segmentation in CT scans[J]. Pattern Recognition, 2016, 50: 88- 106
doi: 10.1016/j.patcog.2015.09.001
5 LI G, CHEN X, SHI F, et al Automatic liver segmentation based on shape constraints and deformable graph cut in CT images[J]. IEEE Transactions on Image Processing, 2015, 24 (12): 5315
doi: 10.1109/TIP.2015.2481326
6 LIU Z, SONG Y, SHENG V, et al Liver CT sequence segmentation based with improved U-Net and graph cut[J]. Expert Systems with Applications, 2019, 126: 54- 63
doi: 10.1016/j.eswa.2019.01.055
7 LU X, WU J, REN X, et al The study and application of the improved region growing algorithm for liver segmentation[J]. Optik-International Journal for Light and Electron Optics, 2014, 125 (9): 2142- 2147
doi: 10.1016/j.ijleo.2013.10.049
8 RAFIEI S, KARIMI N, MIRMAHBOUB B, et al. Liver segmentation in abdominal CT images using probabilistic atlas and adaptive 3D region growing [C]// 2019 41st Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC). Berlin: IEEE, 2019.
9 ZHENG Zhou, ZHANG Xue-chang, ZHENG Si-ming, et al Liver segmentation in CT images based on region-growing and unified level set method[J]. Journal of Zhejiang University: Engineering Science, 2018, 52 (12): 145- 159
10 WANG J, CHENG Y, GUO C, et al Shape-intensity prior level set: combining probabilistic atlas and probability map constrains for automatic liver segmentation from abdominal CT images[J]. International Journal of Computer Assisted Radiology and Surgery, 2016, 11 (5): 817- 826
doi: 10.1007/s11548-015-1332-9
11 LI X, CHEN H, QI X, et al H-DenseUNet: hybrid densely connected UNet for liver and tumor segmentation from CT volumes[J]. IEEE Transactions on Medical Imaging, 2018, 37 (12): 2663- 2674
doi: 10.1109/TMI.2018.2845918
12 LONG J, SHELHAMER E, DARRELL T Fully convolutional networks for semantic segmentation[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2015, 39 (4): 640- 651
13 SUN Ming-jian, XU Jun, MA Wei, et al A new fully convolutional network for 3D liver region segmentation on CT images[J]. Chinese Journal of Biomedical Engineering, 2018, 37 (4): 385- 393
doi: 10.3969/j.issn.0258-8021.2018.04.001
14 RONNEBERGER O, FISCHER P, BROX T. U-Net: convolutional networks for biomedical image segmentation [C]// International Conference on Medical Image Computing and Computer Assisted Intervention. Munich: Springer, 2015: 234-241.
15 ÇIÇEK Ö, ABDULKADIR A, LIENKAMP S, et al. 3D U-Net: learning dense volumetric segmentation from sparse annotation [C]// International Conference on Medical Image Computing and Computer-Assisted Intervention. Athens: Springer, 2016: 424-432.
16 CHRIST P, ELSHAER M, ETTLINGER F, et al. Automatic liver and lesion segmentation in CT using cascaded fully convolutional neural networks and 3D conditional random fields [C]// International Conference on Medical Image Computing and Computer-Assisted Intervention. Athens: Springer, 2016: 415-423.
17 LIU Zhe, ZHANG Xiao-lin, SONG Yu-qing, et al Liver segmentation with improved U-Net and Morphsnakes algorithm[J]. Journal of Image and Graphics, 2018, 23 (8): 1254- 1262
18 HE F, ZHANG G, YANG H, et al Multi-scale attention module U-Net liver tumour segmentation method[J]. Journal of Physics: Conference Series, 2020, 1678: 012107
doi: 10.1088/1742-6596/1678/1/012107
19 OKTAY O, SCHLEMPER J, FOLGOC L, et al. Attention U-Net: learning where to look for the pancreas [EB/OL]. [2020-11-20]. https://arxiv.org/pdf/1804.03999.
20 HU J, SHEN L, SUN G, et al. Squeeze-and-excitation networks [C]// Proceedings of the IEEE conference on Computer Vision and Pattern Recognition. Salt Lake City: IEEE, 2018: 7132-7141.
21 ROY A, NAVAB N, WACHINGER C. Concurrent spatial and channel ‘squeeze & excitation’ in fully convolutional networks [C]// International Conference on Medical Image Computing and Computer Assisted Intervention. Granada: Springer, 2018, 421-429.
22 PAN Chong-yu, HUANG Jian, HAO Jian-guo, et al Survey of weakly supervised learning integrating zero-shot and few-shot learning[J]. Systems Engineering and Electronics, 2020, 42 (10): 2246- 2256
doi: 10.3969/j.issn.1001-506X.2020.10.13
23 NAKAYAMA Y, LI Q, KATSURAGAWA S, et al Automated hepatic volumetry for living related liver transplantation at multisection CT[J]. Radiology, 2006, 240 (3): 743- 748
doi: 10.1148/radiol.2403050850
24 CHEPLYGINA V, DE BRUIJNE M, PLUIM J Not-so- supervised: a survey of semi-supervised, multi-instance, and transfer learning in medical image analysis[J]. Medical Image Analysis, 2019, 54: 280- 296
doi: 10.1016/j.media.2019.03.009
25 XIE Q, LUONG M, HOVY E, et al. Self-training with noisy student improves ImageNet classification [C]// Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. [s.l.]: IEEE, 2020: 10687-10698.
26 ZHAO F, CHEN Y, CHEN F, et al Semi-supervised cerebrovascular segmentation by hierarchical convolutional neural network[J]. IEEE Access, 2018, 6: 67841- 67852
doi: 10.1109/ACCESS.2018.2879521
27 XIA Y, LIU F, YANG D, et al. 3D semi-supervised learning with uncertainty-aware multi-view co-training [C]// 2020 IEEE Winter Conference on Applications of Computer Vision (WACV). Snowmass Village: IEEE, 2020: 3646-3655.
28 YANG Z, COHEN W, SALAKHUTDINOV R. Revisiting semi-supervised learning with graph embeddings [C]// International Conference on Machine Learning. New York City: JMLR, 2016: 40-48.
29 ZHOU Y, WANG Y, TANG P, et al. Semi-supervised 3D abdominal multi-organ segmentation via deep multi-planar co-training [C]// 2019 IEEE Winter Conference on Applications of Computer Vision (WACV). Hawaii: IEEE, 2019: 121-140.
30 JIANG B, ZHANG Z, LIN D, et al. Semi-supervised learning with graph learning-convolutional networks [C]// Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. Long Beach: IEEE, 2019: 11313-11320.
31 JIANG Wei-wei, LIU Xiang-qiang, HAN Jin-cang Automatic liver segmentation from CT images based on deep co-training[J]. Electronic Design Engineering, 2020, 28 (14): 175- 179
32 LEE D. Pseudo-label: the simple and efficient semi-supervised learning method for deep neural networks [C]// Workshop on Challenges in Representation Learning, ICML. Atlanta: JMLR, 2013, 3: 896.
33 BAI W, OKTAY O, SINCLAIR M, et al. Semi-supervised learning for network-based cardiac MR image segmentation [C]// International Conference on Medical Image Computing and Computer Assisted Intervention. Quebec: Springer, 2017: 253-260.
34 KRAHENBUHL P, KOLTUN V Efficient inference in fully connected CRFs with Gaussian edge potentials[J]. Advances in Neural Information Processing Systems, 2011, 24: 109- 117