Journal of ZheJiang University (Engineering Science)  2020, Vol. 54 Issue (3): 566-573    DOI: 10.3785/j.issn.1008-973X.2020.03.017
Computer Technology and Image Processing     
Deep segmentation method of tumor boundaries from MR images of patients with nasopharyngeal carcinoma using multi-modality and multi-dimension fusion
Yan-jia HONG1(),Tie-bao MENG2,Hao-jiang LI2,Li-zhi LIU2,Li LI2,Shuo-yu XU2,Sheng-wen GUO1,*()
1. Department of Biomedical Engineering, South China University of Technology, Guangzhou 510006, China
2. Medical Image Center, Sun Yat-sen University Cancer Center, Guangzhou 510060, China

Abstract  

First, T1-weighted (T1W), T2-weighted (T2W) and contrast-enhanced T1 (T1C) structural MR images of 421 patients were collected, and the tumor boundaries in all images were delineated manually by two experienced doctors as the ground truth. The images and ground truth of 346 patients were used as the training set, and those of the remaining 75 patients were reserved as an independent testing set. Second, three single-modality multi-dimension deep convolutional neural networks (CNN), three two-modality multi-dimension fusion deep convolutional networks, and a multi-modality multi-dimension fusion (MMMDF) deep convolutional neural network were constructed, trained and tested, respectively. Finally, the performance of the three types of models was evaluated using three indexes: Dice, Hausdorff distance (HD) and percentage area difference (PAD). The experimental results show that the MMMDF CNN achieves the best performance, followed by the two-modality multi-dimension fusion CNNs, while the single-modality multi-dimension CNNs achieve the worst scores. This study demonstrates that the MMMDF-CNN, which combines multi-modality images and fuses 2D with 3D image features, can accurately and effectively segment tumors in MR images of NPC patients.
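The three evaluation indexes can be sketched as follows. This is a minimal illustration with NumPy, not the authors' implementation; the exact definitions used in the paper (e.g. which area PAD is normalized by) may differ.

```python
import numpy as np

def dice(pred, gt):
    """Dice coefficient of two binary masks: 2|A∩B| / (|A| + |B|)."""
    inter = np.logical_and(pred, gt).sum()
    return 2.0 * inter / (pred.sum() + gt.sum())

def pad(pred, gt):
    """Percentage area difference: |area(pred) - area(gt)| / area(gt) × 100,
    here assumed to be normalized by the ground-truth area."""
    return abs(int(pred.sum()) - int(gt.sum())) / gt.sum() * 100.0

def hausdorff(a, b):
    """Symmetric Hausdorff distance between two boundary point sets
    (N×2 and M×2 coordinate arrays), computed by brute force."""
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
    return max(d.min(axis=1).max(), d.min(axis=0).max())
```

For large boundaries one would normally use an optimized routine such as `scipy.spatial.distance.directed_hausdorff` instead of the brute-force pairwise distances shown here.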



Key words: nasopharyngeal carcinoma; MR images; segmentation; multi-modality multi-dimension; deep learning
Received: 02 March 2019      Published: 05 March 2020
CLC:  R 318.04  
Corresponding Authors: Sheng-wen GUO     E-mail: 531679559@qq.com;shwguo@scut.edu.cn
Cite this article:

Yan-jia HONG,Tie-bao MENG,Hao-jiang LI,Li-zhi LIU,Li LI,Shuo-yu XU,Sheng-wen GUO. Deep segmentation method of tumor boundaries from MR images of patients with nasopharyngeal carcinoma using multi-modality and multi-dimension fusion. Journal of ZheJiang University (Engineering Science), 2020, 54(3): 566-573.

URL:

http://www.zjujournals.com/eng/10.3785/j.issn.1008-973X.2020.03.017     OR     http://www.zjujournals.com/eng/Y2020/V54/I3/566


Deep segmentation of tumors in MR images of nasopharyngeal carcinoma based on multi-modality and multi-dimension information fusion

Axial T1-weighted (T1W), T2-weighted (T2W) and contrast-enhanced T1 (T1C) MR images of the head and neck were collected from 421 patients with nasopharyngeal carcinoma, and the tumor regions in the images were delineated by two experienced clinicians. The multi-modality images and labels of 346 patients were used as the training set, and those of the remaining 75 patients as an independent testing set. Convolutional neural networks (CNN) with single-modality multi-dimension fusion, two-modality multi-dimension fusion and multi-modality multi-dimension fusion (MMMDF) were constructed, and the models were trained and tested. The performance of the three types of models was evaluated using Dice, Hausdorff distance (HD) and percentage area difference (PAD). The results show that the multi-modality multi-dimension fusion model performs best, the two-modality multi-dimension fusion model second best, and the single-modality multi-dimension fusion model worst. The results demonstrate that a deep convolutional network fusing multi-modality 2D and 3D features can accurately and effectively segment tumors in MR images of nasopharyngeal carcinoma.


Keywords: nasopharyngeal carcinoma, MR images, segmentation, multi-modality multi-dimension, deep learning
Fig.1 Multi-modality and multi-dimension fusion CNN structure
Layer              2D-ResUNet                                  3D-ResUNet
                   Feature map   Configuration                 Feature map   Configuration
Input              384×384                                     384×384×8
Residual block 1   384×384       [3×3, 16]×5                   384×384×8     [3×3×3, 16]×5
Max-pooling 1      192×192       2×2 max-pooling               192×192×4     2×2×2 max-pooling
Residual block 2   192×192       [3×3, 32]×5                   192×192×4     [3×3×3, 32]×5
Max-pooling 2      96×96         2×2 max-pooling               96×96×4       2×2×1 max-pooling
Residual block 3   96×96         [3×3, 64]×5                   96×96×4       [3×3×3, 64]×5
Max-pooling 3      48×48         2×2 max-pooling               48×48×2       2×2×2 max-pooling
Residual block 4   48×48         [3×3, 128]×5                  48×48×2       [3×3×1, 128]×5
Max-pooling 4      24×24         2×2 max-pooling               24×24×2       2×2×1 max-pooling
Residual block 5   24×24         [3×3, 256]×5                  24×24×2       [3×3×1, 256]×5
Deconvolution 1    48×48         3×3, 2×2-[Residual block 4]   48×48×2       3×3×1, 2×2×1-[Residual block 4]
Deconvolution 2    96×96         3×3, 2×2-[Residual block 3]   96×96×4       3×3×3, 2×2×2-[Residual block 3]
Deconvolution 3    192×192       3×3, 2×2-[Residual block 2]   192×192×4     3×3×1, 2×2×1-[Residual block 2]
Deconvolution 4    384×384       3×3, 2×2-[Residual block 1]   384×384×8     3×3×3, 2×2×2-[Residual block 1]
Convolution        384×384       1×1, 2                        384×384×8     1×1×1, 2
Tab.1 Architectures of 2D-ResUNet and 3D-ResUNet
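The feature-map sizes in Tab. 1 follow directly from the pooling strides: the 3D-ResUNet alternates 2×2×2 and 2×2×1 max-pooling so the thin slice axis (depth 8) is downsampled only twice. A small sketch (the function name is ours, not from the paper) reproduces the encoder sizes by integer division:

```python
def pooled_shapes(input_shape, pool_strides):
    """Track the feature-map size through successive max-pooling stages,
    dividing each axis by the corresponding pooling stride."""
    shapes = [tuple(input_shape)]
    for stride in pool_strides:
        shapes.append(tuple(s // k for s, k in zip(shapes[-1], stride)))
    return shapes

# 3D-ResUNet encoder from Tab. 1: alternating 2×2×2 and 2×2×1 pooling
sizes = pooled_shapes((384, 384, 8), [(2, 2, 2), (2, 2, 1), (2, 2, 2), (2, 2, 1)])
# [(384, 384, 8), (192, 192, 4), (96, 96, 4), (48, 48, 2), (24, 24, 2)]
```

The asymmetric strides keep at least two slices of through-plane context at the bottleneck, which would vanish after four 2×2×2 poolings.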
Fig.2 Architecture of multi-modality 2D-ResUNet
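One common way to feed multi-modality slices to a 2D network is to stack the co-registered modalities as input channels; whether the paper fuses at the input or at feature level is shown in Fig. 2 rather than stated here, so the following is only an illustrative sketch with placeholder arrays:

```python
import numpy as np

# hypothetical co-registered 384×384 slices of one patient (placeholder data)
t1w = np.zeros((384, 384), dtype=np.float32)
t2w = np.zeros((384, 384), dtype=np.float32)
t1c = np.zeros((384, 384), dtype=np.float32)

# stack the three modalities along a channel axis so the network
# receives one three-channel input per slice
x = np.stack([t1w, t2w, t1c], axis=-1)  # shape (384, 384, 3)
```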
Dataset        Subjects   Sex (M/F)   Age (mean±SD)
Training set   346        254/92      45.5±11.9
Testing set    75         55/20       44.9±11.6
Tab.2 Information of training and testing set for nasopharyngeal carcinoma(NPC)segmentation models
NPC segmentation model   Dice   HD/mm   PAD/%
T1W-MDF 0.759 6.51 20.0
T2W-MDF 0.763 6.37 17.9
T1C-MDF 0.747 6.41 19.8
T1W+T2W-MDF 0.781 5.84 16.5
T1W+T1C-MDF 0.773 6.02 17.1
T2W+T1C-MDF 0.775 5.93 16.8
Men et al. [8] 0.726 6.82 23.8
Li et al. [10] 0.718 6.91 25.1
Zhao et al. [15] 0.731 6.75 22.7
MMMDF 0.805 5.56 15.5
Tab.3 Comparison on performance of different NPC segmentation models
Fig.3 Box plot comparison on performances of seven NPC segmentation models
Fig.4 Comparison of segmentation results (part of 2D slices) of seven different NPC segmentation models
Fig.5 Comparison of segmentation results (part of 3D slices) of seven different NPC segmentation models
[1]   CHANG E T, ADAMI H O The enigmatic epidemiology of nasopharyngeal carcinoma[J]. Cancer Epidemiology and Prevention Biomarkers, 2006, 15 (10): 1765- 1777
doi: 10.1158/1055-9965.EPI-06-0353
[2]   STEWART B W, WILD C. World cancer report 2014 [M]. Lyon: International Agency for Research on Cancer , 2014.
[3]   邓伟, 黄天壬, 陈万青, 等 中国2003—2007年鼻咽癌发病与死亡分析[J]. 肿瘤, 2012, 32 (3): 189- 193
DENG Wei, HUANG Tian-Ren, CHEN Wan-Qing, et al Analysis of the incidence and mortality of nasopharyngeal carcinoma in China from 2003 to 2007[J]. Tumor, 2012, 32 (3): 189- 193
doi: 10.3781/j.issn.1000-7431.2012.03.007
[4]   HUANG K W, ZHAO Z Y, GONG Q, et al. Nasopharyngeal carcinoma segmentation via HMRF-EM with maximum entropy [C] // 2015 37th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC). Milan: IEEE, 2015: 2968-2972.
[5]   RITTHIPRAVAT P, TATANUM C, BHONGMAKAPAT T, et al. Automatic segmentation of nasopharyngeal carcinoma from CT images [C] // 2008 International Conference on BioMedical Engineering and Informatics. Sanya: IEEE Computer Society, 2008, 2: 18-22.
[6]   ZHOU J, CHAN K L, XU P, et al. Nasopharyngeal carcinoma lesion segmentation from MR images by support vector machine [C] // 3rd IEEE International Symposium on Biomedical Imaging: Nano to Macro, 2006. Arlington: IEEE, 2006: 1364-1367.
[7]   MOHAMMED M A, GHANI M K A, HAMED R I, et al Artificial neural networks for automatic segmentation and identification of nasopharyngeal carcinoma[J]. Journal of Computer Science, 2017, 21: 263- 274
doi: 10.1016/j.jocs.2017.03.026
[8]   MEN K, CHEN X, ZHANG Y, et al Deep deconvolutional neural network for target segmentation of nasopharyngeal cancer in planning computed tomography images[J]. Frontiers in Oncology, 2017, 7: 315
doi: 10.3389/fonc.2017.00315
[9]   SIMONYAN K, ZISSERMAN A. Very deep convolutional networks for large-scale image recognition[J]. arXiv Preprint arXiv: 1409.1556, 2014.
[10]   LI Q L, XU Y, CHEN Z, et al Tumor segmentation in contrast-enhanced magnetic resonance imaging for nasopharyngeal carcinoma: deep learning with convolutional neural network[J]. BioMed Research International, 2018, 2018: 1- 7
[11]   MA Z Q, WU X, SONG Q, et al Automated nasopharyngeal carcinoma segmentation in magnetic resonance images by combination of convolutional neural networks and graph cut[J]. Experimental and Therapeutic Medicine, 2018, 16 (3): 2511- 2521
[12]   LI X M, CHEN H, QI X J, et al H-DenseUNet: hybrid densely connected unet for liver and tumor segmentation from ct volumes[J]. IEEE Transactions on Medical Imaging, 2018, 37 (12): 2663- 2674
doi: 10.1109/TMI.2018.2845918
[13]   MILLETARI F, NAVAB N, AHMADI S A. V-net: Fully convolutional neural networks for volumetric medical image segmentation[C] // 2016 Fourth International Conference on 3D Vision (3DV). California: IEEE, 2016: 565-571.
[14]   ABADI M, BARHAM P, CHEN J, et al. Tensorflow: A system for large-scale machine learning[C] // 12th Symposium on Operating Systems Design and Implementation. Savannah, GA: OSDI, 2016: 265-283.