Journal of Zhejiang University (Engineering Science), 2020, Vol. 54, Issue 7: 1289-1297    DOI: 10.3785/j.issn.1008-973X.2020.07.006
Automation technology, computer technology
Breast cancer histopathological image classification using multi-scale channel squeeze-and-excitation model
Tao MING1, Dan WANG2, Ji-chang GUO1,*, Qiang LI3
1. School of Electrical and Information Engineering, Tianjin University, Tianjin 300072, China
2. Department of Pathology, General Hospital, Tianjin Medical University, Tianjin 300052, China
3. School of Microelectronics, Tianjin University, Tianjin 300072, China
Abstract:

A deep learning-based classification algorithm was proposed for the automatic classification of breast cancer histopathological images. The channel squeeze-and-excitation (SE) model is an attention model that acts on feature channels: useless features are suppressed with learned channel weights, so the feature channels are recalibrated and a higher classification accuracy can be reached. To make the channel recalibration more accurate, a multi-scale channel SE model was proposed and a convolutional neural network named msSE-ResNet was designed. Multi-scale features were obtained from the max-pooling layers of the network and served as inputs to the subsequent channel SE models, and the channel weights learned at different scales were fused to improve the recalibration result. Experiments were conducted on the public BreaKHis dataset. Results show that the network achieves a classification accuracy of 88.87% on the benign/malignant breast histopathological image classification task and remains robust to histopathological images acquired at different magnifications.

Key words: breast cancer histopathological image classification    deep learning    residual network    multi-scale feature    channel squeeze-and-excitation model
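The multi-scale channel recalibration summarized above can be sketched in PyTorch (the framework cited in [26]). The block below is a minimal illustration under stated assumptions rather than the authors' msSE-ResNet code: it assumes two feature scales, with the second scale produced by 2×2 max pooling, one squeeze-and-excitation branch per scale (global average pooling followed by two fully connected layers), and element-wise addition of the channel weights (the "add" variant in Tables 4 and 7); the class name MultiScaleSE and the reduction ratio are hypothetical.

```python
import torch
import torch.nn as nn


class MultiScaleSE(nn.Module):
    """Sketch of a two-scale channel squeeze-and-excitation (recalibration) block."""

    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.pool = nn.MaxPool2d(kernel_size=2, stride=2)  # second feature scale
        # One excitation branch per scale: global average pooling -> FC -> ReLU -> FC.
        self.branches = nn.ModuleList([
            nn.Sequential(
                nn.AdaptiveAvgPool2d(1),
                nn.Flatten(),
                nn.Linear(channels, channels // reduction),
                nn.ReLU(inplace=True),
                nn.Linear(channels // reduction, channels),
            )
            for _ in range(2)
        ])

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        w1 = self.branches[0](x)             # channel weights at the original scale
        w2 = self.branches[1](self.pool(x))  # channel weights at the max-pooled scale
        w = torch.sigmoid(w1 + w2)           # fuse by addition, gate to (0, 1)
        return x * w.view(x.size(0), -1, 1, 1)  # recalibrate the feature channels
```

A three-scale variant (the "3way" rows in the tables) would presumably add one more pooled branch in the same way.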
Received: 2019-06-18    Published: 2020-07-05
CLC number: TP 391
Supported by: the National Natural Science Foundation of China (61471263) and the Natural Science Foundation of Tianjin (16JCZDJC31100)
Corresponding author: Ji-chang GUO, E-mail: jcguo@tju.edu.cn
About the first author: Tao MING (born 1994), male, master's degree candidate, engaged in research on medical image processing. orcid.org/0000-0003-0835-8376. E-mail: mos_ming@163.com

Cite this article:

Tao MING, Dan WANG, Ji-chang GUO, Qiang LI. Breast cancer histopathological image classification using multi-scale channel squeeze-and-excitation model[J]. Journal of Zhejiang University (Engineering Science), 2020, 54(7): 1289-1297.

Link to this article:

http://www.zjujournals.com/eng/CN/10.3785/j.issn.1008-973X.2020.07.006        http://www.zjujournals.com/eng/CN/Y2020/V54/I7/1289

Fig. 1  Residual structure
Fig. 2  Residual structure and SE residual structure
Fig. 3  msSE residual structure
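Figs. 1-3 contrast a plain residual unit, an SE residual unit, and the proposed msSE residual unit. As a rough sketch only, reusing the hypothetical MultiScaleSE block above and assuming the recalibration is applied to the residual branch before the identity shortcut is added (as in SE-ResNet [11]), such a unit might look like:

```python
import torch
import torch.nn as nn


class MsSEBasicBlock(nn.Module):
    """Hypothetical basic residual unit with multi-scale channel recalibration."""

    def __init__(self, channels: int):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(channels)
        self.relu = nn.ReLU(inplace=True)
        self.msse = MultiScaleSE(channels)   # multi-scale SE block sketched earlier

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        out = self.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        out = self.msse(out)                 # recalibrate channels of the residual branch
        return self.relu(out + x)            # identity shortcut, then activation
```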
Magnification    Benign    Malignant    Total
40×    625    1370    1995
100×    644    1437    2081
200×    623    1390    2013
400×    588    1232    1820
Table 1  Distribution of tumor images across magnifications and classes
Fig. 4  Benign and malignant breast tumor images
Model    Acc/%    AUC
ResNet18[12]    84.53    0.8878
SE-ResNet18[11]    83.56    0.8791
scSE-ResNet18[27]    83.90    0.8677
msSE-ResNet18-2way    86.81    0.9266
msSE-ResNet18-3way    86.00    0.9107
Table 2  Comparison of classification results between msSE-ResNet18 and other networks
Model    40× Acc/Pr/R    100× Acc/Pr/R    200× Acc/Pr/R    400× Acc/Pr/R
ResNet18[12]    0.822/0.845/0.907    0.836/0.836/0.921    0.864/0.868/0.947    0.875/0.864/0.967
SE-ResNet18[11]    0.826/0.820/0.956    0.862/0.861/0.953    0.867/0.862/0.962    0.879/0.865/0.973
scSE-ResNet18[27]    0.805/0.808/0.941    0.836/0.845/0.935    0.870/0.866/0.962    0.824/0.837/0.918
msSE-ResNet18-2way    0.862/0.890/0.912    0.862/0.884/0.921    0.880/0.887/0.947    0.889/0.889/0.957
msSE-ResNet18-3way    0.829/0.856/0.902    0.868/0.878/0.940    0.874/0.905/0.913    0.882/0.884/0.951
Table 3  Comparison of magnification-specific classification results of all networks
Fig. 5  ROC curves of ResNet18-based networks
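For reference, the metrics reported in Tables 2 and 3 (accuracy Acc, precision Pr, recall R) and the AUC underlying the ROC curves of Fig. 5 can be computed from predicted labels and malignancy scores, for example with scikit-learn. This is a generic evaluation sketch, not the authors' script; the labels and scores below are placeholder values.

```python
from sklearn.metrics import accuracy_score, precision_score, recall_score, roc_auc_score

# Placeholder ground truth (1 = malignant, 0 = benign) and predicted malignancy scores.
y_true = [1, 0, 1, 1, 0]
y_score = [0.92, 0.31, 0.78, 0.45, 0.12]
y_pred = [1 if s >= 0.5 else 0 for s in y_score]   # threshold scores at 0.5

acc = accuracy_score(y_true, y_pred)    # Acc in Tables 2-7
pr = precision_score(y_true, y_pred)    # Pr, with malignant as the positive class
r = recall_score(y_true, y_pred)        # R (recall / sensitivity)
auc = roc_auc_score(y_true, y_score)    # area under the ROC curve (Figs. 5 and 6)
print(f"Acc={acc:.3f}  Pr={pr:.3f}  R={r:.3f}  AUC={auc:.3f}")
```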
Number of scales    Fusion method    Atr/%    Ate/%
2    add    87.16    86.81
2    max    85.26    83.81
2    cat1(sigm)    85.82    84.65
2    cat1    87.12    86.37
2    cat2(sigm)    85.64    84.45
2    cat2    86.34    84.57
3    add    86.73    85.42
3    max    85.15    83.90
3    cat1(sigm)    85.47    83.77
3    cat1    87.36    86.00
3    cat2(sigm)    85.28    84.07
3    cat2    86.22    84.95
Table 4  Comparison of classification results of different fusion methods under different numbers of feature scales
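Table 4 compares fusion methods for the channel weights learned at different scales: element-wise addition ("add"), element-wise maximum ("max"), and concatenation-based variants ("cat1", "cat2"), with or without an extra sigmoid on each branch. The helper below sketches how add, max, and one concatenation-style fusion could be written; it is an illustrative guess at the variants' form, not the paper's exact definition of cat1/cat2.

```python
from typing import Optional

import torch
import torch.nn as nn


def fuse_channel_weights(w1: torch.Tensor, w2: torch.Tensor, method: str = "add",
                         proj: Optional[nn.Module] = None) -> torch.Tensor:
    """Fuse two (N, C) channel-weight tensors into a single (N, C) gate in (0, 1).

    proj is only needed for the concatenation variant and must map 2C -> C,
    e.g. nn.Linear(2 * C, C).
    """
    if method == "add":
        fused = w1 + w2
    elif method == "max":
        fused = torch.max(w1, w2)
    elif method == "cat":
        fused = proj(torch.cat([w1, w2], dim=1))
    else:
        raise ValueError(f"unknown fusion method: {method}")
    return torch.sigmoid(fused)


# Example: fuse random weights for a batch of 4 feature maps with 64 channels.
w1, w2 = torch.randn(4, 64), torch.randn(4, 64)
gate = fuse_channel_weights(w1, w2, "cat", proj=nn.Linear(128, 64))
print(gate.shape)  # torch.Size([4, 64])
```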
Model    Acc/%    AUC
ResNet34[12]    86.47    0.9135
SE-ResNet34[11]    87.36    0.9097
scSE-ResNet34[27]    83.96    0.8722
msSE-ResNet34-2way    88.06    0.9308
msSE-ResNet34-3way    88.87    0.9541
Table 5  Comparison of classification results between msSE-ResNet34 and other networks
Fig. 6  ROC curves of ResNet34-based networks
Model    40× Acc/Pr/R    100× Acc/Pr/R    200× Acc/Pr/R    400× Acc/Pr/R
ResNet34[12]    0.846/0.880/0.898    0.859/0.887/0.912    0.874/0.901/0.918    0.882/0.888/0.946
SE-ResNet34[11]    0.849/0.870/0.917    0.863/0.887/0.916    0.877/0.894/0.933    0.886/0.888/0.951
scSE-ResNet34[27]    0.815/0.847/0.893    0.833/0.869/0.893    0.877/0.890/0.938    0.868/0.870/0.948
msSE-ResNet34-2way    0.873/0.900/0.917    0.884/0.905/0.930    0.890/0.911/0.933    0.893/0.897/0.951
msSE-ResNet34-3way    0.867/0.946/0.863    0.891/0.946/0.893    0.890/0.927/0.913    0.901/0.944/0.908
Table 6  Comparison of magnification-specific classification results of all networks
Number of scales    Fusion method    Atr/%    Ate/%
2    add    88.04    88.06
2    max    87.40    87.72
2    cat1(sigm)    86.61    86.74
2    cat1    88.63    88.04
2    cat2(sigm)    87.03    87.00
2    cat2    89.18    87.65
3    add    87.64    87.17
3    max    88.20    88.64
3    cat1(sigm)    87.64    87.52
3    cat1    88.36    88.12
3    cat2(sigm)    87.44    88.31
3    cat2    89.07    88.87
Table 7  Comparison of classification results of different fusion methods under different numbers of feature scales
1 FAN L, STRASSER-WEIPPL K, LI J J, et al Breast cancer in China[J]. Lancet Oncology, 2014, 15 (7): 279-289
doi: 10.1016/S1470-2045(13)70567-9
2 LEONG A S-Y, ZHUANG Z P The changing role of pathology in breast cancer diagnosis and treatment[J]. Pathobiology, 2011, 78: 99-114
doi: 10.1159/000292644
3 VETA M, PLUIM J P, VAN DIEST P J, et al Breast cancer histopathology image analysis: a review[J]. IEEE Transactions on Biomedical Engineering, 2014, 61 (5): 1400-1411
doi: 10.1109/TBME.2014.2303852
4 SPANHOL F A, OLIVEIRA L S, PETITJEAN C, et al A dataset for breast cancer histopathological image classification[J]. IEEE Transactions on Biomedical Engineering, 2016, 63 (7): 1455-1462
doi: 10.1109/TBME.2015.2496264
5 GUPTA V, BHAVSAR A. Breast cancer histopathological image classification: is magnification important? [C] // Proceedings of IEEE Conference on Computer Vision and Pattern Recognition Workshops. Honolulu, USA: IEEE, 2017: 769-776.
6 CIRESAN D C, GIUSTI A, GAMBARDELLA L M, et al. Mitosis detection in breast cancer histology images with deep neural networks [C] // Proceedings of Medical Image Computing and Computer-Assisted Intervention. Berlin, Germany: Springer, 2013: 411-418.
7 ARAÚJO T, ARESTA G, CASTRO E, et al Classification of breast cancer histology images using convolutional neural networks[J]. PLoS One, 2017, 12 (6): e0177544
doi: 10.1371/journal.pone.0177544
8 SPANHOL F A, OLIVEIRA L S, PETITJEAN C, et al. Breast cancer histopathological image classification using convolutional neural networks [C] // Proceedings of International Joint Conference on Neural Networks. Vancouver, Canada: IEEE, 2016: 2560-2567.
9 BAYRAMOGLU N, KANNALA J, HEIKKILÄ J. Deep learning for magnification independent breast cancer histopathology image classification [C] // Proceedings of International Conference on Pattern Recognition. Cancun, Mexico: IEEE, 2016: 2441-2446.
10 SONG Y, ZOU J J, CHANG H, et al. Adapting Fisher vectors for histopathology image classification [C] // Proceedings of the IEEE 14th International Symposium on Biomedical Imaging. Melbourne: IEEE, 2017: 600-603.
11 HU J, SHEN L, SUN G. Squeeze-and-excitation networks [C] // Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. Salt Lake City, USA: IEEE, 2018: 7132-7141.
12 HE K, ZHANG X, REN S, et al. Deep residual learning for image recognition [C] // Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. Las Vegas, USA: IEEE, 2016: 770-778.
13 BAHDANAU D, CHO K, BENGIO Y. Neural machine translation by jointly learning to align and translate [EB/OL]. [2019–03–01]. https://arxiv.org/abs/1409.0473.
14 VASWANI A, SHAZEER N, PARMAR N, et al. Attention is all you need [C]// Proceedings of Neural Information Processing Systems. Long Beach, USA: Curran Associates, Inc., 2017: 5998-6008.
15 WANG F, JIANG M, QIAN C, et al. Residual attention network for image classification [C] // Proceedings of IEEE Conference on Computer Vision and Pattern Recognition. Honolulu, USA: IEEE, 2017: 6450-6458.
16 ZHU Y Y, WANG J, XIE L X, et al. Attention-based pyramid aggregation network for visual place recognition [C] // Proceedings of International Conference on Multimedia. Seoul, Korea: ACM, 2018: 99-107.
17 SIMONYAN K, ZISSERMAN A. Very deep convolutional networks for large-scale image recognition [EB/OL]. [2019-04-23]. https://arxiv.org/abs/1409.1556.
18 SZEGEDY C, LIU W, JIA Y, et al. Going deeper with convolutions [C] // Proceedings of IEEE Conference on Computer Vision and Pattern Recognition. Boston, USA: IEEE, 2015: 1-9.
19 NAIR V, HINTON G E. Rectified linear units improve restricted Boltzmann machines [C] // Proceedings of the International Conference on Machine Learning. Haifa, Israel: Omnipress, 2010: 807-814.
20 HE K, ZHANG X, REN S, et al Spatial pyramid pooling in deep convolutional networks for visual recognition[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2015, 37 (9): 1904-1916
doi: 10.1109/TPAMI.2015.2389824
21 LIU W, ANGUELOV D, ERHAN D, et al. SSD: single shot multibox detector [C] // Proceedings of European Conference on Computer Vision. Amsterdam: Springer, 2016: 21-37.
22 LIN T Y, DOLLÁR P, GIRSHICK R, et al. Feature pyramid networks for object detection [C] // Proceedings of IEEE Conference on Computer Vision and Pattern Recognition. Honolulu, USA: IEEE, 2017: 2117-2125.
23 ZHAO H, SHI J, QI X, et al. Pyramid scene parsing network [C] // Proceedings of IEEE Conference on Computer Vision and Pattern Recognition. Honolulu, USA: IEEE, 2017: 6230-6239.
24 ZHAO H, QI X, SHEN X, et al. ICNet for real-time semantic segmentation on high-resolution images [C] // Proceedings of European Conference on Computer Vision. Munich, Germany: Springer, 2018: 418-434.
25 KAMNITSAS K, LEDIG C, NEWCOMBE V F, et al Efficient multi-scale 3D CNN with fully connected CRF for accurate brain lesion segmentation[J]. Medical Image Analysis, 2017, 36: 61-78
doi: 10.1016/j.media.2016.10.004
26 PASZKE A, GROSS S, MASSA F, et al. PyTorch: an imperative style, high-performance deep learning library [C] // Proceedings of Neural Information Processing Systems. Vancouver: Curran Associates, Inc., 2019: 8024-8035.