Journal of ZheJiang University (Engineering Science)  2020, Vol. 54 Issue (7): 1289-1297    DOI: 10.3785/j.issn.1008-973X.2020.07.006
    
Breast cancer histopathological image classification using multi-scale channel squeeze-and-excitation model
Tao MING1,Dan WANG2,Ji-chang GUO1,*(),Qiang LI3
1. School of Electrical and Information Engineering, Tianjin University, Tianjin 300072, China
2. Department of Pathology, General Hospital, Tianjin Medical University, Tianjin 300052, China
3. School of Microelectronics, Tianjin University, Tianjin 300072, China

Abstract  

A deep learning-based classification algorithm was proposed for the automatic classification of breast cancer histopathological images. The channel squeeze-and-excitation (SE) model is an attention model that acts on feature channels: learned channel weights suppress useless features and thereby recalibrate the feature channels for better classification accuracy. To make the channel recalibration more accurate, a multi-scale channel SE model was proposed and a convolutional neural network named msSE-ResNet was designed. Multi-scale features were obtained from the network's max-pooling layers and served as inputs to the subsequent channel SE models, and the recalibration result was improved by merging the channel weights learned at different feature scales. Experiments were conducted on the public dataset BreaKHis. Results show that the network reaches an accuracy of 88.87% on the task of classifying benign/malignant breast histopathological images and remains robust to histopathological images acquired at different magnifications.
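The channel recalibration described above follows the squeeze-excitation-scale pipeline of the SE model [11]. A minimal pure-Python sketch of that pipeline; the dimensions, weights, and reduction ratio below are toy values for illustration, not the paper's actual configuration:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def se_recalibrate(feature_maps, w1, w2):
    """Channel squeeze-and-excitation over a list of C feature maps (each H x W).

    Squeeze: global average pooling gives one descriptor per channel.
    Excitation: two fully connected layers (reduce, then expand) with ReLU
    and sigmoid produce a weight in (0, 1) per channel.
    Scale: each channel is multiplied by its learned weight.
    """
    C = len(feature_maps)
    # squeeze: global average pool -> one scalar descriptor per channel
    z = [sum(sum(row) for row in fm) / (len(fm) * len(fm[0])) for fm in feature_maps]
    # excitation, layer 1: FC (C -> C/r) followed by ReLU
    hidden = [max(0.0, sum(w1[j][c] * z[c] for c in range(C))) for j in range(len(w1))]
    # excitation, layer 2: FC (C/r -> C) followed by sigmoid -> channel weights
    s = [sigmoid(sum(w2[c][j] * hidden[j] for j in range(len(hidden)))) for c in range(C)]
    # scale: recalibrate each channel by its weight
    recal = [[[v * s[c] for v in row] for row in fm] for c, fm in enumerate(feature_maps)]
    return recal, s

# toy input: 2 channels of 2x2 features, reduction to a single hidden unit
fm = [[[4.0, 4.0], [4.0, 4.0]],
      [[2.0, 2.0], [2.0, 2.0]]]
recal, weights = se_recalibrate(fm, w1=[[1.0, -1.0]], w2=[[2.0], [0.5]])
```

In a real network `w1` and `w2` are learned end-to-end; here they are fixed so the three stages can be traced by hand.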



Key words: breast cancer histopathological image classification; deep learning; residual network; multi-scale feature; channel squeeze-and-excitation model
Received: 18 June 2019      Published: 05 July 2020
CLC:  TP 391  
Corresponding Authors: Ji-chang GUO     E-mail: jcguo@tju.edu.cn
Cite this article:

Tao MING,Dan WANG,Ji-chang GUO,Qiang LI. Breast cancer histopathological image classification using multi-scale channel squeeze-and-excitation model. Journal of ZheJiang University (Engineering Science), 2020, 54(7): 1289-1297.

URL:

http://www.zjujournals.com/eng/10.3785/j.issn.1008-973X.2020.07.006     OR     http://www.zjujournals.com/eng/Y2020/V54/I7/1289


Fig.1 Residual structure
Fig.2 Residual structure and SE-Residual structure
Fig.3 msSE-Residual structure
Magnification  Benign  Malignant  Total
40×            625     1370       1995
100×           644     1437       2081
200×           623     1390       2013
400×           588     1232       1820
Tab.1 Image distribution by different magnification factors and classes
Fig.4 Benign and malignant breast tumor images
Model  Acc/%  AUC
ResNet18[12] 84.53 0.8878
SE-ResNet18[11] 83.56 0.8791
scSE-ResNet18[27] 83.90 0.8677
msSE-ResNet18-2way 86.81 0.9266
msSE-ResNet18-3way 86.00 0.9107
Tab.2 Comparison of classification results of msSE-ResNet18 and other networks
Model  40×  100×  200×  400×
Acc Pr R Acc Pr R Acc Pr R Acc Pr R
ResNet18[12] 0.822 0.845 0.907 0.836 0.836 0.921 0.864 0.868 0.947 0.875 0.864 0.967
SE-ResNet18[11] 0.826 0.820 0.956 0.862 0.861 0.953 0.867 0.862 0.962 0.879 0.865 0.973
scSE-ResNet18[27] 0.805 0.808 0.941 0.836 0.845 0.935 0.870 0.866 0.962 0.824 0.837 0.918
msSE-ResNet18-2way 0.862 0.890 0.912 0.862 0.884 0.921 0.880 0.887 0.947 0.889 0.889 0.957
msSE-ResNet18-3way 0.829 0.856 0.902 0.868 0.878 0.940 0.874 0.905 0.913 0.882 0.884 0.951
Tab.3 Comparison of magnification-specific classification results of all networks
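In Tab.3 and Tab.6, Acc, Pr, and R denote accuracy, precision, and recall. A short sketch of the standard definitions from binary confusion counts, assuming the malignant class is treated as positive (a common convention for this dataset, though the excerpt does not state it explicitly):

```python
def binary_metrics(tp, fp, fn, tn):
    """Accuracy, precision, and recall from binary confusion counts."""
    total = tp + fp + fn + tn
    acc = (tp + tn) / total   # fraction of all images classified correctly
    pr = tp / (tp + fp)       # fraction of positive predictions that are correct
    r = tp / (tp + fn)        # fraction of positive images that are recovered
    return acc, pr, r

# toy confusion counts, not taken from the paper
acc, pr, r = binary_metrics(tp=90, fp=10, fn=5, tn=95)
```

The gap between Pr and R visible in the tables reflects this trade-off: a classifier can raise recall by predicting "malignant" more readily, at the cost of precision.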
Fig.5 ROC curves of networks with ResNet18 as backbone
Number of scales  Fusion method  Atr/%  Ate/%
2 add 87.16 86.81
2 max 85.26 83.81
2 cat1(sigm) 85.82 84.65
2 cat1 87.12 86.37
2 cat2(sigm) 85.64 84.45
2 cat2 86.34 84.57
3 add 86.73 85.42
3 max 85.15 83.90
3 cat1(sigm) 85.47 83.77
3 cat1 87.36 86.00
3 cat2(sigm) 85.28 84.07
3 cat2 86.22 84.95
Tab.4 Comparison of classification results of different fusion methods under different feature scales
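Tab.4 and Tab.7 compare rules for fusing the channel responses learned at different feature scales. A hedged sketch of the two parameter-free rules, add and max, applied before the sigmoid; the cat variants concatenate the responses and require an extra learned projection, which is omitted here, and the vector sizes are illustrative:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def fuse_channel_weights(responses, method="add"):
    """Merge per-scale channel responses into one channel-weight vector.

    'add' sums the responses element-wise; 'max' keeps the strongest
    response per channel. A sigmoid then maps the fused response to
    weights in (0, 1).
    """
    C = len(responses[0])
    if method == "add":
        fused = [sum(v[c] for v in responses) for c in range(C)]
    elif method == "max":
        fused = [max(v[c] for v in responses) for c in range(C)]
    else:
        raise ValueError("only 'add' and 'max' are sketched here")
    return [sigmoid(x) for x in fused]

# toy pre-activation responses: 2 scales, 2 channels
responses = [[1.0, -2.0], [0.5, 1.0]]
w_add = fuse_channel_weights(responses, "add")
w_max = fuse_channel_weights(responses, "max")
```

Summing lets scales reinforce or cancel each other, while max keeps only the most confident scale per channel; the tables suggest additive-style fusion tends to perform best for this network.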
Model  Acc/%  AUC
ResNet34[12] 86.47 0.9135
SE-ResNet34[11] 87.36 0.9097
scSE-ResNet34[27] 83.96 0.8722
msSE-ResNet34-2way 88.06 0.9308
msSE-ResNet34-3way 88.87 0.9541
Tab.5 Comparison of classification results of msSE-ResNet34 and other networks
Fig.6 ROC curves of networks with ResNet34 as backbone
Model  40×  100×  200×  400×
Acc Pr R Acc Pr R Acc Pr R Acc Pr R
ResNet34[12] 0.846 0.880 0.898 0.859 0.887 0.912 0.874 0.901 0.918 0.882 0.888 0.946
SE-ResNet34[11] 0.849 0.870 0.917 0.863 0.887 0.916 0.877 0.894 0.933 0.886 0.888 0.951
scSE-ResNet34[27] 0.815 0.847 0.893 0.833 0.869 0.893 0.877 0.890 0.938 0.868 0.870 0.948
msSE-ResNet34-2way 0.873 0.900 0.917 0.884 0.905 0.930 0.890 0.911 0.933 0.893 0.897 0.951
msSE-ResNet34-3way 0.867 0.946 0.863 0.891 0.946 0.893 0.890 0.927 0.913 0.901 0.944 0.908
Tab.6 Comparison of magnification-specific classification results of all networks
Number of scales  Fusion method  Atr/%  Ate/%
2 add 88.04 88.06
2 max 87.40 87.72
2 cat1(sigm) 86.61 86.74
2 cat1 88.63 88.04
2 cat2(sigm) 87.03 87.00
2 cat2 89.18 87.65
3 add 87.64 87.17
3 max 88.20 88.64
3 cat1(sigm) 87.64 87.52
3 cat1 88.36 88.12
3 cat2(sigm) 87.44 88.31
3 cat2 89.07 88.87
Tab.7 Comparison of classification results of different fusion methods under different feature scales
[1]   FAN L, STRASSER-WEIPPL K, LI J J, et al Breast cancer in China[J]. Lancet Oncology, 2014, 15 (7): 279- 289
doi: 10.1016/S1470-2045(13)70567-9
[2]   LEONG A S-Y, ZHUANG Z P The changing role of pathology in breast cancer diagnosis and treatment[J]. Pathobiology, 2011, 78: 99- 114
doi: 10.1159/000292644
[3]   VETA M, PLUIM J P, VAN DIEST P J, et al Breast cancer histopathology image analysis: a review[J]. IEEE Transactions on Biomedical Engineering, 2014, 61 (5): 1400- 1411
doi: 10.1109/TBME.2014.2303852
[4]   SPANHOL F A, OLIVEIRA L S, PETITJEAN C, et al A dataset for breast cancer histopathological image classification[J]. IEEE Transactions on Biomedical Engineering, 2016, 63 (7): 1455- 1462
doi: 10.1109/TBME.2015.2496264
[5]   GUPTA V, BHAVSAR A. Breast cancer histopathological image classification: is magnification important? [C] // Proceedings of IEEE Conference on Computer Vision and Pattern Recognition Workshops. Honolulu, USA: IEEE, 2017: 769-776.
[6]   CIRESAN D C, GIUSTI A, GAMBARDELLA L M, et al. Mitosis detection in breast cancer histology images with deep neural networks [C] // Proceedings of Medical Image Computing and Computer-Assisted Intervention. Berlin, Germany: Springer, 2013: 411-418.
[7]   ARAÚJO T, ARESTA G, CASTRO E, et al Classification of breast cancer histology images using convolutional neural networks[J]. PLoS One, 2017, 12 (6): e0177544
doi: 10.1371/journal.pone.0177544
[8]   SPANHOL F A, OLIVEIRA L S, PETITJEAN C, et al. Breast cancer histopathological image classification using convolutional neural networks [C] // Proceedings of International Joint Conference on Neural Networks. Vancouver, Canada: IEEE, 2016: 2560-2567.
[9]   BAYRAMOGLU N, KANNALA J, HEIKKILÄ J. Deep learning for magnification independent breast cancer histopathology image classification [C] // Proceedings of International Conference on Pattern Recognition. Cancun, Mexico: IEEE, 2016: 2441-2446.
[10]   SONG Y, ZOU J J, CHANG H, et al. Adapting Fisher vectors for histopathology image classification [C] // Proceedings of the IEEE 14th International Symposium on Biomedical Imaging. Melbourne: IEEE, 2017: 600-603.
[11]   HU J, SHEN L, SUN G. Squeeze-and-excitation networks [C] // Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. Salt Lake City: IEEE, 2018: 7132-7141.
[12]   HE K, ZHANG X, REN S, et al. Deep residual learning for image recognition [C] // Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. Las Vegas, USA: IEEE, 2016: 770-778.
[13]   BAHDANAU D, CHO K, BENGIO Y. Neural machine translation by jointly learning to align and translate [EB/OL]. [2019–03–01]. https://arxiv.org/abs/1409.0473.
[14]   VASWANI A, SHAZEER N, PARMAR N, et al. Attention is all you need [C]// Proceedings of Neural Information Processing Systems. Long Beach, USA: Curran Associates, Inc., 2017: 5998-6008.
[15]   WANG F, JIANG M, QIAN C, et al. Residual attention network for image classification [C] // Proceedings of IEEE Conference on Computer Vision and Pattern Recognition. Honolulu, USA: IEEE, 2017: 6450-6458.
[16]   ZHU Y Y, WANG J, XIE L X, et al. Attention-based pyramid aggregation network for visual place recognition [C] // Proceedings of International Conference on Multimedia. Seoul, Korea: ACM, 2018: 99-107.
[17]   SIMONYAN K, ZISSERMAN A. Very deep convolutional networks for large-scale image recognition [EB/OL]. [2019–04–23]. https://arxiv.org/abs/1409.1556.
[18]   SZEGEDY C, LIU W, JIA Y, et al. Going deeper with convolutions [C] // Proceedings of IEEE Conference on Computer Vision and Pattern Recognition. Boston, USA: IEEE, 2015: 1-9.
[19]   NAIR V, HINTON G E. Rectified linear units improve restricted Boltzmann machines [C] // International Conference on International Conference on Machine Learning. Haifa, Israel: Omnipress, 2010: 807-814.
[20]   HE K, ZHANG X, REN S, et al Spatial pyramid pooling in deep convolutional networks for visual recognition[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2015, 37 (9): 1904- 1916
doi: 10.1109/TPAMI.2015.2389824
[21]   LIU W, ANGUELOV D, ERHAN D, et al. SSD: single shot multibox detector [C] // Proceedings of European Conference on Computer Vision. Amsterdam: Springer, 2016: 21-37.
[22]   LIN T, DOLLAR P, GIRSHICK R. Feature pyramid networks for object detection [C] // Proceedings of IEEE Conference on Computer Vision and Pattern Recognition. Honolulu: IEEE, 2017: 2117-2125.
[23]   ZHAO H, SHI J, QI X, et al. Pyramid scene parsing network [C] // Proceedings of IEEE Conference on Computer Vision and Pattern Recognition. Honolulu: IEEE, 2017: 6230-6239.
[24]   ZHAO H, QI X, SHEN X, et al. ICNet for real-time semantic segmentation on high-resolution images [C] // Proceedings of European Conference on Computer Vision. Munich, Germany: Springer, 2018: 418-434.
[25]   KAMNITSAS K, LEDIG C, NEWCOMBE V F, et al Efficient multi-scale 3D CNN with fully connected CRF for accurate brain lesion segmentation[J]. Medical Image Analysis, 2017, 36: 61- 78
doi: 10.1016/j.media.2016.10.004
[26]   PASZKE A, GROSS S, MASSA F, et al. PyTorch: an imperative style, high-performance deep learning library [C] // Proceedings of Neural Information Processing Systems. Vancouver: Curran Associates, Inc., 2019: 8024-8035.