Journal of Zhejiang University (Science Edition)  2023, Vol. 50 Issue (4): 455-464    DOI: 10.3785/j.issn.1008-9497.2023.04.009
Mathematics and Computer Science     
MFDC-Net: A breast cancer pathological image classification algorithm incorporating multi-scale feature fusion and attention mechanism
Yuhua FANG, Feng YE
School of Management,Zhejiang University of Technology,Hangzhou 310023,China

Abstract  

Breast cancer is one of the most common malignant tumors in the world. Diagnosis by traditional methods requires a great deal of time and effort from pathologists, and the results are strongly affected by individual ability. Computer-aided diagnosis can improve the accuracy and efficiency of pathological image classification and meet the demands of clinical applications. To this end, a multi-scale feature fusion network based on DenseNet and coordinate attention (MFDC-Net) is proposed. Introducing a coordinate attention mechanism into the dense blocks precisely locates the spatial information of important features. The improved transition layers use average pooling and normal convolutions with different kernel sizes to reduce the dimension while expanding the receptive field. Finally, a multi-scale feature fusion module built from dilated convolutions, average pooling and normal convolutions fuses deep image features to improve classification performance. Experimental results show that the MFDC-Net model achieves better classification performance: the four-class accuracy reaches 97.12% and the easily-confused rate drops to 3.34%. The method classifies breast cancer histopathological images well and can provide an important basis for doctors' diagnosis and treatment decisions.
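To make the coordinate attention block inserted into the dense blocks concrete, below is a minimal PyTorch sketch of the coordinate attention mechanism of Hou et al. (CVPR 2021). The class name, reduction ratio and layer choices are illustrative assumptions, not the exact configuration used in MFDC-Net.

```python
import torch
import torch.nn as nn

class CoordinateAttention(nn.Module):
    """Coordinate attention (sketch): pool along H and W separately,
    mix the two direction-aware descriptors with a shared 1x1 conv,
    then produce per-direction attention maps."""
    def __init__(self, channels: int, reduction: int = 32):
        super().__init__()
        mid = max(8, channels // reduction)            # assumed reduction ratio
        self.pool_h = nn.AdaptiveAvgPool2d((None, 1))  # (B, C, H, 1)
        self.pool_w = nn.AdaptiveAvgPool2d((1, None))  # (B, C, 1, W)
        self.conv1 = nn.Conv2d(channels, mid, kernel_size=1)
        self.bn1 = nn.BatchNorm2d(mid)
        self.act = nn.ReLU(inplace=True)
        self.conv_h = nn.Conv2d(mid, channels, kernel_size=1)
        self.conv_w = nn.Conv2d(mid, channels, kernel_size=1)

    def forward(self, x):
        b, c, h, w = x.size()
        x_h = self.pool_h(x)                           # (B, C, H, 1)
        x_w = self.pool_w(x).permute(0, 1, 3, 2)       # (B, C, W, 1)
        y = self.act(self.bn1(self.conv1(torch.cat([x_h, x_w], dim=2))))
        y_h, y_w = torch.split(y, [h, w], dim=2)
        a_h = torch.sigmoid(self.conv_h(y_h))                         # (B, C, H, 1)
        a_w = torch.sigmoid(self.conv_w(y_w.permute(0, 1, 3, 2)))     # (B, C, 1, W)
        return x * a_h * a_w
```

In a DenseNet-style network, such a block would typically be applied to the concatenated feature maps inside each dense block before they are handed to the transition layer.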



Key words: breast cancer pathological image; image classification; attention mechanism; feature fusion; multi-scale features
Received: 25 October 2022      Published: 17 July 2023
CLC:  TP 391.41  
Corresponding Authors: Feng YE     E-mail: whatfyh@126.com;yefeng@zjut.edu.cn
Cite this article:

Yuhua FANG,Feng YE. MFDC-Net: A breast cancer pathological image classification algorithm incorporating multi-scale feature fusion and attention mechanism. Journal of Zhejiang University (Science Edition), 2023, 50(4): 455-464.

URL:

https://www.zjujournals.com/sci/EN/Y2023/V50/I4/455


MFDC-Net: A breast cancer pathological image classification algorithm fusing multi-scale features and an attention mechanism

Breast cancer is one of the most common malignant tumors worldwide. Diagnosis by traditional methods takes a great deal of time and effort and is strongly affected by individual ability. Computer-aided diagnosis can improve the accuracy and efficiency of pathological image classification and thus meet the demands of clinical applications. To this end, a DenseNet-based breast cancer pathological image classification algorithm fusing multi-scale features and an attention mechanism (MFDC-Net) is proposed. A coordinate attention mechanism is introduced into the dense blocks to precisely locate the spatial information of important features. A multi-scale pooling transition layer uses average pooling and normal convolutions with different kernel sizes to reduce the dimension while enlarging the receptive field. A multi-scale feature enhancement module fuses deep image features to improve classification performance. The results show that the MFDC-Net model outperforms other classic models, with a classification accuracy of 97.12% and an easily-confused rate as low as 3.34%; it classifies breast cancer histopathological images well and provides an important basis for diagnosis and treatment.
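For orientation, the following PyTorch sketch shows one plausible organization of the multi-scale pooling transition layer (MTL) and the multi-scale feature extraction (MFE) module described above. The branch layout, channel split, dilation rates and names are assumptions for illustration only, since the abstract states just that average pooling, normal convolutions with kernels k1 and k2, and dilated convolutions are combined.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiScaleTransition(nn.Module):
    """Multi-scale pooling transition layer (sketch): parallel 1x1, k1 x k1
    and k2 x k2 convolutions plus an average-pooling branch reduce the channel
    dimension and enlarge the receptive field; the outputs are concatenated
    and spatially downsampled."""
    def __init__(self, in_ch: int, out_ch: int, k1: int = 3, k2: int = 5):
        super().__init__()
        mid = out_ch // 4  # assumption: channels split evenly over the 4 branches
        def conv(k):
            return nn.Sequential(
                nn.Conv2d(in_ch, mid, kernel_size=k, padding=k // 2, bias=False),
                nn.BatchNorm2d(mid),
                nn.ReLU(inplace=True),
            )
        self.b1, self.b2, self.b3 = conv(1), conv(k1), conv(k2)
        self.b4 = nn.Sequential(nn.AvgPool2d(3, stride=1, padding=1), conv(1))
        self.down = nn.AvgPool2d(2, stride=2)  # halve H and W, as in DenseNet transitions

    def forward(self, x):
        y = torch.cat([self.b1(x), self.b2(x), self.b3(x), self.b4(x)], dim=1)
        return self.down(y)

class MultiScaleFeatureExtraction(nn.Module):
    """Multi-scale feature extraction module (sketch): a normal convolution,
    dilated convolutions with different rates and a global average-pooling
    branch capture context at several scales before a 1x1 fusion."""
    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        mid = out_ch // 4
        self.b1 = nn.Conv2d(in_ch, mid, 3, padding=1, bias=False)              # normal conv
        self.b2 = nn.Conv2d(in_ch, mid, 3, padding=2, dilation=2, bias=False)  # dilated, rate 2
        self.b3 = nn.Conv2d(in_ch, mid, 3, padding=4, dilation=4, bias=False)  # dilated, rate 4
        self.b4 = nn.Sequential(nn.AdaptiveAvgPool2d(1),                       # global context
                                nn.Conv2d(in_ch, mid, 1, bias=False))
        self.fuse = nn.Sequential(nn.BatchNorm2d(4 * mid), nn.ReLU(inplace=True),
                                  nn.Conv2d(4 * mid, out_ch, 1, bias=False))

    def forward(self, x):
        h, w = x.shape[-2:]
        ctx = F.interpolate(self.b4(x), size=(h, w), mode="nearest")
        return self.fuse(torch.cat([self.b1(x), self.b2(x), self.b3(x), ctx], dim=1))
```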


Key words: breast cancer pathological image; image classification; attention mechanism; feature fusion; multi-scale features
Fig.1 Implementation procedure of coordinate attention mechanism
Fig.2 Overview of the MFDC-Net algorithm
Fig.3 Procedure of multi-scale pooling transition layer
Fig.4 Multi-scale feature extraction module
Fig.5 Dense block combined with coordinate attention
Fig.6 Histopathology image of breast tissue
Dataset            Class                mAP/%   Specificity/%   F1/%    ACC/%   EC/%
Before screening   Benign lesion        93.20   97.78           92.01   93.82   8.23
                   Carcinoma in situ    91.93   97.26           93.23
                   Invasive carcinoma   98.41   99.47           97.79
                   Normal tissue        91.82   97.26           92.26
After screening    Benign lesion        96.34   98.76           96.42   97.12   3.34
                   Carcinoma in situ    97.45   99.18           97.18
                   Invasive carcinoma   98.07   99.35           98.16
                   Normal tissue        96.65   98.88           96.73
Table 1 Classification performance of the breast tissue datasets
Model                          mAP/%   Specificity/%   F1/%    ACC/%   EC/%
Baseline model                 87.02   94.53           87.19   87.44   14.97
With MFE module                92.01   96.52           90.45   92.87   9.81
With CA and MFE modules        94.58   96.61           93.75   94.51   7.68
With MFE, MTL and CA modules   97.13   99.18           97.18   97.12   3.34
Table 2 Results of MFE module ablation experiments
Model                 mAP/%   Specificity/%   F1/%    ACC/%   EC/%
Baseline model        87.02   94.53           84.09   87.44   14.97
With SE mechanism     93.04   95.78           91.82   92.79   9.49
With CBAM mechanism   93.83   96.84           92.38   93.75   7.75
With CA mechanism     94.30   97.47           93.19   94.14   7.30
Table 3 Classification performance of different attention mechanisms
Kernel size    mAP/%   Specificity/%   F1/%    ACC/%   EC/%
k1=3, k2=3     94.82   97.88           94.02   94.87   6.17
k1=3, k2=5     95.72   98.66           95.13   95.85   5.52
k1=5, k2=3     94.96   98.22           94.88   94.96   6.03
k1=5, k2=5     95.23   98.52           94.95   95.36   5.60
Table 4 Classification performance of different convolution kernels in the multi-scale pooling transition layer
Model          ACC/%   Specificity/%   F1/%
ResNet-50      86.95   91.32           89.20
Inception-V3   85.87   88.28           88.95
MobileNet-V2   89.52   92.43           89.89
MFDC-Net       97.12   99.18           97.18
Table 5 Comparison results with classic models
Fig.7 Comparison of confusion matrix
Fig.8 Visualization results