Chest X-ray imaging disease diagnosis model assisted by deformable Transformer
Jin-bo HU1, Wei-zhi NIE1, Dan SONG1,*, Zhuo GAO2, Yun-peng BAI3, Feng ZHAO3
1. School of Electrical and Information Engineering, Tianjin University, Tianjin 300072, China; 2. School of Information, Changchun Polytechnic, Changchun 130033, China; 3. Department of Cardiovascular Surgery, Tianjin Chest Hospital, Tianjin 300222, China
Abstract A disease diagnosis model for chest X-ray images assisted by a deformable Transformer was proposed to address the gray-fog phenomenon and overlapping lesion areas in chest X-ray images. The extended residual network ResNet50 was used as the feature extraction network, and a compressed dual attention module was added to enhance the feature difference between lesion and non-lesion areas, reducing the interference of redundant information and improving feature extraction from the image data. Through the cross-attention module inside the deformable Transformer decoder, category representations were introduced as prior knowledge to guide the further fusion of image features and to improve the feature discrimination of different diseases when image regions overlap. The output of the decoder was passed to the classifier to obtain the final diagnosis. Both the compressed dual attention module and the deformable Transformer reduce the computational complexity of the model, and an asymmetric loss function was introduced to address the imbalance between positive and negative samples. The proposed model was evaluated in multiple sets of experiments on the public datasets ChestX-Ray14 and CheXpert, where the area under the ROC curve (AUC) reached 0.839 8 and 0.906 1 respectively, indicating the correctness and effectiveness of the model for disease diagnosis on chest X-ray images.
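To make the described pipeline concrete, the following is a minimal PyTorch-style sketch of a query-based multi-label classification head of the kind outlined in the abstract: a ResNet50 backbone produces a spatial feature map, learnable per-disease query embeddings (the "category representations" used as prior knowledge) are fused with the image tokens through a transformer decoder's cross-attention, and the refined queries are scored by a classifier trained with an asymmetric loss. This is an illustrative sketch, not the authors' implementation: standard multi-head attention stands in for deformable attention, the compressed dual attention module is omitted, and all names and hyperparameters (num_classes=14, d_model=256, the ASL settings gamma_neg=4, gamma_pos=0, clip=0.05) are assumptions.

```python
import torch
import torch.nn as nn
import torchvision


class AsymmetricLoss(nn.Module):
    """Asymmetric loss for multi-label classification (Ben-Baruch et al. [7]):
    a stronger focusing exponent and a probability margin on the negative term
    ease the positive/negative imbalance typical of chest X-ray labels."""

    def __init__(self, gamma_neg=4.0, gamma_pos=0.0, clip=0.05, eps=1e-8):
        super().__init__()
        self.gamma_neg, self.gamma_pos, self.clip, self.eps = gamma_neg, gamma_pos, clip, eps

    def forward(self, logits, targets):
        p = torch.sigmoid(logits)
        # Positive term: mild (often zero) focal weighting.
        loss_pos = targets * (1 - p) ** self.gamma_pos * torch.log(p.clamp(min=self.eps))
        # Negative term: shift probabilities by the margin, then focus strongly.
        p_m = (p - self.clip).clamp(min=0)
        loss_neg = (1 - targets) * p_m ** self.gamma_neg * torch.log((1 - p_m).clamp(min=self.eps))
        return -(loss_pos + loss_neg).mean()


class ChestXrayQueryClassifier(nn.Module):
    """Hypothetical query-based classifier: per-disease query embeddings attend
    to CNN image tokens via a transformer decoder; each refined query yields one
    disease logit."""

    def __init__(self, num_classes=14, d_model=256, nhead=8, num_layers=2):
        super().__init__()
        backbone = torchvision.models.resnet50(weights=None)  # pretrained weights would normally be loaded
        self.backbone = nn.Sequential(*list(backbone.children())[:-2])  # keep the spatial feature map
        self.proj = nn.Conv2d(2048, d_model, kernel_size=1)             # reduce channel dimension
        self.class_queries = nn.Parameter(torch.randn(num_classes, d_model))  # category representations
        layer = nn.TransformerDecoderLayer(d_model, nhead, batch_first=True)
        self.decoder = nn.TransformerDecoder(layer, num_layers)
        self.head = nn.Linear(d_model, 1)                               # one logit per refined query

    def forward(self, x):
        feat = self.proj(self.backbone(x))                       # (B, d, H, W)
        tokens = feat.flatten(2).transpose(1, 2)                 # (B, H*W, d) image tokens
        queries = self.class_queries.unsqueeze(0).expand(x.size(0), -1, -1)  # (B, C, d)
        refined = self.decoder(queries, tokens)                  # cross-attention fuses image features
        return self.head(refined).squeeze(-1)                    # (B, C) multi-label logits


# Usage sketch: one training step on dummy data.
model = ChestXrayQueryClassifier()
criterion = AsymmetricLoss()
images = torch.randn(2, 3, 224, 224)
labels = torch.randint(0, 2, (2, 14)).float()
loss = criterion(model(images), labels)
loss.backward()
```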
Received: 01 September 2022
Published: 18 October 2023
Fund: National Natural Science Foundation of China (61902277, 62272337)
Corresponding Author: Dan SONG
E-mail: hjb@tju.edu.cn; dan.song@tju.edu.cn
Keywords:
chest X-ray image classification,
deformable Transformer,
compressed dual attention,
asymmetric loss function,
prior knowledge