Journal of ZheJiang University (Engineering Science)  2025, Vol. 59 Issue (9): 1826-1837    DOI: 10.3785/j.issn.1008-973X.2025.09.006
Multi-scale parallel magnetic resonance imaging reconstruction based on variational model and Transformer
Jizhong DUAN, Haiyuan LI
Faculty of Information Engineering and Automation, Kunming University of Science and Technology, Kunming 650504, China

Abstract  

A multi-scale parallel MRI reconstruction model based on a variational model and Transformer (VNTM) was proposed to enhance the quality of MR images reconstructed from undersampled multi-coil data. First, coil sensitivity maps were estimated from the undersampled multi-coil k-space data, with a mid-stage enhancement strategy applied to improve the accuracy of these maps. Next, the undersampled multi-coil k-space data and the estimated sensitivity maps were input into a variational model for reconstruction. Within the variational model, a pre-processing module reduced the spatial resolution to lower the computational load, and a multi-scale U-shaped network with Transformer blocks then effectively fused multi-scale features. Finally, a post-processing module restored the resolution, and a data-consistency operation was applied to the output to ensure fidelity. Extensive quantitative and qualitative experiments were conducted on publicly available datasets to validate the effectiveness of the proposed method. The experimental results indicate that the proposed reconstruction model achieves superior reconstruction quality and more stable performance in terms of peak signal-to-noise ratio (PSNR), structural similarity (SSIM), and visual quality. In addition, a series of ablation studies and robustness evaluations with varying auto-calibration signal (ACS) region sizes confirmed that VNTM maintains consistently high reconstruction performance under diverse conditions.
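For readers unfamiliar with unrolled variational reconstruction, the sketch below illustrates the kind of cascade VNTM unrolls: a SENSE-style forward operator, a gradient step enforcing data consistency, and a learned regulariser. This is a minimal PyTorch illustration under stated assumptions, not the authors' code; the plain `regulariser` callable stands in for the paper's multi-scale U-shaped Transformer, and FFT centering shifts are omitted for brevity.

```python
import torch
import torch.fft as fft

def sense_forward(x, sens, mask):
    """A(x): coil-combined image -> masked multi-coil k-space.
    x: [B, H, W] complex image; sens: [B, C, H, W] sensitivity maps."""
    return mask * fft.fft2(sens * x.unsqueeze(1), norm="ortho")

def sense_adjoint(k, sens, mask):
    """A^H(k): masked multi-coil k-space -> coil-combined image."""
    return (sens.conj() * fft.ifft2(mask * k, norm="ortho")).sum(dim=1)

def cascade(x, y, sens, mask, regulariser, eta):
    """One unrolled step: data-consistency gradient plus a learned prior.
    `regulariser` stands in for the multi-scale U-shaped Transformer."""
    dc = sense_adjoint(sense_forward(x, sens, mask) - y, sens, mask)
    return x - eta * dc - regulariser(x)
```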



Key words: parallel magnetic resonance imaging; deep learning; image reconstruction; sensitivity map; wavelet transforms
Received: 29 August 2024      Published: 25 August 2025
CLC:  TP 391  
Fund: Regional Science Fund Project of the National Natural Science Foundation of China (61861023); Yunnan Fundamental Research Program (202301AT070452).
Cite this article:

Jizhong DUAN,Haiyuan LI. Multi-scale parallel magnetic resonance imaging reconstruction based on variational model and Transformer. Journal of ZheJiang University (Engineering Science), 2025, 59(9): 1826-1837.

URL:

https://www.zjujournals.com/eng/10.3785/j.issn.1008-973X.2025.09.006     OR     https://www.zjujournals.com/eng/Y2025/V59/I9/1826


Fig.1 Overview of VNTM network architecture
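The keywords list wavelet transforms, and Fig.1's pre-/post-processing modules halve and then restore the spatial resolution. One plausible lossless realisation, sketched below with PyWavelets (the Haar wavelet and the library are assumptions, not confirmed by the paper), packs the four DWT sub-bands into channels so the network operates at half resolution without discarding information.

```python
import numpy as np
import pywt

def wavelet_down(img):
    """H x W -> 4 x H/2 x W/2: stack the 2-D DWT sub-bands as channels."""
    cA, (cH, cV, cD) = pywt.dwt2(img, "haar")
    return np.stack([cA, cH, cV, cD])

def wavelet_up(bands):
    """Inverse: 4 x H/2 x W/2 -> H x W, exactly restoring resolution."""
    cA, cH, cV, cD = bands
    return pywt.idwt2((cA, (cH, cV, cD)), "haar")

img = np.random.rand(320, 320)
assert np.allclose(wavelet_up(wavelet_down(img)), img)  # perfect reconstruction
```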
Parameter | Value
Batch size | 1
Convolution kernel size | 3×3
Initial learning rate | 1×10⁻³
Minimum learning rate | 1×10⁻⁶
Epochs | 100
Number of cascades T | 8
Optimizer | Adam
Loss function | L1 loss
Early-stopping threshold | 30
Tab.1 VNTM model parameters
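A hedged sketch of a training loop matching Tab.1: the decay schedule between the initial and minimum learning rates is not specified, so `ReduceLROnPlateau` is an assumption, and the one-layer `model` is merely a stand-in for VNTM.

```python
import torch

# Stand-in module; the real model is VNTM with T = 8 cascades of 3x3 convolutions (Tab.1).
model = torch.nn.Conv2d(2, 2, kernel_size=3, padding=1)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)             # initial LR from Tab.1
# Tab.1 fixes only the LR bounds; the schedule type itself is an assumption.
scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(optimizer, min_lr=1e-6)
criterion = torch.nn.L1Loss()

best_loss, patience, wait = float("inf"), 30, 0                       # early stopping, threshold 30
for epoch in range(100):                                              # 100 epochs, batch size 1
    x, target = torch.randn(1, 2, 320, 320), torch.randn(1, 2, 320, 320)
    loss = criterion(model(x), target)
    optimizer.zero_grad(); loss.backward(); optimizer.step()
    val_loss = loss.item()                                            # placeholder for validation loss
    scheduler.step(val_loss)
    if val_loss < best_loss:
        best_loss, wait = val_loss, 0
    else:
        wait += 1
        if wait >= patience:
            break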
Sampling mask | ACS size | Sampling mask | ACS size
3× 1DRU | 320×20 | 10× 2DRU | 3×6
5× 1DRU | 320×20 | 5× RADU | 29×29
3× 1DUU | 320×21 | 10× RADU | 19×13
5× 1DUU | 320×22 | 5× 2DPU | 20×20
5× 2DRU | 8×12 | 10× 2DPU | 20×20
Tab.2 ACS sizes of different undersampling masks
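For the 1-D masks in Tab.2, the ACS region is a fully sampled band of central k-space columns. A NumPy sketch of a 1DRU-style mask follows; the exact line-selection rule used in the paper is an assumption.

```python
import numpy as np

def mask_1d_random_uniform(h, w, accel, acs_w, seed=0):
    """1DRU-style mask: fully sampled ACS columns plus uniformly random columns."""
    rng = np.random.default_rng(seed)
    mask = np.zeros((h, w), dtype=bool)
    c = w // 2
    mask[:, c - acs_w // 2 : c + (acs_w + 1) // 2] = True  # ACS band, e.g. 320x20 for 3x (Tab.2)
    n_extra = w // accel - acs_w                            # remaining budget for the target acceleration
    rest = np.setdiff1d(np.arange(w), np.flatnonzero(mask[0]))
    mask[:, rng.choice(rest, size=max(n_extra, 0), replace=False)] = True
    return mask

m = mask_1d_random_uniform(320, 320, accel=3, acs_w=20)
print(m.mean())  # about 1/3 of k-space columns sampled
```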
评价指标方法3× 1DRU5× 1DRU3× 1DUU5× 1DUU5× 2DRU10× 2DRU5× RADU10× RADU5× 2DPU10× 2DPU
PSNR/dBE2E-VN36.3934.7534.3332.2636.2733.1335.8631.8234.6432.11
RecurrentVN37.6135.4635.1732.7136.4433.0036.0833.9535.7534.00
Deep-SLR33.7532.2231.0528.7436.6834.2336.6433.2035.9634.29
Deepcomplex36.5533.9533.4330.1338.7336.1938.2934.7938.8236.64
DONet36.8534.5834.0831.3139.0236.3738.5235.0839.3436.88
SwinMR33.9533.1032.8931.9935.8632.8536.6732.8335.1333.65
VNTM37.7935.8036.3633.5739.2236.7638.9135.5239.7637.37
SSIME2E-VN0.9380.9160.9210.8860.9400.9040.9340.8800.9330.898
RecurrentVN0.9490.9200.9260.8850.9370.8990.9310.8960.9400.907
Deep-SLR0.9020.8680.8740.8110.9380.9050.9370.8820.9350.902
Deepcomplex0.9380.8980.8990.8390.9530.9230.9490.9020.9560.929
DONet0.9410.9080.9100.8610.9550.9250.9500.9060.9590.932
SwinMR09080.8860.8930.8680.9230.8840.9270.8660.9190.887
VNTM0.9490.9210.9350.8930.9560.9300.9520.9110.9620.937
Tab.3 Evaluation metrics for reconstruction results of different methods on knee data (Coronal-PD)
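The PSNR and SSIM values in Tabs.3-5 can be computed, for instance, with scikit-image on magnitude images; the paper's evaluation code is not shown on this page, so the data-range handling below is an assumption.

```python
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def evaluate(recon, ref):
    """PSNR (dB) and SSIM on magnitude images, data range taken from the reference."""
    recon, ref = np.abs(recon), np.abs(ref)
    rng = ref.max()
    return (peak_signal_noise_ratio(ref, recon, data_range=rng),
            structural_similarity(ref, recon, data_range=rng))
```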
评价指标方法3× 1DRU5× 1DRU3× 1DUU5× 1DUU5× 2DRU10× 2DRU5× RADU10× RADU5× 2DPU10× 2DPU
PSNR/dBE2E-VN40.0937.3639.1136.1237.5435.2037.8735.8836.8336.77
RecurrentVN40.3238.0639.4636.3239.1136.9237.6236.0737.1534.73
Deep-SLR36.4735.2634.5532.7937.7235.2537.2934.4537.6336.04
Deepcomplex38.5836.0237.2232.9238.2736.1938.2535.1438.5636.90
DONet38.8136.3537.4733.9738.4436.2638.4735.3338.6136.93
SwinMR36.3135.1635.9134.0436.9134.6337.9934.3136.3531.41
VNTM40.4438.2239.6736.7139.9938.0339.9637.0340.0438.56
SSIME2E-VN0.9720.9590.9680.9510.9640.9520.9660.9520.9620.956
RecurrentVN0.9730.9620.9690.9510.9680.9580.9640.9520.9620.947
Deep-SLR0.9590.9460.9430.9210.9630.9470.9600.9380.9620.949
Deepcomplex0.9660.9500.9570.9200.9640.9510.9630.9420.9640.952
DONet0.9670.9510.9590.9300.9650.9520.9640.9440.9640.953
SwinMR0.9500.9400.9480.9300.9520.9340.9590.9320.9470.913
VNTM0.9730.9620.9690.9520.9710.9620.9700.9550.9710.963
Tab.5 Evaluation metrics for reconstruction results of different networks on fastMRI-T2Brain
评价指标方法3× 1DRU5× 1DRU3× 1DUU5× 1DUU5× 2DRU10× 2DRU5× RADU10× RADU5× 2DPU10× 2DPU
PSNR/dBE2E-VN36.6434.2536.2732.5237.9535.1836.6833.1236.1733.49
RecurrentVN38.8334.3136.7532.8338.4435.5237.4733.7937.3136.40
Deep-SLR36.2733.9134.6930.4138.6836.0637.9234.3438.4035.98
Deepcomplex37.6735.0235.8631.0639.9337.2938.9735.2439.8737.78
DONet38.2235.7536.5032.0739.9537.5839.1735.6340.3238.00
SwinMR33.9932.6233.4630.8136.3433.2636.9332.9035.3333.79
VNTM39.8337.0838.5333.3340.7638.1839.9036.1841.0838.85
SSIME2E-VN0.9400.9110.9370.8750.9490.9210.9420.8910.9440.913
RecurrentVN0.9540.9020.9380.8750.9490.9200.9390.8860.9440.926
Deep-SLR0.9300.8910.9110.8190.9490.9190.9420.8870.9480.916
Deepcomplex0.9440.9080.9250.8370.9590.9330.9510.9020.9590.937
DONet0.9470.9160.9310.8560.9590.9360.9520.9080.9620.939
SwinMR0.8990.8690.8940.8300.9250.8880.9250.8600.9150.882
VNTM0.9590.9310.9490.8810.9640.9420.9570.9160.9660.946
Tab.4 Evaluation metrics for reconstruction results of different networks on knee data (Sagittal-PD)
Fig.2 Overall comparison of PSNR performance of different algorithms across multiple datasets
Fig.3 Overall comparison of SSIM performance of different algorithms across multiple datasets
Fig.4 Visual comparison of seven networks on knee data (Coronal-PD) under 3× acceleration with a 1DUU mask
Fig.5 Visual comparison of seven networks on knee data (Sagittal-PD) under 3× acceleration with a 1DRU mask
Fig.6 Visual comparison of seven networks on fastMRI-T2Brain under 5× acceleration with a RADU sampling mask
Method | PSNR/dB | SSIM
Baseline | 36.27 | 0.940
VNTM-A | 36.88 | 0.942
VNTM-B | 37.61 | 0.946
VNTM-C | 39.17 | 0.955
VNTM-D | 37.90 | 0.947
VNTM-E | 38.91 | 0.953
VNTM (ours) | 39.22 | 0.956
Tab.6 Results of VNTM ablation experiment (ablated components: DC, ρ, mid-stage enhancement, and the UMRB's pre-/post-processing and UMRNet)
Method | PSNR/dB | SSIM
VNTM-T/4 | 39.15 | 0.955
VNTM-T/2 | 39.22 | 0.956
VNTM-3T/4 | 39.14 | 0.955
Tab.7 Results of experiments on mid-stage and non-mid-stage enhancement of sensitivity maps
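Tab.7 compares refreshing the sensitivity maps at stages T/4, T/2 (mid-stage), and 3T/4 of the unrolling. Schematically, with `refine_sens` a hypothetical name for the paper's enhancement module:

```python
def unroll(x, y, sens, mask, cascades, refine_sens):
    """Run T cascades, refreshing the sensitivity maps once at stage T/2."""
    T = len(cascades)
    for t, step in enumerate(cascades):
        if t == T // 2:              # the T/4 and 3T/4 variants move this check
            sens = refine_sens(y, sens)
        x = step(x, y, sens, mask)
    return x
```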
Method | FLOPs/10¹² | Params/10⁸ | tI/s
VNTM-E | 1.28 | 1.27 | 0.417
VNTM-F | 0.50 | 1.28 | 0.144
Tab.8 Comparison of computational efficiency before and after module integration
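Parameter counts and inference times of the kind reported in Tab.8 can be measured as sketched below (FLOPs counting usually requires a profiler such as torch.profiler or an external counter, so only the other two columns are shown; `model` and `example` are hypothetical).

```python
import time
import torch

def efficiency(model, example):
    """Return parameter count (report as Params/1e8) and single-pass inference time (tI/s)."""
    params = sum(p.numel() for p in model.parameters())
    model.eval()
    with torch.no_grad():
        model(example)                       # warm-up pass
        t0 = time.perf_counter()
        model(example)
        t_inf = time.perf_counter() - t0
    return params, t_inf
```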
Method | T | PSNR/dB | SSIM
VNTM | 2 | 38.69 | 0.953
VNTM | 4 | 39.03 | 0.955
VNTM | 6 | 39.16 | 0.955
VNTM | 8 | 39.22 | 0.956
VNTM | 10 | 39.14 | 0.955
VNTM | 12 | 39.25 | 0.956
Tab.9 Ablation study results on effect of cascade number on model performance
Fig.7 Robustness experimental results of different methods for various ACS sizes
Fig.8 Visual comparison of 4×4 ACS region
Fig.9 Visual comparison of 20×20 ACS region
[1]   PRUESSMANN K P Encoding and reconstruction in parallel MRI[J]. NMR in Biomedicine, 2006, 19 (3): 288- 299
doi: 10.1002/nbm.1042
[2]   LARKMAN D J, NUNES R G Parallel magnetic resonance imaging[J]. Physics in Medicine and Biology, 2007, 52 (7): R15
doi: 10.1088/0031-9155/52/7/R01
[3]   LUSTIG M, PAULY J M SPIRiT: iterative self-consistent parallel imaging reconstruction from arbitrary k-space[J]. Magnetic Resonance in Medicine, 2010, 64 (2): 457- 471
doi: 10.1002/mrm.22428
[4]   DONOHO D L Compressed sensing[J]. IEEE Transactions on Information Theory, 2006, 52 (4): 1289- 1306
doi: 10.1109/TIT.2006.871582
[5]   LUSTIG M, DONOHO D, PAULY J M Sparse MRI: the application of compressed sensing for rapid MR imaging[J]. Magnetic Resonance in Medicine, 2007, 58 (6): 1182- 1195
doi: 10.1002/mrm.21391
[6]   PRUESSMANN K P, WEIGER M, SCHEIDEGGER M B, et al SENSE: sensitivity encoding for fast MRI[J]. Magnetic Resonance in Medicine, 1999, 42 (5): 952- 962
doi: 10.1002/(SICI)1522-2594(199911)42:5<952::AID-MRM16>3.0.CO;2-S
[7]   UECKER M, LAI P, MURPHY M J, et al ESPIRiT: an eigenvalue approach to autocalibrating parallel MRI: where SENSE meets GRAPPA[J]. Magnetic Resonance in Medicine, 2014, 71 (3): 990- 1001
doi: 10.1002/mrm.24751
[8]   LUSTIG M, DONOHO D L, SANTOS J M, et al Compressed sensing MRI[J]. IEEE Signal Processing Magazine, 2008, 25 (2): 72- 82
doi: 10.1109/MSP.2007.914728
[9]   RUDIN L I, OSHER S, FATEMI E Nonlinear total variation based noise removal algorithms[J]. Physica D: Nonlinear Phenomena, 1992, 60 (1/2/3/4): 259- 268
[10]   RAVISHANKAR S, BRESLER Y. Sparsifying transform learning for compressed sensing MRI [C]// IEEE 10th International Symposium on Biomedical Imaging. San Francisco: IEEE, 2013: 17–20.
[11]   ZHANG K, ZUO W, CHEN Y, et al Beyond a Gaussian denoiser: residual learning of deep CNN for image denoising[J]. IEEE Transactions on Image Processing, 2017, 26 (7): 3142- 3155
doi: 10.1109/TIP.2017.2662206
[12]   LIU G, REDA F A, SHIH K J, et al. Image inpainting for irregular holes using partial convolutions [C]// European Conference on Computer Vision. Cham: Springer International Publishing, 2018: 89–105.
[13]   LIM B, SON S, KIM H, et al. Enhanced deep residual networks for single image super-resolution [C]// Computer Vision and Pattern Recognition Workshops. Honolulu: IEEE, 2017: 1132–1140.
[14]   WU Z, LIAO W, YAN C, et al Deep learning based MRI reconstruction with transformer[J]. Computer Methods and Programs in Biomedicine, 2023, 233: 107452
doi: 10.1016/j.cmpb.2023.107452
[15]   HAMMERNIK K, KLATZER T, KOBLER E, et al Learning a variational network for reconstruction of accelerated MRI data[J]. Magnetic Resonance in Medicine, 2018, 79 (6): 3055- 3071
doi: 10.1002/mrm.26977
[16]   SRIRAM A, ZBONTAR J, MURRELL T, et al. End-to-end variational networks for accelerated MRI reconstruction [C]// Medical Image Computing and Computer Assisted Intervention. Cham: Springer, 2020: 64–73.
[17]   YIASEMIS G, SONKE J J, SÁNCHEZ C, et al. Recurrent variational network: a deep learning inverse problem solver applied to the task of accelerated MRI reconstruction [C] // IEEE/CVF Conference on Computer Vision and Pattern Recognition. New Orleans: IEEE, 2022: 722–731.
[18]   GUO P, MEI Y, ZHOU J, et al ReconFormer: accelerated MRI reconstruction using recurrent transformer[J]. IEEE Transactions on Medical Imaging, 2024, 43 (1): 582- 593
doi: 10.1109/TMI.2023.3314747
[19]   HUANG J, XING X, GAO Z, et al. Swin deformable attention U-Net transformer (SDAUT) for explainable fast MRI [C]// Medical Image Computing and Computer Assisted Intervention. Cham: Springer, 2022: 538–548.
[20]   HUANG J, FANG Y, WU Y, et al Swin transformer for fast MRI[J]. Neurocomputing, 2022, 493: 281- 304
[21]   WANG B, LIAN Y, XIONG X, et al DCT-net: dual-domain cross-fusion transformer network for MRI reconstruction[J]. Magnetic Resonance Imaging, 2024, 107: 69- 79
doi: 10.1016/j.mri.2024.01.007
[22]   LIU X, XU W, YE X. The ill-posed problem and regularization in parallel magnetic resonance imaging [C]// 3rd International Conference on Bioinformatics and Biomedical Engineering. Beijing: IEEE, 2009: 1–4.
[23]   ROEMER P B, EDELSTEIN W A, HAYES C E, et al The NMR phased array[J]. Magnetic Resonance in Medicine, 1990, 16 (2): 192- 225
doi: 10.1002/mrm.1910160203
[24]   LIANG J, CAO J, SUN G, et al. SwinIR: image restoration using swin transformer [C]// International Conference on Computer Vision Workshops. Montreal: IEEE, 2021: 1833–1844.
[25]   FABIAN Z, TINAZ B, SOLTANOLKOTABI M. HUMUS-Net: hybrid unrolled multi-scale network architecture for accelerated MRI reconstruction [EB/OL]. (2023-03-17)[2024-08-01]. https://arxiv.org/abs/2203.08213.
[26]   LAI Z, QU X, LIU Y, et al Image reconstruction of compressed sensing MRI using graph-based redundant wavelet transform[J]. Medical Image Analysis, 2016, 27: 93- 104
doi: 10.1016/j.media.2015.05.012
[27]   KNOLL F, ZBONTAR J, SRIRAM A, et al fastMRI: a publicly available raw k-space and DICOM dataset of knee images for accelerated MR image reconstruction using machine learning[J]. Radiology: Artificial Intelligence, 2020, 2 (1): e190007
doi: 10.1148/ryai.2020190007
[28]   PASZKE A, GROSS S, MASSA F, et al. PyTorch: an imperative style, high-performance deep learning library [EB/OL]. (2019-12-03)[2024-08-01]. https://arxiv.org/abs/1912.01703.
[29]   HORÉ A, ZIOU D. Image quality metrics: PSNR vs. SSIM [C]// 20th International Conference on Pattern Recognition. Istanbul: IEEE, 2010: 2366–2369.
[30]   WANG Z, BOVIK A C, SHEIKH H R, et al Image quality assessment: from error visibility to structural similarity[J]. IEEE Transactions on Image Processing, 2004, 13 (4): 600- 612
doi: 10.1109/TIP.2003.819861
[31]   PRAMANIK A, AGGARWAL H K, JACOB M Deep generalization of structured low-rank algorithms (deep-SLR)[J]. IEEE Transactions on Medical Imaging, 2020, 39 (12): 4186- 4197
doi: 10.1109/TMI.2020.3014581
[32]   WANG S, CHENG H, YING L, et al DeepcomplexMRI: exploiting deep residual network for fast parallel MR imaging with complex convolution[J]. Magnetic Resonance Imaging, 2020, 68: 136- 147
doi: 10.1016/j.mri.2020.02.002