Journal of Zhejiang University (Engineering Science)  2025, Vol. 59 Issue (9): 1826-1837    DOI: 10.3785/j.issn.1008-973X.2025.09.006
Computer Technology
Multi-scale parallel magnetic resonance imaging reconstruction based on variational model and Transformer
Jizhong DUAN(),Haiyuan LI
Faculty of Information Engineering and Automation, Kunming University of Science and Technology, Kunming 650504, China
Abstract:

A multi-scale parallel MRI reconstruction model based on a variational model and Transformer (VNTM) was proposed to enhance the quality of MR images reconstructed from undersampled multi-coil data. First, the undersampled multi-coil k-space data were used to estimate sensitivity maps, with an intermediate-stage enhancement strategy applied to improve the accuracy of these maps. Next, the undersampled multi-coil k-space data and the estimated sensitivity maps were input into a variational model for reconstruction. Within the variational model, resolution was reduced by a pre-processing module to lower the computational load, and multi-scale features were then effectively fused by a multi-scale U-shaped network incorporating Transformer blocks. Finally, a post-processing module restored the resolution, and a data-consistency operation was applied to the output to ensure fidelity. Extensive quantitative and qualitative experiments were conducted on publicly available datasets to validate the effectiveness of the proposed method. The results show that the proposed model achieved superior reconstruction quality and more stable performance in terms of peak signal-to-noise ratio, structural similarity, and visual quality. In addition, a series of ablation studies and robustness evaluations with varying auto-calibration signal (ACS) region sizes confirmed that VNTM maintains consistently high reconstruction performance under diverse conditions.
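The unrolled pipeline described in the abstract — coil combination with sensitivity maps, a learned refinement step, and a data-consistency projection, repeated over T cascades — can be sketched in simplified form. This is an illustrative outline only, not the authors' implementation: the multi-scale U-shaped Transformer is replaced by a generic `denoiser` callable, the pre-/post-processing modules are omitted, and the helper names (`fft2c`, `sense_combine`, `data_consistency`) are assumptions of this sketch.

```python
import numpy as np

def fft2c(x):
    """Centered 2D FFT (orthonormal)."""
    return np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(x), norm="ortho"))

def ifft2c(k):
    """Centered 2D inverse FFT (orthonormal)."""
    return np.fft.fftshift(np.fft.ifft2(np.fft.ifftshift(k), norm="ortho"))

def data_consistency(k_pred, k_measured, mask):
    """Hard data consistency: keep measured k-space samples where sampled."""
    return np.where(mask, k_measured, k_pred)

def sense_combine(coil_images, sens_maps):
    """Coil combination with sensitivity maps: x = sum_c conj(S_c) * x_c."""
    return np.sum(np.conj(sens_maps) * coil_images, axis=0)

def sense_expand(image, sens_maps):
    """Project a combined image back to coil images: x_c = S_c * x."""
    return sens_maps * image[None, ...]

def unrolled_reconstruction(k_measured, mask, sens_maps, denoiser, T=8):
    """Toy unrolled loop: alternate a learned refinement (stand-in `denoiser`)
    with a data-consistency step, for T cascades."""
    coil_imgs = np.stack([ifft2c(k) for k in k_measured])
    x = sense_combine(coil_imgs, sens_maps)
    for _ in range(T):
        x = denoiser(x)  # stand-in for the multi-scale U-shaped Transformer
        k = np.stack([fft2c(c) for c in sense_expand(x, sens_maps)])
        k = data_consistency(k, k_measured, mask)
        coil_imgs = np.stack([ifft2c(kc) for kc in k])
        x = sense_combine(coil_imgs, sens_maps)
    return x
```

With fully sampled data, normalized sensitivity maps, and an identity denoiser, this loop returns the ground-truth image exactly, which is a useful sanity check for the data-consistency and coil-combination steps.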

Key words: parallel magnetic resonance imaging    deep learning    image reconstruction    sensitivity map    wavelet transforms
Received: 2024-08-29    Published: 2025-08-25
CLC:  TP 391  
Funding: National Natural Science Foundation of China Regional Science Fund (61861023); Yunnan Fundamental Research Project (202301AT070452).
About the author: DUAN Jizhong (b. 1984), male, associate professor, Ph.D., whose research interests include image processing and deep learning. orcid.org/0000-0002-5854-6239. E-mail: duanjz@kust.edu.cn
Cite this article:

Jizhong DUAN, Haiyuan LI. Multi-scale parallel magnetic resonance imaging reconstruction based on variational model and Transformer. Journal of Zhejiang University (Engineering Science), 2025, 59(9): 1826-1837.

Link to this article:

https://www.zjujournals.com/eng/CN/10.3785/j.issn.1008-973X.2025.09.006        https://www.zjujournals.com/eng/CN/Y2025/V59/I9/1826

Fig. 1  Schematic diagram of the VNTM network architecture
Parameter | Value
Batch size | 1
Convolution kernel size | 3×3
Initial learning rate | 1×10⁻³
Minimum learning rate | 1×10⁻⁶
Epochs | 100
Number of cascades T | 8
Optimizer | Adam
Loss function | L1 loss
Early-stopping threshold | 30

Table 1  VNTM model parameters
Sampling mask | ACS size
3× 1DRU | 320×20
5× 1DRU | 320×20
3× 1DUU | 320×21
5× 1DUU | 320×22
5× 2DRU | 8×12
10× 2DRU | 3×6
5× RADU | 29×29
10× RADU | 19×13
5× 2DPU | 20×20
10× 2DPU | 20×20

Table 2  ACS sizes of different undersampling masks
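The 1D undersampling masks with a fully sampled ACS block listed above can be illustrated with a minimal generator. This is a sketch under common conventions (random phase-encode columns, centered ACS region), not the exact procedure used in the paper; the function name `random_1d_mask` and its parameterization are assumptions of this example.

```python
import numpy as np

def random_1d_mask(shape, accel, acs_lines, seed=0):
    """1D random undersampling mask: fully sampled columns chosen at random,
    plus a fully sampled ACS block at the center of k-space."""
    ny, nx = shape
    rng = np.random.default_rng(seed)
    mask = np.zeros((ny, nx), dtype=bool)
    # fully sampled ACS region around the center columns
    c = nx // 2
    mask[:, c - acs_lines // 2 : c + (acs_lines + 1) // 2] = True
    # keep additional random columns until ~1/accel of all columns are sampled
    n_keep = nx // accel
    candidates = np.flatnonzero(~mask[0])
    extra = max(n_keep - acs_lines, 0)
    chosen = rng.choice(candidates, size=extra, replace=False)
    mask[:, chosen] = True
    return mask
```

For a 320×320 slice at 3× acceleration with a 20-line ACS (matching the "3× 1DRU / 320×20" entry above), roughly one third of the phase-encode columns end up sampled, and every row of the mask is identical.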
Metric | Method | 3× 1DRU | 5× 1DRU | 3× 1DUU | 5× 1DUU | 5× 2DRU | 10× 2DRU | 5× RADU | 10× RADU | 5× 2DPU | 10× 2DPU
PSNR/dB | E2E-VN | 36.39 | 34.75 | 34.33 | 32.26 | 36.27 | 33.13 | 35.86 | 31.82 | 34.64 | 32.11
PSNR/dB | RecurrentVN | 37.61 | 35.46 | 35.17 | 32.71 | 36.44 | 33.00 | 36.08 | 33.95 | 35.75 | 34.00
PSNR/dB | Deep-SLR | 33.75 | 32.22 | 31.05 | 28.74 | 36.68 | 34.23 | 36.64 | 33.20 | 35.96 | 34.29
PSNR/dB | Deepcomplex | 36.55 | 33.95 | 33.43 | 30.13 | 38.73 | 36.19 | 38.29 | 34.79 | 38.82 | 36.64
PSNR/dB | DONet | 36.85 | 34.58 | 34.08 | 31.31 | 39.02 | 36.37 | 38.52 | 35.08 | 39.34 | 36.88
PSNR/dB | SwinMR | 33.95 | 33.10 | 32.89 | 31.99 | 35.86 | 32.85 | 36.67 | 32.83 | 35.13 | 33.65
PSNR/dB | VNTM | 37.79 | 35.80 | 36.36 | 33.57 | 39.22 | 36.76 | 38.91 | 35.52 | 39.76 | 37.37
SSIM | E2E-VN | 0.938 | 0.916 | 0.921 | 0.886 | 0.940 | 0.904 | 0.934 | 0.880 | 0.933 | 0.898
SSIM | RecurrentVN | 0.949 | 0.920 | 0.926 | 0.885 | 0.937 | 0.899 | 0.931 | 0.896 | 0.940 | 0.907
SSIM | Deep-SLR | 0.902 | 0.868 | 0.874 | 0.811 | 0.938 | 0.905 | 0.937 | 0.882 | 0.935 | 0.902
SSIM | Deepcomplex | 0.938 | 0.898 | 0.899 | 0.839 | 0.953 | 0.923 | 0.949 | 0.902 | 0.956 | 0.929
SSIM | DONet | 0.941 | 0.908 | 0.910 | 0.861 | 0.955 | 0.925 | 0.950 | 0.906 | 0.959 | 0.932
SSIM | SwinMR | 0.908 | 0.886 | 0.893 | 0.868 | 0.923 | 0.884 | 0.927 | 0.866 | 0.919 | 0.887
SSIM | VNTM | 0.949 | 0.921 | 0.935 | 0.893 | 0.956 | 0.930 | 0.952 | 0.911 | 0.962 | 0.937

Table 3  Evaluation metrics of reconstruction results of different methods on knee data (coronal proton-density-weighted sequence)
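PSNR, the first metric reported in the tables above, can be computed as follows. SSIM involves local windowed statistics and is usually taken from a library such as scikit-image, so only PSNR is sketched here; the `data_range` convention (peak-to-peak of the reference unless given explicitly) is an assumption of this sketch.

```python
import numpy as np

def psnr(ref, test, data_range=None):
    """Peak signal-to-noise ratio in dB between a reference and a test image."""
    ref = np.asarray(ref, dtype=np.float64)
    test = np.asarray(test, dtype=np.float64)
    if data_range is None:
        data_range = ref.max() - ref.min()
    mse = np.mean((ref - test) ** 2)  # mean squared error
    return 10.0 * np.log10(data_range ** 2 / mse)
```

For example, a uniform error of 0.1 on a unit-range image gives an MSE of 0.01 and hence a PSNR of 20 dB.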
Metric | Method | 3× 1DRU | 5× 1DRU | 3× 1DUU | 5× 1DUU | 5× 2DRU | 10× 2DRU | 5× RADU | 10× RADU | 5× 2DPU | 10× 2DPU
PSNR/dB | E2E-VN | 40.09 | 37.36 | 39.11 | 36.12 | 37.54 | 35.20 | 37.87 | 35.88 | 36.83 | 36.77
PSNR/dB | RecurrentVN | 40.32 | 38.06 | 39.46 | 36.32 | 39.11 | 36.92 | 37.62 | 36.07 | 37.15 | 34.73
PSNR/dB | Deep-SLR | 36.47 | 35.26 | 34.55 | 32.79 | 37.72 | 35.25 | 37.29 | 34.45 | 37.63 | 36.04
PSNR/dB | Deepcomplex | 38.58 | 36.02 | 37.22 | 32.92 | 38.27 | 36.19 | 38.25 | 35.14 | 38.56 | 36.90
PSNR/dB | DONet | 38.81 | 36.35 | 37.47 | 33.97 | 38.44 | 36.26 | 38.47 | 35.33 | 38.61 | 36.93
PSNR/dB | SwinMR | 36.31 | 35.16 | 35.91 | 34.04 | 36.91 | 34.63 | 37.99 | 34.31 | 36.35 | 31.41
PSNR/dB | VNTM | 40.44 | 38.22 | 39.67 | 36.71 | 39.99 | 38.03 | 39.96 | 37.03 | 40.04 | 38.56
SSIM | E2E-VN | 0.972 | 0.959 | 0.968 | 0.951 | 0.964 | 0.952 | 0.966 | 0.952 | 0.962 | 0.956
SSIM | RecurrentVN | 0.973 | 0.962 | 0.969 | 0.951 | 0.968 | 0.958 | 0.964 | 0.952 | 0.962 | 0.947
SSIM | Deep-SLR | 0.959 | 0.946 | 0.943 | 0.921 | 0.963 | 0.947 | 0.960 | 0.938 | 0.962 | 0.949
SSIM | Deepcomplex | 0.966 | 0.950 | 0.957 | 0.920 | 0.964 | 0.951 | 0.963 | 0.942 | 0.964 | 0.952
SSIM | DONet | 0.967 | 0.951 | 0.959 | 0.930 | 0.965 | 0.952 | 0.964 | 0.944 | 0.964 | 0.953
SSIM | SwinMR | 0.950 | 0.940 | 0.948 | 0.930 | 0.952 | 0.934 | 0.959 | 0.932 | 0.947 | 0.913
SSIM | VNTM | 0.973 | 0.962 | 0.969 | 0.952 | 0.971 | 0.962 | 0.970 | 0.955 | 0.971 | 0.963

Table 5  Evaluation metrics of reconstruction results of different methods on T2-weighted brain data
Metric | Method | 3× 1DRU | 5× 1DRU | 3× 1DUU | 5× 1DUU | 5× 2DRU | 10× 2DRU | 5× RADU | 10× RADU | 5× 2DPU | 10× 2DPU
PSNR/dB | E2E-VN | 36.64 | 34.25 | 36.27 | 32.52 | 37.95 | 35.18 | 36.68 | 33.12 | 36.17 | 33.49
PSNR/dB | RecurrentVN | 38.83 | 34.31 | 36.75 | 32.83 | 38.44 | 35.52 | 37.47 | 33.79 | 37.31 | 36.40
PSNR/dB | Deep-SLR | 36.27 | 33.91 | 34.69 | 30.41 | 38.68 | 36.06 | 37.92 | 34.34 | 38.40 | 35.98
PSNR/dB | Deepcomplex | 37.67 | 35.02 | 35.86 | 31.06 | 39.93 | 37.29 | 38.97 | 35.24 | 39.87 | 37.78
PSNR/dB | DONet | 38.22 | 35.75 | 36.50 | 32.07 | 39.95 | 37.58 | 39.17 | 35.63 | 40.32 | 38.00
PSNR/dB | SwinMR | 33.99 | 32.62 | 33.46 | 30.81 | 36.34 | 33.26 | 36.93 | 32.90 | 35.33 | 33.79
PSNR/dB | VNTM | 39.83 | 37.08 | 38.53 | 33.33 | 40.76 | 38.18 | 39.90 | 36.18 | 41.08 | 38.85
SSIM | E2E-VN | 0.940 | 0.911 | 0.937 | 0.875 | 0.949 | 0.921 | 0.942 | 0.891 | 0.944 | 0.913
SSIM | RecurrentVN | 0.954 | 0.902 | 0.938 | 0.875 | 0.949 | 0.920 | 0.939 | 0.886 | 0.944 | 0.926
SSIM | Deep-SLR | 0.930 | 0.891 | 0.911 | 0.819 | 0.949 | 0.919 | 0.942 | 0.887 | 0.948 | 0.916
SSIM | Deepcomplex | 0.944 | 0.908 | 0.925 | 0.837 | 0.959 | 0.933 | 0.951 | 0.902 | 0.959 | 0.937
SSIM | DONet | 0.947 | 0.916 | 0.931 | 0.856 | 0.959 | 0.936 | 0.952 | 0.908 | 0.962 | 0.939
SSIM | SwinMR | 0.899 | 0.869 | 0.894 | 0.830 | 0.925 | 0.888 | 0.925 | 0.860 | 0.915 | 0.882
SSIM | VNTM | 0.959 | 0.931 | 0.949 | 0.881 | 0.964 | 0.942 | 0.957 | 0.916 | 0.966 | 0.946

Table 4  Evaluation metrics of reconstruction results of different methods on knee data (sagittal proton-density-weighted sequence)
Fig. 2  Overall comparison of PSNR of different algorithms on multiple datasets
Fig. 3  Overall comparison of SSIM of different algorithms on multiple datasets
Fig. 4  Visual comparison of reconstructions by seven networks on knee data (coronal PD) under the 3× accelerated 1DUU sampling mask
Fig. 5  Visual comparison of reconstructions by seven networks on knee data (sagittal PD) under the 3× accelerated 1DRU sampling mask
Fig. 6  Visual comparison of reconstructions by seven networks on brain data under the 5× accelerated RADU sampling mask
Method | PSNR/dB | SSIM
Baseline | 36.27 | 0.940
VNTM-A | 36.88 | 0.942
VNTM-B | 37.61 | 0.946
VNTM-C | 39.17 | 0.955
VNTM-D | 37.90 | 0.947
VNTM-E | 38.91 | 0.953
VNTM (ours) | 39.22 | 0.956

(The original table also indicated, per variant, which components were enabled: DC, ρ, intermediate-stage enhancement, pre/post-processing, and UMRNet/UMRB; those checkmark columns did not survive extraction.)

Table 6  VNTM ablation results
Method | PSNR/dB | SSIM
VNTM-T/4 | 39.15 | 0.955
VNTM-T/2 | 39.22 | 0.956
VNTM-3T/4 | 39.14 | 0.955

Table 7  Experimental results of sensitivity-map enhancement at different intermediate stages
Method | FLOPs/10¹² | Params/10⁸ | t_I/s
VNTM-E | 1.28 | 1.27 | 0.417
VNTM-F | 0.50 | 1.28 | 0.144

Table 8  Computational-efficiency comparison with and without the pre-/post-processing modules
Method | T | PSNR/dB | SSIM
VNTM | 2 | 38.69 | 0.953
VNTM | 4 | 39.03 | 0.955
VNTM | 6 | 39.16 | 0.955
VNTM | 8 | 39.22 | 0.956
VNTM | 10 | 39.14 | 0.955
VNTM | 12 | 39.25 | 0.956

Table 9  Ablation results on the number of cascades
Fig. 7  Robustness of different methods to different ACS sizes
Fig. 8  Visual comparison for a 4×4 ACS region
Fig. 9  Visual comparison for a 20×20 ACS region
1 PRUESSMANN K P Encoding and reconstruction in parallel MRI[J]. NMR in Biomedicine, 2006, 19 (3): 288- 299
doi: 10.1002/nbm.1042
2 LARKMAN D J, NUNES R G Parallel magnetic resonance imaging[J]. Physics in Medicine and Biology, 2007, 52 (7): R15
doi: 10.1088/0031-9155/52/7/R01
3 LUSTIG M, PAULY J M SPIRiT: iterative self-consistent parallel imaging reconstruction from arbitrary k-space[J]. Magnetic Resonance in Medicine, 2010, 64 (2): 457- 471
doi: 10.1002/mrm.22428
4 DONOHO D L Compressed sensing[J]. IEEE Transactions on Information Theory, 2006, 52 (4): 1289- 1306
doi: 10.1109/TIT.2006.871582
5 LUSTIG M, DONOHO D, PAULY J M Sparse MRI: the application of compressed sensing for rapid MR imaging[J]. Magnetic Resonance in Medicine, 2007, 58 (6): 1182- 1195
doi: 10.1002/mrm.21391
6 PRUESSMANN K P, WEIGER M, SCHEIDEGGER M B, et al SENSE: sensitivity encoding for fast MRI[J]. Magnetic Resonance in Medicine, 1999, 42 (5): 952- 962
doi: 10.1002/(SICI)1522-2594(199911)42:5<952::AID-MRM16>3.0.CO;2-S
7 UECKER M, LAI P, MURPHY M J, et al ESPIRiT: an eigenvalue approach to autocalibrating parallel MRI: where SENSE meets GRAPPA[J]. Magnetic Resonance in Medicine, 2014, 71 (3): 990- 1001
doi: 10.1002/mrm.24751
8 LUSTIG M, DONOHO D L, SANTOS J M, et al Compressed sensing MRI[J]. IEEE Signal Processing Magazine, 2008, 25 (2): 72- 82
doi: 10.1109/MSP.2007.914728
9 RUDIN L I, OSHER S, FATEMI E Nonlinear total variation based noise removal algorithms[J]. Physica D: Nonlinear Phenomena, 1992, 60 (1/2/3/4): 259- 268
10 RAVISHANKAR S, BRESLER Y. Sparsifying transform learning for compressed sensing MRI [C]// IEEE 10th International Symposium on Biomedical Imaging. San Francisco: IEEE, 2013: 17–20.
11 ZHANG K, ZUO W, CHEN Y, et al Beyond a Gaussian denoiser: residual learning of deep CNN for image denoising[J]. IEEE Transactions on Image Processing, 2017, 26 (7): 3142- 3155
doi: 10.1109/TIP.2017.2662206
12 LIU G, REDA F A, SHIH K J, et al. Image inpainting for irregular holes using partial convolutions [C]// European Conference on Computer Vision. Cham: Springer International Publishing, 2018: 89–105.
13 LIM B, SON S, KIM H, et al. Enhanced deep residual networks for single image super-resolution [C]// Computer Vision and Pattern Recognition Workshops. Honolulu: IEEE, 2017: 1132–1140.
14 WU Z, LIAO W, YAN C, et al Deep learning based MRI reconstruction with transformer[J]. Computer Methods and Programs in Biomedicine, 2023, 233: 107452
doi: 10.1016/j.cmpb.2023.107452
15 HAMMERNIK K, KLATZER T, KOBLER E, et al Learning a variational network for reconstruction of accelerated MRI data[J]. Magnetic Resonance in Medicine, 2018, 79 (6): 3055- 3071
doi: 10.1002/mrm.26977
16 SRIRAM A, ZBONTAR J, MURRELL T, et al. End-to-end variational networks for accelerated MRI reconstruction [C]// Medical Image Computing and Computer Assisted Intervention. Cham: Springer, 2020: 64–73.
17 YIASEMIS G, SONKE J J, SÁNCHEZ C, et al. Recurrent variational network: a deep learning inverse problem solver applied to the task of accelerated MRI reconstruction [C] // IEEE/CVF Conference on Computer Vision and Pattern Recognition. New Orleans: IEEE, 2022: 722–731.
18 GUO P, MEI Y, ZHOU J, et al ReconFormer: accelerated MRI reconstruction using recurrent transformer[J]. IEEE Transactions on Medical Imaging, 2024, 43 (1): 582- 593
doi: 10.1109/TMI.2023.3314747
19 HUANG J, XING X, GAO Z, et al. Swin deformable attention U-Net transformer (SDAUT) for explainable fast MRI [C]// Medical Image Computing and Computer Assisted Intervention. Cham: Springer, 2022: 538–548.
20 HUANG J, FANG Y, WU Y, et al Swin transformer for fast MRI[J]. Neurocomputing, 2024, 493: 281- 304
21 WANG B, LIAN Y, XIONG X, et al DCT-net: dual-domain cross-fusion transformer network for MRI reconstruction[J]. Magnetic Resonance Imaging, 2024, 107: 69- 79
doi: 10.1016/j.mri.2024.01.007
22 LIU X, XU W, YE X. The ill-posed problem and regularization in parallel magnetic resonance imaging [C]// 3rd International Conference on Bioinformatics and Biomedical Engineering. Beijing: IEEE, 2009: 1–4.
23 ROEMER P B, EDELSTEIN W A, HAYES C E, et al The NMR phased array[J]. Magnetic Resonance in Medicine, 1990, 16 (2): 192- 225
doi: 10.1002/mrm.1910160203
24 LIANG J, CAO J, SUN G, et al. SwinIR: image restoration using swin transformer [C]// International Conference on Computer Vision Workshops. Montreal: IEEE, 2021: 1833–1844.
25 FABIAN Z, TINAZ B, SOLTANOLKOTABI M. HUMUS-Net: hybrid unrolled multi-scale network architecture for accelerated MRI reconstruction [EB/OL]. (2023-03-17)[2024-08-01]. https://arxiv.org/abs/2203.08213.
26 LAI Z, QU X, LIU Y, et al Image reconstruction of compressed sensing MRI using graph-based redundant wavelet transform[J]. Medical Image Analysis, 2016, 27: 93- 104
doi: 10.1016/j.media.2015.05.012
27 KNOLL F, ZBONTAR J, SRIRAM A, et al fastMRI: a publicly available raw k-space and DICOM dataset of knee images for accelerated MR image reconstruction using machine learning[J]. Radiology Artificial Intelligence, 2020, 2 (1): e190007
doi: 10.1148/ryai.2020190007
28 PASZKE A, GROSS S, MASSA F, et al. PyTorch: an imperative style, high-performance deep learning library [EB/OL]. (2019-12-03)[2024-08-01]. https://arxiv.org/abs/1912.01703.
29 HORÉ A, ZIOU D. Image quality metrics: PSNR vs. SSIM [C]// 20th International Conference on Pattern Recognition. Istanbul: IEEE, 2010: 2366–2369.
30 WANG Z, BOVIK A C, SHEIKH H R, et al Image quality assessment: from error visibility to structural similarity[J]. IEEE Transactions on Image Processing, 2004, 13 (4): 600- 612
doi: 10.1109/TIP.2003.819861
31 PRAMANIK A, AGGARWAL H K, JACOB M Deep generalization of structured low-rank algorithms (deep-SLR)[J]. IEEE Transactions on Medical Imaging, 2020, 39 (12): 4186- 4197
doi: 10.1109/TMI.2020.3014581
32 WANG S, CHENG H, YING L, et al Deep complexMRI: exploiting deep residual network for fast parallel MR imaging with complex convolution[J]. Magnetic Resonance Imaging, 2020, 68: 136- 147
doi: 10.1016/j.mri.2020.02.002