Journal of Zhejiang University (Engineering Science)  2023, Vol. 57, Issue 11: 2160-2169    DOI: 10.3785/j.issn.1008-973X.2023.11.003
Computer Technology
Solution approach of Burgers-Fisher equation based on physics-informed neural networks
Jian XU, Hai-long ZHU*, Jiang-le ZHU, Chun-zhong LI
School of Statistics and Applied Mathematics, Anhui University of Finance and Economics, Bengbu 233030, China
Abstract:

To explore the role of physical information in training the neural network when solving differential equations with physics-informed neural networks (PINN), physical information was divided into two types, rule information and numerical information, so as to explain the logic by which a PINN solves differential equations, the data-driven use of physical information, and the interpretability of the neural network. A composite loss function was designed from the two types of information, and a training balance degree between them was established in terms of training sampling and training intensity; on this basis the Burgers-Fisher equation was solved with a PINN. Experiments showed that the PINN achieved good solution accuracy and stability. In training the network to solve the equation, the numerical information of the Burgers-Fisher equation promoted the network's approximation of the solution more effectively than the rule information did. The training effect improved as training sampling and the number of training epochs increased and as the two types of information were balanced. Enlarging the neural network improved the solution accuracy but also lengthened each training epoch; under a fixed training time, a larger network does not necessarily perform better.

Key words: Burgers-Fisher equation    physics-informed neural network    rule information    numerical information    data-driven    interpretability    training balance
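For readers' reference (the exact equation form and parameter values used in the paper are not stated on this page), the generalized Burgers-Fisher equation commonly targeted in such studies, together with its known traveling-wave solution, is

$$ \frac{\partial u}{\partial t} + \alpha u^{\delta} \frac{\partial u}{\partial x} - \frac{\partial^{2} u}{\partial x^{2}} = \beta u\left(1 - u^{\delta}\right), $$

$$ u(x,t) = \left\{ \frac{1}{2} + \frac{1}{2}\tanh\!\left[ -\frac{\alpha\delta}{2(\delta+1)}\left( x - \left( \frac{\alpha}{\delta+1} + \frac{\beta(\delta+1)}{\alpha} \right) t \right) \right] \right\}^{1/\delta}. $$

The two parameter cases in Table 1 below are consistent with this form: at $(x,t)=(0,0)$ the solution equals $(1/2)^{1/\delta}$, which gives 0.5 for $\delta=1$ (matching case 1) and about 0.7071 for $\delta=2$ (matching case 2).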
Received: 2023-01-31    Published: 2023-12-11
CLC: TP 3
Funding: National Natural Science Foundation of China (72131006, 71971001, 71803001); Key Program of Natural Science Research of Higher Education Institutions of Anhui Province (KJ2021A0473, KJ2021A0481, 2022AH050608)
Corresponding author: Hai-long ZHU    E-mail: jianx1982@vip.163.com; hai-long-zhu@163.com
About the author: Jian XU (1982—), male, lecturer, Ph.D., engaged in research on intelligent systems and data science. orcid.org/0000-0003-1472-5910. E-mail: jianx1982@vip.163.com
Cite this article:

Jian XU, Hai-long ZHU, Jiang-le ZHU, Chun-zhong LI. Solution approach of Burgers-Fisher equation based on physics-informed neural networks[J]. Journal of Zhejiang University (Engineering Science), 2023, 57(11): 2160-2169.

Link to this article:

https://www.zjujournals.com/eng/CN/10.3785/j.issn.1008-973X.2023.11.003        https://www.zjujournals.com/eng/CN/Y2023/V57/I11/2160

Fig. 1  Schematic diagram of solving physical problems with PINN
Fig. 2  Logical explanation of solving PDEs with PINN
Fig. 3  Schematic diagram of the process of solving the Burgers-Fisher equation with PINN
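As an illustration of the process sketched in Fig. 3, a PINN can combine the two types of information in one composite loss: the PDE residual at interior collocation points carries the rule information, and the known initial/boundary values carry the numerical information. The following is a minimal sketch, not the authors' implementation; the equation parameters (alpha = beta = delta = 1), network size, sampling counts, and loss weights are all assumed for demonstration.

# Minimal PINN sketch (PyTorch) for the generalized Burgers-Fisher equation
#   u_t + alpha*u^delta*u_x - u_xx = beta*u*(1 - u^delta)
# All concrete choices below (parameters, network size, sampling, weights)
# are illustrative assumptions, not the configuration reported in the paper.
import torch

torch.manual_seed(0)
alpha, beta, delta = 1.0, 1.0, 1.0            # assumed equation parameters

net = torch.nn.Sequential(                    # u_theta(x, t): small tanh MLP
    torch.nn.Linear(2, 20), torch.nn.Tanh(),
    torch.nn.Linear(20, 20), torch.nn.Tanh(),
    torch.nn.Linear(20, 1),
)

def grad(y, v):
    return torch.autograd.grad(y, v, torch.ones_like(y), create_graph=True)[0]

def pde_residual(x, t):
    # Rule information: residual of the PDE at interior collocation points.
    x = x.clone().requires_grad_(True)
    t = t.clone().requires_grad_(True)
    u = net(torch.cat([x, t], dim=1))
    u_t, u_x = grad(u, t), grad(u, x)
    u_xx = grad(u_x, x)
    return u_t + alpha * u**delta * u_x - u_xx - beta * u * (1 - u**delta)

def exact_solution(x, t):
    # Closed-form traveling-wave solution, used here only to supply the
    # numerical information on the initial and boundary point sets.
    k = alpha * delta / (2 * (delta + 1))
    c = alpha / (delta + 1) + beta * (delta + 1) / alpha
    return (0.5 + 0.5 * torch.tanh(-k * (x - c * t)))**(1.0 / delta)

# Interior collocation points and initial/boundary data points on [0,1]x[0,1].
x_f, t_f = torch.rand(2000, 1), torch.rand(2000, 1)
x_d = torch.cat([torch.rand(100, 1), torch.zeros(50, 1), torch.ones(50, 1)])
t_d = torch.cat([torch.zeros(100, 1), torch.rand(50, 1), torch.rand(50, 1)])
u_d = exact_solution(x_d, t_d)

w_rule, w_num = 1.0, 50.0                     # training-intensity weights (illustrative)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
for epoch in range(5000):
    opt.zero_grad()
    loss_rule = pde_residual(x_f, t_f).pow(2).mean()                    # rule-information loss
    loss_num = (net(torch.cat([x_d, t_d], dim=1)) - u_d).pow(2).mean()  # numerical-information loss
    loss = w_rule * loss_rule + w_num * loss_num                        # composite loss
    loss.backward()
    opt.step()

In this sketch, varying the relative sizes of the two point sets and the ratio w_rule : w_num corresponds roughly to the training-sampling and training-intensity balances examined in Tables 4 and 5 below.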
$x$   $t$   Parameter case 1   Parameter case 2
$R_{\text{Pre}}/10^{-1}$   $R_{\text{Exa}}/10^{-1}$   $E_{\text{Abs}}/10^{-3}$   $R_{\text{Pre}}/10^{-1}$   $R_{\text{Exa}}/10^{-1}$   $E_{\text{Abs}}/10^{-4}$
0.0 0.0 4.9957 5.0000 0.4296 7.0735 7.0711 2.4521
0.1 1.0 5.2330 5.2436 1.0540 8.5860 8.5756 10.3780
0.2 2.0 5.4697 5.4860 1.6249 9.4012 9.4095 8.2731
0.3 3.0 5.7092 5.7261 1.6925 9.7709 9.7750 4.0752
0.4 4.0 5.9568 5.9628 0.6045 9.9170 9.9172 0.2986
0.5 5.0 6.2043 6.1952 0.9111 9.9715 9.9700 1.5283
0.6 6.0 6.4406 6.4222 1.8459 9.9921 9.9892 2.9022
0.7 7.0 6.6602 6.6430 1.7280 10.0000 9.9961 3.8970
0.8 8.0 6.8613 6.8568 0.4469 10.0030 9.9986 3.9810
0.9 9.0 7.0435 7.0630 1.9533 10.0020 9.9995 2.7907
1.0 10.0 7.2075 7.2611 5.3639 10.0000 9.9998 0.2772
Table 1  Prediction results at diagonal data coordinates of grid sampling
Fig. 4  Three-dimensional surface plots of the equation solution under the two parameter cases
Fig. 5  Variation of absolute error with the number of neural network training epochs
$S_{\text{Net}}$   $E_{\text{Max}}/10^{-2}$   $E_{\text{Min}}/10^{-6}$   $E_{\text{Mea}}/10^{-3}$   $E_{\text{Sta}}/10^{-3}$   $T_{\text{Tim}}/10^{1}$
L2N10 1.7252 1.5875 5.0858 3.6818 0.8563
L2N20 1.1476 0.5662 2.6179 2.1679 1.0698
L2N40 0.9774 0.5504 1.8555 1.6741 1.2882
L4N10 0.7521 0.5027 2.2651 1.6853 1.2023
L4N20 0.6586 0.3417 1.5532 1.2726 1.5605
L4N40 0.4815 0.3338 1.1123 0.9160 2.2252
L6N10 0.4890 0.3775 1.3585 1.0056 1.6617
L6N20 0.3286 0.1570 0.7378 0.6244 2.1818
L6N40 0.3082 0.1371 0.6828 0.5584 3.0004
Table 2  Descriptive statistics of absolute errors of predicted values for different neural network scales with a fixed number of training epochs
$S_{\text{Net}}$   $E_{\text{Max}}/10^{-2}$   $E_{\text{Min}}/10^{-6}$   $E_{\text{Mea}}/10^{-3}$   $E_{\text{Sta}}/10^{-3}$   $T_{\text{Num}}/10^{2}$
L2N10 1.3816 1.5199 4.0196 2.9153 11.7200
L2N20 1.3958 1.1166 3.0385 2.5699 9.3000
L2N40 1.2500 0.9080 2.9150 2.3720 6.6600
L4N10 1.0755 0.8524 3.3827 2.4212 8.4700
L4N20 0.8338 0.8027 2.3934 1.7815 6.7100
L4N40 0.9156 0.6179 2.4211 1.8794 5.0600
L6N10 0.8874 1.3987 2.8323 2.0030 6.2000
L6N20 0.6949 0.5106 2.2007 1.5836 4.8400
L6N40 0.8039 0.7987 2.1866 1.6607 3.5000
Table 3  Descriptive statistics of absolute errors of predicted values for different neural network scales with a fixed training time
$B_{\text{Sam}}$   Equation interior   Equation boundary   Entire equation
$E_{\text{Mea}}/10^{-3}$   $E_{\text{Sta}}/10^{-3}$   $E_{\text{Mea}}/10^{-3}$   $E_{\text{Sta}}/10^{-3}$   $E_{\text{Mea}}/10^{-3}$   $E_{\text{Sta}}/10^{-3}$
3-0 907.00 82.20 838.00 118.00 872.00 107.00
0-1 2.63 2.30 1.75 1.54 2.19 2.04
3-1 1.59 1.22 1.56 1.36 1.57 1.30
6-0 907.00 82.90 839.00 118.00 873.00 108.00
0-2 2.50 2.07 1.65 1.47 2.08 1.88
6-2 1.55 1.19 1.56 1.35 1.55 1.28
9-0 904.00 82.20 836.00 118.00 870.00 107.00
0-3 2.35 1.98 1.50 1.19 1.92 1.72
9-3 1.41 1.09 1.47 1.25 1.44 1.18
Table 4  Descriptive statistics of absolute errors of predicted values under different training sampling balance degrees
$B_{\text{Int}}$   Equation interior   Equation boundary   Entire equation
$E_{\text{Mea}}/10^{-4}$   $E_{\text{Sta}}/10^{-4}$   $E_{\text{Mea}}/10^{-4}$   $E_{\text{Sta}}/10^{-4}$   $E_{\text{Mea}}/10^{-4}$   $E_{\text{Sta}}/10^{-4}$
1∶1 8.44 3.78 7.12 4.10 7.78 4.01
10∶1 7.73 3.43 8.11 4.43 7.92 5.14
1∶10 8.67 5.33 6.81 4.74 7.74 5.14
50∶1 7.27 2.42 6.41 3.20 6.84 2.93
1∶50 5.45 1.63 4.66 1.68 5.05 1.77
100∶1 13.00 4.06 11.10 5.45 1.20 4.95
1∶100 7.97 2.57 5.92 2.12 6.95 2.69
Table 5  Descriptive statistics of absolute errors of predicted values under different training intensity balance degrees
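One plausible reading of the training-intensity balance $B_{\text{Int}}$ in Table 5 (an interpretation, since this page does not define it formally) is the ratio of the weights attached to the two information terms in the composite loss described in the abstract:

$$ \mathcal{L}(\theta) = \lambda_{\text{rule}}\,\mathcal{L}_{\text{rule}}(\theta) + \lambda_{\text{num}}\,\mathcal{L}_{\text{num}}(\theta), \qquad B_{\text{Int}} = \lambda_{\text{rule}} : \lambda_{\text{num}}, $$

where $\mathcal{L}_{\text{rule}}$ is the mean squared PDE residual at interior collocation points and $\mathcal{L}_{\text{num}}$ is the mean squared error against the known initial and boundary values.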