To explore the role of physical information in training neural networks to solve differential equations with physics-informed neural networks (PINN), physical information was divided into rule information and numerical information. The logic of solving differential equations with PINN was explained, along with the data-driven nature of physical information and the interpretability of the neural network. A synthetic loss function was designed based on the two types of information, and the degree of training balance was defined in terms of training sampling and training intensity. Experiments on solving the Burgers-Fisher equation with PINN showed that PINN achieves good solution accuracy and stability. In training the network to solve the equation, the numerical information of the Burgers-Fisher equation promoted the approximation of the equation's solution more effectively than the rule information did. The training effect improved as the training sampling, the number of training epochs, and the balance between the two types of information increased. In addition, the solution accuracy improved as the scale of the neural network grew, but the training time per epoch also increased; within a fixed training time, a larger network does not necessarily yield better results.
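The synthetic loss described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the Burgers-Fisher form used here takes all coefficients equal to 1 (i.e. $u_t + u u_x - u_{xx} - u(1-u) = 0$), and the network size, the sampling counts, the placeholder labelled values, and the 1:50 intensity weighting are assumptions for illustration only.

```python
# Sketch of a PINN synthetic loss combining rule information (the PDE
# residual of the Burgers-Fisher equation) with numerical information
# (known solution values at sampled points). Illustrative only.
import torch

torch.manual_seed(0)

# small fully connected network approximating u(x, t)
net = torch.nn.Sequential(
    torch.nn.Linear(2, 20), torch.nn.Tanh(),
    torch.nn.Linear(20, 20), torch.nn.Tanh(),
    torch.nn.Linear(20, 1),
)

def pde_residual(x, t):
    """Rule information: residual u_t + u*u_x - u_xx - u*(1-u)."""
    x = x.requires_grad_(True)
    t = t.requires_grad_(True)
    u = net(torch.cat([x, t], dim=1))
    u_t = torch.autograd.grad(u.sum(), t, create_graph=True)[0]
    u_x = torch.autograd.grad(u.sum(), x, create_graph=True)[0]
    u_xx = torch.autograd.grad(u_x.sum(), x, create_graph=True)[0]
    return u_t + u * u_x - u_xx - u * (1.0 - u)

# collocation points for the rule term, labelled points for the data term
x_c, t_c = torch.rand(100, 1), torch.rand(100, 1)
x_d, t_d = torch.rand(20, 1), torch.rand(20, 1)
u_d = 0.5 * torch.ones(20, 1)        # placeholder numerical information

w_rule, w_data = 1.0, 50.0           # training intensity balance (1:50, assumed)

loss_rule = pde_residual(x_c, t_c).pow(2).mean()
loss_data = (net(torch.cat([x_d, t_d], dim=1)) - u_d).pow(2).mean()
loss = w_rule * loss_rule + w_data * loss_data
```

In training, `loss` would be minimized by an optimizer over the network parameters; varying `w_rule` and `w_data` realizes different training intensity balances between the two types of information.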
Jian XU, Hai-long ZHU, Jiang-le ZHU, Chun-zhong LI. Solution approach of Burgers-Fisher equation based on physics-informed neural networks. Journal of Zhejiang University (Engineering Science), 2023, 57(11): 2160-2169.
Fig.1 Schematic of PINN dealing with physical problems
Fig.2 Logical explanation of solving PDE with PINN
Fig.3 Schematic of PINN solving Burgers-Fisher equation
(Columns 3–5: parameter case 1; columns 6–8: parameter case 2.)

| $x$ | $t$ | $R_{\text{Pre}}/10^{-1}$ | $R_{\text{Exa}}/10^{-1}$ | $E_{\text{Abs}}/10^{-3}$ | $R_{\text{Pre}}/10^{-1}$ | $R_{\text{Exa}}/10^{-1}$ | $E_{\text{Abs}}/10^{-4}$ |
| --- | --- | --- | --- | --- | --- | --- | --- |
| 0.0 | 0.0 | 4.9957 | 5.0000 | 0.4296 | 7.0735 | 7.0711 | 2.4521 |
| 0.1 | 1.0 | 5.2330 | 5.2436 | 1.0540 | 8.5860 | 8.5756 | 10.3780 |
| 0.2 | 2.0 | 5.4697 | 5.4860 | 1.6249 | 9.4012 | 9.4095 | 8.2731 |
| 0.3 | 3.0 | 5.7092 | 5.7261 | 1.6925 | 9.7709 | 9.7750 | 4.0752 |
| 0.4 | 4.0 | 5.9568 | 5.9628 | 0.6045 | 9.9170 | 9.9172 | 0.2986 |
| 0.5 | 5.0 | 6.2043 | 6.1952 | 0.9111 | 9.9715 | 9.9700 | 1.5283 |
| 0.6 | 6.0 | 6.4406 | 6.4222 | 1.8459 | 9.9921 | 9.9892 | 2.9022 |
| 0.7 | 7.0 | 6.6602 | 6.6430 | 1.7280 | 10.0000 | 9.9961 | 3.8970 |
| 0.8 | 8.0 | 6.8613 | 6.8568 | 0.4469 | 10.0030 | 9.9986 | 3.9810 |
| 0.9 | 9.0 | 7.0435 | 7.0630 | 1.9533 | 10.0020 | 9.9995 | 2.7907 |
| 1.0 | 10.0 | 7.2075 | 7.2611 | 5.3639 | 10.0000 | 9.9998 | 0.2772 |

Tab.1 Prediction results at diagonal data coordinates in grid sampling
Fig.4 Three-dimensional surface of the equation solution in the two cases
Fig.5 Change of absolute error with number of epochs of neural network training
| $S_{\text{Net}}$ | $E_{\text{Max}}/10^{-2}$ | $E_{\text{Min}}/10^{-6}$ | $E_{\text{Mea}}/10^{-3}$ | $E_{\text{Sta}}/10^{-3}$ | $T_{\text{Tim}}/10^{1}$ |
| --- | --- | --- | --- | --- | --- |
| L2N10 | 1.7252 | 1.5875 | 5.0858 | 3.6818 | 0.8563 |
| L2N20 | 1.1476 | 0.5662 | 2.6179 | 2.1679 | 1.0698 |
| L2N40 | 0.9774 | 0.5504 | 1.8555 | 1.6741 | 1.2882 |
| L4N10 | 0.7521 | 0.5027 | 2.2651 | 1.6853 | 1.2023 |
| L4N20 | 0.6586 | 0.3417 | 1.5532 | 1.2726 | 1.5605 |
| L4N40 | 0.4815 | 0.3338 | 1.1123 | 0.9160 | 2.2252 |
| L6N10 | 0.4890 | 0.3775 | 1.3585 | 1.0056 | 1.6617 |
| L6N20 | 0.3286 | 0.1570 | 0.7378 | 0.6244 | 2.1818 |
| L6N40 | 0.3082 | 0.1371 | 0.6828 | 0.5584 | 3.0004 |

Tab.2 Descriptive statistics of the absolute error of predicted values for different neural network scales under a fixed number of epochs
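The descriptive statistics reported in the tables ($E_{\text{Max}}$, $E_{\text{Min}}$, $E_{\text{Mea}}$, $E_{\text{Sta}}$) can be computed from the absolute errors between predicted and exact values; a minimal sketch follows, in which the two sample arrays are taken from the case 1 values of Tab.1 (rescaled from the $10^{-1}$ units) purely for illustration.

```python
# Compute the maximum, minimum, mean, and standard deviation of the
# absolute error between predicted and exact solution values.
import numpy as np

u_pred = np.array([0.49957, 0.52330, 0.54697, 0.57092])   # predicted values
u_exact = np.array([0.50000, 0.52436, 0.54860, 0.57261])  # exact values
e_abs = np.abs(u_pred - u_exact)                          # absolute error

stats = {
    "E_Max": e_abs.max(),   # maximum absolute error
    "E_Min": e_abs.min(),   # minimum absolute error
    "E_Mea": e_abs.mean(),  # mean absolute error
    "E_Sta": e_abs.std(),   # standard deviation of absolute error
}
```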
| $S_{\text{Net}}$ | $E_{\text{Max}}/10^{-2}$ | $E_{\text{Min}}/10^{-6}$ | $E_{\text{Mea}}/10^{-3}$ | $E_{\text{Sta}}/10^{-3}$ | $T_{\text{Num}}/10^{2}$ |
| --- | --- | --- | --- | --- | --- |
| L2N10 | 1.3816 | 1.5199 | 4.0196 | 2.9153 | 11.7200 |
| L2N20 | 1.3958 | 1.1166 | 3.0385 | 2.5699 | 9.3000 |
| L2N40 | 1.2500 | 0.9080 | 2.9150 | 2.3720 | 6.6600 |
| L4N10 | 1.0755 | 0.8524 | 3.3827 | 2.4212 | 8.4700 |
| L4N20 | 0.8338 | 0.8027 | 2.3934 | 1.7815 | 6.7100 |
| L4N40 | 0.9156 | 0.6179 | 2.4211 | 1.8794 | 5.0600 |
| L6N10 | 0.8874 | 1.3987 | 2.8323 | 2.0030 | 6.2000 |
| L6N20 | 0.6949 | 0.5106 | 2.2007 | 1.5836 | 4.8400 |
| L6N40 | 0.8039 | 0.7987 | 2.1866 | 1.6607 | 3.5000 |

Tab.3 Descriptive statistics of the absolute error of predicted values for different neural network scales under a fixed training time
(Columns 2–3: equation interior; columns 4–5: equation boundary; columns 6–7: equation overall.)

| $B_{\text{Sam}}$ | $E_{\text{Mea}}/10^{-3}$ | $E_{\text{Sta}}/10^{-3}$ | $E_{\text{Mea}}/10^{-3}$ | $E_{\text{Sta}}/10^{-3}$ | $E_{\text{Mea}}/10^{-3}$ | $E_{\text{Sta}}/10^{-3}$ |
| --- | --- | --- | --- | --- | --- | --- |
| 3-0 | 907.00 | 82.20 | 838.00 | 118.00 | 872.00 | 107.00 |
| 0-1 | 2.63 | 2.30 | 1.75 | 1.54 | 2.19 | 2.04 |
| 3-1 | 1.59 | 1.22 | 1.56 | 1.36 | 1.57 | 1.30 |
| 6-0 | 907.00 | 82.90 | 839.00 | 118.00 | 873.00 | 108.00 |
| 0-2 | 2.50 | 2.07 | 1.65 | 1.47 | 2.08 | 1.88 |
| 6-2 | 1.55 | 1.19 | 1.56 | 1.35 | 1.55 | 1.28 |
| 9-0 | 904.00 | 82.20 | 836.00 | 118.00 | 870.00 | 107.00 |
| 0-3 | 2.35 | 1.98 | 1.50 | 1.19 | 1.92 | 1.72 |
| 9-3 | 1.41 | 1.09 | 1.47 | 1.25 | 1.44 | 1.18 |

Tab.4 Descriptive statistics of the absolute error of predicted values under different training sampling balances
(Columns 2–3: equation interior; columns 4–5: equation boundary; columns 6–7: equation overall.)

| $B_{\text{Int}}$ | $E_{\text{Mea}}/10^{-4}$ | $E_{\text{Sta}}/10^{-4}$ | $E_{\text{Mea}}/10^{-4}$ | $E_{\text{Sta}}/10^{-4}$ | $E_{\text{Mea}}/10^{-4}$ | $E_{\text{Sta}}/10^{-4}$ |
| --- | --- | --- | --- | --- | --- | --- |
| 1∶1 | 8.44 | 3.78 | 7.12 | 4.10 | 7.78 | 4.01 |
| 10∶1 | 7.73 | 3.43 | 8.11 | 4.43 | 7.92 | 5.14 |
| 1∶10 | 8.67 | 5.33 | 6.81 | 4.74 | 7.74 | 5.14 |
| 50∶1 | 7.27 | 2.42 | 6.41 | 3.20 | 6.84 | 2.93 |
| 1∶50 | 5.45 | 1.63 | 4.66 | 1.68 | 5.05 | 1.77 |
| 100∶1 | 13.00 | 4.06 | 11.10 | 5.45 | 1.20 | 4.95 |
| 1∶100 | 7.97 | 2.57 | 5.92 | 2.12 | 6.95 | 2.69 |

Tab.5 Descriptive statistics of the absolute error of predicted values under different training intensity balances