2. College of Mathematics and Information Science, Henan Normal University, Xinxiang 453007, Henan Province, China;
3. Henan Province Synergy Innovation Center of Aviation Economic Development, Zhengzhou 450015, China;
4. School of Management Engineering, Zhengzhou University of Aeronautics, Zhengzhou 450015, China;
5. Department of Public Art Teaching, Zhengzhou University of Aeronautics, Zhengzhou 450015, China
Consider the linear complementarity problem, abbreviated as LCP(q, A), which consists of finding a pair of real vectors r, z∈Rn such that
$ \mathit{\boldsymbol{r}}: = \mathit{\boldsymbol{Az}} + \mathit{\boldsymbol{q}} \ge 0,\mathit{\boldsymbol{z}} \ge 0\;{\rm{and}}\;{\mathit{\boldsymbol{z}}^{\rm{T}}}\left( {\mathit{\boldsymbol{Az}} + \mathit{\boldsymbol{q}}} \right) = 0, $ | (1) |
where A=(aij)∈Rn×n is a given large, sparse, real matrix and q=(q1, q2, …, qn)T∈Rn is a given real vector. Here, zT denotes the transpose of the vector z, and ≥ denotes the componentwise partial ordering between two vectors.
Many problems in scientific computing and engineering applications lead to LCPs of the form (1). For example, linear complementarity problems arise from application problems such as convex quadratic programming, the Nash equilibrium point of the bimatrix game, the free boundary problems of fluid dynamics, etc.[1-2]. Solvers for LCP(q, A) with a special matrix A were proposed in [3-12]. Recently, much research has focused on solving LCP(q, A) through an equivalent algebraic equation [8-22]. In particular, BAI et al[8-9] proposed modulus-based matrix multisplitting iterative methods for solving LCP(q, A) and presented a convergence analysis of the proposed methods. ZHANG et al[13] extended the condition of a compatible H-splitting to that of an H-splitting. LI[23] extended the modulus-based matrix splitting iteration method to more general cases. BAI[24] presented parallel matrix block multisplitting relaxation iterative methods and established a thorough convergence theory for these new methods. LI et al[25] studied synchronous block multisplitting iterative methods. ZHANG et al[15] generalized the methods of BAI et al[26] and studied modulus-based synchronous multisplitting multi-parameter methods for the linear complementarity problem.
In this paper, we generalize the corresponding methods of BAI et al[26] and ZHANG et al[15] from point form to block form within the framework of modulus-based synchronous multisplitting iteration methods, and consider a relaxed modulus-based synchronous multisplitting multi-parameter method based on block splittings for solving LCP(q, A). Moreover, we give some theoretical analysis and improve some existing convergence results in [25-26].
The rest of this paper is organized as follows. In section 1, we give some notations and lemmas. In section 2, we propose the relaxed modulus-based synchronous block multisplitting multi-parameter method for solving LCP(q, A). In section 3, we give the convergence analysis for the proposed method.
1 Notations and lemmas
In order to study modulus-based synchronous block multisplitting iteration methods for solving LCP(q, A), let us introduce some definitions and lemmas.
A matrix A=(aij) is called an M-matrix if aij≤0 for i≠j and A-1≥0. The comparison matrix 〈A〉=(αij) of a matrix A=(aij) is defined by αij=|aij| if i=j and αij=-|aij| if i≠j. A is called an H-matrix if 〈A〉 is an M-matrix, and an H+-matrix if it is an H-matrix with positive diagonal entries[20]. Let ρ(A) denote the spectral radius of A. A representation A=M-N is called a splitting of A when M is nonsingular. Let A and B be M-matrices; if A≤B, then A-1≥B-1. Finally, we define R+n={x|x≥0, x∈Rn}.
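These definitions can be checked numerically. The following minimal sketch (assuming NumPy; the test matrix and the requirement of a nonzero diagonal are illustrative assumptions) builds the comparison matrix 〈A〉 and applies the M-matrix criterion of lemma 5 below, ρ(D-1C) < 1:

```python
import numpy as np

def comparison_matrix(A):
    """Comparison matrix <A>: |a_ii| on the diagonal, -|a_ij| off it."""
    C = -np.abs(A)
    np.fill_diagonal(C, np.abs(np.diag(A)))
    return C

def is_h_matrix(A):
    """A is an H-matrix iff <A> is an M-matrix; by lemma 5 this holds
    iff rho(D^{-1}C) < 1 with D = diag(<A>), C = D - <A>.
    Assumes A has nonzero diagonal entries."""
    M = comparison_matrix(A)
    D = np.diag(M)                 # diagonal of <A> as a vector
    C = np.diag(D) - M             # nonnegative off-diagonal part
    rho = np.max(np.abs(np.linalg.eigvals(C / D[:, None])))
    return bool(rho < 1.0)

# a strictly diagonally dominant matrix with positive diagonal (an H+-matrix)
A = np.array([[4.0, -1.0, 0.0],
              [-1.0, 4.0, -1.0],
              [0.0, -1.0, 4.0]])
print(is_h_matrix(A))  # True
```

Strict diagonal dominance is a convenient sufficient condition here; the criterion above also covers generalized diagonal dominance.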
Definition 1[24] Define the set:
(1) Ln, I(n1, n2, …, np)={A=Aij∈Ln(n1, n2, …, np)|Aii∈L(Rni) is nonsingular (i=1, 2, …, p)};
(2) Ln, Id(n1, n2, …, np)={A=diag(A11, A22, …, App)|Aii∈L(Rni) is nonsingular (i=1, 2, …, p)}.
Definition 2[27] Let M∈Ln, I(n1, n2, …, np). The (Ⅰ)-type block comparison matrix 〈M〉=(〈M〉ij)∈L(Rn) and the (Ⅱ)-type block comparison matrix 〈〈M〉〉=(〈〈M〉〉ij)∈L(Rn) are defined as
$ {\left\langle \mathit{\boldsymbol{M}} \right\rangle _{ij}} = \left\{ \begin{array}{l} {\left\| {M_{ii}^{ - 1}} \right\|^{ - 1}},\;\;\;i = j\\ - \left\| {{M_{ij}}} \right\|,\;\;\;i \ne j \end{array} \right.\;\;i,j = 1,2, \cdots ,p, $ |
$ {\left\langle {\left\langle \mathit{\boldsymbol{M}} \right\rangle } \right\rangle _{ij}} = \left\{ \begin{array}{l} 1,\;\;\;i = j\\ - \left\| {M_{ii}^{ - 1}{M_{ij}}} \right\|,i \ne j \end{array} \right.i,j = 1,2, \cdots ,p, $ |
respectively.
Moreover, for block matrices A, L∈Ln, I(n1, n2, …, np), let D(L)=diag(L11, L22, …, Lpp), B(L)=D(L)-L, J(A)=D(A)-1B(A), μ1(A)=ρ(J(〈A〉)) and μ2(A)=ρ(I-〈〈A〉〉). Using definition 2, we can easily verify
$ \begin{array}{*{20}{c}} {\left\langle {\mathit{\boldsymbol{I}} - \mathit{\boldsymbol{J}}\left( \mathit{\boldsymbol{A}} \right)} \right\rangle = \left\langle {\left\langle {\mathit{\boldsymbol{I}} - \mathit{\boldsymbol{J}}\left( \mathit{\boldsymbol{A}} \right)} \right\rangle } \right\rangle = \left\langle {\left\langle \mathit{\boldsymbol{A}} \right\rangle } \right\rangle ,}\\ {{\mu _2}\left( \mathit{\boldsymbol{A}} \right) \le {\mu _1}\left( \mathit{\boldsymbol{A}} \right).} \end{array} $ |
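The inequality μ2(A)≤μ1(A) follows because I-〈〈A〉〉 is dominated entrywise by J(〈A〉) (since ‖Aii-1Aij‖≤‖Aii-1‖‖Aij‖). A numerical sketch of this comparison (assuming NumPy; the spectral norm as block norm and the random block-dominant test matrix are illustrative assumptions):

```python
import numpy as np

def mu1_mu2(A, sizes):
    """mu1(A) = rho(J(<A>)) and mu2(A) = rho(I - <<A>>) for a block
    partition of A given by `sizes`; spectral norm as the block norm."""
    idx = np.cumsum([0] + sizes)
    p = len(sizes)
    J1 = np.zeros((p, p))  # J(<A>):    ||A_ii^{-1}|| * ||A_ij||, i != j
    J2 = np.zeros((p, p))  # I - <<A>>: ||A_ii^{-1} A_ij||,       i != j
    for i in range(p):
        Aii_inv = np.linalg.inv(A[idx[i]:idx[i + 1], idx[i]:idx[i + 1]])
        for j in range(p):
            if i == j:
                continue
            Aij = A[idx[i]:idx[i + 1], idx[j]:idx[j + 1]]
            J1[i, j] = np.linalg.norm(Aii_inv, 2) * np.linalg.norm(Aij, 2)
            J2[i, j] = np.linalg.norm(Aii_inv @ Aij, 2)
    rho = lambda M: np.max(np.abs(np.linalg.eigvals(M)))
    return rho(J1), rho(J2)

rng = np.random.default_rng(1)
A = np.eye(6) * 8 + rng.standard_normal((6, 6))
m1, m2 = mu1_mu2(A, [2, 2, 2])
print(bool(m2 <= m1 + 1e-12))  # True: mu2(A) <= mu1(A)
```

Since J2 is dominated entrywise by the nonnegative matrix J1, the Perron-Frobenius theory guarantees the printed inequality for any partition.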
Definition 3[27] Let A∈Ln, I(n1, n2, …, np). If there exist P, Q∈Ln, Id(n1, n2, …, np) such that 〈PAQ〉 is an M-matrix, then A is called a (Ⅰ)-type block H-matrix (HB(Ⅰ)(P, Q)-matrix) with respect to the nonsingular block matrices P, Q; if 〈〈PAQ〉〉 is an M-matrix, then A is called a (Ⅱ)-type block H-matrix (HB(Ⅱ)(P, Q)-matrix) with respect to the nonsingular block matrices P, Q.
Definition 4[27] Let A∈Ln, I(n1, n2, …, np). Then [A]=(‖Aij‖)∈L(Rn) is called the block absolute value of the block matrix A. Similarly, we may define the block absolute value of a block vector x∈Vn(n1, n2, …, np) as [x]∈Rn.
Lemma 1[24] Let A, B∈Ln, I(n1, n2, …, np), x, y∈Vn(n1, n2, …, np), γ∈R1, then
(1) |[A]-[B]|≤[A+B]≤[A]+[B] (|[x]-[y]|≤[x+y]≤[x]+[y]);
(2) [AB]≤[A][B]([Ax]≤[A][x]);
(3) [γA]≤|γ|[A] ([γx]≤|γ|[x]);
(4) ρ(A)≤ρ(|A|)≤ρ([A]).
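Property (2) of lemma 1 can be checked directly: it is the submultiplicativity of the block norms. The following sketch (assuming NumPy; the spectral norm as block norm and the random matrices are illustrative assumptions) verifies [AB]≤[A][B] entrywise for one random pair:

```python
import numpy as np

def block_abs(A, sizes):
    """Block absolute value [A] = (||A_ij||): the matrix of block norms
    for a conformal partition given by `sizes`; spectral norm used here."""
    idx = np.cumsum([0] + sizes)
    p = len(sizes)
    out = np.empty((p, p))
    for i in range(p):
        for j in range(p):
            out[i, j] = np.linalg.norm(A[idx[i]:idx[i + 1], idx[j]:idx[j + 1]], 2)
    return out

rng = np.random.default_rng(0)
sizes = [2, 3]
A = rng.standard_normal((5, 5))
B = rng.standard_normal((5, 5))
# property (2) of lemma 1: [AB] <= [A][B] entrywise
ok = np.all(block_abs(A @ B, sizes) <= block_abs(A, sizes) @ block_abs(B, sizes) + 1e-12)
print(bool(ok))  # True
```

The inequality holds for any consistent, submultiplicative family of block norms, since ‖ΣkAikBkj‖≤Σk‖Aik‖‖Bkj‖.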
Lemma 2[24] Let A∈Ln, I(n1, n2, …, np) be an HB(Ⅰ)(P, Q)-matrix, then
(1) A is nonsingular;
(2) [(PAQ)-1]≤〈PAQ〉-1;
(3) μ1(PAQ) < 1.
Lemma 3[24] Let A∈Ln, I(n1, n2, …, np) be an HB(Ⅱ)(P, Q)-matrix, then
(1) A is nonsingular;
(2) [(PAQ)-1]≤〈〈PAQ〉〉-1[D(PAQ)-1];
(3) μ2(PAQ) < 1.
Definition 5[24] Define the set:
(1) ΩBI(M)={F=(Fij)∈Ln, I(n1, n2, …, np)|‖Fii-1‖=‖Mii-1‖, ‖Fij‖=‖Mij‖(i≠j), i, j=1, 2, …, p};
(2) ΩBⅡ(M)={F=(Fij)∈Ln, I(n1, n2, …, np)|‖Fii-1Fij‖=‖Mii-1Mij‖, i, j=1, 2, …, p}.
These denote the same-mode sets of (Ⅰ)-type and (Ⅱ)-type associated with the matrix M=(Mij)∈Ln, I(n1, n2, …, np), respectively.
Lemma 4[28] Let A be an H-matrix. Then A is nonsingular, and |A-1|≤〈A〉-1.
Lemma 5[29] Let A=(aij)∈Zn×n have positive diagonal entries. Then A is an M-matrix if and only if ρ(B) < 1, where B=D-1C, D=diag(A) and A=D-C.
Lemma 6[5] Let A∈Rn×n be an H+-matrix. Then LCP(q, A) has a unique solution for any q∈Rn.
Lemma 7[8] Let A=M-N be a splitting of the matrix A∈Rn×n, Ω be a positive diagonal matrix, and γ be a positive constant. Then, for the LCP(q, A), the following statements hold true:
(ⅰ) If (z, r) is a solution of the LCP(q, A), then
$ \left( {\mathit{\boldsymbol{ \boldsymbol{\varOmega} }} + \mathit{\boldsymbol{M}}} \right)\mathit{\boldsymbol{x}} = \mathit{\boldsymbol{Nx}} + \left( {\mathit{\boldsymbol{ \boldsymbol{\varOmega} }} - \mathit{\boldsymbol{A}}} \right)\left| \mathit{\boldsymbol{x}} \right| - \gamma q; $ | (2) |
(ⅱ) If x satisfies the implicit fixed-point equation (2), then
$ \mathit{\boldsymbol{z}} = {\gamma ^{ - 1}}\left( {\left| \mathit{\boldsymbol{x}} \right| + \mathit{\boldsymbol{x}}} \right),r = {\gamma ^{ - 1}}\mathit{\boldsymbol{ \boldsymbol{\varOmega} }}\left( {\left| \mathit{\boldsymbol{x}} \right| - \mathit{\boldsymbol{x}}} \right) $ | (3) |
is a solution of the LCP(q, A).
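Lemma 7 is constructive: iterating the implicit fixed-point equation (2) and recovering z, r from (3) already yields a basic modulus-based solver. A minimal sketch (assuming NumPy; the trivial splitting M=A, N=0, the choice Ω=diag(A), γ=1 and the 2×2 test problem are illustrative assumptions):

```python
import numpy as np

def modulus_lcp(A, q, gamma=1.0, tol=1e-10, maxit=500):
    """Modulus-based iteration for LCP(q, A) built on the fixed-point
    equation (2) with the trivial splitting M = A, N = 0:
        (Omega + A) x = (Omega - A)|x| - gamma*q,
    then z, r are recovered from (3). A sketch for small dense
    H+-matrices with Omega = diag(A) >= D."""
    n = len(q)
    Omega = np.diag(np.diag(A))
    x = np.zeros(n)
    for _ in range(maxit):
        x_new = np.linalg.solve(Omega + A, (Omega - A) @ np.abs(x) - gamma * q)
        if np.linalg.norm(x_new - x, np.inf) < tol:
            x = x_new
            break
        x = x_new
    z = (np.abs(x) + x) / gamma
    r = Omega @ (np.abs(x) - x) / gamma
    return z, r

A = np.array([[4.0, -1.0], [-1.0, 4.0]])
q = np.array([-1.0, 2.0])
z, r = modulus_lcp(A, q)
# complementarity conditions (1): z >= 0, r = Az + q >= 0, z^T r = 0
print(bool(np.all(z >= -1e-8) and np.all(r >= -1e-8) and abs(z @ r) < 1e-8))  # True
```

For this H+-matrix the iteration operator is a contraction, and the computed pair is z=(1/4, 0), r=(0, 7/4).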
2 RMS-BMMTOR methods
Firstly, we introduce the concept of the multisplitting method and the detailed process of the parallel iterative method.
{Mk, Nk, Ek}k=1l is a multisplitting of the block matrix A if
1) A=Mk-Nk with det(Mk)≠0 is a splitting for k=1, 2, …, l;
2) Ek=diag(E11k, E22k, …, Eppk)≥0, k=1, 2, …, l, and ∑k=1lEk=I.
Given a positive diagonal matrix Ω and a positive constant γ, from lemma 7 we know that if x satisfies any of the implicit fixed-point equations
$ \begin{array}{l} \left( {\mathit{\boldsymbol{ \boldsymbol{\varOmega} }} + {\mathit{\boldsymbol{M}}_k}} \right)\mathit{\boldsymbol{x}} = {\mathit{\boldsymbol{N}}_k}\mathit{\boldsymbol{x}} + \left( {\mathit{\boldsymbol{ \boldsymbol{\varOmega} }} - \mathit{\boldsymbol{A}}} \right)\left| \mathit{\boldsymbol{x}} \right| - \gamma \mathit{\boldsymbol{q}},\\ \;\;\;\;\;k = 1,2, \cdots ,l, \end{array} $ | (4) |
then
$ \mathit{\boldsymbol{z}} = {\gamma ^{ - 1}}\left( {\left| \mathit{\boldsymbol{x}} \right| + \mathit{\boldsymbol{x}}} \right),\mathit{\boldsymbol{r}} = {\gamma ^{ - 1}}\mathit{\boldsymbol{ \boldsymbol{\varOmega} }}\left( {\left| \mathit{\boldsymbol{x}} \right| - \mathit{\boldsymbol{x}}} \right) $ | (5) |
is a solution of the LCP(q, A).
For a block matrix A∈Rm×m, the corresponding block diagonal matrix is D=diag(A11, A22, …, App), Lk and Fk are block strictly lower triangular matrices, and Uk=D-Lk-Fk-A, so that (D-Lk-Fk, Uk, Ek) is a multisplitting of the block matrix A∈Rm×m. With the equivalent reformulations (4), (5) and the TOR method[16] for the LCP(q, A), we can establish the following relaxed modulus-based synchronous block multisplitting multi-parameter TOR method (RMS-BMMTOR), which is similar to method 3.1 in [2] and method 3.1 in [15].
Method 1 (The RMS-BMMTOR method for LCP(q, A))
Let {Mk, Nk, Ek}k=1l be a multisplitting of the system matrix A∈Rn×n. Given an initial vector x(0)∈Rn, for m=0, 1, …, until the iteration sequence {z(m)}m=0∞⊂R+n converges, compute z(m+1)∈R+n by
$ {\mathit{\boldsymbol{z}}^{\left( {m + 1} \right)}} = \frac{1}{\gamma }\left( {\left| {{\mathit{\boldsymbol{x}}^{\left( {m + 1} \right)}}} \right| + {\mathit{\boldsymbol{x}}^{\left( {m + 1} \right)}}} \right) $ |
and x(m+1)∈Rn according to
$ {\mathit{\boldsymbol{x}}^{\left( {m + 1} \right)}} = \omega \sum\limits_{k = 1}^l {{\mathit{\boldsymbol{E}}_k}{\mathit{\boldsymbol{x}}^{\left( {m,k} \right)}}} + \left( {1 - \mathit{\boldsymbol{\omega }}} \right){\mathit{\boldsymbol{x}}^{\left( m \right)}}, $ |
where x(m, k), k=1, 2, …, l are obtained by solving the linear systems
$ \begin{array}{l} \left( {{\alpha _k}\mathit{\boldsymbol{ \boldsymbol{\varOmega} }} + \mathit{\boldsymbol{D}} - \left( {{\beta _k}{\mathit{\boldsymbol{L}}_k} + {\gamma _k}{\mathit{\boldsymbol{F}}_k}} \right)} \right){x^{\left( {m,k} \right)}} = \left[ {\left( {1 - {\alpha _k}} \right)\mathit{\boldsymbol{D + }}} \right.\\ \;\;\;\;\;\;\;\;\left. {\left( {{\alpha _k} - {\beta _k}} \right){\mathit{\boldsymbol{L}}_k} + \left( {{\alpha _k} - {\gamma _k}} \right){\mathit{\boldsymbol{F}}_k} + {\alpha _k}{\mathit{\boldsymbol{U}}_k}} \right]{x^{\left( m \right)}} + \\ \;\;\;\;\;\;\;\;{\alpha _k}\left[ {\left( {\mathit{\boldsymbol{ \boldsymbol{\varOmega} }} - \mathit{\boldsymbol{A}}} \right)\left| {{x^{\left( m \right)}}} \right| - \gamma q} \right],\;\;\;\;\;\;k = 1,2, \cdots ,l, \end{array} $ | (6) |
respectively.
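To make the update (6) concrete, here is a point-form sketch of one special case of method 1 (assuming NumPy; the single splitting l=1 with E1=I, the choices F1=0, Ω=diag(A), and the tridiagonal test problem are simplifying illustrative assumptions, so this is a relaxed modulus SOR/AOR-type sweep rather than a full parallel multisplitting):

```python
import numpy as np

def rms_bmmtor_point(A, q, alpha=1.0, beta=1.0, gam=1.0, omega=1.0,
                     gamma=1.0, tol=1e-10, maxit=1000):
    """Point-form sketch of method 1 with a single splitting (l = 1,
    E_1 = I): D = diag(A), L_1 the strictly lower part of -A, F_1 = 0,
    U_1 the strictly upper part of -A, so A = D - L_1 - F_1 - U_1."""
    n = len(q)
    D = np.diag(np.diag(A))
    L1 = -np.tril(A, -1)
    F1 = np.zeros_like(A)
    U1 = -np.triu(A, 1)
    Omega = D.copy()
    x = np.zeros(n)
    M = alpha * Omega + D - (beta * L1 + gam * F1)  # left-hand operator of (6)
    for _ in range(maxit):
        rhs = ((1 - alpha) * D + (alpha - beta) * L1
               + (alpha - gam) * F1 + alpha * U1) @ x \
              + alpha * ((Omega - A) @ np.abs(x) - gamma * q)
        x_k = np.linalg.solve(M, rhs)                 # inner TOR sweep (6)
        x_new = omega * x_k + (1 - omega) * x         # relaxation step
        if np.linalg.norm(x_new - x, np.inf) < tol:
            x = x_new
            break
        x = x_new
    return (np.abs(x) + x) / gamma                    # z from (5)

A = np.array([[4.0, -1.0, 0.0],
              [-1.0, 4.0, -1.0],
              [0.0, -1.0, 4.0]])
q = np.array([-1.0, 1.0, -2.0])
z = rms_bmmtor_point(A, q)
r = A @ z + q
print(bool(np.all(z >= -1e-8) and np.all(r >= -1e-8) and abs(z @ r) < 1e-8))  # True
```

With αk=βk=γk=ω=1 this reduces, as in remark 1 below, to a modulus-based Gauss-Seidel-type iteration; for this test problem it converges to z=(1/4, 0, 1/2).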
Remark 1 In method 1, when the coefficient matrix A is in point form and αk=α, βk=β, γk=γ, ω=1, the RMS-BMMTOR method reduces to the MSMTOR method; when A is in point form and αk=α, βk=β=γk, ω=1, it reduces to the MSMAOR method[26]; when A is in point form and ω=1, it reduces to the MSMMTOR method; when A is in point form and γk=βk, ω=1, it reduces to the MSMMAOR method[15]; when ω=1, it reduces to the MSBMMTOR method; when γk=βk, ω=1, it reduces to the MSBMMAOR method[25]; when the parameters (αk, βk, γk, ω)=(αk, αk, αk, 1), (1, 1, 1, 1) and (1, 0, 0, 1), it reduces to the MSBMMSOR, MSBMGS and MSBMJ methods, respectively; when the parameters (αk, βk, γk, ω)=(αk, αk, αk, ω), (1, 1, 1, ω) and (1, 0, 0, ω), it reduces to the RMS-BMMSOR, RMS-BMGS and RMS-BMJ methods, respectively.
3 Convergence analysis
Based on the modulus-based synchronous multisplitting AOR method, BAI et al[26] obtained the following results in 2013.
Theorem 1 Let A∈Rn×n be an H+-matrix, with D=diag(A) and B=D-A, and let (Mk, Nk, Ek)(k=1, 2, …, l) and (D-Lk, Uk, Ek)(k=1, 2, …, l) be a multisplitting and a triangular multisplitting of the matrix A, respectively. Assume that γ>0 and the positive diagonal matrix Ω satisfies Ω≥D. If A=D-Lk-Uk(k=1, 2, …, l) satisfies 〈A〉=D-|Lk|-|Uk|(k=1, 2, …, l), then the iteration sequence {z(m)}m=0∞ generated by the MSMAOR iteration method converges to the unique solution z* of LCP(q, A) for any initial vector z(0)∈R+n, provided relaxation parameters α and β satisfy
$ 0 < \beta \le \alpha < \frac{1}{{\rho \left( {{D^{ - 1}}\left| B \right|} \right)}}. $ |
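The upper bound 1/ρ(D-1|B|) in theorem 1 is directly computable. A short sketch (assuming NumPy; the tridiagonal test matrix is illustrative):

```python
import numpy as np

# For this H+-matrix, theorem 1 admits 0 < beta <= alpha < 1/rho(D^{-1}|B|);
# the bound is computed numerically below.
A = np.array([[4.0, -1.0, 0.0],
              [-1.0, 4.0, -1.0],
              [0.0, -1.0, 4.0]])
D = np.diag(np.diag(A))
B = D - A
rho = np.max(np.abs(np.linalg.eigvals(np.linalg.solve(D, np.abs(B)))))
bound = 1.0 / rho  # here rho = sqrt(2)/4, so the bound equals 2*sqrt(2)
print(round(bound, 4))  # 2.8284
```

Any pair 0 < β ≤ α below this bound is admissible; the classical choice α=β=1 (modulus Gauss-Seidel) always satisfies it when ρ(D-1|B|) < 1.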
Based on the modulus-based synchronous block multisplitting AOR method, LI et al[25] established the following results in 2013.
Theorem 2[25] Let A∈Ln, I(n1, n2, …, np) be a block HB(Ⅰ)(P, Q)-matrix, with H∈ΩBI(PAQ), and let (Mk, Nk, Ek)(k=1, 2, …, l) and (D-Lk, Uk, Ek)(k=1, 2, …, l) be a block multisplitting and a block triangular multisplitting of the block H-matrix, respectively. Assume that γ>0 and the positive matrix Ω satisfies Ω≥D(H) and diag(Ω)=diag(D(H)). If H=D-Lk-Uk(k=1, 2, …, l) satisfies 〈H〉=〈D〉-[Lk]-[Uk]=D〈H〉-B〈H〉(k=1, 2, …, l), then the iteration sequence {z(m)}m=0∞ generated by the MSBMAOR iteration method converges to the unique solution z* of LCP(q, A) for any initial vector z(0)∈R+n, provided the relaxation parameters α and β satisfy
$ 0 < \beta \le \alpha < \frac{1}{{{\mu _1}\left( {\mathit{\boldsymbol{PAQ}}} \right)}}. $ |
Based on the modulus-based synchronous multisplitting multi-parameters AOR method, ZHANG et al[15] gave the following results in 2014.
Theorem 3[15] Let A∈Rn×n be an H+-matrix, with D=diag(A) and B=D-A, and let (Mk, Nk, Ek)(k=1, 2, …, l) and (D-Lk, Uk, Ek)(k=1, 2, …, l) be a multisplitting and a triangular multisplitting of the matrix A, respectively. Assume that γ>0 and the positive diagonal matrix Ω satisfies Ω≥D. If A=D-Lk-Uk(k=1, 2, …, l) satisfies 〈A〉=D-|Lk|-|Uk|(k=1, 2, …, l), then the iteration sequence {z(m)}m=0∞ generated by the MSMMAOR iteration method converges to the unique solution z* of LCP(q, A) for any initial vector z(0)∈R+n, provided relaxation parameters αk and βk satisfy
$ \begin{array}{*{20}{c}} {0 < {\beta _k} \le {\alpha _k} \le 1\;{\rm{or}}\;{\rm{0}} < {\beta _k} < \frac{1}{{\rho \left( {{\mathit{\boldsymbol{D}}^{ - 1}}\left| \mathit{\boldsymbol{B}} \right|} \right)}},}\\ {1 < {\alpha _k} < \frac{1}{{\rho \left( {{\mathit{\boldsymbol{D}}^{ - 1}}\left| \mathit{\boldsymbol{B}} \right|} \right)}}.} \end{array} $ |
Based on the global relaxed non-stationary multisplitting multi-parameter TOR method (GRNMMTOR) for the large sparse linear systems, ZHANG et al [16] obtained the following convergence theorem in 2008.
Theorem 4[16] Let A∈Rn×n be an H-matrix, and for k=1, 2, …, l, Lk and Fk be strictly lower triangular matrices. Define the matrix Uk, k=1, 2, …, l, such that A=D-Lk-Fk-Uk. Assume that we have 〈A〉=|D|-|Lk|-|Fk|-|Uk|=|D|-|B|. If
$ 0 \le {\beta _k} \le {\gamma _k},0 \le {\alpha _k} \le {\gamma _k},0 < {\gamma _k} < \frac{1}{\rho },0 < \omega < \frac{1}{{{\rho _{{\gamma _k}}}}}, $ |
then the GRNMMTOR method converges for any initial vector x(0), where ρ=ρ(J), J=|D|-1|B|, ργk=
Based on the relaxed modulus-based synchronous block multisplitting multi-parameter TOR method, we now present a convergence result of the multisplitting methods for the linear complementarity problem when the system matrix is a block H+-matrix, as follows:
Theorem 5 Let A∈Ln, I(n1, n2, …, np) be a block HB(Ⅰ)(P, Q)-matrix with H∈ΩBI(PAQ), and let (Mk, Nk, Ek)(k=1, 2, …, l) and (D-Lk-Fk, Uk, Ek)(k=1, 2, …, l) be a block multisplitting and a block triangular multisplitting of the block H-matrix, respectively. Assume that γ>0 and the positive matrix Ω satisfies Ω≥D(H) and diag(Ω)=diag(D(H)). If H=D-Lk-Fk-Uk(k=1, 2, …, l) satisfies 〈H〉=〈D〉-[Lk]-[Fk]-[Uk]=D〈H〉-B〈H〉(k=1, 2, …, l), then the iteration sequence {z(m)}m=0∞ generated by the RMS-BMMTOR iteration method converges to the unique solution z* of LCP(q, A) for any initial vector z(0)∈R+n, provided the relaxation parameters αk, βk, γk and ω satisfy
$ \begin{array}{l} 0 < {\beta _k},{\gamma _k} \le {\alpha _k} \le 1,0 < \omega < \frac{2}{{1 + \rho '}}{\rm{or}}\\ 0 < {\beta _k},{\gamma _k} < \frac{1}{{{\mu _1}\left( {\mathit{\boldsymbol{PAQ}}} \right)}},1 < {\alpha _k} < \frac{1}{{{\mu _1}\left( {\mathit{\boldsymbol{PAQ}}} \right)}},\\ \;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;0 < \omega < \frac{2}{{1 + \rho '}}, \end{array} $ | (7) |
where μ1(PAQ)=ρ(D〈H〉-1B〈H〉)=ρ(J〈H〉), ρ′=
Proof From lemma 7 and (6), for the RMS-BMMTOR method the solution x* satisfies
$ \begin{array}{l} \left( {{\alpha _k}\mathit{\boldsymbol{ \boldsymbol{\varOmega} }} + \mathit{\boldsymbol{\bar D}} - \left( {{\beta _k}{{\mathit{\boldsymbol{\bar L}}}_k} + {\gamma _k}{{\mathit{\boldsymbol{\bar F}}}_k}} \right)} \right){x_ * } = \left[ {\left( {1 - {\alpha _k}} \right)\mathit{\boldsymbol{\bar D + }}} \right.\\ \;\;\;\;\;\;\left. {\left( {{\alpha _k} - {\beta _k}} \right){{\mathit{\boldsymbol{\bar L}}}_k} + \left( {{\alpha _k} - {\gamma _k}} \right){{\mathit{\boldsymbol{\bar F}}}_k} + {\alpha _k}{{\mathit{\boldsymbol{\bar U}}}_k}} \right]{x_ * } + \\ \;\;\;\;\;\;{\alpha _k}\left[ {\left( {\mathit{\boldsymbol{ \boldsymbol{\varOmega} }} - \mathit{\boldsymbol{H}}} \right)\left| {{x_ * }} \right| - \gamma q} \right],k = 1,2, \cdots ,l. \end{array} $ | (8) |
Subtracting (8) from (6), we have
$ \begin{array}{l} {\mathit{\boldsymbol{x}}^{\left( {m + 1} \right)}} - {\mathit{\boldsymbol{x}}_ * } = \omega \sum\limits_{k = 1}^l {{\mathit{\boldsymbol{E}}_k}{{\left( {{\alpha _k}\mathit{\boldsymbol{ \boldsymbol{\varOmega} }} + \mathit{\boldsymbol{\bar D}} - \left( {{\beta _k}{{\mathit{\boldsymbol{\bar L}}}_k} + {\gamma _k}{{\mathit{\boldsymbol{\bar F}}}_k}} \right)} \right)}^{ - 1}}} \times \\ \;\;\;\;\;\;\left[ {\left( {1 - {\alpha _k}} \right)\mathit{\boldsymbol{\bar D}} + \left( {{\alpha _k} - {\beta _k}} \right){{\mathit{\boldsymbol{\bar L}}}_k} + \left( {{\alpha _k} - {\gamma _k}} \right){{\mathit{\boldsymbol{\bar F}}}_k} + {\alpha _k}{{\mathit{\boldsymbol{\bar U}}}_k}} \right]\left( {{\mathit{\boldsymbol{x}}^{\left( m \right)}} - {\mathit{\boldsymbol{x}}_ * }} \right) + \\ \;\;\;\;\;\;\omega \sum\limits_{k = 1}^l {{\mathit{\boldsymbol{E}}_k}{{\left( {{\alpha _k}\mathit{\boldsymbol{ \boldsymbol{\varOmega} }} + \mathit{\boldsymbol{\bar D}} - \left( {{\beta _k}{{\mathit{\boldsymbol{\bar L}}}_k} + {\gamma _k}{{\mathit{\boldsymbol{\bar F}}}_k}} \right)} \right)}^{ - 1}}{\alpha _k}\left( {\mathit{\boldsymbol{ \boldsymbol{\varOmega} }} - \mathit{\boldsymbol{H}}} \right)\left( {\left| {{\mathit{\boldsymbol{x}}^{\left( m \right)}}} \right| - \left| {{\mathit{\boldsymbol{x}}_ * }} \right|} \right)} + \\ \;\;\;\;\;\;\left( {1 - \omega } \right)\left( {{\mathit{\boldsymbol{x}}^{\left( m \right)}} - {\mathit{\boldsymbol{x}}_ * }} \right). \end{array} $ | (9) |
Taking block absolute values on both sides of (9), using |[x(m)]-[x*]|≤[x(m)-x*], and enlarging terms, we obtain
$ \begin{array}{l} \left[ {{\mathit{\boldsymbol{x}}^{\left( {m + 1} \right)}} - {\mathit{\boldsymbol{x}}_ * }} \right] \le \omega \sum\limits_{k = 1}^l {\left[ {{\mathit{\boldsymbol{E}}_k}} \right]\left[ {{{\left( {{\alpha _k}\mathit{\boldsymbol{ \boldsymbol{\varOmega} }} + \mathit{\boldsymbol{\bar D}} - \left( {{\beta _k}{{\mathit{\boldsymbol{\bar L}}}_k} + {\gamma _k}{{\mathit{\boldsymbol{\bar F}}}_k}} \right)} \right)}^{ - 1}}} \right]} \times \\ \;\;\;\;\;\;\left[ {\left| {1 - {\alpha _k}} \right|\left[ {\mathit{\boldsymbol{\bar D}}} \right] + \left| {{\alpha _k} - {\beta _k}} \right|\left[ {{{\mathit{\boldsymbol{\bar L}}}_k}} \right] + \left| {{\alpha _k} - {\gamma _k}} \right|\left[ {{{\mathit{\boldsymbol{\bar F}}}_k}} \right] + {\alpha _k}\left[ {{{\mathit{\boldsymbol{\bar U}}}_k}} \right]} \right]\left[ {{\mathit{\boldsymbol{x}}^{\left( m \right)}} - {\mathit{\boldsymbol{x}}_ * }} \right] + \\ \;\;\;\;\;\;\omega \sum\limits_{k = 1}^l {\left[ {{\mathit{\boldsymbol{E}}_k}} \right]\left[ {{{\left( {{\alpha _k}\mathit{\boldsymbol{ \boldsymbol{\varOmega} }} + \mathit{\boldsymbol{\bar D}} - \left( {{\beta _k}{{\mathit{\boldsymbol{\bar L}}}_k} + {\gamma _k}{{\mathit{\boldsymbol{\bar F}}}_k}} \right)} \right)}^{ - 1}}} \right]{\alpha _k}\left[ {\mathit{\boldsymbol{ \boldsymbol{\varOmega} }} - \mathit{\boldsymbol{H}}} \right]\left[ {{\mathit{\boldsymbol{x}}^{\left( m \right)}} - {\mathit{\boldsymbol{x}}_ * }} \right]} + \\ \;\;\;\;\;\;\left| {1 - \omega } \right|\left[ {{\mathit{\boldsymbol{x}}^{\left( m \right)}} - {\mathit{\boldsymbol{x}}_ * }} \right]. \end{array} $
Since [Ω-H]=〈Ω〉-(D〈H〉-B〈H〉), B〈H〉=[Lk]+[Fk]+[Uk] and [D]≥D〈H〉, we have
$ \begin{array}{l} \left[ {{\mathit{\boldsymbol{x}}^{\left( {m + 1} \right)}} - {\mathit{\boldsymbol{x}}_ * }} \right] \le \omega \sum\limits_{k = 1}^l {\left[ {{\mathit{\boldsymbol{E}}_k}} \right]{{\left( {{\alpha _k}\left\langle \mathit{\boldsymbol{ \boldsymbol{\varOmega} }} \right\rangle + {\mathit{\boldsymbol{D}}_{\left\langle \mathit{\boldsymbol{H}} \right\rangle }} - \left( {{\beta _k}\left[ {{{\mathit{\boldsymbol{\bar L}}}_k}} \right] + {\gamma _k}\left[ {{{\mathit{\boldsymbol{\bar F}}}_k}} \right]} \right)} \right)}^{ - 1}}} \times \\ \;\;\;\;\;\left[ {\left( {\left| {1 - {\alpha _k}} \right| - {\alpha _k}} \right){\mathit{\boldsymbol{D}}_{\left\langle \mathit{\boldsymbol{H}} \right\rangle }} + \left( {\left| {{\alpha _k} - {\beta _k}} \right| + {\alpha _k}} \right)\left[ {{{\mathit{\boldsymbol{\bar L}}}_k}} \right] + \left( {\left| {{\alpha _k} - {\gamma _k}} \right| + {\alpha _k}} \right)\left[ {{{\mathit{\boldsymbol{\bar F}}}_k}} \right] + } \right.\\ \;\;\;\;\;\left. {2{\alpha _k}\left[ {{{\mathit{\boldsymbol{\bar U}}}_k}} \right] + {\alpha _k}\left\langle \mathit{\boldsymbol{ \boldsymbol{\varOmega} }} \right\rangle } \right]\left[ {{\mathit{\boldsymbol{x}}^{\left( m \right)}} - {\mathit{\boldsymbol{x}}_ * }} \right] + \left| {1 - \omega } \right|\left[ {{\mathit{\boldsymbol{x}}^{\left( m \right)}} - {\mathit{\boldsymbol{x}}_ * }} \right] = \\ \;\;\;\;\;\left\{ {\omega \sum\limits_{k = 1}^l {\left[ {{\mathit{\boldsymbol{E}}_k}} \right]{\mathit{\boldsymbol{H}}_{{\rm{RMSBMMTOR}}}}} + \left| {1 - \omega } \right|\mathit{\boldsymbol{I}}} \right\}\left[ {{\mathit{\boldsymbol{x}}^{\left( m \right)}} - {\mathit{\boldsymbol{x}}_ * }} \right] = \\ \;\;\;\;\;{{\mathit{\boldsymbol{\bar H}}}_{{\rm{RMSBMMTOR}}}}\left[ {{\mathit{\boldsymbol{x}}^{\left( m \right)}} - {\mathit{\boldsymbol{x}}_ * }} \right], \end{array} $ | (10) |
where
$ \begin{array}{l} {{\mathit{\boldsymbol{\bar H}}}_{{\rm{RMSBMMTOR}}}} = \omega \sum\limits_{k = 1}^l {\left[ {{\mathit{\boldsymbol{E}}_k}} \right]{\mathit{\boldsymbol{H}}_{{\rm{RMSBMMTOR}}}}} + \left| {1 - \omega } \right|\mathit{\boldsymbol{I}},\\ {\mathit{\boldsymbol{H}}_{{\rm{RMSBMMTOR}}}} = {\left( {{\alpha _k}\left\langle \mathit{\boldsymbol{ \boldsymbol{\varOmega} }} \right\rangle + {\mathit{\boldsymbol{D}}_{\left\langle \mathit{\boldsymbol{H}} \right\rangle }} - \left( {{\beta _k}\left[ {{\mathit{\boldsymbol{L}}_k}} \right] + {\gamma _k}\left[ {{\mathit{\boldsymbol{F}}_k}} \right]} \right)} \right)^{ - 1}}\left[ {\left( {\left| {1 - {\alpha _k}} \right| - {\alpha _k}} \right){\mathit{\boldsymbol{D}}_{\left\langle \mathit{\boldsymbol{H}} \right\rangle }} + } \right.\\ \;\;\;\;\;\;\;\left. {\left( {\left| {{\alpha _k} - {\beta _k}} \right| + {\alpha _k}} \right)\left[ {{{\mathit{\boldsymbol{\bar L}}}_k}} \right] + \left( {\left| {{\alpha _k} - {\gamma _k}} \right| + {\alpha _k}} \right)\left[ {{{\mathit{\boldsymbol{\bar F}}}_k}} \right] + 2{\alpha _k}\left[ {{{\mathit{\boldsymbol{\bar U}}}_k}} \right] + {\alpha _k}\left\langle \mathit{\boldsymbol{ \boldsymbol{\varOmega} }} \right\rangle } \right]. \end{array} $ | (11) |
The error relation (10) is the basis for proving the convergence of the RMS-BMMTOR method. By making use of lemmas 1 and 2, defining ε(m)=[x(m)-x*] and arranging similar terms together, we obtain
$ \begin{array}{l} {\varepsilon ^{\left( {m + 1} \right)}} = \left[ {{\mathit{\boldsymbol{x}}^{\left( {m + 1} \right)}} - {\mathit{\boldsymbol{x}}_ * }} \right] \le {{\mathit{\boldsymbol{\bar H}}}_{{\rm{RMSBMMTOR}}}}\left[ {{\mathit{\boldsymbol{x}}^{\left( m \right)}} - {\mathit{\boldsymbol{x}}_ * }} \right] = \\ \;\;\;\;\;\;\;\;\left\{ {\omega \sum\limits_{k = 1}^l {\left[ {{\mathit{\boldsymbol{E}}_k}} \right]{\mathit{\boldsymbol{H}}_{{\rm{RMSBMMTOR}}}}} + \left| {1 - \omega } \right|\mathit{\boldsymbol{I}}} \right\}\left[ {{\mathit{\boldsymbol{x}}^{\left( m \right)}} - {\mathit{\boldsymbol{x}}_ * }} \right]. \end{array} $
Case 1 Let 0 < βk, γk≤αk≤1,
$ {\mathit{\boldsymbol{M}}_k} = {\alpha _k}\left\langle \mathit{\boldsymbol{ \boldsymbol{\varOmega} }} \right\rangle + {\mathit{\boldsymbol{D}}_{\left\langle \mathit{\boldsymbol{H}} \right\rangle }} - \left( {{\beta _k}\left[ {{\mathit{\boldsymbol{L}}_k}} \right] + {\gamma _k}\left[ {{\mathit{\boldsymbol{F}}_k}} \right]} \right), $ |
$ \begin{array}{l} \mathit{\boldsymbol{N}}_k^1 = \left( {\left| {1 - {\alpha _k}} \right| - {\alpha _k}} \right){\mathit{\boldsymbol{D}}_{\left\langle \mathit{\boldsymbol{H}} \right\rangle }} + \left( {\left| {{\alpha _k} - {\beta _k}} \right| + {\alpha _k}} \right)\left[ {{{\mathit{\boldsymbol{\bar L}}}_k}} \right] + \\ \;\;\;\;\;\;\;\;\left( {\left| {{\alpha _k} - {\gamma _k}} \right| + {\alpha _k}} \right)\left[ {{{\mathit{\boldsymbol{\bar F}}}_k}} \right] + 2{\alpha _k}\left[ {{{\mathit{\boldsymbol{\bar U}}}_k}} \right] + {\alpha _k}\left\langle \mathit{\boldsymbol{ \boldsymbol{\varOmega} }} \right\rangle = \\ \;\;\;\;\;\;\;\;\left( {1 - 2{\alpha _k}} \right){\mathit{\boldsymbol{D}}_{\left\langle \mathit{\boldsymbol{H}} \right\rangle }} + \left( {2{\alpha _k} - {\beta _k}} \right)\left[ {{{\mathit{\boldsymbol{\bar L}}}_k}} \right] + \left( {2{\alpha _k} - {\gamma _k}} \right)\left[ {{{\mathit{\boldsymbol{\bar F}}}_k}} \right] + \\ \;\;\;\;\;\;\;\;2{\alpha _k}\left[ {{{\mathit{\boldsymbol{\bar U}}}_k}} \right] + {\alpha _k}\left\langle \mathit{\boldsymbol{ \boldsymbol{\varOmega} }} \right\rangle = {\mathit{\boldsymbol{M}}_k} - 2{\alpha _k}{\mathit{\boldsymbol{D}}_{\left\langle \mathit{\boldsymbol{H}} \right\rangle }} + 2{\alpha _k}{\mathit{\boldsymbol{B}}_{\left\langle \mathit{\boldsymbol{H}} \right\rangle }}. \end{array} $
So, we have
$ \begin{array}{l} {\mathit{\boldsymbol{H}}_{{\rm{RMS - BMMTOR}}}} = \mathit{\boldsymbol{M}}_k^{ - 1}\mathit{\boldsymbol{N}}_k^1 = \mathit{\boldsymbol{M}}_k^{ - 1}\left( {{\mathit{\boldsymbol{M}}_k} - 2{\alpha _k}{\mathit{\boldsymbol{D}}_{\left\langle \mathit{\boldsymbol{H}} \right\rangle }} + } \right.\\ \;\;\;\;\;\;\;\left. {2{\alpha _k}{\mathit{\boldsymbol{B}}_{\left\langle \mathit{\boldsymbol{H}} \right\rangle }}} \right) = \mathit{\boldsymbol{I}} - 2{\alpha _k}\mathit{\boldsymbol{M}}_k^{ - 1}\left( {{\mathit{\boldsymbol{D}}_{\left\langle \mathit{\boldsymbol{H}} \right\rangle }} - {\mathit{\boldsymbol{B}}_{\left\langle \mathit{\boldsymbol{H}} \right\rangle }}} \right). \end{array} $
Through further analysis, we have
$ \begin{array}{l} \left[ {{\mathit{\boldsymbol{H}}_{{\rm{RMS - BMMTOR}}}}} \right] \le \mathit{\boldsymbol{M}}_k^{ - 1}\left[ {{\mathit{\boldsymbol{M}}_k} - 2{\alpha _k}\left( {{\mathit{\boldsymbol{D}}_{\left\langle \mathit{\boldsymbol{H}} \right\rangle }} - {\mathit{\boldsymbol{B}}_{\left\langle \mathit{\boldsymbol{H}} \right\rangle }}} \right)} \right] \le \\ \;\;\;\;\;\;\;\mathit{\boldsymbol{I}} - 2{\alpha _k}\mathit{\boldsymbol{M}}_k^{ - 1}{\mathit{\boldsymbol{D}}_{\left\langle \mathit{\boldsymbol{H}} \right\rangle }}\left( {\mathit{\boldsymbol{I}} - {\mathit{\boldsymbol{D}}_{\left\langle \mathit{\boldsymbol{H}} \right\rangle }}^{ - 1}{\mathit{\boldsymbol{B}}_{\left\langle \mathit{\boldsymbol{H}} \right\rangle }}} \right). \end{array} $ |
Since A∈Ln, I(n1, n2, …, np) is a block HB(Ⅰ)(P, Q)-matrix, by lemmas 2 and 3 we know μ1(PAQ)=ρ(D〈H〉-1B〈H〉)=ρ(J〈H〉) < 1. Let Jε=J〈H〉+εeeT, where e denotes the vector e=(1, 1, …, 1)T∈Rn. Since J〈H〉 is nonnegative, the matrix Jε has only positive entries and is irreducible for any ε>0. By the Perron-Frobenius theorem, for any ε>0, there is a vector xε>0 such that
$ \left( {{\mathit{\boldsymbol{J}}_{\left\langle \mathit{\boldsymbol{H}} \right\rangle }} + \varepsilon \mathit{\boldsymbol{e}}{\mathit{\boldsymbol{e}}^{\rm{T}}}} \right){x_\varepsilon } = {\rho _\varepsilon }{x_\varepsilon }, $ |
where ρε=ρ(J〈H〉+εeeT)=ρ(Jε). Moreover, if ε>0 is small enough, we have ρε < 1 by continuity of the spectral radius. Because 0 < αk≤1 and ρε < 1, we also have 1-2αk+2αkρε < 1. So
$ \begin{array}{l} \left[ {{\mathit{\boldsymbol{H}}_{{\rm{RMSBMMTOR}}}}} \right] \le I - 2{\alpha _k}\mathit{\boldsymbol{M}}_k^{ - 1}{\mathit{\boldsymbol{D}}_{\left\langle \mathit{\boldsymbol{H}} \right\rangle }}\left[ {\mathit{\boldsymbol{I}} - \left( {\mathit{\boldsymbol{D}}_{\left\langle \mathit{\boldsymbol{H}} \right\rangle }^{ - 1}{\mathit{\boldsymbol{B}}_{\left\langle \mathit{\boldsymbol{H}} \right\rangle }} + } \right.} \right.\\ \;\;\;\;\;\;\left. {\left. {\varepsilon \mathit{\boldsymbol{e}}{\mathit{\boldsymbol{e}}^{\rm{T}}}} \right)} \right] = I - 2{\alpha _k}\mathit{\boldsymbol{M}}_k^{ - 1}{\mathit{\boldsymbol{D}}_{\left\langle \mathit{\boldsymbol{H}} \right\rangle }}\left[ {\mathit{\boldsymbol{I}} - {\mathit{\boldsymbol{J}}_\varepsilon }} \right]. \end{array} $ |
Multiplying both sides of the above inequality by xε and using Mk-1≥D〈H〉-1, we can obtain
$ \begin{array}{l} \left[ {{\mathit{\boldsymbol{H}}_{{\rm{RMS - BMMTOR}}}}} \right]{\mathit{\boldsymbol{x}}_\varepsilon } \le {\mathit{\boldsymbol{x}}_\varepsilon } - 2{\alpha _k}\mathit{\boldsymbol{M}}_k^{ - 1}{\mathit{\boldsymbol{D}}_{\left\langle \mathit{\boldsymbol{H}} \right\rangle }}\left[ {1 - \rho \left( {{\mathit{\boldsymbol{J}}_\varepsilon }} \right)} \right]{\mathit{\boldsymbol{x}}_\varepsilon } \le \\ \;\;\;\;\;\;\;{\mathit{\boldsymbol{x}}_\varepsilon } - 2{\alpha _k}\mathit{\boldsymbol{D}}_{\left\langle \mathit{\boldsymbol{H}} \right\rangle }^{ - 1}{\mathit{\boldsymbol{D}}_{\left\langle \mathit{\boldsymbol{H}} \right\rangle }}\left[ {1 - \rho \left( {{\mathit{\boldsymbol{J}}_\varepsilon }} \right)} \right]{\mathit{\boldsymbol{x}}_\varepsilon } = \\ \;\;\;\;\;\;\;\left( {1 - 2{\alpha _k} + 2{\alpha _k}\rho \left( {{\mathit{\boldsymbol{J}}_\varepsilon }} \right)} \right){\mathit{\boldsymbol{x}}_\varepsilon }. \end{array} $
Based on the weighting matrices Ek and the definition of [·], we know that
$ \begin{array}{l} \left[ {{{\mathit{\boldsymbol{\bar H}}}_{{\rm{RMS - BMMTOR}}}}} \right]{\mathit{\boldsymbol{x}}_\varepsilon } = \omega \sum\limits_{k = 1}^l {\left[ {{\mathit{\boldsymbol{E}}_k}} \right]\left( {1 - 2{\alpha _k} + 2{\alpha _k}\rho \left( {{\mathit{\boldsymbol{J}}_\varepsilon }} \right)} \right){x_\varepsilon } + } \\ \;\;\;\;\;\;\;\;\left| {1 - \omega } \right|{\mathit{\boldsymbol{x}}_\varepsilon } \le \omega \left( {1 - 2{\alpha _k} + 2{\alpha _k}{\rho _\varepsilon }} \right){\mathit{\boldsymbol{x}}_\varepsilon } + \\ \;\;\;\;\;\;\;\;\left| {1 - \omega } \right|{\mathit{\boldsymbol{x}}_\varepsilon } \le \left( {\omega \rho ' + \left| {1 - \omega } \right|} \right){\mathit{\boldsymbol{x}}_\varepsilon } = {\theta _1}{\mathit{\boldsymbol{x}}_\varepsilon }\left( {\varepsilon \to {0^ + }} \right), \end{array} $ |
where θ1=ωρ′+|1-ω| < 1.
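The admissible interval for ω in Case 1 can be checked numerically. The following sketch (with an arbitrarily assumed contraction factor ρ′ = 0.6, not a value taken from the paper) verifies that θ = ωρ′ + |1 − ω| < 1 for every ω strictly inside (0, 2/(1+ρ′)), and that the bound is attained at the right endpoint:

```python
# Illustration only: for any fixed rho' in (0, 1),
# theta(omega) = omega * rho' + |1 - omega| stays below 1
# whenever 0 < omega < 2 / (1 + rho').
def theta(omega, rho_prime):
    return omega * rho_prime + abs(1.0 - omega)

rho_prime = 0.6                      # assumed contraction factor < 1
upper = 2.0 / (1.0 + rho_prime)      # right end of the admissible interval

# sample omega strictly inside (0, upper)
n = 1000
samples = [(i + 1) / (n + 1) * upper for i in range(n)]
assert all(theta(w, rho_prime) < 1.0 for w in samples)

# at omega = upper itself the contraction bound is lost: theta = 1
assert theta(upper, rho_prime) >= 1.0
```

For 0 < ω ≤ 1 the factor equals 1 − ω(1 − ρ′) < 1, and for 1 < ω < 2/(1+ρ′) it equals ω(1 + ρ′) − 1 < 1, which is exactly what the sketch samples.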
Case 2
$ 0 < {\beta _k},{\gamma _k} < \frac{1}{{{\mu _1}\left( {\mathit{\boldsymbol{PAQ}}} \right)}},\;1 < {\alpha _k} < \frac{1}{{{\mu _1}\left( {\mathit{\boldsymbol{PAQ}}} \right)}},\;0 < \omega < \frac{2}{{1 + \rho '}}. $ |
Subcase 1 αk≥βk and αk≥γk. Define
$ \begin{array}{l} \mathit{\boldsymbol{N}}_k^2 = \left( {\left| {1 - {\alpha _k}} \right| - {\alpha _k}} \right){\mathit{\boldsymbol{D}}_{\left\langle \mathit{\boldsymbol{H}} \right\rangle }} + \left( {\left| {{\alpha _k} - {\beta _k}} \right| + {\alpha _k}} \right)\left[ {{{\mathit{\boldsymbol{\bar L}}}_k}} \right] + \\ \;\;\;\;\;\;\left( {\left| {{\alpha _k} - {\gamma _k}} \right| + {\alpha _k}} \right)\left[ {{{\mathit{\boldsymbol{\bar F}}}_k}} \right] + 2{\alpha _k}\left[ {{{\mathit{\boldsymbol{\bar U}}}_k}} \right] + {\alpha _k}\left\langle \mathit{\boldsymbol{ \boldsymbol{\varOmega} }} \right\rangle = \\ \;\;\;\;\;\;{\mathit{\boldsymbol{M}}_k} - 2{\mathit{\boldsymbol{D}}_{\left\langle \mathit{\boldsymbol{H}} \right\rangle }} + 2{\alpha _k}{\mathit{\boldsymbol{B}}_{\left\langle \mathit{\boldsymbol{H}} \right\rangle }}. \end{array} $ |
So
$ \begin{array}{l} \left[ {{\mathit{\boldsymbol{H}}_{{\rm{RMS - BMMTOR}}}}} \right] \le \mathit{\boldsymbol{M}}_k^{ - 1}\left[ {{\mathit{\boldsymbol{M}}_k} - 2\left( {{\mathit{\boldsymbol{D}}_{\left\langle \mathit{\boldsymbol{H}} \right\rangle }} - {\alpha _k}{\mathit{\boldsymbol{B}}_{\left\langle \mathit{\boldsymbol{H}} \right\rangle }}} \right)} \right] \le \\ \;\;\;\;\;\;\mathit{\boldsymbol{I}} - 2\mathit{\boldsymbol{M}}_k^{ - 1}{\mathit{\boldsymbol{D}}_{\left\langle \mathit{\boldsymbol{H}} \right\rangle }}\left( {\mathit{\boldsymbol{I}} - {\alpha _k}{\mathit{\boldsymbol{D}}_{\left\langle \mathit{\boldsymbol{H}} \right\rangle }}^{ - 1}{\mathit{\boldsymbol{B}}_{\left\langle \mathit{\boldsymbol{H}} \right\rangle }}} \right). \end{array} $ |
Similar to Case 1, let e denote the vector e=(1, 1, …, 1)T∈Rn, and let xε>0 satisfy Jεxε=(J〈H〉+εeeT)xε=ρ(Jε)xε. Moreover, if ε>0 is small enough, we have ρε < 1 by continuity of the spectral radius. Since 1 < αk < 1/μ1(PAQ), we have
$ 2{\alpha _k}\rho - 1 < 1\;{\rm{and}}\;2{\alpha _k}{\rho _\varepsilon } - 1 < 1. $ |
So
$ \begin{array}{l} \left[ {{\mathit{\boldsymbol{H}}_{{\rm{RMS - BMMTOR}}}}} \right] \le \mathit{\boldsymbol{I}} - 2\mathit{\boldsymbol{M}}_k^{ - 1}{\mathit{\boldsymbol{D}}_{\left\langle \mathit{\boldsymbol{H}} \right\rangle }}\left[ {\mathit{\boldsymbol{I}} - {\alpha _k}\left( {\mathit{\boldsymbol{D}}_{\left\langle \mathit{\boldsymbol{H}} \right\rangle }^{ - 1}{\mathit{\boldsymbol{B}}_{\left\langle \mathit{\boldsymbol{H}} \right\rangle }} + } \right.} \right.\\ \;\;\;\;\;\;\;\left. {\left. {\varepsilon \mathit{\boldsymbol{e}}{\mathit{\boldsymbol{e}}^{\rm{T}}}} \right)} \right] = \mathit{\boldsymbol{I}} - 2\mathit{\boldsymbol{M}}_k^{ - 1}{\mathit{\boldsymbol{D}}_{\left\langle \mathit{\boldsymbol{H}} \right\rangle }}\left[ {\mathit{\boldsymbol{I}} - {\alpha _k}{\mathit{\boldsymbol{J}}_\varepsilon }} \right]. \end{array} $ |
Multiplying both sides of the above inequality by xε and using Mk-1≥D〈H〉-1, we obtain
$ \begin{array}{l} \left[ {{\mathit{\boldsymbol{H}}_{{\rm{RMS - BMMTOR}}}}} \right]{\mathit{\boldsymbol{x}}_\varepsilon } \le {\mathit{\boldsymbol{x}}_\varepsilon } - 2\mathit{\boldsymbol{M}}_k^{ - 1}{\mathit{\boldsymbol{D}}_{\left\langle \mathit{\boldsymbol{H}} \right\rangle }}\left[ {1 - {\alpha _k}\rho \left( {{\mathit{\boldsymbol{J}}_\varepsilon }} \right)} \right]{\mathit{\boldsymbol{x}}_\varepsilon } \le \\ \;\;\;\;\;\;\;{\mathit{\boldsymbol{x}}_\varepsilon } - 2\mathit{\boldsymbol{D}}_{\left\langle \mathit{\boldsymbol{H}} \right\rangle }^{ - 1}{\mathit{\boldsymbol{D}}_{\left\langle \mathit{\boldsymbol{H}} \right\rangle }}\left[ {1 - {\alpha _k}\rho \left( {{\mathit{\boldsymbol{J}}_\varepsilon }} \right)} \right]{\mathit{\boldsymbol{x}}_\varepsilon } = \left( {2{\alpha _k}\rho \left( {{\mathit{\boldsymbol{J}}_\varepsilon }} \right) - 1} \right){\mathit{\boldsymbol{x}}_\varepsilon }. \end{array} $ |
Based on Ek and the definition of [·], we know that
$ \begin{array}{l} \left[ {{{\mathit{\boldsymbol{\bar H}}}_{{\rm{RMS - BMMTOR}}}}} \right]{\mathit{\boldsymbol{x}}_\varepsilon } = \omega \sum\limits_{k = 1}^l {\left[ {{\mathit{\boldsymbol{E}}_k}} \right]\left( {2{\alpha _k}\rho \left( {{\mathit{\boldsymbol{J}}_\varepsilon }} \right) - 1} \right){\mathit{\boldsymbol{x}}_\varepsilon }} + \\ \;\;\;\;\;\;\;\left| {1 - \omega } \right|{\mathit{\boldsymbol{x}}_\varepsilon } \le \omega \left( {2{\alpha _k}{\rho _\varepsilon } - 1} \right){\mathit{\boldsymbol{x}}_\varepsilon } + \left| {1 - \omega } \right|{\mathit{\boldsymbol{x}}_\varepsilon } \le \\ \;\;\;\;\;\;\;\left( {\omega \rho ' + \left| {1 - \omega } \right|} \right){\mathit{\boldsymbol{x}}_\varepsilon } = {\theta _2}{\mathit{\boldsymbol{x}}_\varepsilon }\left( {\varepsilon \to {0^ + }} \right), \end{array} $ |
where θ2=ωρ′+|1-ω| < 1.
Subcase 2 αk≤βk and αk≤γk. Define
$ \begin{array}{l} \mathit{\boldsymbol{N}}_k^3 = \left( {\left| {1 - {\alpha _k}} \right| - {\alpha _k}} \right){\mathit{\boldsymbol{D}}_{\left\langle \mathit{\boldsymbol{H}} \right\rangle }} + \left( {\left| {{\alpha _k} - {\beta _k}} \right| + {\alpha _k}} \right)\left[ {{{\mathit{\boldsymbol{\bar L}}}_k}} \right] + \\ \;\;\;\;\;\;\left( {\left| {{\alpha _k} - {\gamma _k}} \right| + {\alpha _k}} \right)\left[ {{{\mathit{\boldsymbol{\bar F}}}_k}} \right] + 2{\alpha _k}\left[ {{{\mathit{\boldsymbol{\bar U}}}_k}} \right] + {\alpha _k}\left\langle \mathit{\boldsymbol{ \boldsymbol{\varOmega} }} \right\rangle = \\ \;\;\;\;\;\;{\mathit{\boldsymbol{M}}_k} - 2{\mathit{\boldsymbol{D}}_{\left\langle \mathit{\boldsymbol{H}} \right\rangle }} + 2{\beta _k}\left[ {{{\mathit{\boldsymbol{\bar L}}}_k}} \right] + 2{\gamma _k}\left[ {{{\mathit{\boldsymbol{\bar F}}}_k}} \right] + 2{\alpha _k}\left[ {{{\mathit{\boldsymbol{\bar U}}}_k}} \right] \le \\ \;\;\;\;\;\;{\mathit{\boldsymbol{M}}_k} - 2{\mathit{\boldsymbol{D}}_{\left\langle \mathit{\boldsymbol{H}} \right\rangle }} + 2{\delta _k}{\mathit{\boldsymbol{B}}_{\left\langle \mathit{\boldsymbol{H}} \right\rangle }}, \end{array} $ |
where δk=max{βk, γk}. So
$ \begin{array}{l} \left[ {{\mathit{\boldsymbol{H}}_{{\rm{RMS - BMMTOR}}}}} \right] \le \mathit{\boldsymbol{M}}_k^{ - 1}\left[ {{\mathit{\boldsymbol{M}}_k} - 2\left( {{\mathit{\boldsymbol{D}}_{\left\langle \mathit{\boldsymbol{H}} \right\rangle }} - {\delta _k}{\mathit{\boldsymbol{B}}_{\left\langle \mathit{\boldsymbol{H}} \right\rangle }}} \right)} \right] \le \\ \;\;\;\;\;\;\;\mathit{\boldsymbol{I}} - 2\mathit{\boldsymbol{M}}_k^{ - 1}{\mathit{\boldsymbol{D}}_{\left\langle \mathit{\boldsymbol{H}} \right\rangle }}\left( {\mathit{\boldsymbol{I}} - {\delta _k}{\mathit{\boldsymbol{D}}_{\left\langle \mathit{\boldsymbol{H}} \right\rangle }}^{ - 1}{\mathit{\boldsymbol{B}}_{\left\langle \mathit{\boldsymbol{H}} \right\rangle }}} \right). \end{array} $ |
Similar to Case 1, let e denote the vector e=(1, 1, …, 1)T∈Rn, and let xε>0 satisfy Jεxε=(J〈H〉+εeeT)xε=ρ(Jε)xε. Moreover, if ε>0 is small enough, we have ρε < 1 by continuity of the spectral radius. Since 0 < βk, γk < 1/μ1(PAQ) and δk=max{βk, γk}, we have
$ 2{\delta _k}\rho - 1 < 1\;{\rm{and}}\;{\rm{2}}{\delta _k}{\rho _\varepsilon } - 1 < 1. $ |
So
$ \begin{array}{l} \left[ {{\mathit{\boldsymbol{H}}_{{\rm{RMS - BMMTOR}}}}} \right] \le \mathit{\boldsymbol{I}} - 2\mathit{\boldsymbol{M}}_k^{ - 1}{\mathit{\boldsymbol{D}}_{\left\langle \mathit{\boldsymbol{H}} \right\rangle }}\left[ {\mathit{\boldsymbol{I}} - {\delta _k}\left( {\mathit{\boldsymbol{D}}_{\left\langle \mathit{\boldsymbol{H}} \right\rangle }^{ - 1}{\mathit{\boldsymbol{B}}_{\left\langle \mathit{\boldsymbol{H}} \right\rangle }} + } \right.} \right.\\ \;\;\;\;\;\;\;\left. {\left. {\varepsilon \mathit{\boldsymbol{e}}{\mathit{\boldsymbol{e}}^{\rm{T}}}} \right)} \right] = \mathit{\boldsymbol{I}} - 2\mathit{\boldsymbol{M}}_k^{ - 1}{\mathit{\boldsymbol{D}}_{\left\langle \mathit{\boldsymbol{H}} \right\rangle }}\left[ {\mathit{\boldsymbol{I}} - {\delta _k}{\mathit{\boldsymbol{J}}_\varepsilon }} \right]. \end{array} $ |
Multiplying both sides of the above inequality by xε and using Mk-1≥D〈H〉-1, we obtain
$ \begin{array}{l} \left[ {{\mathit{\boldsymbol{H}}_{{\rm{RMS - BMMTOR}}}}} \right]{\mathit{\boldsymbol{x}}_\varepsilon } \le {\mathit{\boldsymbol{x}}_\varepsilon } - 2\left[ {1 - {\delta _k}\rho \left( {{\mathit{\boldsymbol{J}}_\varepsilon }} \right)} \right]{\mathit{\boldsymbol{x}}_\varepsilon } = \\ \;\;\;\;\;\;\;\left( {2{\delta _k}\rho \left( {{\mathit{\boldsymbol{J}}_\varepsilon }} \right) - 1} \right){\mathit{\boldsymbol{x}}_\varepsilon }. \end{array} $ |
Based on Ek and the definition of [·], we know that
$ \begin{array}{l} \left[ {{{\mathit{\boldsymbol{\bar H}}}_{{\rm{RMS - BMMTOR}}}}} \right]{\mathit{\boldsymbol{x}}_\varepsilon } = \omega \sum\limits_{k = 1}^l {\left[ {{\mathit{\boldsymbol{E}}_k}} \right]\left( {2{\delta _k}\rho \left( {{\mathit{\boldsymbol{J}}_\varepsilon }} \right) - 1} \right){\mathit{\boldsymbol{x}}_\varepsilon }} + \\ \;\;\;\;\;\;\;\;\left| {1 - \omega } \right|{\mathit{\boldsymbol{x}}_\varepsilon } \le \omega \left( {2{\delta _k}{\rho _\varepsilon } - 1} \right){\mathit{\boldsymbol{x}}_\varepsilon } + \left| {1 - \omega } \right|{\mathit{\boldsymbol{x}}_\varepsilon } \le \\ \;\;\;\;\;\;\;\;\left( {\omega \rho ' + \left| {1 - \omega } \right|} \right){\mathit{\boldsymbol{x}}_\varepsilon } = {\theta _3}{\mathit{\boldsymbol{x}}_\varepsilon }\left( {\varepsilon \to {0^ + }} \right), \end{array} $ |
where θ3=ωρ′+|1-ω| < 1.
Remark 2 In practical applications, the parameters can be adjusted suitably so that the convergence of the method is substantially improved. That is to say, we have more choices for the splitting A=B-C that make the multisplitting iteration methods converge. Therefore, our convergence theory extends the scope of multisplitting iterative methods in applications.
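To make the role of the splitting concrete, the following sketch implements the basic single-splitting modulus-based iteration of [8] on a small example. It is only an illustration of how a splitting A = M − N enters the iteration, not the RMS-BMMTOR method of this paper; the Jacobi-type choice M = D, together with Ω = D and γ = 1, is an assumption made for this sketch.

```python
import numpy as np

# Basic modulus-based iteration of [8]: with A = M - N, solve
#   (M + Omega) x_{k+1} = N x_k + (Omega - A)|x_k| - gamma * q,
# then z = (|x| + x) / gamma solves LCP(q, A).
A = np.array([[4.0, -1.0, 0.0],
              [-1.0, 4.0, -1.0],
              [0.0, -1.0, 4.0]])   # a small strictly diagonally dominant M-matrix
q = np.array([-1.0, 2.0, -3.0])

D = np.diag(np.diag(A))            # Jacobi-type splitting: M = D (illustrative choice)
N = D - A                          # so that A = M - N
Omega = D.copy()                   # positive diagonal parameter matrix (illustrative)
gamma = 1.0

x = np.zeros(3)
for _ in range(200):
    rhs = N @ x + (Omega - A) @ np.abs(x) - gamma * q
    x = np.linalg.solve(D + Omega, rhs)

z = (np.abs(x) + x) / gamma        # recovered LCP solution
r = A @ z + q
assert np.all(z >= -1e-10) and np.all(r >= -1e-10)
assert abs(z @ r) < 1e-8           # complementarity z^T (Az + q) = 0
```

Here z ≥ 0 and r ≥ 0 hold by construction of the modulus transformation, and the complementarity residual vanishes at the fixed point; changing M (the splitting) changes only the speed of convergence, which is the point of the remark.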
Through a proof similar to that of Theorem 4, we can obtain the following convergence results.
Theorem 6 Let A∈Ln, I(n1, n2, …, np) be a block HBⅡ(P, Q)-matrix, with H∈ΩBⅡ(PAQ), and let (Mk, Nk, Ek)(k=1, 2, …, l) and (D-Lk, Uk, Ek)(k=1, 2, …, l) be a block multisplitting and a block triangular multisplitting of the block H-matrix, respectively. Assume that γ>0 and the positive matrix Ω satisfies Ω≥D(H) and diag(Ω)=diag(D(H)). If H=D-Lk-Fk-Uk(k=1, 2, …, l) satisfies 〈〈H〉〉=I-[D-1Lk]-[D-1Fk]-[D-1Uk]=I-B〈〈H〉〉(k=1, 2, …, l), then the iteration sequence {z(m)}m=0∞ generated by the GRMS-BMMTOR iteration method converges to the unique solution z* of LCP(q, A) for any initial vector z(0)∈R+n, provided the relaxation parameters αk, βk and γk satisfy
$ \begin{array}{c} 0 < {\beta _k},{\gamma _k} \le {\alpha _k} \le 1,0 < \omega < \frac{2}{{1 + \rho '}}{\rm{or}}\\ 0 < {\beta _k},{\gamma _k} < \frac{1}{{{\mu _2}\left( {\mathit{\boldsymbol{PAQ}}} \right)}},1 < {\alpha _k} < \frac{1}{{{\mu _2}\left( {\mathit{\boldsymbol{PAQ}}} \right)}},0 < \omega < \frac{2}{{1 + \rho '}}, \end{array} $ | (7) |
where μ2(PAQ)=ρ(D〈〈H〉〉-1B〈〈H〉〉)=ρ(J〈〈H〉〉), ρ′=
Remark 3 From Table 1, we see that the MSMMAOR method in [30] and the MSBMAOR method in [23] use the same parameters α, β in different processors, whereas the RMS-BMMTOR method in this paper uses different parameters αk, βk, γk(k=1, 2, …, l) in different processors. Moreover, when computing x(m+1) in Method 2, we utilize a relaxation extrapolation technique and add a relaxation parameter. Therefore, we may choose proper relaxation parameters to increase the convergence speed and reduce the computation time in numerical experiments. In the RMS-BMMTOR method, we may also choose proper Ek to balance the load of each processor and avoid synchronization overhead.
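The load-balancing role of the weighting matrices Ek can be sketched as follows; the ownership pattern, dimensions, and local iterates below are illustrative assumptions only, not data from the paper.

```python
import numpy as np

# Relaxed synchronous combination step (illustrative): nonnegative diagonal
# weights E_k with sum_k E_k = I let each processor contribute only "its"
# components, and omega relaxes the combined update.
n, l = 6, 3
# disjoint ownership: processor k owns components 2k and 2k+1
E = [np.diag([1.0 if j // 2 == k else 0.0 for j in range(n)]) for k in range(l)]
assert np.allclose(sum(E), np.eye(n))   # weighting matrices sum to the identity

omega = 0.9
x_old = np.zeros(n)
# x_loc[k]: iterate produced by processor k (placeholder values here)
x_loc = [np.full(n, float(k + 1)) for k in range(l)]

# relaxed synchronous update: x_new = omega * sum_k E_k x_loc[k] + (1 - omega) * x_old
x_new = omega * sum(Ek @ xk for Ek, xk in zip(E, x_loc)) + (1 - omega) * x_old
```

Because each Ek is diagonal and the supports are disjoint, every processor updates only its own block of components; unequal block sizes can then be chosen to balance the per-processor work.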
Table 1 The modulus-based synchronous (block) multisplitting multi-parameters methods and the corresponding convergence results |
This manuscript discusses relaxed modulus-based synchronous block multisplitting multi-parameters methods for linear complementarity problems. We have established new convergence theorems under certain conditions. Table 1 shows that some existing methods for linear complementarity problems are special cases of the new methods with appropriate parameters. Moreover, the new convergence theorems are more general and practical.
[1] | COTTLE R W, PANG J S, STONE R E. The Linear Complementarity Problem[M]. San Diego: Academic Press, 1992. |
[2] | FERRIS M C, PANG J S. Engineering and economic applications of complementarity problems[J]. SIAM Review, 1997, 39(4): 669–713. DOI:10.1137/S0036144595285963 |
[3] | BAI Z Z. On the convergence of the multisplitting methods for the linear complementarity problem[J]. SIAM Journal on Matrix Analysis and Applications, 1999, 21(1): 67–78. |
[4] | BAI Z Z. The convergence of parallel iteration algorithms for linear complementarity problems[J]. Computers and Mathematics with Applications, 1996, 32(9): 1–17. DOI:10.1016/0898-1221(96)00172-1 |
[5] | BAI Z Z, EVANS D J. Matrix multisplitting relaxation methods for linear complementarity problems[J]. International Journal of Computer Mathematics, 1997, 63(3/4): 309–326. |
[6] | BAI Z Z. On the monotone convergence of matrix multisplitting relaxation methods for the linear complementarity problem[J]. IMA Journal of Numerical Analysis, 1998, 18(4): 509–518. DOI:10.1093/imanum/18.4.509 |
[7] | BAI Z Z, EVANS D J. Matrix multisplitting methods with applications to linear complementarity problems: Parallel synchronous and chaotic methods[J]. Réseaux et Systèmes Répartis: Calculateurs Paralleles, 2001, 13: 125–154. |
[8] | BAI Z Z. Modulus-based matrix splitting iteration methods for linear complementarity problems[J]. Numerical Linear Algebra with Applications, 2010, 17(6): 917–933. DOI:10.1002/nla.v17.6 |
[9] | BAI Z Z, ZHANG L L. Modulus-based synchronous two-stage multisplitting iteration methods for linear complementarity problems[J]. Numerical Algorithms, 2013, 62(1): 59–77. DOI:10.1007/s11075-012-9566-x |
[10] | LEENAERTS D, BOKHOVEN W M G D. Piecewise-Linear Modelling and Analysis[M]. New York: Springer, 1998. |
[11] | DONG J L, JIANG M Q. A modified modulus method for symmetric positive-definite linear complementarity problems[J]. Numerical Linear Algebra with Applications, 2009, 16: 129–143. DOI:10.1002/nla.v16:2 |
[12] | HADJIDIMOS A, TZOUMAS M. Nonstationary extrapolated modulus algorithms for the solution of the linear complementarity problem[J]. Linear Algebra and Its Applications, 2009, 431(1/2): 197–210. |
[13] | ZHANG L L, REN Z R. Improved convergence theorems of modulus-based matrix splitting iteration methods for linear complementarity problems[J]. Applied Mathematics Letters, 2013, 26: 638–642. DOI:10.1016/j.aml.2013.01.001 |
[14] | ZHANG L T, HUANG T Z, CHENG S H, et al. The weaker convergence of non-stationary matrix multisplitting methods for almost linear systems[J]. Taiwanese Journal of Mathematics, 2011, 15(4): 1423–1436. DOI:10.11650/twjm/1500406354 |
[15] | ZHANG L T, LI J T. The weaker convergence of modulus-based synchronous multisplitting multi-parameters methods for linear complementarity problems[J]. Computers and Mathematics with Application, 2014, 67(10): 1954–1959. DOI:10.1016/j.camwa.2014.04.018 |
[16] | ZHANG L T, HUANG T Z, GU T X. Global relaxed non-stationary multisplitting multi-parameters methods[J]. International Journal of Computer Mathematics, 2008, 85(2): 211–224. DOI:10.1080/00207160701405451 |
[17] | ZHANG L T, HUANG T Z, GU T X, et al. Convergence of relaxed multisplitting USAOR method for an H-matrix[J]. Applied Mathematics and Computation, 2008, 202(1): 121–132. DOI:10.1016/j.amc.2008.01.034 |
[18] | ZHANG L T, HUANG T Z, GU T X, et al. Convergent improvement of SSOR multisplitting method[J]. Journal of Computational and Applied Mathematics, 2009, 225(2): 393–397. DOI:10.1016/j.cam.2008.07.051 |
[19] | ZHANG L T, HUANG T Z, CHENG S H, et al. A note on parallel multisplitting TOR method for an H-matrix[J]. International Journal of Computer Mathematics, 2011, 88(3): 501–507. DOI:10.1080/00207160903501917 |
[20] | ZHANG L T, ZUO X Y, GU T X, et al. Improved convergence theorems of multisplitting methods for the linear complementarity problem[J]. Applied Mathematics and Computation, 2014, 243: 982–987. DOI:10.1016/j.amc.2014.06.038 |
[21] | ZHANG L T, LI J W, CHEN S H, et al. Convergence of relaxed matrix multisplitting chaotic methods for H-matrices[J]. Journal of Applied Mathematics, 2014, Article ID 594185. |
[22] | ZHANG L T, ZHOU Y M, GU T X, et al. Convergence improvement of relaxed multisplitting USAOR methods for H-matrices linear systems[J]. Applied Mathematics and Computation, 2014, 247(C): 225–232. |
[23] | LI W. A general modulus-based matrix splitting method for linear complementarity problems of H-matrices[J]. Applied Mathematics Letters, 2013, 26(12): 1159–1164. |
[24] | BAI Z Z. Parallel matrix multisplitting block relaxation iteration methods[J]. Mathematica Numerica Sinica, 1995, 17(3): 238–252. |
[25] | LI Y, WANG X, SUN C C. Convergence analysis of linear complementarity problems based on synchronous block multisplitting iteration methods[J]. Journal of Nanchang University (Natural Science), 2013, 37(4): 307–312. |
[26] | BAI Z Z, ZHANG L L. Modulus-based synchronous multisplitting iteration methods for linear complementarity problems[J]. Numerical Linear Algebra with Applications, 2013, 20(3): 425–439. |
[27] | SONG Y Z. Convergence of block AOR iterative methods[J]. Mathematica Applicata, 1993(1): 39–45. |
[28] | FROMMER A, MAYER G. Convergence of relaxed parallel multisplitting methods[J]. Linear Algebra and Its Applications, 1989, 119: 141–152. DOI:10.1016/0024-3795(89)90074-8 |
[29] | YOUNG D M. Iterative Solution of Large Linear Systems[M]. New York: Academic Press, 1972. |
[30] | BAI Z Z, ZHANG L L. Modulus-based multigrid methods for linear complementarity problems[J]. Numerical Linear Algebra with Applications, 2017, 24(6). |
[31] | ZHANG L T, ZHANG Y X, GU T X, et al. New convergence of modulus-based synchronous block multisplitting multi-parameters methods for linear complementarity problems[J]. Computational and Applied Mathematics, 2017, 36(1): 481–492. DOI:10.1007/s40314-015-0238-z |