Journal of Zhejiang University (Science Edition), 2018, 45(6): 685-693. DOI: 10.3785/j.issn.1008-9497.2018.06.007
New modulus-based synchronous multisplitting methods based on block splitting for linear complementarity problems
ZHANG Litao1,2,3 , ZHANG Guohui4 , ZHAO Yinchao5     
1. College of Science, Zhengzhou University of Aeronautics, Zhengzhou 450015, China;
2. College of Mathematics and Information Science, Henan Normal University, Xinxiang 453007, Henan Province, China;
3. Henan Province Synergy Innovation Center of Aviation Economic Development, Zhengzhou 450015, China;
4. School of Management Engineering, Zhengzhou University of Aeronautics, Zhengzhou 450015, China;
5. Department of Public Art Teaching, Zhengzhou University of Aeronautics, Zhengzhou 450015, China
Received Date: August 4, 2017
Foundation item: Supported by the National Natural Science Foundation of China (11226337, 11501525), Excellent Youth Foundation of Science & Technology Innovation of Henan Province (184100510001, 184100510004), Science & Technology Innovation Talents in Universities of Henan Province (16HASTIT040, 17HASTIT012), Aeronautical Science Foundation of China (2016ZG55019, 2017ZD55014)
About the author: ZHANG Litao (1980-), ORCID: http://orcid.org/0000-0002-6087-8611, male, Ph.D., associate professor; his fields of interest are numerical algebra and scientific computing. E-mail: litaozhang@163.com
Abstract: Based on existing algorithms, we further study relaxed modulus-based synchronous block multisplitting multi-parameters methods for the linear complementarity problem. Furthermore, we establish convergence results for the new method under weaker conditions when the system matrix is a block H+-matrix. These results provide a guarantee for choosing the optimal relaxation parameters.
Key Words: relaxed modulus-based method    linear complementarity problem    block splitting    block H+-matrix    synchronous multisplitting    
0 Introduction

Consider the linear complementarity problem, abbreviated as LCP(q, A), of finding a pair of real vectors r and z∈R^n such that

$ \mathit{\boldsymbol{r}}: = \mathit{\boldsymbol{Az}} + \mathit{\boldsymbol{q}} \ge 0,\mathit{\boldsymbol{z}} \ge 0\;{\rm{and}}\;{\mathit{\boldsymbol{z}}^{\rm{T}}}\left( {\mathit{\boldsymbol{Az}} + \mathit{\boldsymbol{q}}} \right) = 0, $ (1)

where A=(a_ij)∈R^{n×n} is a given large, sparse real matrix and q=(q_1, q_2, …, q_n)^T∈R^n is a given real vector. Here, z^T and ≥ denote the transpose of the vector z and the componentwise partial ordering between two vectors, respectively.
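As a quick numerical illustration (not part of the original paper), the following Python sketch checks the three conditions in (1) for a candidate pair; the function name and the tolerance handling are our own choices.

```python
import numpy as np

def is_lcp_solution(A, q, z, tol=1e-10):
    """Check the conditions in (1): r = Az + q >= 0, z >= 0 and z^T r = 0."""
    r = A @ z + q
    return bool(np.all(r >= -tol) and np.all(z >= -tol) and abs(z @ r) <= tol)

# a tiny example: for A = I and q = (-1, 2)^T the solution is z = (1, 0)^T
A = np.eye(2)
q = np.array([-1.0, 2.0])
print(is_lcp_solution(A, q, np.array([1.0, 0.0])))   # True
print(is_lcp_solution(A, q, np.array([0.0, 0.0])))   # False, since r_1 = -1 < 0
```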

Many problems in scientific computing and engineering applications lead to LCPs of the form (1). For example, the linear complementarity problem arises in applications such as convex quadratic programming, the Nash equilibrium point of the bimatrix game, and free boundary problems of fluid dynamics[1-2]. Solvers for LCP(q, A) with a special matrix A have been proposed in [3-12]. Recently, much research has focused on solving LCP(q, A) via an equivalent algebraic equation [8-22]. In particular, BAI et al[8-9] proposed modulus-based matrix multisplitting iteration methods for solving LCP(q, A) and presented convergence analysis for the proposed methods. ZHANG et al[13] extended the condition of a compatible H-splitting to that of an H-splitting. LI[23] extended the modulus-based matrix splitting iteration method to more general cases. BAI[24] presented parallel matrix block multisplitting relaxation iteration methods and established the convergence theory of these new methods in a thorough manner. LI et al[25] studied synchronous block multisplitting iteration methods. ZHANG et al[15] generalized the methods of BAI et al[26] and studied modulus-based synchronous multisplitting multi-parameters methods for the linear complementarity problem.

In this paper, we generalize the corresponding methods of BAI et al[26] and ZHANG et al[15] from point form to block form within the framework of modulus-based synchronous multisplitting iteration methods, and consider a relaxed modulus-based synchronous multisplitting multi-parameters method based on block splitting for solving LCP(q, A). Moreover, we give some theoretical analysis and improve some existing convergence results in [25-26].

The rest of this paper is organized as follows: In section 1, we give some notations and lemmas. In section 2, we propose the relaxed modulus-based synchronous block multisplitting multi-parameters method for solving LCP(q, A). In section 3, we give the convergence analysis of the proposed method.

1 Notations and lemmas

In order to study modulus-based synchronous block multisplitting iteration methods for solving LCP(q, A), let us introduce some definitions and lemmas.

A matrix A=(a_ij) is called an M-matrix if a_ij≤0 for i≠j and A^{-1}≥0. The comparison matrix 〈A〉=(α_ij) of a matrix A=(a_ij) is defined by: α_ij=|a_ij|, if i=j; α_ij=-|a_ij|, if i≠j. A is called an H-matrix if 〈A〉 is an M-matrix, and an H+-matrix if it is an H-matrix with positive diagonal entries[20]. Let ρ(A) denote the spectral radius of A. A representation A=M-N is called a splitting of A when M is nonsingular. Let A and B be M-matrices; if A≤B, then A^{-1}≥B^{-1}. Finally, we define R_+^n={x | x≥0, x∈R^n}.
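The following Python sketch (an illustration added here, not the authors' code) computes the comparison matrix 〈A〉 and tests the H+-matrix property through the criterion of lemma 5; the helper names are our own.

```python
import numpy as np

def comparison_matrix(A):
    """Comparison matrix <A>: |a_ii| on the diagonal, -|a_ij| off the diagonal."""
    C = -np.abs(A)
    np.fill_diagonal(C, np.abs(np.diag(A)))
    return C

def is_M_matrix(A, tol=1e-12):
    """Lemma 5 criterion for a matrix with nonpositive off-diagonal entries:
    positive diagonal and rho(D^{-1}C) < 1, where D = diag(A), C = D - A."""
    d = np.diag(A)
    if np.any(d <= 0):
        return False
    J = (np.diag(d) - A) / d[:, None]          # D^{-1}(D - A)
    return np.max(np.abs(np.linalg.eigvals(J))) < 1 - tol

def is_H_plus_matrix(A):
    """A is an H+-matrix if <A> is an M-matrix and diag(A) > 0."""
    return bool(np.all(np.diag(A) > 0) and is_M_matrix(comparison_matrix(A)))

A = np.array([[4.0, -1.0, 0.0],
              [-2.0, 5.0, -1.0],
              [0.0, -1.0, 3.0]])
print(is_H_plus_matrix(A))                      # True: strictly diagonally dominant
```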

Definition 1[24]    Define the sets:

(1) L_{n, I}(n_1, n_2, …, n_p)={A=(A_ij)∈L_n(n_1, n_2, …, n_p) | A_ii∈L(R^{n_i}) is nonsingular (i=1, 2, …, p)};

(2) L_{n, Id}(n_1, n_2, …, n_p)={A=diag(A_11, A_22, …, A_pp) | A_ii∈L(R^{n_i}) is nonsingular (i=1, 2, …, p)}.

Definition 2[27]    Let M∈L_{n, I}(n_1, n_2, …, n_p). The (Ⅰ)-type block comparison matrix 〈M〉=(〈M〉_ij)∈L(R^n) and the (Ⅱ)-type block comparison matrix 〈〈M〉〉=(〈〈M〉〉_ij)∈L(R^n) are defined as

$ {\left\langle \boldsymbol{M} \right\rangle _{ij}} = \left\{ \begin{array}{ll} {\left\| {\boldsymbol{M}_{ii}^{ - 1}} \right\|^{ - 1}}, & i = j\\ - \left\| {{\boldsymbol{M}_{ij}}} \right\|, & i \ne j \end{array} \right.\quad i,j = 1,2, \cdots ,p, $
$ {\left\langle {\left\langle \boldsymbol{M} \right\rangle } \right\rangle _{ij}} = \left\{ \begin{array}{ll} 1, & i = j\\ - \left\| {\boldsymbol{M}_{ii}^{ - 1}{\boldsymbol{M}_{ij}}} \right\|, & i \ne j \end{array} \right.\quad i,j = 1,2, \cdots ,p, $

respectively.

Moreover, for block matrices A∈L_{n, I}(n_1, n_2, …, n_p) and L∈L_{n, I}(n_1, n_2, …, n_p), let D(L)=diag(L_11, L_22, …, L_pp), B(L)=D(L)-L, J(A)=D(A)^{-1}B(A), μ_1(A)=ρ(J_〈A〉) and μ_2(A)=ρ(I-〈〈A〉〉). Using definition 2, we easily verify

$ \left\langle {\boldsymbol{I} - \boldsymbol{J}\left( \boldsymbol{A} \right)} \right\rangle = \left\langle {\left\langle {\boldsymbol{I} - \boldsymbol{J}\left( \boldsymbol{A} \right)} \right\rangle } \right\rangle = \left\langle {\left\langle \boldsymbol{A} \right\rangle } \right\rangle ,\quad {\mu _2}\left( \boldsymbol{A} \right) \le {\mu _1}\left( \boldsymbol{A} \right). $
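To make definition 2 concrete, here is a small Python sketch (ours, not from the paper) that forms the (Ⅰ)-type and (Ⅱ)-type block comparison matrices with the spectral norm (any consistent norm with ‖I‖=1 would do) and evaluates μ_1(A)=ρ(J_〈A〉) and μ_2(A)=ρ(I-〈〈A〉〉).

```python
import numpy as np

def block_slices(sizes):
    """Index slices of a block partition n = n_1 + ... + n_p."""
    offs = np.cumsum([0] + list(sizes))
    return [slice(offs[i], offs[i + 1]) for i in range(len(sizes))]

def block_comparison_matrices(A, sizes):
    """(I)-type <A> and (II)-type <<A>> of definition 2 as p x p arrays."""
    s, p = block_slices(sizes), len(sizes)
    C1, C2 = np.zeros((p, p)), np.eye(p)
    for i in range(p):
        Aii_inv = np.linalg.inv(A[s[i], s[i]])
        C1[i, i] = 1.0 / np.linalg.norm(Aii_inv, 2)      # ||A_ii^{-1}||^{-1}
        for j in range(p):
            if i != j:
                C1[i, j] = -np.linalg.norm(A[s[i], s[j]], 2)
                C2[i, j] = -np.linalg.norm(Aii_inv @ A[s[i], s[j]], 2)
    return C1, C2

def mu1_mu2(A, sizes):
    """mu_1(A) = rho(D_<A>^{-1} B_<A>) and mu_2(A) = rho(I - <<A>>)."""
    C1, C2 = block_comparison_matrices(A, sizes)
    rho = lambda M: np.max(np.abs(np.linalg.eigvals(M)))
    D1 = np.diag(np.diag(C1))
    return rho(np.linalg.solve(D1, D1 - C1)), rho(np.eye(len(sizes)) - C2)

A = np.array([[4.0, -1.0, 0.5, 0.0],
              [-1.0, 4.0, 0.0, 0.5],
              [0.5, 0.0, 3.0, -1.0],
              [0.0, 0.5, -1.0, 3.0]])
print(mu1_mu2(A, [2, 2]))     # both values are < 1 for this block H-matrix
```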

Definition 3[27]    Let A∈L_{n, I}(n_1, n_2, …, n_p). If there exist P, Q∈L_{n, Id}(n_1, n_2, …, n_p) such that 〈PAQ〉 is an M-matrix, then A is called a (Ⅰ)-type block H-matrix (H_B^{(Ⅰ)}(P, Q)-matrix) with respect to the nonsingular block matrices P, Q; if 〈〈PAQ〉〉 is an M-matrix, then A is called a (Ⅱ)-type block H-matrix (H_B^{(Ⅱ)}(P, Q)-matrix) with respect to the nonsingular block matrices P, Q.

Definition 4[27]    Let A∈L_{n, I}(n_1, n_2, …, n_p). Then [A]=(‖A_ij‖)∈L(R^n) is called the block absolute value of the block matrix A. Similarly, we may define the block absolute value of a block vector x∈V_n(n_1, n_2, …, n_p) as [x].

Lemma 1[24]    Let A, B∈L_{n, I}(n_1, n_2, …, n_p), x, y∈V_n(n_1, n_2, …, n_p) and γ∈R^1. Then

(1) |[A]-[B]|≤[A+B]≤[A]+[B] (|[x]-[y]|≤[x+y]≤[x]+[y]);

(2) [AB]≤[A][B] ([Ax]≤[A][x]);

(3) [γA]≤|γ|[A] ([γx]≤|γ|[x]);

(4) ρ(A)≤ρ(|A|)≤ρ([A]).

Lemma 2[24]    Let A∈L_{n, I}(n_1, n_2, …, n_p) be an H_B^{(Ⅰ)}(P, Q)-matrix. Then

(1) A is nonsingular;

(2) [(PAQ)^{-1}]≤〈PAQ〉^{-1};

(3) μ1(PAQ) < 1.

Lemma 3[24]    Let A∈L_{n, I}(n_1, n_2, …, n_p) be an H_B^{(Ⅱ)}(P, Q)-matrix. Then

(1) A is nonsingular;

(2) [(PAQ)^{-1}]≤〈〈PAQ〉〉^{-1}[D(PAQ)^{-1}];

(3) μ2(PAQ) < 1.

Definition 5[24]    Define the sets:

(1) Ω_B^{Ⅰ}(M)={F=(F_ij)∈L_{n, I}(n_1, n_2, …, n_p) | ‖F_ii^{-1}‖=‖M_ii^{-1}‖, ‖F_ij‖=‖M_ij‖ (i≠j), i, j=1, 2, …, p};

(2) Ω_B^{Ⅱ}(M)={F=(F_ij)∈L_{n, I}(n_1, n_2, …, n_p) | ‖F_ii^{-1}F_ij‖=‖M_ii^{-1}M_ij‖, i, j=1, 2, …, p}.

They denote the (Ⅰ)-type and (Ⅱ)-type same-mode sets associated with the matrix M=(M_ij)∈L_{n, I}(n_1, n_2, …, n_p), respectively.

Lemma 4[28]     Let A be an H-matrix. Then A is nonsingular and |A^{-1}|≤〈A〉^{-1}.

Lemma 5[29]    Let A=(a_ij)∈Z^{n×n} have all positive diagonal entries. Then A is an M-matrix if and only if ρ(B) < 1, where B=D^{-1}C, D=diag(A) and A=D-C.

Lemma 6[5]    Let A∈R^{n×n} be an H+-matrix. Then LCP(q, A) has a unique solution for any q∈R^n.

Lemma 7[8]    Let A=M-N be a splitting of the matrix A∈R^{n×n}, Ω be a positive diagonal matrix, and γ be a positive constant. Then, for the LCP(q, A), the following statements hold true:

(ⅰ) If (z, r) is a solution of the LCP(q, A), then $x = \frac{1}{2}\gamma (z - {\mathit{\boldsymbol{ \boldsymbol{\varOmega} }}^{ - 1}}r)$ satisfies the implicit fixed-point equation

$ \left( {\mathit{\boldsymbol{ \boldsymbol{\varOmega} }} + \mathit{\boldsymbol{M}}} \right)\mathit{\boldsymbol{x}} = \mathit{\boldsymbol{Nx}} + \left( {\mathit{\boldsymbol{ \boldsymbol{\varOmega} }} - \mathit{\boldsymbol{A}}} \right)\left| \mathit{\boldsymbol{x}} \right| - \gamma q; $ (2)

(ⅱ) If x satisfies the implicit fixed-point equation (2), then

$ \mathit{\boldsymbol{z}} = {\gamma ^{ - 1}}\left( {\left| \mathit{\boldsymbol{x}} \right| + \mathit{\boldsymbol{x}}} \right),r = {\gamma ^{ - 1}}\mathit{\boldsymbol{ \boldsymbol{\varOmega} }}\left( {\left| \mathit{\boldsymbol{x}} \right| - \mathit{\boldsymbol{x}}} \right) $ (3)

is a solution of the LCP(q, A).
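The equivalence in lemma 7 already yields a basic solver. The sketch below (ours; it is not the method proposed in this paper) iterates the implicit fixed-point equation (2) for one fixed splitting, using the simple Jacobi-type choice M=diag(A), N=M-A and Ω=diag(A), and then recovers (z, r) from (3).

```python
import numpy as np

def modulus_splitting_lcp(A, q, gamma=1.0, max_it=500, tol=1e-10):
    """Illustrative single-splitting modulus iteration based on lemma 7:
    (Omega + M) x_new = N x + (Omega - A)|x| - gamma*q,
    then z = (|x| + x)/gamma and r = Omega(|x| - x)/gamma."""
    Omega = np.diag(np.diag(A))     # positive diagonal matrix, here Omega = diag(A)
    M = np.diag(np.diag(A))         # Jacobi-type splitting A = M - N (our choice)
    N = M - A
    x = np.zeros(len(q))
    for _ in range(max_it):
        x_new = np.linalg.solve(Omega + M, N @ x + (Omega - A) @ np.abs(x) - gamma * q)
        if np.linalg.norm(x_new - x, np.inf) < tol:
            x = x_new
            break
        x = x_new
    return (np.abs(x) + x) / gamma, Omega @ (np.abs(x) - x) / gamma

A = np.array([[4.0, -1.0, 0.0],
              [-1.0, 4.0, -1.0],
              [0.0, -1.0, 4.0]])
q = np.array([-1.0, 2.0, -3.0])
z, r = modulus_splitting_lcp(A, q)
print(z, r, z @ r)      # z >= 0, r = Az + q >= 0, z^T r ~ 0
```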

2 RMS-BMMTOR methods

First, we introduce the concept of a multisplitting and the detailed process of the parallel iterative method.

{M_k, N_k, E_k}_{k=1}^l is a multisplitting of the block matrix A if

1) A=M_k-N_k with det(M_k)≠0 is a splitting for k=1, 2, …, l;

2) E_k=diag(E_11^(k), E_22^(k), …, E_pp^(k)), k=1, 2, …, l, and $\sum\limits_{k = 1}^l {\left\| {\boldsymbol{E}_{ii}^{(k)}} \right\|} = 1$, i=1, 2, …, p, where the block matrices M_k, N_k, E_k∈L_n(n_1, n_2, …, n_p), and ‖·‖ denotes a consistent matrix norm satisfying ‖I‖=1 (I∈L(R^m) is the identity matrix); a construction of such weighting matrices is sketched below.
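As a concrete (hypothetical) construction satisfying condition 2), one can take each E_k block diagonal with nonnegative scalar blocks whose per-block weights sum to one across k:

```python
import numpy as np

def block_weighting_matrices(sizes, w):
    """E_k = diag(w[k,1]*I_{n_1}, ..., w[k,p]*I_{n_p}); for every block i,
    sum_k ||E_ii^(k)|| = sum_k w[k,i] = 1 (w is l x p, nonnegative,
    with columns summing to one)."""
    w = np.asarray(w, dtype=float)
    assert np.all(w >= 0) and np.allclose(w.sum(axis=0), 1.0)
    return [np.diag(np.concatenate([np.full(n_i, w[k, i])
                                    for i, n_i in enumerate(sizes)]))
            for k in range(w.shape[0])]

# l = 2 processors, p = 3 blocks of sizes 2, 2, 1
E1, E2 = block_weighting_matrices([2, 2, 1], [[0.6, 0.5, 1.0],
                                              [0.4, 0.5, 0.0]])
print(np.allclose(E1 + E2, np.eye(5)))   # True
```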

Given a positive diagonal matrix Ω and a positive constant γ, from lemma 7 we know that if x satisfies any of the implicit fixed-point equations

$ \left( \boldsymbol{\Omega} + {\boldsymbol{M}_k} \right)\boldsymbol{x} = {\boldsymbol{N}_k}\boldsymbol{x} + \left( \boldsymbol{\Omega} - \boldsymbol{A} \right)\left| \boldsymbol{x} \right| - \gamma \boldsymbol{q},\quad k = 1,2, \cdots ,l, $ (4)

then

$ \mathit{\boldsymbol{z}} = {\gamma ^{ - 1}}\left( {\left| \mathit{\boldsymbol{x}} \right| + \mathit{\boldsymbol{x}}} \right),\mathit{\boldsymbol{r}} = {\gamma ^{ - 1}}\mathit{\boldsymbol{ \boldsymbol{\varOmega} }}\left( {\left| \mathit{\boldsymbol{x}} \right| - \mathit{\boldsymbol{x}}} \right) $ (5)

is a solution of the LCP(q, A).

For a block matrix A∈R^{m×m}, let D=diag(A_11, A_22, …, A_pp) be the corresponding block diagonal matrix, let L_k, F_k be block strictly lower triangular matrices, and let U_k=D-L_k-F_k-A; then (D-L_k-F_k, U_k, E_k) is a multisplitting of the block matrix A∈R^{m×m}. Using the equivalent reformulations (4) and (5) of the LCP(q, A) and the TOR method[16], we can establish the following relaxed modulus-based synchronous block multisplitting multi-parameters TOR method (RMS-BMMTOR), which is similar to method 3.1 in [2] and method 3.1 in [15].

Method 1(The RMS-BMMTOR method for LCP(q, A))

Let {M_k, N_k, E_k}_{k=1}^l be a multisplitting of the system matrix A∈R^{n×n}. Given an initial vector x^(0)∈R^n, for m=0, 1, …, until the iteration sequence {z^(m)}_{m=0}^∞⊂R_+^n is convergent, compute z^(m+1)∈R_+^n by

$ {\mathit{\boldsymbol{z}}^{\left( {m + 1} \right)}} = \frac{1}{\gamma }\left( {\left| {{\mathit{\boldsymbol{x}}^{\left( {m + 1} \right)}}} \right| + {\mathit{\boldsymbol{x}}^{\left( {m + 1} \right)}}} \right) $

and x^(m+1)∈R^n according to

$ {\mathit{\boldsymbol{x}}^{\left( {m + 1} \right)}} = \omega \sum\limits_{k = 1}^l {{\mathit{\boldsymbol{E}}_k}{\mathit{\boldsymbol{x}}^{\left( {m,k} \right)}}} + \left( {1 - \mathit{\boldsymbol{\omega }}} \right){\mathit{\boldsymbol{x}}^{\left( m \right)}}, $

where x(m, k), k=1, 2, …, l are obtained by solving the linear systems

$ \begin{array}{l} \left( {{\alpha _k}\mathit{\boldsymbol{ \boldsymbol{\varOmega} }} + \mathit{\boldsymbol{D}} - \left( {{\beta _k}{\mathit{\boldsymbol{L}}_k} + {\gamma _k}{\mathit{\boldsymbol{F}}_k}} \right)} \right){x^{\left( {m,k} \right)}} = \left[ {\left( {1 - {\alpha _k}} \right)\mathit{\boldsymbol{D + }}} \right.\\ \;\;\;\;\;\;\;\;\left. {\left( {{\alpha _k} - {\beta _k}} \right){\mathit{\boldsymbol{L}}_k} + \left( {{\alpha _k} - {\gamma _k}} \right){\mathit{\boldsymbol{F}}_k} + {\alpha _k}{\mathit{\boldsymbol{U}}_k}} \right]{x^{\left( m \right)}} + \\ \;\;\;\;\;\;\;\;{\alpha _k}\left[ {\left( {\mathit{\boldsymbol{ \boldsymbol{\varOmega} }} - \mathit{\boldsymbol{A}}} \right)\left| {{x^{\left( m \right)}}} \right| - \gamma q} \right],\;\;\;\;\;\;k = 1,2, \cdots ,l, \end{array} $ (6)

respectively.
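For illustration, one sweep of method 1 can be written directly from (6) and the weighted relaxation update; the following Python sketch is our own reading of the method (the splitting choice, weights and parameter values in the example are hypothetical).

```python
import numpy as np

def rms_bmmtor_sweep(A, q, x, Omega, splittings, E, alpha, beta, gamma_k,
                     omega, gamma=1.0):
    """One RMS-BMMTOR sweep (illustrative sketch, not the authors' code).
    `splittings` is a list of (D, L_k, F_k, U_k) with A = D - L_k - F_k - U_k;
    E is the list of weighting matrices E_k."""
    x_local = []
    for k, (D, Lk, Fk, Uk) in enumerate(splittings):
        Mk = alpha[k] * Omega + D - (beta[k] * Lk + gamma_k[k] * Fk)
        rhs = ((1 - alpha[k]) * D + (alpha[k] - beta[k]) * Lk
               + (alpha[k] - gamma_k[k]) * Fk + alpha[k] * Uk) @ x \
              + alpha[k] * ((Omega - A) @ np.abs(x) - gamma * q)
        x_local.append(np.linalg.solve(Mk, rhs))        # solve (6) on processor k
    x_new = omega * sum(Ek @ xk for Ek, xk in zip(E, x_local)) + (1 - omega) * x
    return x_new, (np.abs(x_new) + x_new) / gamma       # x^(m+1), z^(m+1)

A = np.array([[4.0, -1.0, 0.0],
              [-1.0, 4.0, -1.0],
              [0.0, -1.0, 4.0]])
q = np.array([-1.0, 2.0, -3.0])
D = np.diag(np.diag(A))
Lower, Upper = -np.tril(A, -1), -np.triu(A, 1)          # A = D - Lower - Upper
splittings = [(D, 0.5 * Lower, 0.5 * Lower, Upper),     # two TOR-type splittings
              (D, Lower, np.zeros_like(A), Upper)]
E = [0.5 * np.eye(3), 0.5 * np.eye(3)]
x = np.zeros(3)
for _ in range(60):
    x, z = rms_bmmtor_sweep(A, q, x, D, splittings, E,
                            alpha=[1.0, 1.0], beta=[1.0, 0.8],
                            gamma_k=[0.9, 1.0], omega=1.0)
print(z, z @ (A @ z + q))                               # complementarity ~ 0
```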

Remark 1    In method 1, when the coefficient matrix A is in point form and α_k=α, β_k=β, γ_k=γ, ω=1, the RMS-BMMTOR method reduces to the MSMTOR method; when the coefficient matrix A is in point form and α_k=α, β_k=γ_k=β, ω=1, it reduces to the MSMAOR method[26]; when the coefficient matrix A is in point form and ω=1, it reduces to the MSMMTOR method; when the coefficient matrix A is in point form and γ_k=β_k, ω=1, it reduces to the MSMMAOR method[15]; when ω=1, it reduces to the MSBMMTOR method; when γ_k=β_k, ω=1, it reduces to the MSBMMAOR method[25]; when the parameters (α_k, β_k, γ_k, ω)=(α_k, α_k, α_k, 1), (1, 1, 1, 1) and (1, 0, 0, 1), it reduces to the MSBMMSOR, MSBMGS and MSBMJ methods, respectively; when the parameters (α_k, β_k, γ_k, ω)=(α_k, α_k, α_k, ω), (1, 1, 1, ω) and (1, 0, 0, ω), it reduces to the RMS-BMMSOR, RMSBMGS and RMSBMJ methods, respectively.

3 Convergence analysis

Based on the modulus-based synchronous multisplitting AOR method, BAI et al[26] obtained the following results in 2013.

Theorem 1[26]    Let A∈R^{n×n} be an H+-matrix, with D=diag(A) and B=D-A, and let (M_k, N_k, E_k)(k=1, 2, …, l) and (D-L_k, U_k, E_k)(k=1, 2, …, l) be a multisplitting and a triangular multisplitting of the matrix A, respectively. Assume that γ>0 and the positive diagonal matrix Ω satisfies Ω≥D. If A=D-L_k-U_k(k=1, 2, …, l) satisfies 〈A〉=D-|L_k|-|U_k|(k=1, 2, …, l), then the iteration sequence {z^(m)}_{m=0}^∞ generated by the MSMAOR iteration method converges to the unique solution z* of LCP(q, A) for any initial vector z^(0)∈R_+^n, provided the relaxation parameters α and β satisfy

$ 0 < \beta \le \alpha < \frac{1}{{\rho \left( {{D^{ - 1}}\left| B \right|} \right)}}. $
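For a concrete matrix, the admissible range in theorem 1 is easy to evaluate; the short Python check below (an illustration with a made-up matrix) computes the bound 1/ρ(D^{-1}|B|).

```python
import numpy as np

A = np.array([[4.0, -1.0, 0.0],
              [-1.0, 4.0, -1.0],
              [0.0, -1.0, 4.0]])
D = np.diag(np.diag(A))
B = D - A
rho = np.max(np.abs(np.linalg.eigvals(np.linalg.solve(D, np.abs(B)))))
print("theorem 1 requires 0 < beta <= alpha <", 1.0 / rho)   # about 2.83 here
```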

Based on the modulus-based synchronous block multisplitting AOR method, LI et al[25] analyzed the following results in 2013.

Theorem 2[25]    Let A∈L_{n, I}(n_1, n_2, …, n_p) be a block H_B^{(Ⅰ)}(P, Q)-matrix, with H∈Ω_B^{Ⅰ}(PAQ), and let (M_k, N_k, E_k)(k=1, 2, …, l) and (D-L_k, U_k, E_k)(k=1, 2, …, l) be a block multisplitting and a block triangular multisplitting of the block H-matrix, respectively. Assume that γ>0 and the positive matrix Ω satisfies Ω≥D(H) and diag(Ω)=diag(D(H)). If H=D-L_k-U_k(k=1, 2, …, l) satisfies 〈H〉=〈D〉-[L_k]-[U_k]=D_〈H〉-B_〈H〉(k=1, 2, …, l), then the iteration sequence {z^(m)}_{m=0}^∞ generated by the MSBMAOR iteration method converges to the unique solution z* of LCP(q, A) for any initial vector z^(0)∈R_+^n, provided the relaxation parameters α_k and β_k satisfy

$ 0 < {\beta _k} \le {\alpha _k} < \frac{1}{{{\mu _1}\left( \boldsymbol{PAQ} \right)}}. $

Based on the modulus-based synchronous multisplitting multi-parameters AOR method, ZHANG et al[15] gave the following results in 2014.

Theorem 3[15]    Let A∈R^{n×n} be an H+-matrix, with D=diag(A) and B=D-A, and let (M_k, N_k, E_k)(k=1, 2, …, l) and (D-L_k, U_k, E_k)(k=1, 2, …, l) be a multisplitting and a triangular multisplitting of the matrix A, respectively. Assume that γ>0 and the positive diagonal matrix Ω satisfies Ω≥D. If A=D-L_k-U_k(k=1, 2, …, l) satisfies 〈A〉=D-|L_k|-|U_k|(k=1, 2, …, l), then the iteration sequence {z^(m)}_{m=0}^∞ generated by the MSMMAOR iteration method converges to the unique solution z* of LCP(q, A) for any initial vector z^(0)∈R_+^n, provided the relaxation parameters α_k and β_k satisfy

$ \begin{array}{*{20}{c}} {0 < {\beta _k} \le {\alpha _k} \le 1\;{\rm{or}}\;{\rm{0}} < {\beta _k} < \frac{1}{{\rho \left( {{\mathit{\boldsymbol{D}}^{ - 1}}\left| \mathit{\boldsymbol{B}} \right|} \right)}},}\\ {1 < {\alpha _k} < \frac{1}{{\rho \left( {{\mathit{\boldsymbol{D}}^{ - 1}}\left| \mathit{\boldsymbol{B}} \right|} \right)}}.} \end{array} $

Based on the global relaxed non-stationary multisplitting multi-parameter TOR method (GRNMMTOR) for the large sparse linear systems, ZHANG et al [16] obtained the following convergence theorem in 2008.

Theorem 4[16]    Let A∈R^{n×n} be an H-matrix, and for k=1, 2, …, l, let L_k and F_k be strictly lower triangular matrices. Define the matrices U_k, k=1, 2, …, l, such that A=D-L_k-F_k-U_k. Assume that 〈A〉=|D|-|L_k|-|F_k|-|U_k|=|D|-|B|. If

$ 0 \le {\beta _k} \le {\gamma _k},0 \le {\alpha _k} \le {\gamma _k},0 < {\gamma _k} < \frac{1}{\rho },0 < \omega < \frac{1}{{{\rho _{{\gamma _k}}}}}, $

then the GRNMMTOR method converges for any initial vector x^(0), where ρ=ρ(J), J=|D|^{-1}|B|, ρ_{γ_k}=$\mathop {\max }\limits_{1 \le k \le l} \{ |1 - {\gamma _k}| + {\gamma _k}{\rho _\varepsilon }\} $, q(m, k)≥1, m=0, 1, …, k=1, 2, …, l.

Based on the relaxed modulus-based synchronous block multisplitting multi-parameters TOR method, we now present a convergence result for the multisplitting method applied to the linear complementarity problem when the system matrix is a block H+-matrix, as follows:

Theorem 5    Let A∈L_{n, I}(n_1, n_2, …, n_p) be a block H_B^{(Ⅰ)}(P, Q)-matrix with H∈Ω_B^{Ⅰ}(PAQ), and let (M_k, N_k, E_k)(k=1, 2, …, l) and (D-L_k-F_k, U_k, E_k)(k=1, 2, …, l) be a block multisplitting and a block triangular multisplitting of the block H-matrix, respectively. Assume that γ>0 and the positive matrix Ω satisfies Ω≥D(H) and diag(Ω)=diag(D(H)). If H=D-L_k-F_k-U_k(k=1, 2, …, l) satisfies 〈H〉=〈D〉-[L_k]-[F_k]-[U_k]=D_〈H〉-B_〈H〉(k=1, 2, …, l), then the iteration sequence {z^(m)}_{m=0}^∞ generated by the RMS-BMMTOR iteration method converges to the unique solution z* of LCP(q, A) for any initial vector z^(0)∈R_+^n, provided the relaxation parameters α_k, β_k, γ_k and ω satisfy

$ \begin{array}{l} 0 < {\beta _k},{\gamma _k} \le {\alpha _k} \le 1,0 < \omega < \frac{2}{{1 + \rho '}}{\rm{or}}\\ 0 < {\beta _k},{\gamma _k} < \frac{1}{{{\mu _1}\left( {\mathit{\boldsymbol{PAQ}}} \right)}},1 < {\alpha _k} < \frac{1}{{{\mu _1}\left( {\mathit{\boldsymbol{PAQ}}} \right)}},\\ \;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;0 < \omega < \frac{2}{{1 + \rho '}}, \end{array} $ (7)

where μ_1(PAQ)=ρ(D_〈H〉^{-1}B_〈H〉)=ρ(J_〈H〉), ρ′=$\mathop {\max }\limits_{1 \le k \le l} ${1-2α_k+2α_kρ_ε, 2δ_kρ_ε-1, 2α_kρ_ε-1} and δ_k=max{β_k, γ_k}. Moreover, for each k, β_k and γ_k should both be no greater than α_k or both be no less than α_k.

Proof    From lemma 6 and (6), for the RMS-BMMTOR method, it holds that

$ \begin{array}{l} \left( {{\alpha _k}\mathit{\boldsymbol{ \boldsymbol{\varOmega} }} + \mathit{\boldsymbol{\bar D}} - \left( {{\beta _k}{{\mathit{\boldsymbol{\bar L}}}_k} + {\gamma _k}{{\mathit{\boldsymbol{\bar F}}}_k}} \right)} \right){x_ * } = \left[ {\left( {1 - {\alpha _k}} \right)\mathit{\boldsymbol{\bar D + }}} \right.\\ \;\;\;\;\;\;\left. {\left( {{\alpha _k} - {\beta _k}} \right){{\mathit{\boldsymbol{\bar L}}}_k} + \left( {{\alpha _k} - {\gamma _k}} \right){{\mathit{\boldsymbol{\bar F}}}_k} + {\alpha _k}{{\mathit{\boldsymbol{\bar U}}}_k}} \right]{x_ * } + \\ \;\;\;\;\;\;{\alpha _k}\left[ {\left( {\mathit{\boldsymbol{ \boldsymbol{\varOmega} }} - \mathit{\boldsymbol{H}}} \right)\left| {{x_ * }} \right| - \gamma q} \right],k = 1,2, \cdots ,l. \end{array} $ (8)

By subtracting (8) from (6), we have

$ \begin{array}{l} {x^{(m + 1)}} - {x_*} = \omega \sum\limits_{k = 1}^l {{\boldsymbol{E}_k}{{\left( {{\alpha _k}\boldsymbol{\Omega} + \bar{\boldsymbol{D}} - \left( {{\beta _k}{{\bar{\boldsymbol{L}}}_k} + {\gamma _k}{{\bar{\boldsymbol{F}}}_k}} \right)} \right)}^{ - 1}}} \times \\ \;\;\;\;\;\;\left[ {\left( {1 - {\alpha _k}} \right)\bar{\boldsymbol{D}} + \left( {{\alpha _k} - {\beta _k}} \right){{\bar{\boldsymbol{L}}}_k} + \left( {{\alpha _k} - {\gamma _k}} \right){{\bar{\boldsymbol{F}}}_k} + {\alpha _k}{{\bar{\boldsymbol{U}}}_k}} \right]\left( {{x^{(m)}} - {x_*}} \right) + \\ \;\;\;\;\;\;\omega \sum\limits_{k = 1}^l {{\boldsymbol{E}_k}{{\left( {{\alpha _k}\boldsymbol{\Omega} + \bar{\boldsymbol{D}} - \left( {{\beta _k}{{\bar{\boldsymbol{L}}}_k} + {\gamma _k}{{\bar{\boldsymbol{F}}}_k}} \right)} \right)}^{ - 1}}{\alpha _k}\left( {\boldsymbol{\Omega} - \boldsymbol{H}} \right)\left( {\left| {{x^{(m)}}} \right| - \left| {{x_*}} \right|} \right)} + \\ \;\;\;\;\;\;\left( {1 - \omega } \right)\left( {{x^{(m)}} - {x_*}} \right). \end{array} $ (9)

By taking block absolute values on both sides of equality (9), using the estimate |[x^(m)]-[x_*]|≤[x^(m)-x_*] and enlarging the right-hand side, we obtain

$ \begin{array}{l} \left[ {{x^{(m + 1)}} - {x_*}} \right] \le \omega \sum\limits_{k = 1}^l {\left[ {{\boldsymbol{E}_k}} \right]\left[ {{{\left( {{\alpha _k}\boldsymbol{\Omega} + \bar{\boldsymbol{D}} - \left( {{\beta _k}{{\bar{\boldsymbol{L}}}_k} + {\gamma _k}{{\bar{\boldsymbol{F}}}_k}} \right)} \right)}^{ - 1}}} \right]} \times \\ \;\;\;\;\;\;\left[ {\left| {1 - {\alpha _k}} \right|\left[ {\bar{\boldsymbol{D}}} \right] + \left| {{\alpha _k} - {\beta _k}} \right|\left[ {{{\bar{\boldsymbol{L}}}_k}} \right] + \left| {{\alpha _k} - {\gamma _k}} \right|\left[ {{{\bar{\boldsymbol{F}}}_k}} \right] + {\alpha _k}\left[ {{{\bar{\boldsymbol{U}}}_k}} \right]} \right]\left[ {{x^{(m)}} - {x_*}} \right] + \\ \;\;\;\;\;\;\omega \sum\limits_{k = 1}^l {\left[ {{\boldsymbol{E}_k}} \right]\left[ {{{\left( {{\alpha _k}\boldsymbol{\Omega} + \bar{\boldsymbol{D}} - \left( {{\beta _k}{{\bar{\boldsymbol{L}}}_k} + {\gamma _k}{{\bar{\boldsymbol{F}}}_k}} \right)} \right)}^{ - 1}}} \right]} {\alpha _k}\left[ {\boldsymbol{\Omega} - \boldsymbol{H}} \right]\left[ {{x^{(m)}} - {x_*}} \right] + \\ \;\;\;\;\;\;\left| {1 - \omega } \right|\left[ {{x^{(m)}} - {x_*}} \right]. \end{array} $

Since [Ω-H]=〈Ω〉-(D_〈H〉-B_〈H〉), B_〈H〉=[L_k]+[F_k]+[U_k] and [D̄]≥D_〈H〉, we have

$ \begin{array}{l} \left[ {{x^{(m + 1)}} - {x_*}} \right] \le \omega \sum\limits_{k = 1}^l {\left[ {{\boldsymbol{E}_k}} \right]\left[ {{{\left( {{\alpha _k}\boldsymbol{\Omega} + \bar{\boldsymbol{D}} - \left( {{\beta _k}{{\bar{\boldsymbol{L}}}_k} + {\gamma _k}{{\bar{\boldsymbol{F}}}_k}} \right)} \right)}^{ - 1}}} \right]} \left[ {\left| {1 - {\alpha _k}} \right|\left[ {\bar{\boldsymbol{D}}} \right] + \left| {{\alpha _k} - {\beta _k}} \right|\left[ {{{\bar{\boldsymbol{L}}}_k}} \right] + } \right.\\ \;\;\;\;\;\left. {\left| {{\alpha _k} - {\gamma _k}} \right|\left[ {{{\bar{\boldsymbol{F}}}_k}} \right] + {\alpha _k}\left[ {{{\bar{\boldsymbol{U}}}_k}} \right] + {\alpha _k}\left[ {\boldsymbol{\Omega} - \boldsymbol{H}} \right]} \right]\left[ {{x^{(m)}} - {x_*}} \right] + \left| {1 - \omega } \right|\left[ {{x^{(m)}} - {x_*}} \right] \le \\ \;\;\;\;\;\omega \sum\limits_{k = 1}^l {\left[ {{\boldsymbol{E}_k}} \right]{{\left( {{\alpha _k}\left\langle \boldsymbol{\Omega} \right\rangle + {\boldsymbol{D}_{\left\langle \boldsymbol{H} \right\rangle }} - \left( {{\beta _k}\left[ {{\boldsymbol{L}_k}} \right] + {\gamma _k}\left[ {{\boldsymbol{F}_k}} \right]} \right)} \right)}^{ - 1}}} \left[ {\left( {\left| {1 - {\alpha _k}} \right| - {\alpha _k}} \right){\boldsymbol{D}_{\left\langle \boldsymbol{H} \right\rangle }} + \left( {\left| {{\alpha _k} - {\beta _k}} \right| + {\alpha _k}} \right)\left[ {{{\bar{\boldsymbol{L}}}_k}} \right] + } \right.\\ \;\;\;\;\;\left. {\left( {\left| {{\alpha _k} - {\gamma _k}} \right| + {\alpha _k}} \right)\left[ {{{\bar{\boldsymbol{F}}}_k}} \right] + 2{\alpha _k}\left[ {{{\bar{\boldsymbol{U}}}_k}} \right] + {\alpha _k}\left\langle \boldsymbol{\Omega} \right\rangle } \right]\left[ {{x^{(m)}} - {x_*}} \right] + \left| {1 - \omega } \right|\left[ {{x^{(m)}} - {x_*}} \right] = \\ \;\;\;\;\;\left\{ {\omega \sum\limits_{k = 1}^l {\left[ {{\boldsymbol{E}_k}} \right]{\boldsymbol{H}_{{\rm{RMS - BMMTOR}}}}} + \left| {1 - \omega } \right|\boldsymbol{I}} \right\}\left[ {{x^{(m)}} - {x_*}} \right] = {{\bar{\boldsymbol{H}}}_{{\rm{RMS - BMMTOR}}}}\left[ {{x^{(m)}} - {x_*}} \right], \end{array} $ (10)

where

$ \begin{array}{l} {{\bar{\boldsymbol{H}}}_{{\rm{RMS - BMMTOR}}}} = \omega \sum\limits_{k = 1}^l {\left[ {{\boldsymbol{E}_k}} \right]{\boldsymbol{H}_{{\rm{RMS - BMMTOR}}}}} + \left| {1 - \omega } \right|\boldsymbol{I},\\ {\boldsymbol{H}_{{\rm{RMS - BMMTOR}}}} = {\left( {{\alpha _k}\left\langle \boldsymbol{\Omega} \right\rangle + {\boldsymbol{D}_{\left\langle \boldsymbol{H} \right\rangle }} - \left( {{\beta _k}\left[ {{\boldsymbol{L}_k}} \right] + {\gamma _k}\left[ {{\boldsymbol{F}_k}} \right]} \right)} \right)^{ - 1}}\left[ {\left( {\left| {1 - {\alpha _k}} \right| - {\alpha _k}} \right){\boldsymbol{D}_{\left\langle \boldsymbol{H} \right\rangle }} + } \right.\\ \;\;\;\;\;\;\;\left. {\left( {\left| {{\alpha _k} - {\beta _k}} \right| + {\alpha _k}} \right)\left[ {{{\bar{\boldsymbol{L}}}_k}} \right] + \left( {\left| {{\alpha _k} - {\gamma _k}} \right| + {\alpha _k}} \right)\left[ {{{\bar{\boldsymbol{F}}}_k}} \right] + 2{\alpha _k}\left[ {{{\bar{\boldsymbol{U}}}_k}} \right] + {\alpha _k}\left\langle \boldsymbol{\Omega} \right\rangle } \right]. \end{array} $ (11)

The error relationship (10) is the basis for proving the convergence of the RMS-BMMTOR method. By making use of lemmas 1 and 2, defining ε^(m)=[x^(m)-x_*] and arranging similar terms together, we can obtain

$ \begin{array}{l} {\varepsilon ^{(m + 1)}} = \left[ {{x^{(m + 1)}} - {x_*}} \right] \le {{\bar{\boldsymbol{H}}}_{{\rm{RMS - BMMTOR}}}}\left[ {{x^{(m)}} - {x_*}} \right] = \\ \;\;\;\;\;\;\;\;\left\{ {\omega \sum\limits_{k = 1}^l {\left[ {{\boldsymbol{E}_k}} \right]{\boldsymbol{H}_{{\rm{RMS - BMMTOR}}}}} + \left| {1 - \omega } \right|\boldsymbol{I}} \right\}{\varepsilon ^{(m)}}. \end{array} $

Case 1    Let 0 < βk, γkαk≤1, $0 < \omega < \frac{2}{{1 + \rho ' }}$. Define

$ {\mathit{\boldsymbol{M}}_k} = {\alpha _k}\left\langle \mathit{\boldsymbol{ \boldsymbol{\varOmega} }} \right\rangle + {\mathit{\boldsymbol{D}}_{\left\langle \mathit{\boldsymbol{H}} \right\rangle }} - \left( {{\beta _k}\left[ {{\mathit{\boldsymbol{L}}_k}} \right] + {\gamma _k}\left[ {{\mathit{\boldsymbol{F}}_k}} \right]} \right), $
$ \begin{array}{l} \boldsymbol{N}_k^1 = \left( {\left| {1 - {\alpha _k}} \right| - {\alpha _k}} \right){\boldsymbol{D}_{\left\langle \boldsymbol{H} \right\rangle }} + \left( {\left| {{\alpha _k} - {\beta _k}} \right| + {\alpha _k}} \right)\left[ {{{\bar{\boldsymbol{L}}}_k}} \right] + \\ \;\;\;\;\;\;\;\;\left( {\left| {{\alpha _k} - {\gamma _k}} \right| + {\alpha _k}} \right)\left[ {{{\bar{\boldsymbol{F}}}_k}} \right] + 2{\alpha _k}\left[ {{{\bar{\boldsymbol{U}}}_k}} \right] + {\alpha _k}\left\langle \boldsymbol{\Omega} \right\rangle = \\ \;\;\;\;\;\;\;\;\left( {1 - 2{\alpha _k}} \right){\boldsymbol{D}_{\left\langle \boldsymbol{H} \right\rangle }} + \left( {2{\alpha _k} - {\beta _k}} \right)\left[ {{{\bar{\boldsymbol{L}}}_k}} \right] + \left( {2{\alpha _k} - {\gamma _k}} \right)\left[ {{{\bar{\boldsymbol{F}}}_k}} \right] + 2{\alpha _k}\left[ {{{\bar{\boldsymbol{U}}}_k}} \right] + {\alpha _k}\left\langle \boldsymbol{\Omega} \right\rangle = \\ \;\;\;\;\;\;\;\;{\boldsymbol{M}_k} - 2{\alpha _k}{\boldsymbol{D}_{\left\langle \boldsymbol{H} \right\rangle }} + 2{\alpha _k}{\boldsymbol{B}_{\left\langle \boldsymbol{H} \right\rangle }}. \end{array} $

So, we have

$ \begin{array}{l} {\boldsymbol{H}_{{\rm{RMS - BMMTOR}}}} = \boldsymbol{M}_k^{ - 1}\boldsymbol{N}_k^1 = \boldsymbol{M}_k^{ - 1}\left( {{\boldsymbol{M}_k} - 2{\alpha _k}{\boldsymbol{D}_{\left\langle \boldsymbol{H} \right\rangle }} + 2{\alpha _k}{\boldsymbol{B}_{\left\langle \boldsymbol{H} \right\rangle }}} \right) = \\ \;\;\;\;\;\;\;\boldsymbol{I} - 2{\alpha _k}\boldsymbol{M}_k^{ - 1}\left( {{\boldsymbol{D}_{\left\langle \boldsymbol{H} \right\rangle }} - {\boldsymbol{B}_{\left\langle \boldsymbol{H} \right\rangle }}} \right). \end{array} $

Through further analysis, we have

$ \begin{array}{l} \left[ {{\mathit{\boldsymbol{H}}_{{\rm{RMS - BMMTOR}}}}} \right] \le \mathit{\boldsymbol{M}}_k^{ - 1}\left[ {{\mathit{\boldsymbol{M}}_k} - 2{\alpha _k}\left( {{\mathit{\boldsymbol{D}}_{\left\langle \mathit{\boldsymbol{H}} \right\rangle }} - {\mathit{\boldsymbol{B}}_{\left\langle \mathit{\boldsymbol{H}} \right\rangle }}} \right)} \right] \le \\ \;\;\;\;\;\;\;\mathit{\boldsymbol{I}} - 2{\alpha _k}\mathit{\boldsymbol{M}}_k^{ - 1}{\mathit{\boldsymbol{D}}_{\left\langle \mathit{\boldsymbol{H}} \right\rangle }}\left( {\mathit{\boldsymbol{I}} - {\mathit{\boldsymbol{D}}_{\left\langle \mathit{\boldsymbol{H}} \right\rangle }}^{ - 1}{\mathit{\boldsymbol{B}}_{\left\langle \mathit{\boldsymbol{H}} \right\rangle }}} \right). \end{array} $

Since A∈L_{n, I}(n_1, n_2, …, n_p) is a block H_B^{(Ⅰ)}(P, Q)-matrix, by lemmas 2 and 3 we know μ_1(PAQ)=ρ(D_〈H〉^{-1}B_〈H〉)=ρ(J_〈H〉) < 1. Let J_ε=J_〈H〉+εee^T, where e denotes the vector e=(1, 1, …, 1)^T∈R^n. Since J_〈H〉 is nonnegative, the matrix J_〈H〉+εee^T has only positive entries and is irreducible for any ε>0. By the Perron-Frobenius theorem, for any ε>0, there is a vector x_ε>0 such that

$ \left( {{\mathit{\boldsymbol{J}}_{\left\langle \mathit{\boldsymbol{H}} \right\rangle }} + \varepsilon \mathit{\boldsymbol{e}}{\mathit{\boldsymbol{e}}^{\rm{T}}}} \right){x_\varepsilon } = {\rho _\varepsilon }{x_\varepsilon }, $

where ρε=ρ(JH+εeeT)=ρ(Jε). Moreover, if ε>0 is small enough, we have ρε < 1 by continuity of the spectral radius. Because of 0 < αk≤1, we also have 1-2αk+2αkρ < 1, and 1-2αk+2αkρε < 1. So

$ \begin{array}{l} \left[ {{\mathit{\boldsymbol{H}}_{{\rm{RMSBMMTOR}}}}} \right] \le I - 2{\alpha _k}\mathit{\boldsymbol{M}}_k^{ - 1}{\mathit{\boldsymbol{D}}_{\left\langle \mathit{\boldsymbol{H}} \right\rangle }}\left[ {\mathit{\boldsymbol{I}} - \left( {\mathit{\boldsymbol{D}}_{\left\langle \mathit{\boldsymbol{H}} \right\rangle }^{ - 1}{\mathit{\boldsymbol{B}}_{\left\langle \mathit{\boldsymbol{H}} \right\rangle }} + } \right.} \right.\\ \;\;\;\;\;\;\left. {\left. {\varepsilon \mathit{\boldsymbol{e}}{\mathit{\boldsymbol{e}}^{\rm{T}}}} \right)} \right] = I - 2{\alpha _k}\mathit{\boldsymbol{M}}_k^{ - 1}{\mathit{\boldsymbol{D}}_{\left\langle \mathit{\boldsymbol{H}} \right\rangle }}\left[ {\mathit{\boldsymbol{I}} - {\mathit{\boldsymbol{J}}_\varepsilon }} \right]. \end{array} $

Multiplying both sides of the above inequality by x_ε and using M_k^{-1}≥D_〈H〉^{-1}, we can obtain

$ \begin{array}{l} \left[ {{\boldsymbol{H}_{{\rm{RMS - BMMTOR}}}}} \right]{\boldsymbol{x}_\varepsilon } \le {\boldsymbol{x}_\varepsilon } - 2{\alpha _k}\boldsymbol{M}_k^{ - 1}{\boldsymbol{D}_{\left\langle \boldsymbol{H} \right\rangle }}\left[ {1 - \rho \left( {{\boldsymbol{J}_\varepsilon }} \right)} \right]{\boldsymbol{x}_\varepsilon } \le \\ \;\;\;\;\;\;\;{\boldsymbol{x}_\varepsilon } - 2{\alpha _k}\boldsymbol{D}_{\left\langle \boldsymbol{H} \right\rangle }^{ - 1}{\boldsymbol{D}_{\left\langle \boldsymbol{H} \right\rangle }}\left[ {1 - \rho \left( {{\boldsymbol{J}_\varepsilon }} \right)} \right]{\boldsymbol{x}_\varepsilon } = \left( {1 - 2{\alpha _k} + 2{\alpha _k}\rho \left( {{\boldsymbol{J}_\varepsilon }} \right)} \right){\boldsymbol{x}_\varepsilon }. \end{array} $

Based on Ek and the definition of [·], we know that $\sum\limits_{k = 1}^l {[{\mathit{\boldsymbol{E}}_k}]} = \mathit{\boldsymbol{I}}$. By (11), we have

$ \begin{array}{l} \left[ {{{\mathit{\boldsymbol{\bar H}}}_{{\rm{RMS - BMMTOR}}}}} \right]{\mathit{\boldsymbol{x}}_\varepsilon } = \omega \sum\limits_{k = 1}^l {\left[ {{\mathit{\boldsymbol{E}}_k}} \right]\left( {1 - 2{\alpha _k} + 2{\alpha _k}\rho \left( {{\mathit{\boldsymbol{J}}_\varepsilon }} \right)} \right){x_\varepsilon } + } \\ \;\;\;\;\;\;\;\;\left| {1 - \omega } \right|{\mathit{\boldsymbol{x}}_\varepsilon } \le \omega \left( {1 - 2{\alpha _k} + 2{\alpha _k}{\rho _\varepsilon }} \right){\mathit{\boldsymbol{x}}_\varepsilon } + \\ \;\;\;\;\;\;\;\;\left| {1 - \omega } \right|{\mathit{\boldsymbol{x}}_\varepsilon } \le \left( {\omega \rho ' + \left| {1 - \omega } \right|} \right){\mathit{\boldsymbol{x}}_\varepsilon } = {\theta _1}{\mathit{\boldsymbol{x}}_\varepsilon }\left( {\varepsilon \to {0^ + }} \right), \end{array} $

where θ1=ωρ′+|1-ω| < 1.

Case 2    Let 0 < β_k, ${\gamma _k} < \frac{1}{{{\mu _1}\left( \boldsymbol{PAQ} \right)}}$,

$ 1 < {\alpha _k} < \frac{1}{{{\mu _1}\left( {\mathit{\boldsymbol{PAQ}}} \right)}},\;\;\;0 < \omega < \frac{2}{{1 + \rho '}}. $

Subcase 1    α_k≥β_k and α_k≥γ_k. Define

$ \begin{array}{l} \mathit{\boldsymbol{N}}_k^2 = \left( {\left| {1 - {\alpha _k}} \right| - {\alpha _k}} \right){\mathit{\boldsymbol{D}}_{\left\langle \mathit{\boldsymbol{H}} \right\rangle }} + \left( {\left| {{\alpha _k} - {\beta _k}} \right| + {\alpha _k}} \right)\left[ {{{\mathit{\boldsymbol{\bar L}}}_k}} \right] + \\ \;\;\;\;\;\;\left( {\left| {{\alpha _k} - {\gamma _k}} \right| + {\alpha _k}} \right)\left[ {{{\mathit{\boldsymbol{\bar F}}}_k}} \right] + 2{\alpha _k}\left[ {{{\mathit{\boldsymbol{\bar U}}}_k}} \right] + {\alpha _k}\left\langle \mathit{\boldsymbol{ \boldsymbol{\varOmega} }} \right\rangle = \\ \;\;\;\;\;\;{\mathit{\boldsymbol{M}}_k} - 2{\mathit{\boldsymbol{D}}_{\left\langle \mathit{\boldsymbol{H}} \right\rangle }} + 2{\alpha _k}{\mathit{\boldsymbol{B}}_{\left\langle \mathit{\boldsymbol{H}} \right\rangle }}. \end{array} $

So

$ \begin{array}{l} \left[ {{\mathit{\boldsymbol{H}}_{{\rm{RMS - BMMTOR}}}}} \right] \le \mathit{\boldsymbol{M}}_k^{ - 1}\left[ {{\mathit{\boldsymbol{M}}_k} - 2\left( {{\mathit{\boldsymbol{D}}_{\left\langle \mathit{\boldsymbol{H}} \right\rangle }} - {\alpha _k}{\mathit{\boldsymbol{B}}_{\left\langle \mathit{\boldsymbol{H}} \right\rangle }}} \right)} \right] \le \\ \;\;\;\;\;\;\mathit{\boldsymbol{I}} - 2\mathit{\boldsymbol{M}}_k^{ - 1}{\mathit{\boldsymbol{D}}_{\left\langle \mathit{\boldsymbol{H}} \right\rangle }}\left( {\mathit{\boldsymbol{I}} - {\alpha _k}{\mathit{\boldsymbol{D}}_{\left\langle \mathit{\boldsymbol{H}} \right\rangle }}^{ - 1}{\mathit{\boldsymbol{B}}_{\left\langle \mathit{\boldsymbol{H}} \right\rangle }}} \right). \end{array} $

Similar to case 1, let e denote the vector e=(1, 1, …, 1)^T∈R^n, and let x_ε>0 be such that J_εx_ε=(J_〈H〉+εee^T)x_ε=ρ(J_ε)x_ε. Moreover, if ε>0 is small enough, we have ρ_ε < 1 by continuity of the spectral radius. Because of $1 < {\alpha _k} < \frac{1}{{{\mu _1}(\boldsymbol{PAQ})}}$, we also have

$ 2{\alpha _k}\rho - 1 < 1\;{\rm{and}}\;{\rm{2}}{\alpha _k}{\rho _\varepsilon } - 1 < 1. $

So

$ \begin{array}{l} \left[ {{\boldsymbol{H}_{{\rm{RMS - BMMTOR}}}}} \right] \le \boldsymbol{I} - 2\boldsymbol{M}_k^{ - 1}{\boldsymbol{D}_{\left\langle \boldsymbol{H} \right\rangle }}\left[ {\boldsymbol{I} - {\alpha _k}\left( {\boldsymbol{D}_{\left\langle \boldsymbol{H} \right\rangle }^{ - 1}{\boldsymbol{B}_{\left\langle \boldsymbol{H} \right\rangle }} + \varepsilon \boldsymbol{e}{\boldsymbol{e}^{\rm{T}}}} \right)} \right] = \\ \;\;\;\;\;\;\;\boldsymbol{I} - 2\boldsymbol{M}_k^{ - 1}{\boldsymbol{D}_{\left\langle \boldsymbol{H} \right\rangle }}\left[ {\boldsymbol{I} - {\alpha _k}{\boldsymbol{J}_\varepsilon }} \right]. \end{array} $

Multiplying both sides of the above inequality by x_ε and using M_k^{-1}≥D_〈H〉^{-1}, we can obtain

$ \begin{array}{l} \left[ {{\boldsymbol{H}_{{\rm{RMS - BMMTOR}}}}} \right]{\boldsymbol{x}_\varepsilon } \le {\boldsymbol{x}_\varepsilon } - 2\boldsymbol{M}_k^{ - 1}{\boldsymbol{D}_{\left\langle \boldsymbol{H} \right\rangle }}\left[ {1 - {\alpha _k}\rho \left( {{\boldsymbol{J}_\varepsilon }} \right)} \right]{\boldsymbol{x}_\varepsilon } \le \\ \;\;\;\;\;\;\;{\boldsymbol{x}_\varepsilon } - 2\boldsymbol{D}_{\left\langle \boldsymbol{H} \right\rangle }^{ - 1}{\boldsymbol{D}_{\left\langle \boldsymbol{H} \right\rangle }}\left[ {1 - {\alpha _k}\rho \left( {{\boldsymbol{J}_\varepsilon }} \right)} \right]{\boldsymbol{x}_\varepsilon } = \left( {2{\alpha _k}\rho \left( {{\boldsymbol{J}_\varepsilon }} \right) - 1} \right){\boldsymbol{x}_\varepsilon }. \end{array} $

Based on Ek and the definition of [·], we know that $\sum\limits_{k = 1}^l {[{\mathit{\boldsymbol{E}}_k}]} = \mathit{\boldsymbol{I}}$. By (11), we have

$ \begin{array}{l} \left[ {{{\bar{\boldsymbol{H}}}_{{\rm{RMS - BMMTOR}}}}} \right]{\boldsymbol{x}_\varepsilon } = \omega \sum\limits_{k = 1}^l {\left[ {{\boldsymbol{E}_k}} \right]\left( {2{\alpha _k}\rho \left( {{\boldsymbol{J}_\varepsilon }} \right) - 1} \right){\boldsymbol{x}_\varepsilon }} + \left| {1 - \omega } \right|{\boldsymbol{x}_\varepsilon } \le \\ \;\;\;\;\;\;\;\omega \left( {2{\alpha _k}{\rho _\varepsilon } - 1} \right){\boldsymbol{x}_\varepsilon } + \left| {1 - \omega } \right|{\boldsymbol{x}_\varepsilon } \le \left( {\omega \rho ' + \left| {1 - \omega } \right|} \right){\boldsymbol{x}_\varepsilon } = {\theta _2}{\boldsymbol{x}_\varepsilon }\;\left( {\varepsilon \to {0^ + }} \right), \end{array} $

where θ2=ωρ′+|1-ω| < 1.

Subcase 2    α_k≤β_k and α_k≤γ_k. Define

$ \begin{array}{l} \boldsymbol{N}_k^3 = \left( {\left| {1 - {\alpha _k}} \right| - {\alpha _k}} \right){\boldsymbol{D}_{\left\langle \boldsymbol{H} \right\rangle }} + \left( {\left| {{\alpha _k} - {\beta _k}} \right| + {\alpha _k}} \right)\left[ {{{\bar{\boldsymbol{L}}}_k}} \right] + \\ \;\;\;\;\;\;\left( {\left| {{\alpha _k} - {\gamma _k}} \right| + {\alpha _k}} \right)\left[ {{{\bar{\boldsymbol{F}}}_k}} \right] + 2{\alpha _k}\left[ {{{\bar{\boldsymbol{U}}}_k}} \right] + {\alpha _k}\left\langle \boldsymbol{\Omega} \right\rangle = \\ \;\;\;\;\;\;{\boldsymbol{M}_k} - 2{\boldsymbol{D}_{\left\langle \boldsymbol{H} \right\rangle }} + 2{\beta _k}\left[ {{{\bar{\boldsymbol{L}}}_k}} \right] + 2{\gamma _k}\left[ {{{\bar{\boldsymbol{F}}}_k}} \right] + 2{\alpha _k}\left[ {{{\bar{\boldsymbol{U}}}_k}} \right] \le \\ \;\;\;\;\;\;{\boldsymbol{M}_k} - 2{\boldsymbol{D}_{\left\langle \boldsymbol{H} \right\rangle }} + 2{\delta _k}{\boldsymbol{B}_{\left\langle \boldsymbol{H} \right\rangle }}, \end{array} $

where δk=max{βk, γk}. So

$ \begin{array}{l} \left[ {{\mathit{\boldsymbol{H}}_{{\rm{RMS - BMMTOR}}}}} \right] \le \mathit{\boldsymbol{M}}_k^{ - 1}\left[ {{\mathit{\boldsymbol{M}}_k} - 2\left( {{\mathit{\boldsymbol{D}}_{\left\langle \mathit{\boldsymbol{H}} \right\rangle }} - {\delta _k}{\mathit{\boldsymbol{B}}_{\left\langle \mathit{\boldsymbol{H}} \right\rangle }}} \right)} \right] \le \\ \;\;\;\;\;\;\;\mathit{\boldsymbol{I}} - 2\mathit{\boldsymbol{M}}_k^{ - 1}{\mathit{\boldsymbol{D}}_{\left\langle \mathit{\boldsymbol{H}} \right\rangle }}\left( {\mathit{\boldsymbol{I}} - {\delta _k}{\mathit{\boldsymbol{D}}_{\left\langle \mathit{\boldsymbol{H}} \right\rangle }}^{ - 1}{\mathit{\boldsymbol{B}}_{\left\langle \mathit{\boldsymbol{H}} \right\rangle }}} \right). \end{array} $

Similar to case 1, let e denote the vector e=(1, 1, …, 1)^T∈R^n, and let x_ε>0 be such that J_εx_ε=(J_〈H〉+εee^T)x_ε=ρ(J_ε)x_ε. Moreover, if ε>0 is small enough, we have ρ_ε < 1 by continuity of the spectral radius. Because of 0 < β_k, ${\gamma _k} < \frac{1}{{{\mu _1}(\boldsymbol{PAQ})}}$, we also have

$ 2{\delta _k}\rho - 1 < 1\;{\rm{and}}\;{\rm{2}}{\delta _k}{\rho _\varepsilon } - 1 < 1. $

So

$ \begin{array}{l} \left[ {{\boldsymbol{H}_{{\rm{RMS - BMMTOR}}}}} \right] \le \boldsymbol{I} - 2\boldsymbol{M}_k^{ - 1}{\boldsymbol{D}_{\left\langle \boldsymbol{H} \right\rangle }}\left[ {\boldsymbol{I} - {\delta _k}\left( {\boldsymbol{D}_{\left\langle \boldsymbol{H} \right\rangle }^{ - 1}{\boldsymbol{B}_{\left\langle \boldsymbol{H} \right\rangle }} + \varepsilon \boldsymbol{e}{\boldsymbol{e}^{\rm{T}}}} \right)} \right] = \\ \;\;\;\;\;\;\;\boldsymbol{I} - 2\boldsymbol{M}_k^{ - 1}{\boldsymbol{D}_{\left\langle \boldsymbol{H} \right\rangle }}\left[ {\boldsymbol{I} - {\delta _k}{\boldsymbol{J}_\varepsilon }} \right]. \end{array} $

Multiplying both sides of the above inequality by x_ε and using M_k^{-1}≥D_〈H〉^{-1}, we can obtain

$ \begin{array}{l} \left[ {{\mathit{\boldsymbol{H}}_{{\rm{RMS - BMMTOR}}}}} \right]{\mathit{\boldsymbol{x}}_\varepsilon } \le {\mathit{\boldsymbol{x}}_\varepsilon } - 2\left[ {1 - {\delta _k}\rho \left( {{\mathit{\boldsymbol{J}}_\varepsilon }} \right)} \right]{\mathit{\boldsymbol{x}}_\varepsilon } = \\ \;\;\;\;\;\;\;\left( {2{\delta _k}\rho \left( {{\mathit{\boldsymbol{J}}_\varepsilon }} \right) - 1} \right){\mathit{\boldsymbol{x}}_\varepsilon }. \end{array} $

Based on Ek and the definition of [·], we know that $\sum\limits_{k = 1}^l {[{\mathit{\boldsymbol{E}}_k}]} = \mathit{\boldsymbol{I}}$. By (11), we have

$ \begin{array}{l} \left[ {{{\mathit{\boldsymbol{\bar H}}}_{{\rm{RMS - BMMTOR}}}}} \right]{\mathit{\boldsymbol{x}}_\varepsilon } = \omega \sum\limits_{k = 1}^l {\left[ {{\mathit{\boldsymbol{E}}_k}} \right]\left( {2{\delta _k}\rho \left( {{\mathit{\boldsymbol{J}}_\varepsilon }} \right) - 1} \right){\mathit{\boldsymbol{x}}_\varepsilon }} + \\ \;\;\;\;\;\;\;\;\left| {1 - \omega } \right|{\mathit{\boldsymbol{x}}_\varepsilon } \le \omega \left( {2{\delta _k}{\rho _\varepsilon } - 1} \right){\mathit{\boldsymbol{x}}_\varepsilon } + \left| {1 - \omega } \right|{\mathit{\boldsymbol{x}}_\varepsilon } \le \\ \;\;\;\;\;\;\;\;\left( {\omega \rho ' + \left| {1 - \omega } \right|} \right){\mathit{\boldsymbol{x}}_\varepsilon } = {\theta _3}{\mathit{\boldsymbol{x}}_\varepsilon }\left( {\varepsilon \to {0^ + }} \right), \end{array} $

where θ_3=ωρ′+|1-ω| < 1. In all three cases we have obtained [H̄_RMS-BMMTOR]x_ε≤θx_ε with θ < 1 for sufficiently small ε>0, which implies ρ([H̄_RMS-BMMTOR])≤θ < 1. Combining this with the error relationship (10), the iteration sequence {z^(m)}_{m=0}^∞ generated by the RMS-BMMTOR method converges to the unique solution z* of LCP(q, A) for any initial vector. This completes the proof.

Remark 2    In practical applications, the parameters can be adjusted suitably so that the convergence of the method is substantially improved. That is to say, we have more choices for the splitting A=B-C which make the multisplitting iteration methods converge. Therefore, our convergence theories extend the scope of multisplitting iteration methods in applications.
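To choose such parameters in practice, one can estimate μ_1(PAQ) (for instance with the block comparison sketch after definition 2) and then evaluate ρ′ and the admissible range of ω from theorem 5; the short Python helper below does this in the limit ρ_ε→μ_1(PAQ) as ε→0+ (the parameter values in the example are hypothetical).

```python
def omega_upper_bound(mu1, alpha, beta, gamma_k):
    """rho' of theorem 5 (with rho_eps replaced by its limit mu_1(PAQ))
    and the resulting admissible range 0 < omega < 2/(1 + rho')."""
    terms = []
    for a, b, c in zip(alpha, beta, gamma_k):
        delta = max(b, c)
        terms += [1 - 2 * a + 2 * a * mu1, 2 * delta * mu1 - 1, 2 * a * mu1 - 1]
    rho_prime = max(terms)
    return rho_prime, 2.0 / (1.0 + rho_prime)

# e.g. mu_1(PAQ) = 0.4 and two splittings with 0 < beta_k, gamma_k <= alpha_k <= 1
print(omega_upper_bound(0.4, alpha=[1.0, 0.9], beta=[0.8, 0.9], gamma_k=[0.9, 0.7]))
```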

Through a proof similar to that of theorem 5, we can obtain the following convergence result.

Theorem 6    Let A∈L_{n, I}(n_1, n_2, …, n_p) be a block H_B^{(Ⅱ)}(P, Q)-matrix, with H∈Ω_B^{Ⅱ}(PAQ), and let (M_k, N_k, E_k)(k=1, 2, …, l) and (D-L_k-F_k, U_k, E_k)(k=1, 2, …, l) be a block multisplitting and a block triangular multisplitting of the block H-matrix, respectively. Assume that γ>0 and the positive matrix Ω satisfies Ω≥D(H) and diag(Ω)=diag(D(H)). If H=D-L_k-F_k-U_k(k=1, 2, …, l) satisfies 〈〈H〉〉=I-[D^{-1}L_k]-[D^{-1}F_k]-[D^{-1}U_k]=I-B_〈〈H〉〉(k=1, 2, …, l), then the iteration sequence {z^(m)}_{m=0}^∞ generated by the RMS-BMMTOR iteration method converges to the unique solution z* of LCP(q, A) for any initial vector z^(0)∈R_+^n, provided the relaxation parameters α_k, β_k, γ_k and ω satisfy

$ \begin{array}{c} 0 < {\beta _k},{\gamma _k} \le {\alpha _k} \le 1,0 < \omega < \frac{2}{{1 + \rho '}}\;{\rm{or}}\\ 0 < {\beta _k},{\gamma _k} < \frac{1}{{{\mu _2}\left( \boldsymbol{PAQ} \right)}},1 < {\alpha _k} < \frac{1}{{{\mu _2}\left( \boldsymbol{PAQ} \right)}},0 < \omega < \frac{2}{{1 + \rho '}}, \end{array} $ (12)

where μ_2(PAQ)=ρ(D_〈〈H〉〉^{-1}B_〈〈H〉〉)=ρ(J_〈〈H〉〉), ρ′=$\mathop {\max }\limits_{1 \le k \le l} ${1-2α_k+2α_kρ_ε, 2δ_kρ_ε-1, 2α_kρ_ε-1} and δ_k=max{β_k, γ_k}. Moreover, for each k, β_k and γ_k should both be no greater than α_k or both be no less than α_k.

Remark 3    From table 1, we see that the MSMMAOR method in [30] and the MSBMAOR method in [23] use the same parameters α, β in different processors, while the RMS-BMMTOR method in this paper uses different parameters α_k, β_k, γ_k(k=1, 2, …, l) in different processors. Moreover, when computing x^(m+1) in method 1, we utilize a relaxation extrapolation technique and add a relaxation parameter. Therefore, we may choose proper relaxation parameters to increase the convergence speed and reduce the computation time in numerical experiments. In the RMS-BMMTOR method, we may also choose proper E_k to balance the load of each processor and avoid synchronization overhead.

Table 1 The modulus-based synchronous (block) multisplitting multi-parameters methods and the corresponding convergence results
4 Conclusions

This paper discusses relaxed modulus-based synchronous block multisplitting multi-parameters methods for linear complementarity problems. We have analyzed these methods and obtained new convergence theorems under certain conditions. Table 1 shows that some existing methods for linear complementarity problems are special cases of the new methods with appropriate parameters. Moreover, the new convergence theorems are more general and practical.

References
[1] COTTLE R W, PANG J S, STONE R E. The Linear Complementarity Problem[M]. San Diego: Academic Press, 1992.
[2] FERRIS M C, PANG J S. Engineering and economic applications of complementarity problems[J]. SIAM Review, 1997, 39(4): 669–713. DOI:10.1137/S0036144595285963
[3] BAI Z Z. On the convergence of the multisplitting methods for the linear complementarity problem[J]. SIAM Journal on Matrix Analysis and Applications, 1999, 21(1): 67–78.
[4] BAI Z Z. The convergence of parallel iteration algorithms for linear complementarity problems[J]. Computers and Mathematics with Applications, 1996, 32(9): 1–17. DOI:10.1016/0898-1221(96)00172-1
[5] BAI Z Z, EVANS D J. Matrix multisplitting relaxation methods for linear complementarity problems[J]. International Journal of Computer Mathematics, 1997, 63(3/4): 309–326.
[6] BAI Z Z. On the monotone convergence of matrix multisplitting relaxation methods for the linear complementarity problem[J]. IMA Journal of Numerical Analysis, 1998, 18(4): 509–518. DOI:10.1093/imanum/18.4.509
[7] BAI Z Z, EVANS D J. Matrix multisplitting methods with applications to linear complementarity problems: Parallel synchronous and chaotic methods[J]. Réseaux et Systèmes Répartis: Calculateurs Paralleles, 2001, 13: 125–154.
[8] BAI Z Z. Modulus-based matrix splitting iteration methods for linear complementarity problems[J]. Numerical Linear Algebra with Applications, 2010, 17(6): 917–933. DOI:10.1002/nla.v17.6
[9] BAI Z Z, ZHANG L L. Modulus-based synchronous two-stage multisplitting iteration methods for linear complementarity problems[J]. Numerical Algorithms, 2013, 62(1): 59–77. DOI:10.1007/s11075-012-9566-x
[10] LEENAERTS D, BOKHOVEN W M G D. Piecewise-Linear Modelling and Analysis[M]. New York: Springer, 1998.
[11] DONG J L, JIANG M Q. A modified modulus method for symmetric positive-definite linear complementarity problems[J]. Numerical Linear Algebra with Applications, 2009, 16: 129–143. DOI:10.1002/nla.v16:2
[12] HADJIDIMOS A, TZOUMAS M. Nonstationary extrapolated modulus algorithms for the solution of the linear complementarity problem[J]. Linear Algebra and Its Applications, 2009, 431(1/2): 197–210.
[13] ZHANG L L, REN Z R. Improved convergence theorems of modulus-based matrix splitting iteration methods for linear complementarity problems[J]. Applied Mathematics Letters, 2013, 26: 638–642. DOI:10.1016/j.aml.2013.01.001
[14] ZHANG L T, HUANG T Z, CHENG S H, et al. The weaker convergence of non-stationary matrix multisplitting methods for almost linear systems[J]. Taiwanese Journal of Mathematics, 2011, 15(4): 1423–1436. DOI:10.11650/twjm/1500406354
[15] ZHANG L T, LI J T. The weaker convergence of modulus-based synchronous multisplitting multi-parameters methods for linear complementarity problems[J]. Computers and Mathematics with Application, 2014, 67(10): 1954–1959. DOI:10.1016/j.camwa.2014.04.018
[16] ZHANG L T, HUANG T Z, GU T X. Global relaxed non-stationary multisplitting multi-parameters methods[J]. International Journal of Computer Mathematics, 2008, 85(2): 211–224. DOI:10.1080/00207160701405451
[17] ZHANG L T, HUANG T Z, GU T X, et al. Convergence of relaxed multisplitting USAOR method for an H-matrix[J]. Applied Mathematics and Computation, 2008, 202(1): 121–132. DOI:10.1016/j.amc.2008.01.034
[18] ZHANG L T, HUANG T Z, GU T X, et al. Convergent improvement of SSOR multisplitting method[J]. Journal of Computational and Applied Mathematics, 2009, 225(2): 393–397. DOI:10.1016/j.cam.2008.07.051
[19] ZHANG L T, HUANG T Z, CHENG S H, et al. A note on parallel multisplitting TOR method for an H-matrix[J]. International Journal of Computer Mathematics, 2011, 88(3): 501–507. DOI:10.1080/00207160903501917
[20] ZHANG L T, ZUO X Y, GU T X, et al. Improved convergence theorems of multisplitting methods for the linear complementarity problem[J]. Applied Mathematics and Computation, 2014, 243: 982–987. DOI:10.1016/j.amc.2014.06.038
[21] ZHANG L T, LI J W, CHEN S H, et al. Convergence of relaxed matrix multisplitting chaotic methods for H-matrices[J]. Journal of Applied Mathematics, 2014, Article ID 594185.
[22] ZHANG L T, ZHOU Y M, GU T X, et al. Convergence improvement of relaxed multisplitting USAOR methods for H-matrix linear systems[J]. Applied Mathematics and Computation, 2014, 247(C): 225–232.
[23] LI W. A general modulus-based matrix splitting method for linear complementarity problems of H-matrices[J]. Applied Mathematics Letters, 2013, 26(12): 1159–1164.
[24] BAI Z Z. Parallel matrix multisplitting block relaxation iteration methods[J]. Mathematica Numerica Sinica, 1995, 17(3): 238–252.
[25] LI Y, WANG X, SUN C C. Convergence analysis of linear complementarity problems based on synchronous block multisplitting iteration methods[J]. Journal of Nanchang University (Natural Science), 2013, 37(4): 307–312.
[26] BAI Z Z, ZHANG L L. Modulus-based synchronous multisplitting iteration methods for linear complementarity problems[J]. Numerical Linear Algebra with Applications, 2013, 62(1): 59–77.
[27] SONG Y Z. Convergence of block AOR iterative methods[J]. Mathematica Applicata, 1993(1): 39–45.
[28] FROMMER A, MAYER G. Convergence of relaxed parallel multisplitting methods[J]. Linear Algebra and Its Applications, 1989, 119: 141–152. DOI:10.1016/0024-3795(89)90074-8
[29] YOUNG D M. Iterative Solution of Large Linear Systems[M]. New York: Academic Press, 1972.
[30] BAI Z Z, ZHANG L L. Modulus-based multigrid methods for linear complementarity problems[J]. Numerical Linear Algebra with Applications, 2017, 24(6).
[31] ZHANG L T, ZHANG Y X, GU T X, et al. New convergence of modulus-based synchronous block multisplitting multi-parameters methods for linear complementarity problems[J]. Computational and Applied Mathematics, 2017, 36(1): 481–492. DOI:10.1007/s40314-015-0238-z