Applied Mathematics and Computation 367 (2020) 124788
A new error bound for linear complementarity problems with weakly chained diagonally dominant B-matrices

Ruijuan Zhao, Bing Zheng∗, Maolin Liang

School of Mathematics and Statistics, Lanzhou University, Lanzhou 730000, PR China
Article history: Received 20 March 2018; Revised 15 September 2019; Accepted 23 September 2019

MSC: 15A48; 65G50; 90C31

Keywords: Linear complementarity problems; Weakly chained diagonally dominant B-matrices; Condition constant; Error bound

Abstract: Recently, some error bounds for the linear complementarity problem with a weakly chained diagonally dominant B-matrix have been given under certain conditions. In this paper we provide a new and even optimal error bound for such linear complementarity problems under conditions weaker than those used previously, greatly improving the existing bounds. Numerical examples illustrate the sharpness and optimality of the new bound.

© 2019 Elsevier Inc. All rights reserved.
1. Introduction

One of the fundamental problems in optimization and mathematical programming [1] is the linear complementarity problem (LCP), which arises in many areas of scientific computing, economics and engineering, such as quadratic programming, nonlinear obstacle problems, invariant capital stock, the Nash equilibrium point of a bimatrix game, optimal stopping, the free boundary problem for journal bearings, manufacturing systems, traffic equilibria, and so on; for more details, see [1,2] and the references therein.

The global error bound on the distance between an arbitrary point in R^n and the solution set of the LCP plays an important role in the convergence analysis of algorithms, including the setting of termination criteria and the sensitivity analysis of the LCP when its input data is subject to a small perturbation; see, e.g., [3–6]. Hence global error bounds for the LCP have been studied extensively; see [7–9]. When the matrix involved in the LCP is a P-matrix, Chen and Xiang [9] presented an elegant global upper error bound for the LCP. Their upper bound is a product of a so-called "condition constant" and the natural residual of the LCP (see (3.1)), improving the result given in [8]. However, this condition constant is not easy to compute for large-scale LCPs because it requires the inverse of a large nonsingular matrix. To avoid this high-cost computation of the inverse, some easily computable bounds

[Footnote] The work is supported by the National Natural Science Foundation of China (Grant No. 11571004) and the Science Foundation of the Education Department of Gansu Province (Grant No. 2017A078). ∗ Corresponding author. E-mail address: [email protected] (B. Zheng).
https://doi.org/10.1016/j.amc.2019.124788
0096-3003/© 2019 Elsevier Inc. All rights reserved.
for this condition constant were given for different subclasses of P-matrices, such as B-matrices [10–12], DB-matrices [13], SB-matrices [14,15], B-Nekrasov matrices [16], and so on.

Recently, Li and Li [17] introduced the class of weakly chained diagonally dominant (w.c.d.d.) B-matrices, which is also a subclass of P-matrices, and derived a computable upper bound on the condition constant for this class under certain conditions (see Theorem 2 of [17]). Subsequently, two tighter bounds for the condition constant were established by Wang [18] and by Sun and Wang [19] under different conditions. As a continuation of this line of research, in this paper we establish a new, tighter bound on the condition constant for w.c.d.d. B-matrices under an assumption weaker than that in [17]. The new bound is attained by some concrete examples, which means that, combined with the natural residual of the LCP, it gives the best possible error bound for the linear complementarity problem with a w.c.d.d. B-matrix. Moreover, we present an example in which our conditions hold while those of [18] and [19] fail (see Example 4.1); and even when the assumptions of all three results are satisfied, two further examples (Examples 4.2 and 4.3) demonstrate that our bound can be numerically much better than the ones in [18] and [19], although we cannot compare them theoretically.

This paper is organized as follows. In Section 2, we recall some basic definitions and give some lemmas that will be used in the subsequent analysis. Section 3 proposes a new upper bound for the condition constant under a condition weaker than that in [17], proves that the new bound is tighter than the one given in [17], and shows that it can be attained. Section 4 presents numerical examples that illustrate the sharpness of our bound in contrast with those proposed in [17–19]. Finally, Section 5 gives some concluding remarks.

2. Preliminaries

Consider the linear complementarity problem of finding a vector x ∈ R^n such that
x ≥ 0,  Mx + q ≥ 0,  (Mx + q)^T x = 0,   (2.1)

where M ∈ R^{n×n} and q ∈ R^n; we denote this problem by LCP(M, q). A matrix M = (m_ij) ∈ R^{n×n} is called

(1) a P-matrix if max_{1≤i≤n} x_i (Mx)_i > 0 for all x ≠ 0;
(2) a Z-matrix if m_ij ≤ 0 for any i ≠ j;
(3) an M-matrix if M^{−1} is nonnegative and M is a Z-matrix;
(4) a weakly chained diagonally dominant (w.c.d.d.) matrix if M is diagonally dominant, i.e., |m_ii| ≥ ℓ_i(M) for all i ∈ N, the set J(M) = {j ∈ N : |m_jj| > ℓ_j(M)} is nonempty, and for each i ∉ J(M) there exists a sequence of nonzero entries of M of the form m_{i i_1}, m_{i_1 i_2}, ..., m_{i_k j} with j ∈ J(M), where ℓ_i(M) = Σ_{k=1, k≠i}^n |m_ik| and N := {1, 2, ..., n} (see [20]);
(5) a weakly chained diagonally dominant (w.c.d.d.) B-matrix if it can be written as M = B+ + C with B+ a w.c.d.d. matrix whose diagonal entries are all positive, where
B+ = (b_ij) with b_ij = m_ij − r_i^+,   C = (c_ij) with c_ij = r_i^+,   (2.2)

and r_i^+ = max{0, m_ij : j ≠ i}; see [17].
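As an illustration, the splitting M = B+ + C can be computed mechanically. The following sketch (the function name `b_plus_decomposition` is ours, not from the paper) assumes only NumPy:

```python
import numpy as np

def b_plus_decomposition(M):
    """Split M = B+ + C as in (2.2): r_i^+ = max{0, m_ij : j != i},
    C has constant rows r_i^+, and B+ = M - C."""
    n = M.shape[0]
    r_plus = np.array([max(0.0, max(M[i, j] for j in range(n) if j != i))
                       for i in range(n)])
    C = np.outer(r_plus, np.ones(n))
    return M - C, C
```

By construction every off-diagonal entry of B+ is nonpositive, so B+ is a Z-matrix whenever its diagonal is positive.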
For convenience, we will use the following notation throughout this paper:

r_i(A) = Σ_{j=1, j≠i}^n |a_ij|,  l_i(A) = Σ_{j=1}^{i−1} |a_ij|,  u_i(A) = Σ_{j=i+1}^n |a_ij|,  i ∈ N;

s_n(A) = Σ_{j=1}^{n−1} |a_nj|,  s_i(A) = Σ_{j=1}^{i−1} |a_ij| + Σ_{j=i+1}^n |a_ij| s_j(A)/|a_jj|,  i = n−1, ..., 1.
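These quantities are straightforward to compute; the following sketch (helper names are ours, assuming NumPy) evaluates r_i, l_i, u_i and the backward recursion for s_i:

```python
import numpy as np

def row_parts(A):
    """Off-diagonal absolute row sums: whole row r_i, strictly lower
    part l_i, strictly upper part u_i."""
    B = np.abs(A)
    r = B.sum(axis=1) - np.diag(B)
    l = np.tril(B, -1).sum(axis=1)
    u = np.triu(B, 1).sum(axis=1)
    return r, l, u

def s_vector(A):
    """s_i(A): s_n is the off-diagonal row sum of the last row; for
    i < n the upper part is weighted by s_j(A)/|a_jj|."""
    n = A.shape[0]
    B = np.abs(A)
    d = np.abs(np.diag(A))
    s = np.zeros(n)
    s[n - 1] = B[n - 1, :n - 1].sum()
    for i in range(n - 2, -1, -1):       # backward recursion i = n-1, ..., 1
        s[i] = B[i, :i].sum() + (B[i, i + 1:] * s[i + 1:] / d[i + 1:]).sum()
    return s
```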
Next, we state or prove some technical lemmas which will be used.

Lemma 2.1 ([11], Lemma 3). Let γ > 0 and η ≥ 0. Then for any x ∈ [0, 1],

1/(1 − x + γx) ≤ 1/min{γ, 1}   and   ηx/(1 − x + γx) ≤ η/γ.

Lemma 2.2 ([17], Lemma 2). Suppose P = (p_1, ..., p_n)^T e, where e = (1, ..., 1) and p_1, ..., p_n ≥ 0. Then

‖(I + P)^{−1}‖_∞ ≤ n − 1.
Lemma 2.3 ([21], Theorem 2.4). Let A = (a_ij) ∈ R^{n×n} be a w.c.d.d. M-matrix with a_ii − Σ_{k=i+1}^n |a_ik| s_k(A)/a_kk > 0 for all i ∈ N. Then

‖A^{−1}‖_∞ ≤ Σ_{i=1}^n Π_{j=1}^i h_j(A)/(a_jj + l_j(A) − s_j(A)),

where h_1(A) = 1 and h_j(A) = a_{j−1,j−1} + r_{j−1}(A) − s_{j−1}(A) for j = 2, ..., n.

Using Lemma 2.1 and the definition of s_i(A), we can get the following result.

Lemma 2.4. Let A = (a_ij) ∈ R^{n×n} have positive diagonal entries and let D = diag{d_1, ..., d_n} with d_i ∈ [0, 1] for all i ∈ N. Then for any i ∈ N,

s_i(I − D + DA)/(1 − d_i + d_i a_ii) ≤ s_i(A)/a_ii.   (2.3)
Proof. Since d_i ∈ [0, 1] and a_ii > 0 for any i ∈ N, it is easy to see that |1 − d_i + d_i a_ii| = 1 − d_i + d_i a_ii > 0 for all i ∈ N. Note also that the off-diagonal entries of I − D + DA are d_i a_ij, so their absolute values are d_i |a_ij|. We prove (2.3) by induction on the index n − k for k = 0, 1, ..., n − 1.

When k = 0, using Lemma 2.1 we have

s_n(I − D + DA)/(1 − d_n + d_n a_nn) = d_n Σ_{j=1}^{n−1} |a_nj| / (1 − d_n + d_n a_nn) ≤ Σ_{j=1}^{n−1} |a_nj| / a_nn = s_n(A)/a_nn.

Assume that (2.3) is true for l < k, i.e.,

s_{n−l}(I − D + DA)/(1 − d_{n−l} + d_{n−l} a_{n−l,n−l}) ≤ s_{n−l}(A)/a_{n−l,n−l}

holds for l = 1, 2, ..., k − 1. Then for the case of l = k, by the definition of s_i and the induction hypothesis,

s_{n−k}(I − D + DA)/(1 − d_{n−k} + d_{n−k} a_{n−k,n−k})
 = [Σ_{j=1}^{n−k−1} d_{n−k}|a_{n−k,j}| + Σ_{j=n−k+1}^{n} d_{n−k}|a_{n−k,j}| s_j(I − D + DA)/(1 − d_j + d_j a_jj)] / (1 − d_{n−k} + d_{n−k} a_{n−k,n−k})
 ≤ [Σ_{j=1}^{n−k−1} d_{n−k}|a_{n−k,j}| + Σ_{j=n−k+1}^{n} d_{n−k}|a_{n−k,j}| s_j(A)/a_jj] / (1 − d_{n−k} + d_{n−k} a_{n−k,n−k})
 = d_{n−k} (Σ_{j=1}^{n−k−1} |a_{n−k,j}| + Σ_{j=n−k+1}^{n} |a_{n−k,j}| s_j(A)/a_jj) / (1 − d_{n−k} + d_{n−k} a_{n−k,n−k})
 ≤ (Σ_{j=1}^{n−k−1} |a_{n−k,j}| + Σ_{j=n−k+1}^{n} |a_{n−k,j}| s_j(A)/a_jj) / a_{n−k,n−k}   (by Lemma 2.1)
 = s_{n−k}(A)/a_{n−k,n−k}.

The proof is completed. □
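As a quick numerical sanity check of (2.3) (a sketch under our own naming, not part of the paper), one can compare both sides for a small matrix with positive diagonal and an arbitrary admissible D:

```python
import numpy as np

def s_vector(A):
    # the recursion s_i(A) from Section 2
    n = A.shape[0]
    B = np.abs(A)
    d = np.abs(np.diag(A))
    s = np.zeros(n)
    s[n - 1] = B[n - 1, :n - 1].sum()
    for i in range(n - 2, -1, -1):
        s[i] = B[i, :i].sum() + (B[i, i + 1:] * s[i + 1:] / d[i + 1:]).sum()
    return s

A = np.array([[2.0, -1.0], [-1.0, 2.0]])   # positive diagonal
d = np.array([0.5, 0.5])                   # entries of D in [0, 1]
AD = np.diag(1 - d) + np.diag(d) @ A       # I - D + DA
lhs = s_vector(AD) / (1 - d + d * np.diag(A))
rhs = s_vector(A) / np.diag(A)
assert np.all(lhs <= rhs + 1e-12)          # inequality (2.3), entrywise
```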
3. Main results

In this section, we present a new error bound for linear complementarity problems with w.c.d.d. B-matrices. In Theorem 2.3 of [9], Chen and Xiang obtained the following error bound for the LCP(M, q) when M is a P-matrix:

‖x − x∗‖_∞ ≤ max_{d∈[0,1]^n} ‖(I − D + DM)^{−1}‖_∞ ‖r(x)‖_∞,   (3.1)

where D = diag(d_1, d_2, ..., d_n) with 0 ≤ d_i ≤ 1, d = (d_1, d_2, ..., d_n)^T, I is the n × n identity matrix, and r(x) = min{x, Mx + q} is the componentwise minimum of the two vectors, usually called the natural residual of the LCP(M, q). The quantity max_{d∈[0,1]^n} ‖(I − D + DM)^{−1}‖_∞ plays an important role in the error estimate and is therefore called the "condition constant" of the LCP(M, q). A w.c.d.d. B-matrix is a P-matrix, so the bound (3.1) holds when M is a w.c.d.d. B-matrix. Note, however, that computing the inverse of I − D + DM is a difficult task in practice. Recently, Li and
Li [17] provided the following computable upper bound on the condition constant max_{d∈[0,1]^n} ‖(I − D + DM)^{−1}‖_∞ for M a w.c.d.d. B-matrix:

max_{d∈[0,1]^n} ‖(I − D + DM)^{−1}‖_∞ ≤ Σ_{i=1}^n [(n − 1)/min{β̃_i, 1}] Π_{j=1}^{i−1} b_jj/β̃_j,   (3.2)

under the condition that β̃_i = b_ii − Σ_{j=i+1}^n |b_ij| > 0 for all i ∈ N, where B+ = (b_ij) is defined by (2.2) and Π_{j=1}^{i−1} b_jj/β̃_j = 1 if i = 1.
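For reference, the bound (3.2) is cheap to evaluate; the following sketch (function name ours, assuming NumPy) computes it whenever the condition β̃_i > 0 holds:

```python
import numpy as np

def li_li_bound(B_plus):
    """Bound (3.2) of Li and Li [17]; requires
    beta_tilde_i = b_ii - sum_{j>i} |b_ij| > 0 for all i."""
    n = B_plus.shape[0]
    b = np.diag(B_plus)
    beta = b - np.array([np.abs(B_plus[i, i + 1:]).sum() for i in range(n)])
    assert np.all(beta > 0), "condition beta_tilde_i > 0 violated"
    total, prod = 0.0, 1.0
    for i in range(n):
        total += (n - 1) / min(beta[i], 1.0) * prod
        prod *= b[i] / beta[i]   # running product of b_jj / beta_tilde_j
    return total
```

For instance, for the 3 × 3 tridiagonal matrix with diagonal 2, subdiagonal −0.3 and superdiagonal −1.7 (the data of Example 4.3 below), one gets β̃ = (0.3, 0.3, 2) and the bound 140.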
The following theorem is our main result; it gives a new upper bound on the condition constant max_{d∈[0,1]^n} ‖(I − D + DM)^{−1}‖_∞ when M is a w.c.d.d. B-matrix.

Theorem 3.1. Let M = (m_ij) ∈ R^{n×n} be a w.c.d.d. B-matrix and let B+ = (b_ij) ∈ R^{n×n} be the matrix of (2.2). If for each i ∈ N

β̂_i = b_ii − Σ_{j=i+1}^n |b_ij| s_j(B+)/b_jj > 0,   (3.3)

then

max_{d∈[0,1]^n} ‖(I − D + DM)^{−1}‖_∞ ≤ Σ_{i=1}^n [(n − 1)/min{β̂_i, 1}] Π_{j=1}^{i−1} (1 + u_j(B+)/β̂_j),   (3.4)

where Π_{j=1}^{i−1} (1 + u_j(B+)/β̂_j) = 1 if i = 1.
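The bound (3.4) is equally easy to evaluate. A minimal sketch (function name ours, assuming NumPy):

```python
import numpy as np

def thm31_bound(B_plus):
    """Bound (3.4) of Theorem 3.1, under condition (3.3)."""
    n = B_plus.shape[0]
    A = np.abs(B_plus)
    b = np.diag(B_plus)
    s = np.zeros(n)                      # recursion s_i(B+) of Section 2
    s[n - 1] = A[n - 1, :n - 1].sum()
    for i in range(n - 2, -1, -1):
        s[i] = A[i, :i].sum() + (A[i, i + 1:] * s[i + 1:] / b[i + 1:]).sum()
    u = np.array([A[i, i + 1:].sum() for i in range(n)])
    beta = b - np.array([(A[i, i + 1:] * s[i + 1:] / b[i + 1:]).sum()
                         for i in range(n)])
    assert np.all(beta > 0), "condition (3.3) violated"
    total, prod = 0.0, 1.0
    for i in range(n):
        total += (n - 1) / min(beta[i], 1.0) * prod
        prod *= 1 + u[i] / beta[i]       # product of (1 + u_j/beta_hat_j)
    return total
```

For the 2 × 2 matrix of Example 3.1 below (there B+ = M), β̂_1 = β̂_2 = 1 and the bound evaluates to exactly 2.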
Proof. Since M is a w.c.d.d. B-matrix, it is straightforward to check that M_D = I − D + DM is also a w.c.d.d. B-matrix for each diagonal matrix D = diag{d_1, ..., d_n} with 0 ≤ d_i ≤ 1 for all i ∈ N. Taking into account that M = B+ + C, where B+ and C are given by (2.2), we have

M_D = I − D + DM = I − D + D(B+ + C) = I − D + DB+ + DC = B+_D + C_D,

where B+_D = I − D + DB+ and C_D = DC. Note that B+ is a w.c.d.d. Z-matrix with positive diagonal entries, and it is easy to see that so is B+_D. By Corollary 4 of [20], B+_D is an M-matrix and thus (B+_D)^{−1} is nonnegative. Hence, we have

(M_D)^{−1} = (B+_D (I + (B+_D)^{−1} C_D))^{−1} = (I + (B+_D)^{−1} C_D)^{−1} (B+_D)^{−1},

which implies that

‖(M_D)^{−1}‖_∞ ≤ ‖(I + (B+_D)^{−1} C_D)^{−1}‖_∞ ‖(B+_D)^{−1}‖_∞.   (3.5)

Note that C_D = DC = (d_1 r_1^+, ..., d_n r_n^+)^T e is nonnegative, where e = (1, ..., 1). Therefore, (B+_D)^{−1} C_D can be written as (p_1, ..., p_n)^T e with p_i ≥ 0 for all i ∈ N. By Lemma 2.2, we get

‖(I + (B+_D)^{−1} C_D)^{−1}‖_∞ ≤ n − 1.   (3.6)

We next give an upper bound for ‖(B+_D)^{−1}‖_∞. Since B+_D is a w.c.d.d. M-matrix, by (3.3) and Lemma 2.4 we easily get, for all i ∈ N,

1 − d_i + b_ii d_i − Σ_{k=i+1}^n d_i |b_ik| s_k(I − D + DB+)/(1 − d_k + d_k b_kk) > 0

with 0 ≤ d_i ≤ 1. Thus, using Lemma 2.3 we have
‖(B+_D)^{−1}‖_∞ = ‖(I − D + DB+)^{−1}‖_∞

 ≤ Σ_{i=1}^n Π_{j=1}^i h_j(I − D + DB+)/(1 − d_j + d_j b_jj + l_j(I − D + DB+) − s_j(I − D + DB+))

 = 1/(1 − d_1 + d_1 b_11 + l_1(I − D + DB+) − s_1(I − D + DB+))
  + Σ_{i=2}^n [1/(1 − d_i + d_i b_ii + l_i(I − D + DB+) − s_i(I − D + DB+))]
   × Π_{j=1}^{i−1} h_{j+1}(I − D + DB+)/(1 − d_j + d_j b_jj + l_j(I − D + DB+) − s_j(I − D + DB+))

 = 1/(1 − d_1 + d_1 b_11 − Σ_{k=2}^n d_1 |b_1k| s_k(I − D + DB+)/(1 − d_k + d_k b_kk))
  + Σ_{i=2}^n [1/(1 − d_i + b_ii d_i − Σ_{k=i+1}^n d_i |b_ik| s_k(I − D + DB+)/(1 − d_k + d_k b_kk))]
   × Π_{j=1}^{i−1} [1 + Σ_{k=j+1}^n d_j |b_jk| / (1 − d_j + d_j b_jj − Σ_{k=j+1}^n d_j |b_jk| s_k(I − D + DB+)/(1 − d_k + d_k b_kk))]

 ≤ 1/(1 − d_1 + d_1 b_11 − Σ_{k=2}^n d_1 |b_1k| s_k(B+)/b_kk)
  + Σ_{i=2}^n [1/(1 − d_i + b_ii d_i − Σ_{k=i+1}^n d_i |b_ik| s_k(B+)/b_kk)]
   × Π_{j=1}^{i−1} [1 + Σ_{k=j+1}^n d_j |b_jk| / (1 − d_j + d_j b_jj − Σ_{k=j+1}^n d_j |b_jk| s_k(B+)/b_kk)]   (by Lemma 2.4)

 ≤ 1/min{β̂_1, 1} + Σ_{i=2}^n [1/min{β̂_i, 1}] Π_{j=1}^{i−1} [1 + Σ_{k=j+1}^n |b_jk| / (b_jj − Σ_{k=j+1}^n |b_jk| s_k(B+)/b_kk)]   (by Lemma 2.1)

 = Σ_{i=1}^n [1/min{β̂_i, 1}] Π_{j=1}^{i−1} (1 + u_j(B+)/β̂_j),

which together with (3.5) and (3.6) yields

max_{d∈[0,1]^n} ‖(I − D + DM)^{−1}‖_∞ ≤ Σ_{i=1}^n [(n − 1)/min{β̂_i, 1}] Π_{j=1}^{i−1} (1 + u_j(B+)/β̂_j).

The proof is completed. □
We make some remarks on Theorem 3.1.

(i) The condition β̂_i > 0 in Theorem 3.1 is weaker than β̃_i > 0 in (3.2). In fact, B+ in (2.2) is a w.c.d.d. matrix with positive diagonal entries because M is a w.c.d.d. B-matrix. Hence, as shown in Remark 2.1 of [21], s_i(B+)/b_ii ≤ 1 for all i ∈ N. Thus we obtain

Σ_{j=i+1}^n |b_ij| s_j(B+)/b_jj ≤ Σ_{j=i+1}^n |b_ij| ≤ b_ii,  i ∈ N,   (3.7)

which means that, for all i ∈ N, if β̃_i > 0 then β̂_i > 0, but not vice versa.

(ii) The bound of Theorem 3.1 also holds when M is a B-matrix, because a B-matrix is a w.c.d.d. B-matrix [17], and b_ii > Σ_{j=i+1}^n |b_ij| s_j(B+)/b_jj holds for all i ∈ N; the latter follows easily from the fact that M is a B-matrix if and only if B+ given in (2.2) is a strictly diagonally dominant matrix with positive diagonal entries [10].

Next, we give an example which shows that the bound (3.4) in Theorem 3.1 is attainable; in this sense, we say that the bound (3.4) is optimal, or best possible.

Example 3.1. Consider the matrix
M =
( 1   0
 −1   1 ),

which is a w.c.d.d. B-matrix (here B+ = M) and satisfies all the conditions in Theorem 3.1. By a simple computation, we have

max_{d∈[0,1]^n} ‖(I − D + DM)^{−1}‖_∞ = max_{d∈[0,1]^n} {1 + d_2} = 2.

On the other hand, since β̂_1 = β̂_2 = 1 and u_1(B+) = 0, we have

Σ_{i=1}^2 [1/min{β̂_i, 1}] Π_{j=1}^{i−1} (1 + u_j(B+)/β̂_j) = 2,

which means

max_{d∈[0,1]^n} ‖(I − D + DM)^{−1}‖_∞ = Σ_{i=1}^2 [1/min{β̂_i, 1}] Π_{j=1}^{i−1} (1 + u_j(B+)/β̂_j).
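A brute-force grid search over d (a sketch of our own, assuming NumPy) confirms the value 2 numerically:

```python
import numpy as np

M = np.array([[1.0, 0.0], [-1.0, 1.0]])
worst = 0.0
for d1 in np.linspace(0.0, 1.0, 21):
    for d2 in np.linspace(0.0, 1.0, 21):
        D = np.diag([d1, d2])
        MD = np.eye(2) - D + D @ M
        norm = np.abs(np.linalg.inv(MD)).sum(axis=1).max()  # infinity norm
        worst = max(worst, norm)
# worst == 2.0, attained at d2 = 1, where the inverse is [[1, 0], [1, 1]]
```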
Hence, the bound (3.4) in Theorem 3.1 can be attained and thus, combined with the natural residual of the LCP, it provides the best error bound for the linear complementarity problem with a w.c.d.d. B-matrix in the sense of (3.1).

The following theorem shows that the bound (3.4) in Theorem 3.1 is tighter than the bound (3.2).

Theorem 3.2. Let M = (m_ij) ∈ R^{n×n} be a w.c.d.d. B-matrix and let B+ = (b_ij) ∈ R^{n×n} be the matrix of (2.2). Then

Σ_{i=1}^n [(n − 1)/min{β̂_i, 1}] Π_{j=1}^{i−1} (1 + u_j(B+)/β̂_j) ≤ Σ_{i=1}^n [(n − 1)/min{β̃_i, 1}] Π_{j=1}^{i−1} b_jj/β̃_j,

where β̃_i = b_ii − Σ_{k=i+1}^n |b_ik| > 0 and β̂_i is defined by (3.3).

Proof. According to (3.7), we easily get β̃_i ≤ β̂_i for any i ∈ N. Thus, for all i ∈ N,

1/min{β̂_i, 1} ≤ 1/min{β̃_i, 1},

and

1 + u_i(B+)/β̂_i ≤ 1 + u_i(B+)/β̃_i = (b_ii − Σ_{k=i+1}^n |b_ik| + u_i(B+))/(b_ii − Σ_{k=i+1}^n |b_ik|) = b_ii/β̃_i,

where the last equality uses u_i(B+) = Σ_{k=i+1}^n |b_ik|. Hence, we obtain

Σ_{i=1}^n [(n − 1)/min{β̂_i, 1}] Π_{j=1}^{i−1} (1 + u_j(B+)/β̂_j) ≤ Σ_{i=1}^n [(n − 1)/min{β̃_i, 1}] Π_{j=1}^{i−1} b_jj/β̃_j.

The proof is completed. □
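Theorem 3.2 can also be observed numerically. The sketch below (helper name ours, assuming NumPy) evaluates both bounds for a matrix on which both conditions hold, namely the tridiagonal w.c.d.d. B-matrix with the data of Example 4.3 below:

```python
import numpy as np

def both_bounds(Bp):
    """Return (bound (3.4), bound (3.2)) for a B+ matrix satisfying
    both beta_hat_i > 0 and beta_tilde_i > 0."""
    n = Bp.shape[0]
    A = np.abs(Bp)
    b = np.diag(Bp)
    s = np.zeros(n)
    s[n - 1] = A[n - 1, :n - 1].sum()
    for i in range(n - 2, -1, -1):
        s[i] = A[i, :i].sum() + (A[i, i + 1:] * s[i + 1:] / b[i + 1:]).sum()
    u = np.array([A[i, i + 1:].sum() for i in range(n)])
    beta_hat = b - np.array([(A[i, i + 1:] * s[i + 1:] / b[i + 1:]).sum()
                             for i in range(n)])
    beta_tilde = b - u
    new, old, pn, po = 0.0, 0.0, 1.0, 1.0
    for i in range(n):
        new += (n - 1) / min(beta_hat[i], 1.0) * pn
        old += (n - 1) / min(beta_tilde[i], 1.0) * po
        pn *= 1 + u[i] / beta_hat[i]
        po *= b[i] / beta_tilde[i]
    return new, old

n = 5
Bp = 2 * np.eye(n) - 0.3 * np.eye(n, k=-1) - 1.7 * np.eye(n, k=1)
new, old = both_bounds(Bp)
assert new <= old   # the inequality of Theorem 3.2
```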
In [22], Chen and Xiang introduced, for M a P-matrix, the constant

β_p(M) = max_{d∈[0,1]^n} ‖(I − D + DM)^{−1} D‖_p,

where D = diag(d_i) with 0 ≤ d_i ≤ 1 for all i ∈ N and ‖·‖_p is the matrix norm induced by the vector p-norm for p ≥ 1. This constant was used to measure the sensitivity of the solution of the P-matrix linear complementarity problem. Similarly to the proof of Theorem 2.4 in [10], Theorem 3.1 yields the following upper bound on the constant β_∞(M) for M a w.c.d.d. B-matrix, which is sharper than that in Theorem 3 of [17].

Theorem 3.3. Let M = (m_ij) ∈ R^{n×n} be a w.c.d.d. B-matrix and let B+ = (b_ij) ∈ R^{n×n} be the matrix of (2.2). If β̂_i > 0 for any i ∈ N, then

β_∞(M) ≤ Σ_{i=1}^n [(n − 1)/min{β̂_i, 1}] Π_{j=1}^{i−1} (1 + u_j(B+)/β̂_j),

where Π_{j=1}^{i−1} (1 + u_j(B+)/β̂_j) = 1 if i = 1.
We remark that the bound in Theorem 3.3 also holds when M is a B-matrix.

4. Numerical examples

In this section, we present some numerical examples to show the sharpness of our proposed bound in contrast with the known ones in [17–19]. We first give an example in which max_{d∈[0,1]^n} ‖(I − D + DM)^{−1}‖_∞ can be estimated only by the bound of Theorem 3.1, and not by the ones in [17–19].

Example 4.1. Consider the matrix
M =
( 4   1   1   2
  1   5  −2   0
  1   2   6   1
  2   2   3   5 ).
Then M = B+ + C, where

B+ =
( 2  −1  −1   0
  0   4  −3  −1
 −1   0   4  −1
 −1  −1   0   2 ).

It is easy to verify that M is a w.c.d.d. B-matrix and that β̂_1 = 7/8, β̂_2 = 3/2, β̂_3 = 3, β̂_4 = 2, so M satisfies the conditions of Theorem 3.1. However, simple computations show that β̃_1 = 0, β̃_2 = 0, β̃_3 = 3 and β̃_4 = 2, which means that the condition β̃_i > 0, i ∈ N, for the upper bound (3.2) is not satisfied. Similarly, one can verify that the conditions in Theorem 1 of [18] and Theorem 2.3 of [19] are also not satisfied. Therefore, only Theorem 3.1 can be used to estimate max_{d∈[0,1]^n} ‖(I − D + DM)^{−1}‖_∞, and it gives

max_{d∈[0,1]^n} ‖(I − D + DM)^{−1}‖_∞ ≤ 97.1905.
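The stated values of β̂_i and β̃_i can be reproduced directly (a sketch under our own naming, assuming NumPy):

```python
import numpy as np

M = np.array([[4.0, 1, 1, 2], [1, 5, -2, 0], [1, 2, 6, 1], [2, 2, 3, 5]])
n = 4
r_plus = np.array([max(0.0, *[M[i, j] for j in range(n) if j != i])
                   for i in range(n)])
Bp = M - np.outer(r_plus, np.ones(n))      # the matrix B+ of (2.2)
A = np.abs(Bp)
b = np.diag(Bp)
s = np.zeros(n)
s[n - 1] = A[n - 1, :n - 1].sum()
for i in range(n - 2, -1, -1):
    s[i] = A[i, :i].sum() + (A[i, i + 1:] * s[i + 1:] / b[i + 1:]).sum()
beta_hat = b - np.array([(A[i, i + 1:] * s[i + 1:] / b[i + 1:]).sum()
                         for i in range(n)])
beta_tilde = b - np.array([A[i, i + 1:].sum() for i in range(n)])
# beta_hat   = (7/8, 3/2, 3, 2): condition (3.3) holds
# beta_tilde = (0, 0, 3, 2): the condition for (3.2) fails
```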
The following two examples demonstrate that the bound (3.4) in Theorem 3.1 can be much tighter than the ones given in [17–19].

Example 4.2. Consider the matrix
M =
( 1.2  −0.1  −0.6  −0.4
  0     3     1     0
  0.1   0     1.1  −0.8
  0.8   0.8   0.4   1.25 ).
Then M = B+ + C, where

B+ =
( 1.2  −0.1  −0.6  −0.4
 −1     2     0    −1
  0    −0.1   1    −0.9
  0     0    −0.4   0.45 ).
It is not difficult to see that M is a w.c.d.d. B-matrix and satisfies all the conditions in Theorem 2 of [17], Theorem 1 of [18], Theorem 2.3 of [19] and Theorem 3.1. Hence, we have

max_{d∈[0,1]^n} ‖(I − D + DM)^{−1}‖_∞ ≤ 2386.0000   (Theorem 2 of [17]),
max_{d∈[0,1]^n} ‖(I − D + DM)^{−1}‖_∞ ≤ 1186.0000   (Theorem 1 of [18]),
max_{d∈[0,1]^n} ‖(I − D + DM)^{−1}‖_∞ ≤ 1186.0000   (Theorem 2.3 of [19]),
max_{d∈[0,1]^n} ‖(I − D + DM)^{−1}‖_∞ ≤ 645.3730   (Theorem 3.1).
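The Theorem 3.1 value can be reproduced from B+ alone (our own sketch, assuming NumPy):

```python
import numpy as np

Bp = np.array([[ 1.2, -0.1, -0.6, -0.4 ],
               [-1.0,  2.0,  0.0, -1.0 ],
               [ 0.0, -0.1,  1.0, -0.9 ],
               [ 0.0,  0.0, -0.4,  0.45]])
n = 4
A = np.abs(Bp)
b = np.diag(Bp)
s = np.zeros(n)
s[n - 1] = A[n - 1, :n - 1].sum()
for i in range(n - 2, -1, -1):
    s[i] = A[i, :i].sum() + (A[i, i + 1:] * s[i + 1:] / b[i + 1:]).sum()
u = np.array([A[i, i + 1:].sum() for i in range(n)])
beta = b - np.array([(A[i, i + 1:] * s[i + 1:] / b[i + 1:]).sum()
                     for i in range(n)])
bound, prod = 0.0, 1.0
for i in range(n):
    bound += (n - 1) / min(beta[i], 1.0) * prod
    prod *= 1 + u[i] / beta[i]
# bound is approximately 645.373, the Theorem 3.1 estimate of Example 4.2
```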
Example 4.3 (see [9]). Let M be the n × n tridiagonal matrix with diagonal entries b + α sin(1/n), subdiagonal entries a and superdiagonal entries c:

M =
( b + α sin(1/n)   c                0               · · ·  0
  a                b + α sin(1/n)   c               · · ·  0
  ·                ·                ·               · · ·  ·
  0                · · ·            a               b + α sin(1/n)   c
  0                · · ·            0               a                b + α sin(1/n) ).

If we take b = 2, a = −0.3, c = −1.7 and α = 0, then it is easy to verify that M is a w.c.d.d. B-matrix and satisfies all the conditions in Theorem 2 of [17], Theorem 1 of [18], Theorem 2.3 of [19] and Theorem 3.1. Thus, we can use these bounds to estimate max_{d∈[0,1]^n} ‖(I − D + DM)^{−1}‖_∞. The numerical results are shown in Table 1.
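The Theorem 3.1 column of Table 1 can be reproduced as follows (our own sketch, assuming NumPy; for this M all off-diagonal entries are negative, so B+ = M):

```python
import numpy as np

def thm31_bound(Bp):
    """Bound (3.4) of Theorem 3.1, under condition (3.3)."""
    n = Bp.shape[0]
    A = np.abs(Bp)
    b = np.diag(Bp)
    s = np.zeros(n)
    s[n - 1] = A[n - 1, :n - 1].sum()
    for i in range(n - 2, -1, -1):
        s[i] = A[i, :i].sum() + (A[i, i + 1:] * s[i + 1:] / b[i + 1:]).sum()
    u = np.array([A[i, i + 1:].sum() for i in range(n)])
    beta = b - np.array([(A[i, i + 1:] * s[i + 1:] / b[i + 1:]).sum()
                         for i in range(n)])
    assert np.all(beta > 0)
    total, prod = 0.0, 1.0
    for i in range(n):
        total += (n - 1) / min(beta[i], 1.0) * prod
        prod *= 1 + u[i] / beta[i]
    return total

n = 10
M = 2 * np.eye(n) - 0.3 * np.eye(n, k=-1) - 1.7 * np.eye(n, k=1)  # b=2, a=-0.3, c=-1.7, alpha=0
print(thm31_bound(M))   # about 9.77e+04, the n = 10 entry of Table 1
```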
From the above numerical results of Examples 4.2 and 4.3, we see that our bound (3.4) can be much better than those obtained in [17–19].
Table 1
Comparison of the bounds in [17–19] and Theorem 3.1.

                        n = 10        n = 20        n = 50        n = 100
Theorem 2 of [17]       3.7182e+08    1.3612e+17    1.8308e+42    5.8012e+83
Theorem 1 of [18]       6.6173e+07    2.4226e+16    3.2582e+41    1.0324e+83
Theorem 2.3 of [19]     6.6173e+07    2.4226e+16    3.2582e+41    1.0324e+83
Theorem 3.1             9.7696e+04    8.9196e+11    3.7674e+36    1.1825e+78
5. Conclusions

In this paper we have presented a new upper bound for the condition constant max_{d∈[0,1]^n} ‖(I − D + DM)^{−1}‖_∞ with M a w.c.d.d. B-matrix, under a condition weaker than that given in [17]. Theoretical analysis and numerical tests show that the new bound greatly improves the one in [17]. Although it is hard to compare our bound theoretically with the known ones in [18] and [19], numerical examples illustrate that the proposed bound can be far superior to theirs. Moreover, we provided an example that satisfies our condition (3.3) but not the conditions in [17–19], in which case the condition constant can be estimated only by our bound. We also showed by an example that the new bound can be attained, so that, combined with the natural residual of the LCP, it gives the best possible error bound for an approximate solution of the linear complementarity problem with a w.c.d.d. B-matrix.

Acknowledgements

The authors would like to thank the anonymous referees for their constructive suggestions and comments, which greatly improved the presentation of this paper. This work is supported by the National Natural Science Foundation of China (Grant No. 11571004) and the Science Foundation of the Education Department of Gansu Province (Grant No. 2017A078).

References

[1] K.G. Murty, Linear Complementarity, Linear and Nonlinear Programming, Heldermann, Berlin, 1988.
[2] R.W. Cottle, J.S. Pang, R.E. Stone, The Linear Complementarity Problem, Academic Press, San Diego, 1992.
[3] Z.Q. Luo, P. Tseng, On the linear convergence of descent methods for convex essentially smooth minimization, SIAM J. Control Optim. 30 (1992) 408–425.
[4] Z.Q. Luo, P. Tseng, Error bound and convergence analysis of matrix splitting algorithms for the affine variational inequality problem, SIAM J. Optim. 2 (1992) 43–54.
[5] J.S. Pang, A posteriori error bounds for the linearly-constrained variational inequality problem, Math. Oper. Res. 12 (1987) 474–484.
[6] J.S. Pang, Inexact Newton methods for the nonlinear complementarity problem, Math. Program. 36 (1986) 54–71.
[7] X.D. Luo, P. Tseng, On a global projection-type error bound for the linear complementarity problem, Linear Algebra Appl. 253 (1997) 251–278.
[8] R. Mathias, J.S. Pang, Error bounds for the linear complementarity problem with a P-matrix, Linear Algebra Appl. 132 (1990) 123–136.
[9] X.J. Chen, S.H. Xiang, Computation of error bounds for P-matrix linear complementarity problems, Math. Program. 106 (2006) 513–525.
[10] M. García-Esnaola, J.M. Peña, Error bounds for linear complementarity problems for B-matrices, Appl. Math. Lett. 22 (2009) 1071–1075.
[11] C.Q. Li, Y.T. Li, Note on error bounds for linear complementarity problems for B-matrices, Appl. Math. Lett. 57 (2016) 108–113.
[12] C.Q. Li, M.T. Gan, S.R. Yang, A new error bound for linear complementarity problems for B-matrices, Electron. J. Linear Algebra 31 (2016) 476–484.
[13] P.F. Dai, Error bounds for linear complementarity problems of DB-matrices, Linear Algebra Appl. 434 (2011) 830–840.
[14] P.F. Dai, Y.T. Li, C.J. Lu, Error bounds for linear complementarity problems for SB-matrices, Numer. Algorithms 61 (2012) 121–139.
[15] P.F. Dai, C.J. Lu, Y.T. Li, New error bounds for the linear complementarity problem with an SB-matrix, Numer. Algorithms 64 (2013) 741–757.
[16] C.Q. Li, P.F. Dai, Y.T. Li, New error bounds for linear complementarity problems of Nekrasov matrices and B-Nekrasov matrices, Numer. Algorithms 74 (2017) 997–1009.
[17] C.Q. Li, Y.T. Li, Weakly chained diagonally dominant B-matrices and error bounds for linear complementarity problems, Numer. Algorithms 73 (2016) 985–998.
[18] F. Wang, Error bounds for linear complementarity problems of weakly chained diagonally dominant B-matrices, J. Inequal. Appl. 2017 (2017) 33.
[19] D. Sun, F. Wang, New error bounds for linear complementarity problems of weakly chained diagonally dominant B-matrices, Open Math. 15 (2017) 978–986.
[20] P.N. Shivakumar, K.H. Chew, A sufficient condition for nonvanishing of determinants, Proc. Am. Math. Soc. 43 (1974) 63–66.
[21] W. Li, The infinity norm bound for the inverse of nonsingular diagonal dominant matrices, Appl. Math. Lett. 21 (2008) 258–263.
[22] X.J. Chen, S.H. Xiang, Perturbation bounds of P-matrix linear complementarity problems, SIAM J. Optim. 18 (2007) 1250–1265.