

Error bounds for the linear complementarity problem with a Σ-SDD matrix ✩

M. García-Esnaola, J.M. Peña ∗

Departamento de Matemática Aplicada, Universidad de Zaragoza, 50009 Zaragoza, Spain

Article history: Received 27 April 2012; Accepted 15 September 2012; Available online 29 October 2012. Submitted by M. Tsatsomeros.

AMS classification: 15A48; 90C33; 90C31; 65G50

Keywords: Error bounds; Linear complementarity problems; H-matrices; Strictly diagonally dominant matrices

Abstract. We give new error bounds for the linear complementarity problem when the involved matrix is a Σ-SDD matrix with positive diagonals. The new bounds can considerably improve on other previous bounds. We also find a subclass of Σ-SDD matrices for which the new bound is very simple. © 2012 Elsevier Inc. All rights reserved.

1. Introduction

Linear complementarity problems arise in many applications (see [2,4]). Given an n × n real matrix M and q ∈ R^n, these problems look for solutions x* ∈ R^n of

Mx + q ≥ 0,  x ≥ 0,  x^T (Mx + q) = 0.   (1.1)

The previous problem (1.1) is usually denoted by LCP(M, q).

✩ Research partially supported by the Spanish Research Grant MTM2009-03388 and by Gobierno de Aragón and Fondo Social Europeo.
∗ Corresponding author.
E-mail addresses: [email protected] (M. García-Esnaola), [email protected] (J.M. Peña).
0024-3795/$ - see front matter © 2012 Elsevier Inc. All rights reserved. http://dx.doi.org/10.1016/j.laa.2012.09.018


M. García-Esnaola, J.M. Peña / Linear Algebra and its Applications 438 (2013) 1339–1346

A real matrix is called a P-matrix if all its principal minors are positive. Let us recall (see [4]) that M is a P-matrix if and only if the LCP(M, q) (1.1) has a unique solution x* for each q ∈ R^n. As recalled in Section 2, a square matrix M is called an H-matrix if there exists a diagonal matrix X with positive diagonal entries such that MX is strictly diagonally dominant. It is well known that H-matrices with positive diagonal entries are P-matrices.

For problems (1.1) whose associated matrices M are H-matrices with positive diagonal entries, error bounds for the solutions of these problems have been obtained in [3], as recalled in Section 3. In [10] we obtained new error bounds for these matrices and compared them with those of [3]. Some bounds of [10] required knowledge of the diagonal matrix X mentioned above, which is not always known in advance. However, for the class of Σ-SDD matrices (see Section 2), X can be obtained from the entries of M. Σ-SDD matrices were called S-strictly diagonally dominant matrices in [5] and were also used, for instance, in [8,13]. Error bounds for other classes of P-matrices have been obtained in [6,7,9,11,12].

In this paper we provide error bounds for the LCP(M, q) when M is a Σ-SDD matrix. We include examples of families of matrices for which our bound is a small constant, in contrast to the bound of [3], which can be arbitrarily large. In Section 2 we give a necessary condition (in Theorem 2.7) and a sufficient condition (in Proposition 2.8) to check whether a given matrix belongs to the class of Σ-SDD matrices. Section 3 provides the error bounds for the linear complementarity problems, and a subclass of Σ-SDD matrices with a very simple error bound is presented.

2. Σ-SDD matrices

In this section we provide some basic properties of Σ-SDD matrices. We start by introducing some basic notation. Let S denote a nonempty subset of N := {1, 2, …, n}, n ≥ 2, and let S̄ := N\S denote its complement in N.
Given a matrix A = (a_ij)_{1≤i,j≤n}, we define

r_i(A) := Σ_{j∈N\{i}} |a_ij|,   r_i^S(A) := Σ_{j∈S\{i}} |a_ij|.
Definition 2.1. A matrix A = (a_ij)_{1≤i,j≤n} is called a Σ-SDD matrix if there exists a nonempty subset S of N such that the following two conditions hold:

(i) |a_ii| > r_i^S(A), for each i ∈ S,
(ii) (|a_ii| − r_i^S(A))(|a_jj| − r_j^S̄(A)) > r_i^S̄(A) r_j^S(A), for each i ∈ S and j ∈ S̄.

If the previous definition holds for S = N, then A is strictly diagonally dominant (SDD). If A is SDD, then it satisfies Definition 2.1 for any nonempty subset S of N. In fact, (i) trivially holds and, since

|a_ii| > r_i^S(A) + r_i^S̄(A),  |a_jj| > r_j^S(A) + r_j^S̄(A)  (i ∈ S, j ∈ S̄),

we derive (ii).
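Conditions (i) and (ii) of Definition 2.1 can be checked mechanically. The sketch below is not part of the paper: the helper names are ours, indices are 0-based, and exact integer arithmetic is used.

```python
def r(A, i, idx):
    # r_i^S(A): sum of |a_ij| over j in idx with j != i
    return sum(abs(A[i][j]) for j in idx if j != i)

def is_sigma_sdd_for(A, S):
    """Check Definition 2.1 for a given nonempty index set S (0-based)."""
    n = len(A)
    Sbar = [j for j in range(n) if j not in S]
    cond_i = all(abs(A[i][i]) > r(A, i, S) for i in S)
    cond_ii = all((abs(A[i][i]) - r(A, i, S)) * (abs(A[j][j]) - r(A, j, Sbar))
                  > r(A, i, Sbar) * r(A, j, S)
                  for i in S for j in Sbar)
    return cond_i and cond_ii

# The 3 x 3 matrix discussed later in this section:
A = [[12, -3, 0], [-6, 10, 1], [6, 2, 4]]
print(is_sigma_sdd_for(A, [0]))     # True  (S = {1} in the paper's 1-based notation)
print(is_sigma_sdd_for(A, [0, 1]))  # True  (S = {1, 2})
print(is_sigma_sdd_for(A, [1]))     # False (S = {2} does not work)
```

As the last call shows, membership depends on the choice of S: a Σ-SDD matrix only needs one admissible subset.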

Observe that the previous definition implies that if A is a Σ-SDD matrix and S is a subset satisfying Definition 2.1, then the following interval is nonempty:

I_S := ( max_{i∈S} r_i^S̄(A) / (|a_ii| − r_i^S(A)),  min_{j∈S̄} (|a_jj| − r_j^S̄(A)) / r_j^S(A) ),   (2.1)

assuming that if r_j^S(A) = 0, then (|a_jj| − r_j^S̄(A)) / r_j^S(A) := ∞. The proof of Theorem 5 of [5] implies the following

result:

Theorem 2.2. Let A be a Σ-SDD matrix and let S be any subset satisfying Definition 2.1. Let W = diag(w_i) be a diagonal matrix with

w_i = γ, if i ∈ S,   w_i = 1, if i ∈ S̄,

for i = 1, …, n, and (0 <) γ ∈ I_S, where I_S is given by (2.1). Then AW is an SDD matrix.
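Theorem 2.2 can be illustrated numerically. The sketch below (our own helper names, 0-based indices, exact arithmetic via Python's `fractions`) computes the interval (2.1) for the 3 × 3 matrix used later in this section, takes γ as the midpoint of I_S, and verifies that AW is SDD:

```python
from fractions import Fraction

A = [[12, -3, 0], [-6, 10, 1], [6, 2, 4]]
n, S = len(A), [0]                       # S = {1} in the paper's 1-based notation
Sbar = [j for j in range(n) if j not in S]
r = lambda i, idx: sum(abs(A[i][j]) for j in idx if j != i)

# interval I_S of (2.1); here every r_j^S(A) is nonzero, so no infinite quotient
left = max(Fraction(r(i, Sbar), abs(A[i][i]) - r(i, S)) for i in S)
right = min(Fraction(abs(A[j][j]) - r(j, Sbar), r(j, S)) for j in Sbar)
print(left, right)                       # I_S = (1/4, 1/3): A is not SDD (cf. Remark 2.3)

gamma = (left + right) / 2               # any gamma in I_S works; midpoint = 7/24
w = [gamma if i in S else Fraction(1) for i in range(n)]
AW = [[A[i][j] * w[j] for j in range(n)] for i in range(n)]
is_sdd = all(abs(AW[i][i]) > sum(abs(AW[i][j]) for j in range(n) if j != i)
             for i in range(n))
print(is_sdd)                            # True, as Theorem 2.2 predicts
```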

Let us recall that a square matrix A is an H-matrix if there exists a diagonal matrix X with positive diagonal entries such that AX is SDD. So, the previous theorem implies that Σ-SDD matrices are H-matrices.

Remark 2.3. From Theorem 2.2, we can deduce that A is an SDD matrix if and only if 1 ∈ I_S for some subset S satisfying Definition 2.1. Therefore, for a Σ-SDD matrix A and a subset S satisfying Definition 2.1, the interval I_S either lies to the right of 1 or lies inside (0, 1).

We now show that the set S and its complement S̄ in N play a symmetric role in the definition of these matrices.

Remark 2.4. By Remark 2.3, if A is a Σ-SDD matrix and is not SDD, then either I_S lies to the left of 1 or to the right of 1. Let us observe that if I_S lies to the left of 1 (resp., right of 1), then I_S̄ lies to the right (resp., left) of 1. In fact, assume that I_S is to the left of 1. By (2.1), this implies that

1 > min_{j∈S̄} (|a_jj| − r_j^S̄(A)) / r_j^S(A),

or equivalently that

max_{j∈S̄} r_j^S(A) / (|a_jj| − r_j^S̄(A)) > 1,

which implies, again by (2.1), that I_S̄ is to the right of 1. An analogous reasoning holds when I_S lies to the right of 1.

Remark 2.5. Since a Σ-SDD matrix is an H-matrix, by [1] a Σ-SDD matrix has a row that is SDD.

Lemma 2.6. Let S be a nonempty strict subset of N and S̄ its complement in N. Then S satisfies Definition 2.1 if and only if S̄ also satisfies Definition 2.1.

Proof. Clearly it is sufficient to prove that if S satisfies Definition 2.1, then S̄ also satisfies it. Conditions (i) and (ii) of Definition 2.1 imply that |a_jj| − r_j^S̄(A) > 0 for all j ∈ S̄, which gives condition (i) for S̄. Finally, condition (ii) for S coincides with condition (ii) for S̄. □

Observe that a Σ-SDD matrix can satisfy Definition 2.1 for several sets S. For instance, it can be checked that the matrix

A = ( 12  −3   0
      −6  10   1
       6   2   4 )

satisfies Definition 2.1 with S = {1} and also with S = {1, 2}.

The following result provides a necessary condition to recognize Σ-SDD matrices.

Theorem 2.7. Let A be a Σ-SDD matrix. Then there exists a nonempty strict subset T of N such that

(i) |a_ii| > r_i(A) for all i ∈ T,
(ii) |a_jj| > r_j^T̄(A) for all j ∈ T̄.

Proof. Let S be a subset for which Definition 2.1 holds and let I_S be the interval given by (2.1). If A is an SDD matrix, the result obviously follows. Let us assume that A is not SDD. Then by Remark 2.3, either the interval I_S of (2.1) lies to the right of 1 or lies to the left of 1. If I_S lies to the right of 1, then

min_{j∈S̄} (|a_jj| − r_j^S̄(A)) / r_j^S(A) > 1

and all rows with index j ∈ S̄ are strictly diagonally dominant. So, in particular, (i) holds for T = S̄. Condition (i) of Definition 2.1 implies (ii) for T = S̄. Let us now assume that I_S lies to the left of 1. Then

max_{i∈S} r_i^S̄(A) / (|a_ii| − r_i^S(A)) < 1

and all rows with index i ∈ S are strictly diagonally dominant. So, in particular, (i) holds for T = S. By Lemma 2.6, Definition 2.1 holds for S̄, and its condition (i) implies (ii) for T = S. □

Observe that the previous result can be useful to recognize possible candidates S satisfying Definition 2.1. For instance, an n × n matrix A with n − 1 strictly diagonally dominant rows satisfies Theorem 2.7 (i) for the set T of indices corresponding to the SDD rows, and it also trivially satisfies condition (ii) if a_jj ≠ 0 for j ∈ T̄. According to the proof of Lemma 2.6, we can take either S = T or S = T̄. Observe that the matrix

A = ( 36  −4   1
       0   1   6      (2.2)
       1   1   9 )

satisfies the previous conditions for T = {1, 3} and so, if it is a Σ-SDD matrix, we can take as a candidate set S of Definition 2.1 either S = {1, 3} or S = {2}. In fact, it can be checked that A is Σ-SDD for S = {1, 3} (or S = {2}).

We now provide a sufficient condition to check if a matrix is a Σ-SDD matrix. Recall that Remark 2.5 shows that possessing an SDD row is a necessary condition for Σ-SDD matrices.

Proposition 2.8. Let A be a matrix with at least one SDD row and let M := {i ∈ N | |a_ii| ≤ r_i(A)}. If A is not SDD and A satisfies

(i) |a_ii| > r_i^M(A), for each i ∈ M,
(ii) max_{i∈M} r_i^M̄(A) / (|a_ii| − r_i^M(A)) < min_{j∈M̄} (|a_jj| − r_j^M̄(A)) / r_j^M(A),

then A is a Σ-SDD matrix and the set S = M satisfies Definition 2.1.

Proof. Condition (i) of Definition 2.1 coincides with condition (i) of this result. For all i ∈ M = S and j ∈ M̄ = S̄, from condition (ii) we deduce that

r_i^M̄(A) / (|a_ii| − r_i^M(A)) < (|a_jj| − r_j^M̄(A)) / r_j^M(A),

and then condition (ii) of Definition 2.1 clearly follows. □

The following example illustrates the application of Proposition 2.8.

Example 2.9. Let A be the matrix

A = (  3    1    0    0    1    0
       1    5    1    1    0    1
       2    1    7    1    2    0
      1/2  1/2  1/4   3    1    1
      2/5  1/5  1/2   2    6    3
      1/3  2/5  2/5   1    1    3 ).

Taking the set M = {4, 5, 6}, condition (i) of Proposition 2.8 holds. Moreover,

5/4 = max_{i∈M} r_i^M̄(A) / (|a_ii| − r_i^M(A)) < min_{j∈M̄} (|a_jj| − r_j^M̄(A)) / r_j^M(A) = 4/3,

so condition (ii) of Proposition 2.8 holds. Then A is a Σ-SDD matrix, with the set S = {4, 5, 6} satisfying Definition 2.1.
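The numbers in Example 2.9 can be reproduced exactly. The following sketch (our own helper names, 0-based row indices, exact rational arithmetic) recovers M and the two sides of condition (ii):

```python
from fractions import Fraction as F

A = [[F(3), F(1), F(0), F(0), F(1), F(0)],
     [F(1), F(5), F(1), F(1), F(0), F(1)],
     [F(2), F(1), F(7), F(1), F(2), F(0)],
     [F(1, 2), F(1, 2), F(1, 4), F(3), F(1), F(1)],
     [F(2, 5), F(1, 5), F(1, 2), F(2), F(6), F(3)],
     [F(1, 3), F(2, 5), F(2, 5), F(1), F(1), F(3)]]
n = len(A)
r = lambda i, idx: sum(abs(A[i][j]) for j in idx if j != i)

# M = {i : |a_ii| <= r_i(A)}, the set of non-SDD rows
M = [i for i in range(n) if abs(A[i][i]) <= r(i, range(n))]
Mbar = [j for j in range(n) if j not in M]
print(M)  # [3, 4, 5], i.e. M = {4, 5, 6} in the paper's 1-based indexing

# condition (i) of Proposition 2.8
assert all(abs(A[i][i]) > r(i, M) for i in M)

# condition (ii): the maximum over M must be below the minimum over its complement
lhs = max(r(i, Mbar) / (abs(A[i][i]) - r(i, M)) for i in M)
rhs = min((abs(A[j][j]) - r(j, Mbar)) / r(j, M) for j in Mbar)
print(lhs, rhs)  # 5/4 and 4/3
```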

3. Computation of error bounds

Let A be an H-matrix with positive diagonal entries. Since A is a P-matrix, we can apply the third inequality of Theorem 2.3 of [3] and obtain for any x ∈ R^n the inequality

‖x − x*‖_∞ ≤ max_{d∈[0,1]^n} ‖(I − D + DA)^{−1}‖_∞ ‖r(x)‖_∞,

where I is the n × n identity matrix, D is the diagonal matrix D = diag(d_i) with 0 ≤ d_i ≤ 1 for all i = 1, …, n, x* is the solution of the LCP(A, q) and r(x) := min(x, Ax + q), where the min operator denotes the componentwise minimum of two vectors. By (2.4) of [3], given in Theorem 2.1 of [3], when A = (a_ij)_{1≤i,j≤n} is an H-matrix with positive diagonals, then

max_{d∈[0,1]^n} ‖(I − D + DA)^{−1}‖_∞ ≤ ‖Ã^{−1} max(Λ, I)‖_∞,   (3.1)

where Ã is the comparison matrix of A, Λ is the diagonal part of A (Λ := diag(a_ii)) and max(Λ, I) := diag(max{a_ii, 1}).

The following theorem provides a bound for max_{d∈[0,1]^n} ‖(I − D + DA)^{−1}‖_∞ when A is a Σ-SDD matrix that is not SDD. The bound is obtained by applying Theorem 2.1 of [10] to A. Bounds for SDD matrices are derived and compared in [10].

Proposition 3.1. Let us assume that A = (a_ij)_{1≤i,j≤n} is a Σ-SDD matrix with positive diagonal entries and that A is not SDD. Let γ (≠ 1) and W be any real number and any matrix, respectively, satisfying the hypotheses of Theorem 2.2. For any i = 1, …, n, let β̄_i := a_ii w_i − Σ_{j≠i} |a_ij| w_j and β̄ := min_i {β̄_i}. If γ > 1, then

max_{d∈[0,1]^n} ‖(I − D + DA)^{−1}‖_∞ ≤ max{γ/β̄, γ},   (3.2)

and if γ < 1, then

max_{d∈[0,1]^n} ‖(I − D + DA)^{−1}‖_∞ ≤ max{1/β̄, 1/γ}.   (3.3)

Proof. Observe that, by Remark 2.3, γ ≠ 1. Since A is an H-matrix with positive diagonal entries, we can apply formula (2.2) of Theorem 2.1 of [10] and obtain that

max_{d∈[0,1]^n} ‖(I − D + DA)^{−1}‖_∞ ≤ max{ max_i{w_i} / min_i{β̄_i},  max_i{w_i} / min_i{w_i} }.   (3.4)

If γ > 1, then max_i{w_i} = γ and min_i{w_i} = 1 and so (3.2) follows from (3.4); if γ < 1, then max_i{w_i} = 1 and min_i{w_i} = γ and so (3.3) follows from (3.4). □
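As an illustration (not from the paper), Proposition 3.1 applied to the 3 × 3 Σ-SDD matrix of Section 2, with the hypothetical choice γ = 7/24 ∈ I_S = (1/4, 1/3), gives the bound 4:

```python
from fractions import Fraction as F

# The Sigma-SDD matrix of Section 2: positive diagonals, not SDD, S = {1}.
A = [[F(12), F(-3), F(0)], [F(-6), F(10), F(1)], [F(6), F(2), F(4)]]
gamma = F(7, 24)                             # an arbitrary point of I_S = (1/4, 1/3)
w = [gamma, F(1), F(1)]                      # w_i = gamma on S, 1 on the complement

# beta_bar_i = a_ii w_i - sum_{j != i} |a_ij| w_j, as in Proposition 3.1
beta = [A[i][i] * w[i] - sum(abs(A[i][j]) * w[j] for j in range(3) if j != i)
        for i in range(3)]
beta_bar = min(beta)
print(beta_bar)                              # 1/4

# gamma < 1, so bound (3.3) applies
bound = max(1 / beta_bar, 1 / gamma)
print(bound)                                 # max{4, 24/7} = 4
```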

The following example shows that the bounds of Proposition 3.1 can be considerably smaller than the bound (3.1) (which corresponds to Theorem 2.1 of [3]).

Example 3.2. Let


A_k = ( k+1   −k   −k
        −k    3k   −k      k ≥ 1.
        −k    −k   3k ),

Observe that A_k is a Σ-SDD matrix with S = {1} and I_S = (2k/(k+1), 2). We can take γ = (2k+1)/(k+1) (> 1) in Proposition 3.1, obtaining that β̄ = min{1, k/(k+1)} = k/(k+1) and then

max_{d∈[0,1]^n} ‖(I − D + DA_k)^{−1}‖_∞ ≤ max{ (2k+1)/k, (2k+1)/(k+1) } = (2k+1)/k (≤ 3),

while the bound (3.1) (corresponding to Theorem 2.1 of [3]) is ‖Ã_k^{−1} max(Λ, I)‖_∞ = 4k + 1, and it can be arbitrarily large.
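The comparison in Example 3.2 can be verified exactly. The sketch below (our own `solve` helper, rational arithmetic) uses the standard facts that A_k equals its own comparison matrix and that the inverse of a nonsingular M-matrix is entrywise nonnegative, so the ∞-norm in (3.1) is the largest entry of the solution of A_k y = max(Λ, I) e:

```python
from fractions import Fraction as F

def solve(M, b):
    """Exact Gaussian elimination without pivoting (fine for these SDD systems)."""
    n = len(M)
    M = [row[:] + [b[i]] for i, row in enumerate(M)]
    for c in range(n):
        for rr in range(c + 1, n):
            f = M[rr][c] / M[c][c]
            for cc in range(c, n + 1):
                M[rr][cc] -= f * M[c][cc]
    x = [F(0)] * n
    for i in reversed(range(n)):
        x[i] = (M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x

for kk in range(1, 6):
    k = F(kk)
    gamma = (2 * k + 1) / (k + 1)            # gamma in I_S = (2k/(k+1), 2)
    beta_bar = k / (k + 1)                   # beta_bar = min{1, k/(k+1)}
    bound_32 = max(gamma / beta_bar, gamma)  # bound (3.2) = (2k+1)/k <= 3
    # bound (3.1): solve A_k y = (k+1, 3k, 3k)^T and take the largest entry
    Ak = [[k + 1, -k, -k], [-k, 3 * k, -k], [-k, -k, 3 * k]]
    y = solve(Ak, [k + 1, 3 * k, 3 * k])
    print(kk, bound_32, max(y))              # max(y) = 4k + 1 grows without bound
```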

We now introduce a subclass of Σ-SDD matrices with interesting spectral properties when the diagonal entries are positive.

Definition 3.3. A matrix A = (a_ij)_{1≤i,j≤n} is called a Σ₁-SDD matrix if there exists a nonempty subset S of N such that the following two conditions hold:

(i) |a_ii| > r_i^S(A) + 1, for each i ∈ S,
(ii) (|a_ii| − r_i^S(A) − 1)(|a_jj| − r_j^S̄(A) − 1) > r_i^S̄(A) r_j^S(A), for each i ∈ S and j ∈ S̄.

The first part of the following result shows a relationship between Σ₁-SDD matrices and Σ-SDD matrices. The second part shows that, if A has positive diagonal entries, the real parts of its eigenvalues are always greater than 1.

Proposition 3.4. Let A = (a_ij)_{1≤i,j≤n} be a Σ₁-SDD matrix. Then:

(i) A − I is a Σ-SDD matrix.
(ii) If A has positive diagonal entries and λ is an eigenvalue of A, then Re(λ) > 1.

Proof. (i) It follows from Definition 3.3, taking into account that |a_kk − 1| ≥ |a_kk| − 1 > r_k^S(A) = r_k^S(A − I) for all k = 1, …, n and S the set corresponding to that definition or its complementary set.

(ii) By (i), A − I is a Σ-SDD matrix. Taking into account that A is a Σ₁-SDD matrix with positive diagonal entries, A − I also has positive diagonal entries. By Theorem 2.2, A − I is an H-matrix. It is well known that the real parts of the eigenvalues of H-matrices with positive diagonal entries are greater than 0. Therefore the eigenvalues of A − I satisfy this property and (ii) follows. □

Observe that the property "A − I is a Σ-SDD matrix" does not imply that "A is a Σ₁-SDD matrix". In fact, the matrix

A = ( 4/3   1   0
       0   −1   1
       0    1   3 )

satisfies that A − I is a Σ-SDD matrix with S = {1}, but it can be checked that A is not Σ-SDD and so it is not a Σ₁-SDD matrix. However, it is straightforward to prove that, if a matrix B satisfies that B − I is a Σ-SDD matrix with positive diagonal entries, then B is a Σ₁-SDD matrix.


The following result provides a very simple bound for max_{d∈[0,1]^n} ‖(I − D + DA)^{−1}‖_∞ when A is a Σ₁-SDD matrix with positive diagonal entries.

Theorem 3.5. Let A be a Σ₁-SDD matrix with positive diagonal entries and let I_S* be the interval given by

I_S* := [ max_{i∈S} r_i^S̄(A) / (a_ii − r_i^S(A) − 1),  min_{j∈S̄} (a_jj − r_j^S̄(A) − 1) / r_j^S(A) ),

assuming that if r_j^S(A) = 0, then (a_jj − r_j^S̄(A) − 1) / r_j^S(A) := ∞. If γ ∈ I_S*, then

max_{d∈[0,1]^n} ‖(I − D + DA)^{−1}‖_∞ ≤ max{γ, 1/γ}.   (3.5)

Moreover, if A is not SDD and β̄ is the number defined in Proposition 3.1 using the previous γ, then the bound (3.5) is smaller than or equal to the corresponding bound of Proposition 3.1.

Proof. Taking into account condition (ii) of Definition 3.3, we obtain that

r_i^S̄(A) / (a_ii − r_i^S(A) − 1) < (a_jj − r_j^S̄(A) − 1) / r_j^S(A), for each i ∈ S, j ∈ S̄.

Then the interval I_S* is nonempty and strictly contained in the interval I_S of (2.1). Let W be the matrix constructed in Theorem 2.2 with γ ∈ I_S*, and let Â = (â_ij)_{1≤i,j≤n} := W^{−1} A W, β̂_i := â_ii − Σ_{j≠i} |â_ij| and β̂ := min_i {β̂_i}.

Taking into account that β̂_i = β̄_i/γ, if i ∈ S, and β̂_j = β̄_j, if j ∈ S̄, the definition of the numbers β̄_i in Proposition 3.1 and the definition of the matrix W in Theorem 2.2, we have

β̂_i = a_ii − r_i^S(A) − r_i^S̄(A)/γ, if i ∈ S, and β̂_j = a_jj − r_j^S(A) γ − r_j^S̄(A), if j ∈ S̄.   (3.6)

For all i ∈ S, since γ ∈ I_S*, we have γ ≥ r_i^S̄(A) / (a_ii − r_i^S(A) − 1). From this inequality we deduce that r_i^S̄(A)/γ ≤ a_ii − r_i^S(A) − 1 and then we derive from (3.6):

β̂_i = a_ii − r_i^S(A) − r_i^S̄(A)/γ ≥ 1, for all i ∈ S.   (3.7)

We also have that, for all j ∈ S̄, γ < (a_jj − r_j^S̄(A) − 1) / r_j^S(A), and then we obtain from (3.6)

β̂_j = a_jj − r_j^S(A) γ − r_j^S̄(A) > 1, for all j ∈ S̄.   (3.8)

Besides, for each diagonal matrix D = diag(d_i), with 0 ≤ d_i ≤ 1, we have

‖(I − D + DA)^{−1}‖_∞ ≤ ‖W‖_∞ ‖(I − D + DÂ)^{−1}‖_∞ ‖W^{−1}‖_∞ = ‖(I − D + DÂ)^{−1}‖_∞ max{γ, 1/γ}.   (3.9)


From (3.7) and (3.8) we have that

β̂ = min_i {β̂_i} ≥ 1.   (3.10)

Taking into account that Â is an SDD matrix and applying formula (2.4) of [10], we obtain from (3.10) that ‖(I − D + DÂ)^{−1}‖_∞ ≤ max{1/β̂, 1} = 1, and now (3.5) follows from these inequalities and (3.9).

It remains to compare (3.5) with the bounds of Proposition 3.1 when A is not SDD. If γ < 1, the bound of Proposition 3.1 is given by (3.3): max{1/β̄, 1/γ}. Observe that max{γ, 1/γ} = 1/γ ≤ max{1/β̄, 1/γ}. If γ > 1, then the bound of Proposition 3.1 is given by (3.2): max{γ/β̄, γ}. Observe that max{γ, 1/γ} = γ ≤ max{γ/β̄, γ}. □

We finish with an example of a family of Σ₁-SDD matrices whose bound (3.5) is less than 2, but whose bound (3.1) (corresponding to Theorem 2.1 of [3]) can be arbitrarily large.

Example 3.6. Let

A_k = ( (k+1)²   −k²    −k²
         −k²     3k²    −k²      k ≥ 2.
         −k²     −k²    3k² ),

Observe that A_k is a Σ₁-SDD matrix with S = {1} and I_S* = [2k/(k+2), (2k²−1)/k²). If we take γ = 2k/(k+2), then the bound (3.5) of Theorem 3.5 is max{2k/(k+2), (k+2)/(2k)} = 2k/(k+2) < 2. In contrast, the bound (3.1) (corresponding to (2.4) of [3]) is ‖Ã_k^{−1} max(Λ, I)‖_∞ = 2k + 1/(2k+1), and it can be arbitrarily large.
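The endpoints of I_S* and the resulting bound (3.5) in Example 3.6 can be checked exactly; this is a sketch with our own variable names, using the fact that r_1^S(A_k) = 0 here:

```python
from fractions import Fraction as F

# Example 3.6: A_k = [[(k+1)^2, -k^2, -k^2], [-k^2, 3k^2, -k^2], [-k^2, -k^2, 3k^2]]
for kk in range(2, 8):
    k = F(kk)
    a11, off = (k + 1) ** 2, k ** 2
    # endpoints of I*_S with S = {1}:
    left = 2 * off / (a11 - 1)               # r_1^Sbar(A) / (a_11 - r_1^S(A) - 1)
    right = (3 * off - off - 1) / off        # (a_jj - r_j^Sbar(A) - 1) / r_j^S(A), j in Sbar
    assert left == 2 * k / (k + 2)
    assert right == (2 * k ** 2 - 1) / k ** 2
    assert left < right                      # I*_S is nonempty

    gamma = left                             # the choice made in Example 3.6
    bound = max(gamma, 1 / gamma)            # bound (3.5)
    print(kk, bound)                         # always below 2
    assert bound < 2
```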

References

[1] A.A. Ahac, D.D. Olesky, A stable method for the LU factorization of M-matrices, SIAM J. Algebr. Discrete Methods 7 (1986) 368–378.
[2] A. Berman, R.J. Plemmons, Nonnegative Matrices in the Mathematical Sciences, Classics in Applied Mathematics, vol. 9, SIAM, Philadelphia, 1994.
[3] X. Chen, S. Xiang, Computation of error bounds for P-matrix linear complementarity problems, Math. Program. Ser. A 106 (2006) 513–525.
[4] R.W. Cottle, J.-S. Pang, R.E. Stone, The Linear Complementarity Problem, Academic Press, Boston, MA, 1992.
[5] L. Cvetković, V. Kostić, R.S. Varga, A new Geršgorin-type eigenvalue inclusion set, Electron. Trans. Numer. Anal. 18 (2004) 73–80.
[6] P.-F. Dai, Error bounds for linear complementarity problems of DB-matrices, Linear Algebra Appl. 434 (2011) 830–840.
[7] P.-F. Dai, Y.-T. Li, C.-J. Lu, Error bounds for linear complementarity problems for SB-matrices, Numer. Algorithms 61 (2012) 121–139.
[8] Y.M. Gao, X.H. Wang, Criteria for generalized diagonally dominant matrices and M-matrices, Linear Algebra Appl. 169 (1992) 257–268.
[9] M. García-Esnaola, J.M. Peña, Error bounds for linear complementarity problems for B-matrices, Appl. Math. Lett. 22 (2009) 1071–1075.
[10] M. García-Esnaola, J.M. Peña, A comparison of error bounds for linear complementarity problems of H-matrices, Linear Algebra Appl. 433 (2010) 956–964.
[11] M. García-Esnaola, J.M. Peña, Error bounds for linear complementarity problems of B^S-matrices, Appl. Math. Lett. 25 (2012) 1379–1383.
[12] R. Mathias, J.-S. Pang, Error bounds for the linear complementarity problem with a P-matrix, Linear Algebra Appl. 132 (1990) 123–136.
[13] N. Morača, Upper bounds for the infinity norm of the inverse of SDD and S-SDD matrices, J. Comput. Appl. Math. 206 (2007) 666–678.