# Smoothing Levenberg–Marquardt method for general nonlinear complementarity problems under local error bound


Applied Mathematical Modelling 35 (2011) 1337–1348

Haodong Yu, Dingguo Pu — Department of Mathematics, Tongji University, Shanghai 200092, PR China

**Article history:** Received 21 July 2009; received in revised form 30 August 2010; accepted 8 September 2010; available online 24 September 2010.

**Keywords:** Nonlinear complementarity problems; Smoothing methods; Levenberg–Marquardt method; Superlinear convergence; Local error bound

**Abstract.** By using the Fischer–Burmeister (F–B) function and a smoothing technique to convert the nonlinear complementarity problem into a smooth nonlinear system, and by introducing a perturbation parameter $\mu_k$ into the smoothing Newton equation, we present a new smoothing Levenberg–Marquardt method for general nonlinear complementarity problems. For a general mapping $F$, not necessarily a $P_0$ function, the algorithm is globally convergent: each accumulation point of the iterative sequence is at least a stationary point of the problem. Under the local error bound condition, which is much weaker than the nonsingularity assumption or the strict complementarity condition, we obtain local superlinear convergence. Under a suitable additional condition, quadratic convergence is also obtained. © 2010 Elsevier Inc. All rights reserved.

## 1. Introduction

Consider the following nonlinear complementarity problem (denoted by NCP(F)):

$$F(x) \ge 0, \quad x \ge 0 \quad \text{and} \quad x^T F(x) = 0, \tag{1}$$

where $x \in \mathbb{R}^n$ and $F: \mathbb{R}^n \to \mathbb{R}^n$ is continuously differentiable. The nonlinear complementarity problem has a large number of important applications and has attracted strong interest in numerical methods for its solution. A function $\phi: \mathbb{R}^2 \to \mathbb{R}$ is called an NCP function if it satisfies the following property:

$$\phi(a,b) = 0 \iff a \ge 0,\ b \ge 0 \ \text{and}\ ab = 0.$$

In this paper, we focus on the well-known Fischer–Burmeister function $\phi: \mathbb{R}^2 \to \mathbb{R}$,

$$\phi(a,b) = \sqrt{a^2 + b^2} - a - b.$$

By using the F–B function, the NCP(F) can be equivalently reformulated as the following nonsmooth system of equations:

$$\Phi(x) := \begin{pmatrix} \phi(x_1, F_1(x)) \\ \vdots \\ \phi(x_n, F_n(x)) \end{pmatrix} = 0. \tag{2}$$

The merit function of (2) used in this paper is $\Psi(x) := \frac{1}{2}\Phi(x)^T\Phi(x) = \frac{1}{2}\|\Phi(x)\|^2$.
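As a concrete illustration, the reformulation (2) and its merit function can be sketched in a few lines of Python/NumPy. The callable `F` standing in for the mapping $F$ is an assumption of this sketch, not something prescribed by the paper:

```python
import numpy as np

def fb(a, b):
    # Fischer-Burmeister function: phi(a, b) = 0 iff a >= 0, b >= 0 and a*b = 0
    return np.sqrt(a**2 + b**2) - a - b

def Phi(x, F):
    # componentwise reformulation (2): Phi(x)_i = phi(x_i, F_i(x))
    return fb(x, F(x))

def Psi(x, F):
    # merit function: Psi(x) = 0.5 * ||Phi(x)||^2
    r = Phi(x, F)
    return 0.5 * (r @ r)
```

For instance, with $F(x) = x - 1$ the unique solution of NCP(F) is $x^* = 1$, and `Psi(np.array([1.0]), lambda x: x - 1.0)` evaluates to zero.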

*Research supported by National Natural Science Foundation of China Grant No. 10771162.*


Since the standard Newton method cannot be applied to this equation directly, smoothing methods have become an important approach for solving such nonsmooth problems, and numerous algorithms of this class have been developed (see [1–8]). However, most existing smoothing methods cannot handle general nonlinear complementarity problems: they must assume that $F$ is a $P_0$ function in order to guarantee the nonsingularity of the Jacobian of the smoothing Newton equation. This $P_0$ assumption restricts the use of smoothing methods for general NCP(F). Kanzow and Pieper [1] proposed a smoothing algorithm for general NCP(F), called the Jacobian smoothing method, which solves a mixed Newton equation at each iteration. The key idea of their method is to substitute a gradient step as the search direction whenever the mixed Newton step fails to provide a suitable descent direction. This is a simple idea; however, the gradient step may reduce the efficiency of the algorithm. Moreover, the gradient step is taken with respect to a different merit function, namely $\Psi$, so their method has to minimize two different merit functions and the convergence analysis becomes quite complicated. There are also papers that reformulate the NCP(F) as a constrained problem and apply Newton-type methods to it; see [9–12] for reference. The Levenberg–Marquardt method, a special class of Newton method, is also an important tool for solving nonlinear equations. Its main idea is to introduce a perturbation term $\mu_k$ so that the Newton equation is always solvable. Namely, for the nonlinear equation

$$f(x) = 0, \tag{3}$$

where $f: \mathbb{R}^n \to \mathbb{R}^n$, the algorithm solves the system

$$\big(f'(x^k)^T f'(x^k) + \mu_k I\big)\, d = -f'(x^k)^T f(x^k) \tag{4}$$

at each iterate $x^k$. However, to prove local superlinear convergence of the algorithm, the parameter $\mu_k$ must be chosen carefully. Yamashita and Fukushima [13] and Dan et al. [14] presented a new update rule, setting $\mu_k = \|f(x^k)\|^\delta$. The advantage of this choice is that the algorithm can be proved superlinearly convergent under a local error bound condition, which is weaker than the nonsingularity assumption essential to most Levenberg–Marquardt methods. For complementarity problems, this technique was first applied to LCPs. Yu et al. [15] then proposed a smoothing Levenberg–Marquardt method for extended linear complementarity problems, and Zhang and Zhang [16] presented a smoothing L–M method for nonlinear complementarity problems. The latter algorithm introduces a smoothing parameter $\varepsilon$ and treats $\varepsilon$ as an independent variable. Based on the technique of Yamashita and Fukushima [13] and the local error bound condition, they analyzed the local convergence of their algorithm in two special cases, namely under the strict complementarity condition or under the nonsingularity assumption. Like most smoothing algorithms, they still have to assume that $F$ is a $P_0$ function.

In this paper, we present a smoothing L–M method for general NCP(F). The method uses a smoothing parameter whose update rule is similar to that in [1], and it solves a Levenberg–Marquardt equation at each iteration with the perturbation parameter set to $\mu_k = \|\Phi(x^k)\|^\delta$ ($0 < \delta \le 2$). The merits of this method are: as a special Newton-type method, the L–M algorithm retains high iteration efficiency; for general nonlinear complementarity problems, all accumulation points of the iterative sequence are stationary points of $\Psi$; and under the local error bound assumption, without the nonsingularity or strict complementarity condition, the algorithm converges locally superlinearly.
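A single Levenberg–Marquardt step (4) amounts to one linear solve. The following minimal sketch illustrates this; the one-dimensional function used at the end is only an example of ours, not one of the paper's test problems:

```python
import numpy as np

def lm_step(J, fx, mu):
    # Levenberg-Marquardt step: solve (J^T J + mu I) d = -J^T f(x).
    # For mu > 0 the coefficient matrix is symmetric positive definite,
    # so the system is solvable even when J is singular.
    n = J.shape[1]
    return np.linalg.solve(J.T @ J + mu * np.eye(n), -J.T @ fx)

# illustrative use on f(x) = x^2 - 1 (one variable) at x = 2
x = np.array([2.0])
fx = x**2 - 1.0           # f(2) = 3
J = np.diag(2.0 * x)      # f'(2) = 4
d = lm_step(J, fx, mu=np.linalg.norm(fx)**2)  # mu_k = ||f(x_k)||^delta with delta = 2
# d = -J f / (J^2 + mu) = -12/25 = -0.48
```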
The organization of this paper is as follows. In the next section, some definitions and preliminary results are given. The smoothing L–M algorithm for general NCP(F) is presented in Section 3, where its global convergence is also established. The local superlinear convergence under the local error bound assumption is treated in detail in Section 4. Finally, some numerical results are reported.

## 2. Preliminaries

In this paper, we use Kanzow's smoothing function to approximate the F–B function:

$$\phi_\varepsilon(a,b) = \sqrt{a^2 + b^2 + 2\varepsilon} - a - b.$$

At each iteration, the algorithm deals with the following system:

$$\Phi_\varepsilon(x) := \begin{pmatrix} \phi_\varepsilon(x_1, F_1(x)) \\ \vdots \\ \phi_\varepsilon(x_n, F_n(x)) \end{pmatrix}. \tag{5}$$

Similarly to the definition of $\Psi(\cdot)$, we define $\Psi_\varepsilon(x) := \frac{1}{2}\Phi_\varepsilon(x)^T\Phi_\varepsilon(x) = \frac{1}{2}\|\Phi_\varepsilon(x)\|^2$.

Let the mapping $G: \mathbb{R}^n \to \mathbb{R}^n$ be locally Lipschitzian. Denote by $G'(x)$ the Jacobian of $G$ at a point $x \in \mathbb{R}^n$, and by $\nabla G(x)$ the transposed Jacobian. In the sense of Clarke [17], the generalized Jacobian of $G$ at $x$ can be defined as follows:
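For concreteness, the smoothed residual (5) and its Jacobian, which follows from $\partial\phi_\varepsilon/\partial a = a/\sqrt{a^2+b^2+2\varepsilon} - 1$ and $\partial\phi_\varepsilon/\partial b = b/\sqrt{a^2+b^2+2\varepsilon} - 1$, can be sketched as follows. The callables `F` and `JF` (the Jacobian of $F$) are assumptions of this sketch:

```python
import numpy as np

def Phi_eps(x, F, eps):
    # smoothed F-B residual, eq. (5)
    Fx = F(x)
    return np.sqrt(x**2 + Fx**2 + 2.0 * eps) - x - Fx

def jac_Phi_eps(x, F, JF, eps):
    # Jacobian of Phi_eps: row i is (x_i/s_i - 1) e_i^T + (F_i/s_i - 1) grad F_i^T
    Fx = F(x)
    s = np.sqrt(x**2 + Fx**2 + 2.0 * eps)   # strictly positive whenever eps > 0
    return np.diag(x / s - 1.0) + np.diag(Fx / s - 1.0) @ JF(x)
```

For $\varepsilon > 0$ the denominator $s$ never vanishes, which is exactly why the smoothed system is everywhere differentiable while (2) is not.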

$$\partial G(x) = \operatorname{conv}\Big\{ H \in \mathbb{R}^{n\times n} \,\Big|\, H = \lim_{x^k \to x} G'(x^k),\ x^k \in D_G \Big\},$$

where $D_G$ denotes the set of differentiable points of $G$. To estimate $\partial G(x)$, Qi [18] defined the C-subdifferential of $G$ at $x$:

$$\partial_C G(x)^T := \partial G_1(x) \times \partial G_2(x) \times \cdots \times \partial G_n(x).$$


**Lemma 2.1.** For all $x \in \mathbb{R}^n$ and all $\varepsilon_1, \varepsilon_2 \ge 0$, we have

$$\|\Phi_{\varepsilon_1}(x) - \Phi_{\varepsilon_2}(x)\| \le \kappa\,\big|\sqrt{\varepsilon_1} - \sqrt{\varepsilon_2}\big|, \tag{6}$$

where $\kappa := \sqrt{2n}$. In particular, for all $x \in \mathbb{R}^n$ and all $\varepsilon \ge 0$, we have

$$\|\Phi_\varepsilon(x) - \Phi(x)\| \le \kappa\sqrt{\varepsilon}. \tag{7}$$
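The bound (6) is easy to probe numerically. In the sketch below, the mapping $F(x) = x^3$ is an arbitrary smooth choice of ours used only to generate test data:

```python
import numpy as np

def fb_smooth(a, b, eps):
    # Kanzow's smoothed Fischer-Burmeister function
    return np.sqrt(a**2 + b**2 + 2.0 * eps) - a - b

def lemma21_ratio(n=50, seed=0):
    # returns max over a few (eps1, eps2) pairs of
    #   ||Phi_e1(x) - Phi_e2(x)|| / (sqrt(2n) |sqrt(e1) - sqrt(e2)|),
    # which Lemma 2.1 guarantees never exceeds 1
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(n)
    Fx = x**3                       # any C^1 mapping works; x^3 is just an example
    kappa = np.sqrt(2.0 * n)
    worst = 0.0
    for e1, e2 in [(0.0, 0.3), (0.1, 0.2), (1.0, 2.0)]:
        lhs = np.linalg.norm(fb_smooth(x, Fx, e1) - fb_smooth(x, Fx, e2))
        rhs = kappa * abs(np.sqrt(e1) - np.sqrt(e2))
        worst = max(worst, lhs / rhs)
    return worst
```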

**Lemma 2.2.** Let $\{x^k\}, \{d^k\} \subset \mathbb{R}^n$ and $\{t_k\} \subset \mathbb{R}$ be sequences with $x^{k+1} := x^k + t_k d^k$ such that $\{x^k\} \to x^*$, $\{d^k\} \to d^*$, and $\{t_k\} \downarrow 0$ for certain vectors $x^*, d^* \in \mathbb{R}^n$. Furthermore, let $\{\varepsilon_k\} \subset \mathbb{R}$ be a sequence with $\{\varepsilon_k\} \downarrow 0$. Then

$$\lim_{k\to\infty} \frac{\Psi_{\varepsilon_k}(x^k + t_k d^k) - \Psi_{\varepsilon_k}(x^k)}{t_k} = \nabla\Psi(x^*)^T d^*.$$

**Lemma 2.3.** Let $x \in \mathbb{R}^n$ be arbitrary but fixed, and assume that $x$ is not a solution of NCP(F). Define the constants

$$c(x) := \max_{i \notin \beta(x)} \big\{ \|x_i e_i + F_i(x)\nabla F_i(x)\| \big\} \ge 0$$

and

$$\alpha(x) := \max_{i \notin \beta(x)} \big\{ x_i^2 + F_i(x)^2 \big\} > 0,$$

where $\beta(x) := \{i \mid x_i = F_i(x) = 0\}$. Let $\delta > 0$ be given, and define

$$\varepsilon(x,\delta) := \begin{cases} 1, & \text{if } \dfrac{nc(x)^2}{\delta^2} - \alpha(x) \le 0, \\[8pt] \dfrac{\alpha(x)^2}{2} \cdot \dfrac{\delta^2}{nc(x)^2 - \delta^2\alpha(x)}, & \text{otherwise}. \end{cases}$$

Then

$$\operatorname{dist}\big(\Phi'_\varepsilon(x), \partial_C\Phi(x)\big) \le \delta$$

for all $\varepsilon$ such that $0 < \varepsilon \le \varepsilon(x,\delta)$.

In order to analyze the local superlinear convergence of the algorithm, we need the local error bound assumption; see [19,20] for more details on error bound theory and its applications.

**Assumption 2.4.** Let $X$ be the solution set of (1). There exist a neighborhood $N(X, r_1) = \{x \mid \operatorname{dist}(x, X) \le r_1\}$, $r_1 > 0$, and a constant $c > 0$ such that for all $x \in N(X, r_1)$,

$$\operatorname{dist}(x, X) \le c\,\|\Phi(x)\|. \tag{8}$$

**Proposition 2.5.** Assume that $\{x^k\} \subset \mathbb{R}^n$ is a convergent sequence with limit point $x^* \in \mathbb{R}^n$. Then the following statements hold.

(i) The function $\Phi$ is semismooth, which implies that for any $V_k \in \partial_C\Phi(x^k)$,

$$\|\Phi(x^k) - \Phi(x^*) - V_k(x^k - x^*)\| = o(\|x^k - x^*\|).$$

(ii) If $F'$ is Lipschitz continuous, then $\Phi$ is strongly semismooth, which implies that for any $V_k \in \partial_C\Phi(x^k)$,

$$\|\Phi(x^k) - \Phi(x^*) - V_k(x^k - x^*)\| = O(\|x^k - x^*\|^2).$$

It is well known that the functions $\phi(\cdot)$ and $\Phi(\cdot)$ are locally Lipschitz continuous. Hence, it is easy to deduce the following result.

**Proposition 2.6.** If the solution set $X$ is bounded, then there exist a closed neighborhood $N(X, b) = \{x \mid \operatorname{dist}(x, X) \le b\}$, $b > 0$, and a constant $L_1 > 0$ such that

$$\|\Phi(x^1) - \Phi(x^2)\| \le L_1\|x^1 - x^2\|, \quad \forall x^1, x^2 \in N(X, b).$$


## 3. Algorithm and analysis

In the first part of this section, we propose a smoothing L–M method for general NCP(F). In the second part, we show that the algorithm is well defined and prove its global convergence.

### 3.1. Algorithm

We now state the algorithm formally.

**Algorithm 3.1**

**Step 0 (Initialization).** Given $x^0 \in \mathbb{R}^n$, choose $\lambda, \eta \in (0,1)$, $a \in (0, 0.8)$, $\sigma \in \big(0, \tfrac{1}{4}(1-a)\big)$, $\delta \in (0, 2]$, $c > 0$, and the tolerance $\tau$. Set $\beta_0 := \|\Phi(x^0)\|$, $\kappa := \sqrt{2n}$, $\varepsilon_0 := \big(\tfrac{a}{2\kappa}\beta_0^2\big)^2$, $k := 0$ and $l := 1$.

**Step 1.** If $\|\nabla\Psi(x^k)\| \le \tau$, terminate.

**Step 2.** Compute a solution $d^k \in \mathbb{R}^n$ of the linear system

$$\big(\nabla\Phi_{\varepsilon_k}(x^k)\,\Phi'_{\varepsilon_k}(x^k) + \mu_k I\big)\, d = -\nabla\Phi_{\varepsilon_k}(x^k)\,\Phi_{\varepsilon_k}(x^k), \tag{9}$$

where $\mu_k = \|\Phi(x^k)\|^\delta$.

**Step 3.** Set $\sigma_k = \min\big\{\sigma, \tfrac{1}{4}\|\Phi(x^k)\|^\delta\big\}$. Search for the smallest $s_k$ in $\{0, 1, 2, \ldots\}$ which satisfies

$$\Psi_{\varepsilon_k}(x^k + \lambda^{s_k} d^k) \le \Psi_{\varepsilon_k}(x^k) - \sigma_k \lambda^{s_k}\|d^k\|^2. \tag{10}$$

Set $t_k := \lambda^{s_k}$ and $x^{k+1} := x^k + t_k d^k$.

**Step 4.** If

$$\|\Phi(x^{k+1})\| \le \max\Big\{\eta\beta_k,\ \tfrac{1}{a}\|\Phi(x^{k+1}) - \Phi_{\varepsilon_k}(x^{k+1})\|\Big\}, \tag{11}$$

then reset $l = 1$, set

$$\beta_{k+1} := \|\Phi(x^{k+1})\|$$

and choose $\varepsilon_{k+1}$ such that

$$0 < \varepsilon_{k+1} \le \min\Big\{\big(\tfrac{a}{2\kappa}\beta_{k+1}^2\big)^2,\ \tfrac{3}{4}\varepsilon_k,\ \varepsilon(x^{k+1}, c\beta_{k+1})\Big\}. \tag{12}$$

If condition (11) is not satisfied, find $\varepsilon_{k+1}$ such that

$$0 < \varepsilon_{k+1} \le \min\Big\{\big(\tfrac{a}{2\kappa}\|\Phi(x^{k+1})\|^2\big)^2,\ \tfrac{3}{4}\varepsilon_k\Big\} \tag{13}$$

and set $\beta_{k+1} := \beta_k$, $l := l + 1$.

**Step 5.** Set $k := k + 1$ and return to Step 1.

**Remark 1.**

(1) To guarantee that the smoothing parameter $\varepsilon_k$ goes to 0, we reduce it at each iteration by at least a factor of $3/4$.

(2) The Armijo parameter $\sigma_k$ of the line search in Step 3 seems complicated; it plays an important role in the proof of global convergence.

Without loss of generality, we assume implicitly that the iteration does not terminate after finitely many steps. In other words, the smoothing parameter $\varepsilon_k$ converges to zero, and for all $k \ge 1$ we have $\mu_k > 0$. For convenience of the analysis, we denote the index set

$$K := \{0\} \cup \Big\{ k \in \mathbb{N} \,\Big|\, \|\Phi(x^k)\| \le \max\big\{\eta\beta_{k-1},\ a^{-1}\|\Phi(x^k) - \Phi_{\varepsilon_{k-1}}(x^k)\|\big\} \Big\}. \tag{14}$$
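Algorithm 3.1 can be condensed into a short computational sketch. The Python code below is a simplified illustration, not the paper's exact procedure: it omits the $\varepsilon(x,\delta)$ safeguard of Lemma 2.3 in (12), merges (12) and (13) into the factor-$3/4$ reduction, fixes illustrative parameter values, and assumes that callables `F` and `JF` (the Jacobian of $F$) are supplied:

```python
import numpy as np

def smoothing_lm(x, F, JF, delta=2.0, lam=0.5, eta=0.8, a=0.7,
                 sigma=0.05, tol=1e-6, max_iter=500):
    """Simplified sketch of Algorithm 3.1 (smoothing Levenberg-Marquardt)."""
    n = x.size
    kappa = np.sqrt(2.0 * n)

    def phi(z, eps):
        # smoothed Fischer-Burmeister residual Phi_eps(z)
        Fz = F(z)
        return np.sqrt(z**2 + Fz**2 + 2.0 * eps) - z - Fz

    def jac(z, eps):
        # Jacobian of Phi_eps at z (eps > 0 keeps s strictly positive)
        Fz = F(z)
        s = np.sqrt(z**2 + Fz**2 + 2.0 * eps)
        return np.diag(z / s - 1.0) + np.diag(Fz / s - 1.0) @ JF(z)

    def psi(r):
        return 0.5 * (r @ r)

    beta = np.linalg.norm(phi(x, 0.0))
    eps = (a / (2.0 * kappa) * beta**2) ** 2
    for _ in range(max_iter):
        nPhi = np.linalg.norm(phi(x, 0.0))
        if nPhi <= tol:
            break
        mu = nPhi ** delta                     # mu_k = ||Phi(x^k)||^delta
        r, J = phi(x, eps), jac(x, eps)
        d = np.linalg.solve(J.T @ J + mu * np.eye(n), -J.T @ r)  # step 2, eq. (9)
        sig_k = min(sigma, 0.25 * mu)          # step 3, Armijo parameter
        t = 1.0
        while psi(phi(x + t * d, eps)) > psi(r) - sig_k * t * (d @ d):
            t *= lam                           # backtracking with ratio lambda
            if t < 1e-12:
                break
        x = x + t * d
        # step 4 (simplified): update beta and shrink the smoothing parameter
        nPhi1 = np.linalg.norm(phi(x, 0.0))
        gap = np.linalg.norm(phi(x, 0.0) - phi(x, eps))
        if nPhi1 <= max(eta * beta, gap / a):
            beta = nPhi1
        eps = min((a / (2.0 * kappa) * nPhi1**2) ** 2, 0.75 * eps)
    return x
```

On the one-dimensional NCP with $F(x) = x - 2$, for example, the iterates converge to the nondegenerate solution $x^* = 2$.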

First, we must prove that the algorithm is well defined. Hence we need the following results.

**Lemma 3.1.** The following two statements hold:

(a) We have

$$\|\Phi(x^k) - \Phi_{\varepsilon_k}(x^k)\| \le a\|\Phi(x^k)\|^2 \tag{15}$$

for all $k \ge 0$.


(b) We have

$$\operatorname{dist}_F\big(\Phi'_{\varepsilon_k}(x^k),\, \partial_C\Phi(x^k)\big) \le c\|\Phi(x^k)\| \tag{16}$$

for all $k \in K$ with $k \ge 1$.

**Proof.** The first statement follows immediately from Lemma 2.1, (12) and (13). The second statement is cited directly from [1]. □

**Proposition 3.2.** Algorithm 3.1 is well defined.

**Proof.** Since the function $F$ is continuously differentiable and $\nabla\Psi_\varepsilon(x) = (\Phi'_\varepsilon(x))^T\Phi_\varepsilon(x)$, we obtain

$$\Psi_{\varepsilon_k}(x^k + \lambda d^k) - \Psi_{\varepsilon_k}(x^k) = \lambda\,\nabla\Psi_{\varepsilon_k}(x^k)^T d^k + o(\lambda) = \lambda\,\Phi_{\varepsilon_k}(x^k)^T\Phi'_{\varepsilon_k}(x^k)\, d^k + o(\lambda).$$

It is easy to deduce from the expression for $\sigma_k$ that $\sigma_k \le \frac{1}{2}\mu_k$. Thus we get

$$\Psi_{\varepsilon_k}(x^k + \lambda d^k) - \Psi_{\varepsilon_k}(x^k) = -\lambda\,(d^k)^T\big(\nabla\Phi_{\varepsilon_k}(x^k)\Phi'_{\varepsilon_k}(x^k) + \mu_k I\big)d^k + o(\lambda) \le -\lambda\mu_k\|d^k\|^2 + o(\lambda) \le -2\lambda\sigma_k\|d^k\|^2 + o(\lambda).$$

Since $\sigma_k$ is always positive, the line search condition (10) is satisfied for $\lambda$ small enough. Thus the algorithm is well defined. □

### 3.2. Global convergence

In this subsection, we prove that the smoothing Levenberg–Marquardt algorithm is globally convergent: every accumulation point of $\{x^k\}$ is a stationary point of $\Psi$. We begin with the following obvious result.

**Lemma 3.3.** For all $k \in \mathbb{N}$,

$$\|\Phi_{\varepsilon_k}(x^{k+1})\| < \|\Phi_{\varepsilon_k}(x^k)\|.$$

**Lemma 3.4.** Let $\{x^k\} \subset \mathbb{R}^n$ be a sequence generated by Algorithm 3.1. Assume that $\{x^k\}$ has an accumulation point $x^*$ which is a solution of NCP(F). Then the index set $K$ is infinite and $\{\varepsilon_k\} \to 0$.

**Lemma 3.5.** For all $l \in \mathbb{N}$,

kUek ðxkþ1 Þk < kUek ðxk Þk: Lemma 3.4 (). Let {xk} # Rn be a sequence generated by Algorithm 3.1 . Assume that {xk} has an accumulation point x*, which is a solution of NCP(F). Then the index set K is inﬁnite and {ek} ? 0. Lemma 3.5. The following statement holds for all l 2 N

pﬃﬃﬃﬃ pﬃﬃﬃﬃﬃﬃﬃﬃ kUelþ1 ðxlþ1 Þk þ j elþ1 6 kUel ðxl Þk þ j el : Proof. By Lemmas 2.1 and 3.3,

pﬃﬃﬃﬃ pﬃﬃﬃﬃﬃﬃﬃﬃ kUelþ1 ðxlþ1 Þk 6 kUelþ1 ðxlþ1 Þ  Uel ðxlþ1 Þk þ kUel ðxlþ1 Þk 6 j el  elþ1 þ kUel ðxl Þk: This leads to

pﬃﬃﬃﬃ pﬃﬃﬃﬃﬃﬃﬃﬃ kUelþ1 ðxlþ1 Þk þ j elþ1 6 kUel ðxl Þk þ j el

ð17Þ

which is just we want to prove. h From this lemma, and using the same technique as , we can prove the next important propositions. Note that, since the update rules of ek of our algorithm is different from that in , the level set L0 is also slightly different. Proposition 3.6. The sequence {xk} generated by Algorithm 3.1 remains in the level set

L0 :¼ fx 2 Rn jWðxÞ 6 ð1 þ b0 aÞ2 Wðx0 Þg:

ð18Þ

The index set K is critical in the analysis of the algorithm, because we have the following result. Proposition 3.7. Let {xk} be the sequence generated by Algorithm 3.1 and have at least one accumulation point, then the index set K is inﬁnite if and only if each accumulation point of {xk} is a solution of NCP(F). Proof. If K is inﬁnite, by Proposition 5.6 in , any accumulation point x* is a solution of NCP(F). Conversely, if one accumulation point is a solution of NCP(F), then Proposition Lemma 5.1 in  shows that K is inﬁnite. h The proof of this proposition immediately shows the following two results:


**Lemma 3.8.** If any one accumulation point of the iteration sequence $\{x^k\}$ is a solution of NCP(F), then each accumulation point of $\{x^k\}$ is a solution of NCP(F).

**Lemma 3.9.** If $K$ is finite, then none of the accumulation points of $\{x^k\}$ is a solution of NCP(F).

Now we prove the main result of this section.

**Theorem 3.10.** Let $\{x^k\}$ be a sequence generated by Algorithm 3.1, and let $\{x^k\}_L$ be a subsequence converging to an accumulation point $x^* \in \mathbb{R}^n$. Then $x^*$ is a stationary point of the function $\Psi$.

**Proof.** If $K$ is infinite, then by Proposition 3.7, $x^*$ is a solution of NCP(F), which is certainly a stationary point of $\Psi$.

Hence we assume that the set $K$ is finite. Without loss of generality, we assume $K \cap L = \emptyset$. Let $\tilde{k}$ be the largest element of $K$. From Step 4 of Algorithm 3.1, we get that for all $k > \tilde{k}$,

$$\beta_k = \beta_{\tilde{k}} = \|\Phi(x^{\tilde{k}})\|, \tag{19}$$

$$\|\Phi(x^k)\| > \eta\|\Phi(x^{\tilde{k}})\| > 0. \tag{20}$$

This means that, for all $k > \tilde{k}$,

$$\Psi(x^k) > \eta^2\Psi(x^{\tilde{k}}) > 0. \tag{21}$$

Assume by contradiction that $\nabla\Psi(x^*) \ne 0$. We first show that $\liminf_{k\in L} t_k = 0$. If $\liminf_{k\in L} t_k = t^* > 0$, the line search rule (10) in Step 3 shows that

$$\Psi_{\varepsilon_k}(x^{k+1}) - \Psi_{\varepsilon_k}(x^k) \le -\sigma_k t_k\|d^k\|^2 \tag{22}$$

for all $k \in L$. It follows from the L–M equation (9) that

$$d^k = -\big[\nabla\Phi_{\varepsilon_k}(x^k)\Phi'_{\varepsilon_k}(x^k) + \mu_k I\big]^{-1}\nabla\Phi_{\varepsilon_k}(x^k)\Phi_{\varepsilon_k}(x^k). \tag{23}$$

Thus, we obtain

$$\|d^k\|^2 = \nabla\Psi_{\varepsilon_k}(x^k)^T\big[\nabla\Phi_{\varepsilon_k}(x^k)\Phi'_{\varepsilon_k}(x^k) + \mu_k I\big]^{-2}\nabla\Psi_{\varepsilon_k}(x^k). \tag{24}$$

Denote the singular value decomposition (SVD) of $\Phi'_{\varepsilon_k}(x^k)$ by

$$\Phi'_{\varepsilon_k}(x^k) = U_k\Sigma_k V_k^T = U_k\operatorname{diag}(\sigma_{k,1}, \sigma_{k,2}, \ldots, \sigma_{k,n})\,V_k^T, \tag{25}$$

where $U_k, V_k \in \mathbb{R}^{n\times n}$ are orthogonal matrices and $\sigma_{k,1} \ge \sigma_{k,2} \ge \cdots \ge \sigma_{k,n} \ge 0$. Hence, (23) and (24) can be rewritten as

$$d^k = -V_k\Lambda_k V_k^T\nabla\Psi_{\varepsilon_k}(x^k) \tag{26}$$

and

$$\|d^k\|^2 = \nabla\Psi_{\varepsilon_k}(x^k)^T V_k\Lambda_k^2 V_k^T\nabla\Psi_{\varepsilon_k}(x^k), \tag{27}$$

where

$$\Lambda_k = \operatorname{diag}\Big(\frac{1}{\sigma_{k,1}^2 + \mu_k},\ \frac{1}{\sigma_{k,2}^2 + \mu_k},\ \ldots,\ \frac{1}{\sigma_{k,n}^2 + \mu_k}\Big). \tag{28}$$
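The identities (26)–(28) are a standard consequence of the SVD and are easy to verify numerically. In this sketch, the random matrix `J` and vector `phi` merely stand in for $\Phi'_{\varepsilon_k}(x^k)$ and $\Phi_{\varepsilon_k}(x^k)$:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 6
J = rng.standard_normal((n, n))   # stand-in for Phi'_eps(x^k)
phi = rng.standard_normal(n)      # stand-in for Phi_eps(x^k)
mu = 0.3
grad = J.T @ phi                  # gradient of Psi_eps at x^k

# direct solve of the L-M equation (9)
d = np.linalg.solve(J.T @ J + mu * np.eye(n), -grad)

# SVD route of eqs. (25)-(28): J^T J + mu I = V (Sigma^2 + mu I) V^T
U, s, Vt = np.linalg.svd(J)
Lam = np.diag(1.0 / (s**2 + mu))
d_svd = -(Vt.T @ Lam @ Vt) @ grad
```

The spectral form makes transparent why $\mu_k > 0$ bounds every eigenvalue of the inverse by $1/\mu_k$, the fact exploited in the bound (29) below.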

Since $\{x^k\}_L \to x^*$ and $\{\varepsilon_k\} \to 0$, there exists some $m > 0$ such that for all $k \in L$ and all $i = 1, \ldots, n$,

$$\sigma_{k,i}^2 < m.$$

Since $\nabla\Psi(x^*) \ne 0$, we have $\{\mu_k\}_L \to \|\Phi(x^*)\|^\delta > 0$. Hence, there exists a scalar $M > 0$ such that for all $k \in L$ sufficiently large,

$$\|d^k\|^2 \ge \frac{1}{(m + \mu_k)^2}\|\nabla\Psi_{\varepsilon_k}(x^k)\|^2 \ge M\|\nabla\Psi_{\varepsilon_k}(x^k)\|^2. \tag{29}$$


So we obtain from the line search rule in Step 3 that

$$\Psi_{\varepsilon_k}(x^{k+1}) - \Psi_{\varepsilon_k}(x^k) \le -\sigma_k t_k M\|\nabla\Psi_{\varepsilon_k}(x^k)\|^2 \tag{30}$$

for all $k \in L$ large enough. On the other hand, since $K$ is finite, we have

$$\{\sigma_k\}_L \to \sigma^* = \min\Big\{\sigma,\ \tfrac{1}{4}\|\Phi(x^*)\|^\delta\Big\} > 0. \tag{31}$$

Let $c := \sigma^* t^* M\|\nabla\Psi(x^*)\|^2 > 0$. Remembering that $\{\varepsilon_k\} \to 0$, we obtain

$$\Psi(x^{k+1}) - \Psi(x^k) \le -\frac{c}{2} \tag{32}$$

for all $k \in L$ sufficiently large. Let $L = \{l_0, l_1, l_2, \ldots\}$. Then it is easy to deduce from Lemmas 2.1 and 3.5 that

$$\|\Phi(x^{l_{j+1}})\| \le \|\Phi(x^{l_j+1})\| + 2\kappa\sqrt{\varepsilon_{l_j+1}}.$$

This implies that for all $l_j$ sufficiently large,

$$\Psi(x^{l_{j+1}}) = \frac{1}{2}\|\Phi(x^{l_{j+1}})\|^2 \le \Psi(x^{l_j+1}) + \frac{c}{4}. \tag{33}$$

Together, (32) and (33) give

$$\Psi(x^{l_{j+1}}) - \Psi(x^{l_j}) = \big(\Psi(x^{l_{j+1}}) - \Psi(x^{l_j+1})\big) + \big(\Psi(x^{l_j+1}) - \Psi(x^{l_j})\big) \le \frac{c}{4} - \frac{c}{2} = -\frac{c}{4}$$

for all $l_j$ sufficiently large. This contradicts the nonnegativity of $\Psi(x)$. Hence $\liminf_{k\in L} t_k = 0$. Passing to a subsequence if necessary, we assume $\lim_{k\in L} t_k = 0$. Hence, we have

$$\frac{\Psi_{\varepsilon_k}(x^k + \lambda^{s_k-1}d^k) - \Psi_{\varepsilon_k}(x^k)}{\lambda^{s_k-1}} > -\sigma_k\|d^k\|^2 \tag{34}$$

for all $k \in L$ large enough. This implies

$$\nabla\Psi(x^*)^T d^* \ge -\sigma^*\|d^*\|^2, \tag{35}$$

where $d^*$ is the limit of $\{d^k\}_L$ and $\sigma^*$ is the limit of $\{\sigma_k\}_L$. Without loss of generality, suppose $\{V_k\}_L \to V^*$ and $\{\Lambda_k\}_L \to \Lambda^* = \operatorname{diag}\big(\frac{1}{\sigma_1^2 + \mu^*}, \frac{1}{\sigma_2^2 + \mu^*}, \ldots, \frac{1}{\sigma_n^2 + \mu^*}\big)$, where $\mu^* = \|\Phi(x^*)\|^\delta > 0$. Then inequality (35) is equivalent to

$$-\nabla\Psi(x^*)^T V^*\Lambda^* V^{*T}\nabla\Psi(x^*) \ge -\sigma^*\,\nabla\Psi(x^*)^T V^*\Lambda^{*2} V^{*T}\nabla\Psi(x^*). \tag{36}$$

Recall from (31) that $\sigma^* \le \frac{1}{4}\mu^*$. Hence, (36) implies that

$$V^{*T}\nabla\Psi(x^*) = 0, \tag{37}$$

which implies $\nabla\Psi(x^*) = 0$. This completes our proof. □

## 4. Local convergence under local error bound

In this section, we study the local superlinear convergence of the algorithm. First, we need the following assumptions.

**Assumption 4.1.** The sequence $\{x^k\}$ generated by Algorithm 3.1 has at least one accumulation point $x^*$ which is a solution of the NCP(F).

**Assumption 4.2.** The solution set $X$ is bounded, and $F'$ is locally Lipschitz continuous on a neighborhood $N(X, r_2) = \{x \mid \operatorname{dist}(x, X) \le r_2\}$, $r_2 > 0$.

From this assumption, we get the strong semismoothness of the function $\Phi(x)$ on a neighborhood of $X$. Thus, by Proposition 2.5, we have the next lemma.

**Lemma 4.3.** For any $x^1, x^2 \in N(X, r_2)$, there exists $L_2 > 0$ such that for any $V \in \partial_C\Phi(x^1)$,

$$\|\Phi(x^1) - \Phi(x^2) - V(x^1 - x^2)\| \le L_2\|x^1 - x^2\|^2.$$

For each point $x^k$, denote by $\bar{x}^k$ a point such that

$$\|x^k - \bar{x}^k\| = \operatorname{dist}(x^k, X), \quad \bar{x}^k \in X. \tag{38}$$

To study the superlinear convergence of the algorithm, we first establish the following lemma.


**Lemma 4.4.** Denote $r = \min\{r_1, r_2, b\}$, where $r_1, r_2, b$ are defined in Assumption 2.4, Lemma 4.3 and Proposition 2.6, respectively. If $k \in K$ and $x^k \in N(X, r)$, then there exist $m_1, m_2 > 0$ such that

$$\|d^k\|^2 \le m_1\operatorname{dist}(x^k, X)^2, \tag{39}$$

$$\|\Phi_{\varepsilon_k}(x^k) + \Phi'_{\varepsilon_k}(x^k)d^k\|^2 \le m_2\operatorname{dist}(x^k, X)^{2+\delta}. \tag{40}$$

**Proof.** Since $d^k$ satisfies the L–M equation (9), $d^k$ must be the solution of the following problem:

$$\min_{d\in\mathbb{R}^n}\ p_k(d) = \frac{1}{2}\|\Phi_{\varepsilon_k}(x^k) + \Phi'_{\varepsilon_k}(x^k)d\|^2 + \frac{1}{2}\mu_k\|d\|^2. \tag{41}$$

Then we obtain that for any $V_k \in \partial_C\Phi(x^k)$,

$$\begin{aligned} \|d^k\|^2 &\le \frac{2}{\mu_k}\,p_k(\bar{x}^k - x^k) \le \frac{1}{\mu_k}\|\Phi_{\varepsilon_k}(x^k) + \Phi'_{\varepsilon_k}(x^k)(\bar{x}^k - x^k)\|^2 + \|\bar{x}^k - x^k\|^2 \\ &\le \frac{1}{\mu_k}\Big[\|\Phi(\bar{x}^k) - \Phi(x^k) - V_k(\bar{x}^k - x^k)\| + \|\Phi_{\varepsilon_k}(x^k) - \Phi(x^k)\| + \|\Phi'_{\varepsilon_k}(x^k) - V_k\|\cdot\|\bar{x}^k - x^k\|\Big]^2 + \|\bar{x}^k - x^k\|^2. \end{aligned}$$

Since $\|\bar{x}^k - x^k\| = \operatorname{dist}(x^k, X) \le r$, from Lemma 4.3 we have

$$\|\Phi(\bar{x}^k) - \Phi(x^k) - V_k(\bar{x}^k - x^k)\| \le L_2\|x^k - \bar{x}^k\|^2. \tag{42}$$

From Lemma 2.1, together with Step 4 of the algorithm and Proposition 2.6, we get

$$\|\Phi_{\varepsilon_k}(x^k) - \Phi(x^k)\| \le \kappa\sqrt{\varepsilon_k} \le \frac{a}{2}\|\Phi(x^k)\|^2 \le \frac{a}{2}L_1^2\|x^k - \bar{x}^k\|^2. \tag{43}$$

Also, from Lemma 3.1(b), there exists some $V_k \in \partial_C\Phi(x^k)$ such that

$$\|\Phi'_{\varepsilon_k}(x^k) - V_k\|\cdot\|x^k - \bar{x}^k\| \le c\|\Phi(x^k)\|\cdot\|x^k - \bar{x}^k\| \le cL_1\|x^k - \bar{x}^k\|^2. \tag{44}$$

Notice from the local error bound Assumption 2.4 that

$$\mu_k = \|\Phi(x^k)\|^\delta \ge \frac{1}{c^\delta}\|x^k - \bar{x}^k\|^\delta. \tag{45}$$

On the other hand, by Proposition 2.6,

$$\mu_k = \|\Phi(x^k) - \Phi(\bar{x}^k)\|^\delta \le L_1^\delta\|x^k - \bar{x}^k\|^\delta. \tag{46}$$

Hence, if we set

$$m_1 = r^{2-\delta}c^\delta\Big[cL_1 + L_2 + \frac{a}{2}L_1^2\Big]^2 + 1, \tag{47}$$

then by (42)–(45) we get

$$\|d^k\|^2 \le m_1\|x^k - \bar{x}^k\|^2, \tag{48}$$

which proves the first statement of the lemma. Similarly, (41) shows that

$$\begin{aligned} \|\Phi_{\varepsilon_k}(x^k) + \Phi'_{\varepsilon_k}(x^k)d^k\|^2 &\le 2p_k(d^k) \le 2p_k(\bar{x}^k - x^k) = \|\Phi_{\varepsilon_k}(x^k) + \Phi'_{\varepsilon_k}(x^k)(\bar{x}^k - x^k)\|^2 + \mu_k\|\bar{x}^k - x^k\|^2 \\ &\le \Big[cL_1 + L_2 + \frac{a}{2}L_1^2\Big]^2\|x^k - \bar{x}^k\|^4 + L_1^\delta\|x^k - \bar{x}^k\|^{2+\delta}. \end{aligned}$$

So, if we set $m_2 = r^{2-\delta}\big[cL_1 + L_2 + \frac{a}{2}L_1^2\big]^2 + L_1^\delta$, then the second statement of the lemma is proved. □

**Lemma 4.5.** If $x^k, x^k + d^k \in N(X, r)$ and $k \in K$, then there exists $m_3 > 0$ such that

$$\operatorname{dist}(x^k + d^k, X) \le m_3\operatorname{dist}(x^k, X)^{1+\frac{\delta}{2}}.$$

**Proof.** Since $x^k, x^k + d^k \in N(X, r)$, by Assumption 2.4 and Lemma 4.3 we obtain that for any $V_k \in \partial_C\Phi(x^k)$,

$$\frac{1}{c}\operatorname{dist}(x^k + d^k, X) \le \|\Phi(x^k + d^k)\| \le \|\Phi(x^k) + V_k d^k\| + L_2\|d^k\|^2 \le \|\Phi_{\varepsilon_k}(x^k) + \Phi'_{\varepsilon_k}(x^k)d^k\| + \kappa\sqrt{\varepsilon_k} + \|\Phi'_{\varepsilon_k}(x^k) - V_k\|\cdot\|d^k\| + L_2\|d^k\|^2. \tag{49}$$

By Lemma 3.1 and Proposition 2.6, there exists a $V_k \in \partial_C\Phi(x^k)$ such that

$$\|\Phi'_{\varepsilon_k}(x^k) - V_k\|\cdot\|d^k\| \le c\|\Phi(x^k)\|\cdot\|d^k\| \le cL_1\|x^k - \bar{x}^k\|\cdot\|d^k\|.$$

Thus, according to Lemma 4.4 and the update rules for $\varepsilon_k$, we deduce from the previous inequality that

$$\begin{aligned} \frac{1}{c}\operatorname{dist}(x^k + d^k, X) &\le m_2^{\frac12}\operatorname{dist}(x^k, X)^{1+\frac{\delta}{2}} + \frac{a}{2}L_1^2\|x^k - \bar{x}^k\|^2 + cL_1\|x^k - \bar{x}^k\|\cdot\|d^k\| + L_2\|d^k\|^2 \\ &\le m_2^{\frac12}\operatorname{dist}(x^k, X)^{1+\frac{\delta}{2}} + \Big(cL_1 m_1^{\frac12} + L_2 m_1 + \frac{a}{2}L_1^2\Big)\operatorname{dist}(x^k, X)^2. \end{aligned}$$

Let $m_3 = c\big[m_2^{\frac12} + \big(cL_1 m_1^{\frac12} + L_2 m_1 + \frac{a}{2}L_1^2\big)r^{1-\frac{\delta}{2}}\big]$. Then we have shown that

$$\operatorname{dist}(x^k + d^k, X) \le m_3\operatorname{dist}(x^k, X)^{1+\frac{\delta}{2}}, \tag{50}$$

which is just what we wanted to prove. □

Furthermore, we must discuss under what conditions the requirement $x^k, x^k + d^k \in N(X, r)$ of this lemma is satisfied.

**Lemma 4.6.** Let $K_0$ be the index set $K_0 = \{k \mid x^k \in N(X, r),\ k \in K\}$. Then for $k \in K_0$ sufficiently large, $x^k + d^k \in N(X, r)$.

**Proof.** Since $x^k \in N(X, r)$, by Lemma 4.4 we have

$$\operatorname{dist}(x^k + d^k, X) \le \big(1 + m_1^{\frac12}\big)\operatorname{dist}(x^k, X) \le \big(1 + m_1^{\frac12}\big)c\|\Phi(x^k)\|.$$

Notice that $k \in K$, which implies $\{\|\Phi(x^k)\|\}_K \to 0$; hence for $k \in K_0$ large enough, $\operatorname{dist}(x^k + d^k, X) \le r$. □

**Lemma 4.7.** For the subsequence $\{x^k\}$, $k \in K_0$, we have

$$\|\Phi(x^k + d^k)\| = o(\|\Phi(x^k)\|). \tag{51}$$

**Proof.** Denote by $\tilde{x}^k$ a point such that

$$\|\tilde{x}^k - (x^k + d^k)\| = \operatorname{dist}(x^k + d^k, X), \quad \tilde{x}^k \in X.$$

For $k \in K_0$ large enough, $x^k, x^k + d^k \in N(X, r)$. According to Proposition 2.6, we have

$$\|\Phi(x^k + d^k)\| = \|\Phi(x^k + d^k) - \Phi(\tilde{x}^k)\| \le L_1\|\tilde{x}^k - (x^k + d^k)\| = L_1\operatorname{dist}(x^k + d^k, X) \le L_1 m_3\operatorname{dist}(x^k, X)^{1+\frac{\delta}{2}} \le L_1 m_3 c^{1+\frac{\delta}{2}}\|\Phi(x^k)\|^{1+\frac{\delta}{2}}.$$

Hence

$$\|\Phi(x^k + d^k)\| = o(\|\Phi(x^k)\|). \qquad \square$$

In order to establish the superlinear convergence of the algorithm, we need the following lemma, which is slightly different from Lemma 3.2 in [1].

**Lemma 4.8.** Let $\bar{\lambda}$ be a constant satisfying

$$\bar{\lambda} \in \Big[\frac{1}{2} - \frac{(1-a-4\sigma)^2}{2(2+a)^2},\ \frac{1}{2}\Big),$$

and suppose

$$\Psi(y) - \Psi(x^k) \le -2\bar{\lambda}\Psi(x^k) \tag{52}$$

for some $k \in K$ and $y \in \mathbb{R}^n$. If $\|\Phi(x^k)\| < 1$, then

$$\Psi_{\varepsilon_k}(y) - \Psi_{\varepsilon_k}(x^k) \le -2\sigma\Psi_{\varepsilon_k}(x^k).$$

**Proof.** Since $k \in K$ and $\|\Phi(x^k)\| < 1$, it follows from (7) that

$$\sqrt{\varepsilon_k} \le \frac{a}{2\kappa}\|\Phi(x^k)\|^2 < \frac{a}{2\kappa}\|\Phi(x^k)\|, \tag{53}$$

which implies

$$\|\Phi_{\varepsilon_k}(x^k)\| \ge \|\Phi(x^k)\| - \kappa\sqrt{\varepsilon_k} \ge \Big(1 - \frac{a}{2}\Big)\|\Phi(x^k)\|$$

and

$$\|\Phi_{\varepsilon_k}(y)\| \le \|\Phi(y)\| + \kappa\sqrt{\varepsilon_k} \le \|\Phi(y)\| + \frac{a}{2}\|\Phi(x^k)\|$$

for all $y \in \mathbb{R}^n$, $k \in K$. Therefore, by (52),

$$\begin{aligned} \Psi_{\varepsilon_k}(y) - \Psi_{\varepsilon_k}(x^k) &\le \frac{1}{2}\Big(\|\Phi(y)\| + \frac{a}{2}\|\Phi(x^k)\|\Big)^2 - \frac{1}{2}\Big(1 - \frac{a}{2}\Big)^2\|\Phi(x^k)\|^2 \\ &\le \Psi(y) + a\sqrt{1 - 2\bar{\lambda}}\,\Psi(x^k) - (1-a)\Psi(x^k) \\ &\le -\Big(2\bar{\lambda} - a\big(1 + \sqrt{1 - 2\bar{\lambda}}\big)\Big)\Psi(x^k). \end{aligned} \tag{54}$$

Denote

$$u(\bar{\lambda}) = 2\bar{\lambda} - a\big(1 + \sqrt{1 - 2\bar{\lambda}}\big).$$

Set $\tilde{\lambda} = \frac{1}{2} - \frac{(1-a-4\sigma)^2}{2(2+a)^2}$, so that $\sqrt{1 - 2\tilde{\lambda}} = \frac{1-a-4\sigma}{2+a}$. Then

$$u(\tilde{\lambda}) = 1 - \frac{(1-a-4\sigma)^2}{(2+a)^2} - a\Big(1 + \frac{1-a-4\sigma}{2+a}\Big) \ge 1 - \frac{1-a-4\sigma}{2+a} - a\Big(1 + \frac{1-a-4\sigma}{2+a}\Big) = \frac{1-a+4\sigma(1+a)}{2+a} \ge 4\sigma,$$

where the last inequality uses $\sigma < \frac{1}{4}(1-a)$. Noticing that the function $u$ is monotonically increasing on $[0, \frac{1}{2}]$ and that $a \in (0, 0.8)$, we get

$$\Psi_{\varepsilon_k}(y) - \Psi_{\varepsilon_k}(x^k) \le -4\sigma\Psi(x^k) \le -2\sigma\Big(1 + \frac{a}{2}\Big)^2\Psi(x^k) \le -2\sigma\Psi_{\varepsilon_k}(x^k). \tag{55}$$

This completes the proof. □

Finally, we establish the main result of this section.

**Theorem 4.9.** Let $\{x^k\}$ be a sequence generated by Algorithm 3.1, and suppose that Assumptions 2.4, 4.1 and 4.2 hold. Then $\{\operatorname{dist}(x^k, X)\}$ converges to 0 superlinearly. Furthermore, if $\delta = 2$, the convergence rate is quadratic.

**Proof.** It follows from Assumption 4.1 that the index set $K_0$ is infinite. From Lemmas 4.5 and 4.6, we have

$$\operatorname{dist}(x^k + d^k, X) = o\big(\operatorname{dist}(x^k, X)\big), \quad k \in K_0,\ k \to \infty. \tag{56}$$

Let $\bar{\lambda} = \max\Big\{\frac{1}{2} - \frac{(1-a-4\sigma)^2}{2(2+a)^2},\ \frac{1-\eta^2}{2}\Big\}$. We obtain from Lemma 4.7 that for any $k \in K_0$ large enough,

$$\Psi(x^k + d^k) - \Psi(x^k) \le -2\bar{\lambda}\Psi(x^k) \tag{57}$$

and $\|\Phi(x^k)\| < 1$. Thus, Lemma 4.8 implies that the stepsize is $t_k = 1$, i.e. $x^{k+1} = x^k + d^k$. Furthermore, from the definition of $\bar{\lambda}$, it is easy to deduce from (57) that

$$\|\Phi(x^{k+1})\| \le \eta\|\Phi(x^k)\| = \eta\beta_k.$$

Hence $k + 1 \in K$. Finally, by Lemma 4.6, $k + 1 \in K_0$. We have proved that every $k \in \mathbb{N}$ large enough belongs to $K_0$. Therefore, by (56), the superlinear convergence of the algorithm is established. If $\delta = 2$, the quadratic convergence follows immediately from Lemma 4.5. This completes the whole proof. □

## 5. Numerical experiments

In this section, we give several examples to illustrate the efficiency of the proposed method. In all examples, we set the tolerance $\tau = 1.0\mathrm{e}{-6}$ and $\eta = 0.8$, $a = 0.7$, $\lambda = 0.5$, $c = 10$, $\delta = 2$, $\sigma = 0.15$.

**Problem 1.** Let $F(x) = Mx + q$, where

$$M = \begin{pmatrix} 4 & -1 & 0 & \cdots & 0 \\ -2 & 4 & -1 & \cdots & 0 \\ 0 & -2 & 4 & \cdots & 0 \\ \vdots & & \ddots & \ddots & \vdots \\ 0 & 0 & \cdots & -2 & 4 \end{pmatrix}$$

and $q = (-1, -1, \ldots, -1)^T$.


**Problem 2.** Let $F(x) = Mx + q$, where

$$M = \begin{pmatrix} 1 & 2 & 2 & \cdots & 2 \\ & 1 & 2 & \cdots & 2 \\ & & 1 & \cdots & 2 \\ & & & \ddots & \vdots \\ & & & & 1 \end{pmatrix}$$

and $q = (-1, -1, \ldots, -1)^T$.

**Problem 3.** Let $F(x) = (f_1(x), f_2(x), f_3(x), f_4(x))^T$, where

$$\begin{aligned} f_1(x) &= 3x_1^2 + 2x_1x_2 + 2x_2^2 + x_3 + 3x_4 - 6, \\ f_2(x) &= 2x_1^2 + x_1 + x_2^2 + 10x_3 + 2x_4 - 2, \\ f_3(x) &= 3x_1^2 + x_1x_2 + 2x_2^2 + 2x_3 + 9x_4 - 9, \\ f_4(x) &= x_1^2 + 3x_2^2 + 2x_3 + 3x_4 - 3. \end{aligned}$$

This example is the Kojima–Shindo problem. It has two solutions, $x^{1*} = \big(\tfrac{\sqrt{6}}{2}, 0, 0, 0.5\big)^T$ and $x^{2*} = (1, 0, 3, 0)^T$; the former is a degenerate solution and the latter is a nondegenerate solution.

**Problem 4.** Let $F(x) = (f_1(x), f_2(x), f_3(x))^T$, where

$$\begin{aligned} f_1(x) &= x_1 - 2, \\ f_2(x) &= x_2 - x_3 + x_2^3 + 3, \\ f_3(x) &= x_2 + x_3 + 2x_3^3 - 3. \end{aligned}$$

This problem has the solution $(2, 0, 1)^T$.

**Problem 5.** This problem is constructed in the way proposed by Gomes-Ruggiero et al. [21]. Let $f: \mathbb{R}^n \to \mathbb{R}^n$ be continuously differentiable, write $f(x) = (f_1(x), f_2(x), \ldots, f_n(x))^T$, and let $x^* = (0, 1, 0, 1, \ldots)^T \in \mathbb{R}^n$. Set

$$F_i(x) = \begin{cases} f_i(x) - f_i(x^*) + 1, & \text{if } i \text{ is odd}, \\ f_i(x) - f_i(x^*), & \text{otherwise}, \end{cases}$$

for $i = 1, \ldots, n$. Obviously, $x^*$ is a nondegenerate solution of NCP(F). In this example, the function $f$ is defined as follows (Problem 3 in [22]):

$$f_i(x) = -(n+1) + x_i + \sum_{j=1}^n x_j, \quad i = 1, \ldots, n-1, \qquad f_n(x) = -1 + \prod_{j=1}^n x_j.$$
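The construction of Problem 5 is mechanical and easy to reproduce. The sketch below (function and variable names are ours, not the paper's) builds $F$ from the Brown almost-linear function used above:

```python
import numpy as np

def make_problem5(n):
    # f is the Brown almost-linear function (Problem 3 in Spedicato-Huang):
    #   f_i(x) = -(n+1) + x_i + sum_j x_j   for i = 1, ..., n-1
    #   f_n(x) = -1 + prod_j x_j
    def f(x):
        out = x + x.sum() - (n + 1.0)
        out[-1] = np.prod(x) - 1.0
        return out

    xstar = np.array([0.0 if i % 2 == 0 else 1.0 for i in range(n)])  # (0,1,0,1,...)
    fstar = f(xstar)
    # F_i(x) = f_i(x) - f_i(x*) + 1 for odd i (1-based), else f_i(x) - f_i(x*)
    shift = np.where(np.arange(n) % 2 == 0, 1.0, 0.0)

    def F(x):
        return f(x) - fstar + shift

    return F, xstar
```

At $x^*$, $F$ equals 1 on the odd (1-based) components, where $x^*_i = 0$, and 0 on the even ones, where $x^*_i = 1$, so $x^*$ is indeed a nondegenerate solution.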

The results are listed in Table 1. "Dim" denotes the dimension of the problem, "Iter" the number of iterations, and "NF" the number of evaluations of the function $F$.

**Table 1.** Numerical results for Algorithm 3.1.

| Problem | Dim | Start point | Iter | NF | $\Psi(x)$ | $\|\nabla\Psi(x)\|$ |
|---|---|---|---|---|---|---|
| Pro 1 | 4 | (1, 1, 1, 1) | 6 | 6 | 6.2e−17 | 3.2e−08 |
| Pro 1 | 4 | (1, 2, 3, 4) | 11 | 11 | 1.1e−26 | 4.3e−13 |
| Pro 1 | 8 | (0, 0, …, 0) | 5 | 5 | 7.8e−15 | 2.2e−09 |
| Pro 2 | 4 | (0, 0, 0, 0) | 6 | 6 | 4.0e−16 | 1.9e−08 |
| Pro 2 | 8 | (1, 1, …, 1) | 9 | 9 | 4.0e−14 | 2.0e−07 |
| Pro 2 | 4 | (5, 5, 5, 5) | 19 | 19 | 1.4e−14 | 3.6e−07 |
| Pro 3 | 4 | (1, 1, 1, 2) | 7 | 7 | 3.4e−15 | 2.3e−07 |
| Pro 3 | 4 | (2, 2, 2, 2) | 9 | 9 | 3.2e−16 | 7.2e−08 |
| Pro 3 | 4 | (1, 2, 3, 4) | 11 | 11 | 1.1e−26 | 4.3e−13 |
| Pro 4 | 3 | (1, 1, 1) | 6 | 6 | 1.2e−15 | 3.8e−08 |
| Pro 4 | 3 | (1, 1, 1) | 8 | 8 | 6.8e−18 | 2.6e−09 |
| Pro 4 | 3 | (1, 2, 3) | 9 | 9 | 1.7e−21 | 5.0e−11 |
| Pro 5 | 10 | (1, 1, …, 1) | 10 | 11 | 4.9e−13 | 7.2e−07 |
| Pro 5 | 20 | (1, 1, …, 1) | 13 | 14 | 1.6e−13 | 4.2e−07 |
| Pro 5 | 100 | (0, 0, …, 0) | 16 | 18 | 3.1e−19 | 4.0e−10 |


## 6. Conclusions

In this paper, we have presented a new smoothing L–M method for the solution of general nonlinear complementarity problems. The smoothing parameter $\varepsilon_k$, borrowed from Chen et al. [8], and the perturbation parameter $\mu_k$ are crucial for the algorithm to obtain the desired convergence properties. For general NCP(F), each accumulation point of the algorithm is a stationary point of the problem. As for local convergence, under the local error bound assumption, which is a quite mild condition, the algorithm converges locally superlinearly. Numerical experiments show that the algorithm is efficient. We believe it would not be very difficult to extend this method to mixed complementarity problems or variational inequality problems. On the other hand, it would also be interesting to see how to introduce the perturbation term $\mu_k$ into the Jacobian smoothing method. We will study these problems in future work.

## References

1. C. Kanzow, H. Pieper, Jacobian smoothing methods for nonlinear complementarity problems, SIAM J. Optim. 9 (1999) 342–373.
2. Y.F. Yang, L. Qi, Smoothing trust region methods for nonlinear complementarity problems with P0-functions, Ann. Oper. Res. 133 (2005) 99–117.
3. C. Kanzow, Some noninterior continuation methods for linear complementarity problems, SIAM J. Matrix Anal. Appl. 17 (1996) 851–868.
4. H. Qi, L. Liao, A smoothing Newton method for general complementarity problems, Comput. Optim. Appl. 17 (2000) 231–253.
5. C. Chen, O.L. Mangasarian, A class of smoothing functions for nonlinear and mixed complementarity problems, Comput. Optim. Appl. 5 (1996) 97–138.
6. H. Jiang, Smoothed Fischer–Burmeister Equation Methods for the Complementarity Problem, Tech. Report, Department of Mathematics, The University of Melbourne, Parkville, Victoria, Australia, June 1997.
7. P. Tseng, Analysis of a Non-Interior Continuation Method Based on Chen–Mangasarian Smoothing Functions for Complementarity Problems, Tech. Report, Department of Mathematics, University of Washington, Seattle, WA, July 1997.
8. X. Chen, L. Qi, D. Sun, Global and superlinear convergence of the smoothing Newton method and its application to general box constrained variational inequalities, Math. Comput. 67 (1998) 519–540.
9. A. Fischer, New constrained optimization reformulation of complementarity problems, J. Optim. Theory Appl. 97 (1998) 105–117.
10. C. Kanzow, H. Qi, A QP-free constrained Newton-type method for variational inequality problems, Math. Program. 85 (1999) 81–106.
11. C. Kanzow, An inexact QP-based method for nonlinear complementarity problems, Numer. Math. 89 (1998) 557–577.
12. J.S. Pang, S.A. Gabriel, NE/SQP: a robust algorithm for the nonlinear complementarity problem, Math. Program. 60 (1993) 295–337.
13. N. Yamashita, M. Fukushima, On the rate of convergence of the Levenberg–Marquardt method, Computing 15 (2001) 239–249.
14. H. Dan, N. Yamashita, M. Fukushima, Convergence Properties of the Inexact Levenberg–Marquardt Method under Local Error Bound Conditions, Tech. Report 2001-003, Department of Applied Mathematics and Physics, Kyoto University, February 2001.
15. Z. Yu, K. Su, J. Lin, A smoothing Levenberg–Marquardt method for the extended linear complementarity problem, Appl. Math. Model. 33 (2009) 3409–3420.
16. J. Zhang, X. Zhang, A smoothing Levenberg–Marquardt method for NCP, Appl. Math. Comput. 178 (2006) 212–228.
17. F.H. Clarke, Optimization and Nonsmooth Analysis, John Wiley, New York, 1983.
18. L. Qi, C-Differentiability, C-Differential Operators and Generalized Newton Methods, Tech. Report, School of Mathematics, The University of New South Wales, Sydney, Australia, January 1996.
19. J.S. Pang, Error bounds in mathematical programming, Math. Program. 79 (1997) 299–332.
20. Z.Q. Luo, New error bounds and their applications to convergence analysis of iterative algorithms, Math. Program. Ser. B 88 (2000) 341–355.
21. M.A. Gomes-Ruggiero, J.M. Martínez, S.A. Santos, Solving nonsmooth equations by means of quasi-Newton methods with globalization, in: Recent Advances in Nonsmooth Optimization, World Scientific, Singapore, 1995, pp. 121–140.
22. E. Spedicato, Z. Huang, Numerical experience with Newton-like methods for nonlinear algebraic systems, Computing 58 (1997) 69–89.