# On a discrete version of the Jacobian conjecture of dynamical systems


Nonlinear Analysis 34 (1998) 779–789

Mau-Hsiang Shih∗, Jinn-Wen Wu
Department of Mathematics, Chung Yuan University, Chung-Li 32023, Taiwan

Received 10 October 1994; received in revised form 12 May 1997; accepted 6 June 1997

Keywords: Discrete dynamical systems; Global asymptotic stability; Slowly state-varying condition; Spectral radius

## 1. The problem

A long-standing Jacobian conjecture of dynamical systems was stated explicitly by Markus and Yamabe in [4]: if the eigenvalues $\lambda_1(x), \dots, \lambda_n(x)$ of the Jacobian matrix $f'(x)$ of a class $C^1$ vector field $f : \mathbb{R}^n \to \mathbb{R}^n$ on the $n$-dimensional Euclidean space $\mathbb{R}^n$ all have negative real parts at every $x$ in $\mathbb{R}^n$, and if $f(0) = 0$, then the origin is a globally asymptotically stable equilibrium point for the $n$-dimensional nonlinear autonomous system of ordinary differential equations $\dot{x} = f(x)$; that is, every solution tends to the origin as $t \to +\infty$. (See the survey paper by Meisters [5].) In 1976, LaSalle [3, p. 21] posed two discrete versions of the Jacobian conjecture. We shall discuss one of the two LaSalle problems, the one related to matrix analysis.

Let $\mathbb{R}^{n \times n}$ denote the vector space of all $n \times n$ real matrices. For $A \in \mathbb{R}^{n \times n}$, denote by $\rho(A)$ the spectral radius of $A$. The Euclidean norm on $\mathbb{R}^n$ is denoted by $\|\cdot\|$. Consider the state-varying system

$$x' = A(x)x, \tag{1}$$

where $x', x$ are functions from $\{0, 1, \dots\}$ to $\mathbb{R}^n$ related by the formula

$$x'(k) = x(k+1), \qquad k = 0, 1, \dots,$$

E-mail: [email protected]

¹ This work was supported in part by the National Science Council of the Republic of China.



and $A : \mathbb{R}^n \to \mathbb{R}^{n \times n}$ is a continuous matrix-valued function. LaSalle asked: "If $\rho(A(x)) < 1$ for all $x \in \mathbb{R}^n$, is the origin globally asymptotically stable for Eq. (1)?" By a summation representation theorem, if the family $\{A(x) : x \in \mathbb{R}^n\}$ of matrices is triangularizable, then the answer to the LaSalle problem is in the affirmative. In our recent note [8], we produced a family $\{A(x) : x \in \mathbb{R}^2\}$ of matrices for which $\rho(A(x)) < 1$ for all $x \in \mathbb{R}^2$, but the origin is not globally asymptotically stable for Eq. (1). Indeed, we have shown that even if the eigenvalue condition "$\rho(A(x)) < 1$ for all $x \in \mathbb{R}^n$" is replaced by the stronger condition "there exist $\delta > 0$ and $\beta > 0$ such that

$$\rho(A(x)) \le \delta < 1 \qquad \text{and} \qquad \|A(x)\| \le \beta \qquad \text{for all } x \in \mathbb{R}^n,"$$

the answer to the LaSalle problem is still negative. In the present note, while retaining the eigenvalue condition (there exists $0 \le \delta < 1$ such that $\rho(A(x)) \le \delta$ for all $x \in \mathbb{R}^n$), we assume that $A(x)$ varies slowly with respect to the state $x$ and prove the global asymptotic stability of the equilibrium. It may be noted that parameters in economic systems quite naturally vary slowly.
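The mechanism behind the negative answer can be sketched numerically. The toy family below switches discontinuously between two nilpotent matrices (the published counterexample in [8] is a continuous family; this discontinuous switch is an illustration of the mechanism only, not the paper's construction): each matrix has spectral radius $0 < 1$, yet products of different members of the family blow up.

```python
import numpy as np

# Two nilpotent matrices: each has spectral radius 0 < 1.
M1 = np.array([[0.0, 2.0], [0.0, 0.0]])
M2 = np.array([[0.0, 0.0], [2.0, 0.0]])

def A(x):
    """Hypothetical state-dependent choice (discontinuous toy switch)."""
    return M1 if abs(x[1]) > abs(x[0]) else M2

x = np.array([0.0, 1.0])
norms = []
for _ in range(10):
    # rho(A(x)) < 1 at every state visited ...
    assert max(abs(np.linalg.eigvals(A(x)))) < 1
    x = A(x) @ x
    norms.append(np.linalg.norm(x))

print(norms[-1])  # ... yet the orbit norm doubles at every step
```

The orbit alternates between the two matrices and grows like $2^k$, which is exactly why a pointwise spectral-radius bound alone cannot control the state-varying products $A(x(k)) \cdots A(x(0))$.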

## 2. The theorem

Given $x \in \mathbb{R}^n$, we denote $A(x) \in \mathbb{R}^{n \times n}$ by $(a_{ij}(x))$, and by $x^t$ and $A^t(x)$ the transposes of $x$ and $A(x)$, respectively. Let us recall that a map $f : \mathbb{R}^n \to \mathbb{R}$ is Gateaux differentiable at $x \in \mathbb{R}^n$ (see, e.g., [6, p. 59]) if there exists a vector $b \in \mathbb{R}^n$ such that, for any $a \in \mathbb{R}^n$,

$$\lim_{h \to 0} \frac{1}{h}\,\bigl|f(x + ha) - f(x) - h\,b^t a\bigr| = 0.$$

We shall prove the following, which is the main result of this note.

**Theorem.** Suppose

(1) there exists $\delta > 0$ such that $\rho(A(x)) \le \delta < 1$ for all $x \in \mathbb{R}^n$;

(2) there exist $\alpha, \beta > 0$ such that $\alpha\|x\| \le \|A(x)x\| \le \beta\|x\|$ for all $x \in \mathbb{R}^n$;

(3) there exists a sufficiently small $\varepsilon > 0$ ($\varepsilon$ can be determined explicitly) such that for all $1 \le i, j \le n$, $a_{ij} : \mathbb{R}^n \to \mathbb{R}$ is Gateaux differentiable and

$$\|x\| \cdot \max_{1 \le i, j, k \le n}\left|\frac{\partial a_{ij}(x)}{\partial x_k}\right| \le \varepsilon \qquad \text{for all } x \in \mathbb{R}^n.$$

Then the origin is globally asymptotically stable for Eq. (1).

Condition (3) may be termed a slowly state-varying condition. Consider the following nonlinear second-order difference equation:

$$y(k+2) = a(y(k), y(k+1))\,y(k+1) + b(y(k), y(k+1))\,y(k), \qquad k = 0, 1, \dots. \tag{2}$$


where $a, b$ are bounded real-valued $C^1$ functions on $\mathbb{R}^2$. Let us remark that Eq. (2) plays an important role in the study of economic dynamics (see [1, Part IV]).

**Corollary.** Suppose

(1) there exists $0 \le \delta < 1$ such that for all $(u, v) \in \mathbb{R}^2$, all roots of the characteristic equation $\lambda^2 - a(u, v)\lambda - b(u, v) = 0$ lie in the closed disk $\{z \in \mathbb{C} : |z| \le \delta\}$;

(2) there exists $b_0 > 0$ such that $b(u, v) \ge b_0$ for all $(u, v) \in \mathbb{R}^2$;

(3) there exists a sufficiently small $\varepsilon > 0$ ($\varepsilon$ can be determined explicitly) such that for all $r > 0$,

$$r \max_{u^2 + v^2 = r^2}\left|\frac{\partial a}{\partial u}(u, v)\right| \le \varepsilon, \qquad r \max_{u^2 + v^2 = r^2}\left|\frac{\partial a}{\partial v}(u, v)\right| \le \varepsilon,$$

$$r \max_{u^2 + v^2 = r^2}\left|\frac{\partial b}{\partial u}(u, v)\right| \le \varepsilon, \qquad r \max_{u^2 + v^2 = r^2}\left|\frac{\partial b}{\partial v}(u, v)\right| \le \varepsilon.$$

Then the origin is globally asymptotically stable for Eq. (2).

**Proof of Corollary.** Set

$$x(k) = \begin{pmatrix} y(k) \\ y(k+1) \end{pmatrix}, \qquad A(x(k)) = \begin{pmatrix} 0 & 1 \\ b(y(k), y(k+1)) & a(y(k), y(k+1)) \end{pmatrix}, \qquad k = 0, 1, \dots.$$

Then Eq. (2) becomes

$$x(k+1) = A(x(k))x(k), \qquad k = 0, 1, \dots.$$
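The reduction above can be sketched numerically. The coefficient functions `a` and `b` below are hypothetical bounded $C^1$ examples (not taken from the paper); the check is only that the scalar second-order form and the first-order vector form generate the same trajectory.

```python
import numpy as np

# Hypothetical bounded C^1 coefficients, for illustration only.
def a(u, v):
    return 0.3 * np.cos(u + v)

def b(u, v):
    return 0.2 / (1.0 + u * u + v * v)

def step_scalar(y0, y1):
    # Second-order form (2): y(k+2) = a(.) y(k+1) + b(.) y(k)
    return a(y0, y1) * y1 + b(y0, y1) * y0

def step_vector(x):
    # First-order form (1): x(k+1) = A(x(k)) x(k), companion matrix
    A = np.array([[0.0, 1.0],
                  [b(x[0], x[1]), a(x[0], x[1])]])
    return A @ x

y = [1.0, -0.5]                 # y(0), y(1)
x = np.array([1.0, -0.5])       # x(0) = (y(0), y(1))^t
for _ in range(20):
    y.append(step_scalar(y[-2], y[-1]))
    x = step_vector(x)

# Both formulations produce the same trajectory: x(k) = (y(k), y(k+1))^t.
print(abs(x[1] - y[-1]) < 1e-12)
```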

An argument shows that conditions (1)–(3) of the Theorem follow from conditions (1)–(3) of the Corollary, respectively. By the Theorem, the origin is globally asymptotically stable for Eq. (2).

We now turn to the proof of the theorem. To prove it, we need the following well-known result on Schur stability (see [9, 10]; see also [7] for a much shorter proof).

**Stein–Taussky Theorem.** Let $A \in \mathbb{R}^{n \times n}$. If $\rho(A) < 1$, then for any positive definite matrix $P \in \mathbb{R}^{n \times n}$ the matrix equation $X - A^t X A = P$ has a solution $Q$ such that $Q$ is positive definite and

$$Q = \sum_{k=0}^{\infty} (A^t)^k P A^k.$$
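A quick numerical illustration of the Stein–Taussky theorem: for one sample Schur-stable matrix (chosen here for illustration), a truncation of the series gives a positive definite $Q$ that solves the Stein equation up to truncation error.

```python
import numpy as np

# A sample Schur-stable matrix: rho(A) < 1.
A = np.array([[0.5, 0.3],
              [-0.2, 0.4]])
assert max(abs(np.linalg.eigvals(A))) < 1

P = np.eye(2)  # any positive definite right-hand side

# Partial sum of Q = sum_k (A^t)^k P A^k
Q = np.zeros((2, 2))
term = P.copy()
for _ in range(200):
    Q += term
    term = A.T @ term @ A   # next series term

# Q solves the Stein equation Q - A^t Q A = P up to truncation error ...
print(np.max(np.abs(Q - A.T @ Q @ A - P)))   # ~ 0

# ... and is positive definite.
print(np.all(np.linalg.eigvals(Q) > 0))      # True
```

In practice `scipy.linalg.solve_discrete_lyapunov(A.T, P)` returns the same $Q$ without truncating the series.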


**Proof of Theorem.** By condition (1), it follows from the Stein–Taussky theorem that for each $x \in \mathbb{R}^n$ there exists a positive definite $Q(x) \in \mathbb{R}^{n \times n}$ such that

$$Q(x) - A^t(x)Q(x)A(x) = I \tag{3}$$

and

$$Q(x) = \sum_{k=0}^{\infty} (A^t(x))^k (A(x))^k, \tag{4}$$

where, as usual, $I$ denotes the identity matrix. Using conditions (1) and (2) together with Eq. (4), there exists $q > 0$ such that

$$\|Q(x)\| \le \sum_{k=0}^{\infty} \|(A^t(x))^k\|\,\|(A(x))^k\| \le q \qquad \text{for all } x \in \mathbb{R}^n. \tag{5}$$

To see this, note that by Schur's triangularization theorem, for each $x \in \mathbb{R}^n$ there exists an orthogonal matrix $O(x) \in \mathbb{R}^{n \times n}$ such that $O^t(x)A(x)O(x) = U(x) = (u_{ij}(x))$ is upper triangular, whose diagonal entries $u_{ii}(x)$ $(i = 1, 2, \dots, n)$ are the eigenvalues of $A(x)$. Therefore, by condition (2), for all $x \in \mathbb{R}^n$ we have $\|U(x)\| = \|O^t(x)A(x)O(x)\| = \|A(x)\| \le \beta$, and so $|u_{ij}(x)| \le \beta$ for all $x \in \mathbb{R}^n$ and all $i, j \in \{1, 2, \dots, n\}$. Let $0 < \eta < (1-\delta)/2n\beta$, let

$$D = \operatorname{diag}\left(1, \eta, \eta^2, \dots, \eta^{n-1}\right) \in \mathbb{R}^{n \times n},$$

and let $D^{-1}U(x)D = (\tilde{u}_{ij}(x))$, $x \in \mathbb{R}^n$. Then for all $x \in \mathbb{R}^n$ and for $i, j \in \{1, 2, \dots, n\}$ with $i \ne j$, we obtain

$$|\tilde{u}_{ij}(x)| \le \beta\eta. \tag{6}$$

Define a new norm $\|x\|' = \|D^{-1}x\|_\infty$ on $\mathbb{R}^n$, where $\|\cdot\|_\infty$ denotes the $l^\infty$-norm. Then for all $x \in \mathbb{R}^n$, by condition (1) and Eq. (6) we see that

$$\|U(x)\|' = \|D^{-1}U(x)D\|_\infty \le \delta + n\beta\eta \le \delta + n\beta\,\frac{1-\delta}{2n\beta} = \frac{1+\delta}{2} < 1. \tag{7}$$

Therefore, for all $x \in \mathbb{R}^n$ and all $k = 1, 2, \dots$, by Eq. (7) we have, for some $m > 0$,

$$\|(A(x))^k\| = \|O(x)(U(x))^k O^t(x)\| = \|(U(x))^k\| \le m\|(U(x))^k\|' \le m\left(\frac{1+\delta}{2}\right)^k. \tag{8}$$
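The diagonal-scaling device of Eqs. (6)–(8) can be checked numerically; the triangular matrix $U$ below is an arbitrary sample (not taken from the paper) with small spectral radius but large norm.

```python
import numpy as np

# Upper-triangular U with spectral radius 0.6 but large entries,
# standing in for the Schur form U(x) in the proof.
U = np.array([[0.6, 5.0, 7.0],
              [0.0, -0.5, 3.0],
              [0.0, 0.0, 0.4]])
n = 3
delta = max(abs(np.linalg.eigvals(U)))   # = 0.6
beta = np.linalg.norm(U, 2)              # much larger than delta

# Choose eta < (1 - delta) / (2 n beta) and D = diag(1, eta, eta^2).
eta = 0.99 * (1.0 - delta) / (2 * n * beta)
D = np.diag([eta ** i for i in range(n)])
Dinv = np.diag([eta ** -i for i in range(n)])

# In the scaled infinity norm, U contracts at rate at most (1 + delta)/2:
scaled = np.linalg.norm(Dinv @ U @ D, np.inf)
print(scaled <= (1 + delta) / 2)   # True

# Hence powers of U decay geometrically in the Euclidean norm too (Eq. (8)).
print(np.linalg.norm(np.linalg.matrix_power(U, 60), 2) < 1e-3)   # True
```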


Thus, for all $x \in \mathbb{R}^n$, by Eqs. (4) and (8) it must be that

$$\|Q(x)\| \le \sum_{k=0}^{\infty} \|(A^t(x))^k\|\,\|(A(x))^k\| \le m^2 \sum_{k=0}^{\infty} \left(\frac{1+\delta}{2}\right)^{2k} \equiv q < +\infty,$$

as required.

Now define $V(x) = x^t Q(x)x$, $x \in \mathbb{R}^n$. Then $V$ is continuous on $\mathbb{R}^n$, $V(x) > 0$ if $x \ne 0$, $V(0) = 0$, and $V(x) \to +\infty$ as $\|x\| \to \infty$. By a discrete analogue of Lyapunov's theorem [2], the theorem will be proved by showing that

$$V(A(x)x) < V(x) \qquad \text{for all } x \in \mathbb{R}^n \text{ with } x \ne 0. \tag{9}$$

To verify Eq. (9), we have to show that

$$Q(x) - A^t(x)Q(A(x)x)A(x) \tag{10}$$

is positive definite for all $x \in \mathbb{R}^n$ with $x \ne 0$. By Eq. (3), the matrix (10) can be written as

$$I + A^t(x)\left[Q(x) - Q(A(x)x)\right]A(x). \tag{11}$$

Fix $x \in \mathbb{R}^n$ with $x \ne 0$. Let

$$Q_1 = Q(x), \qquad Q_2 = Q(A(x)x), \qquad A_1 = A(x) \qquad \text{and} \qquad A_2 = A(A(x)x).$$

By Eq. (3), we have

$$Q_1 - A_1^t Q_1 A_1 = I, \qquad Q_2 - A_2^t Q_2 A_2 = I,$$

and so

$$\left[(Q_2 - Q_1) - A_2^t(Q_2 - Q_1)A_2\right] + A_1^t Q_1 A_1 - A_2^t Q_1 A_2 = 0. \tag{12}$$

Let $B = A_2^t Q_1 A_2 - A_1^t Q_1 A_1$. Then, by Eq. (12) we have

$$Q_2 - Q_1 = \sum_{k=0}^{\infty} (A_2^t)^k B A_2^k$$

and so

$$\|Q_2 - Q_1\| \le \sum_{k=0}^{\infty} \|(A_2^t)^k\|\,\|B\|\,\|(A_2)^k\|. \tag{13}$$

We shall prove that

$$\|B\| \le 2\beta\varepsilon\|Q_1\|\left[4 + \max\left(\frac{1}{\alpha},\ 1+\beta\right)\right] n^{3/2}. \tag{14}$$


To prove Eq. (14), we first prove that for each pair $(i, j)$ we have

$$|a_{ij}(A(x)x) - a_{ij}(x)| \le \varepsilon\left[4 + \max\left(\frac{1}{\alpha},\ 1+\beta\right)\right] n^{1/2}. \tag{15}$$

Let $r = \min\{\|x\|, \|A(x)x\|\}$.

Case 1: $r = \|A(x)x\|$. Set $y = (r/\|x\|)x$. Choose a point $z^* \in \mathbb{R}^n$ such that

$$\|z^* - y\| = \min\left\{\|z - y\| : \|z\| = r,\ \|z - y\| = \|z - A(x)x\|\right\}.$$

For $x, y \in \mathbb{R}^n$, denote by $\overline{xy}$ the line segment between $x$ and $y$. Since each $a_{ij}$ is Gateaux differentiable, by the "mean-value theorem" (see, e.g., [6, p. 68]), for each pair $(i, j)$ there are

$$\xi_1 \in \overline{A(x)x\,z^*}, \qquad \xi_2 \in \overline{z^* y} \qquad \text{and} \qquad \xi_3 \in \overline{yx}$$

such that

$$a_{ij}(A(x)x) - a_{ij}(z^*) = (\nabla a_{ij}(\xi_1))^t (A(x)x - z^*),$$

$$a_{ij}(z^*) - a_{ij}(y) = (\nabla a_{ij}(\xi_2))^t (z^* - y),$$

$$a_{ij}(y) - a_{ij}(x) = (\nabla a_{ij}(\xi_3))^t (y - x).$$

Moreover, by condition (2),

$$\|\xi_1\| \ge \frac{\sqrt{2}}{2}\,r, \qquad \|\xi_2\| \ge \frac{\sqrt{2}}{2}\,r,$$

and

$$\|\xi_3\| \ge \|A(x)x\| \ge \alpha\|x\| \qquad \text{for all } x \in \mathbb{R}^n.$$

Consequently, by condition (3), for each pair $(i, j)$ and $r = \|A(x)x\|$ we have

$$
\begin{aligned}
|a_{ij}(A(x)x) - a_{ij}(x)| &\le |(\nabla a_{ij}(\xi_1))^t (A(x)x - z^*)| + |(\nabla a_{ij}(\xi_2))^t (z^* - y)| + |(\nabla a_{ij}(\xi_3))^t (y - x)| \\
&\le \left(\sum_{k=1}^n \left|\frac{\partial a_{ij}(\xi_1)}{\partial x_k}\right|^2\right)^{1/2} \|A(x)x - z^*\| + \left(\sum_{k=1}^n \left|\frac{\partial a_{ij}(\xi_2)}{\partial x_k}\right|^2\right)^{1/2} \|z^* - y\| \\
&\qquad + \left(\sum_{k=1}^n \left|\frac{\partial a_{ij}(\xi_3)}{\partial x_k}\right|^2\right)^{1/2} \|y - x\| \\
&\le n^{1/2}\varepsilon\left(\frac{\|A(x)x - z^*\|}{\|\xi_1\|} + \frac{\|z^* - y\|}{\|\xi_2\|} + \frac{\|y - x\|}{\|\xi_3\|}\right) \le \varepsilon\left(4 + \frac{1}{\alpha}\right) n^{1/2}.
\end{aligned}
\tag{16}
$$


Case 2: $r = \|x\|$. As in the proof of Case 1, for each pair $(i, j)$ and $r = \|x\|$ we have

$$|a_{ij}(A(x)x) - a_{ij}(x)| \le \varepsilon\left[4 + (1+\beta)\right] n^{1/2}. \tag{17}$$

Thus, by Eqs. (16) and (17), Eq. (15) is proved. Therefore, by Eq. (15), we have

$$\|A_2 - A_1\| \le \left(\sum_{i=1}^n \sum_{j=1}^n |a_{ij}(A(x)x) - a_{ij}(x)|^2\right)^{1/2} \le \varepsilon\left[4 + \max\left(\frac{1}{\alpha},\ 1+\beta\right)\right] n^{3/2}. \tag{18}$$

Therefore condition (2) and Eq. (18) give

$$\|B\| \le \|A_2^t Q_1\|\,\|A_2 - A_1\| + \|A_2^t - A_1^t\|\,\|Q_1 A_1\| \le 2\beta\|Q_1\|\,\|A_2 - A_1\| \le 2\beta\varepsilon\|Q_1\|\left[4 + \max\left(\frac{1}{\alpha},\ 1+\beta\right)\right] n^{3/2};$$

hence Eq. (14) is proved. Consequently, Eqs. (5), (13) and (14) give

$$\|Q_2 - Q_1\| \le 2\beta\varepsilon q^2\left[4 + \max\left(\frac{1}{\alpha},\ 1+\beta\right)\right] n^{3/2}.$$

Thus, by condition (2), for all $y \in \mathbb{R}^n$ with $y \ne 0$ we have

$$
\begin{aligned}
y^t\left(I + A^t(x)(Q_2 - Q_1)A(x)\right)y &\ge \left(1 - \|A^t(x)(Q_2 - Q_1)A(x)\|\right)\|y\|^2 \\
&\ge \left(1 - 2\beta^3\varepsilon q^2\left[4 + \max\left(\frac{1}{\alpha},\ 1+\beta\right)\right] n^{3/2}\right)\|y\|^2 \\
&> 0 \qquad \text{if } \varepsilon < \left(2\beta^3 q^2\left[4 + \max\left(\frac{1}{\alpha},\ 1+\beta\right)\right] n^{3/2}\right)^{-1}.
\end{aligned}
\tag{19}
$$

Since Eq. (10) can be written as Eq. (11), $Q(x) - A^t(x)Q(A(x)x)A(x)$ is positive definite. Since $x \in \mathbb{R}^n$ with $x \ne 0$ was arbitrary and $\varepsilon$ is independent of $x$, Eq. (9) is proved. This completes the proof of the theorem.

## 3. Examples

Let us note that the slowly state-varying condition (3) is really needed here. The following example illustrates this.


**Example 1.** Construct a $C^\infty$ function $\varphi : [0, \infty) \to [0, 1]$ such that

$$\varphi(t) = 1 \quad \text{if } 0 \le t \le 3, \qquad 0 < \varphi(t) < 1 \quad \text{if } 3 < t < 5, \qquad \varphi(t) = 0 \quad \text{if } t \ge 5.$$

Define $A : \mathbb{R}^2 \to \mathbb{R}^{2 \times 2}$ by

$$A(x) = \begin{pmatrix} \varphi(\|x\|)(1 - \theta(x)) & -1 + \tfrac12\theta(x) \\[4pt] \varphi(\|x\|)\left(1 - \tfrac12\theta(x)\right) + \dfrac{1-\gamma}{2}\,(1 - \varphi(\|x\|)) & \varphi(\|x\|)(-1 + \theta(x)) \end{pmatrix},$$

where

$$\theta(x) = \left[\sin\left(\frac{\pi}{2}\left(1 - \frac{x_1 - x_2}{2}\right)\right)\right]^2$$

and

$$\gamma \equiv \sup_{x \in \mathbb{R}^2}\left[\varphi(\|x\|)\left(1 - \tfrac12\theta(x)\right)^2 - (\varphi(\|x\|))^2(1 - \theta(x))^2\right].$$

A little elementary calculus shows that $0 \le \gamma < 1$. For $x \in \mathbb{R}^2$, the characteristic equation of $A(x)$ is

$$\det(\lambda I - A) = \lambda^2 + \varphi(\|x\|)\left(1 - \tfrac12\theta(x)\right)^2 - (\varphi(\|x\|))^2(1 - \theta(x))^2 + \frac{1-\gamma}{2}\left(1 - \tfrac12\theta(x)\right)(1 - \varphi(\|x\|)) = 0.$$

Hence $\rho(A(x)) \le \sqrt{(1+\gamma)/2} < 1$ for all $x \in \mathbb{R}^2$, and therefore condition (1) of the theorem is satisfied. We now show that condition (2) of the theorem is satisfied.

Case 1: $\|x\| \le 3$. Then

$$A(x) = \begin{pmatrix} 1 - \theta(x) & -1 + \tfrac12\theta(x) \\ 1 - \tfrac12\theta(x) & -1 + \theta(x) \end{pmatrix}$$

and

$$\|A(x)x\|^2 = \left[(1 - \theta(x))^2 + \left(1 - \tfrac12\theta(x)\right)^2\right](x_1^2 + x_2^2) - 4(1 - \theta(x))\left(1 - \tfrac12\theta(x)\right)x_1 x_2.$$

Since

$$\lim_{\|x\| \to 0} \frac{\|A(x)x\|}{\|x\|} = \frac12 \qquad \text{and} \qquad A(x)x \ne 0 \quad \text{for all } x \in \mathbb{R}^2,\ x \ne 0,$$

there exists $r_1 > 0$ such that $\|A(x)x\| \ge r_1\|x\|$ for all $\|x\| \le 3$.

Case 2: $\|x\| \ge 5$. Then

$$A(x) = \begin{pmatrix} 0 & -1 + \tfrac12\theta(x) \\[4pt] \dfrac{1-\gamma}{2} & 0 \end{pmatrix}.$$

Thus $\|A(x)x\| \ge r_2\|x\|$ for all $\|x\| \ge 5$, where $r_2 = \min\left\{\tfrac12(1-\gamma),\ \tfrac12\right\} > 0$.


Case 3: $3 < \|x\| < 5$. By the continuity of $A(x)x$ and Case 1, we can choose a small $\sigma > 0$ and find $r_3 > 0$ such that

$$\|A(x)x\| \ge r_3\|x\| \qquad \text{for all } 0 \le \|x\| \le 3 + \sigma.$$

If $3 + \sigma \le \|x\| < 5$, it follows from the equation $\det(\lambda I - A) = 0$ that

$$\|A(x)x\| \ge r_4\|x\|, \qquad \text{where } r_4 = \left[\frac{1-\gamma}{4}\,(1 - \varphi(3+\sigma))\right]^{1/2}.$$

Hence, if we take $\alpha = \min\{r_1, r_2, r_3, r_4\}$, then $\alpha\|x\| \le \|A(x)x\|$ for all $x \in \mathbb{R}^2$. On the other hand, it is easy to find a $\beta > 0$ such that $\|A(x)x\| \le \beta\|x\|$ for all $x \in \mathbb{R}^2$. Thus, condition (2) of the theorem is satisfied. However, set

$$x(0) = \begin{pmatrix} 1 \\ -1 \end{pmatrix}.$$

Then

$$x(4m) = \begin{pmatrix} 1 \\ -1 \end{pmatrix}, \quad x(4m+1) = \begin{pmatrix} 2 \\ 2 \end{pmatrix}, \quad x(4m+2) = \begin{pmatrix} -1 \\ 1 \end{pmatrix}, \quad x(4m+3) = \begin{pmatrix} -2 \\ -2 \end{pmatrix}, \qquad m = 0, 1, \dots.$$

Thus the origin is not a global attractor for Eq. (1).
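The period-4 orbit of Example 1 can be checked numerically. The orbit stays in the region $\|x\| \le 3$, where $\varphi \equiv 1$, so only the Case 1 matrix is needed in this sketch.

```python
import numpy as np

def theta(x):
    return np.sin((np.pi / 2) * (1 - (x[0] - x[1]) / 2)) ** 2

def A(x):
    # Case 1 matrix of Example 1; phi(||x||) = 1 on the region
    # ||x|| <= 3 visited by this orbit.
    t = theta(x)
    return np.array([[1 - t,     -1 + t / 2],
                     [1 - t / 2, -1 + t]])

x = np.array([1.0, -1.0])      # x(0) = (1, -1)^t
orbit = [x.copy()]
for _ in range(8):
    x = A(x) @ x
    orbit.append(x.copy())

print([list(p) for p in orbit[:5]])
# The orbit returns to its starting point every four steps.
print(np.allclose(orbit[4], orbit[0]))   # True
```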

We now look at a numerical example to illustrate how the result can be applied.

**Example 2.** Consider the second-order nonlinear difference equation

$$y(k+2) = a(y(k), y(k+1))\,y(k+1) - 0.36\,y(k), \qquad k = 0, 1, \dots, \tag{20}$$

where

$$a(u, v) = -\exp\left(-\frac{\varepsilon\left\|(u, v)^t\right\|}{\left\|(u, v)^t\right\| + 1}\right), \qquad \varepsilon > 0.$$

Set

$$x(k) = \begin{pmatrix} y(k) \\ y(k+1) \end{pmatrix}, \qquad A(x(k)) = \begin{pmatrix} 0 & 1 \\ -0.36 & a(y(k), y(k+1)) \end{pmatrix}, \qquad k = 0, 1, \dots.$$

Then Eq. (20) becomes

$$x(k+1) = A(x(k))x(k), \qquad k = 0, 1, \dots.$$

Since

$$\frac{\partial a}{\partial x_i} = \varepsilon \exp\left(-\frac{\varepsilon\|x\|}{\|x\| + 1}\right)\frac{x_i}{\|x\|(1 + \|x\|)^2}, \qquad i = 1, 2,$$


it must be that

$$\|x\| \cdot \left|\frac{\partial a}{\partial x_i}\right| \le \varepsilon, \qquad i = 1, 2,\ x \in \mathbb{R}^2.$$

A computation shows that $\rho(A(x)) \le 0.6$ for all $x \in \mathbb{R}^2$, and

$$\frac{0.6}{\sqrt{6}}\,\|x\| \le \|A(x)x\| \le \sqrt{2 + (0.36)^2}\,\|x\| \qquad \text{for all } x \in \mathbb{R}^2.$$

The number $m$ in the proof of the theorem can be calculated as follows:

$$m = \sup_{x \ne 0} \frac{\|x\|}{\|D^{-1}x\|_\infty} \le \sqrt{1.01},$$

where

$$D = \begin{pmatrix} 1 & 0 \\ 0 & 0.1 \end{pmatrix}.$$

Therefore,

$$q = m^2 \sum_{k=0}^{\infty} (0.8)^{2k} \le 2.81.$$
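The numerical claims of Example 2 can be spot-checked as follows (random sampling of states — a sanity check, not a proof):

```python
import numpy as np

eps = 0.001  # any 0 < eps < 0.0013

def a(u, v):
    r = np.hypot(u, v)
    return -np.exp(-eps * r / (r + 1.0))

def A(x):
    return np.array([[0.0, 1.0],
                     [-0.36, a(x[0], x[1])]])

rng = np.random.default_rng(0)
for _ in range(1000):
    x = rng.normal(size=2) * rng.uniform(0.1, 50)
    M = A(x)
    # Spectral radius bound rho(A(x)) <= 0.6
    assert max(abs(np.linalg.eigvals(M))) <= 0.6 + 1e-12
    # Condition (2) bounds: (0.6/sqrt(6)) ||x|| <= ||A(x)x|| <= sqrt(2 + 0.36^2) ||x||
    nx = np.linalg.norm(x)
    nAx = np.linalg.norm(M @ x)
    assert 0.6 / np.sqrt(6) * nx <= nAx <= np.sqrt(2 + 0.36 ** 2) * nx + 1e-9

# A sample trajectory decays to the origin.
x = np.array([10.0, -7.0])
for _ in range(200):
    x = A(x) @ x
print(np.linalg.norm(x) < 1e-6)   # True
```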

According to Eq. (19), if $0 < \varepsilon < 0.0013$, then the origin is globally asymptotically stable for Eq. (20).

**Added in proof.** The Markus–Yamabe conjecture was completely solved affirmatively for $n = 2$, independently, by R. Feßler [Ann. Polon. Math. 62 (1995) 45–74] and C. Gutierrez [Ann. Inst. H. Poincaré Anal. Non Linéaire 12 (1995) 627–671]. The Markus–Yamabe conjecture was solved negatively for $n \ge 4$ by N.E. Barabanov [Sibirsk. Mat. Zh. 29 (1988) 2–11] and by J. Bernat and J. Llibre [Dynam. Contin. Discrete Impuls. Systems 2 (1996) 337–379], and for $n = 3$, even for polynomial vector fields, by A. Cima, A. van den Essen, A. Gasull, E. Hubbers and F. Mañosas [Adv. Math. 131 (1997) 453–457].

## References

[1] W.J. Baumol, Economic Dynamics, 2nd ed., Macmillan, New York, 1959.
[2] W. Hahn, Über die Anwendung der Methode von Lyapunov auf Differenzengleichungen, Math. Ann. 136 (1958) 430–441.
[3] J. LaSalle, The Stability of Dynamical Systems, Regional Conference Series in Applied Mathematics, SIAM, Philadelphia, 1976.
[4] L. Markus, H. Yamabe, Global stability criteria for differential equations, Osaka Math. J. 12 (1960) 305–317.


[5] G.H. Meisters, Jacobian problems in differential equations and algebraic geometry, Rocky Mountain J. Math. 12 (1982) 679–705.
[6] J.M. Ortega, W.C. Rheinboldt, Iterative Solution of Nonlinear Equations in Several Variables, Academic Press, New York, 1970.
[7] R. Redheffer, Remarks on a paper of Taussky, J. Algebra 2 (1965) 42–47.
[8] M.-H. Shih, J.-W. Wu, Question of global asymptotic stability in state-varying nonlinear systems, Proc. Amer. Math. Soc. 122 (1994) 801–804.
[9] P. Stein, Some general theorems on iterants, J. Res. Nat. Bur. Standards 48 (1952) 82–83.
[10] O. Taussky, Matrices $C$ with $C^n \to 0$, J. Algebra 1 (1964) 5–10.