Global asymptotic stability of stochastic complex-valued neural networks with probabilistic time-varying delays

R. Sriraman†, Yang Cao‡∗, R. Samidurai†

† Department of Mathematics, Thiruvalluvar University, Vellore, Tamil Nadu 632 115, India.
‡ Department of Mechanical Engineering, The University of Hong Kong, Pokfulam, Hong Kong.
∗ Corresponding author e-mail: [email protected]

Mathematics and Computers in Simulation (2019), https://doi.org/10.1016/j.matcom.2019.04.001
Received 25 February 2019; revised 2 April 2019; accepted 3 April 2019.

Abstract: This paper studies the global asymptotic stability problem for a class of stochastic complex-valued neural networks (SCVNNs) with probabilistic time-varying delays as well as stochastic disturbances. Based on the Lyapunov-Krasovskii functional (LKF) method and mathematical analysis techniques, delay-dependent stability criteria are derived by separating the complex-valued neural networks (CVNNs) into real and imaginary parts. The resulting sufficient conditions are presented in terms of simplified linear matrix inequalities (LMIs), which can be solved straightforwardly in Matlab. Finally, two simulation examples are provided to show the effectiveness and advantages of the proposed results.

Keywords: Global asymptotic stability; Complex-valued neural networks; Stochastic disturbance; Lyapunov-Krasovskii functional; Probabilistic time-varying delays.

1 Introduction

Recently, the analysis of dynamical properties of various neural networks has become a hot research field due to their extensive applications in many areas, such as secure communication, pattern recognition, signal processing, filtering, image processing, automatic control, and so forth [1]-[7]. It is well known that CVNNs, as an extension of real-valued NNs, consist of complex-valued states, inputs, connection weights, and activation functions; they are of notable significance both for fundamental theory and for powerful applications in the engineering sciences, such as communication, quantum mechanics, and electromagnetic processing [8]-[11]. For instance, some complicated real-life problems, such as the XOR problem [9] and the speed and direction in the wind profile model [10], can be properly solved by complex-valued neurons. Therefore, compared with real-valued NNs, CVNNs have clear advantages in representing such dynamic problems, and some significant results have been reported [12]-[17]. On the other hand, choosing the activation function for CVNNs is a leading challenge. Consequently, in the stability analysis of CVNNs, several kinds of activation functions have been considered and many different stability results have been established. For instance, the papers [18]-[21] establish LMI-based sufficient conditions for the stability of CVNNs by using activation functions that separate into real and imaginary parts. In [22]-[24], the authors studied global asymptotic stability problems for CVNNs by applying Lipschitz-type activation functions. In [25], the authors obtained a multistability criterion for CVNNs by employing


discontinuous-type activation functions. Equally, some other notable stability results can be found in [26], [29]-[31].
It should be noticed that time delays occur in nearly all dynamical systems, and their presence may lead to oscillation, poor performance, and instability. Therefore, various kinds of time delays have been considered in the delay-dependent stability analysis of NNs, such as discrete, distributed, leakage, neutral, and interval delays [27]-[33]. However, in some practical systems the time delay occurs in a stochastic fashion; consequently, many researchers have studied the stability analysis of NNs with probabilistic or random time-varying delays, and many interesting results have been published [34]-[36]. Among the many approaches to establishing stability of dynamical systems, Lyapunov stability theory, linear matrix inequalities, and integral inequality techniques are the major ones. Recently, by using LKFs, several results related to the proposed problem have been well documented [16]-[18], [33]-[38], [44]-[46]: for example, the exponential stability of impulsive CVNNs with time delay [12], synchronization of fractional-order CVNNs with time delay [16], global stability analysis for delayed complex-valued BAM NNs [20], finite-time stability for delayed complex-valued BAM NNs [21], multistability of delayed CVNNs [22], exponential synchronization of coupled stochastic memristor-based NNs [27], and global asymptotic stability of CVNNs with interval time-varying delays [31].
On the other hand, in many circumstances network models are affected by stochastic factors [37]-[39]. Thus, the stochastic disturbance cannot be neglected when studying the stability of NNs. The paper [19] presents robust sampled-data state estimation criteria for delayed SCVNNs. Based on Lyapunov theory and the matrix inequality approach, sufficient conditions are derived in [40] for mean-square exponential input-to-state stability of a class of SCVNNs with time delay. Some other relevant results can be found in [41]-[45]. However, the important problem of global asymptotic stability for SCVNNs with probabilistic time-varying delays as well as stochastic disturbance has not been fully considered yet; it remains open. This situation motivates our present study.
Based on the above analysis, in this paper our research efforts focus on developing a new approach to the global asymptotic stability of a class of SCVNNs with probabilistic time-varying delays. The main contributions of this paper are as follows: (i) to cover more practical situations, a more general form of the system model is studied; (ii) a suitable LKF is constructed using full information on the probabilistic time-varying delays; (iii) by employing activation functions that separate into real and imaginary parts, several delay-dependent sufficient conditions are derived in terms of simplified LMIs; (iv) two numerical examples are given to show the effectiveness and advantages of the present results.
This paper is structured as follows. Section 2 states the problem. Section 3 gives the main results. Section 4 presents illustrative examples. Finally, the conclusion is drawn in Section 5.
Throughout this paper, Rn and Rm×n denote the Euclidean n-space with vector norm ‖·‖ and the set of m × n real matrices, respectively. Cn and Cm×n denote n-dimensional complex vectors and m × n complex matrices, respectively. i denotes the imaginary unit, that is, i = √−1. The superscripts ∗ and T denote the complex conjugate transpose and the matrix transpose, respectively. P > 0 (P < 0) means that P is a positive (negative) definite symmetric matrix. diag{·} denotes a block diagonal matrix. I is the identity matrix of appropriate dimension. (Ω, F, P) denotes a complete probability space with a filtration {Ft}t≥0 that is right continuous and whose F0 contains all P-null sets. E{·} denotes the mathematical expectation. Finally, ⋆ denotes the entries below the main diagonal of a symmetric matrix.

2 Problem statement

Consider the following CVNNs with probabilistic time-varying delays:
$$dy(t) = [-Dy(t) + A f(y(t)) + B f(y(t-\delta(t))) + I]\,dt, \tag{1}$$
where y(t) = [y1(t), …, yn(t)]ᵀ ∈ Cn is the state vector, f(y(·)) = [f(y1(·)), …, f(yn(·))]ᵀ ∈ Cn is the complex-valued neuron activation function, I = [I1, …, In]ᵀ ∈ Cn is the external input vector, and δ(t) is the transmission delay. The system matrices are
$$D = \mathrm{diag}\{d_1, \ldots, d_n\} \in \mathbb{R}^{n \times n}, \quad A = (a_{jk})_{n \times n} \in \mathbb{C}^{n \times n}, \quad B = (b_{jk})_{n \times n} \in \mathbb{C}^{n \times n}.$$
Assumption 1: The neuron activation function fj(·), j = 1, …, n, satisfies the following Lipschitz condition for all y, y′ ∈ C:
$$|f_j(y) - f_j(y')| \le l_j |y - y'|, \quad j = 1, \ldots, n, \tag{2}$$

where lj is a constant. From Assumption 1 it follows that
$$(f(y) - f(y'))^{*}(f(y) - f(y')) \le (y - y')^{*} L^{T} L (y - y'), \tag{3}$$

where L = diag{l1, …, ln}.
Assumption 2: For y = a + ib ∈ C with a, b ∈ R, fj(y), j = 1, …, n, can be expressed as
$$f_j(y) = f_j^{R}(a, b) + i f_j^{I}(a, b), \tag{4}$$
and there exist positive constants λ^{RR}, λ^{RI}, λ^{IR}, λ^{II} such that the inequalities
$$|f_j^{R}(a, b) - f_j^{R}(a', b')| \le \lambda^{RR} |a - a'| + \lambda^{RI} |b - b'|, \tag{5}$$
$$|f_j^{I}(a, b) - f_j^{I}(a', b')| \le \lambda^{IR} |a - a'| + \lambda^{II} |b - b'|, \tag{6}$$
hold for any a, a′, b, b′ ∈ R. Moreover, fj(0) = 0.
Remark 2.1. Assumption 1 is an extension of the Lipschitz continuity condition for real-valued functions. In fact, it is easy to verify that Assumption 2 is quite strict and is a special case of Assumption 1.
The initial condition of system (1) is given by y(t) = φ(t), t ∈ [−δ, 0], where φ(t) is continuously differentiable on [−δ, 0].
Definition 2.2. [23] The vector ȳ ∈ Cn is said to be an equilibrium point of CVNNs (1) if it satisfies
$$-D\bar{y} + (A + B) f(\bar{y}) + I = 0. \tag{7}$$

Under Definition 2.2, there exists an equilibrium point ȳ of (1). Setting z(t) = y(t) − ȳ and g(z(t)) = f(z(t) + ȳ) − f(ȳ), system (1) becomes
$$dz(t) = [-Dz(t) + A g(z(t)) + B g(z(t-\delta(t)))]\,dt. \tag{8}$$

In system (8), it is assumed that the transmission delay δ(t) has a bounded derivative. In practice, there exist constants δ1 and δ2 with 0 ≤ δ1 ≤ δ2 such that δ(t) takes values in [0, δ1] or (δ1, δ2] with certain probabilities.
Assumption 3: The probability distribution of the time-varying delay δ(t) is given by
Prob{0 ≤ δ(t) ≤ δ1} = γ0, Prob{δ1 < δ(t) ≤ δ2} = 1 − γ0,
where 0 ≤ γ0 ≤ 1 is a constant. To represent this probability distribution, we define the two sets
ϒ1 = {t | δ(t) ∈ [0, δ1]}, ϒ2 = {t | δ(t) ∈ (δ1, δ2]},
and introduce two time-varying delays δ1(t) and δ2(t) such that
$$\delta(t) = \begin{cases} \delta_1(t), & t \in \Upsilon_1, \\ \delta_2(t), & t \in \Upsilon_2. \end{cases} \tag{9}$$
From (9) it is easy to see that t ∈ ϒ1 means the event δ(t) ∈ [0, δ1] occurs and t ∈ ϒ2 means the event δ(t) ∈ (δ1, δ2] occurs. We then define a stochastic variable
$$\gamma(t) = \begin{cases} 1, & t \in \Upsilon_1, \\ 0, & t \in \Upsilon_2. \end{cases} \tag{10}$$
Thus γ(t) is a Bernoulli distributed sequence with
Prob{γ(t) = 1} = Prob{0 ≤ δ(t) ≤ δ1} = E{γ(t)} = γ0,
Prob{γ(t) = 0} = Prob{δ1 < δ(t) ≤ δ2} = 1 − E{γ(t)} = 1 − γ0.
Furthermore, E{γ²(t)} = γ0, E{(1 − γ(t))²} = 1 − γ0, and E{γ(t)(1 − γ(t))} = 0.
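These moment identities are easy to check numerically. The following is a minimal Python sketch (added for illustration; the value γ0 = 0.3 is an assumption, matching the choice later used in Example 1):

```python
import numpy as np

rng = np.random.default_rng(1)
gamma0 = 0.3                                            # assumed probability level
gamma = (rng.random(200_000) < gamma0).astype(float)    # Bernoulli gamma(t) samples

print(gamma.mean())                  # ~ gamma0       = E{gamma(t)}
print((gamma**2).mean())             # ~ gamma0       = E{gamma(t)^2}
print(((1 - gamma)**2).mean())       # ~ 1 - gamma0   = E{(1 - gamma(t))^2}
print((gamma * (1 - gamma)).mean())  # exactly 0, since gamma(t) is 0 or 1
```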

Regarding δ1(t) and δ2(t), we make the following assumption.
Assumption 4: There exist two constants µ1 and µ2 such that δ̇1(t) ≤ µ1 < 1 and δ̇2(t) ≤ µ2 < 1.
We now consider the stochastic CVNNs with probabilistic time-varying delays
$$dz(t) = [-Dz(t) + A g(z(t)) + \gamma(t) B g(z(t-\delta_1(t))) + (1-\gamma(t)) B g(z(t-\delta_2(t)))]\,dt + \sigma(t, z(t), z(t-\delta_1(t)), z(t-\delta_2(t)))\,d\omega(t), \tag{11}$$

which is equivalent to
$$\begin{aligned} dz(t) = {} & [-Dz(t) + A g(z(t)) + \gamma_0 B g(z(t-\delta_1(t))) + (\gamma(t)-\gamma_0) B g(z(t-\delta_1(t))) \\ & + (1-\gamma_0) B g(z(t-\delta_2(t))) - (\gamma(t)-\gamma_0) B g(z(t-\delta_2(t)))]\,dt \\ & + \sigma(t, z(t), z(t-\delta_1(t)), z(t-\delta_2(t)))\,d\omega(t), \end{aligned} \tag{12}$$
where σ(t, z(t), z(t−δ1(t)), z(t−δ2(t))) : R × Cn × Cn × Cn → Cn×n is the noise intensity function and ω(t) is an n-dimensional Brownian motion defined on (Ω, F, P).
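Although the analysis below is purely LMI-based, trajectories of (11) can be simulated with a basic Euler–Maruyama scheme and a history buffer. The following is a minimal sketch under stated assumptions: D, A, B, γ0, and the delay bounds mirror Example 1 of Section 4 for concreteness, while the tanh-type activation g, the diagonal noise map sigma, and the piecewise-constant delay values are simplifying stand-ins:

```python
import numpy as np

rng = np.random.default_rng(0)

n, T, h = 2, 10.0, 1e-3
gamma0, d1, d2 = 0.3, 0.8, 1.4                 # Prob{delta(t) <= d1} = gamma0
D = np.diag([4.0, 4.0])
A = np.array([[1+1j, -2+1j], [1-1j, -1+0j]])
B = np.array([[1+1j, 0+1j], [-1+1j, 2+0j]])
g = lambda z: np.tanh(z.real) + 1j * np.tanh(z.imag)   # assumed activation
sigma = lambda z: 0.1 * np.diag(z)                     # assumed noise intensity

steps, lag = int(T / h), int(d2 / h)           # buffer long enough for delta2
z = np.zeros((steps + lag + 1, n), dtype=complex)
z[:lag + 1] = np.array([1.0 - 2.0j, -0.5 + 1.0j])      # constant history phi(t)

for k in range(lag, steps + lag):
    gam = rng.random() < gamma0                # Bernoulli gamma(t) of (10)
    # gamma*g(z(t-d1)) + (1-gamma)*g(z(t-d2)) collapses to one delayed term:
    zd = z[k - int((d1 if gam else d2) / h)]
    drift = -D @ z[k] + A @ g(z[k]) + B @ g(zd)
    dw = rng.normal(scale=np.sqrt(h), size=n)  # Brownian increments
    z[k + 1] = z[k] + drift * h + sigma(z[k]) @ dw
```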

Remark 2.3. It is worth mentioning that the system model (12) is more general than the models proposed in [22], [24]. In particular, when there are no stochastic effects, (12) reduces to the model investigated in [24]; when, in addition, γ(t) ≡ 1 (γ0 = 1), (12) reduces to the model investigated in [22].
Assumption 5: [19] For z = x + iy, z̄ = x̄ + iȳ, ẑ = x̂ + iŷ with x, x̄, x̂, y, ȳ, ŷ ∈ Rn, σ(t, z, z̄, ẑ) can be expressed as
$$\sigma(t, z, \bar{z}, \hat{z}) = \sigma^{R}(t, x, \bar{x}, \hat{x}, y, \bar{y}, \hat{y}) + i\,\sigma^{I}(t, x, \bar{x}, \hat{x}, y, \bar{y}, \hat{y}), \tag{13}$$
where σ^R, σ^I : R⁺ × Rn × Rn × Rn × Rn × Rn × Rn → Rn×n, and there exist positive semi-definite matrices Ej and Fj (j = 1, 2, …, 6) such that
$$\mathrm{tr}\{\sigma^{R}(\cdot)^{T}\sigma^{R}(\cdot)\} \le x^{T}E_1 x + \bar{x}^{T}E_2\bar{x} + \hat{x}^{T}E_3\hat{x} + y^{T}E_4 y + \bar{y}^{T}E_5\bar{y} + \hat{y}^{T}E_6\hat{y},$$
$$\mathrm{tr}\{\sigma^{I}(\cdot)^{T}\sigma^{I}(\cdot)\} \le x^{T}F_1 x + \bar{x}^{T}F_2\bar{x} + \hat{x}^{T}F_3\hat{x} + y^{T}F_4 y + \bar{y}^{T}F_5\bar{y} + \hat{y}^{T}F_6\hat{y}.$$

To proceed, we separate the CVNNs (12) into real and imaginary parts. Let z(t) = x(t) + iy(t), A = A^R + iA^I, B = B^R + iB^I, g(z(t)) = g^R(x(t), y(t)) + ig^I(x(t), y(t)), and g(z(t−δk(t))) = g^R(x(t−δk(t)), y(t−δk(t))) + ig^I(x(t−δk(t)), y(t−δk(t))) for k = 1, 2, where i is the imaginary unit. Then system (12) can be separated as
$$\begin{aligned} dx(t) = {} & \big[-Dx(t) + A^{R} g^{R}(x(t), y(t)) - A^{I} g^{I}(x(t), y(t)) \\ & + \gamma_0 [B^{R} g^{R}(x(t-\delta_1(t)), y(t-\delta_1(t))) - B^{I} g^{I}(x(t-\delta_1(t)), y(t-\delta_1(t)))] \\ & + (\gamma(t)-\gamma_0) [B^{R} g^{R}(x(t-\delta_1(t)), y(t-\delta_1(t))) - B^{I} g^{I}(x(t-\delta_1(t)), y(t-\delta_1(t)))] \\ & + (1-\gamma_0) [B^{R} g^{R}(x(t-\delta_2(t)), y(t-\delta_2(t))) - B^{I} g^{I}(x(t-\delta_2(t)), y(t-\delta_2(t)))] \\ & - (\gamma(t)-\gamma_0) [B^{R} g^{R}(x(t-\delta_2(t)), y(t-\delta_2(t))) - B^{I} g^{I}(x(t-\delta_2(t)), y(t-\delta_2(t)))]\big]\,dt \\ & + \sigma^{R}(t, x(t), x(t-\delta_1(t)), x(t-\delta_2(t)), y(t), y(t-\delta_1(t)), y(t-\delta_2(t)))\,d\omega(t), \\ dy(t) = {} & \big[-Dy(t) + A^{I} g^{R}(x(t), y(t)) + A^{R} g^{I}(x(t), y(t)) \\ & + \gamma_0 [B^{I} g^{R}(x(t-\delta_1(t)), y(t-\delta_1(t))) + B^{R} g^{I}(x(t-\delta_1(t)), y(t-\delta_1(t)))] \\ & + (\gamma(t)-\gamma_0) [B^{I} g^{R}(x(t-\delta_1(t)), y(t-\delta_1(t))) + B^{R} g^{I}(x(t-\delta_1(t)), y(t-\delta_1(t)))] \\ & + (1-\gamma_0) [B^{I} g^{R}(x(t-\delta_2(t)), y(t-\delta_2(t))) + B^{R} g^{I}(x(t-\delta_2(t)), y(t-\delta_2(t)))] \\ & - (\gamma(t)-\gamma_0) [B^{I} g^{R}(x(t-\delta_2(t)), y(t-\delta_2(t))) + B^{R} g^{I}(x(t-\delta_2(t)), y(t-\delta_2(t)))]\big]\,dt \\ & + \sigma^{I}(t, x(t), x(t-\delta_1(t)), x(t-\delta_2(t)), y(t), y(t-\delta_1(t)), y(t-\delta_2(t)))\,d\omega(t). \end{aligned} \tag{14}$$
Let
$$\vartheta(t) = \begin{bmatrix} x(t) \\ y(t) \end{bmatrix}, \quad \bar{g}(\vartheta(t)) = \begin{bmatrix} g^{R}(x(t), y(t)) \\ g^{I}(x(t), y(t)) \end{bmatrix}, \quad \bar{g}(\vartheta(t-\delta_k(t))) = \begin{bmatrix} g^{R}(x(t-\delta_k(t)), y(t-\delta_k(t))) \\ g^{I}(x(t-\delta_k(t)), y(t-\delta_k(t))) \end{bmatrix},\ k = 1, 2, \quad \varphi(t) = \begin{bmatrix} \varphi^{R}(t) \\ \varphi^{I}(t) \end{bmatrix},$$
where φ^R(t) = σ^R(t, x(t), x(t−δ1(t)), x(t−δ2(t)), y(t), y(t−δ1(t)), y(t−δ2(t))) and φ^I(t) = σ^I(t, x(t), x(t−δ1(t)), x(t−δ2(t)), y(t), y(t−δ1(t)), y(t−δ2(t))), and set
$$\bar{D} = \begin{bmatrix} D & 0 \\ 0 & D \end{bmatrix}, \quad \bar{A} = \begin{bmatrix} A^{R} & -A^{I} \\ A^{I} & A^{R} \end{bmatrix}, \quad \bar{B} = \begin{bmatrix} B^{R} & -B^{I} \\ B^{I} & B^{R} \end{bmatrix}.$$
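The passage to the real block form is mechanical: for z = x + iy one has Ā[x; y] = [Re(Az); Im(Az)], which the short sketch below confirms numerically, using Example 1's complex weight matrix A (from Section 4) as a test case:

```python
import numpy as np

def real_block_form(A: np.ndarray) -> np.ndarray:
    """Map complex A = A^R + i A^I to the real block matrix [[A^R, -A^I], [A^I, A^R]]."""
    return np.block([[A.real, -A.imag], [A.imag, A.real]])

A = np.array([[1+1j, -2+1j], [1-1j, -1+0j]])   # Example 1's weight matrix A
z = np.array([0.3-0.2j, -0.1+0.4j])            # an arbitrary complex test vector
lhs = real_block_form(A) @ np.concatenate([z.real, z.imag])
rhs = np.concatenate([(A @ z).real, (A @ z).imag])
print(np.allclose(lhs, rhs))                   # True: block form reproduces A z
```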

Then, system (14) can be equivalently rewritten as
$$\begin{aligned} d\vartheta(t) = {} & [-\bar{D}\vartheta(t) + \bar{A}\bar{g}(\vartheta(t)) + \gamma_0\bar{B}\bar{g}(\vartheta(t-\delta_1(t))) + (\gamma(t)-\gamma_0)\bar{B}\bar{g}(\vartheta(t-\delta_1(t))) \\ & + (1-\gamma_0)\bar{B}\bar{g}(\vartheta(t-\delta_2(t))) - (\gamma(t)-\gamma_0)\bar{B}\bar{g}(\vartheta(t-\delta_2(t)))]\,dt + \varphi(t)\,d\omega(t). \end{aligned} \tag{15}$$

From (3), by separating real and imaginary parts, one has
$$(\bar{g}(\vartheta) - \bar{g}(\vartheta'))^{T}(\bar{g}(\vartheta) - \bar{g}(\vartheta')) \le (\vartheta - \vartheta')^{T}\bar{L}(\vartheta - \vartheta'), \tag{16}$$
where
$$\bar{L} = \begin{bmatrix} L^{T}L & 0 \\ 0 & L^{T}L \end{bmatrix}.$$
The initial condition for system (15) is given by
$$\vartheta(t) = \bar{\phi}(t), \quad t \in [-\delta_2, 0], \tag{17}$$
where φ̄(t) = [φ^R(t), φ^I(t)]ᵀ, φ̄(t) ∈ L²_{F0}([−δ2, 0], R^{2n}), and L²_{F0}([−δ2, 0], R^{2n}) is the family of all F0-measurable C([−δ2, 0], R^{2n})-valued random variables satisfying sup_{t∈[−δ2,0]} E{‖φ̄(t)‖²} < ∞; it is further assumed that φ̄(·) is independent of the Brownian motion ω(·).
For convenience, the following abbreviation is adopted in the sequel:
$$\begin{aligned} \eta(t) \triangleq {} & -\bar{D}\vartheta(t) + \bar{A}\bar{g}(\vartheta(t)) + \gamma_0\bar{B}\bar{g}(\vartheta(t-\delta_1(t))) + (\gamma(t)-\gamma_0)\bar{B}\bar{g}(\vartheta(t-\delta_1(t))) \\ & + (1-\gamma_0)\bar{B}\bar{g}(\vartheta(t-\delta_2(t))) - (\gamma(t)-\gamma_0)\bar{B}\bar{g}(\vartheta(t-\delta_2(t))). \end{aligned} \tag{18}$$
System (15) can then be written as
$$d\vartheta(t) = \eta(t)\,dt + \varphi(t)\,d\omega(t). \tag{19}$$

Lemma 2.4. [46] Let T ∈ R^{n×n} be a positive definite matrix and z(s) : [a, b] → R^n a vector function with scalars a < b. Then
$$\Big(\int_a^b z(s)\,ds\Big)^{T} T \Big(\int_a^b z(s)\,ds\Big) \le (b-a)\int_a^b z^{T}(s)\,T\,z(s)\,ds.$$
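As a quick numerical sanity check of Lemma 2.4 (not needed for the proof), both sides can be discretized on an arbitrary trajectory:

```python
import numpy as np

rng = np.random.default_rng(2)
a, b, m, n = 0.0, 1.0, 2000, 3
h = (b - a) / m
z = 0.02 * rng.standard_normal((m, n)).cumsum(axis=0)  # an arbitrary path z(s)
M = rng.standard_normal((n, n))
T = M @ M.T + n * np.eye(n)                            # T > 0

Iz = h * z.sum(axis=0)                                 # ~ \int_a^b z(s) ds
lhs = Iz @ T @ Iz
rhs = (b - a) * h * np.einsum('ij,jk,ik->', z, T, z)   # ~ (b-a) \int z^T T z ds
print(lhs <= rhs)                                      # expected: True
```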

Lemma 2.5. [19] Given constant matrices C, G, and O with C = Cᵀ and G = Gᵀ, the inequality
$$\begin{bmatrix} \mathcal{C} & \mathcal{O} \\ \mathcal{O}^{T} & \mathcal{G} \end{bmatrix} < 0$$
is equivalent to either of the following conditions:
(i) G < 0, C − O G⁻¹ Oᵀ < 0;  (ii) C < 0, G − Oᵀ C⁻¹ O < 0.
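Lemma 2.5 is the standard Schur complement lemma; the following small sketch illustrates the equivalence on arbitrary test matrices:

```python
import numpy as np

def negdef(M):
    return np.linalg.eigvalsh((M + M.T) / 2).max() < 0

C = -np.array([[4.0, 1.0], [1.0, 3.0]])       # arbitrary C = C^T
G = -np.array([[2.0, 0.5], [0.5, 2.0]])       # arbitrary G = G^T
O = 0.3 * np.ones((2, 2))

whole = negdef(np.block([[C, O], [O.T, G]]))                    # block matrix < 0
via_i = negdef(G) and negdef(C - O @ np.linalg.solve(G, O.T))   # condition (i)
print(whole, via_i)                            # the two tests agree, as the lemma asserts
```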

3 Main results

This section establishes new LMI-based sufficient conditions for the global asymptotic stability of system (15), given in the following Theorem 3.1.

Theorem 3.1. Under Assumption 1, suppose the activation function can be separated into real and imaginary parts. For given scalars δ1, δ2, µ1, µ2 and α + β = 1, the NNs (15) is globally asymptotically stable in the mean square if there exist matrices P > 0, Q > 0, R > 0, S > 0, U > 0, V > 0, W > 0, X > 0, Y > 0, matrices M, N of appropriate dimensions, and scalars ε1 > 0, ε2 > 0, ε3 > 0, λ > 0 such that the following LMIs hold:
$$P \le \lambda I, \tag{20}$$
$$\bar{\Theta} = \big(\bar{\Theta}_{i,j}\big)_{13 \times 13} < 0, \tag{21}$$
where Θ̄ is symmetric (⋆ denotes the entries below the main diagonal), all entries not listed below are zero, and
$$\begin{aligned}
&\bar{\Theta}_{1,1} = \delta_1 X + (\delta_2-\delta_1) Y - N - N^{T}, \quad \bar{\Theta}_{1,2} = P^{T} - M^{T} - N\bar{D}, \quad \bar{\Theta}_{1,7} = N\bar{A}, \quad \bar{\Theta}_{1,8} = \gamma_0 N\bar{B}, \\
&\bar{\Theta}_{1,9} = (1-\gamma_0) N\bar{B}, \quad \bar{\Theta}_{2,2} = Q + S - \tfrac{\alpha}{\delta_1}V - \tfrac{\beta}{\delta_1}V - M\bar{D} - (M\bar{D})^{T} + \varepsilon_1\bar{L} + \lambda J_1, \\
&\bar{\Theta}_{2,3} = \tfrac{\alpha}{\delta_1}V + \tfrac{\beta}{\delta_1}V, \quad \bar{\Theta}_{2,7} = M\bar{A}, \quad \bar{\Theta}_{2,8} = \gamma_0 M\bar{B}, \quad \bar{\Theta}_{2,9} = (1-\gamma_0) M\bar{B}, \quad \bar{\Theta}_{2,12} = -\sqrt{\delta_1}\,\bar{D}V, \\
&\bar{\Theta}_{2,13} = -\sqrt{\delta_2-\delta_1}\,\bar{D}W, \quad \bar{\Theta}_{3,3} = -(1-\mu_1)Q - \tfrac{\alpha}{\delta_1}V - \tfrac{\alpha}{\delta_1}V - \tfrac{\beta}{\delta_1}V - \tfrac{\beta}{\delta_1}V + \varepsilon_2\bar{L} + \lambda J_2, \\
&\bar{\Theta}_{3,4} = \tfrac{\alpha}{\delta_1}V + \tfrac{\beta}{\delta_1}V, \quad \bar{\Theta}_{4,4} = R - S + U - \tfrac{\alpha}{\delta_1}V - \tfrac{\beta}{\delta_1}V - \tfrac{\alpha}{\delta_2-\delta_1}W - \tfrac{\beta}{\delta_2-\delta_1}W, \\
&\bar{\Theta}_{4,6} = \tfrac{\alpha}{\delta_2-\delta_1}W + \tfrac{\beta}{\delta_2-\delta_1}W, \quad \bar{\Theta}_{5,5} = -(1-\mu_2)R + \varepsilon_3\bar{L} + \lambda J_3, \\
&\bar{\Theta}_{6,6} = -U - \tfrac{\alpha}{\delta_2-\delta_1}W - \tfrac{\beta}{\delta_2-\delta_1}W, \quad \bar{\Theta}_{7,7} = -\varepsilon_1 I, \quad \bar{\Theta}_{7,12} = \sqrt{\delta_1}\,\bar{A}V, \quad \bar{\Theta}_{7,13} = \sqrt{\delta_2-\delta_1}\,\bar{A}W, \\
&\bar{\Theta}_{8,8} = -\varepsilon_2 I, \quad \bar{\Theta}_{8,12} = \sqrt{\delta_1}\,\gamma_0\bar{B}V, \quad \bar{\Theta}_{8,13} = \sqrt{\delta_2-\delta_1}\,\gamma_0\bar{B}W, \\
&\bar{\Theta}_{9,9} = -\varepsilon_3 I, \quad \bar{\Theta}_{9,12} = \sqrt{\delta_1}\,(1-\gamma_0)\bar{B}V, \quad \bar{\Theta}_{9,13} = \sqrt{\delta_2-\delta_1}\,(1-\gamma_0)\bar{B}W, \\
&\bar{\Theta}_{10,10} = -\tfrac{X}{\delta_1}, \quad \bar{\Theta}_{11,11} = -\tfrac{Y}{\delta_2-\delta_1}, \quad \bar{\Theta}_{12,12} = -V, \quad \bar{\Theta}_{13,13} = -W.
\end{aligned}$$
Proof: Consider the following Lyapunov-Krasovskii functional candidate for the NNs (15):
$$V(t, \vartheta_t) = V_1(t, \vartheta_t) + V_2(t, \vartheta_t) + V_3(t, \vartheta_t) + V_4(t, \vartheta_t) + V_5(t, \vartheta_t), \tag{22}$$

where
$$\begin{aligned} V_1(t, \vartheta_t) &= \vartheta^{T}(t) P \vartheta(t), \\ V_2(t, \vartheta_t) &= \int_{t-\delta_1(t)}^{t} \vartheta^{T}(s) Q \vartheta(s)\,ds + \int_{t-\delta_2(t)}^{t-\delta_1} \vartheta^{T}(s) R \vartheta(s)\,ds, \\ V_3(t, \vartheta_t) &= \int_{t-\delta_1}^{t} \vartheta^{T}(s) S \vartheta(s)\,ds + \int_{t-\delta_2}^{t-\delta_1} \vartheta^{T}(s) U \vartheta(s)\,ds, \\ V_4(t, \vartheta_t) &= \int_{t-\delta_1}^{t}\!\int_{u}^{t} \dot{\vartheta}^{T}(s) V \dot{\vartheta}(s)\,ds\,du + \int_{t-\delta_2}^{t-\delta_1}\!\int_{u}^{t} \dot{\vartheta}^{T}(s) W \dot{\vartheta}(s)\,ds\,du, \\ V_5(t, \vartheta_t) &= \int_{t-\delta_1}^{t}\!\int_{u}^{t} \eta^{T}(s) X \eta(s)\,ds\,du + \int_{t-\delta_2}^{t-\delta_1}\!\int_{u}^{t} \eta^{T}(s) Y \eta(s)\,ds\,du. \end{aligned}$$

Let L be the weak infinitesimal operator. Taking the mathematical expectation of LVj(t, ϑt), j = 1, 2, …, 5, we have
$$E\{\mathcal{L}V(t, \vartheta_t)\} = \sum_{j=1}^{5} E\{\mathcal{L}V_j(t, \vartheta_t)\}. \tag{23}$$
Calculating E{LV(t, ϑt)} directly along the trajectories of system (15) gives
$$E\{\mathcal{L}V_1(t, \vartheta_t)\} = E\{2\vartheta^{T}(t) P \eta(t) + \mathrm{tr}\{\varphi^{T}(t) P \varphi(t)\}\}, \tag{24}$$
$$\begin{aligned} E\{\mathcal{L}V_2(t, \vartheta_t)\} \le {} & E\{\vartheta^{T}(t) Q \vartheta(t) - (1-\dot{\delta}_1(t))\vartheta^{T}(t-\delta_1(t)) Q \vartheta(t-\delta_1(t)) \\ & + \vartheta^{T}(t-\delta_1) R \vartheta(t-\delta_1) - (1-\dot{\delta}_2(t))\vartheta^{T}(t-\delta_2(t)) R \vartheta(t-\delta_2(t))\}, \end{aligned} \tag{25}$$
$$\begin{aligned} E\{\mathcal{L}V_3(t, \vartheta_t)\} \le {} & E\{\vartheta^{T}(t) S \vartheta(t) - \vartheta^{T}(t-\delta_1) S \vartheta(t-\delta_1) \\ & + \vartheta^{T}(t-\delta_1) U \vartheta(t-\delta_1) - \vartheta^{T}(t-\delta_2) U \vartheta(t-\delta_2)\}, \end{aligned} \tag{26}$$
$$\begin{aligned} E\{\mathcal{L}V_4(t, \vartheta_t)\} \le {} & E\Big\{\delta_1 \dot{\vartheta}^{T}(t) V \dot{\vartheta}(t) - \int_{t-\delta_1}^{t} \dot{\vartheta}^{T}(u) V \dot{\vartheta}(u)\,du \\ & + (\delta_2-\delta_1)\dot{\vartheta}^{T}(t) W \dot{\vartheta}(t) - \int_{t-\delta_2}^{t-\delta_1} \dot{\vartheta}^{T}(u) W \dot{\vartheta}(u)\,du\Big\}. \end{aligned} \tag{27}$$

For arbitrary scalars 0 ≤ α ≤ 1 and 0 ≤ β ≤ 1 with α + β = 1, the following identity holds:
$$\begin{aligned} -\int_{t-\delta_1}^{t} \dot{\vartheta}^{T}(u)V\dot{\vartheta}(u)\,du &= -\alpha\int_{t-\delta_1}^{t} \dot{\vartheta}^{T}(u)V\dot{\vartheta}(u)\,du - \beta\int_{t-\delta_1}^{t} \dot{\vartheta}^{T}(u)V\dot{\vartheta}(u)\,du \\ &= -\alpha\int_{t-\delta_1}^{t-\delta_1(t)} \dot{\vartheta}^{T}(u)V\dot{\vartheta}(u)\,du - \alpha\int_{t-\delta_1(t)}^{t} \dot{\vartheta}^{T}(u)V\dot{\vartheta}(u)\,du \\ &\quad - \beta\int_{t-\delta_1(t)}^{t} \dot{\vartheta}^{T}(u)V\dot{\vartheta}(u)\,du - \beta\int_{t-\delta_1}^{t-\delta_1(t)} \dot{\vartheta}^{T}(u)V\dot{\vartheta}(u)\,du. \end{aligned}$$

Using Lemma 2.4, and writing χ(t) = [ϑᵀ(t), ϑᵀ(t−δ1(t)), ϑᵀ(t−δ1)]ᵀ, we have
$$-\alpha\int_{t-\delta_1}^{t-\delta_1(t)} \dot{\vartheta}^{T}(u) V \dot{\vartheta}(u)\,du \le -\frac{\alpha}{\delta_1}\,\chi^{T}(t)\begin{bmatrix} 0 \\ I \\ -I \end{bmatrix} V \begin{bmatrix} 0 \\ I \\ -I \end{bmatrix}^{T}\chi(t), \tag{28}$$
$$-\alpha\int_{t-\delta_1(t)}^{t} \dot{\vartheta}^{T}(u) V \dot{\vartheta}(u)\,du \le -\frac{\alpha}{\delta_1}\,\chi^{T}(t)\begin{bmatrix} I \\ -I \\ 0 \end{bmatrix} V \begin{bmatrix} I \\ -I \\ 0 \end{bmatrix}^{T}\chi(t), \tag{29}$$
$$-\beta\int_{t-\delta_1(t)}^{t} \dot{\vartheta}^{T}(u) V \dot{\vartheta}(u)\,du \le -\frac{\beta}{\delta_1}\,\chi^{T}(t)\begin{bmatrix} I \\ -I \\ 0 \end{bmatrix} V \begin{bmatrix} I \\ -I \\ 0 \end{bmatrix}^{T}\chi(t), \tag{30}$$
$$-\beta\int_{t-\delta_1}^{t-\delta_1(t)} \dot{\vartheta}^{T}(u) V \dot{\vartheta}(u)\,du \le -\frac{\beta}{\delta_1}\,\chi^{T}(t)\begin{bmatrix} 0 \\ I \\ -I \end{bmatrix} V \begin{bmatrix} 0 \\ I \\ -I \end{bmatrix}^{T}\chi(t). \tag{31}$$
From (28)-(31), we have
$$-\alpha\int_{t-\delta_1}^{t-\delta_1(t)} \dot{\vartheta}^{T}(u)V\dot{\vartheta}(u)\,du - \alpha\int_{t-\delta_1(t)}^{t} \dot{\vartheta}^{T}(u)V\dot{\vartheta}(u)\,du \le -\frac{\alpha}{\delta_1}\,\chi^{T}(t)\begin{bmatrix} V & -V & 0 \\ \star & V+V & -V \\ \star & \star & V \end{bmatrix}\chi(t), \tag{32}$$
$$-\beta\int_{t-\delta_1(t)}^{t} \dot{\vartheta}^{T}(u)V\dot{\vartheta}(u)\,du - \beta\int_{t-\delta_1}^{t-\delta_1(t)} \dot{\vartheta}^{T}(u)V\dot{\vartheta}(u)\,du \le -\frac{\beta}{\delta_1}\,\chi^{T}(t)\begin{bmatrix} V & -V & 0 \\ \star & V+V & -V \\ \star & \star & V \end{bmatrix}\chi(t). \tag{33}$$
Similarly,
$$-\alpha\int_{t-\delta_2}^{t-\delta_1} \dot{\vartheta}^{T}(u)W\dot{\vartheta}(u)\,du \le -\frac{\alpha}{\delta_2-\delta_1}\begin{bmatrix} \vartheta(t-\delta_1) \\ \vartheta(t-\delta_2) \end{bmatrix}^{T}\begin{bmatrix} W & -W \\ \star & W \end{bmatrix}\begin{bmatrix} \vartheta(t-\delta_1) \\ \vartheta(t-\delta_2) \end{bmatrix}, \tag{34}$$
$$-\beta\int_{t-\delta_2}^{t-\delta_1} \dot{\vartheta}^{T}(u)W\dot{\vartheta}(u)\,du \le -\frac{\beta}{\delta_2-\delta_1}\begin{bmatrix} \vartheta(t-\delta_1) \\ \vartheta(t-\delta_2) \end{bmatrix}^{T}\begin{bmatrix} W & -W \\ \star & W \end{bmatrix}\begin{bmatrix} \vartheta(t-\delta_1) \\ \vartheta(t-\delta_2) \end{bmatrix}. \tag{35}$$
Moreover,
$$\begin{aligned} E\{\mathcal{L}V_5(t, \vartheta_t)\} \le {} & E\Big\{\delta_1 \eta^{T}(t) X \eta(t) - \int_{t-\delta_1}^{t} \eta^{T}(u) X \eta(u)\,du \\ & + (\delta_2-\delta_1)\eta^{T}(t) Y \eta(t) - \int_{t-\delta_2}^{t-\delta_1} \eta^{T}(u) Y \eta(u)\,du\Big\}. \end{aligned} \tag{36}$$

Using Lemma 2.4, we have
$$-\int_{t-\delta_1}^{t} \eta^{T}(u) X \eta(u)\,du \le -\delta_1^{-1}\Big(\int_{t-\delta_1}^{t}\eta(u)\,du\Big)^{T} X \Big(\int_{t-\delta_1}^{t}\eta(u)\,du\Big), \tag{37}$$
$$-\int_{t-\delta_2}^{t-\delta_1} \eta^{T}(u) Y \eta(u)\,du \le -(\delta_2-\delta_1)^{-1}\Big(\int_{t-\delta_2}^{t-\delta_1}\eta(u)\,du\Big)^{T} Y \Big(\int_{t-\delta_2}^{t-\delta_1}\eta(u)\,du\Big). \tag{38}$$
From Assumption 5, one has

$$\begin{aligned} \mathrm{tr}(\varphi^{T}(t)\varphi(t)) \le {} & x^{T}(t)(E_1+F_1)x(t) + x^{T}(t-\delta_1(t))(E_2+F_2)x(t-\delta_1(t)) \\ & + x^{T}(t-\delta_2(t))(E_3+F_3)x(t-\delta_2(t)) + y^{T}(t)(E_4+F_4)y(t) \\ & + y^{T}(t-\delta_1(t))(E_5+F_5)y(t-\delta_1(t)) + y^{T}(t-\delta_2(t))(E_6+F_6)y(t-\delta_2(t)) \\ \le {} & \vartheta^{T}(t)J_1\vartheta(t) + \vartheta^{T}(t-\delta_1(t))J_2\vartheta(t-\delta_1(t)) + \vartheta^{T}(t-\delta_2(t))J_3\vartheta(t-\delta_2(t)), \end{aligned} \tag{39}$$
where
$$J_1 = \begin{bmatrix} E_1+F_1 & 0 \\ 0 & E_4+F_4 \end{bmatrix}, \quad J_2 = \begin{bmatrix} E_2+F_2 & 0 \\ 0 & E_5+F_5 \end{bmatrix}, \quad J_3 = \begin{bmatrix} E_3+F_3 & 0 \\ 0 & E_6+F_6 \end{bmatrix}.$$
From (20) and (39), one can obtain
$$\mathrm{tr}(\varphi^{T}(t)P\varphi(t)) \le \vartheta^{T}(t)(\lambda J_1)\vartheta(t) + \vartheta^{T}(t-\delta_1(t))(\lambda J_2)\vartheta(t-\delta_1(t)) + \vartheta^{T}(t-\delta_2(t))(\lambda J_3)\vartheta(t-\delta_2(t)). \tag{40}$$
Moreover, from (16) it follows that
$$\varepsilon_1[\vartheta^{T}(t)\bar{L}\vartheta(t) - \bar{g}^{T}(\vartheta(t))\bar{g}(\vartheta(t))] \ge 0, \tag{41}$$
$$\varepsilon_2[\vartheta^{T}(t-\delta_1(t))\bar{L}\vartheta(t-\delta_1(t)) - \bar{g}^{T}(\vartheta(t-\delta_1(t)))\bar{g}(\vartheta(t-\delta_1(t)))] \ge 0, \tag{42}$$
$$\varepsilon_3[\vartheta^{T}(t-\delta_2(t))\bar{L}\vartheta(t-\delta_2(t)) - \bar{g}^{T}(\vartheta(t-\delta_2(t)))\bar{g}(\vartheta(t-\delta_2(t)))] \ge 0, \tag{43}$$

for εj > 0, j = 1, 2, 3. On the other hand, for any matrices M and N of appropriate dimensions, it holds that
$$0 = 2[\vartheta^{T}(t)M + \eta^{T}(t)N][-\eta(t) - \bar{D}\vartheta(t) + \bar{A}\bar{g}(\vartheta(t)) + \gamma_0\bar{B}\bar{g}(\vartheta(t-\delta_1(t))) + (1-\gamma_0)\bar{B}\bar{g}(\vartheta(t-\delta_2(t)))]. \tag{44}$$
Then, combining (24)-(44), we have
$$E\{\mathcal{L}V(t, \vartheta_t)\} \le E\{\xi^{T}(t)\big(\Theta + \delta_1\Lambda^{T}V\Lambda + (\delta_2-\delta_1)\Lambda^{T}W\Lambda\big)\xi(t)\}, \tag{45}$$
where
$$\xi(t) = \Big[\eta^{T}(t)\ \ \vartheta^{T}(t)\ \ \vartheta^{T}(t-\delta_1(t))\ \ \vartheta^{T}(t-\delta_1)\ \ \vartheta^{T}(t-\delta_2(t))\ \ \vartheta^{T}(t-\delta_2)\ \ \bar{g}^{T}(\vartheta(t))\ \ \bar{g}^{T}(\vartheta(t-\delta_1(t)))\ \ \bar{g}^{T}(\vartheta(t-\delta_2(t)))\ \ \int_{t-\delta_1}^{t}\eta^{T}(u)\,du\ \ \int_{t-\delta_2}^{t-\delta_1}\eta^{T}(u)\,du\Big]^{T},$$
$$\Lambda = [0\ \ -\bar{D}\ \ 0\ \ 0\ \ 0\ \ 0\ \ \bar{A}\ \ \gamma_0\bar{B}\ \ (1-\gamma_0)\bar{B}\ \ 0\ \ 0],$$
and Θ = (Θi,j)11×11 is symmetric with all entries not listed below zero and
$$\begin{aligned}
&\Theta_{1,1} = \delta_1 X + (\delta_2-\delta_1)Y - N - N^{T}, \quad \Theta_{1,2} = P^{T} - M^{T} - N\bar{D}, \quad \Theta_{1,7} = N\bar{A}, \quad \Theta_{1,8} = \gamma_0 N\bar{B}, \\
&\Theta_{1,9} = (1-\gamma_0)N\bar{B}, \quad \Theta_{2,2} = Q + S - \tfrac{\alpha}{\delta_1}V - \tfrac{\beta}{\delta_1}V - M\bar{D} - (M\bar{D})^{T} + \varepsilon_1\bar{L} + \lambda J_1, \\
&\Theta_{2,3} = \tfrac{\alpha}{\delta_1}V + \tfrac{\beta}{\delta_1}V, \quad \Theta_{2,7} = M\bar{A}, \quad \Theta_{2,8} = \gamma_0 M\bar{B}, \quad \Theta_{2,9} = (1-\gamma_0)M\bar{B}, \\
&\Theta_{3,3} = -(1-\mu_1)Q - \tfrac{\alpha}{\delta_1}V - \tfrac{\alpha}{\delta_1}V - \tfrac{\beta}{\delta_1}V - \tfrac{\beta}{\delta_1}V + \varepsilon_2\bar{L} + \lambda J_2, \quad \Theta_{3,4} = \tfrac{\alpha}{\delta_1}V + \tfrac{\beta}{\delta_1}V, \\
&\Theta_{4,4} = R - S + U - \tfrac{\alpha}{\delta_1}V - \tfrac{\beta}{\delta_1}V - \tfrac{\alpha}{\delta_2-\delta_1}W - \tfrac{\beta}{\delta_2-\delta_1}W, \quad \Theta_{4,6} = \tfrac{\alpha}{\delta_2-\delta_1}W + \tfrac{\beta}{\delta_2-\delta_1}W, \\
&\Theta_{5,5} = -(1-\mu_2)R + \varepsilon_3\bar{L} + \lambda J_3, \quad \Theta_{6,6} = -U - \tfrac{\alpha}{\delta_2-\delta_1}W - \tfrac{\beta}{\delta_2-\delta_1}W, \\
&\Theta_{7,7} = -\varepsilon_1 I, \quad \Theta_{8,8} = -\varepsilon_2 I, \quad \Theta_{9,9} = -\varepsilon_3 I, \quad \Theta_{10,10} = -\tfrac{X}{\delta_1}, \quad \Theta_{11,11} = -\tfrac{Y}{\delta_2-\delta_1}.
\end{aligned}$$
By the Schur complement (Lemma 2.5), Θ + δ1ΛᵀVΛ + (δ2−δ1)ΛᵀWΛ < 0 is equivalent to Θ̄ < 0. Hence, if LMI (21) holds, then (45) gives E{LV(t, ϑt)} < 0, which means that the NNs (15) is globally asymptotically stable in the mean square. This completes the proof.
Remark 3.2. When γ(t) ≡ 1 (γ0 = 1), the system (12) becomes

$$dz(t) = [-Dz(t) + A g(z(t)) + B g(z(t-\delta_1(t)))]\,dt + \sigma(t, z(t), z(t-\delta_1(t)))\,d\omega(t), \tag{46}$$
where the time-varying delay satisfies 0 ≤ δ1(t) ≤ δ1 and δ̇1(t) ≤ µ1. Simultaneously, system (15) turns into
$$d\vartheta(t) = [-\bar{D}\vartheta(t) + \bar{A}\bar{g}(\vartheta(t)) + \bar{B}\bar{g}(\vartheta(t-\delta_1(t)))]\,dt + \bar{\varphi}(t)\,d\omega(t), \tag{47}$$
where φ̄(t) = [φ̄^R(t), φ̄^I(t)]ᵀ with
φ̄^R(t) = σ^R(t, x(t), x(t − δ1(t)), y(t), y(t − δ1(t))), φ̄^I(t) = σ^I(t, x(t), x(t − δ1(t)), y(t), y(t − δ1(t))).

By setting R = U = W = Y = 0 in the proof of Theorem 3.1, the following Corollary 3.3 can be obtained.
Corollary 3.3. Under Assumption 1, suppose the activation function can be separated into real and imaginary parts. For given scalars δ1, µ1 and α + β = 1, the NNs (47) is globally asymptotically stable in the mean square if there exist matrices P > 0, Q > 0, S > 0, V > 0, X > 0, matrices M, N of appropriate dimensions, and scalars ε1 > 0, ε2 > 0, λ > 0 such that the following LMIs hold:
$$P \le \lambda I, \tag{48}$$
$$\tilde{\Theta} = \big(\tilde{\Theta}_{i,j}\big)_{8 \times 8} < 0, \tag{49}$$
where Θ̃ is symmetric, all entries not listed below are zero, and
$$\begin{aligned}
&\tilde{\Theta}_{1,1} = \delta_1 X - N - N^{T}, \quad \tilde{\Theta}_{1,2} = P^{T} - M^{T} - N\bar{D}, \quad \tilde{\Theta}_{1,5} = N\bar{A}, \quad \tilde{\Theta}_{1,6} = N\bar{B}, \\
&\tilde{\Theta}_{2,2} = Q + S - \tfrac{\alpha}{\delta_1}V - \tfrac{\beta}{\delta_1}V - M\bar{D} - (M\bar{D})^{T} + \varepsilon_1\bar{L} + \lambda J_1, \quad \tilde{\Theta}_{2,3} = \tfrac{\alpha}{\delta_1}V + \tfrac{\beta}{\delta_1}V, \\
&\tilde{\Theta}_{2,5} = M\bar{A}, \quad \tilde{\Theta}_{2,6} = M\bar{B}, \quad \tilde{\Theta}_{2,8} = -\sqrt{\delta_1}\,\bar{D}V, \\
&\tilde{\Theta}_{3,3} = -(1-\mu_1)Q - \tfrac{\alpha}{\delta_1}V - \tfrac{\alpha}{\delta_1}V - \tfrac{\beta}{\delta_1}V - \tfrac{\beta}{\delta_1}V + \varepsilon_2\bar{L} + \lambda J_2, \quad \tilde{\Theta}_{3,4} = \tfrac{\alpha}{\delta_1}V + \tfrac{\beta}{\delta_1}V, \\
&\tilde{\Theta}_{4,4} = -S - \tfrac{\alpha}{\delta_1}V - \tfrac{\beta}{\delta_1}V, \quad \tilde{\Theta}_{5,5} = -\varepsilon_1 I, \quad \tilde{\Theta}_{5,8} = \sqrt{\delta_1}\,\bar{A}V, \\
&\tilde{\Theta}_{6,6} = -\varepsilon_2 I, \quad \tilde{\Theta}_{6,8} = \sqrt{\delta_1}\,\bar{B}V, \quad \tilde{\Theta}_{7,7} = -\tfrac{X}{\delta_1}, \quad \tilde{\Theta}_{8,8} = -V.
\end{aligned}$$

Remark 3.4. When γ(t) ≡ 1 (γ0 = 1) and there is no stochastic disturbance in (12), the system (12) turns into
$$dz(t) = [-Dz(t) + A g(z(t)) + B g(z(t-\delta_1(t)))]\,dt. \tag{50}$$
Simultaneously, system (15) then turns into
$$d\vartheta(t) = [-\bar{D}\vartheta(t) + \bar{A}\bar{g}(\vartheta(t)) + \bar{B}\bar{g}(\vartheta(t-\delta_1(t)))]\,dt. \tag{51}$$

By setting R = U = W = X = Y = M = N = 0 in the proof of Theorem 3.1, the following Corollary 3.5 can be obtained.
Corollary 3.5. Under Assumption 1, suppose the activation function can be separated into real and imaginary parts. For given scalars δ1, µ1 and α + β = 1, the NNs (51) is globally asymptotically stable if there exist matrices P > 0, Q > 0, S > 0, V > 0 and scalars ε1 > 0, ε2 > 0 such that the following LMI holds:
$$\hat{\Theta} = \big(\hat{\Theta}_{i,j}\big)_{6 \times 6} < 0, \tag{52}$$
where Θ̂ is symmetric, all entries not listed below are zero, and
$$\begin{aligned}
&\hat{\Theta}_{1,1} = -2P\bar{D} + Q + S - \tfrac{\alpha}{\delta_1}V - \tfrac{\beta}{\delta_1}V + \varepsilon_1\bar{L}, \quad \hat{\Theta}_{1,2} = \tfrac{\alpha}{\delta_1}V + \tfrac{\beta}{\delta_1}V, \quad \hat{\Theta}_{1,4} = P\bar{A}, \\
&\hat{\Theta}_{1,5} = P\bar{B}, \quad \hat{\Theta}_{1,6} = -\sqrt{\delta_1}\,\bar{D}V, \quad \hat{\Theta}_{2,2} = -(1-\mu_1)Q - \tfrac{\alpha}{\delta_1}V - \tfrac{\alpha}{\delta_1}V - \tfrac{\beta}{\delta_1}V - \tfrac{\beta}{\delta_1}V + \varepsilon_2\bar{L}, \\
&\hat{\Theta}_{2,3} = \tfrac{\alpha}{\delta_1}V + \tfrac{\beta}{\delta_1}V, \quad \hat{\Theta}_{3,3} = -S - \tfrac{\alpha}{\delta_1}V - \tfrac{\beta}{\delta_1}V, \quad \hat{\Theta}_{4,4} = -\varepsilon_1 I, \\
&\hat{\Theta}_{4,6} = \sqrt{\delta_1}\,\bar{A}V, \quad \hat{\Theta}_{5,5} = -\varepsilon_2 I, \quad \hat{\Theta}_{5,6} = \sqrt{\delta_1}\,\bar{B}V, \quad \hat{\Theta}_{6,6} = -V.
\end{aligned}$$
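Feasibility problems such as (21), (49), and (52) can be handed to any semidefinite-programming solver; this paper uses Matlab's LMI toolbox. As a hedged illustration of the same workflow in Python with CVXPY, the sketch below solves the classical Lyapunov LMI for a small hypothetical test matrix A (a stand-in, not one of the paper's systems); the block matrices Θ̄, Θ̃, Θ̂ above would be assembled the same way with cp.bmat:

```python
import cvxpy as cp
import numpy as np

A = np.array([[-4.0, 1.0], [0.5, -3.0]])       # hypothetical stable test matrix
n, eps = 2, 1e-6

P = cp.Variable((n, n), symmetric=True)
lam = cp.Variable(nonneg=True)
constraints = [P >> eps * np.eye(n),                   # P > 0
               P << lam * np.eye(n),                   # P <= lam*I, cf. LMI (20)
               A.T @ P + P @ A << -eps * np.eye(n)]    # Lyapunov LMI
prob = cp.Problem(cp.Minimize(lam), constraints)
prob.solve()
print(prob.status, P.value)                    # "optimal" plus a feasible P
```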

Remark 3.6. In the present results, the integral terms of the LKF derivative, namely −∫_{t−δ1}^{t} ϑ̇ᵀ(u)Vϑ̇(u)du and −∫_{t−δ2}^{t−δ1} ϑ̇ᵀ(u)Wϑ̇(u)du, have not been estimated crudely. By introducing arbitrary scalars 0 ≤ α ≤ 1 and 0 ≤ β ≤ 1 with α + β = 1, and treating the corresponding equivalent integral terms −α∫_{t−δ1}^{t−δ1(t)} ϑ̇ᵀ(u)Vϑ̇(u)du, −α∫_{t−δ1(t)}^{t} ϑ̇ᵀ(u)Vϑ̇(u)du, −β∫_{t−δ1(t)}^{t} ϑ̇ᵀ(u)Vϑ̇(u)du, −β∫_{t−δ1}^{t−δ1(t)} ϑ̇ᵀ(u)Vϑ̇(u)du, −α∫_{t−δ2}^{t−δ1} ϑ̇ᵀ(u)Wϑ̇(u)du, and −β∫_{t−δ2}^{t−δ1} ϑ̇ᵀ(u)Wϑ̇(u)du separately, the analysis provides a new pattern and considerable potential in practice.

Remark 3.7. In this paper, Theorem 3.1, Corollary 3.3, and Corollary 3.5 give global asymptotic stability criteria for a class of CVNNs with probabilistic time-varying delays by separating the complex-valued activation function into real and imaginary parts. When the activation function cannot be separated into real and imaginary parts, the obtained results are not applicable.

4 Illustrative examples

In this section, two numerical examples are provided to demonstrate the efficiency of the theoretical results.
Example 1: Consider the CVNNs (12) with the following parameters:
$$D = \begin{bmatrix} 4 & 0 \\ 0 & 4 \end{bmatrix}, \quad A = \begin{bmatrix} 1+i & -2+i \\ 1-i & -1 \end{bmatrix}, \quad B = \begin{bmatrix} 1+i & i \\ -1+i & 2 \end{bmatrix}, \quad L = \begin{bmatrix} \tfrac{1}{2} & 0 \\ 0 & \tfrac{1}{2} \end{bmatrix},$$
$$g_j(z_j(t)) = \frac{1-e^{-2x_j(t)-y_j(t)}}{1+e^{-2x_j(t)-y_j(t)}} + \frac{i}{1+e^{-x_j(t)+2y_j(t)}}, \quad j = 1, 2.$$
Then it is easy to verify that Assumption 5 is satisfied with
$$E_1 = F_1 = \begin{bmatrix} 0.005 & 0 \\ 0 & 0 \end{bmatrix}, \quad E_2 = F_2 = \begin{bmatrix} 0.005 & 0 \\ 0 & 0.005 \end{bmatrix}, \quad E_3 = F_3 = \begin{bmatrix} 0.005 & 0 \\ 0 & 0 \end{bmatrix},$$
$$E_4 = F_4 = \begin{bmatrix} 0 & 0 \\ 0 & 0.005 \end{bmatrix}, \quad E_5 = F_5 = \begin{bmatrix} 0.005 & 0 \\ 0 & 0.005 \end{bmatrix}, \quad E_6 = F_6 = \begin{bmatrix} 0.005 & 0 \\ 0 & 0 \end{bmatrix}.$$
Assume that gj(z(t)) = g_j^R(x(t), y(t)) + i g_j^I(x(t), y(t)), j = 1, 2. By a simple computation, we have
$$\bar{D} = \begin{bmatrix} 4 & 0 & 0 & 0 \\ 0 & 4 & 0 & 0 \\ 0 & 0 & 4 & 0 \\ 0 & 0 & 0 & 4 \end{bmatrix}, \quad \bar{A} = \begin{bmatrix} 1 & -2 & -1 & -1 \\ 1 & -1 & 1 & 0 \\ 1 & 1 & 1 & -2 \\ -1 & 0 & 1 & -1 \end{bmatrix}, \quad \bar{B} = \begin{bmatrix} 1 & 0 & -1 & -1 \\ -1 & 2 & -1 & 0 \\ 1 & 1 & 1 & 0 \\ 1 & 0 & -1 & 2 \end{bmatrix}, \quad \bar{L} = \begin{bmatrix} \tfrac{1}{2} & 0 & 0 & 0 \\ 0 & \tfrac{1}{2} & 0 & 0 \\ 0 & 0 & \tfrac{1}{2} & 0 \\ 0 & 0 & 0 & \tfrac{1}{2} \end{bmatrix}.$$
Take γ0 = 0.3, δ1 = 0.8, δ2 = 1.4, µ1 = 0.3, µ2 = 0.4, α = 0.5, β = 0.5. By applying Theorem 3.1, it is easy to verify that the system (15) is globally asymptotically stable in the mean square.


Furthermore, by solving LMIs (20)-(21), we obtain the following feasible solutions:
$$P = \begin{bmatrix} 17.8611 & 0.4685 & -0.0045 & 0.4981 \\ 0.4685 & 17.8980 & -0.5009 & 0.0005 \\ -0.0045 & -0.5009 & 17.8677 & 0.4668 \\ 0.4981 & 0.0005 & 0.4668 & 17.8955 \end{bmatrix}, \quad Q = \begin{bmatrix} 28.9930 & 2.3431 & 0.0253 & 0.8856 \\ 2.3431 & 26.5140 & -0.8888 & 0.0049 \\ 0.0253 & -0.8888 & 28.8597 & 2.3462 \\ 0.8856 & 0.0049 & 2.3462 & 26.5677 \end{bmatrix},$$
$$R = \begin{bmatrix} 44.1526 & 1.8692 & 0.3439 & 1.2020 \\ 1.8692 & 42.4764 & -1.2091 & 0.0012 \\ 0.3439 & -1.2091 & 43.4705 & 1.8643 \\ 1.2020 & 0.0012 & 1.8643 & 42.4615 \end{bmatrix}, \quad S = \begin{bmatrix} 36.4601 & 0.6572 & 0.3565 & 0.3454 \\ 0.6572 & 35.2918 & -0.3485 & 0.0003 \\ 0.3565 & -0.3485 & 35.7503 & 0.6560 \\ 0.3454 & 0.0003 & 0.6560 & 35.2860 \end{bmatrix},$$
$$U = \begin{bmatrix} 0.4926 & 0.1335 & -0.0000 & -0.0022 \\ 0.1335 & 0.2983 & 0.0026 & 0.0001 \\ -0.0000 & 0.0026 & 0.4945 & 0.1331 \\ -0.0022 & 0.0001 & 0.1331 & 0.2967 \end{bmatrix}, \quad V = \begin{bmatrix} 5.5979 & 0.9108 & -0.0074 & 0.5115 \\ 0.9108 & 5.0275 & -0.5130 & 0.0007 \\ -0.0074 & -0.5130 & 5.6170 & 0.9083 \\ 0.5115 & 0.0007 & 0.9083 & 5.0198 \end{bmatrix},$$
$$W = \begin{bmatrix} 3.4272 & 0.2347 & -0.0028 & 0.1477 \\ 0.2347 & 3.2476 & -0.1480 & 0.0006 \\ -0.0028 & -0.1480 & 3.4293 & 0.2361 \\ 0.1477 & 0.0006 & 0.2361 & 3.2475 \end{bmatrix}, \quad X = \begin{bmatrix} 0.5705 & 0.1528 & -0.0001 & 0.0048 \\ 0.1528 & 0.3606 & -0.0045 & 0.0001 \\ -0.0001 & -0.0045 & 0.5728 & 0.1525 \\ 0.0048 & 0.0001 & 0.1525 & 0.3588 \end{bmatrix},$$
$$Y = \begin{bmatrix} 4.0997 & 0.2546 & -0.0031 & 0.1605 \\ 0.2546 & 3.9042 & -0.1608 & 0.0006 \\ -0.0031 & -0.1608 & 4.1019 & 0.2561 \\ 0.1605 & 0.0006 & 0.2561 & 3.9041 \end{bmatrix}, \quad M = \begin{bmatrix} 24.8573 & 2.2935 & 0.0589 & -0.2632 \\ 2.2935 & 22.5678 & 0.2544 & -0.0184 \\ 0.0589 & 0.2544 & 24.7826 & 2.2897 \\ -0.2632 & -0.0184 & 2.2897 & 22.5788 \end{bmatrix},$$
$$N = \begin{bmatrix} 6.5394 & 0.8864 & -0.0024 & 0.0585 \\ 0.8864 & 5.5085 & -0.0594 & -0.0000 \\ -0.0024 & -0.0594 & 6.5363 & 0.8883 \\ 0.0585 & -0.0000 & 0.8883 & 5.5109 \end{bmatrix},$$
ε1 = 71.4675, ε2 = 20.6015, ε3 = 38.7058, λ = 43.8047.
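Reported solutions of this kind are easy to verify independently. For instance, the following short check (a Python alternative to the Matlab toolbox) confirms that the matrix P above is symmetric positive definite and satisfies LMI (20) with the reported λ:

```python
import numpy as np

P = np.array([[17.8611,  0.4685, -0.0045, 0.4981],
              [ 0.4685, 17.8980, -0.5009, 0.0005],
              [-0.0045, -0.5009, 17.8677, 0.4668],
              [ 0.4981,  0.0005,  0.4668, 17.8955]])
lam = 43.8047

posdef = lambda M: np.linalg.eigvalsh((M + M.T) / 2).min() > 0
print(posdef(P))                        # P > 0
print(posdef(lam * np.eye(4) - P))      # lam*I - P > 0, i.e. LMI (20) holds
```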

Example 2: Consider the CVNNs (50) with the following parameters:
$$D = \begin{bmatrix} 8 & 0 \\ 0 & 6 \end{bmatrix}, \quad A = \begin{bmatrix} 2+3i & 3-i \\ 4-2i & 1+2i \end{bmatrix}, \quad B = \begin{bmatrix} -1+2i & 2+i \\ 3-4i & -3+2i \end{bmatrix}, \quad L = \begin{bmatrix} 0.5 & 0 \\ 0 & 0.5 \end{bmatrix},$$
$$g_j(z_j(t)) = \frac{1-e^{-x_j(t)}}{1+e^{-x_j(t)}} + \frac{i}{1+e^{-y_j(t)}}, \quad j = 1, 2.$$
Assume that gj(z(t)) = g_j^R(x(t), y(t)) + i g_j^I(x(t), y(t)), j = 1, 2. By a simple computation, we have
$$\bar{D} = \begin{bmatrix} 8 & 0 & 0 & 0 \\ 0 & 6 & 0 & 0 \\ 0 & 0 & 8 & 0 \\ 0 & 0 & 0 & 6 \end{bmatrix}, \quad \bar{A} = \begin{bmatrix} 2 & 3 & -3 & 1 \\ 4 & 1 & 2 & -2 \\ 3 & -1 & 2 & 3 \\ -2 & 2 & 4 & 1 \end{bmatrix}, \quad \bar{B} = \begin{bmatrix} -1 & 2 & -2 & -1 \\ 3 & -3 & 4 & -2 \\ 2 & 1 & -1 & 2 \\ -4 & 2 & 3 & -3 \end{bmatrix}, \quad \bar{L} = \begin{bmatrix} 0.5 & 0 & 0 & 0 \\ 0 & 0.5 & 0 & 0 \\ 0 & 0 & 0.5 & 0 \\ 0 & 0 & 0 & 0.5 \end{bmatrix}.$$


Figure 1: Time responses of the real and imaginary parts of the states z1(t) and z2(t) of the system (50) in Example 2.

Take δ1 = 2.4, µ1 = 0.5, α = 0.5, β = 0.5. Solving LMI (52) in Corollary 3.5, we obtain the following feasible solutions:
$$P = \begin{bmatrix} 61.7449 & 4.2639 & -0.0000 & -2.3337 \\ 4.2639 & 59.5838 & 2.3337 & -0.0000 \\ -0.0000 & 2.3337 & 61.7449 & 4.2639 \\ -2.3337 & -0.0000 & 4.2639 & 59.5838 \end{bmatrix}, \quad Q = \begin{bmatrix} 10.7595 & 0.5199 & 0.0000 & 0.1728 \\ 0.5199 & 15.8700 & -0.1728 & -0.0000 \\ 0.0000 & -0.1728 & 10.7595 & 0.5199 \\ 0.1728 & -0.0000 & 0.5199 & 15.8700 \end{bmatrix},$$
$$S = \begin{bmatrix} 11.6632 & 6.5412 & 0.0000 & -3.5585 \\ 6.5412 & 8.4121 & 3.5585 & -0.0000 \\ 0.0000 & 3.5585 & 11.6632 & 6.5412 \\ -3.5585 & -0.0000 & 6.5412 & 8.4121 \end{bmatrix}, \quad V = \begin{bmatrix} 0.0219 & 0.0157 & -0.0000 & 0.0004 \\ 0.0157 & 0.0425 & -0.0004 & -0.0000 \\ -0.0000 & -0.0004 & 0.0219 & 0.0157 \\ 0.0004 & -0.0000 & 0.0157 & 0.0425 \end{bmatrix},$$
ε1 = 108.3385, ε2 = 108.9327.

Under the initial conditions z1(t) = x1(t) + iy1(t) = 3 − 5i and z2(t) = x2(t) + iy2(t) = −2 + 3i, the corresponding time responses of the states z1(t) and z2(t) of the system (50) are shown in Figure 1 and Figure 2, respectively. According to Corollary 3.5, the CVNNs (50) is globally asymptotically stable.

Figure 2: Time responses of the states z1(t) and z2(t) of the system (50) in Example 2.
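Trajectories like those in Figures 1 and 2 can be reproduced with a simple fixed-step Euler scheme for the delayed system (50). The sketch below uses the parameters of Example 2 and, as a simplifying assumption, treats the delay as the constant δ1 = 2.4, so the history equals the constant initial condition:

```python
import numpy as np

D = np.diag([8.0, 6.0])
A = np.array([[2+3j, 3-1j], [4-2j, 1+2j]])
B = np.array([[-1+2j, 2+1j], [3-4j, -3+2j]])

def g(z):                                   # activation of Example 2
    x, y = z.real, z.imag
    return (1 - np.exp(-x)) / (1 + np.exp(-x)) + 1j / (1 + np.exp(-y))

h, d1, T = 1e-3, 2.4, 2.0
lag, steps = int(d1 / h), int(T / h)
z = np.zeros((steps + lag + 1, 2), dtype=complex)
z[:lag + 1] = np.array([3 - 5j, -2 + 3j])   # constant history = initial condition

for k in range(lag, steps + lag):
    z[k + 1] = z[k] + h * (-D @ z[k] + A @ g(z[k]) + B @ g(z[k - lag]))
# real/imaginary parts of z[lag:] trace the curves plotted in Figures 1-2
```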

5 Conclusion

In this paper, we have dealt with the delay-dependent stability analysis problem for SCVNNs with probabilistic time-varying delays by separating the CVNNs into real and imaginary parts. By constructing an appropriate LKF and employing integral inequalities, several delay-dependent sufficient conditions have been derived in terms of simplified LMIs, which can be tested straightforwardly with the Matlab LMI control toolbox. Finally, two numerical examples are provided to demonstrate the effectiveness of the theoretical results. The proposed method is also applicable to the investigation of some other CVNNs, including fuzzy stochastic CVNNs and memristor-based stochastic CVNNs. Therefore, future work will concentrate on extending the proposed method to different stochastic CVNNs with some new methodologies.


Acknowledgements

This work was supported by the National Board for Higher Mathematics of India (Grant No. 02011/10/2019 NBHM (R.P) R&D II/1242). The authors are grateful to the handling editor and reviewers for their valuable suggestions and comments.

References

[1] J. Cao, Global asymptotic stability of neural networks with transmission delays, Int. J. Syst. Sci. 31 (2000) 1313-1316.
[2] S. Arik, An analysis of global asymptotic stability of delayed cellular neural networks, IEEE Trans. Neural Netw. 13 (2002) 1239-1242.
[3] O.M. Kwon, J.H. Park, New delay-dependent robust stability criterion for uncertain neural networks with time-varying delays, Appl. Math. Comput. 205 (2008) 417-427.
[4] P. Muthukumar, K. Subramanian, S. Lakshmanan, Robust finite time stabilization analysis for uncertain neural networks with leakage delay and probabilistic time-varying delays, J. Frankl. Inst. 353 (2016) 4091-4113.
[5] J.D.J. Rubio, Interpolation neural network model of a manufactured wind turbine, Neural Comput. Appl. 28 (2017) 2017-2028.
[6] Z. Lin, D. Ma, J. Meng, L. Chen, Relative ordering learning in spiking neural network for pattern recognition, Neurocomputing 275 (2018) 94-106.
[7] C. Mu, D. Wang, Neural-network-based adaptive guaranteed cost control of nonlinear dynamical systems with matched uncertainties, Neurocomputing 245 (2017) 46-54.

[8] J.H. Mathews, R.W. Howell, Complex Analysis for Mathematics and Engineering, Jones and Bartlett Publishers, 2012.
[9] S. Jankowski, A. Lozowski, J.M. Zurada, Complex-valued multistate neural associative memory, IEEE Trans. Neural Netw. 7 (1996) 1491-1496.
[10] S.L. Goh, M. Chen, D.H. Popovic, K. Aihara, D. Obradovic, D.P. Mandic, Complex-valued forecasting of wind profile, Renew. Energy 31 (2006) 1733-1750.
[11] T. Nishikawa, T. Iritani, K. Sakakibara, Y. Kuroe, Phase dynamics of complex-valued neural networks and its application to traffic signal control, Int. J. Neural Syst. 15 (2005) 111-120.
[12] Z. Wang, X. Liu, Exponential stability of impulsive complex-valued neural networks with time delay, Math. Comput. Simulat. 156 (2019) 143-157.
[13] D. Liu, S. Zhu, W. Chang, Input-to-state stability of memristor-based complex-valued neural networks with time delays, Neurocomputing 221 (2017) 159-167.
[14] D. Liu, S. Zhu, W. Chang, Mean square exponential input-to-state stability of stochastic memristive complex-valued neural networks with time varying delay, Int. J. Syst. Sci. 48 (2017) 1966-1977.
[15] S. Ramasamy, G. Nagamani, Dissipativity and passivity analysis for discrete-time complex-valued neural networks with leakage delay and probabilistic time-varying delays, Int. J. Adaptive Control Signal Process. 31 (2017) 876-902.
[16] H. Bao, J.H. Park, J. Cao, Synchronization of fractional-order complex-valued neural networks with time delay, Neural Netw. 81 (2016) 16-28.
[17] Y. Shi, J. Cao, G. Chen, Exponential stability of complex-valued memristor-based neural networks with time-varying delays, Appl. Math. Comput. 313 (2017) 222-234.
[18] W. Gong, J. Liang, X. Kan, X. Nie, Robust state estimation for delayed complex-valued neural networks, Neural Process. Lett. 46 (2017) 1009-1029.
[19] W. Gong, J. Liang, X. Kan, L. Wang, A.M. Dobaie, Robust state estimation for stochastic complex-valued neural networks with sampled-data, Neural Comput. Appl. (2018), doi.org/10.1007/s00521-017-3030-8.
[20] Z. Wang, L. Huang, Global stability analysis for delayed complex-valued BAM neural networks, Neurocomputing 173 (2016) 2083-2089.
[21] Z. Zhang, X. Liu, R. Guo, C. Lin, Finite-time stability for delayed complex-valued BAM neural networks, Neural Process. Lett. 48 (2018) 179-193.
[22] X. Chen, Z. Zhao, Q. Song, J. Hu, Multistability of complex-valued neural networks with time-varying delays, Appl. Math. Comput. 294 (2017) 18-35.
[23] K. Subramanian, P. Muthukumar, Global asymptotic stability of complex-valued neural networks with additive time-varying delays, Cogn. Neurodyn. 11 (2017) 293-306.


[24] Q. Song, Z. Zhao, Y. Liu, Stability analysis of complex-valued neural networks with probabilistic time-varying delays, Neurocomputing 159 (2015) 96-104.
[25] J. Liang, W. Gong, T. Huang, Multistability of complex-valued neural networks with discontinuous activation functions, Neural Netw. 84 (2016) 125-142.
[26] M. Tan, D. Xu, Multiple µ-stability analysis for memristor-based complex-valued neural networks with nonmonotonic piecewise nonlinear activation functions and unbounded time-varying delays, Neurocomputing 275 (2018) 2681-2701.
[27] H. Bao, J.H. Park, J. Cao, Exponential synchronization of coupled stochastic memristor-based neural networks with time-varying probabilistic delay coupling and impulsive delay, IEEE Trans. Neural Netw. Learn. Syst. 27 (2016) 190-201.
[28] H. Bao, J. Cao, Delay-distribution-dependent state estimation for discrete-time stochastic neural networks with random delay, Neural Netw. 24 (2011) 19-28.
[29] W. Gong, J. Liang, J. Cao, Global µ-stability of complex-valued delayed neural networks with leakage delay, Neurocomputing 168 (2015) 135-144.
[30] J. Liang, K. Li, Q. Song, Z. Zhao, Y. Liu, F.E. Alsaadi, State estimation of complex-valued neural networks with two additive time-varying delays, Neurocomputing 309 (2018) 54-61.
[31] R. Samidurai, R. Sriraman, J. Cao, Z. Tu, Effects of leakage delay on global asymptotic stability of complex-valued neural networks with interval time-varying delays via new complex-valued Jensen's inequality, Int. J. Adaptive Control Signal Process. 32 (2018) 1294-1312.
[32] R. Samidurai, R. Sriraman, S. Zhu, Leakage delay-dependent stability analysis for complex-valued neural networks with discrete and distributed time-varying delays, Neurocomputing 338 (2019) 262-273.
[33] Z. Tu, J. Cao, A. Alsaedi, F.E. Alsaadi, T. Hayat, Global Lagrange stability of complex-valued neural networks of neutral type with time-varying delays, Complexity 21 (2016) 438-450.
[34] R. Li, J. Cao, Z. Tu, Passivity analysis of memristive neural networks with probabilistic time-varying delays, Neurocomputing 191 (2016) 249-262.
[35] C. Pradeep, A. Chandrasekar, R. Murugesu, R. Rakkiyappan, Robust stability analysis of stochastic neural networks with Markovian jumping parameters and probabilistic time-varying delays, Complexity 21 (2016) 59-72.
[36] R. Rakkiyappan, A. Chandrasekar, S. Lakshmanan, J.H. Park, Exponential stability of Markovian jumping stochastic Cohen-Grossberg neural networks with mode-dependent probabilistic time-varying delays and impulses, Neurocomputing 131 (2014) 265-277.
[37] S. Blythe, X. Mao, X. Liao, Stability of stochastic delay neural networks, J. Frankl. Inst. 338 (2001) 481-495.
[38] Y. Chen, W. Zheng, Stability analysis of time-delay neural networks subject to stochastic perturbations, IEEE Trans. Cyber. 43 (2013) 2122-2134.


[39] H. Bao, J. Cao, Existence and uniqueness of solutions to neutral stochastic functional differential equations with infinite delay, Appl. Math. Comput. 215 (2009) 1732-1743.
[40] Q. Zhu, J. Cao, Mean-square exponential input-to-state stability of stochastic delayed neural networks, Neurocomputing 131 (2014) 157-163.
[41] K. Shi, X. Liu, H. Zhu, S. Zhong, Y. Liu, C. Yin, Novel integral inequality approach on master-slave synchronization of chaotic delayed Lur'e systems with sampled-data feedback control, Nonlinear Dyn. 83 (2016) 1259-1274.
[42] A. Arunkumar, R. Sakthivel, K. Mathiyalagan, Robust reliable H∞ control for stochastic neural networks with randomly occurring delays, Neurocomputing 149 (2015) 1524-1534.
[43] G. Liu, S.X. Yang, Y. Chai, W. Feng, W. Fu, Robust stability criteria for uncertain stochastic neural networks of neutral-type with interval time-varying delays, Neural Comput. Appl. 22 (2013) 349-359.
[44] J. Guo, Z. Meng, Z. Xiang, Passivity analysis of stochastic memristor-based complex-valued recurrent neural networks with mixed time-varying delays, Neural Process. Lett. 47 (2018) 1097-1113.
[45] S. Zhu, Y. Shen, Passivity analysis of stochastic delayed neural networks with Markovian switching, Neurocomputing 74 (2011) 1754-1761.
[46] Z. Zhang, X. Liu, J. Chen, R. Guo, S. Zhou, Further stability analysis for delayed complex-valued recurrent neural networks, Neurocomputing 251 (2017) 81-89.
