# Cauchy problem for least-squares estimation with semidegenerate covariance


Applied Mathematics Letters 12 (1999) 95-99

H. Natsuyama* and S. Ueno
Kyoto School of Computer Science, 7 Monzen-cho Tanaka, Sakyo-ku, Kyoto 606-8225, Japan
hnatsu@earthlink.net

(Received and accepted April 1998)

**Abstract** — A linear least-squares filtering problem with a semidegenerate integral equation is considered. An invariant imbedding treatment reduces this problem to a system of ordinary differential equations with initial conditions. The optimal estimate may then be evaluated in a real-time sequential manner through a simple algebraic formula. © 1999 Elsevier Science Ltd. All rights reserved.

**Keywords** — Detection, Estimation theory, Invariant imbedding, Integral equations.

## 1. INTRODUCTION

In detection and estimation theory [1,2], the solution of a Fredholm integral equation provides the least-squares optimal estimate. In [3], invariant imbedding was applied to the case of a displacement kernel, and in [4] to a kernel with degenerate covariance. In this paper, we consider covariances which have semidegenerate form. The Cauchy problem for the estimate is obtained through analytical derivation, and the computational method is discussed. The treatment of various types of integral equations is presented in [5].

## 2. INTEGRAL EQUATION FOR ESTIMATION OF LINEAR STOCHASTIC PROCESSES

Consider the stochastic process

$$y(t) = z(t) + v(t), \qquad 0 \le t \le T < \infty, \tag{1}$$

where $z(t)$ is an $n$-dimensional, stationary, zero-mean signal process with a covariance of the $n \times n$ matrix form

$$E\,[z(t)z^*(s)] = k(t,s), \qquad 0 \le t,\, s \le T. \tag{2}$$

In equation (2), the asterisk denotes the transpose operation, and $k(t,s)$ is assumed to be continuous in $t$ and $s$. Furthermore, $v(t)$ is an $n$-dimensional, additive, zero-mean, white noise process with unit spectral intensity, such that

$$E\,[v(t)] = 0, \qquad E\,[v(t)v^*(s)] = \delta(t - s), \tag{3}$$

*Author to whom all correspondence should be addressed. Permanent address: P.O. Box 9352, Brea, CA 92821, U.S.A.




and $z(t)$ is also such that

$$E\,[z(t)v^*(s)] = 0, \qquad 0 \le t < s. \tag{4}$$

Let the linear least-squares estimate of $z(t)$, based on the cumulative observations $\{y(s) : 0 \le s \le t \le T\}$, be denoted by $\hat z(t)$, which can be written

$$\hat z(t) = \int_0^t h(t,s)\, y(s)\, ds, \tag{5}$$

where $h(t,s)$ is called an impulse response function. The weighting function $h(t,s)$ satisfies the Wiener-Hopf type of Fredholm integral equation

$$h(t,s) = k(t,s) - \int_0^t k(y,s)\, h(t,y)\, dy, \qquad 0 \le s \le t \le T. \tag{6}$$
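For each fixed $t$, equation (6) is a Fredholm integral equation of the second kind in $s$, so $h(t,\cdot)$ can be approximated by a standard Nyström (quadrature) discretization. The sketch below is illustrative and not part of the paper; for the assumed constant scalar covariance $k \equiv 1$ the exact solution is $h(t,s) = 1/(1+t)$, which the discretization reproduces.

```python
import numpy as np

def solve_h(k, t, n=400):
    """Nystrom (trapezoidal-rule) solution of the Fredholm equation
    h(t,s) = k(t,s) - int_0^t k(y,s) h(t,y) dy  for h(t, .) on [0, t]."""
    s = np.linspace(0.0, t, n)                       # nodes for both s and y
    w = np.full(n, t / (n - 1))                      # trapezoid weights
    w[0] = w[-1] = t / (2 * (n - 1))
    # Collocating at s_i gives (I + A) h = k(t, s),  A[i, j] = w_j * k(y_j, s_i).
    A = w[None, :] * k(s[None, :], s[:, None])
    return s, np.linalg.solve(np.eye(n) + A, k(t, s))

# Assumed test case: constant covariance k = 1, exact h(t, s) = 1/(1 + t).
t = 2.0
s, h = solve_h(lambda y, x: np.ones_like(y * x), t)
```

The same routine applies to any continuous covariance; only the quadrature rule and grid resolution are design choices here.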

The optimal linear least-squares estimator of (5) minimizes the least-squares error. In view of equation (5), an evaluation of the $h$-function of (6) is required to make the optimal estimation of such linear stochastic processes. Next, we develop an algorithm to determine the estimate.

## 3. INVARIANT IMBEDDING AND THE INITIAL-VALUE PROBLEM

Let the covariance be expressed in terms of a semidegenerate kernel,

$$k(z,s) = \sum_{m=1}^{M} a(m,z)\, b(m,s), \qquad 0 \le s \le z,$$
$$\phantom{k(z,s)} = \sum_{n=1}^{N} c(n,z)\, d(n,s), \qquad 0 \le z \le s. \tag{7}$$
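As a concrete instance (an assumed example, not from the paper), the stationary exponential covariance $e^{-|z-s|}$ is semidegenerate with $M = N = 1$: take $a(1,z) = e^{-z}$, $b(1,s) = e^{s}$, $c(1,z) = e^{z}$, $d(1,s) = e^{-s}$. A minimal sketch:

```python
import numpy as np

# Assumed illustrative factors: exp(-|z - s|) splits as
# e^{-z} e^{s} for s <= z  and  e^{z} e^{-s} for z <= s.
a = lambda m, z: np.exp(-z)
b = lambda m, s: np.exp(s)
c = lambda m, z: np.exp(z)
d = lambda m, s: np.exp(-s)

def k(z, s, M=1, N=1):
    """Evaluate a semidegenerate kernel (7) from its factor functions."""
    if s <= z:
        return sum(a(m, z) * b(m, s) for m in range(1, M + 1))
    return sum(c(n, z) * d(n, s) for n in range(1, N + 1))
```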

In a manner similar to that given in our preceding paper [4], we introduce the Fredholm resolvent, the $K$-function, governed by the following equations:

$$K(z,s;t) = k(z,s) - \int_0^t k(z,y)\, K(y,s;t)\, dy, \tag{8}$$

$$K(z,s;t) = k(z,s) - \int_0^t K(z,y;t)\, k(y,s)\, dy, \tag{9}$$

where $0 \le z,\, s \le t \le T$, and the kernel $k$-function is the covariance. Furthermore, introduce a set of auxiliary functions which satisfy the integral equations

$$I(n;z,t) = c(n,z) - \int_0^t k(z,y)\, I(n;y,t)\, dy, \tag{10}$$

$$J(m;s,t) = b(m,s) - \int_0^t J(m;y,t)\, k(y,s)\, dy. \tag{11}$$

On the other hand, the $I$- and $J$-functions are expressed in terms of the resolvent $K$-function,

$$I(n;z,t) = c(n,z) - \int_0^t K(z,y;t)\, c(n,y)\, dy, \tag{12}$$

$$J(m;s,t) = b(m,s) - \int_0^t b(m,z)\, K(z,s;t)\, dz, \tag{13}$$

where $n = 1, 2, \ldots, N$, and $m = 1, 2, \ldots, M$. We introduce

$$\Psi(z,t) = K(z,t;t), \qquad 0 \le z \le t \le T, \tag{14}$$

$$\Phi(s,t) = K(t,s;t), \qquad 0 \le s \le t \le T. \tag{15}$$


Two useful relations, obtained by setting $z = t$ in (12) and $s = t$ in (13), are

$$I(n,t;t) = c(n,t) - \int_0^t \Phi(y,t)\, c(n,y)\, dy \tag{16}$$

and

$$J(m,t;t) = b(m,t) - \int_0^t b(m,y)\, \Psi(y,t)\, dy. \tag{17}$$

On differentiating equation (9) with respect to $t$, we have

$$K_t(z,s;t) = -\Psi(z,t)\, k(t,s) - \int_0^t K_t(z,y;t)\, k(y,s)\, dy, \tag{18}$$

where the subscript $t$ denotes differentiation with respect to $t$. Putting $z = t$ in equation (9) and recalling equation (15), we get

$$\Phi(s,t) = k(t,s) - \int_0^t \Phi(y,t)\, k(y,s)\, dy. \tag{19}$$

On regarding equation (18) as an integral equation for $K_t(z,s;t)$, based on the superposition principle we obtain

$$K_t(z,s;t) = -\Psi(z,t)\, \Phi(s,t), \tag{20}$$

where $t \ge \max(z,s)$. Once the $\Phi$- and $\Psi$-functions have been computed, the resolvent $K$-function can be determined by the Bellman-Krein type formula (20). On recalling equations (12) through (15), after some algebraic manipulations, we see that

$$\Psi(z,t) = \sum_{n=1}^{N} d(n,t)\, I(n;z,t), \tag{21}$$

$$h(t,s) = \Phi(s,t) = \sum_{m=1}^{M} a(m,t)\, J(m;s,t). \tag{22}$$
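The Bellman-Krein type formula (20) can be checked numerically: compute the resolvent of (9) on a grid by a Nyström discretization at $t - \delta$, $t$, and $t + \delta$, and compare a central difference in $t$ against $-\Psi(z,t)\Phi(s,t)$. The exponential covariance used here is an assumed example; this sketch is not part of the paper.

```python
import numpy as np

def resolvent(kfun, t, dx):
    """Nystrom resolvent K(z,s;t) of K = k - int_0^t K(z,y;t) k(y,s) dy
    on the uniform grid {0, dx, 2*dx, ..., t} (trapezoidal rule)."""
    y = np.arange(0.0, t + dx / 2, dx)
    n = len(y)
    w = np.full(n, dx); w[0] = w[-1] = dx / 2
    Kc = kfun(y[:, None], y[None, :])                 # kernel matrix k(y_i, y_j)
    # Collocation gives K (I + W Kc) = Kc with W = diag(w); solve transposed.
    return np.linalg.solve((np.eye(n) + w[:, None] * Kc).T, Kc.T).T

kfun = lambda z, s: np.exp(-np.abs(z - s))            # assumed exponential covariance
t, dx = 1.0, 0.005
Km = resolvent(kfun, t, dx)                           # K(., .; t)
Kp = resolvent(kfun, t + dx, dx)                      # K(., .; t + dx)
Kn = resolvent(kfun, t - dx, dx)                      # K(., .; t - dx)
i, j = 80, 120                                        # z = 0.4, s = 0.6 on every grid
Kt = (Kp[i, j] - Kn[i, j]) / (2 * dx)                 # central difference in t
psi, phi = Km[i, -1], Km[-1, j]                       # Psi(z,t) = K(z,t;t), Phi(s,t) = K(t,s;t)
```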

m=l Upon differentiation of equation (10) with respect to t, recalling (21), we have the differential equation for I,

z,(n; z, t) = - I ( n ; t, t)¢(z, t).

(23)

Similarly, we have, from (11), the differential equation for J,

Jr(m; s, t) = - J ( m ; t, t)g~(s, t).

(24)

Introduce the $R$-function

$$R(i,j;t) = -\int_0^t b(i,z)\, I(j;z,t)\, dz \tag{25}$$

and the $S$-function

$$S(i,j;t) = -\int_0^t J(i;y,t)\, c(j,y)\, dy, \tag{26}$$

for $i = 1, 2, \ldots, M$; $j = 1, 2, \ldots, N$. On differentiating (25) with respect to $t$, using (23) and (17), we have the derivative of $R$ expressed as

$$R_t(i,j;t) = -I(j,t;t)\, J(i,t;t). \tag{27}$$


Similarly, from (26), (24), and (16), we obtain

$$S_t(i,j;t) = -J(i,t;t)\, I(j,t;t). \tag{28}$$

The $R$- and $S$-functions obey the same differential equation and have the same initial conditions,

$$R(i,j;0) = S(i,j;0) = 0. \tag{29}$$

We conclude that

$$R(i,j;t) = S(i,j;t). \tag{30}$$

From (16) and (22), we know that

$$I(j,t;t) = c(j,t) + \sum_{m=1}^{M} a(m,t)\, S(m,j;t), \tag{31}$$

and from (17) and (21),

$$J(i,t;t) = b(i,t) + \sum_{n=1}^{N} R(i,n;t)\, d(n,t). \tag{32}$$

Then the initial-value problem for the $R$-function is

$$R_t(i,j;t) = -\left[\, b(i,t) + \sum_{n=1}^{N} R(i,n;t)\, d(n,t) \right] \left[\, c(j,t) + \sum_{m=1}^{M} a(m,t)\, S(m,j;t) \right], \tag{33}$$

$$R(i,j;0) = 0, \tag{34}$$

for $i = 1, 2, \ldots, M$; $j = 1, 2, \ldots, N$, where $S(m,j;t) = R(m,j;t)$ by (30).

## 4. THE REAL-TIME ESTIMATE

On inserting equation (22) into equation (5), we see that

$$\hat z(t) = \sum_{m=1}^{M} a(m,t)\, L(m,t), \tag{35}$$

where

$$L(m,t) = \int_0^t J(m;s,t)\, y(s)\, ds. \tag{36}$$

Differentiating equation (36) with respect to $t$ and recalling equation (24), we have the differential equation for $L$,

$$L_t(m,t) = J(m,t;t) \left[\, y(t) - \sum_{j=1}^{M} a(j,t)\, L(j,t) \right], \tag{37}$$

together with the initial condition

$$L(m,0) = 0, \tag{38}$$

where $J(m,t;t)$ in equation (37) is given by (32). The real-time solution $L$-function can be evaluated by integration of the Cauchy system of (33), (34), (37), and (38). The optimal estimate $\hat z$ is then evaluated using (35), without computing the impulse response function.

## 5. COMPUTATIONAL METHOD

The computational solution consists of the numerical integration of the system of ordinary differential equations for the $R$- and $L$-functions,

$$R_t(i,j;t) = -\left[\, b(i,t) + \sum_{n=1}^{N} R(i,n;t)\, d(n,t) \right] \left[\, c(j,t) + \sum_{m=1}^{M} a(m,t)\, S(m,j;t) \right], \tag{39}$$

for $i = 1, 2, \ldots, M$; $j = 1, 2, \ldots, N$, and

$$L_t(m,t) = J(m,t;t) \left[\, y(t) - \sum_{j=1}^{M} a(j,t)\, L(j,t) \right], \tag{40}$$

for $t \ge 0$, with initial conditions

$$R(i,j;0) = 0, \tag{41}$$

$$L(m,0) = 0. \tag{42}$$

Then the optimal real-time estimate is given by the evaluation of

$$\hat z(t) = \sum_{m=1}^{M} a(m,t)\, L(m,t). \tag{43}$$
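As an illustration of the computational method (an assumed toy case, not from the paper), take the scalar constant covariance $a = b = c = d = 1$, $k \equiv 1$, with observations $y(t) \equiv 1$. Then $h(t,s) = 1/(1+t)$ and the batch estimate from (5) is $\hat z(t) = t/(1+t)$, which a Runge-Kutta integration of the Cauchy system must reproduce; note that differentiating the definition of $R$ gives $R_t = -J(t,t)\,I(t,t)$, consistent with $R(t) = -t/(1+t)$ here.

```python
import numpy as np

# RK4 integration of the scalar (M = N = 1) Cauchy system for the
# assumed constant covariance a = b = c = d = 1, k(z,s) = 1, y(t) = 1.
def rhs(state, y_obs):
    R, L = state
    J_tt = 1.0 + R                  # J(1,t;t) = b + R d, using S = R
    I_tt = 1.0 + R                  # I(1,t;t) = c + a S
    return np.array([-J_tt * I_tt,               # dR/dt
                     J_tt * (y_obs - 1.0 * L)])  # dL/dt = J(t,t) [y(t) - a L]

T, dt = 2.0, 1e-3
state = np.zeros(2)                 # R(0) = 0, L(0) = 0
for _ in range(int(round(T / dt))):
    y_obs = 1.0                     # streaming observation y(t)
    k1 = rhs(state, y_obs)
    k2 = rhs(state + dt / 2 * k1, y_obs)
    k3 = rhs(state + dt / 2 * k2, y_obs)
    k4 = rhs(state + dt * k3, y_obs)
    state = state + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
z_hat = 1.0 * state[1]              # (43): z_hat(T) = a(1,T) L(1,T)
```

The sequential character of the method is visible in the loop: each observation $y(t)$ is consumed once, and the estimate is available at every step without re-solving an integral equation.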

## 6. DISCUSSION

A sequential solution for the optimal estimate in the case of semidegenerate covariance has been obtained. In our subsequent paper, we shall show how to numerically compute the $\Phi$-function. On extending the one-dimensional optimal estimate to the two-dimensional case, we shall be able to have a real-time solution for two-dimensional image restoration.

## REFERENCES

1. H.L. Van Trees, Detection, Estimation, and Modulation Theory, Part I, Wiley, New York, (1968).
2. T. Kailath, A view of three decades of linear filtering theory, IEEE Trans. Inform. Theory IT-20, 146-181 (1974).
3. J.L. Casti, R. Kalaba and V.K. Murthy, A new initial-value method for on-line filtering and estimation, IEEE Trans. Inform. Theory IT-18, 515-518 (1972).
4. S. Ueno, An initial-value solution of the least-squares estimation problem with degenerate covariance, Appl. Math. Comput. 13, 87-94 (1983).
5. H.H. Kagiwada and R.E. Kalaba, Integral Equations via Imbedding Methods, Addison-Wesley, (1975).