Equalities of various estimators in the general growth curve model and the restricted growth curve model

Guangjing Song, Haixia Chang

Accepted manuscript. PII: S0378-3758(15)00158-5. DOI: http://dx.doi.org/10.1016/j.jspi.2015.09.003. Reference: JSPI 5417.

To appear in: Journal of Statistical Planning and Inference.

Received 9 September 2014; revised 20 June 2015; accepted 3 September 2015.

Please cite this article as: Song, G., Chang, H., Equalities of various estimators in the general growth curve model and the restricted growth curve model. J. Statist. Plann. Inference (2015), http://dx.doi.org/10.1016/j.jspi.2015.09.003

Guangjing Song^a, Haixia Chang^b,∗

^a School of Mathematics and Information Sciences, Weifang University, Weifang 261061, P.R. China
^b School of Statistics and Mathematics, Shanghai Finance University, Shanghai 201209, P.R. China

Abstract: We show some equalities for three estimators: (1) the ordinary least-squares estimator (OLSE), (2) the weighted least-squares estimator (WLSE), and (3) the best linear unbiased estimator (BLUE), under the general growth curve model

(A)  µ1 = {Y, X1ΘX2, σ²(Σ2 ⊗ Σ1)}

and the restricted growth curve model

(B)  µ2 = {Y, X1ΘX2 | A1Θ = B1, ΘA2 = B2, σ²(Σ2 ⊗ Σ1)},

respectively.

Keywords: Ordinary least-squares estimator; Weighted least-squares estimator; Best linear unbiased estimator; Restricted growth curve model

2000 AMS subject classifications: 62F11; 62H12

1. Introduction

Let R^{m×n} and R_≥^m denote the sets of all m × n real matrices and of all m × m real nonnegative definite matrices, respectively. The symbols A′, r(A), tr(A) and R(A) stand for the transpose, the rank, the trace and the range (column space) of a matrix A ∈ R^{m×n}, respectively. The Kronecker product of A ∈ R^{m×n} and B ∈ R^{p×q} is A ⊗ B = (a_{ij}B). For a matrix A = [α1, ..., αn] ∈ R^{m×n}, vec(A) = [α1′, ..., αn′]′. The Moore-Penrose inverse of A ∈ R^{m×n}, denoted by A†, is the unique solution G of the four matrix equations (i) AGA = A, (ii) GAG = G, (iii) (AG)′ = AG, (iv) (GA)′ = GA. Furthermore, P_A = AA†, Q_A = A†A, R_A = I_m − AA† and L_A = I_n − A†A are the orthogonal projectors induced by A.

In statistics, a well-known growth curve model, a special type of linear longitudinal data model, is

Y = X1ΘX2 + ε,  E(ε) = 0,  Cov(vec(ε)) = σ²(Σ2 ⊗ Σ1),

which can be written as

µ1 = {Y, X1ΘX2, σ²(Σ2 ⊗ Σ1)},   (1.1)
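The projector notation above and the Kronecker-structured covariance in (1.1) can be checked numerically. The following is a minimal sketch assuming NumPy, with randomly generated matrices for illustration only: it verifies the four Penrose equations and the projector identities, and confirms that if ε = Σ1^{1/2} E (Σ2^{1/2})′ with vec(E) having identity covariance, then Cov(vec(ε)) = Σ2 ⊗ Σ1.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 5, 3

# Moore-Penrose inverse and the induced orthogonal projectors
A = rng.standard_normal((n, p))
Ad = np.linalg.pinv(A)              # A†: unique solution of the four Penrose equations
P_A = A @ Ad                        # P_A = A A†   (projector onto the column space of A)
Q_A = Ad @ A                        # Q_A = A† A   (projector onto the row space of A)
R_A = np.eye(n) - A @ Ad            # R_A = I_m − A A†
L_A = np.eye(p) - Ad @ A            # L_A = I_n − A† A

assert np.allclose(A @ Ad @ A, A)         # (i)  AGA = A
assert np.allclose(Ad @ A @ Ad, Ad)       # (ii) GAG = G
assert np.allclose(P_A, P_A.T) and np.allclose(P_A @ P_A, P_A)   # (iii), idempotent
assert np.allclose(Q_A, Q_A.T) and np.allclose(Q_A @ Q_A, Q_A)   # (iv), idempotent
assert np.allclose(R_A @ A, 0) and np.allclose(A @ L_A, 0)       # annihilators

# Kronecker-structured covariance: vec(S1h E S2h') = (S2h ⊗ S1h) vec(E),
# so its covariance is (S2h S2h') ⊗ (S1h S1h') = Σ2 ⊗ Σ1
S1h = rng.standard_normal((n, n)); Sigma1 = S1h @ S1h.T
S2h = rng.standard_normal((p, p)); Sigma2 = S2h @ S2h.T
cov_vec = np.kron(S2h, S1h) @ np.kron(S2h, S1h).T
assert np.allclose(cov_vec, np.kron(Sigma2, Sigma1))
```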

where Y is an observable random matrix, X1 and X2 are two known model matrices, Θ is an unknown parameter matrix to be estimated, Σ1 and Σ2 are two given nonnegative definite matrices, and σ² is an unknown positive scalar. If one of Σ1 and Σ2 is singular, (1.1) is called a singular growth curve model. The growth curve model was originally introduced by Potthoff & Roy (1964) and subsequently studied by many authors, such as Rao (1965, 1966), Khatri (1966), Woolson & Leeper (1980), Seber (1985), Pan & Fang (2002), Tian & Takane (2007, 2009), Song (2011) and Anna (2013). The model has been widely applied in many areas, such as medicine, biology, psychology, education, economics, agriculture and engineering. For example, clinical trials of new pharmaceutical drugs, agricultural investigations and business surveys are conducted through repeated sampling of the basic experimental units over time (Kshirsagar and Smith, 1995).

[Footnote 1] This research was supported by the National Natural Science Foundation of China (No. 11326066), the Doctoral Program of Shandong Province (BS2013SF011), and the Scientific Research Foundation of Shanghai Finance University (SHFUKT13-08). ∗ Corresponding author. Email addresses: [email protected] (G. Song), [email protected] (H. Chang)

To compare two estimators of an unknown parameter vector under the general Gauss-Markov model µ0 = {y, Xβ, σ²Σ}, we introduce an efficiency function as follows. The D-relative efficiency characterizes the relation between two unbiased estimators L1y and L2y of β with both Cov(L1y) and Cov(L2y) nonsingular:

eff_D(L1y, L2y) = det(Cov(L2y)) / det(Cov(L1y)),

which is the most frequently used. If eff_D(L1y, L2y) = 1, i.e., det(Cov(L2y)) = det(Cov(L1y)), then the two estimators L1y and L2y are said to have the same D-efficiency. Baksalary & Puntanen (1989) and Puntanen & Styan (1989) give a definition under which two estimators have the same efficiency if Cov(L1y) = Cov(L2y). Moreover, two estimators are said to be identical with probability 1 if Cov(L1y − L2y) = 0, which can be found in Tian (2006). We give the following definition to characterize the equality of different estimators in the growth curve model.

Definition 1.1. Suppose that A1YA2 and B1YB2 are two unbiased linear estimators of X1ΘX2 in (1.1).
(1) The two estimators are said to have the same efficiency if Cov(A1YA2) = Cov(B1YB2), which is denoted by A1YA2 ∼C B1YB2.
(2) The two estimators are said to be identical (coincide) with probability 1 if Cov(A1YA2 − B1YB2) = 0, which is denoted by A1YA2 ∼P B1YB2.

It follows from Definition 1.1 that

Cov(A1YA2) = Cov(B1YB2)
⇔ (A2′Σ2A2) ⊗ (A1Σ1A1′) = (B2′Σ2B2) ⊗ (B1Σ1B1′)
⇔ A2′Σ2A2 = l1B2′Σ2B2, A1Σ1A1′ = l2B1Σ1B1′, l1l2 = 1,

and

Cov(A1YA2 − B1YB2)
= Cov(A1YA2) + Cov(B1YB2) − Cov(A1YA2, B1YB2) − Cov(B1YB2, A1YA2)
= (A2′Σ2A2) ⊗ (A1Σ1A1′) + (B2′Σ2B2) ⊗ (B1Σ1B1′) − (A2′Σ2B2) ⊗ (A1Σ1B1′) − (B2′Σ2A2) ⊗ (B1Σ1A1′)
= (A2′ ⊗ A1 − B2′ ⊗ B1)(Σ2 ⊗ Σ1)(A2′ ⊗ A1 − B2′ ⊗ B1)′.

Then A1YA2 ∼P B1YB2 if and only if (A2′Σ2) ⊗ (A1Σ1) − (B2′Σ2) ⊗ (B1Σ1) = 0, which is equivalent to

A2′Σ2 = p1B2′Σ2,  A1Σ1 = p2B1Σ1,  p1p2 = 1.

It is easy to obtain from the above that

A1YA2 ∼P B1YB2 ⇒ A1YA2 ∼C B1YB2.
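The criterion for equality with probability 1 can be illustrated numerically. A minimal sketch assuming NumPy: Σ1 is taken singular so that A1 can differ from B1 inside the null space of Σ1 without changing A1Σ1 (here p1 = p2 = 1); the quadratic form giving Cov(A1YA2 − B1YB2) then vanishes identically.

```python
import numpy as np

rng = np.random.default_rng(1)
n, p = 4, 3

# Singular Σ1, so A1 and B1 can differ while A1 Σ1 = B1 Σ1
U = rng.standard_normal((n, 2)); Sigma1 = U @ U.T      # rank 2, nonnegative definite
Sigma2 = np.eye(p)
B1 = rng.standard_normal((n, n)); B2 = rng.standard_normal((p, p))

# Perturb B1 inside null(Σ1): C Σ1 = 0, hence A1 Σ1 = B1 Σ1
N = np.eye(n) - Sigma1 @ np.linalg.pinv(Sigma1)        # projector onto null(Σ1)
A1 = B1 + rng.standard_normal((n, n)) @ N
A2 = B2                                                # A2'Σ2 = B2'Σ2 trivially

# Cov(A1 Y A2 − B1 Y B2) = D (Σ2 ⊗ Σ1) D'  with  D = A2'⊗A1 − B2'⊗B1
D = np.kron(A2.T, A1) - np.kron(B2.T, B1)
cov_diff = D @ np.kron(Sigma2, Sigma1) @ D.T
assert np.allclose(A1 @ Sigma1, B1 @ Sigma1)
assert np.allclose(cov_diff, 0)                        # identical with probability 1
```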


Parameters in regression models often arise with restrictions. The natural restriction on the unknown parameter vector in a singular linear model was proposed by Rao (1973); many discussions of natural restrictions in singular linear models can be found in Baksalary et al. (1992) and Groß (2004). Besides the natural restrictions, there are two well-known explicit restrictions on the unknown parameter matrix Θ, given by

A1Θ = B1,  ΘA2 = B2,   (1.2)

where A1, A2, B1, B2 are known matrices. In such a case, the model (1.1) with (1.2) can be represented as

µ2 = {Y, X1ΘX2 | A1Θ = B1, ΘA2 = B2, σ²(Σ2 ⊗ Σ1)}.   (1.3)
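When the pair of restrictions (1.2) is consistent, its general solution has the form Θ = A1†B1 + L_{A1}B2A2† + L_{A1}T R_{A2} with T arbitrary (this representation, after Mitra (1984), is used again in Section 3). A minimal numerical sketch, assuming NumPy and hypothetical small dimensions, with B1 and B2 built from a "true" Θ0 so that the compatibility condition A1B2 = B1A2 holds automatically:

```python
import numpy as np

rng = np.random.default_rng(2)
p, q = 4, 4                       # Θ is p × q (hypothetical sizes for illustration)
A1 = rng.standard_normal((2, p)); A2 = rng.standard_normal((q, 3))

# Consistent right-hand sides generated from a "true" Θ0
Theta0 = rng.standard_normal((p, q))
B1 = A1 @ Theta0                  # A1 Θ = B1
B2 = Theta0 @ A2                  # Θ A2 = B2

# General solution Θ = A1†B1 + L_{A1} B2 A2† + L_{A1} T R_{A2}
A1d, A2d = np.linalg.pinv(A1), np.linalg.pinv(A2)
L_A1 = np.eye(p) - A1d @ A1
R_A2 = np.eye(q) - A2 @ A2d
T = rng.standard_normal((p, q))   # arbitrary
Theta = A1d @ B1 + L_A1 @ B2 @ A2d + L_A1 @ T @ R_A2

assert np.allclose(A1 @ Theta, B1)   # first restriction holds
assert np.allclose(Theta @ A2, B2)   # second restriction holds
```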

Recently, Song & Wang (2014) derived the expressions of the weighted least-squares, the ordinary least-squares and the best linear unbiased estimators under (1.3). Motivated by the work mentioned above, and in view of the applications and interest of the restricted general growth model, we mainly investigate the following basic problems. (1) Equality of the ordinary least-squares estimators (OLSEs), the weighted least-squares estimators (WLSEs) and the best linear unbiased estimator (BLUE) of some parametric functions under (1.1). (2) Relations of the OLSEs, WLSEs and BLUE of some parametric functions in (1.1) and (1.3).

2. Equalities of OLSE, WLSE and BLUE in (1.1)

Suppose that Z ∈ R^{n×m}, V1 ∈ R_≥^n, V2 ∈ R_≥^m, and denote

f(Z) = tr(Z′V1ZV2) = vec(Z)′(V2 ⊗ V1)vec(Z).   (2.1)
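The two forms of the weighted loss in (2.1) agree, as a quick numerical sketch (assuming NumPy, random nonnegative definite weights) confirms; note that vec(·) stacks columns, which corresponds to Fortran-order reshaping.

```python
import numpy as np

rng = np.random.default_rng(3)
n, m = 4, 3
Z = rng.standard_normal((n, m))
W1 = rng.standard_normal((n, n)); V1 = W1 @ W1.T   # nonnegative definite weight
W2 = rng.standard_normal((m, m)); V2 = W2 @ W2.T

vecZ = Z.reshape(-1, order="F")                    # vec(Z): stack the columns
lhs = np.trace(Z.T @ V1 @ Z @ V2)                  # tr(Z' V1 Z V2)
rhs = vecZ @ np.kron(V2, V1) @ vecZ                # vec(Z)' (V2 ⊗ V1) vec(Z)
assert np.isclose(lhs, rhs)
```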

The WLSE of Θ under (1.1) with respect to the loss function in (2.1), denoted by WLSE(Θ), is defined to be

Θ̃ = arg min_Θ f(Y − X1ΘX2),   (2.2)

and the WLSE of X1ΘX2 under (1.1) is WLSE(X1ΘX2) = X1 WLSE(Θ) X2. The normal equation of (2.2) is given by

X1′V1X1 Θ X2V2X2′ = X1′V1Y V2X2′,

which is always consistent. In order to study the relations between different estimators, we give the following lemma. (Below, [A, B] denotes a row-block matrix and [A; B] a column-stacked block matrix.)

Lemma 2.1. (1) Given X1 ∈ R^{n×p} and Σ1 ∈ R_≥^n, let P_{X1‖Σ1} be a solution of the matrix equation

G1 [X1, Σ1R_{X1}] = [X1, 0]

for G1. Then

P_{X1‖Σ1} = [X1, 0][X1, Σ1R_{X1}]† + U1( I − [X1, Σ1R_{X1}][X1, Σ1R_{X1}]† ),

where U1 is arbitrary with appropriate size. The product P_{X1‖Σ1}Σ1 is unique and can be expressed as

P_{X1‖Σ1}Σ1 = P_{X1}Σ1 − P_{X1}Σ1(R_{X1}Σ1R_{X1})†Σ1.   (2.3)


(2) Given X2 ∈ R^{q×m} and Σ2 ∈ R_≥^m, let Q_{Σ2‖X2} be a solution of the matrix equation

[X2; L_{X2}Σ2] G2 = [X2; 0]

for G2. Then

Q_{Σ2‖X2} = [X2; L_{X2}Σ2]†[X2; 0] + ( I − [X2; L_{X2}Σ2]†[X2; L_{X2}Σ2] )U2,

where U2 is arbitrary with appropriate size. The product Σ2Q_{Σ2‖X2} is unique and can be expressed as

Σ2Q_{Σ2‖X2} = Σ2Q_{X2} − Σ2(L_{X2}Σ2L_{X2})†Σ2Q_{X2}.   (2.4)
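The defining equation and the closed form (2.3) of Lemma 2.1 can be checked numerically. A minimal sketch assuming NumPy, with a nonsingular Σ1 for simplicity and the U1 = 0 solution:

```python
import numpy as np

rng = np.random.default_rng(4)
n, p = 5, 2
X1 = rng.standard_normal((n, p))
W = rng.standard_normal((n, n)); Sigma1 = W @ W.T    # positive definite here

P_X1 = X1 @ np.linalg.pinv(X1)
R_X1 = np.eye(n) - P_X1

# One solution G1 of  G1 [X1, Σ1 R_{X1}] = [X1, 0]  (the U1 = 0 choice)
M = np.hstack([X1, Sigma1 @ R_X1])
G1 = np.hstack([X1, np.zeros((n, n))]) @ np.linalg.pinv(M)
assert np.allclose(G1 @ X1, X1)                      # G1 X1 = X1
assert np.allclose(G1 @ Sigma1 @ R_X1, 0)            # G1 Σ1 R_{X1} = 0

# The (unique) product G1 Σ1 agrees with the closed form (2.3)
rhs = P_X1 @ Sigma1 - P_X1 @ Sigma1 @ np.linalg.pinv(R_X1 @ Sigma1 @ R_X1) @ Sigma1
assert np.allclose(G1 @ Sigma1, rhs)
```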

The following results on the OLSE, WLSE and BLUE of X1ΘX2 in (1.1) are well known.

Lemma 2.2 (Tian (2009)). Let µ1 be given as (1.1).
(a) The OLSE of X1ΘX2 in µ1 is unique and can be written as OLSE(X1ΘX2) = P_{X1} Y Q_{X2}, with

E(OLSE(X1ΘX2)) = X1ΘX2,  Cov(OLSE(X1ΘX2)) = (Q_{X2}Σ2Q_{X2}) ⊗ (P_{X1}Σ1P_{X1}).

(b) The general expression of the WLSE of X1ΘX2 in µ1 is WLSE(X1ΘX2) = P_{X1:V1} Y Q_{V2:X2}, with

E(WLSE(X1ΘX2)) = P_{X1:V1} X1ΘX2 Q_{V2:X2},
Cov(WLSE(X1ΘX2)) = (Q_{V2:X2}′Σ2Q_{V2:X2}) ⊗ (P_{X1:V1}Σ1P_{X1:V1}′),

where

P_{X:V} = X(X′VX)†X′V + XL_{VX}U1,  Q_{V:X} = VX′(XVX′)†X + U2R_{XV}X,   (2.5)

and U1 and U2 are arbitrary with appropriate sizes. In particular, WLSE(X1ΘX2) is unique if and only if R(X1′V1) = R(X1′) and R(X2V2) = R(X2). In such a case, P_{X1:V1}X1 = X1, X2Q_{V2:X2} = X2 and WLSE(X1ΘX2) is unbiased.

Lemma 2.3 (Song (2011)). If l1l2 = 1 in (1.1), then a statistic B1YB2 represents the BLUE of an estimable function X1ΘX2 if and only if

B1X1 = l1X1, X2B2 = l2X2, R(Σ1B1′) ⊆ R(X1), R(Σ2B2) ⊆ R(X2),

or

B1[X1, Σ1R_{X1}] = [l1X1, 0],  [X2; L_{X2}Σ2]B2 = [l2X2; 0].

Then BLUE(X1ΘX2) = P_{X1‖Σ1} Y Q_{Σ2‖X2}, with

E(BLUE(X1ΘX2)) = X1ΘX2,
Cov(BLUE(X1ΘX2)) = (Q_{Σ2‖X2}′Σ2Q_{Σ2‖X2}) ⊗ (P_{X1‖Σ1}Σ1P_{X1‖Σ1}′).   (2.6)

In particular, BLUE(X1ΘX2) is unique if vec(Y) ∈ R[X2′ ⊗ X1, Σ2 ⊗ Σ1].
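Lemmas 2.2(a) and 2.3 can be illustrated numerically: with Σ1 = I and Σ2 = I, the projectors of Lemma 2.1 reduce to P_{X1} and Q_{X2}, so BLUE and OLSE coincide. A minimal sketch assuming NumPy, using the U1 = 0 and U2 = 0 solutions:

```python
import numpy as np

rng = np.random.default_rng(5)
n, p, q, m = 5, 2, 2, 4
X1 = rng.standard_normal((n, p)); X2 = rng.standard_normal((q, m))
Theta = rng.standard_normal((p, q))
Y = X1 @ Theta @ X2 + 0.1 * rng.standard_normal((n, m))

P_X1 = X1 @ np.linalg.pinv(X1); R_X1 = np.eye(n) - P_X1
Q_X2 = np.linalg.pinv(X2) @ X2; L_X2 = np.eye(m) - Q_X2

olse = P_X1 @ Y @ Q_X2                     # OLSE(X1ΘX2) = P_{X1} Y Q_{X2}

# BLUE projectors for Σ1 = I_n, Σ2 = I_m (U1 = U2 = 0 choices from Lemma 2.1)
G1 = np.hstack([X1, np.zeros((n, n))]) @ np.linalg.pinv(np.hstack([X1, R_X1]))
G2 = np.linalg.pinv(np.vstack([X2, L_X2])) @ np.vstack([X2, np.zeros((m, m))])
blue = G1 @ Y @ G2

assert np.allclose(G1, P_X1)               # P_{X1||I} reduces to P_{X1}
assert np.allclose(G2, Q_X2)               # Q_{I||X2} reduces to Q_{X2}
assert np.allclose(blue, olse)             # BLUE = OLSE under identity covariance
assert np.allclose(P_X1 @ (X1 @ Theta @ X2) @ Q_X2, X1 @ Theta @ X2)  # mean reproduced
```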


Lemma 2.4 (Song (2011)). Let µ1 be defined as (1.1), and suppose that A1YA2 and B1YB2 are two unbiased linear estimators of X1ΘX2. Then A1YA2 = B1YB2 with probability 1, i.e., A1YA2 ∼P B1YB2, if and only if

A1X1 = l1X1, X2A2 = l2X2, l1l2 = 1,  B1X1 = q1X1, X2B2 = q2X2, q1q2 = 1,
A1Σ1 = p1B1Σ1, A2′Σ2 = p2B2′Σ2, p1p2 = 1,

i.e.,

A1[X1, Σ1] = B1[λ1X1, p1Σ1],  [X2; Σ2]A2 = [λ2X2; p2Σ2]B2,

where λ1 = l1/q1, λ2 = l2/q2 and λ1λ2 = 1.

The rank method is a powerful tool for finding general properties of matrix expressions. For two matrices A and B of the same size, A = B if and only if r(A − B) = 0, and two sets S1 and S2 of matrices of the same size have a common matrix if and only if min_{A∈S1, B∈S2} r(A − B) = 0. Formulas for the rank of A − B can thus be used to characterize the equality A = B. In order to simplify various matrix expressions involving generalized inverses, we need the following lemmas.

Lemma 2.5 (Marsaglia and Styan (1974)). Let A ∈ R^{m×n}, B ∈ R^{m×k} and C ∈ R^{l×n}. Then
(a) r[A, B] = r(A) + r(R_A B) = r(B) + r(R_B A);
(b) r[A; C] = r(A) + r(CL_A) = r(C) + r(AL_C);
(c) r[A, B; C, 0] = r(B) + r(C) + r(R_B A L_C).

Lemma 2.6 (Tian and Takane (2007)). Let A ∈ R^{m×n}, Bi ∈ R^{m×ki} and Ci ∈ R^{li×n}, i = 1, 2, be given with R(B1) ⊆ R(B2) and R(C2′) ⊆ R(C1′). Then

max_{X1,X2} r(A − B1X1C1 − B2X2C2) = min{ r[A, B2], r[A; C1], r[A, B1; C2, 0] },
min_{X1,X2} r(A − B1X1C1 − B2X2C2) = r[A, B2] + r[A; C1] + r[A, B1; C2, 0] − r[A, B1; C1, 0] − r[A, B2; C2, 0].

Now we give the main result of this section.

Theorem 2.7. Suppose that µ1 is given as (1.1) and λ1λ2 = p1p2 = 1. Then there exists a WLSE(X1ΘX2) such that WLSE(X1ΘX2) ∼P OLSE(X1ΘX2) if and only if

r[(1 − λ1)X1′V1X1, X1′V1(I − p1P_{X1})Σ1] + r[λ1X1, p1P_{X1}Σ1; X1, Σ1] = r[λ1V1X1, p1V1P_{X1}Σ1; X1, Σ1]   (2.7)

and

r[(1 − λ2)X2V2X2′; Σ2(p2Q_{X2} − I)V2X2′] + r[X2, λ2X2; Σ2, p2Σ2Q_{X2}] = r[X2, λ2X2V2; Σ2, p2Σ2Q_{X2}V2].   (2.8)

Proof. By Lemmas 2.1, 2.2 and 2.4, WLSE(X1ΘX2) ∼P OLSE(X1ΘX2) if and only if there exist P_{X1:V1} and Q_{V2:X2} such that

P_{X1:V1}[X1, Σ1] = P_{X1}[λ1X1, p1Σ1],  [X2; Σ2]Q_{V2:X2} = [λ2X2; p2Σ2]Q_{X2}.

Assume

M1 = [X1, Σ1], M2 = [λ1X1, p1P_{X1}Σ1], N1 = [X2; Σ2], N2 = [λ2X2; p2Σ2Q_{X2}].

Then by (2.5),

P_{X1:V1}M1 − M2 = X1(X1′V1X1)†X1′V1M1 − M2 + X1L_{V1X1}U1M1,

where U1 is arbitrary. By Lemma 2.6, we get

min_{P_{X1:V1}} r(P_{X1:V1}M1 − M2)
= r[X1(X1′V1X1)†X1′V1M1 − M2, X1L_{V1X1}] + r[X1(X1′V1X1)†X1′V1M1 − M2; M1] − r[X1(X1′V1X1)†X1′V1M1 − M2, X1L_{V1X1}; M1, 0].   (2.9)

By Lemma 2.5 and applying elementary operations to the block matrices in (2.9), we obtain

r[X1(X1′V1X1)†X1′V1M1 − M2, X1L_{V1X1}] = r[(1 − λ1)X1′V1X1, X1′V1(I − p1P_{X1})Σ1] + r(X1) − r(V1X1),   (2.10)

r[X1(X1′V1X1)†X1′V1M1 − M2; M1] = r[M2; M1],   (2.11)

and

r[X1(X1′V1X1)†X1′V1M1 − M2, X1L_{V1X1}; M1, 0] = r[V1M2; M1] + r(X1) − r(V1X1).   (2.12)

Substituting (2.10)-(2.12) into (2.9),

P_{X1:V1}M1 = M2 ⇔ min_{P_{X1:V1}} r(P_{X1:V1}M1 − M2) = 0
⇔ r[(1 − λ1)X1′V1X1, X1′V1(I − p1P_{X1})Σ1] + r[M2; M1] − r[V1M2; M1] = 0.

Therefore, we obtain (2.7). Similarly,

N1Q_{V2:X2} = N2 ⇔ min_{Q_{V2:X2}} r(N1Q_{V2:X2} − N2) = 0
⇔ r[(1 − λ2)X2V2X2′; Σ2(p2Q_{X2} − I)V2X2′] + r[N1, N2] − r[N1, N2V2] = 0.   (2.13)

Hence, we get (2.8). ∎

Corollary 2.8. Let µ1 be given as (1.1) and λi = pi = 1, i = 1, 2.
(1) There exists a WLSE(X1ΘX2) such that WLSE(X1ΘX2) ∼P OLSE(X1ΘX2) if and only if

P_{X1}V1R_{X1}Σ1 = 0 and Σ2L_{X2}V2Q_{X2} = 0.   (2.14)

(2) Assume

r[X1, Σ1] = n,  r[X2; Σ2] = m.   (2.15)

Then there exists a WLSE(X1ΘX2) such that WLSE(X1ΘX2) ∼P OLSE(X1ΘX2) if and only if

P_{X1}V1 = V1P_{X1} and Q_{X2}V2 = V2Q_{X2}.   (2.16)

Proof. (1) If λi = pi = 1, i = 1, 2, then

min_{P_{X1:V1}} r(P_{X1:V1}M1 − M2) = r( V1X1(X1′V1X1)†X1′V1Σ1 − V1Σ1 ) = r( P_{X1}( V1X1(X1′V1X1)†X1′V1Σ1 − V1Σ1 ) ) = r(P_{X1}V1R_{X1}Σ1).

Similarly,

min_{Q_{V2:X2}} r(N1Q_{V2:X2} − N2) = r(Σ2L_{X2}V2Q_{X2}).

Then we get (2.14).
(2) The rank equalities in (2.15) are equivalent to r(R_{X1}Σ1) = r(R_{X1}) and r(Σ2L_{X2}) = r(L_{X2}). Therefore,

P_{X1}V1R_{X1}Σ1 = 0 ⇔ P_{X1}V1R_{X1} = 0 ⇔ P_{X1}V1 = P_{X1}V1P_{X1},
Σ2L_{X2}V2Q_{X2} = 0 ⇔ L_{X2}V2Q_{X2} = 0 ⇔ V2Q_{X2} = Q_{X2}V2Q_{X2}.

Since P_{X1}V1P_{X1} and Q_{X2}V2Q_{X2} are symmetric, (2.16) holds. ∎


Theorem 2.9. Let µ1 be given as (1.1) and λ1λ2 = p1p2 = 1. Then there exists a BLUE(X1ΘX2) such that BLUE(X1ΘX2) ∼P OLSE(X1ΘX2) if and only if λ1 = λ2 = 1,

(1 − p1)P_{X1}Σ1 = p1P_{X1}Σ1(R_{X1}Σ1R_{X1})†Σ1,  (1 − p2)Q_{X2}Σ2 = p2Σ2(L_{X2}Σ2L_{X2})†Σ2Q_{X2}.   (2.17)

Proof. By Lemma 2.3 and Lemma 2.4, BLUE(X1ΘX2) ∼P OLSE(X1ΘX2) if and only if

P_{X1}[X1, Σ1] = P_{X1‖Σ1}[λ1X1, p1Σ1] and [X2; Σ2]Q_{X2} = [λ2X2; p2Σ2]Q_{Σ2‖X2}.

According to (2.3) and (2.4), we can get λ1 = λ2 = 1 and (2.17). ∎

Corollary 2.10. Let µ1 be given as (1.1) and pi = 1, i = 1, 2. Then the following statements are equivalent:
(a) there is a BLUE(X1ΘX2) such that BLUE(X1ΘX2) ∼P OLSE(X1ΘX2);
(b) there is a BLUE(X1ΘX2) such that BLUE(X1ΘX2) ∼C OLSE(X1ΘX2);
(c) P_{X1}Σ1R_{X1} = 0 and Q_{X2}Σ2L_{X2} = 0, i.e., R(Σ1X1) ⊆ R(X1) and R(Σ2X2′) ⊆ R(X2′);
(d) P_{X1}Σ1 = Σ1P_{X1} and Q_{X2}Σ2 = Σ2Q_{X2}.

Proof. If pi = 1, i = 1, 2, then it follows from Theorem 2.9 that BLUE(X1ΘX2) ∼P OLSE(X1ΘX2) is equivalent to the vanishing of

P_{X1}Σ1 − P_{X1‖Σ1}Σ1 = P_{X1}Σ1(R_{X1}Σ1R_{X1})†Σ1,   (2.18)
Σ2Q_{X2} − Σ2Q_{Σ2‖X2} = Σ2(L_{X2}Σ2L_{X2})†Σ2Q_{X2}.   (2.19)

Moreover, it is easy to verify that

r(P_{X1}Σ1(R_{X1}Σ1R_{X1})†Σ1) = r(P_{X1}Σ1(R_{X1}Σ1R_{X1})†Σ1P_{X1}) = r(P_{X1}Σ1R_{X1}),
r(Σ2(L_{X2}Σ2L_{X2})†Σ2Q_{X2}) = r(Q_{X2}Σ2(L_{X2}Σ2L_{X2})†Σ2Q_{X2}) = r(Q_{X2}Σ2L_{X2}).

The equivalence of (a), (c) and (d) follows from these results. For (b), under (c),

Cov(BLUE(X1ΘX2))
= (Q_{X2}Σ2Q_{X2} − Q_{X2}Σ2(L_{X2}Σ2L_{X2})†Σ2Q_{X2}) ⊗ (P_{X1}Σ1P_{X1} − P_{X1}Σ1(R_{X1}Σ1R_{X1})†Σ1P_{X1})
= (Q_{X2}Σ2Q_{X2}) ⊗ (P_{X1}Σ1P_{X1}) = Cov(OLSE(X1ΘX2)).

Conversely, if BLUE(X1ΘX2) ∼C OLSE(X1ΘX2), then

Q_{X2}Σ2Q_{X2} = λ1( Q_{X2}Σ2 − Q_{X2}Σ2(L_{X2}Σ2L_{X2})†Σ2 )Q_{X2},
P_{X1}Σ1P_{X1} = λ2( P_{X1}Σ1 − P_{X1}Σ1(R_{X1}Σ1R_{X1})†Σ1 )P_{X1},


and hence

(1 − λ1)Q_{X2}Σ2Q_{X2} = −λ1Q_{X2}Σ2(L_{X2}Σ2L_{X2})†Σ2Q_{X2},   (2.20)
(1 − λ2)P_{X1}Σ1P_{X1} = −λ2P_{X1}Σ1(R_{X1}Σ1R_{X1})†Σ1P_{X1}.   (2.21)

Note that λ1, λ2 > 0 with λ1λ2 = 1, and that Q_{X2}Σ2Q_{X2}, P_{X1}Σ1P_{X1}, Q_{X2}Σ2(L_{X2}Σ2L_{X2})†Σ2Q_{X2} and P_{X1}Σ1(R_{X1}Σ1R_{X1})†Σ1P_{X1} are all nonnegative definite; then (2.20) and (2.21) hold if and only if

P_{X1}Σ1(R_{X1}Σ1R_{X1})†Σ1P_{X1} = 0,  Q_{X2}Σ2(L_{X2}Σ2L_{X2})†Σ2Q_{X2} = 0.

Thus, by (2.18) and (2.19), we can get (a), (c) and (d). ∎

Theorem 2.11. Let µ1 be given as (1.1) and λ1λ2 = p1p2 = 1. There exist WLSE(X1ΘX2) and BLUE(X1ΘX2) such that WLSE(X1ΘX2) ∼P BLUE(X1ΘX2) if and only if

r[(1 − λ1)X1′V1X1, X1′V1(I − p1P_{X1‖Σ1})Σ1] + r[λ1X1, p1P_{X1‖Σ1}Σ1; X1, Σ1] = r[λ1V1X1, p1V1P_{X1‖Σ1}Σ1; X1, Σ1]   (2.22)

and

r[(1 − λ2)X2V2X2′; Σ2(I − p2Q_{Σ2‖X2})V2X2′] + r[X2, λ2X2; Σ2, p2Σ2Q_{Σ2‖X2}] = r[X2, λ2X2V2; Σ2, p2Σ2Q_{Σ2‖X2}V2].   (2.23)

P

Proof. There exist BLUE(X1ΘX2) and WLSE(X1ΘX2) such that BLUE(X1ΘX2) ∼P WLSE(X1ΘX2) if and only if

P_{X1:V1}[X1, Σ1] = P_{X1‖Σ1}[λ1X1, p1Σ1],  [X2; Σ2]Q_{V2:X2} = [λ2X2; p2Σ2]Q_{Σ2‖X2}.

Assume

M1 = [X1, Σ1], M2 = [λ1X1, p1P_{X1‖Σ1}Σ1], N1 = [X2; Σ2], N2 = [λ2X2; p2Σ2Q_{Σ2‖X2}].

Then it follows from Lemma 2.6 that

min_{P_{X1:V1}} r(P_{X1:V1}M1 − M2)
= min_{U1} r( X1(X1′V1X1)†X1′V1M1 − M2 + (X1 − X1(V1X1)†(V1X1))U1M1 )
= r[X1(X1′V1X1)†X1′V1M1 − M2, X1L_{V1X1}] + r[X1(X1′V1X1)†X1′V1M1 − M2; M1] − r[X1(X1′V1X1)†X1′V1M1 − M2, X1L_{V1X1}; M1, 0].   (2.24)


By Lemma 2.5 and applying elementary block matrix operations in (2.24), we get

r[X1(X1′V1X1)†X1′V1M1 − M2, X1L_{V1X1}] = r[(1 − λ1)X1′V1X1, X1′V1(I − p1P_{X1‖Σ1})Σ1] + r(X1) − r(V1X1),   (2.25)

r[X1(X1′V1X1)†X1′V1M1 − M2; M1] = r[M2; M1],   (2.26)

and

r[X1(X1′V1X1)†X1′V1M1 − M2, X1L_{V1X1}; M1, 0] = r[V1M2; M1] + r(X1) − r(V1X1).   (2.27)

Substituting (2.25)-(2.27) into (2.24), we have

min_{P_{X1:V1}} r(P_{X1:V1}M1 − M2) = r[(1 − λ1)X1′V1X1, X1′V1(I − p1P_{X1‖Σ1})Σ1] + r[M2; M1] − r[V1M2; M1].

Setting the right-hand side equal to zero, we get (2.22). Similarly, (2.23) follows from

[X2; Σ2]Q_{V2:X2} = [λ2X2; p2Σ2]Q_{Σ2‖X2}
⇔ min_{Q_{V2:X2}} r( [X2; Σ2]Q_{V2:X2} − [λ2X2; p2Σ2]Q_{Σ2‖X2} ) = 0
⇔ r[(1 − λ2)X2V2X2′; Σ2(I − p2Q_{Σ2‖X2})V2X2′] + r[N1, N2] − r[N1, N2V2] = 0. ∎


Corollary 2.12. Let µ1 be given as (1.1) and assume λi = pi = 1, i = 1, 2. There exist WLSE(X1ΘX2) and BLUE(X1ΘX2) such that WLSE(X1ΘX2) ∼P BLUE(X1ΘX2) if and only if

P_{X1}V1Σ1R_{X1} = 0,  L_{X2}Σ2V2Q_{X2} = 0.   (2.28)

Proof. Note that

V1X1(X1′V1X1)†X1′V1X1 = V1X1.

Hence

V1X1(X1′V1X1)†X1′V1M1 − V1M2 = [0, V1X1(X1′V1X1)†X1′V1Σ1 − V1P_{X1‖Σ1}Σ1],

and it can be verified that

r( V1X1(X1′V1X1)†X1′V1Σ1 − V1P_{X1‖Σ1}Σ1 ) = r(P_{X1}V1Σ1R_{X1}).

Similarly,

min_{Q_{V2:X2}} r( [X2; Σ2]Q_{V2:X2} − [λ2X2; p2Σ2]Q_{Σ2‖X2} ) = r(L_{X2}Σ2V2Q_{X2}).

Then we can get (2.28). ∎

3. Equalities of OLSE, WLSE and BLUE between (1.1) and (1.3)

We begin this section with the following lemma.

Lemma 3.1 (Mitra (1984)). Let A1 ∈ R^{m×n}, A2 ∈ R^{r×s}, B1 ∈ R^{m×r} and B2 ∈ R^{n×s} be known and X ∈ R^{n×r} unknown. Suppose that the system of matrix equations A1X = B1, XA2 = B2 is consistent. Then its general solution can be expressed as

X = A1†B1 + L_{A1}B2A2† + L_{A1}Y R_{A2},

where Y ∈ R^{n×r} is arbitrary.

There are two restrictions on the parameter Θ in (1.3), and by Lemma 3.1

Θ = A1†B1 + L_{A1}B2A2† + L_{A1}T R_{A2},

where T is an arbitrary matrix of appropriate dimension. Then the restricted growth curve model (1.3) can be turned into the unrestricted model

Y11 = X1L_{A1}TR_{A2}X2 + ε,  E(ε) = 0,  Cov[vec(ε)] = σ²(Σ2 ⊗ Σ1),   (3.1)

where Y11 = Y − X1(A1†B1 + L_{A1}B2A2†)X2 and T is the matrix of unknown parameters to be estimated. Now we give the definitions of a linear unbiased estimator and the best linear unbiased estimator in (1.3).

Definition 3.1 (Song and Wang (2014)). A parametric function K1ΘK2 is said to be estimable if there exist matrices G, F and X0 such that

E(GYF + X0) = K1ΘK2 for all Θ satisfying (1.2).   (3.2)

If (3.2) holds, GYF + X0 is called a linear unbiased estimator of K1ΘK2.


Definition 3.2 (Song and Wang (2014)). Let Q be the set of all linear unbiased estimators of K1ΘK2. A linear function G0YF0 + X0 is called the BLUE of K1ΘK2 if G0YF0 + X0 ∈ Q and, for any GYF + X ∈ Q,

Cov(G0YF0 + X0) ≤ Cov(GYF + X).

Analogously, the weighted least-squares estimator of Θ is defined as

WLSE(Θ) = A1†B1 + L_{A1}B2A2† + L_{A1}WLSE(T)R_{A2},

where WLSE(T) is obtained by minimizing the trace of

( Y − X1(A1†B1 + L_{A1}B2A2† + L_{A1}TR_{A2})X2 )′ V1 ( Y − X1(A1†B1 + L_{A1}B2A2† + L_{A1}TR_{A2})X2 ) V2.

The WLSE of X1ΘX2 under the model (1.3) is then defined as WLSE(X1ΘX2) = X1WLSE(Θ)X2. We can now state the following results.

Theorem 3.2. The general expression of the WLSE(X1ΘX2) under the model (1.3) is

WLSE(X1ΘX2) = X1(A1†B1 + L_{A1}B2A2†)X2 − P_{X1L_{A1}:V1}( X1(A1†B1 + L_{A1}B2A2†)X2 − Y )Q_{R_{A2}X2:V2}.

Proof. The parameter matrix Θ satisfies the pair of consistent linear matrix equations (1.2), and the linear model (1.3) can be expressed as (3.1). The WLSE of X1L_{A1}TR_{A2}X2 under the model (3.1) can be expressed as

WLSE(X1L_{A1}TR_{A2}X2) = P_{X1L_{A1}:V1} Y11 Q_{R_{A2}X2:V2}.

Thus we can get

WLSE(X1ΘX2) = X1WLSE(Θ)X2 = X1(A1†B1 + L_{A1}B2A2†)X2 + P_{X1L_{A1}:V1} Y11 Q_{R_{A2}X2:V2}
= X1(A1†B1 + L_{A1}B2A2†)X2 + P_{X1L_{A1}:V1}( Y − X1(A1†B1 + L_{A1}B2A2†)X2 )Q_{R_{A2}X2:V2}. ∎

Theorem 3.3. Let µ2 be given as (1.3). The unique BLUE of the estimable function X1ΘX2 in µ2 is

BLUE(X1ΘX2) = X1(A1†B1 + L_{A1}B2A2†)X2 + P_{X1L_{A1}‖Σ1}( Y − X1(A1†B1 + L_{A1}B2A2†)X2 )Q_{Σ2‖R_{A2}X2}.   (3.3)

Proof. It follows from the proof of Theorem 3.2 that

BLUE(X1ΘX2) = X1A1†B1X2 + X1L_{A1}B2A2†X2 + BLUE(X1L_{A1}TR_{A2}X2).   (3.4)


By Lemma 2.3, we have

BLUE(X1L_{A1}TR_{A2}X2) = P_{X1L_{A1}‖Σ1} Y11 Q_{Σ2‖R_{A2}X2} = P_{X1L_{A1}‖Σ1}( Y − X1(A1†B1 + L_{A1}B2A2†)X2 )Q_{Σ2‖R_{A2}X2}.   (3.5)

Combining (3.4) with (3.5), we get (3.3). ∎

Theorem 3.4. Let µ1 and µ2 be given as (1.1) and (1.3). There exist WLSE_{µ2}(X1ΘX2) and WLSE_{µ1}(X1ΘX2) such that WLSE_{µ2}(X1ΘX2) ∼P WLSE_{µ1}(X1ΘX2) if and only if

V1X1(X1′V1X1)†X1′V1[λ1X1, p1Σ1] = V1X1L_{A1}((X1L_{A1})′V1X1L_{A1})†(X1L_{A1})′V1[X1, Σ1]   (3.6)

and

[λ2X2; p2Σ2]V2X2′(X2V2X2′)†X2V2 = [X2; Σ2]V2(R_{A2}X2)′(R_{A2}X2V2(R_{A2}X2)′)†R_{A2}X2V2.   (3.7)

Proof. Assume

M1 = [X1, Σ1], M2 = [λ1X1, p1Σ1], N1 = [X2; Σ2], N2 = [λ2X2; p2Σ2],
X̃ = X1(X1′V1X1)†X1′V1,  X̃_L = X1L_{A1}((X1L_{A1})′V1X1L_{A1})†(X1L_{A1})′V1,  W = X̃_L M1 − X̃ M2.

There exist WLSE_{µ2}(X1ΘX2) and WLSE_{µ1}(X1ΘX2) such that WLSE_{µ2}(X1ΘX2) ∼P WLSE_{µ1}(X1ΘX2) if and only if

P_{X1L_{A1}:V1}[X1, Σ1] = P_{X1:V1}[λ1X1, p1Σ1],  [X2; Σ2]Q_{R_{A2}X2:V2} = [λ2X2; p2Σ2]Q_{V2:X2}.

By Lemma 2.6, we have

min r( P_{X1L_{A1}:V1}[X1, Σ1] − P_{X1:V1}[λ1X1, p1Σ1] ) = min_{U1,U2} r( W + X1L_{A1}L_{V1X1L_{A1}}U1M1 − X1L_{V1X1}U2M2 ),

and, again by Lemma 2.6, this minimum can be expressed through the ranks of block matrices formed from W, X1L_{A1}L_{V1X1L_{A1}}, X1L_{V1X1}, M1 and M2.   (3.8)


By Lemma 2.5, applying elementary block matrix operations to these block matrices reduces each of the ranks in (3.8) to an explicit expression in V1, X1, X1L_{A1}, Σ1, M1 and M2; these evaluations constitute (3.9)-(3.16).

Taking (3.9)-(3.16) into (3.8), we can get (3.6). Similarly, we can get (3.7). ∎

Theorem 3.5. Let µ1 and µ2 be given as (1.1) and (1.3). There exist BLUE_{µ2}(X1ΘX2) and BLUE_{µ1}(X1ΘX2) such that BLUE_{µ2}(X1ΘX2) ∼P BLUE_{µ1}(X1ΘX2) if and only if

[X1L_{A1}, 0][X1L_{A1}, Σ1R_{X1L_{A1}}]†[X1, Σ1] = [X1, 0][X1, Σ1R_{X1}]†[λ1X1, p1Σ1]   (3.17)


and

[X2; Σ2][R_{A2}X2; L_{R_{A2}X2}Σ2]†[R_{A2}X2; 0] = [λ2X2; p2Σ2][X2; L_{X2}Σ2]†[X2; 0].   (3.18)

Proof. Suppose that

M1 = [X1, Σ1], M2 = [λ1X1, p1Σ1], N1 = [X2; Σ2], N2 = [λ2X2; p2Σ2],
S1 = [X1L_{A1}, 0], T1 = [X1, 0], S = [X1L_{A1}, Σ1R_{X1L_{A1}}], T = [X1, Σ1R_{X1}].

There exist BLUE_{µ2}(X1ΘX2) and BLUE_{µ1}(X1ΘX2) such that BLUE_{µ2}(X1ΘX2) ∼P BLUE_{µ1}(X1ΘX2) if and only if

P_{X1L_{A1}‖Σ1}M1 − P_{X1‖Σ1}M2 = 0 and N1Q_{Σ2‖R_{A2}X2} − N2Q_{Σ2‖X2} = 0.

It follows from Lemma 2.6 that

min r( P_{X1L_{A1}‖Σ1}M1 − P_{X1‖Σ1}M2 ) = min_{U1,U2} r( (S1S† + U1R_S)M1 − (T1T† + U2R_T)M2 )
= r( S1S†M1 − T1T†M2 ) = r( [X1L_{A1}, 0][X1L_{A1}, Σ1R_{X1L_{A1}}]†M1 − [X1, 0][X1, Σ1R_{X1}]†M2 ).   (3.19)

Setting the right-hand side of (3.19) equal to zero, we get (3.17). Similarly,

min r( N1Q_{Σ2‖R_{A2}X2} − N2Q_{Σ2‖X2} )
= r( [X2; Σ2][R_{A2}X2; L_{R_{A2}X2}Σ2]†[R_{A2}X2; 0] − [λ2X2; p2Σ2][X2; L_{X2}Σ2]†[X2; 0] ).   (3.20)

Setting the right-hand side of (3.20) equal to zero, we get (3.18). ∎

(3.19)

(3.20) ¤

Remark 3.1. Let µ1 and µ2 be given as (1.1) and (1.3). Then (1) If W LSEµ2 (X1 ΘX2 ) and BLU Eµ2 (X1 ΘX2 ) exist, then we can obtain the necessary and sufficient conditions for P

P

P

P

BLU Eµ2 (X1 ΘX2 ) ∼ W LSEµ1 (X1 ΘX2 ), BLU Eµ2 (X1 ΘX2 ) ∼ OLSEµ1 (X1 ΘX2 ), W LSEµ2 (X1 ΘX2 ) ∼ BLU Eµ1 (X1 ΘX2 ), W LSEµ2 (X1 ΘX2 ) ∼ OLSEµ1 (X1 ΘX2 ), respectively. (2) Assume λi = pi = 1, i = 1, 2, we can also get necessary and sufficient conditions for P

P

BLU Eµ2 (X1 ΘX2 ) ∼ BLU Eµ1 (X1 ΘX2 ), W LSEµ2 (X1 ΘX2 ) ∼ W LSEµ1 (X1 ΘX2 ), respectively.


4. Conclusion

In this paper, we have derived a number of equalities among the ordinary least-squares estimators, the weighted least-squares estimators and the best linear unbiased estimator of some parametric functions under (1.1). Moreover, we have also given formulas relating the weighted least-squares estimators, the ordinary least-squares estimators and the best linear unbiased estimator of some parametric functions between (1.1) and (1.3).

Acknowledgement

The authors express their thanks to the referees for their constructive and valuable comments and suggestions, which greatly improved the original manuscript of this paper.

References

[1] Anna, S., 2013. Simultaneous choice of time points and the block design in the growth curve model. Statist. Papers 54, 413-425.
[2] Baksalary, J.K., Puntanen, S., 1989. Weighted-least-squares estimation in the general Gauss-Markov model. In: Dodge, Y. (Ed.), Statistical Data Analysis and Inference. Elsevier, Amsterdam, 355-368.
[3] Baksalary, J.K., Rao, C.R., Markiewicz, A., 1992. A study of the influence of the natural restrictions on estimation problems in the singular Gauss-Markov model. J. Statist. Plann. Inference 31, 335-351.
[4] Kshirsagar, A.M., Smith, W.B., 1995. Growth Curves. Marcel Dekker, New York.
[5] Khatri, C.G., 1966. A note on a MANOVA model applied to problems in growth curve. Ann. Inst. Statist. Math. 18, 75-86.
[6] Groß, J., 2004. The general Gauss-Markov model with possibly singular dispersion matrix. Statist. Papers 45, 311-336.
[7] Mitra, S.K., 1984. The matrix equations AX = C, XB = D. Linear Algebra Appl. 59, 171-181.
[8] Marsaglia, G., Styan, G.P.H., 1974. Equalities and inequalities for ranks of matrices. Linear and Multilinear Algebra 2, 269-292.
[9] Pan, J., Fang, K.T., 2002. Growth Curve Models with Statistical Diagnostics. Springer, New York.
[10] Potthoff, R., Roy, S.N., 1964. A generalized multivariate analysis of variance model useful especially for growth curve problems. Biometrika 51, 313-326.
[11] Puntanen, S., Styan, G.P.H., 1989. The equality of the ordinary least squares estimator and the best linear unbiased estimator. Amer. Statist. 43, 153-164.
[12] Rao, C.R., 1965. The theory of least squares when the parameters are stochastic and its application to the analysis of growth curves. Biometrika 52, 447-458.
[13] Rao, C.R., 1966. Covariance adjustment and related problems in multivariate analysis. In: Krishnaiah, P.R. (Ed.), Multivariate Analysis. Academic Press, New York, 87-103.
[14] Rao, C.R., 1973. Representations of best linear unbiased estimators in the Gauss-Markov model with a singular dispersion matrix. J. Multivariate Anal. 3, 276-292.
[15] Seber, G.A.F., 1985. Multivariate Observations. Wiley, New York.
[16] Song, G.J., 2011. On the best linear unbiased estimator and the linear sufficiency of a general growth curve model. J. Statist. Plann. Inference 141, 2700-2710.
[17] Song, G.J., Wang, Q.W., 2014. On the weighted least-squares, the ordinary least-squares and the best linear unbiased estimators under a restricted growth curve model. Statist. Papers 55, 375-392.
[18] Tian, Y., Wiens, D.P., 2006. On equality and proportionality of ordinary least squares, weighted least squares and best linear unbiased estimators in the general linear model. Statist. Probab. Lett. 76, 1265-1272.
[19] Tian, Y., Takane, Y., 2007. Some algebraic and statistical properties of estimators under a general growth curve model. Electron. J. Linear Algebra 16, 187-203.
[20] Tian, Y., Takane, Y., 2009. On consistency, natural restrictions and estimability under classical and extended growth curve models. J. Statist. Plann. Inference 139, 2445-2458.
[21] Woolson, R.F., Leeper, J.D., 1980. Growth curve analysis of complete and incomplete longitudinal data. Comm. Statist. Theory Methods 9, 1491-1513.