# A note on strong approximation for quantile processes of strong mixing sequences


Statistics & Probability Letters 30 (1996) 1-7

Hao Yu

Department of Statistical and Actuarial Sciences, The University of Western Ontario, London, Ont., Canada N6A 5B7

Received July 1995

Abstract

In this note we give a short proof that a quantile process based on strong mixing sequences can be approximated almost surely by a Gaussian process. Our result improves Theorem 2 of Fotopoulos et al. (1994), with a lighter strong mixing decay rate and wider intervals.

AMS classification: Primary 60F15; secondary 62G30

Keywords: Empirical process; Quantile process; Strong approximation; Stationarity; Strong mixing

## 1. Introduction

Let $\{X_n,\ n\ge 1\}$ be a sequence of random variables with common distribution function $F$ and let $Q$ be the quantile function of $F$, defined by

$$Q(s) = F^{-1}(s) = \inf\{x : F(x)\ge s\}, \qquad 0 < s\le 1.$$

Let $F_n$ denote the empirical distribution function of $X_1,\ldots,X_n$ and let $Q_n(s) = F_n^{-1}(s)$ be the corresponding empirical quantile function. The $n$th general quantile process $\gamma_n(s)$ is defined as

$$\gamma_n(s) = n^{1/2}(Q(s) - Q_n(s)), \qquad 0\le s\le 1.$$

Assume that $F$ is continuous. Then $U_n = F(X_n)$ for all $n\ge 1$ are uniform-$[0,1]$ distributed random variables, and thus the induced uniform empirical distribution function of $U_1,\ldots,U_n$ is defined by

$$E_n(s) = \frac{1}{n}\sum_{i=1}^{n} I(U_i\le s) = F_n(Q(s)), \qquad 0\le s\le 1. \tag{1.1}$$


The similarly induced uniform empirical quantile function is given by $G_n(s) = E_n^{-1}(s) = F(Q_n(s))$, $0\le s\le 1$, and its $n$th uniform quantile process is defined by

$$\{u_n(s),\ 0\le s\le 1\} = \{n^{1/2}(s - G_n(s)),\ 0\le s\le 1\}. \tag{1.2}$$

The so-called normalized general quantile process is defined by

$$\rho_n(s) = f(Q(s))\,\gamma_n(s), \qquad 0\le s\le 1, \tag{1.3}$$

where $f = F'$ is the density function of $F$. In this note we study how the general quantile process $\rho_n(s)$ can be approximated almost surely by a Gaussian process when the underlying observations form a strong mixing sequence. The purpose of this note is twofold. First, by using a very simple method, we find that the sup-norm distance of the general quantile process $\rho_n(s)$ from its corresponding uniform quantile process $u_n(s)$ converges a.s. to zero under the so-called Csörgő-Révész conditions. This enables us to obtain an a.s. approximation for $\rho_n(s)$ by a Kiefer process. Our result (cf. Theorem 2.3 and Remark 2.2) improves Theorem 2 of Fotopoulos et al. (1994), with a lighter strong mixing decay rate and wider intervals. Secondly, we point out that Theorems 1 and 3 of Fotopoulos et al. (1994) are not correct as stated unless some further conditions are imposed.
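The processes in (1.1)-(1.3) are straightforward to evaluate on simulated data. The following sketch is ours, not part of the paper: it uses an i.i.d. exponential sample for concreteness (the results below concern strong mixing sequences), where $Q(s) = -\log(1-s)$ and $f(Q(s)) = 1-s$, and computes $E_n$, $G_n$, the empirical process $\alpha_n(s) = n^{1/2}(E_n(s)-s)$, $u_n$ and $\rho_n$ on a grid:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000
x = rng.exponential(size=n)

F = lambda t: 1.0 - np.exp(-t)   # exponential distribution function
Q = lambda s: -np.log1p(-s)      # its quantile function Q(s) = -log(1-s)

u = np.sort(F(x))                # U_i = F(X_i): sorted uniform-[0,1] values
s = np.linspace(0.01, 0.99, 99)  # evaluation grid inside (0, 1)

# E_n(s) of (1.1): fraction of U_i <= s
E = np.searchsorted(u, s, side="right") / n
# G_n(s) = E_n^{-1}(s) = U_(ceil(ns)); the small guard avoids float-rounding of n*s
k = np.ceil(n * s - 1e-9).astype(int)
G = u[k - 1]

alpha_n = np.sqrt(n) * (E - s)   # uniform empirical process
u_n = np.sqrt(n) * (s - G)       # uniform quantile process of (1.2)
# Since F is continuous, Q_n(s) = Q(G_n(s)), so gamma_n(s) = sqrt(n)(Q(s) - Q(G_n(s)))
rho_n = (1.0 - s) * np.sqrt(n) * (Q(s) - Q(G))   # rho_n(s) = f(Q(s)) gamma_n(s), cf. (1.3)
```

A useful sanity check is the inverse relation $|E_n(G_n(s)) - s| \le 1/n$, which holds on the grid by construction.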

## 2. Results

We first introduce the following dependence notion. Let $\{X_n,\ n\ge 1\}$ be a sequence of real-valued random variables on $(\Omega,\mathscr{F},P)$ and let $\mathscr{F}_n^m = \sigma(X_i,\ n\le i\le m)$ be the $\sigma$-algebra generated by the indicated random variables. Define

$$\alpha(n) = \sup_{m\ge 1}\ \sup_{A\in\mathscr{F}_1^m,\ B\in\mathscr{F}_{m+n}^{\infty}} |P(A\cap B) - P(A)P(B)|.$$

The sequence $\{X_n,\ n\ge 1\}$ is said to be $\alpha$-mixing (strong mixing) if $\alpha(n)\to 0$ as $n\to\infty$.

We now restate below a known strong approximation by Philipp and Pinzur (1980) for the uniform empirical process $\alpha_n(s) = n^{1/2}(E_n(s)-s)$ of $\alpha$-mixing sequences. This strong approximation plays a central role in our proofs for obtaining a strong approximation for the general quantile process $\rho_n(s)$. Let $g_n(s) = I(U_n\le s) - s$ for $n\ge 1$ and define for $0\le s,s'\le 1$

$$\Gamma(s,s') = E\,g_1(s)g_1(s') + \sum_{n=2}^{\infty}\bigl[E\,g_1(s)g_n(s') + E\,g_1(s')g_n(s)\bigr]. \tag{2.1}$$

A separable Gaussian process $\{K(s,t);\ 0\le s\le 1,\ t\ge 0\}$ is called a Kiefer process if it satisfies $K(s,0) = K(0,t) = K(1,t) = 0$, $EK(s,t) = 0$, and has covariance function

$$E\,K(s,t)K(s',t') = \min(t,t')\,\Gamma(s,s'), \qquad t,t'\ge 0,\ 0\le s,s'\le 1.$$

Theorem A. Let $\{U_n,\ n\ge 1\}$ be a stationary $\alpha$-mixing sequence of uniform-$[0,1]$ random variables satisfying

$$\alpha(n) = O(n^{-5-\varepsilon}) \qquad \text{for some } 0<\varepsilon\le\tfrac14. \tag{2.2}$$

Let $\Gamma(s,s')$ be defined by (2.1). Then without changing its distribution we can redefine the empirical process $\{\alpha_n(s);\ 0\le s\le 1,\ n\ge 1\}$ of $\{U_n,\ n\ge 1\}$ on a richer probability space on which there exist a Kiefer process $\{K(s,t);\ 0\le s\le 1,\ t\ge 0\}$ with covariance function $\min(t,t')\Gamma(s,s')$ and a constant $\lambda>0$ depending only on $\varepsilon$ such that

$$\sup_{k\le n}\ \sup_{0\le s\le 1} |k^{1/2}\alpha_k(s) - K(s,k)| = O(n^{1/2}(\log n)^{-\lambda}) \qquad \text{a.s.} \tag{2.3}$$


An implication of Theorem A is the law of the iterated logarithm for $\alpha_n(s)$, i.e., there exists a $C>0$ such that

$$\limsup_{n\to\infty}\ \sup_{0\le s\le 1}\frac{|\alpha_n(s)|}{(\log\log n)^{1/2}} \le C \qquad \text{a.s.} \tag{2.4}$$
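As an informal illustration of (2.4), one can simulate $\sup_s|\alpha_n(s)|$ in the i.i.d.-uniform special case (for which $\alpha(n)=0$, trivially strong mixing) and compare it with $(\log\log n)^{1/2}$. This sketch is ours, not part of the paper:

```python
import numpy as np

rng = np.random.default_rng(42)

def sup_alpha(n):
    """sup_s |alpha_n(s)| = sqrt(n) * sup_s |E_n(s) - s|.

    For a step function E_n, the supremum is attained at the order
    statistics, via the standard one-sided Kolmogorov-Smirnov formulas.
    """
    u = np.sort(rng.uniform(size=n))
    i = np.arange(1, n + 1)
    d = np.maximum(i / n - u, u - (i - 1) / n).max()
    return np.sqrt(n) * d

ratios = [sup_alpha(n) / np.sqrt(np.log(np.log(n))) for n in (10**3, 10**4, 10**5)]
for n, r in zip((10**3, 10**4, 10**5), ratios):
    print(f"n={n}: sup|alpha_n| / (loglog n)^(1/2) = {r:.3f}")
```

The ratios stay bounded, as (2.4) predicts; the constant $C$ itself depends on the dependence structure through $\Gamma$.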

Theorem 2.1. Let $\{X_n,\ n\ge 1\}$ be a sequence of random variables with common continuous distribution function $F$. Assume that $F$ satisfies the Csörgő-Révész conditions, i.e.,

(i) $F$ is twice differentiable on $(a,b)$, where

$$a = \sup\{x : F(x) = 0\}, \qquad b = \inf\{x : F(x) = 1\}, \qquad -\infty\le a < b\le +\infty;$$

(ii) $F'(x) = f(x) > 0$ on $(a,b)$;

(iii) for some $\gamma > 0$ we have

$$\sup_{a<x<b}\frac{F(x)(1-F(x))\,|f'(x)|}{f^2(x)} = \sup_{0<s<1}\frac{s(1-s)\,|f'(Q(s))|}{f^2(Q(s))} \le \gamma.$$

Then, with $C$ the constant in (2.8),

$$\limsup_{n\to\infty}\ \frac{n^{1/2}\delta_n}{\log\log n}\ \sup_{\delta_n\le s\le 1-\delta_n}|\rho_n(s) - u_n(s)| \le \frac{\gamma C^2}{2} \qquad \text{a.s.,} \tag{2.5}$$
where $0 < \delta_n \to 0$ and $(\log\log n)^{1/2}/(n^{1/2}\delta_n) \to 0$, as $n\to\infty$. If, in addition to the Csörgő-Révész conditions, we also assume that $F$ satisfies

(iv) $\liminf_{s\downarrow 0} f(Q(s)) > 0$ and $\liminf_{s\uparrow 1} f(Q(s)) > 0$;

(v) $\limsup_{s\downarrow 0} |f'(Q(s))| < \infty$ and $\limsup_{s\uparrow 1} |f'(Q(s))| < \infty$;

then for the same $\delta_n$ defined in (2.5) we have

$$\sup_{0\le s\le 1}|\rho_n(s) - u_n(s)| = O\left(\frac{\log\log n}{n^{1/2}\delta_n} + n^{1/2}\delta_n^2\right) \qquad \text{a.s.} \tag{2.6}$$

Proof. By the two-term Taylor expansion, we have

$$\rho_n(s) = u_n(s) - \frac{1}{2n^{1/2}}\,u_n^2(s)\,\frac{f'(Q(\xi))}{f^3(Q(\xi))}\,f(Q(s)), \tag{2.7}$$

where $\xi = \xi(s,n)$ and

$$|s - \xi| \le n^{-1/2}|u_n(s)|.$$

Hence,

$$|\rho_n(s) - u_n(s)| \le \frac{1}{2n^{1/2}}\,u_n^2(s)\,\frac{|f'(Q(\xi))|}{f^3(Q(\xi))}\,f(Q(s)).$$

Using Lemma 2.3 of Babu and Singh (1978), we have from (2.4) that

$$\limsup_{n\to\infty}\ \sup_{0\le s\le 1}\frac{|u_n(s)|}{(\log\log n)^{1/2}} \le C \qquad \text{a.s.,} \tag{2.8}$$


which implies almost surely that

$$\limsup_{n\to\infty}\ \sup_{\delta_n\le s\le 1-\delta_n}\left(\frac{\delta_n}{s(1-s)\log\log n}\right)^{1/2}|u_n(s)|
\le \limsup_{n\to\infty}\left(\frac{\delta_n}{\delta_n(1-\delta_n)}\right)^{1/2}\ \sup_{0\le s\le 1}\frac{|u_n(s)|}{(\log\log n)^{1/2}} \le C.$$

Hence, we have almost surely that for any $\varepsilon > 0$ there exists an integer $N$ such that for $n\ge N$ and $\delta_n\le s\le 1-\delta_n$,

$$|u_n(s)| \le (C+\varepsilon)\left(\frac{s(1-s)\log\log n}{\delta_n}\right)^{1/2}, \tag{2.9}$$

and also, by $\delta_n\to 0$ and $(\log\log n)^{1/2}/(n^{1/2}\delta_n)\to 0$, as $n\to\infty$,

$$\lim_{n\to\infty}\ \sup_{\delta_n\le s\le 1-\delta_n}\frac{|u_n(s)|}{n^{1/2}s(1-s)} = 0 \qquad \text{a.s.} \tag{2.10}$$

Therefore, by (2.8), (2.9) and (iii), we have uniformly for $\delta_n\le s\le 1-\delta_n$,

$$|\rho_n(s) - u_n(s)| \le \frac{(C+\varepsilon)^2\log\log n}{2n^{1/2}\delta_n}\,s(1-s)\,\frac{|f'(Q(\xi))|\,f(Q(s))}{f^3(Q(\xi))}
\le \frac{\gamma(C+\varepsilon)^2\log\log n}{2n^{1/2}\delta_n}\cdot\frac{s(1-s)}{\xi(1-\xi)}\cdot\frac{f(Q(s))}{f(Q(\xi))}. \tag{2.11}$$

By Lemma 1 of Csörgő and Révész (1978),

$$\frac{f(Q(s))}{f(Q(\xi))} \le \left(\frac{s\vee\xi}{s\wedge\xi}\right)^{\gamma}\left(\frac{1-(s\wedge\xi)}{1-(s\vee\xi)}\right)^{\gamma}. \tag{2.12}$$

Thus, (2.5) follows from (2.11) and (2.12) if we can show that

$$\limsup_{n\to\infty}\ \sup_{\delta_n\le s\le 1-\delta_n}\ \max\left(\frac{s}{\xi},\ \frac{\xi}{s},\ \frac{1-s}{1-\xi},\ \frac{1-\xi}{1-s}\right) \le 1 \qquad \text{a.s.} \tag{2.13}$$

Now we estimate $s/\xi = 1 + (s-\xi)/\xi$. By (2.7) and (2.10), we have almost surely

$$\limsup_{n\to\infty}\ \sup_{\delta_n\le s\le 1-\delta_n}\ \frac{s}{\xi}
\le 1 + \limsup_{n\to\infty}\ \sup_{\delta_n\le s\le 1-\delta_n}\ \frac{n^{-1/2}|u_n(s)|}{s - n^{-1/2}|u_n(s)|}
\le 1 + \frac{\displaystyle\lim_{n\to\infty}\ \sup_{\delta_n\le s\le 1-\delta_n}\ n^{-1/2}s^{-1}|u_n(s)|}{\displaystyle 1 - \lim_{n\to\infty}\ \sup_{\delta_n\le s\le 1-\delta_n}\ n^{-1/2}s^{-1}|u_n(s)|} = 1.$$

Similar arguments give

$$\limsup_{n\to\infty}\ \sup_{\delta_n\le s\le 1-\delta_n}\ \max\left(\frac{\xi}{s},\ \frac{1-s}{1-\xi},\ \frac{1-\xi}{1-s}\right) \le 1 \qquad \text{a.s.}$$

This proves (2.13).


In order to prove (2.6), it suffices to show that

$$\sup_{0\le s\le\delta_n}|\rho_n(s) - u_n(s)| = O(n^{1/2}\delta_n^2) \qquad \text{a.s.} \tag{2.14}$$

and

$$\sup_{1-\delta_n\le s\le 1}|\rho_n(s) - u_n(s)| = O(n^{1/2}\delta_n^2) \qquad \text{a.s.} \tag{2.15}$$
We demonstrate this only for (2.14) since, for (2.15), a similar argument holds. First we observe that

$$\sup_{0\le s\le\delta_n}|u_n(s)| \le n^{1/2}\sup_{0\le s\le\delta_n} s\,I\{s\ge G_n(s)\} + n^{1/2}\sup_{0\le s\le\delta_n} G_n(s)\,I\{s\le G_n(s)\}.$$

By (2.10),

$$n^{1/2}\sup_{0\le s\le\delta_n} G_n(s) \le n^{1/2}G_n(\delta_n) \le n^{1/2}\delta_n + |u_n(\delta_n)| = O(n^{1/2}\delta_n) \qquad \text{a.s.}$$

Hence, we have shown that

$$\sup_{0\le s\le\delta_n}|u_n(s)| = O(n^{1/2}\delta_n) \qquad \text{a.s.} \tag{2.16}$$
Using the mean value theorem, we can write

$$\rho_n(s) = f(Q(s))\,n^{1/2}(Q(s) - Q(G_n(s))) = u_n(s) + u_n(s)e_n(s),$$

where

$$e_n(s) = \frac{f(Q(s))}{f(Q(\xi_1))} - 1 = \frac{(s-\xi_1)\,f'(Q(\xi_2))}{f(Q(\xi_1))\,f(Q(\xi_2))},$$

and $\xi_i = \xi_i(s,n)$ and $|s-\xi_i|\le n^{-1/2}|u_n(s)|$ for $i = 1,2$. Then, by (2.16), (iv) and (v),

$$\sup_{0\le s\le\delta_n}|u_n(s)e_n(s)| = O\Big(\sup_{0\le s\le\delta_n}|u_n(s)(s-\xi_1)|\Big) = O\Big(n^{-1/2}\sup_{0\le s\le\delta_n}u_n^2(s)\Big) = O(n^{1/2}\delta_n^2) \qquad \text{a.s.,}$$
which implies (2.14). This completes the proof of Theorem 2.1.

Remark 2.1. Note that in Theorem 2.1 we do not use any specific structure of dependence as long as the sequence $\{X_n,\ n\ge 1\}$ has a common distribution function $F$. On the other hand, by choosing $\delta_n = n^{-1/2}(\log n)^{\lambda}(\log\log n)$ for some $\lambda > 0$, we can obtain from (2.5) that

$$\sup_{\delta_n\le s\le 1-\delta_n}|\rho_n(s) - u_n(s)| = O((\log n)^{-\lambda}) \qquad \text{a.s.}$$

and from (2.6) that

$$\sup_{0\le s\le 1}|\rho_n(s) - u_n(s)| = O((\log n)^{-\lambda}) \qquad \text{a.s.}$$
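The rate choice in Remark 2.1 can be checked numerically. A small sketch (ours, not part of the paper; `lam` is an arbitrary illustrative value of $\lambda$): with $\delta_n = n^{-1/2}(\log n)^{\lambda}\log\log n$, the first term of (2.6) equals $(\log n)^{-\lambda}$ exactly, while the second term is of smaller order asymptotically, although its ratio to $(\log n)^{-\lambda}$ decays slowly.

```python
import math

lam = 1.0  # illustrative choice of the exponent lambda > 0

def error_terms(n, lam):
    """Return (loglog n / (sqrt(n) delta_n), sqrt(n) delta_n^2, (log n)^{-lam})
    for delta_n = n^{-1/2} (log n)^lam * log log n, the two terms of (2.6)."""
    delta = n ** -0.5 * math.log(n) ** lam * math.log(math.log(n))
    t1 = math.log(math.log(n)) / (n ** 0.5 * delta)
    t2 = n ** 0.5 * delta ** 2
    return t1, t2, math.log(n) ** -lam

rows = [error_terms(n, lam) for n in (10**4, 10**6, 10**8)]
for (t1, t2, rate), n in zip(rows, (10**4, 10**6, 10**8)):
    # t1/rate is identically 1; t2/rate decreases (slowly) toward 0
    print(f"n=10^{round(math.log10(n))}: t1/rate={t1 / rate:.6f}  t2/rate={t2 / rate:.2f}")
```

The slow decay of the second ratio is why $\delta_n$ cannot be taken much smaller without losing the $(\log n)^{-\lambda}$ rate.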
The following theorem gives an a.s. upper bound for the deviations between the uniform empirical and uniform quantile processes, the so-called Bahadur-Kiefer representations.


Theorem 2.2. Let $\{U_n,\ n\ge 1\}$ be a stationary $\alpha$-mixing sequence of uniform-$[0,1]$ random variables. If (2.2) holds, then for the same $\lambda$ as in Theorem A, we have

$$\sup_{0\le s\le 1}|u_n(s) - \alpha_n(s)| = O((\log n)^{-\lambda}) \qquad \text{a.s.} \tag{2.17}$$
Proof. It is well known that we have from (1.1) and (1.2)

$$\sup_{0\le s\le 1}|u_n(s) - \alpha_n(s)| \le \sup_{0\le s\le 1}|\alpha_n(G_n(s)) - \alpha_n(s)| + n^{1/2}\sup_{0\le s\le 1}|E_n(G_n(s)) - s|$$

$$\le \sup_{0\le s\le 1}|\alpha_n(G_n(s)) - \alpha_n(s)| + n^{-1/2}$$

$$= O\left(\sup_{0\le s\le 1}\ \sup_{|s'-s|\le\lambda_n} n^{-1/2}|K(s',n) - K(s,n)| + (\log n)^{-\lambda}\right) \qquad \text{a.s.,}$$

where $\lambda_n = Cn^{-1/2}(\log\log n)^{1/2}$ and the last equality follows by (2.3) and (2.4). Thus, to prove (2.17), one only needs to verify that

$$\sup_{0\le s\le 1}\ \sup_{|s'-s|\le\lambda_n}|K(s',n) - K(s,n)| = O(n^{1/2}(\log n)^{-\lambda}) \qquad \text{a.s.} \tag{2.18}$$
Let $t_k = [\exp(k^{1-\varepsilon})]$, $r_k = [\log k/\log 4]$, and $s_j = s_{jk} = (j-1)2^{-r_k}$ $(1\le j\le 2^{r_k})$. Then the following two inequalities can be proved in exactly the same way as Lemmas 6.2 and 6.3 of Berkes and Philipp (1977) (note that (3.3) of Berkes and Philipp (1977) is replaced by (5.3) of Philipp and Pinzur (1980)):

$$\max_{1\le j\le 2^{r_k}}\ \sup_{s_j\le s\le s_{j+1}}|K(s_j,t_k) - K(s,t_k)| = O(t_k^{1/2}(\log t_k)^{-\lambda}) \qquad \text{a.s.}$$

and

$$\sup_{t_k\le t\le t_{k+1}}\ \sup_{0\le s\le 1}|K(s,t) - K(s,t_k)| = O(t_k^{1/2}(\log t_k)^{-\lambda}) \qquad \text{a.s.}$$

Since for $t_k\le n\le t_{k+1}$ and $k$ large enough $\lambda_n\le 2^{-r_k}$, (2.18) follows from the above two inequalities. This completes the proof of Theorem 2.2.

Theorem 2.3. Let $\{X_n,\ n\ge 1\}$ be a stationary $\alpha$-mixing sequence of random variables with common continuous distribution function $F$. Assume that $F$ satisfies the Csörgő-Révész conditions and (2.2) holds. Then there exists a Kiefer process $K(s,n)$ defined on the same probability space as $\rho_n(s)$ with covariance function $\min(n,n')\Gamma(s,s')$ and a constant $\lambda > 0$ depending only on $\varepsilon$ such that

$$\sup_{\delta_n\le s\le 1-\delta_n}|\rho_n(s) - K(s,n)/n^{1/2}| = O((\log n)^{-\lambda}) \qquad \text{a.s.,} \tag{2.19}$$

where $\delta_n = n^{-1/2}(\log n)^{\lambda}(\log\log n)$. If, in addition to the Csörgő-Révész conditions, we also assume that $F$ satisfies the conditions (iv) and (v) in Theorem 2.1, then we have

$$\sup_{0\le s\le 1}|\rho_n(s) - K(s,n)/n^{1/2}| = O((\log n)^{-\lambda}) \qquad \text{a.s.} \tag{2.20}$$


Remark 2.2. Requiring longer proofs and a strong mixing decay rate $\alpha(n) = O(n^{-8})$, Fotopoulos et al. (1994) in their Theorem 2 obtain the same result as that of (2.19), but with the much narrower intervals $\delta_n = n^{-\mu}$ for some $0 < \mu < 1/480$.

Finally, we wish to note in passing that Theorems 1 and 3 of Fotopoulos et al. (1994) may not be correct as stated, unless some further conditions are imposed. With strong mixing decay rate $\alpha(n) = O(n^{-8})$ and assuming that $F$ satisfies their conditions F1 and F2 (the same conditions as (i) and (ii) of the Csörgő-Révész conditions, respectively), and

$$\text{F3:}\quad \sup_{0<s<1}|f'(Q(s))| < \infty,$$

Fotopoulos et al. (1994) in their Theorem 1 obtain the same result as that of (2.20). However, we believe that their Theorem 1 and also their Theorem 3 hold true only under the additional assumption

$$\text{G1:}\quad \inf_{0<s<1} f(Q(s)) > 0.$$
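The role of G1 can be made explicit with a short calculation (ours, not part of the original text), using the exponential distribution discussed below:

```latex
\[
F(x) = 1 - e^{-x},\ x \ge 0, \qquad
Q(s) = -\log(1-s), \qquad
f(Q(s)) = e^{-Q(s)} = 1-s, \qquad
|f'(Q(s))| = 1-s .
\]
\[
\sup_{0<s<1}|f'(Q(s))| = 1 < \infty \quad (\text{F1, F2, F3 hold}), \qquad
\inf_{0<s<1} f(Q(s)) = 0 \quad (\text{G1 fails}).
\]
```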
One can, for example, use the exponential distribution $F(x) = 1 - \exp(-x)$, $x\ge 0$, to verify that though their conditions F1, F2 and F3 are satisfied, G1 fails to hold. This, in turn, means that their conclusion (4.7), which is the key step in proving their Theorems 1 and 3, is not valid. On the other hand, it is easy to check that conditions F1, F2, F3 and G1 imply the Csörgő-Révész conditions, and conditions (iv) and (v) in Theorem 2.1. This means that after imposing condition G1, Theorem 1 of Fotopoulos et al. (1994) becomes a special case of our Theorem 2.3.

## References

Babu, G.J. and K. Singh (1978), On deviations between empirical and quantile processes for mixing random variables, J. Multivariate Anal. 8, 532-549.

Berkes, I. and W. Philipp (1977), An almost sure invariance principle for the empirical distribution function of mixing random variables, Z. Wahrsch. Verw. Gebiete 41, 115-137.

Csörgő, M. and P. Révész (1978), Strong approximations of the quantile process, Ann. Statist. 6, 882-894.

Fotopoulos, S., S.K. Ahn and S. Cho (1994), Strong approximation of the quantile processes and its applications under strong mixing properties, J. Multivariate Anal. 51, 17-45.

Philipp, W. and L. Pinzur (1980), Almost sure approximation theorems for the multivariate empirical process, Z. Wahrsch. Verw. Gebiete 54, 1-13.