Robust adaptive beamforming with minimum sensitivity to correlated random errors


Jie Zhuang, Bai Shi, Xuanchen Zuo, Abdulrahman Hussein Ali
School of Communication and Information Engineering, University of Electronic Science and Technology of China (UESTC), Chengdu 611731, China

Article history: Received 9 April 2016; Received in revised form 15 July 2016; Accepted 2 August 2016; Available online 3 August 2016

Abstract

The standard Capon beamformer is subject to substantial performance degradation in the presence of estimation errors in the signal steering vector and the array covariance matrix. In order to address this problem, robust adaptive beamformers (RABs) have been designed. In this study, we propose a novel RAB from the perspective of the beamformer sensitivity. In particular, we consider the general form of the beamformer sensitivity, which allows the random errors to be correlated rather than white noise. We then suggest using the inverse of the array sample covariance matrix as the random error covariance. Based on this, we propose to compute the Capon beamformer with minimum sensitivity to correlated random errors, using a Euclidean ball as the uncertainty set for the signal steering vector. Moreover, the Lagrange multiplier methodology can be employed to solve the proposed optimization problem. Numerical results demonstrate the superior performance of the proposed beamformer in the presence of large mismatch relative to other existing approaches such as the 'diagonal loading', 'robust Capon' and 'maximally robust Capon' beamformers.

Keywords: Robust adaptive beamforming; Beamformer sensitivity; Correlated random errors

1. Introduction

The Capon beamformer is a representative example of an adaptive array beamformer, which intends to allow the signal of interest (SOI) to pass through without any distortion while the interference signals and noise are suppressed as much as possible, thereby maximizing the output signal-to-interference-plus-noise ratio (SINR). The standard Capon beamformer (SCB) can be formulated as

$$\min_{w}\; w^H \hat{R} w \quad \text{s.t.}\quad w^H \bar{a} = 1 \qquad (1)$$

with the solution

$$w_c = \beta_c\, \hat{R}^{-1} \bar{a} \qquad (2)$$

where ā denotes the nominal SOI steering vector. The immaterial scalar β_c = 1/(ā^H R̂^{-1} ā) does not affect the array output SINR. The estimated covariance matrix R̂ can be formed by

$$\hat{R} = \frac{1}{K} \sum_{k=1}^{K} x(k)\, x^H(k) \qquad (3)$$

where {x(k)}_{k=1}^{K} denote the array observations or snapshots. Performing eigen-decomposition on R̂ yields

$$\hat{R} = \hat{U} \hat{\Gamma} \hat{U}^H = \sum_{i=1}^{N} \hat{\gamma}_i\, \hat{u}_i \hat{u}_i^H \qquad (4)$$

where N is the number of array sensors. The matrix Û = [û_1, …, û_N] collects all the eigenvectors, and Γ̂ = diag{γ̂_1, …, γ̂_N} is a diagonal matrix whose eigenvalues γ̂_1 ≥ ⋯ ≥ γ̂_N are arranged in nonincreasing order. By incorporating knowledge of the white noise variance σ_n², we can obtain the maximum likelihood estimate with a noise floor constraint [1]

$$\hat{R}_{\mathrm{ML}} = \sum_{i=1}^{N} \max\{\hat{\gamma}_i,\, \sigma_n^2\}\; \hat{u}_i \hat{u}_i^H. \qquad (5)$$

It has been pointed out that the SCB may suffer from substantial performance degradation even for a small mismatch between the presumed ā (or R̂) and its actual value a0 (or R) [2–5]. This is because in such a situation the SOI may be treated as an interference signal and therefore be suppressed erroneously, which is commonly referred to as signal self-nulling [10]. In order to address this problem, robust adaptive beamformers (RABs) have been designed which aim to provide acceptable performance even when the nominal steering vector and covariance matrix depart from their actual values. An excellent review and comparison of the existing robust techniques is provided in [11,12]; see also the references contained therein.


The most popular RAB technique is the diagonal loading (DL) method [2] along with its generalized versions [3–5]. The conventional DL beamformer can be formulated as

$$\min_{w}\; w^H \hat{R} w + \xi\, w^H w \quad \text{s.t.}\quad w^H \bar{a} = 1. \qquad (6)$$

The optimum solution of (6) is given by

$$w_{\mathrm{DL}} = \beta_{\mathrm{DL}} \left( \hat{R} + \xi I \right)^{-1} \bar{a} \qquad (7)$$

where I denotes the identity matrix of appropriate size and the scaling factor β_DL = 1/(ā^H (R̂ + ξI)^{-1} ā) is also immaterial. We can see that the DL method in (6) differs from the SCB of (1) in that an additional term ξ w^H w is used, which can be explained by the following fact. When signal self-nulling occurs, we have w^H a0 ≈ 0 (where a0 stands for the true steering vector of the SOI) and simultaneously w^H ā = 1 due to the distortionless constraint on the nominal SOI. Consequently, we obtain the approximation w^H(ā − a0) ≈ 1. However, the nominal and actual steering vectors are often close and hence ∥ā − a0∥ is relatively small, where ∥·∥ denotes the Euclidean norm. This implies that the relation w^H(ā − a0) ≈ 1 can hold only if ∥w∥ is large [13]. Therefore, the term ξ w^H w in (6) prevents the norm of the weight vector from becoming large and in turn avoids signal self-cancellation. In the traditional DL method [2], the loading factor ξ is set in an ad hoc way, typically ξ = 10σ_n², where σ_n² denotes the noise power. The generalized versions of DL [3–5] attempt to establish the relationship between the loading factor and the steering vector uncertainty level. For example, in the RAB presented in [3] it is assumed that the true SOI steering vector belongs to the uncertainty set

$$\mathcal{E}(\varepsilon) \triangleq \left\{\, a \;\big|\; \|a - \bar{a}\| \le \varepsilon \,\right\} \qquad (8)$$

where ε is a pre-selected upper bound on the norm of the steering vector mismatch. The RAB in [3] then maintains a gain no less than unity within the uncertainty set while minimizing the output power, that is,

$$\min_{w}\; w^H \hat{R} w \quad \text{s.t.}\quad |w^H a| \ge 1, \quad \forall\, a \in \mathcal{E}(\varepsilon). \qquad (9)$$

Moreover, it is found that the least gain within the uncertainty set, which corresponds to the worst-case steering vector, has the following form [3]:

$$\min_{a \in \mathcal{E}(\varepsilon)} |w^H a| = |w^H \bar{a}| - \varepsilon\, \|w\| \ge 1. \qquad (10)$$

As a consequence, the problem in (9) can be rewritten as a convex second-order cone program and solved efficiently using an interior point method. It is also shown that this RAB technique based on worst-case performance optimization belongs to the diagonal loading approaches. In addition, it is shown in [4] that the RABs proposed in [3–5] are equivalent, and their essence is to replace the nominal SOI steering vector by the vector from the presumed uncertainty set which results in the maximum output power. Recently, a novel RAB has been considered in [10] from the perspective of the beamformer sensitivity, which is defined as

$$T_{\mathrm{se}} \triangleq \frac{\|w\|^2}{|w^H \bar{a}|^2}. \qquad (11)$$

From (10), we observe that the largest deviation of the array gain within the uncertainty set is ε∥w∥. Therefore, the beamformer sensitivity measures the relative deviation in the array response (normalized by the uncertainty level ε). The beamforming problem in [10] is then formulated as

$$\min_{a,\, w}\; \frac{\|w\|^2}{|w^H \bar{a}|^2} \quad \text{s.t.}\quad w = \frac{\hat{R}^{-1} a}{a^H \hat{R}^{-1} a}, \quad a \in \mathcal{E}(\varepsilon). \qquad (12)$$

The basic idea of (12) is to find the vector within the uncertainty set that achieves the minimum beamformer sensitivity. As will be shown subsequently, the assumption of uncorrelated random errors is used implicitly in [10]; in other words, the signals are assumed to be perturbed by white noise. However, this white noise assumption is not always satisfied [6]. In this paper, we propose a robust adaptive beamformer that is also based on the beamformer sensitivity. Here we treat a more general case in which the signals are perturbed by correlated random errors, and we find that it is reasonable to use the inverse of the sample covariance matrix as the random error covariance. We consider a beamformer optimization problem that seeks the minimum beamformer sensitivity to the correlated random errors. This optimization problem can be solved by the Lagrange multiplier methodology.
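Before turning to the proposed method, the role of the loading term can be made concrete with a small sketch (illustrative only; the scenario values and helper names such as `dl_weight` are our assumptions, not the paper's). It computes the SCB weight (2) and the DL weight (7) for a toy one-interferer scenario and compares their sensitivity (11); increasing ξ shrinks ∥w∥ and hence the sensitivity.

```python
import numpy as np

def steering(N, theta_deg):
    """Half-wavelength ULA steering vector for DOA theta (degrees)."""
    n = np.arange(N)
    return np.exp(1j * np.pi * n * np.sin(np.deg2rad(theta_deg)))

def capon_weight(R, a):
    """Eq. (2): w = R^{-1} a / (a^H R^{-1} a)."""
    Ria = np.linalg.solve(R, a)
    return Ria / (a.conj() @ Ria)

def dl_weight(R, a, xi):
    """Eq. (7): Capon weight computed from the loaded covariance R + xi*I."""
    return capon_weight(R + xi * np.eye(len(a)), a)

def sensitivity(w, a_bar):
    """Eq. (11): T_se = ||w||^2 / |w^H a_bar|^2."""
    return np.linalg.norm(w) ** 2 / np.abs(w.conj() @ a_bar) ** 2

# toy scenario: one interferer at 30 deg with INR = 30 dB, unit noise, nominal SOI at 0 deg
N, sigma_n2 = 10, 1.0
a_bar = steering(N, 0.0)
a_int = steering(N, 30.0)
R = 1000.0 * np.outer(a_int, a_int.conj()) + sigma_n2 * np.eye(N)
print(sensitivity(capon_weight(R, a_bar), a_bar))
print(sensitivity(dl_weight(R, a_bar, 10 * sigma_n2), a_bar))  # xi = 10*sigma_n^2
```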

2. Proposed robust beamformer

In [2], a general definition of the beamformer sensitivity is given by

$$T_{\mathrm{g\text{-}se}} \triangleq \frac{w^H E\, w}{|w^H \bar{a}|^2} \qquad (13)$$

where the matrix E denotes the covariance of the random errors. It is pointed out in [2] that T_g-se is a classic measure of sensitivity to tolerance errors. When the errors are uncorrelated, the covariance becomes the identity matrix and the above definition reduces to that in (11). However, this white noise assumption is not always valid. If we take the general form into account, the question that arises is the choice of the error covariance matrix. As pointed out in [7,8], strong prior knowledge of the shape of the random errors is seldom available in practice, and robustness may be endangered if an imprecise covariance matrix is used. In this paper, we suggest using the inverse of the sample covariance matrix, i.e., R̂^{-1}, to replace the matrix E in (13), implying that the random errors are assumed to be more related to the subdominant eigenvectors of R̂ than to the dominant eigenvectors. Thus the optimization problem considered in this paper takes the following form:

$$\min_{a,\, w}\; \frac{w^H \hat{R}^{-1} w}{|w^H \bar{a}|^2} \quad \text{s.t.}\quad w = \frac{\hat{R}^{-1} a}{a^H \hat{R}^{-1} a}, \quad a \in \mathcal{E}(\varepsilon). \qquad (14)$$
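A minimal sketch of how the objective in (14) can be evaluated for a candidate steering vector a (helper names such as `objective_14` are ours, not the paper's): the weight is tied to a through the Capon formula of the equality constraint, and the generalized sensitivity (13) is computed with E = R̂^{-1}.

```python
import numpy as np

def capon_weight(R_hat, a):
    """Equality constraint in (14): w = R^{-1} a / (a^H R^{-1} a)."""
    Ria = np.linalg.solve(R_hat, a)
    return Ria / (a.conj() @ Ria)

def generalized_sensitivity(w, E, a_bar):
    """Eq. (13): T_gse = w^H E w / |w^H a_bar|^2."""
    return np.real(w.conj() @ E @ w) / np.abs(w.conj() @ a_bar) ** 2

def objective_14(R_hat, a, a_bar):
    """Objective of (14): generalized sensitivity with E = R_hat^{-1}."""
    w = capon_weight(R_hat, a)
    return generalized_sensitivity(w, np.linalg.inv(R_hat), a_bar)

# toy check: at a = a_bar the objective reduces to a^H R^{-3} a / |a^H R^{-1} a|^2
rng = np.random.default_rng(1)
A = rng.standard_normal((10, 10)) + 1j * rng.standard_normal((10, 10))
R_hat = A @ A.conj().T / 10 + np.eye(10)
a_bar = np.ones(10, dtype=complex)
print(objective_14(R_hat, a_bar, a_bar))
```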

The reason for the choice of R̂^{-1} can also be explained by the following facts. Let us first rewrite the DL weight vector in (7) as

$$w_{\mathrm{DL}} = \beta_{\mathrm{DL}} \left( \hat{R} + \xi I \right)^{-1} \bar{a} = \beta_{\mathrm{DL}} \sum_{i=1}^{N} \frac{\hat{u}_i^H \bar{a}}{\hat{\gamma}_i + \xi}\; \hat{u}_i. \qquad (15)$$

From (15), we observe that for large eigenvalues the term 1/(γ̂_i + ξ) is almost unchanged whether ξ is loaded or not. However, for small eigenvalues the term 1/(γ̂_i + ξ) reduces significantly once a positive number ξ is loaded.



This implies that the effect of the loading factor is to deemphasize the components corresponding to small eigenvalues, whereas those associated with large eigenvalues are maintained. The signal subspace projection (SSP) method presented in [9] is another classical robust beamformer, in which the nominal SOI steering vector is preprocessed by projecting it onto the signal subspace; the resulting weight vector is given by

$$w_{\mathrm{SSP}} = \hat{R}^{-1} P_s \bar{a} = \sum_{i=1}^{M} \frac{\hat{u}_i^H \bar{a}}{\hat{\gamma}_i}\; \hat{u}_i = \sum_{i=1}^{M} \frac{\hat{u}_i^H \bar{a}}{\hat{\gamma}_i + 0}\; \hat{u}_i + \sum_{i=M+1}^{N} \frac{\hat{u}_i^H \bar{a}}{\hat{\gamma}_i + \infty}\; \hat{u}_i \qquad (16)$$

where the projection operator Ps = Û_s Û_s^H and the matrix Û_s = [û_1, …, û_M] collects the eigenvectors associated with the M largest eigenvalues (where M is the number of signals). We can see that the SSP approach uses an infinite loading factor to discard all the noise-subspace eigenvectors and a zero loading factor to keep the signal-subspace eigenvectors. The above analysis tells us that loading-type methods aim to deemphasize the subdominant eigenvectors and thus keep the weight vector away from these eigenvectors. This is reasonable: since the subdominant eigenvectors are orthogonal to the signals (including the SOI), deemphasizing the components associated with the subdominant eigenvectors avoids nulling the SOI. Note that this robustness is gained at the cost of adaptive interference suppression and noise reduction. In (14), we use a similar strategy. Different from (12), we intend to minimize the weighted norm w^H R̂^{-1} w such that the weight vector is deliberately prevented from converging to the subdominant eigenvectors of R̂, thereby avoiding self-nulling. In comparison with the DL and SSP methods, in which the loading factor is always fixed, our proposed method is a type of "soft" policy and can therefore be expected to have better flexibility in adaptive interference cancellation.
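The eigenvalue-reweighting view of (15) and (16) can be sketched as follows (illustrative only; the immaterial scaling factors are omitted and the helper names are our assumptions): DL shrinks the 1/(γ̂_i + ξ) weights attached to the small (noise-subspace) eigenvalues, while SSP sets them to zero outright.

```python
import numpy as np

def loaded_weight(R_hat, a_bar, xi):
    """Eq. (15): w proportional to sum_i (u_i^H a_bar / (gamma_i + xi)) u_i."""
    gamma, U = np.linalg.eigh(R_hat)            # eigen-decomposition as in (4)
    coef = (U.conj().T @ a_bar) / (gamma + xi)  # per-eigenvector weights
    return U @ coef

def ssp_weight(R_hat, a_bar, M):
    """Eq. (16): keep the M dominant eigenvectors (zero loading),
    discard the rest (infinite loading)."""
    gamma, U = np.linalg.eigh(R_hat)
    order = np.argsort(gamma)[::-1]             # sort eigenvalues descending
    gamma, U = gamma[order], U[:, order]
    coef = (U.conj().T @ a_bar) / gamma
    coef[M:] = 0.0                              # noise-subspace terms removed
    return U @ coef
```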

Now let us turn to the solution of the optimization problem (14). By substituting the equality constraint of (14) into the objective function, we have

$$\min_{a}\; \frac{a^H \hat{R}^{-3} a}{a^H \hat{R}^{-1} \bar{a}\; \bar{a}^H \hat{R}^{-1} a} \quad \text{s.t.}\quad a \in \mathcal{E}(\varepsilon). \qquad (17)$$

Now we transform the Euclidean ball constraint into an ice-cream cone constraint. The uncertainty set E(ε) is a Euclidean ball centered at ā with radius ε, as shown in Fig. 1. The maximum angle between the nominal SOI steering vector ā and any vector within the Euclidean ball can be readily obtained as

$$\max_{a \in \mathcal{E}(\varepsilon)} \angle(\bar{a}, a) = \arccos\!\left( \frac{\sqrt{\|\bar{a}\|^2 - \varepsilon^2}}{\|\bar{a}\|} \right). \qquad (18)$$

For any vector, if the angle between it and the nominal ā is less than this maximum angle, we can always scale it such that the scaled version is located in the Euclidean ball and thus satisfies the constraint in (17). In addition, we observe that the objective function of (17) remains unchanged under any scaling of a. This implies that the constraint in (17) can be replaced with

$$\cos\angle(\bar{a}, a) = \frac{\bar{a}^H a}{\|\bar{a}\|\, \|a\|} \;\ge\; \frac{\sqrt{\|\bar{a}\|^2 - \varepsilon^2}}{\|\bar{a}\|}. \qquad (19)$$

Moreover, the above constraint can be simplified to

$$\nu\, \|a\| \le \bar{a}^H a \qquad (20)$$

where the constant

$$\nu = \sqrt{\|\bar{a}\|^2 - \varepsilon^2}. \qquad (21)$$

Also, we observe that the cost function in (17) and the constraint in (20) are unchanged when the vector a undergoes an arbitrary phase rotation. Therefore, if a⋆ is an optimal solution to (17), we can always rotate the phase of a⋆, without affecting the objective function value, so that ā^H R̂^{-1} a⋆ is real; exploiting the scaling invariance of (17) and (20), it can further be normalized to one. As a consequence, we can transform the optimization problem in (17) to

$$\min_{a}\; a^H \hat{R}^{-3} a \quad \text{s.t.}\quad \bar{a}^H \hat{R}^{-1} a = 1, \quad \nu\,\|a\| \le \bar{a}^H a. \qquad (22)$$

Following the work of [10], we can solve the optimization problem of (22) using the Lagrange multiplier methodology. The Lagrangian function of (22) can be written as

$$\mathcal{L}(a, \lambda, \mu) = a^H \hat{R}^{-3} a + \lambda\left( \nu^2 a^H a - a^H \bar{a}\, \bar{a}^H a \right) + \mu\left( - a^H \hat{R}^{-1} \bar{a} - \bar{a}^H \hat{R}^{-1} a + 2 \right) \qquad (23)$$

where λ and μ are real-valued Lagrange multipliers, with λ ≥ 0 and μ arbitrary. Differentiating the above Lagrangian function with respect to a gives

$$\frac{\partial \mathcal{L}(a, \lambda, \mu)}{\partial a} = 2\left[ \hat{R}^{-3} + \lambda\left( \nu^2 I - \bar{a}\bar{a}^H \right) \right] a - 2\mu\, \hat{R}^{-1} \bar{a}. \qquad (24)$$

By defining the matrix

$$C_\lambda \triangleq \hat{R}^{-3} + \lambda\left( \nu^2 I - \bar{a}\bar{a}^H \right) \qquad (25)$$

and setting the above derivative to zero, we obtain

$$a_{\lambda, \mu} = \mu\, C_\lambda^{-1} \hat{R}^{-1} \bar{a}. \qquad (26)$$

Since we use the Capon method to construct the final weight vector, i.e., w = R̂^{-1}a/(a^H R̂^{-1}a), the scaling factor μ does not affect the array output SINR and hence we can ignore μ. The remaining unknown in (26) is λ, which can be found by substituting (26) into the ice-cream cone constraint. That is,

$$\nu\, \|a_{\lambda,\mu}\| = \bar{a}^H a_{\lambda,\mu} \;\Rightarrow\; \nu\, \left\| C_\lambda^{-1} \hat{R}^{-1} \bar{a} \right\| = \bar{a}^H C_\lambda^{-1} \hat{R}^{-1} \bar{a}. \qquad (27)$$

Next we discuss how to find λ. Using the complementary projection matrix P⊥ ≜ I − āā^H/∥ā∥², we can rewrite the matrix Cλ as

$$C_\lambda = \hat{R}^{-3} + \lambda \nu^2 P_\perp - \frac{\lambda \varepsilon^2}{\|\bar{a}\|^2}\; \bar{a}\bar{a}^H. \qquad (28)$$

Since the matrix R̂ is fixed and the projection matrix P⊥ contains no components of ā, we can see that the larger λ is, the fewer components associated with ā are contained in Cλ. However, the inverse operation results in more components associated with ā in Cλ^{-1} when λ becomes large. This in turn renders the vector a_{λ,μ} closer to the center of the ice-cream cone (i.e., ā). Next we show how to determine the largest value of λ which keeps Cλ positive definite. By defining the positive definite matrix

$$G_\lambda \triangleq \hat{R}^{-3} + \lambda \nu^2 I, \qquad (29)$$

we can rewrite Cλ as

$$C_\lambda = G_\lambda^{1/2} \left( I - \lambda\, G_\lambda^{-1/2} \bar{a}\bar{a}^H G_\lambda^{-1/2} \right) G_\lambda^{1/2}. \qquad (30)$$

Thus, Cλ ⪰ 0 if and only if


$$f(\lambda) \triangleq \lambda\, \bar{a}^H G_\lambda^{-1} \bar{a} \le 1. \qquad (31)$$

Using (4), the function f(λ) can be expressed as

$$f(\lambda) = \lambda \sum_{n=1}^{N} \frac{|z_n|^2}{\hat{\gamma}_n^{-3} + \lambda \nu^2} \qquad (32)$$

where z_n is the n-th element of z = Û^H ā. Since f(λ) is strictly monotonically increasing for λ > 0, we can find a unique λmax such that

$$f(\lambda_{\max}) = \lambda_{\max} \sum_{n=1}^{N} \frac{|z_n|^2}{\hat{\gamma}_n^{-3} + \lambda_{\max} \nu^2} = 1. \qquad (33)$$

Due to the fact that Cλ ⪰ 0 for any λ ≤ λmax, we can find an upper bound for λmax as follows:

$$\bar{a}^H C_\lambda \bar{a} \ge 0 \;\Rightarrow\; \lambda \le \frac{\bar{a}^H \hat{R}^{-3} \bar{a}}{\|\bar{a}\|^2\, \varepsilon^2}. \qquad (34)$$

Once the value of λmax is determined by solving (33), we can find the root of (27) within the interval [0, λmax] by searching over a number of points. In order to reduce the computational complexity, we can use the eigen-decomposition of R̂. First, by defining a vector bλ as

$$b_\lambda \triangleq C_\lambda^{-1} \hat{R}^{-1} \bar{a}, \qquad (35)$$

we can rewrite the equation of (27) as

$$\frac{b_\lambda^H \bar{a}}{\nu\, \|b_\lambda\|} = 1. \qquad (36)$$

Then, using the matrix inversion lemma, we have

$$b_\lambda = \left( G_\lambda^{-1} + \frac{\lambda\, G_\lambda^{-1} \bar{a} \bar{a}^H G_\lambda^{-1}}{1 - \lambda\, \bar{a}^H G_\lambda^{-1} \bar{a}} \right) \hat{R}^{-1} \bar{a} \qquad (37)$$

where the inverse matrix G_λ^{-1} can be computed efficiently by

$$G_\lambda^{-1} = \hat{U}\; \mathrm{diag}\!\left[ \frac{1}{\hat{\gamma}_1^{-3} + \lambda \nu^2}, \ldots, \frac{1}{\hat{\gamma}_N^{-3} + \lambda \nu^2} \right] \hat{U}^H \qquad (38)$$

where diag[·] denotes a diagonal matrix.

So far, the proposed algorithm can be summarized as the following steps:
1. Collect the snapshots to estimate the received signal covariance matrix R̂ by (3) and perform eigen-decomposition on R̂.
2. Replace the covariance matrix R̂ with the maximum likelihood estimate R̂_ML in (5).
3. Solve (33) for λmax, using the knowledge that the solution is unique and its upper bound is given by (34).
4. Search for the optimum point λ⋆ ∈ [0, λmax] which satisfies (36), where (37) and (38) are utilized to improve the computational efficiency.
5. Construct the beamformer weight vector by

$$w = \hat{R}_{\mathrm{ML}}^{-1}\, b_{\lambda^\star}. \qquad (39)$$

Note that in our proposed method, we use the maximum likelihood estimate R̂_ML in (5) in place of R̂ in (3). Here we assume that the noise power is known a priori or can be estimated.

Remark: If we remove the Euclidean ball constraint, the optimization problem of (17) becomes a generalized Rayleigh quotient for the matrix pencil (R̂^{-3}, R̂^{-1}ā ā^H R̂^{-1}), the optimal solution of which can be readily obtained as a⋆ = R̂²ā. Interestingly, by setting λ = 0 we have b_{λ=0} = R̂²ā, which is the same as the above optimal solution without the Euclidean ball constraint. However, such a vector is most likely outside the ice-cream cone. We then increase the value of λ so that Cλ^{-1} contains more components associated with ā and thus the vector bλ approaches the ice-cream cone (see Fig. 1). The final optimal solution λ⋆ is found when the vector bλ touches the ice-cream cone and, as a consequence, the Euclidean ball constraint is satisfied.
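To make the five steps concrete, the following self-contained NumPy sketch implements the procedure as we understand it (function and variable names such as `proposed_beamformer` are ours; a simple grid search stands in for the paper's "searching a number of points", and a root-finder could be substituted). It assumes ε < ∥ā∥ and a known noise power.

```python
import numpy as np

def proposed_beamformer(X, a_bar, eps, sigma_n2, n_grid=500):
    """Sketch of the proposed RAB (steps 1-5). X: N x K snapshots,
    a_bar: nominal SOI steering vector, eps: uncertainty radius (< ||a_bar||),
    sigma_n2: known noise power."""
    N, K = X.shape
    # Step 1: sample covariance (3) and eigen-decomposition (4)
    R_hat = X @ X.conj().T / K
    gamma, U = np.linalg.eigh(R_hat)
    # Step 2: maximum likelihood estimate with noise floor (5)
    gamma = np.maximum(gamma, sigma_n2)
    nu2 = np.linalg.norm(a_bar) ** 2 - eps ** 2        # nu^2, cf. (21)
    z = U.conj().T @ a_bar                             # z = U^H a_bar, cf. (32)
    Ri_a = U @ (z / gamma)                             # R_ML^{-1} a_bar

    def f(lam):                                        # f(lambda) in (32)
        return lam * np.sum(np.abs(z) ** 2 / (gamma ** -3 + lam * nu2))

    # Step 3: solve f(lambda_max) = 1 by bisection on [0, upper bound (34)]
    lam_hi = np.real(a_bar.conj() @ (U @ (z * gamma ** -3))) / (np.linalg.norm(a_bar) ** 2 * eps ** 2)
    lo, hi = 0.0, lam_hi
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if f(mid) < 1.0 else (lo, mid)
    lam_max = lo

    def b_vec(lam):                                    # b_lambda via (37)-(38)
        Gi_diag = 1.0 / (gamma ** -3 + lam * nu2)      # eigenvalues of G_lambda^{-1}
        Gi = (U * Gi_diag) @ U.conj().T
        Gia = Gi @ a_bar
        denom = 1.0 - lam * np.real(a_bar.conj() @ Gia)
        Ci = Gi + lam * np.outer(Gia, Gia.conj()) / denom
        return Ci @ Ri_a

    # Step 4: search lambda in [0, lam_max) for the root of (36)
    def cone_gap(lam):
        b = b_vec(lam)
        return abs(np.real(b.conj() @ a_bar) / (np.sqrt(nu2) * np.linalg.norm(b)) - 1.0)

    grid = np.linspace(0.0, 0.999 * lam_max, n_grid)
    lam_star = min(grid, key=cone_gap)
    # Step 5: weight vector (39), w = R_ML^{-1} b_{lambda*}
    b_star = b_vec(lam_star)
    return U @ ((U.conj().T @ b_star) / gamma)
```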

3. Simulation results

Assume that three signals (one SOI plus two interferers) are incident on a uniform linear array with N = 10 isotropic sensors and half-wavelength sensor spacing. The directions-of-arrival (DOAs) of the two interfering signals are fixed at [θ1, θ2] = [30°, 50°]. For each interfering signal, the interference-to-noise ratio (INR) in a single sensor is equal to 30 dB. Unless stated otherwise, we use K = 50 snapshots to estimate the covariance matrix, the input signal-to-noise ratio (SNR) is −10 dB, the uncertainty level is ε = 3, and the nominal DOA of the SOI is θ̄ = 0°. Four methods are compared in terms of the mean output SINR: (1) the proposed method; (2) the traditional diagonal loading (DL) method in (7) with the loading factor ξ = 10σn²; (3) the robust Capon beamforming (RCB) presented in [3]; (4) the maximally robust Capon beamforming (MRCB) proposed in [10]. For each scenario, the average of 200 independent runs is used to plot each simulation point. The optimal SINR is also plotted for reference.

3.1. Example 1: Random steering vector mismatch

In the first example, the actual steering vector of the SOI is generated randomly as

$$a_0 = \bar{a} - e \qquad (40)$$

where the presumed steering vector ā corresponds to the steering vector with DOA θ̄ = 0°, and e denotes the random mismatch. In each simulation run the random error vector e is drawn independently from a zero-mean circularly symmetric complex Gaussian distribution with covariance δI. We can obtain different average norms of the error vector e by setting the parameter δ. First, we study the beamformer output SINR versus the average norm of the mismatch (i.e., ∥e∥). The resulting output SINR performances of the four methods are depicted in Fig. 2, where we can see that our proposed method outperforms the others when ∥e∥ is larger than one. In Fig. 3, the presumed uncertainty level ε varies from 0.5 to 3 whereas the average ∥e∥ remains at 2. Fig. 3 shows that the uncertainty level ε = 0.3N is a good choice for all three beamformers (RCB, MRCB and ours), which is consistent with the simulation results of [3]. Therefore, we choose ε = 0.3N for the other simulation experiments. Figs. 4 and 5 illustrate the performance versus the snapshot number and the input SNR, respectively, where the average ∥e∥ again remains at 2. It can be seen that when K ≥ 20 and the input SNR ≥ −12 dB our algorithm achieves better performance than the three other methods tested.
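For readers who wish to reproduce the flavor of these experiments, the following sketch (our own setup code with assumed helper names; the paper provides no code) builds the ten-sensor half-wavelength ULA scenario with unit-power sensor noise, draws snapshots, and evaluates the output SINR of a given weight vector.

```python
import numpy as np

def steering(N, theta_deg):
    """Half-wavelength ULA steering vector for DOA theta (degrees)."""
    return np.exp(1j * np.pi * np.arange(N) * np.sin(np.deg2rad(theta_deg)))

def snapshots(N, K, soi_doa, int_doas, snr_db, inr_db, rng):
    """Draw K array snapshots: SOI + interferers + unit-power noise."""
    a0 = steering(N, soi_doa)
    A_int = np.column_stack([steering(N, d) for d in int_doas])
    s = np.sqrt(10 ** (snr_db / 10) / 2) * (rng.standard_normal(K) + 1j * rng.standard_normal(K))
    J = np.sqrt(10 ** (inr_db / 10) / 2) * (rng.standard_normal((len(int_doas), K))
                                            + 1j * rng.standard_normal((len(int_doas), K)))
    noise = (rng.standard_normal((N, K)) + 1j * rng.standard_normal((N, K))) / np.sqrt(2)
    return np.outer(a0, s) + A_int @ J + noise

def output_sinr_db(w, a0, int_doas, snr_db, inr_db):
    """Output SINR = sigma_s^2 |w^H a0|^2 / (w^H R_{i+n} w), in dB."""
    N = len(a0)
    A_int = np.column_stack([steering(N, d) for d in int_doas])
    R_in = 10 ** (inr_db / 10) * (A_int @ A_int.conj().T) + np.eye(N)
    num = 10 ** (snr_db / 10) * np.abs(w.conj() @ a0) ** 2
    return 10 * np.log10(num / np.real(w.conj() @ R_in @ w))

rng = np.random.default_rng(0)
X = snapshots(10, 50, soi_doa=0.0, int_doas=[30.0, 50.0], snr_db=-10.0, inr_db=30.0, rng=rng)
```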


Fig. 1. The Euclidean ball E(ε) covered by an ice-cream cone.


Fig. 2. Output SINR versus the average norm of the steering vector mismatch (for the snapshot number K = 50, the input SNR = −10 dB, and the uncertainty level ε = 3); first example.

Fig. 3. Output SINR versus the uncertainty level ε (for the snapshot number K = 50, the input SNR = −10 dB, and the average ∥e∥ = 2); first example.

Fig. 4. Output SINR versus the snapshot number (for the input SNR = −10 dB, the uncertainty level ε = 3, and the average ∥e∥ = 2); first example.

Fig. 5. Output SINR versus the input SNR (for the snapshot number K = 50, the uncertainty level ε = 3, and the average ∥e∥ = 2); first example.

3.2. Example 2: Look direction error

We consider the case of look direction error in the second example, where the actual SOI DOA is kept at 0°. In Fig. 6, the nominal DOA θ̄ changes from −5° to 5°. Figs. 7 and 8 display the performance versus the snapshot number and the input SNR, respectively, when the nominal SOI DOA is fixed at 3°, which corresponds to a 3° mismatch in the signal look direction. In Fig. 6, we can see that our method is worse than the others for small look direction errors (less than 2°); in such cases the mismatch is relatively small. However, once the look direction error becomes large, our method outperforms the other methods tested. Again, this observation shows that the proposed method performs better than the others in situations with large mismatch, as previously seen in Fig. 2.

Fig. 6. Output SINR versus the look direction error (for the actual SOI DOA θ0 = 0°, the snapshot number K = 50, the input SNR = −10 dB, and the uncertainty level ε = 3); second example.

Fig. 7. Output SINR versus the snapshot number (for the actual SOI DOA θ0 = 0°, the nominal SOI DOA θ̄ = 3°, the input SNR = −10 dB, and the uncertainty level ε = 3); second example.

Fig. 8. Output SINR versus the input SNR (for the actual SOI DOA θ0 = 0°, the nominal SOI DOA θ̄ = 3°, the snapshot number K = 50, and the uncertainty level ε = 3); second example.

3.3. Example 3: Mismatch due to coherent local scattering

In our last example, the actual SOI steering vector is formed by five coherent signal paths as [3]

$$a_0 = a(\bar{\theta}) + \sum_{i=1}^{4} e^{j\phi_i}\, a(\tilde{\theta}_i) \qquad (41)$$

where θ̄ is the DOA of the direct path, whereas θ̃_i corresponds to the i-th coherently scattered path. The parameters {ϕ_i} represent the path phases, which are independently and uniformly drawn from the interval [0, 2π] in each simulation run. The angles {θ̃_i} are independently drawn in each simulation run uniformly from the interval [−5°, 5°]. Note that {θ̃_i} and {ϕ_i} vary from run to run while remaining unchanged from snapshot to snapshot. In Figs. 9 and 10 we test the output SINR performance versus the snapshot number and the input SNR, respectively, where the nominal SOI DOA is θ̄ = 0°. As clearly illustrated in Figs. 9 and 10, our proposed beamformer enjoys the best performance among the approaches tested.

Fig. 9. Output SINR versus the snapshot number (for the nominal SOI DOA θ̄ = 0°, the input SNR = −10 dB, and the uncertainty level ε = 3); third example.

Fig. 10. Output SINR versus the input SNR (for the nominal SOI DOA θ̄ = 0°, the snapshot number K = 50, and the uncertainty level ε = 3); third example.
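Returning to (41), a short sketch of how the coherently scattered steering vector can be drawn in each simulation run (helper names are our assumptions; with the paper's θ̄ = 0° the scatter interval [−5°, 5°] is used directly):

```python
import numpy as np

def steering(N, theta_deg):
    """Half-wavelength ULA steering vector for DOA theta (degrees)."""
    return np.exp(1j * np.pi * np.arange(N) * np.sin(np.deg2rad(theta_deg)))

def scattered_steering(N, theta_deg, n_paths=4, spread_deg=5.0, rng=None):
    """Eq. (41): direct path plus n_paths coherent scatterers with random
    phases in [0, 2*pi) and DOAs drawn uniformly from [-spread_deg, spread_deg]."""
    rng = np.random.default_rng() if rng is None else rng
    a = steering(N, theta_deg)
    for _ in range(n_paths):
        phi = rng.uniform(0.0, 2.0 * np.pi)
        theta_i = rng.uniform(-spread_deg, spread_deg)
        a = a + np.exp(1j * phi) * steering(N, theta_i)
    return a
```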

4. Conclusions

The beamformer sensitivity is a classic measure of sensitivity to tolerance errors. In many practical applications, however, the random errors may not be white noise but correlated. Despite the lack of accurate knowledge of the random error covariance matrix, we find that the inverse of the sample covariance matrix can be a good substitute for the random error covariance. We then propose to compute the Capon beamformer with minimum sensitivity to correlated random errors. Furthermore, the proposed optimization problem can be solved by the Lagrange multiplier methodology. Although we cannot obtain a closed-form solution for the Lagrange multiplier, it can be calculated efficiently by using the results of the eigen-decomposition of the sample covariance matrix.

Acknowledgments

This work was supported in part by the National Natural Science Foundation of China (NSFC) under Grant 61571090, the China Postdoctoral Science Foundation under Grant 2015M580785, and the Fundamental Research Funds for the Central Universities under Grant ZYGX2014J007.

References

[1] K. Harmanci, J. Tabrikian, J.L. Krolik, Relationships between adaptive minimum variance beamforming and optimal source localization, IEEE Trans. Signal Process. 48 (January (1)) (2000) 1–13. [2] H. Cox, R. Zeskind, M. Owen, Robust adaptive beamforming, IEEE Trans. Acoust. Speech Signal Process. 35 (October (10)) (1987) 1365–1376. [3] S.A. Vorobyov, A.B. Gershman, Z.-Q. Luo, Robust adaptive beamforming using worst-case performance optimization: a solution to the signal mismatch problem, IEEE Trans. Signal Process. 51 (February (2)) (2003) 313–324.



[4] J. Li, P. Stoica, Z. Wang, On robust capon beamforming and diagonal loading, IEEE Trans. Signal Process. 51 (July (7)) (2003) 1702–1715. [5] R.G. Lorenz, S.P. Boyd, Robust minimum variance beamforming, IEEE Trans. Signal Process. 53 (May (5)) (2005) 1684–1696. [6] J. Gu, P.J. Wolfe, Robust adaptive beamforming using variable loading, in: 2006 IEEE Workshop on Sensor Array and Multichannel Processing, 2006, pp. 1–5. [7] O. Besson, F. Vincent, Performance analysis of beamformers using generalized loading of the covariance matrix in the presence of random steering vector errors, IEEE Trans. Signal Process. 53 (February (2)) (2005) 452–459. [8] Y. Selén, R. Abrahamsson, P. Stoica, Automatic robust adaptive beamforming via ridge regression, Signal Process. 88 (1) (2008) 33–49.

[9] D. Feldman, L. Griffiths, A projection approach to robust adaptive beamforming, IEEE Trans. Signal Process. 42 (April (4)) (1994) 867–876. [10] M. Rübsamen, M. Pesavento, Maximally robust capon beamformer, IEEE Trans. Signal Process. 61 (April (8)) (2013) 2030–2041. [11] S.A. Vorobyov, Principles of minimum variance robust adaptive beamforming design, Signal Process. 93 (December (12)) (2013) 3264–3277. [12] A. Khabbazibasmenj, S.A. Vorobyov, A. Hassanien, Robust adaptive beamforming based on steering vector estimation with as little as possible prior information, IEEE Trans. Signal Process. 60 (May (6)) (2012) 2974–2987. [13] J. Li, P. Stoica, Z. Wang, Doubly constrained robust capon beamformer, IEEE Trans. Signal Process. 52 (August (9)) (2004) 2407–2423.