Computing enclosures for the inverse square root and the sign function of a matrix

Andreas Frommer a,∗, Behnam Hashemi b,c,1, Thomas Sablik d

a Department of Mathematics, Bergische Universität Wuppertal, 42097 Wuppertal, Germany
b School of Mathematics, Institute for Research in Fundamental Sciences (IPM), P.O. Box 19395-5746, Tehran, Iran
c Department of Mathematics, Faculty of Basic Sciences, Shiraz University of Technology, Modarres Boulevard, Shiraz 71555-313, Iran
d Faculty of Electrical, Information and Media Engineering, Bergische Universität Wuppertal, 42097 Wuppertal, Germany

Article history: Received 1 September 2013; Accepted 28 November 2013; Available online xxxx; Submitted by C.H. Guo

MSC: 65F05; 65G20

Keywords: Matrix inverse square root; Matrix sign function; Interval arithmetic; Krawczyk's method; Verified computation

Abstract

We study computational methods for obtaining rigorous a posteriori error bounds for the inverse square root and the sign function of an n × n matrix A. Given a computed approximation for the inverse square root of A, our methods work by using interval arithmetic to obtain a narrow interval matrix which, with mathematical certainty, is known to contain the exact inverse square root. Particular emphasis is put on the computational efficiency of the method, which has complexity O(n^3) and which uses almost exclusively matrix–matrix operations, a key to the efficient use of available software for interval computations. The standard formulation of the method assumes that A can be diagonalized and that the eigenvector matrix of A is well-conditioned. A modification relying on a stable similarity transformation to block diagonal form is also developed. © 2013 Elsevier Inc. All rights reserved.

1. Introduction

Given a function f : Ω ⊆ C → C, the matrix function f(A) for A ∈ C^{n×n} is defined as soon as spec(A) ⊆ Ω and f is s(λ) − 1 times differentiable at any eigenvalue λ of A, where s(λ) denotes the index of λ, i.e. the size of the largest Jordan block belonging to λ, and λ lies in the interior of Ω whenever s(λ) > 1; see [1, Definitions 1.1 and 1.2] or [2, Definition 6.2.4].

∗ Corresponding author. E-mail addresses: [email protected] (A. Frommer), [email protected] (B. Hashemi), [email protected] (T. Sablik).
¹ This research was in part supported by a grant from IPM (No. 91650055).
http://dx.doi.org/10.1016/j.laa.2013.11.047

In the present work we consider the matrix inverse square root, f(z) = z^{−1/2}. This matrix function has applications, e.g., in the optimal symmetric orthogonalization of a set of vectors [3] and the generalized eigenvalue problem [4]. It also appears in the matrix sign function sign(A), which can be defined as A (A^2)^{−1/2} and which arises in the solution of algebraic Riccati equations [5] and also in applications from theoretical physics [6]. The complex function f(z) = z^{−1/2} has two different branches. Referring to [1] for details,² it suffices here to indicate that there exist several (primary) inverse square roots of A, depending on which branch of f is used for the different Jordan blocks. If A has no eigenvalues on (−∞, 0], the so-called principal inverse square root (see also [3]) is uniquely defined by requiring that for all λ ∈ spec(A) the branch of f is chosen such that f(λ) lies in the open right half plane. For the matrix square root and A non-singular, it has been shown [1, Thm. 1.26] that all primary square roots are isolated (and non-singular) solutions of the matrix equation X^2 − A = 0. Observing that for A non-singular any solution X of one of the equations

X^2 − A^{−1} = 0,    (1a)
X^{−2} − A = 0,    (1b)
X A X − I = 0,    (1c)
X^2 A − I = 0,    (1d)
A X^2 − I = 0    (1e)

is non-singular and satisfies (X^{−1})^2 − A = 0, we conclude that a primary inverse square root is always an isolated solution of any of the above equations.

Several different iterative schemes are available in the literature for computing the matrix inverse square root, see e.g. [7–10,3]. All these classical, floating point numerical methods will always yield a result which is not an exact inverse square root of A but rather an approximation to it. The purpose of this paper is to propose and analyze an efficient interval-arithmetic based method which, given a relatively accurate floating point approximation to a primary (usually the principal) inverse square root, obtains a narrow enclosure for the exact inverse square root. It does so by proving that one of the equations in (1) has exactly one solution X within a whole set of matrices, represented as an interval matrix, i.e. a matrix with interval entries. Our approach is based on techniques developed earlier for the square root in [11].

The paper is organized as follows: In Section 2 we introduce some notation and standard results which are at the basis of our method. In particular, we discuss the necessary background from interval analysis. In Section 3 we develop our method for computing enclosures for the inverse square root, investigate its algorithmic complexity and briefly discuss some variants. We then explain in some detail how the method can be used to also obtain enclosures for the matrix sign function in Section 4. Section 5 contains results of numerical experiments, and our conclusions are summarized in Section 6.
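As a quick plain floating point illustration (a sketch, not from the original text; it uses MATLAB's sqrtm and the "frank" test matrix that reappears in Section 5), an approximate inverse square root satisfies (1c) only up to rounding errors, which is exactly why verified enclosures are of interest:

  A = gallery('frank', 5);   % small test matrix, also used in Section 5
  X = inv(sqrtm(A));         % floating point approximation to A^{-1/2}
  norm(X*A*X - eye(5))       % residual of (1c): small, but not exactly zero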

2. Preliminaries

Throughout this paper lower case letters are used for scalars and vectors and upper case letters for matrices. For A ∈ C^{m×n}, the vector a = vec(A) ∈ C^{mn} is obtained by stacking the columns of A. We will systematically, and often implicitly, use the convention that a lower case letter denotes the vector obtained via vec from a matrix denoted by the respective upper case letter. The point-wise division A ./ B of two matrices A, B ∈ C^{m×n} is given by

A ./ B = C ∈ C^{m×n},  where C = (c_ij) with c_ij = a_ij / b_ij.

² [1] treats the matrix square root, but all considerations there immediately carry over to the inverse square root.

For d = (d_1, . . . , d_n)^T ∈ C^n, the matrix Diag(d) denotes the diagonal matrix in C^{n×n} whose i-th diagonal entry is d_i. Also, for D ∈ C^{n×m} we put Diag(D) = Diag(vec(D)) ∈ C^{nm×nm}. Kronecker products will appear routinely in this paper, and we will in particular use the following identities, which we formulate using b = vec(B).

Lemma 1. (See [2,11].) For real matrices A, B, C and D with compatible sizes we have

a) (A ⊗ B)(C ⊗ D) = (AC) ⊗ (BD).
b) vec(A B C) = (C^T ⊗ A) b.
c) Diag(A)^{−1} b = vec(B ./ A).

We use the standard notation for interval analysis suggested in [12]. Accordingly, interval quantities will always be typeset in boldface. IR and IC_disc denote the set of all compact real and circular complex intervals, respectively. For x = [x̲, x̄] ∈ IR, its midpoint is mid x := (x̲ + x̄)/2 and its radius is rad x := (x̄ − x̲)/2. Midpoint and radius also define a circular complex interval as part of the complex plane. The topological interior of a real or complex circular interval x is denoted by int x. For an interval matrix A ∈ IC_disc^{m×n} (IR^{m×n}), the interval vector a := vec(A) ∈ IC_disc^{mn} (IR^{mn}) is again obtained by stacking the columns. We assume that the reader is familiar with the basics of interval arithmetic, see [13,14]. For any arithmetic operation ◦ ∈ {+, −, ∗, /} and every two real or circular complex intervals a and b one has

a ◦ b ⊇ {a ◦ b : a ∈ a, b ∈ b},    (2)

and the midpoint and the radius of a ◦ b can be computed from the midpoints and the radii of a and b. The fundamental relation (2) yields the crucial enclosure property of real or complex circular interval arithmetic: if r(x_1, . . . , x_n) is an arithmetic expression in the variables x_1, . . . , x_n, then its interval arithmetic evaluation r(x_1, . . . , x_n) contains the range of r for x_1 ∈ x_1, . . . , x_n ∈ x_n.

In order to apply techniques of verified numerical computation, in particular for the enclosure property to hold, one needs a correct implementation of machine interval arithmetic. Two examples of software which provide such a reliable machine interval arithmetic are C-XSC [15,16] and the MATLAB toolbox INTLAB [17]. The computation of an interval in machine interval arithmetic routinely requires switching between rounding modes: e.g., the computation of a lower bound of an interval must be rounded downwards, and that of an upper bound upwards, in order to maintain the enclosure property. For reasons of computational efficiency, INTLAB as well as the latest release of C-XSC allow one to use the restriction of complex circular arithmetic to the real axis as the default arithmetic for real intervals. This results in a multiplication and a division different from those of standard real interval arithmetic. For the purposes of this work, we do not depend on the particular interval arithmetic in use. All we need is the enclosure property to hold, and this is true for standard real arithmetic, for complex circular arithmetic, as well as for the restriction of the latter to real intervals.

Lemma 1b) does not hold for interval quantities. However, the following enclosure property is still valid.

Lemma 2. (See [11].) Let A, B, C be interval matrices of compatible sizes. Then







{(C^T ⊗ A) vec(B) : A ∈ A, B ∈ B, C ∈ C} ⊆ vec((A B) C), vec(A (B C)).
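As a small illustration of the enclosure property (2) in INTLAB (a sketch; it assumes INTLAB has been initialized, and the sample values are ours):

  a = infsup(1, 2);        % the real interval [1, 2]
  b = midrad(3, 0.5);      % the interval with midpoint 3 and radius 0.5
  c = a*b + 1/intval(3);   % rigorous enclosure of {a*b + 1/3 : a in a, b in b}
  in(1.5*3.2 + 1/3, c)     % 1 (true): the sampled point value is contained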

A typical proof of the existence (and often also uniqueness) of a zero of a non-linear, continuous and usually also differentiable function f : D ⊆ C^N → C^N using interval computation works by transforming f(z) = 0 into a fixed point equation z = g(z) with g continuous, which may be constructed using a computed approximate zero of f. Then, interval arithmetic is used to prove that g maps an interval vector z into itself, so that Brouwer's fixed point theorem yields the existence of a fixed point z^∗ of g in z.

We consider here functions g which are variants of the Krawczyk operator [18], for which we need the concept of a slope.

Definition 1. A mapping S : D × D → C^{N×N} is called a slope for f if

f(y) − f(x) = S(y, x)(y − x)  for all x, y ∈ D.    (3)

If S(y, x) depends continuously on x and y, if x is close to a zero of f, and if R^{−1} ∈ C^{N×N} is close to the slopes S(x, y) for y from some (narrow) interval vector y, the function

k(y) = x − R f(x) + (I − R S(y, x))(y − x)

is likely to be a contraction on y, and its fixed point y^∗ then satisfies −f(x) = S(y^∗, x)(y^∗ − x), which by (3) implies f(y^∗) = 0. The following result turns this informal observation into a strict theorem. It goes back to Rump's Ph.D. thesis [19] and is based on [18]. A proof can also be found in [11].

Theorem 1. Assume that f : D ⊂ C^N → C^N is continuous in D. Let x̌ ∈ D and z ∈ IC^N be such that x̌ + z ⊆ D. Moreover, assume that S ⊂ C^{N×N} is a set of matrices containing all slopes S(x̌, y) for y ∈ x̌ + z =: x. Finally, let R ∈ C^{N×N}. Denote by K_f(x̌, R, z, S) the set

K_f(x̌, R, z, S) := {−R f(x̌) + (I − R S) z : S ∈ S, z ∈ z}.    (4)

Then, if

K_f(x̌, R, z, S) ⊆ int z,    (5)

the function f has a zero x^∗ in x̌ + K_f(x̌, R, z, S) ⊆ x. Moreover, if S also contains all slope matrices S(x, y) for x, y ∈ x, then this zero is unique in x.

The next section will discuss the two crucial issues allowing us to apply Theorem 1 for obtaining interval enclosures for the matrix inverse square root: the choice of the function f, and the appropriate use of interval arithmetic to efficiently compute an interval vector k as a superset of K_f(x̌, R, z, S).

3. Computing enclosures for the inverse square root

In the situation of Theorem 1, the standard approach to numerically compute a superset k of K_f(x̌, R, z, S) is to use interval arithmetic to evaluate the Krawczyk operator (cf. [18])

K_f(x̌, R, z, S) := −R f(x̌) + (I − R S) z,

where S is an interval matrix containing all slopes S(x, y) for x, y ∈ x. The matrix R is an approximation to an inverse of some matrix from S, typically obtained as the computed inverse of mid(S) using standard floating point arithmetic. The enclosure property of interval arithmetic shows that

K_f(x̌, R, z, S) ⊂ int(z)    (6)

implies (5). Thus, if (6) is fulfilled, which we can check computationally, we know by Theorem 1 that f has a zero in x̌ + K_f(x̌, R, z, S) which is unique in x.

3.1. The basic algorithm

For the inverse square root we can, in principle, choose as f any of the functions from (1). All these functions are formulated as an equality between n × n matrices, so they represent functions from C^N → C^N with N = n^2, and we can write them as such using the vec operator. A fundamental aspect of complexity arises: slopes are now N × N matrices, and obtaining the matrix R in the Krawczyk operator as a computed inverse means that we have to invert a matrix of size n^2 × n^2. In the worst case this has computational cost O(n^6), but even if we can reduce this by making use of sparsity, we have to compute the product R S with R a full matrix. It can be shown that S has at least O(n) non-zeros per column for each of the functions in (1). Since R is usually a dense matrix, this gives a computational cost of at least O(n^5) just for the matrix product R S, which is prohibitively large. Our goal is thus to develop an alternative to the standard Krawczyk operator which has computational complexity O(n^3).

From now on we focus on the function (1c), since a computational study in [20] showed that it gives the best results with respect to the quality of the enclosures obtained, while its run time with the yet to be described variant of the Krawczyk operator is comparable to that for the other functions. Recall our convention that lower case letters represent vectors obtained via the vec operator from matrices denoted by the respective upper case letter. Using Lemma 1b), we can write

F(X) = X A X − I

from (1c) as the vector function

f(x) = (X^T ⊗ X) a − i,

where vec(F(X)) = f(x). For any two matrices X and Y we have

F(X) − F(Y) = X A X − Y A Y = X A X − X A Y + X A Y − Y A Y = X A (X − Y) + (X − Y) A Y,

which, again using Lemma 1b), gives





f(x) − f(y) = (I ⊗ (X A) + (A Y)^T ⊗ I)(x − y),    (7)

showing that S(x, y) = I ⊗ (X A) + (A Y)^T ⊗ I is a slope for f. For a possible application of the Krawczyk operator we see that we can take S = I ⊗ (X A) + (A X)^T ⊗ I. Here we explicitly see that the computation of R S has complexity O(n^5), since X A and A X usually are dense matrices and thus S has 2n non-zeros per column. In order to reduce the computational complexity we now develop an alternative to the Krawczyk operator. The idea is to use a factorization of A to express the Krawczyk operator in a different basis so that R can be chosen to be diagonal or, at least, very sparse. For the sake of simplicity, let us first assume that the matrix A is diagonalizable. We then can decompose A as

A = V Λ W,  with V, W, Λ ∈ C^{n×n},  Λ = Diag(λ_1, . . . , λ_n),  V W = I.    (8)

Of course, W = V^{−1}, but it will turn out useful to have this additional notation available when we have to account for the fact that inverses are usually not available exactly when computed in floating point arithmetic. The inverse square root(s) of A can be expressed as A^{−1/2} = V Λ^{−1/2} W with

Λ^{−1/2} = Diag(λ_1^{−1/2}, . . . , λ_n^{−1/2}).

Given the decomposition (8) of A, the slope S(x, y) of f can be factorized as



S(x, y) = I ⊗ (X A) + (A Y)^T ⊗ I
        = (V^{−T} ⊗ W^{−1}) (I ⊗ (W X A W^{−1}) + (V^{−1} A Y V)^T ⊗ I) (V^T ⊗ W).    (9)

If X and Y are close to the inverse square root of A, the quantities X A and A Y appearing in the middle term are close to the square root of A, and the similarity transformations with W and V^{−1}, respectively, almost diagonalize X A and A Y. As a consequence, the matrix

R = (V^{−T} ⊗ W^{−1}) Δ^{−1} (V^T ⊗ W),  with Δ = I ⊗ Λ^{1/2} + Λ^{1/2} ⊗ I diagonal,    (10)

is a good approximate inverse for S(x, y). It is still a good approximate inverse if V, W and Λ fulfill (8) only approximately, as is the case when they are obtained via some floating point algorithm, which is what we assume from now on. The matrix R from (10) can be used in a Krawczyk type operator based on Theorem 1. For any matrices X and Y from C^{n×n} we have

I − R (I ⊗ (X A) + (A Y)^T ⊗ I) = (V^{−T} ⊗ W^{−1}) · Δ^{−1} · G · (V^T ⊗ W),    (11)

where

G = I ⊗ (Λ^{1/2} − W X A W^{−1}) + (Λ^{1/2} − V^{−1} A Y V)^T ⊗ I.

Algorithm 1 shows how we can use (11) to evaluate (I − R S(x, y)) z for some vector z ∈ C^N using a maximum of matrix–matrix operations. It relies on the relations given in Lemma 1, and in particular on Lemma 1c) to cast the multiplication with Δ^{−1} into a point-wise division with the matrix D ∈ C^{n×n} defined via

Diag(D) = Δ,  i.e.  d_ij = λ_i^{1/2} + λ_j^{1/2}.
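In MATLAB, with Lam holding the diagonal matrix Λ from (8), the matrix D can be formed by implicit expansion (a sketch under our own naming conventions):

  s = sqrt(diag(Lam));   % column vector of the values lambda_i^{1/2}
  D = s + s.';           % D(i,j) = lambda_i^{1/2} + lambda_j^{1/2}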

Algorithm 1: Efficient computation of u = (I − R S(x, y)) z.

1  Z̃ := W Z V
2  S := V^{−1} (A Y) V
3  T := W (X A) W^{−1}
4  Q := (Λ^{1/2} − T) Z̃ + Z̃ (Λ^{1/2} − S)
5  N := Q ./ D
6  U := W^{−1} N V^{−1}
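For illustration, here is a plain floating point MATLAB sketch of Algorithm 1 (no interval arithmetic, hence no rigor; the function name and argument list are ours, not from the paper). In the verified Algorithms 2 and 3 below, the inverses of V and W are replaced by the interval enclosures I_V and I_W:

  function U = krawczyk_apply(A, V, W, Lam, D, X, Y, Z)
  % Evaluate u = (I - R*S(x,y)) z via the factorization (11).
  L12 = sqrt(Lam);                     % Lambda^{1/2} (Lam is diagonal)
  Zt  = W*Z*V;                         % line 1: change of basis
  S   = V \ (A*Y) * V;                 % line 2: V^{-1} (A Y) V
  T   = W * (X*A) / W;                 % line 3: W (X A) W^{-1}
  Q   = (L12 - T)*Zt + Zt*(L12 - S);   % line 4
  N   = Q ./ D;                        % line 5: multiplication with Delta^{-1}
  U   = W \ N / V;                     % line 6: back transformation
  end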

In view of Theorem 1, we want to compute an enclosure for the set

K_f(x̌, R, z, S) = {−R f(x̌) + (I − R S) z : S ∈ S, z ∈ z}

with S containing all slopes S(x, y) for x, y ∈ x̌ + z. We do so using the factorization (11) for R and Algorithm 1, while at the same time taking care that quantities which are not available exactly are enclosed into computable quantities. This means that the matrices V^{−1} and W^{−1} are to be replaced by interval matrices I_V and I_W which contain the respective inverse. These can be obtained using Krawczyk's method for linear systems; a state-of-the-art implementation is INTLAB's function verifylss.m. The matrices I_V and I_W are used in Algorithm 2, for which we want to stress three important aspects. Firstly, in view of the factorization (11), the diagonal matrices Λ^{1/2} are allowed to consist of floating point approximations to the square roots on their diagonal, as long as D is built using the same approximate values. Secondly, when evaluating the function f on the real ('point') matrix X̌, we have to account for all possible rounding errors by using machine interval arithmetic even in this case. This is reflected in line 6 of the algorithm, where F = X̌ A X̌ − I is an interval matrix. Thirdly, the algorithm uses almost exclusively matrix–matrix operations. INTLAB and also C-XSC implement interval matrix–matrix operations in a manner such that the rounding mode has to be switched only once for the whole matrix–matrix operation.³ This contributes crucially to the efficiency of the interval algorithm, since switching the rounding mode is orders of magnitude slower than a floating point operation on modern processors.

³ And not within each arithmetic operation between the interval entries of the matrices; see our discussion of machine interval arithmetic in Section 2.
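A hedged INTLAB sketch of the ingredients required by Algorithms 2 and 3 (assuming INTLAB is initialized; verifylss computes verified enclosures of solutions of linear systems):

  [V, Lam] = eig(A);                          % floating point spectral decomposition
  W  = inv(V);                                % approximate inverse, W close to V^{-1}
  Xcheck = V * diag(1./sqrt(diag(Lam))) * W;  % approximation to A^{-1/2}
  IV = verifylss(V, eye(size(A)));            % interval matrix containing V^{-1}
  IW = verifylss(W, eye(size(A)));            % interval matrix containing W^{-1}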

Algorithm 2: Efficient computation of K such that K_f(x̌, R, z, S) ⊆ vec(K), with S = {I ⊗ (X A) + (A Y)^T ⊗ I : X, Y ∈ X̌ + Z}.

1  Z̃ := W Z V
2  S := I_V A (X̌ + Z) V
3  T := W (X̌ + Z) A I_W
4  Q := (Λ^{1/2} − T) Z̃ + Z̃ (Λ^{1/2} − S)
5  N := Q ./ D
6  F := X̌ A X̌ − I
7  G := W F V
8  H := G ./ D
9  K := I_W (−H + N) I_V

To complete our algorithm for computing enclosures for the inverse square root, it remains to describe how to get x̌ and z. The matrix X̌ is the result of a floating point method for computing the inverse matrix square root and should be as accurate as possible. For example, given the factorization (8), we can obtain X̌ as the floating point result of V Λ^{−1/2} W, but we can as well take the result of any other floating point approach to compute A^{−1/2}. The choice of the matrix Z is crucial to whether K ⊆ int Z will hold or not. We suggest to use the standard approach in verification methods based on interval arithmetic called ε-inflation, see [21,22]. It is based on the observation that 0 ∈ z and that, if k is not (yet) contained in z, we should try anew with z being an enlarged version of k. The details are given in Algorithm 3, where δ refers to the smallest positive subnormal floating point number, i.e. δ = 2^{−1074} for IEEE 754 double precision, E is the matrix of all ones, and □ denotes the interval hull operator which returns the smallest interval containing its two arguments.

Algorithm 3: Computation of an interval matrix X containing exactly one inverse square root of A; stops if kmax consecutive iterations fail.

1   Compute approximations V, W, Λ for the spectral decomposition A = V Λ W in floating point
2   Compute an approximate inverse square root X̌ of A in floating point
3   Compute interval matrices I_V, I_W containing V^{−1} and W^{−1}, resp.
4   Compute the interval matrix Z := −I_W H I_V with −R f(x̌) ∈ z, where H is obtained from lines 6–8 of Algorithm 2
5   for k = 1, . . . , kmax do
6     Z := □(0, Z · [0.9, 1.1]) + [−δ, δ] · E    // ε-inflation
7     Compute K using X̌, Z as in Algorithm 2
8     if K ⊂ int(Z) then
9       Return X := X̌ + K    // successful
10      END
11    else
12      Z := K ∩ Z
13  Return "not successful"
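A condensed INTLAB sketch of the verification loop of Algorithm 3 (computeK is a hypothetical helper implementing Algorithm 2; hull, in0 and intersect are INTLAB's interval hull, interior containment test and intersection):

  delta = 2^(-1074);                % smallest positive subnormal double
  E = ones(size(A));
  Z = -IW * H * IV;                 % line 4: initial Z
  for k = 1:kmax
      Z = hull(0*Z, Z .* infsup(0.9, 1.1)) + midrad(0, delta)*E;  % eps-inflation
      K = computeK(A, V, W, Lam, D, IV, IW, Xcheck, Z);           % Algorithm 2
      if all(all(in0(K, Z)))        % is K contained in the interior of Z?
          X = Xcheck + K;           % verified: X contains an inverse square root
          return
      end
      Z = intersect(K, Z);
  end
  error('not successful')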

3.2. Affine transformation

Line 9 of Algorithm 2 performs a two-sided interval matrix multiplication on the interval matrix −H + N. The result is an interval matrix which can be quite substantially larger than the interval hull of all point matrices W^{−1} L V^{−1}, L ∈ −H + N, due, in particular, to the so-called wrapping effect of interval arithmetic. If we can avoid the multiplications with I_W and I_V, we might succeed more often, and with narrower intervals, in computationally proving the enclosure (5) from Theorem 1. The key to do so is to consider the linearly transformed function

f̂(x̂) = (V^T ⊗ W) f((V^{−T} ⊗ W^{−1}) x̂),

or, equivalently, in matrix form,

F̂(X̂) = W F(W^{−1} X̂ V^{−1}) V.

The slope Ŝ(x̂, ŷ) for f̂ is then given as

Ŝ(x̂, ŷ) = (V^T ⊗ W) · S((V^{−T} ⊗ W^{−1}) x̂, (V^{−T} ⊗ W^{−1}) ŷ) · (V^{−T} ⊗ W^{−1})
        = (V^T ⊗ W) · S(x, y) · (V^{−T} ⊗ W^{−1}),

where x = vec(X), y = vec(Y), X = W^{−1} X̂ V^{−1}, Y = W^{−1} Ŷ V^{−1}. A comparison with (9) gives

Ŝ(x̂, ŷ) = I ⊗ (W X A W^{−1}) + (V^{−1} A Y V)^T ⊗ I,



for which we know by the discussion in Section 3.1 that the diagonal matrix Δ^{−1} from (10) is a good approximate inverse when X and Y are close to an inverse square root of A. Given x̌ = vec(X̌) as an approximate inverse square root of A and (V^T ⊗ W) x̌ as the corresponding approximate zero of f̂, the interval matrix



S = I ⊗ (W X A I_W) + (I_V A X V)^T ⊗ I,  with X = X̌ + I_W Ẑ I_V,

contains all slopes from Ŝ = {Ŝ(x̂, ŷ) : x̂, ŷ ∈ x̂ = (V^T ⊗ W) x̌ + ẑ}. In view of Theorem 1, we can therefore compute a superset for K_f̂((V^T ⊗ W) x̌, Δ^{−1}, ẑ, Ŝ) as

Δ^{−1} (−(V^T ⊗ W) f(x̌) + (Δ − S) ẑ).

Algorithms 4 and 5 give the computational details of the resulting method based on f̂, in a similar manner as Algorithms 2 and 3 did for f. Note that line 10 of Algorithm 5 performs the back transformation of an enclosure of a zero of f̂ to one for a zero of f. Note also that by Theorem 1 we know that there is a unique zero of F̂ in W X̌ V + K̂, and thus of F in the set {X̌ + W^{−1} K V^{−1} : K ∈ K̂}. The interval matrix X̌ + I_W K̂ I_V is, in general, larger than this set, so we cannot assert any more that the interval matrix returned by Algorithm 5 contains just one zero of f.

Algorithm 4: Efficient computation of K̂.

1  Z := I_W Ẑ I_V
2  S := I_V A (Z + X̌) V
3  T := W (Z + X̌) A I_W
4  Q̂ := (Λ^{1/2} − T) Ẑ + Ẑ (Λ^{1/2} − S)
5  N̂ := Q̂ ./ D
6  F := X̌ · A · X̌ − I
7  F̂ := W F V
8  Ĥ := F̂ ./ D
9  K̂ := −Ĥ + N̂

The numerical experiments will show that Algorithm 5 usually computes narrower interval matrices X and is successful for larger dimensions than Algorithm 3. Its only, minor, drawback is that we cannot be sure that the interval matrix X contains just one zero of f, and X may thus contain more than one inverse square root of A.

3.3. Modification when A cannot be stably diagonalized

Algorithm 3 will have difficulties succeeding if V (and thus W) is ill-conditioned. Then the interval enclosures I_V and I_W, if they can be obtained at all, will tend to contain wide intervals as some of their entries, and the parallelepiped {W^{−1} H V^{−1} : H ∈ H}, which is enclosed by I_W H I_V, will have widely varying extensions in the different dimensions. Due to the wrapping effect this will result in an interval matrix K = I_W H I_V with wide entries, so that it will be difficult to satisfy the crucial condition K ⊂ int Z. The same considerations also apply in principle, although to a lesser extent, to Algorithm 5.

Algorithm 5: Computation of an interval matrix X containing at least one inverse square root of A; stops if kmax consecutive iterations fail.

1   Compute approximations V, W, Λ for the spectral decomposition A = V Λ W in floating point
2   Compute an approximate inverse square root X̌ of A in floating point
3   Compute interval matrices I_V, I_W containing V^{−1} and W^{−1}, resp.
4   Compute an interval matrix Ĥ as in lines 6–8 of Algorithm 4
5   Ẑ := −Ĥ    // contains the analogue −Δ^{−1}(V^T ⊗ W) f(x̌) of −R f(x̌)
6   while k ≤ kmax do
7     Ẑ := □(0, Ẑ · [0.9, 1.1]) + [−δ, δ] · E    // ε-inflation
8     Compute N̂ using X̌, Ẑ as in lines 1–5 of Algorithm 4
9     if K̂ := −Ĥ + N̂ ⊂ int(Ẑ) then
10      Return X := X̌ + I_W K̂ I_V    // successful
11      END
12    else
13      Ẑ := K̂ ∩ Ẑ
14  Return "not successful"

Both algorithms can be modified in the case that V is ill-conditioned (and also when A is not diagonalizable at all) in the following manner: instead of the diagonalization (8) we use the block diagonalization of Bavely and Stewart [23] to control the condition number of V, at the expense of having D with (hopefully small) blocks along the diagonal. The block diagonal factorization can be written as

A = V^{−1} Γ V,    (12)

where Γ is block diagonal with each diagonal block being triangular. We now can proceed in exactly the same manner as outlined in Section 3.1 with Λ^{1/2} replaced by Γ^{1/2} everywhere, where for each diagonal block of Γ we obtain its square root, a triangular matrix, just approximately via some floating point algorithm. Occurrences of Δ^{−1} must be replaced by a forward substitution with the large, sparse triangular matrix I ⊗ Γ^{1/2} + Γ^{1/2} ⊗ I. This forward substitution cannot be cast into a matrix–matrix operation, making the modified algorithm substantially slower when implemented in INTLAB or C-XSC. Also, the diagonal blocks should all be small in size, because otherwise the dependence property of interval arithmetic will yield very large intervals as a result of the substitution process. We refer to [11] for further details and a discussion of why a standard Schur decomposition of A, i.e. an orthogonal reduction to (full) triangular form, is not a viable approach for an enclosure method based on machine interval arithmetic.

4. Sign function

The scalar sign function is defined for z ∈ C lying off the imaginary axis by



sign(z) = +1 for Re(z) > 0,  and  sign(z) = −1 for Re(z) < 0.

We therefore have

sign(z) = z / (z^2)^{1/2},    (13)

if we take that branch of the square root which maps into the right half plane. Let the non-singular matrix A have no purely imaginary eigenvalues, and let J = T^{−1} A T be the Jordan canonical form of A with the Jordan blocks arranged such that J = diag(J_1, J_2), where the eigenvalues of J_1 ∈ C^{p×p} lie in the open left half plane and those of J_2 ∈ C^{q×q} lie in the open right half plane. Then



sign(A) = T diag(−I_p, I_q) T^{−1},

which, analogously to (13), can be equivalently expressed as (cf. [1])



sign(A) = A (A^2)^{−1/2}.    (14)

Here, (A^2)^{1/2} denotes the principal square root of A^2. Note that A having no purely imaginary eigenvalues is equivalent to A^2 having no eigenvalues on R^−. The matrix sign function has important applications in eigenvalue problems [24,25], lattice quantum chromodynamics (QCD) [6], and system and control theory, particularly in Riccati matrix equations [5]. For a comprehensive account of different aspects of computing the matrix sign function we refer to [1, Ch. 5].

As opposed to the (inverse) square root, there seems to be no non-linear function for which sign(A) would appear directly as an isolated zero and on which we could use interval arithmetic to compute an enclosure for its zero. We can, however, use the relation (14) and proceed in the following three steps (illustrated in the sketch after Step 3):

Step 1. Use interval arithmetic to compute an interval matrix A_2 that contains the exact result of the matrix–matrix multiplication A^2 = A · A.

Step 2. Use a modification of Algorithm 3 or 5 to compute an interval matrix X that satisfies X ⊇ {B^{−1/2} : B ∈ A_2}.

Step 3. Use interval arithmetic to compute an interval matrix S = A X which, by (14) and the enclosure property of interval arithmetic, contains sign(A).
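In INTLAB, the three steps might look as follows (a sketch; encloseInvSqrt stands for a hypothetical driver routine implementing the modified Algorithm 3 or 5):

  A2 = intval(A) * A;        % Step 1: A2 contains the exact product A*A
  X  = encloseInvSqrt(A2);   % Step 2: X contains B^{-1/2} for every B in A2
  S  = intval(A) * X;        % Step 3: S contains sign(A) = A*(A^2)^{-1/2}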

The modification to be applied to Algorithm 3 is to simply replace all occurrences of the (point) matrix A by the interval matrix A_2, and similarly in Algorithm 5, and to obtain V, W and Λ as a computed eigendecomposition of the computed midpoint mid(A_2) of A_2. Due to the enclosure property of interval arithmetic, the thus modified Algorithm 3 computes an interval matrix K such that

vec(K) ⊇ K_{f_{A_2}}(x̌, R, z, S)  for all A_2 ∈ A_2,

where F_{A_2}(X) = X A_2 X − I and S is an interval matrix which contains all slopes S_{f_{A_2}}(x, y) for all A_2 ∈ A_2 and all x, y ∈ X = X̌ + Z. Therefore, if K ⊂ int Z, we can, in analogy to Theorem 1, apply Brouwer's fixed point theorem to each of the functions f_{A_2} individually. This gives us the following result.

Theorem 2. Suppose that K obtained by the modified Algorithm 3 satisfies K ⊆ int(Z), and put X = X̌ + K. Then

S = A X

is an interval matrix that contains sign(A). The same result holds for X = X̌ + I_W K̂ I_V from the modified Algorithm 5 if K̂ ⊂ int Ẑ.

Proof. From the preceding discussion we already know that X contains A_2^{−1/2} for any matrix A_2 ∈ A_2, particularly so for A_2 = A^2. By the enclosure property of interval arithmetic, therefore, sign(A) = A (A^2)^{−1/2} ∈ A X. □

5. Numerical examples

Tables 1 to 5 give a comparison of Algorithms 3 and 5. We tried both algorithms on five classes of matrices, "frank", "randsvd", "gcdmat", "poisson" and "tridiag", available from the MATLAB gallery function, and we chose matrices of various sizes for each of the classes. In the tables we report the number of iterations k needed to find a solution as well as the wall clock time in seconds. We also report two measures of precision for the cases where the algorithms successfully computed an enclosure X of the inverse square root.

Table 1
Numerical results for the inverse square root of the "frank" matrix.

Alg.     Size   3         4         5         6         7         8
Alg. 3   k      1         1         1         1         1         30
         time   2.2E−2    9.1E−3    8.9E−3    9.0E−3    9.4E−3    1.2E−1
         mrp    9.9E−15   7.9E−14   5.7E−13   6.2E−12   2.9E−10   –
         arp    2.6E−15   1.7E−14   9.6E−14   8.8E−13   2.0E−11   –
Alg. 5   k      1         1         1         1         1         2
         time   8.5E−3    8.7E−3    8.8E−3    8.8E−3    8.8E−3    1.2E−2
         mrp    1.1E−14   8.3E−14   5.0E−13   4.2E−12   1.3E−10   8.0E−9
         arp    2.6E−15   1.6E−14   1.1E−13   1.0E−12   3.6E−11   8.2E−10
         erp    1.7E−16   7.9E−16   2.5E−15   2.0E−15   4.4E−14   1.8E−11

Table 2
Numerical results for the inverse square root of the "randsvd" matrix with condition number κ = 10^2 and one small singular value.

Alg.     Size   25        100       150       200       225       250
Alg. 3   k      1         30        30        30        30        30
         time   2.0E−1    5.6       1.5E+1    3.0E+1    4.0E+1    5.2E+1
         mrp    6.7E−10   –         –         –         –         –
         arp    3.1E−11   –         –         –         –         –
Alg. 5   k      1         1         1         1         1         30
         time   1.2E−1    5.3E−1    1.3       2.8       3.8       4.5E+1
         mrp    1.2E−9    1.9E−7    7.1E−6    6.9E−6    2.5E−6    –
         arp    3.8E−11   2.3E−9    8.8E−9    1.9E−8    1.5E−8    –

Table 3
Numerical results for the inverse square root of the "gcdmat" matrix.

Alg.     Size   200       400       600       800       1000      1500
Alg. 3   k      1         1         7         30        30        30
         time   1.1E−1    7.5E−1    7.3       5.4E+1    1.0E+2    3.1E+2
         mrp    2.3E−5    2.1E−3    1.9E−2    –         –         –
         arp    2.3E−9    1.6E−8    7.1E−8    –         –         –
Alg. 5   k      1         1         1         1         1         1
         time   1.3E−1    6.4E−1    1.9       4.6       8.4       2.7E+1
         mrp    1.4E−5    1.1E−3    6.2E−3    8.9E−3    4.7E−2    1.1E−1
         arp    1.7E−9    1.0E−8    2.7E−8    5.2E−8    8.2E−8    1.9E−7

The first measure is the largest relative width of all interval entries in X, defined as

mrp = max_{i,j=1,...,n} min(1, rad(X_ij) / |mid(X_ij)|);

the second is the average relative precision

arp = (1/n^2) Σ_{i,j=1,...,n} min(1, rad(X_ij) / |mid(X_ij)|).

Roughly speaking, the negative decimal logarithm of min(1, rad(X_ij)/|mid(X_ij)|) gives the number of significant digits in which the lower and upper bound of the interval X_ij coincide. It thus represents the number of correct significant digits of the respective entry of the exact inverse square root known from the computed enclosure. A dash indicates that the algorithm did not succeed in computing an enclosure after kmax = 30 iterations.
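Given an INTLAB interval matrix X, both precision measures are one-liners (a sketch):

  relw = min(1, rad(X) ./ abs(mid(X)));  % entrywise relative width, capped at 1
  mrp  = max(relw(:));                   % largest relative width
  arp  = sum(relw(:)) / numel(relw);     % average relative precision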

Table 4
Numerical results for the inverse square root of the "poisson" matrix.

Alg.     Size   225       400       625       900       1225      1600
Alg. 3   k      1         1         30        30        30        30
         time   1.7E−1    7.6E−1    2.7E+1    7.5E+1    1.8E+2    3.7E+2
         mrp    5.9E−8    6.2E−7    –         –         –         –
         arp    1.7E−10   7.3E−10   –         –         –         –
Alg. 5   k      1         1         1         1         1         1
         time   1.6E−1    7.3E−1    2.6       6.7       1.6E+1    3.3E+1
         mrp    5.8E−8    5.9E−7    3.6E−6    1.7E−5    6.1E−5    1.9E−4
         arp    1.7E−10   6.8E−10   2.0E−9    5.2E−9    1.1E−8    2.1E−8

Table 5
Numerical results for the inverse square root of the "tridiag" matrix.

Alg.     Size   200       400       600       800       1000      1100
Alg. 3   k      1         30        30        30        30        30
         time   1.1E−1    7.6       2.4E+1    5.3E+1    1.0E+2    1.3E+2
         mrp    7.0E−8    –         –         –         –         –
         arp    2.5E−11   –         –         –         –         –
Alg. 5   k      1         1         1         1         3         5
         time   1.1E−1    7.0E−1    2.1       4.7       1.4E+1    2.6E+1
         mrp    6.5E−8    1.2E−6    6.3E−6    2.1E−5    5.0E−5    7.4E−5
         arp    2.3E−11   1.1E−10   2.6E−10   4.9E−10   7.5E−10   9.2E−10

Upon a suggestion by one of the referees, the last row of Table 1 hints at how much the interval enclosure overestimates the exact inverse square root. To this purpose we computed the inverse square root in quadruple precision using the Advanpix Multiple Precision Toolbox for MATLAB. Considering this result as the "exact" inverse square root X^e, we report the maximum relative difference with the midpoint of the computed interval enclosure X, obtained by the formula

erp = max_{i,j=1,...,n} |mid(X_ij) − X^e_ij| / |X^e_ij|.

This quantity should be compared with mrp to get an idea of how much the interval enclosures overestimate the "true" deviation of mid(X_ij) from the exact inverse square root. The fact that these relative differences are about two orders of magnitude smaller than mrp indicates that the approximation of mid(X) to the exact inverse square root is probably more precise than what is guaranteed by the width of the computed enclosing intervals.

Summarizing the results from Tables 1 to 5, we see that Algorithm 3 is not as effective as Algorithm 5: it is slower, less precise, and cannot handle bigger matrices. For both algorithms the timings nicely scale with n^3, the complexity we aimed for.

An alternative approach for computing enclosures for A^{−1/2} would be to first compute an interval matrix B which contains A^{1/2} using the techniques from [11], and then compute an interval matrix containing all inverses B^{−1} for B ∈ B via, e.g., the function verifylss in INTLAB. The new methods presented here usually obtain better (i.e. narrower) enclosures, the reason being that computing enclosures for all inverses from an interval matrix usually results in relatively wide interval entries. For example, the values for arp in Table 5 (Algorithm 5) become 3 orders of magnitude larger for all matrix sizes when the alternative approach is used.

Tables 6–10 show the numerical results obtained using our modifications of Algorithms 3 and 5 for enclosing the sign function of a matrix. Our approaches for the inverse square root and the sign function are based on the spectral decomposition of A and of mid A_2 ≈ A^2, respectively. This means that we work with approximately the same matrices V and W in both cases, so that the wrapping effects due to multiplication with V, W, I_V, I_W should be similar. On the other hand, when we compute K or K̂ for the sign function, we have to use A_2 instead of A.

Table 6
Numerical results for the sign function of the "frank" matrix.

Alg.              Size   3         4         5         6         7         8
Alg. 3 modified   k      1         1         1         1         30        30
                  time   2.8E−2    2.9E−2    2.9E−2    3.0E−2    2.0E−1    2.0E−1
                  mrp    3.4E−13   2.3E−11   1.5E−9    1.1E−7    –         –
                  arp    1.9E−13   8.1E−12   3.0E−10   1.2E−8    –         –
Alg. 5 modified   k      1         1         1         1         9         30
                  time   2.8E−2    2.8E−2    2.8E−2    2.3E−2    6.2E−2    1.8E−1
                  mrp    3.7E−13   3.4E−11   5.0E−9    1.0E−6    7.0E−4    –
                  arp    2.2E−13   1.2E−11   9.1E−10   7.1E−8    1.7E−5    –

Table 7
Numerical results for the sign function of the same "randsvd" matrices as in Table 2.

Alg.              Size   25        100       150       200       225       250
Alg. 3 modified   k      1         30        30        30        30        30
                  time   8.3E−2    6.0       2.6E+1    3.0E+1    4.2E+1    5.3E+1
                  mrp    2.9E−6    –         –         –         –         –
                  arp    5.5E−8    –         –         –         –         –
Alg. 5 modified   k      1         1         30        3         13        30
                  time   1.2E−1    5.7E−1    1.5E+1    6.7       3.0E+1    8.7E+1
                  mrp    9.4E−8    3.7E−5    –         3.9E−3    2.0E−3    –
                  arp    1.2E−9    1.6E−8    –         1.4E−8    1.3E−7    –

Table 8
Numerical results for the sign function of the "gcdmat" matrix.

Alg.              Size   100       150       200       250       300       350
Alg. 3 modified   k      4         30        30        30        30        30
                  time   2.3E−1    3.2       6.7       1.2E+1    2.2E+1    3.4E+1
                  mrp    5.7E−8    –         –         –         –         –
                  arp    2.4E−9    –         –         –         –         –
Alg. 5 modified   k      1         1         2         10        24        86
                  time   1.1E−1    3.2E−1    8.4E−1    3.9       1.5E+1    7.8E+1
                  mrp    4.1E−8    1.4E−7    3.9E−7    8.3E−7    1.4E−6    2.4E−6
                  arp    1.4E−9    3.6E−9    7.1E−9    1.2E−8    1.8E−8    2.5E−8

Table 9
Numerical results for the sign function of the "poisson" matrix.

Alg.              Size   100       225       400       625       900       1225
Alg. 3 modified   k      1         1         30        30        30        30
                  time   1.6E−1    9.8E−1    6.7       1.2E+1    2.2E+1    3.4E+1
                  mrp    8.2E−12   7.0E−11   –         –         –         –
                  arp    6.7E−12   5.8E−11   –         –         –         –
Alg. 5 modified   k      1         1         1         1         1         1
                  time   1.2E−1    9.9E−1    4.8       1.9E+1    6.3E+1    1.5E+2
                  mrp    7.9E−12   6.4E−11   3.1E−10   1.1E−9    3.1E−9    7.4E−9
                  arp    6.5E−12   5.4E−11   2.7E−10   9.5E−10   2.8E−9    6.7E−9

Even when we neglect that A_2 is an interval matrix, we see that its condition is approximately squared as compared to A, so that the wrapping effect will be more pronounced and it will thus be more difficult to satisfy the crucial conditions in line 8 of Algorithm 3 and line 9 of Algorithm 5. The numerical experiments for the matrix sign function reported in Tables 6–10 confirm this observation: the dimensions for which these crucial conditions cannot be fulfilled any more are substantially smaller than for the inverse square root case. Moreover, the enclosures we get are wider for comparable matrix sizes.

Table 10
Numerical results for the sign function of the "tridiag" matrix.

Alg.              Size   60        80        90        100       120       140
Alg. 3 modified   k      4         30        30        30        30        30
                  time   8.4E−2    7.5E−1    9.3E−1    1.1       1.7       2.5
                  mrp    1.1E−8    –         –         –         –         –
                  arp    9.4E−9    –         –         –         –         –
Alg. 5 modified   k      1         1         1         2         12        30
                  time   4.9E−2    7.1E−2    9.1E−2    1.5E−1    6.6E−1    2.2
                  mrp    8.2E−9    3.3E−8    5.9E−8    1.1E−7    3.8E−7    –
                  arp    6.9E−9    2.8E−8    5.0E−8    9.4E−8    3.0E−7    –

An alternative approach to compute an interval enclosure for the sign function of A is to use

sign(A) = A^{−1} (A^2)^{1/2},

and thus compute an enclosure X for all the square roots of all matrices in A_2 (using a modification of the algorithms from [11], relying on the function F(X) = X^2 − A) and a verified enclosure I_A for the inverse of the point matrix A, and finally put S = I_A · X. Since the slopes for this particular function F(X) do not contain the matrix A (and therefore also do not contain the matrix A_2 ∈ A_2 for the sign function), we expect a reduced wrapping effect, so that this approach might be applicable to larger and more ill-conditioned matrices. This is confirmed by our numerical tests, where we observed that for the "frank" matrix with n = 9 the alternative approach obtains arp = 2.9E−7 and for the "randsvd" matrix with n = 250 it obtains arp = 3.1E−7. For the "gcdmat" matrix with n = 1000 it yields arp = 2.0E−10, for the "poisson" matrix with n = 900 it succeeds with arp = 6.7E−10, and, finally, for the "tridiag" matrix with n = 400 it obtains enclosures with arp = 2.2E−7. A different approach relying directly on (14) would have to compute an enclosure for all inverses within a given interval matrix. For the same reasons already explained for the case of the inverse square root, this works less well.

6. Conclusion and outlook

We have proposed two methods based on interval arithmetic to compute an enclosing interval matrix for the exact inverse square root of a matrix A. Starting from an approximate inverse square root, these methods use a modification of the Krawczyk operator which reduces the computational complexity from O(n^5) or higher of the standard approach to O(n^3). Moreover, the methods use almost exclusively matrix–matrix operations and are thus particularly efficient with current implementations of machine interval arithmetic in INTLAB and C-XSC. Heuristic arguments and numerical evidence show that the method based on a linear transformation of the basic equation X A X − I = 0 is superior to the method which does not use this transformation, in the sense that it is successful more often and that its computed enclosures tend to be narrower.

In the case that the matrix A cannot be diagonalized in a stable manner, we have the option of using a different decomposition, and our suggested methods can, in principle, still be applied. Future research will address the question whether in this case we can modify our approach in a manner that it again uses only matrix–matrix operations, and to which extent the dependence problem when solving linear recurrences can be avoided. As was suggested by one of the referees, the techniques presented here can also be applied to the computation of the geometric mean A#B = A (B^{−1} A)^{−1/2} of two positive definite matrices A and B, A#B solving the equation X A^{−1} X = B, see [26].

References

[1] N.J. Higham, Functions of Matrices: Theory and Computation, SIAM, Philadelphia, 2008.
[2] R.A. Horn, C.R. Johnson, Topics in Matrix Analysis, Cambridge University Press, Cambridge, 1994.
[3] N. Sherif, On the computation of a matrix inverse square root, Computing 46 (1991) 295–305.
[4] P. Laasonen, On the iterative solution of the matrix equation A X^2 − I = 0, Math. Tables Other Aids Comput. 12 (1958) 109–116.

[5] R. Byers, Solving the algebraic Riccati equation with the matrix sign function, Linear Algebra Appl. 85 (1987) 267–279.
[6] A. Frommer, T. Lippert, B. Medeke, K. Schilling (Eds.), Numerical Challenges in Lattice Quantum Chromodynamics, Lect. Notes Comput. Sci. Eng., vol. 15, Springer-Verlag, Berlin, 2000.
[7] G. Alefeld, Zur numerischen Auflösung der Matrizengleichung A X^2 − I = 0, Beitr. Numer. Math. 9 (1981) 13–19.
[8] A. Boriçi, A Lanczos approach to the inverse square root of a large and sparse matrix, J. Comput. Phys. 162 (1) (2000) 123–131.
[9] C.-H. Guo, N.J. Higham, A Schur–Newton method for the matrix pth root and its inverse, SIAM J. Matrix Anal. Appl. 28 (3) (2006) 788–804.
[10] S. Lakić, An iterative method for the computation of a matrix inverse square root, ZAMM Z. Angew. Math. Mech. 75 (11) (1995) 867–873.
[11] A. Frommer, B. Hashemi, Verified computation of square roots of a matrix, SIAM J. Matrix Anal. Appl. 31 (2009) 1279–1302.
[12] R.B. Kearfott, M. Nakao, A. Neumaier, S. Rump, S. Shary, P. van Hentenryck, Standardized notation in interval analysis, Reliab. Comput. 15 (1) (2010) 7–13.
[13] G. Alefeld, J. Herzberger, Introduction to Interval Computations, Comput. Sci. Appl. Math., Academic Press, New York, 1983.
[14] R.E. Moore, R.B. Kearfott, M.J. Cloud, Introduction to Interval Analysis, SIAM, Philadelphia, 2009.
[15] R. Klatte, U.W. Kulisch, A. Wiethoff, C. Lawo, M. Rauch, C-XSC: A C++ Class Library for Extended Scientific Computing, Springer-Verlag, Berlin, 1993.
[16] W. Hofschuster, W. Krämer, C-XSC 2.0: A C++ library for extended scientific computing, in: Numerical Software with Result Verification, in: Lecture Notes in Comput. Sci., vol. 2991, Springer, Berlin, 2004, pp. 15–35.
[17] S.M. Rump, INTLAB – INTerval LABoratory, in: T. Csendes (Ed.), Developments in Reliable Computing, Kluwer Academic Publishers, Dordrecht, 1999, pp. 77–104.
[18] R. Krawczyk, Newton-Algorithmen zur Bestimmung von Nullstellen mit Fehlerschranken, Computing 4 (1969) 187–201.
[19] S.M. Rump, Kleine Fehlerschranken bei Matrixproblemen, Ph.D. thesis, Fakultät für Mathematik, Universität Karlsruhe, 1980.
[20] T. Sablik, Verifizierte Berechnung der inversen Matrixwurzelfunktion, Master thesis, Department of Mathematics, University of Wuppertal, 2011.
[21] S.M. Rump, Solving algebraic problems with high accuracy, in: W. Miranker, E. Kaucher (Eds.), A New Approach to Scientific Computation, in: Comput. Sci. Appl. Math., vol. 7, Academic Press, New York, 1983, pp. 51–120.
[22] S.M. Rump, Verification methods for dense and sparse systems of equations, in: J. Herzberger (Ed.), Topics in Validated Computations, in: Stud. Comput. Math., vol. 5, Elsevier, Amsterdam, 1994, pp. 63–135.
[23] C.A. Bavely, G.W. Stewart, An algorithm for computing reducing subspaces by block diagonalization, SIAM J. Numer. Anal. 16 (1979) 359–367.
[24] Z. Bai, J. Demmel, Using the matrix sign function to compute invariant subspaces, SIAM J. Matrix Anal. Appl. 19 (1) (1998) 205–225.
[25] E.U. Stickel, Separating eigenvalues using the matrix sign function, Linear Algebra Appl. 148 (1991) 75–88.
[26] B. Iannazzo, The geometric mean of two matrices from a computational viewpoint, Tech. rep., available at http://arxiv.org/abs/1201.0101.