A fast numerical algorithm for the determinant of a pentadiagonal matrix

Applied Mathematics and Computation 196 (2008) 835–841

Tomohiro Sogabe
Department of Computational Science and Engineering, Nagoya University, Furo-cho, Chikusa-ku, Nagoya 464-8603, Japan

Abstract

Recently, a two-term recurrence for the determinant of a general matrix has been found [T. Sogabe, On a two-term recurrence for the determinant of a general matrix, Appl. Math. Comput. 187 (2007) 785–788], and it leads to a natural generalization of the DETGTRI algorithm [M. El-Mikkawy, A fast algorithm for evaluating nth order tridiagonal determinants, J. Comput. Appl. Math. 166 (2004) 581–584] for computing the determinant of a tridiagonal matrix. In this paper, we derive a fast numerical algorithm for computing the determinant of a pentadiagonal matrix from the generalization of the DETGTRI algorithm.
© 2007 Elsevier Inc. All rights reserved.

Keywords: Pentadiagonal matrices; Quindiagonal matrices; Determinants

1. Introduction

We consider the determinant of a pentadiagonal matrix of the form

A = \begin{pmatrix}
a_{11} & a_{12} & a_{13} &  &  &  &  \\
a_{21} & a_{22} & a_{23} & a_{24} &  &  &  \\
a_{31} & a_{32} & a_{33} & a_{34} & a_{35} &  &  \\
 & \ddots & \ddots & \ddots & \ddots & \ddots &  \\
 &  & \ddots & \ddots & \ddots & \ddots & a_{n-2,n} \\
 &  &  & a_{n-1,n-3} & a_{n-1,n-2} & a_{n-1,n-1} & a_{n-1,n} \\
 &  &  &  & a_{n,n-2} & a_{n,n-1} & a_{n,n}
\end{pmatrix},    (1)

where we assume that a_{ij} = 0 if |i - j| > 2.

Partially supported by MEXT KAKENHI (Grant No. 18760063). E-mail address: [email protected]




From a practical point of view, pentadiagonal matrices frequently arise from boundary value problems involving fourth-order derivatives, and fast computational formulas for their determinants are required to test efficiently for the existence of unique solutions of the PDEs; see, e.g., [4]. On the other hand, from a theoretical point of view, a recursive relation of one of the fast computational formulas for pentadiagonal determinants is used in the inverse problem of constructing symmetric pentadiagonal Toeplitz matrices from three largest eigenvalues [1]. Hence, it is important to devise a fast numerical algorithm for computing the determinant of a pentadiagonal matrix. It is widely known that Sweet's algorithm [6] and Evans's algorithm [3] are fast numerical algorithms for evaluating pentadiagonal determinants, and they are used in practice; see, e.g., [1,4]. In this paper, we show that a more efficient algorithm can be derived from our framework [5], which can be regarded as a natural generalization of the DETGTRI algorithm [2].

The rest of this paper is organized as follows: in the next section, we review Sweet's and Evans's algorithms. In Section 3, we describe a more efficient algorithm based on our framework [5]. In Section 4, we give the results of some numerical examples to show the performance of our algorithm. Finally, we make some concluding remarks in Section 5.
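For the illustrative code sketches added throughout this write-up (they are not part of the original paper), it is convenient to have a small helper that assembles a dense pentadiagonal Toeplitz matrix of the form (1) from five constant diagonals; the function name and interface are assumptions made here for illustration.

import numpy as np

def pentadiagonal_toeplitz(n, sub2, sub1, diag, sup1, sup2):
    # Dense n-by-n pentadiagonal Toeplitz matrix of the form (1):
    # constant second/first subdiagonals, main diagonal, first/second superdiagonals.
    A = (np.diag(np.full(n, float(diag)))
         + np.diag(np.full(n - 1, float(sup1)), 1)
         + np.diag(np.full(n - 2, float(sup2)), 2)
         + np.diag(np.full(n - 1, float(sub1)), -1)
         + np.diag(np.full(n - 2, float(sub2)), -2))
    return A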

2. Sweet’s and Evans’s algorithms In this section, we describe Evans’s and Sweet’s algorithms. First, we give Sweet’s algorithm below. Algorithm 2.1. Sweet’s algorithm [6] for i = 1, 2, . . ., n  2, ai = ai,i, bi = ai,i+1ai+1,i, bi = ai,i+2ai+2,i, ci = ai,i+1ai+1,i+2ai+2,i, cti ¼ aiþ1;i aiþ2;iþ1 ai;iþ2 , end an1 = an1,n1, an = an,n, bn1 = an1,nan,n1, d1 = a 1, d2 = a22a11  b1, d1 = a 2, d 3 ¼ a3 d 2  b2 d 1  b1 d1 þ c1 þ ct1 , d2 = a3d1  b1, 1 ¼ d 1  ct1 =b2 , e1 = d1  c1/b2, d 4 ¼ a4 d 3  b3 d 2  b2 d2 þ c2 1 þ ct2 e1 , for i = 5, 6, . . ., n, di2 = ai1di3  bi3di4, i3 ¼ d i3  cti3 ei4 =bi2 , ei3 = di3  ci3i4/bi2, d i ¼ ai d i1  bi1 d i2  bi2 di2 þ ci2 i3 þ cti2 ei3 , end det(A) = dn. From the above algorithm, we can see that Sweet’s algorithm requires 6(n  2) + 25 + 18(n  4) = 24n  59 operations for computing the determinant of (1). Evans derived a more compact and simpler algorithm from leading principal minors of the matrix. The algorithm is given below.


Algorithm 2.2. Evans’s algorithm [3] P1 = a11, P2 = a22P1  a21a12, R2 = a12, S2 = a21, P3 = a33P2  a32a23P1  a31a13a22 + a31a23R2 + a32a13S2, R3 = a23P1  a13S2, S3 = a32P1  a31R2, P4 = a44P3  a43a34P2  a42a24(a33P1  a31a13) + a42a34R3 + a43a24S3, for i = 5, 6, . . ., n, Ri1 = ai2,i1Pi3  ai3,i1Si2, Si1 = ai1,i2Pi3  ai1,i3Ri2, Pi = ai,iPi1  ai,i1ai1,iPi2  ai,i2ai2,i (ai1,i1Pi3  ai1,i3ai3,i1Pi4) + ai,i2ai1,iRi1 + ai,i1ai2,iSi1, end det(A) = Pn. From the above algorithm, we can see that Evans’s algorithm requires 38 + 22(n  4) = 22n  50 operations for computing the determinant of (1). This leads to the result that Evans’s algorithm is about 9% faster than Sweet’s algorithm when the matrix size is large enough. 3. A more efficient algorithm for the determinant evaluation In this section, we present a more efficient algorithm. Before describing the detail, let us show the brief structure of the algorithm. The algorithm consists of the following two steps: Step 1. Transform a pentadiagonal matrix into a sparse Hessenberg matrix. Step 2. Apply our framework [5] to the sparse Hessenberg matrix. In the next section, we give the detail of Step 1 and derive a fast computational formula for the determinant of the sparse Hessenberg matrix in Section 3.2. 3.1. Transformation into a sparse Hessenberg matrix Here we consider transforming the matrix (1) into a sparse Hessenberg matrix of the form: 1 0 h11 h12 h13 C B h21 h22 h23 h24 C B C B h32 h33 h34 h35 C B C B .. .. .. .. C: ð2Þ H ¼B . . . . C B C B .. .. .. B . hn2;n C . . C B @ hn1;n2 hn1;n1 hn1;n A hn;n1 hn;n We show that the above sparse Hessenberg matrix can be obtained from the original matrix (1) and a suitable choice xi+1,i of the following matrix: 0 1 I i1 B C 1 C; Xiþ1 ¼ B @ A xiþ1;i 1 I ni1


3. A more efficient algorithm for the determinant evaluation

In this section, we present a more efficient algorithm. Before describing the details, let us show the brief structure of the algorithm. The algorithm consists of the following two steps:

Step 1. Transform a pentadiagonal matrix into a sparse Hessenberg matrix.
Step 2. Apply our framework [5] to the sparse Hessenberg matrix.

In the next section, we give the details of Step 1, and we derive a fast computational formula for the determinant of the sparse Hessenberg matrix in Section 3.2.

3.1. Transformation into a sparse Hessenberg matrix

Here we consider transforming the matrix (1) into a sparse Hessenberg matrix of the form

H = \begin{pmatrix}
h_{11} & h_{12} & h_{13} &  &  &  &  \\
h_{21} & h_{22} & h_{23} & h_{24} &  &  &  \\
 & h_{32} & h_{33} & h_{34} & h_{35} &  &  \\
 &  & \ddots & \ddots & \ddots & \ddots &  \\
 &  &  & \ddots & \ddots & \ddots & h_{n-2,n} \\
 &  &  &  & h_{n-1,n-2} & h_{n-1,n-1} & h_{n-1,n} \\
 &  &  &  &  & h_{n,n-1} & h_{n,n}
\end{pmatrix}.    (2)

We show that the above sparse Hessenberg matrix can be obtained from the original matrix (1) and a suitable choice of x_{i+1,i} in the following matrix:

X_{i+1} = \begin{pmatrix}
I_{i-1} &  &  &  \\
 & 1 &  &  \\
 & x_{i+1,i} & 1 &  \\
 &  &  & I_{n-i-1}
\end{pmatrix},

where I_{i-1} and I_{n-i-1} are identity matrices of order i - 1 and n - i - 1. For simplicity, we consider the case n = 7 and give the value of x_{i+1,i} to see the transformation. From (1) and setting x_{32} = -a_{31}/a_{21}, we have

X_3 A = \begin{pmatrix}
1 &  &  &  &  &  &  \\
 & 1 &  &  &  &  &  \\
 & -a_{31}/a_{21} & 1 &  &  &  &  \\
 &  &  & 1 &  &  &  \\
 &  &  &  & 1 &  &  \\
 &  &  &  &  & 1 &  \\
 &  &  &  &  &  & 1
\end{pmatrix}
\begin{pmatrix}
a_{11} & a_{12} & a_{13} &  &  &  &  \\
a_{21} & a_{22} & a_{23} & a_{24} &  &  &  \\
a_{31} & a_{32} & a_{33} & a_{34} & a_{35} &  &  \\
 & a_{42} & a_{43} & a_{44} & a_{45} & a_{46} &  \\
 &  & a_{53} & a_{54} & a_{55} & a_{56} & a_{57} \\
 &  &  & a_{64} & a_{65} & a_{66} & a_{67} \\
 &  &  &  & a_{75} & a_{76} & a_{77}
\end{pmatrix}
= \begin{pmatrix}
a_{11} & a_{12} & a_{13} &  &  &  &  \\
a_{21} & a_{22} & a_{23} & a_{24} &  &  &  \\
 & \tilde a_{32} & \tilde a_{33} & \tilde a_{34} & a_{35} &  &  \\
 & a_{42} & a_{43} & a_{44} & a_{45} & a_{46} &  \\
 &  & a_{53} & a_{54} & a_{55} & a_{56} & a_{57} \\
 &  &  & a_{64} & a_{65} & a_{66} & a_{67} \\
 &  &  &  & a_{75} & a_{76} & a_{77}
\end{pmatrix}.

As we can see from the above, multiplication on the left by the matrix X_3 only changes values in the third row of the matrix A. Similarly, choosing x_{43} = -a_{42}/\tilde a_{32}, ..., x_{76} = -a_{75}/\tilde a_{65}, we finally have the 7 by 7 sparse Hessenberg matrix of the form

X_7 X_6 \cdots X_3 A = \begin{pmatrix}
a_{11} & a_{12} & a_{13} &  &  &  &  \\
a_{21} & a_{22} & a_{23} & a_{24} &  &  &  \\
 & \tilde a_{32} & \tilde a_{33} & \tilde a_{34} & a_{35} &  &  \\
 &  & \tilde a_{43} & \tilde a_{44} & \tilde a_{45} & a_{46} &  \\
 &  &  & \tilde a_{54} & \tilde a_{55} & \tilde a_{56} & a_{57} \\
 &  &  &  & \tilde a_{65} & \tilde a_{66} & \tilde a_{67} \\
 &  &  &  &  & \tilde a_{76} & \tilde a_{77}
\end{pmatrix}.

Generally, the value x_{i+1,i} is defined as

x_{i+1,i} = -a_{i+1,i-1} / \tilde a_{i,i-1},  where \tilde a_{21} = a_{21}.

Denoting W = X_n X_{n-1} \cdots X_3, it follows that WA = H. Since we can readily see from the definition of X_i that det(W) = 1, we obtain the following relation:

det(A) = det(W) det(A) = det(WA) = det(H).

Hence, the determinant of A can be implicitly evaluated by computing the determinant of H. The transformation requires 7(n - 3) + 5 = 7n - 16 operations.
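The following Python sketch of Step 1 is illustrative (the function name is an assumption of this write-up, not the paper's); it performs the row operations encoded in X_n \cdots X_3 on a copy of A and assumes the pivots \tilde a_{i,i-1} are nonzero, as required by the choice of x_{i+1,i} above.

import numpy as np

def pentadiagonal_to_hessenberg(A):
    # Step 1: zero the second subdiagonal of (a copy of) A by the row operations
    # encoded in X_n ... X_3, so that det(H) = det(A).
    H = np.array(A, dtype=float)
    n = H.shape[0]
    for i in range(1, n - 1):                 # eliminate H[i+1, i-1] (0-based indices)
        x = -H[i+1, i-1] / H[i, i-1]          # x_{i+1,i} = -a_{i+1,i-1} / a~_{i,i-1}
        hi = min(i + 3, n)                    # row i is nonzero only in columns i-1 .. i+2
        H[i+1, i-1:hi] += x * H[i, i-1:hi]
    return H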


In the next section, we give a fast computational formula for the determinant of H. Here, we note that if all elements in the matrix A are integers, i.e. a_{i,j} \in \mathbb{Z} for all i, j, then we may use the matrix X_{i+1} as

X_{i+1} = \begin{pmatrix}
I_{i-1} &  &  &  \\
 & 1 &  &  \\
 & -a_{i+1,i-1} & \tilde a_{i,i-1} &  \\
 &  &  & I_{n-i-1}
\end{pmatrix},

since in this case there are no rounding errors in the sparse Hessenberg transformation, although the use of the above matrices leads to slightly additional costs. We may also use the above matrices in the case in which all elements are rational numbers, i.e. a_{i,j} \in \mathbb{Q} for all i, j, since suitable scaling can transform the matrix into an integer matrix.

3.2. A fast computational formula for the determinant of the sparse Hessenberg matrix

In this section, we describe an algorithm based on our framework [5], which concerns a two-term recurrence relation for the determinant of a general matrix. First, we give a brief review of the framework, and then derive a fast computational formula for the determinant of the sparse Hessenberg matrix (2).

Let A_n be an n by n matrix and let A_{n-1} be the (n - 1) by (n - 1) leading principal submatrix of A_n. Then, the matrix A_n can be written as

A_n = \begin{pmatrix} A_{n-1} & b_{n-1} \\ c_{n-1}^T & d_n \end{pmatrix},    b_{n-1}, c_{n-1} \in \mathbb{C}^{n-1},  d_n \in \mathbb{C}.

It is shown in [5] that the determinant of A_n is then given by the following recurrences:

f_0 = 1,  f_i = k_i f_{i-1},  i = 1, 2, ..., n,    (3)

where k_1 is the (1,1) entry of A_n, i.e. k_1 = d_1, k_i = d_i - c_{i-1}^T A_{i-1}^{-1} b_{i-1} for i = 2, 3, ..., n, and det(A_i) = f_i for i = 1, 2, ..., n.

Now, we consider applying the above framework to the sparse Hessenberg matrix (2). It follows from (2) and the recurrence formula for k_n that we have

k_n = h_{n,n} - h_{n,n-1} e_{n-1}^T H_{n-1}^{-1} (h_{n-2,n} e_{n-2} + h_{n-1,n} e_{n-1})
    = h_{n,n} - h_{n,n-1} (h_{n-2,n} e_{n-1}^T H_{n-1}^{-1} e_{n-2} + h_{n-1,n} e_{n-1}^T H_{n-1}^{-1} e_{n-1}),    (4)

where H_{n-1} is the (n - 1)th order leading principal submatrix of H and e_i is the ith unit vector, i.e. its ith entry is one and all other entries are zero. Using the following relations:

e_{n-1}^T H_{n-1}^{-1} e_{n-2} = - h_{n-1,n-2} \frac{f_{n-3}}{f_{n-1}}

and

e_{n-1}^T H_{n-1}^{-1} e_{n-1} = \frac{f_{n-2}}{f_{n-1}},

Eq. (4) can be rewritten as

k_n = h_{n,n} + h_{n,n-1} \left( h_{n-1,n-2} h_{n-2,n} \frac{f_{n-3}}{f_{n-1}} - h_{n-1,n} \frac{f_{n-2}}{f_{n-1}} \right).

Hence, from (3) and multiplying the above equation by f_{n-1} we have a formula of the form

f_n = k_n f_{n-1} = f_{n-1} h_{n,n} + h_{n,n-1} ( f_{n-3} h_{n-1,n-2} h_{n-2,n} - f_{n-2} h_{n-1,n} ).    (5)
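The two relations used above can be checked numerically. The following snippet is an illustration added to this write-up (not part of the paper): it builds a random matrix of the form (2) with NumPy and compares both sides, using determinants of leading principal submatrices for the f_i.

import numpy as np

rng = np.random.default_rng(0)
n = 8
H = np.triu(rng.standard_normal((n, n)), -1)    # upper Hessenberg
H[np.triu_indices(n, 3)] = 0.0                  # keep only two superdiagonals, as in (2)
f = {k: np.linalg.det(H[:k, :k]) for k in (n - 3, n - 2, n - 1)}   # leading principal minors
Hinv = np.linalg.inv(H[:n - 1, :n - 1])
print(Hinv[n - 2, n - 3] - (-H[n - 2, n - 3] * f[n - 3] / f[n - 1]))   # ~ 0
print(Hinv[n - 2, n - 2] - f[n - 2] / f[n - 1])                        # ~ 0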

Now, by reducing the index n in (5) recursively, we obtain the recurrence formulas of f_i for all i < n. The resulting algorithm for computing the determinant is given below.

Algorithm 2.3. Determinant evaluation for the sparse Hessenberg matrix

f_1 = h_{11},
f_2 = f_1 h_{22} - h_{21} h_{12},
f_3 = f_2 h_{33} + h_{32} ( h_{21} h_{13} - f_1 h_{23} ),
for i = 4, 5, ..., n,
  f_i = f_{i-1} h_{i,i} + h_{i,i-1} ( f_{i-3} h_{i-1,i-2} h_{i-2,i} - f_{i-2} h_{i-1,i} ),
end
det(A) = f_n.

From the above we can readily show that the algorithm requires 7(n - 3) + 9 = 7n - 12 operations.
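A Python sketch of Algorithm 2.3 and of the resulting two-step method is given below; it is an illustration for this note (not the paper's code) and reuses pentadiagonal_to_hessenberg from the sketch in Section 3.1.

import numpy as np

def det_hessenberg(H):
    # Algorithm 2.3: determinant of the banded Hessenberg matrix (2) via recurrence (5).
    # f[i] holds the determinant of the i-th leading principal submatrix; f[0] = 1.
    H = np.asarray(H, dtype=float)
    n = H.shape[0]
    f = np.zeros(n + 1)
    f[0] = 1.0
    f[1] = H[0, 0]
    f[2] = f[1]*H[1, 1] - H[1, 0]*H[0, 1]
    for i in range(3, n + 1):   # i = 3 reproduces the f_3 line of Algorithm 2.3 since f[0] = 1
        f[i] = (f[i-1]*H[i-1, i-1]
                + H[i-1, i-2]*(f[i-3]*H[i-2, i-3]*H[i-3, i-1] - f[i-2]*H[i-2, i-1]))
    return f[n]

def det_pentadiagonal(A):
    # The two-step algorithm of this paper: Step 1 (transformation), then Algorithm 2.3.
    return det_hessenberg(pentadiagonal_to_hessenberg(A))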


Now we are ready to summarize our algorithm:

Step 1. Transform a pentadiagonal matrix into a sparse Hessenberg matrix.
Step 2. Apply Algorithm 2.3.

We can see from Section 3.1 and Algorithm 2.3 that the total number of operations for computing the determinant of a pentadiagonal matrix is the sum of 7n - 16 and 7n - 12, i.e. 14n - 28. Hence, the total number of operations is much less than those of Sweet's and Evans's algorithms. The comparison among them is shown in Table 1.

4. Numerical examples

In this section, we give the results of some simple numerical experiments to confirm that our algorithm works as well as Sweet's and Evans's algorithms. All tests were performed in MATLAB 5.3 using double precision arithmetic.

Example 1. First, we consider the 10^5 by 10^5 symmetric diagonally dominant matrix of the form

A = \begin{pmatrix}
2.5 & 1.5 & 1 &  &  &  \\
1.5 & 2.5 & 1.5 & 1 &  &  \\
1 & 1.5 & 2.5 & 1.5 & 1 &  \\
 & \ddots & \ddots & \ddots & \ddots & \ddots \\
 &  & 1 & 1.5 & 2.5 & 1.5 \\
 &  &  & 1 & 1.5 & 2.5
\end{pmatrix}.

The results with Sweet’s, Evans’s, and our algorithms are shown in Table 2 on the left. We can see from Table 2 that our algorithm generated almost the same value as Sweet’s and Evans’s algorithm. In the next example, we use another type of matrix that is neither symmetric nor diagonally dominant. Example 2. Next, we consider the 105 0 0:2 1:3 1:2 B 0:3 0:2 1:3 1:2 B B B 0:1 0:3 0:2 1:3 B B .. .. .. A¼B . . . B B .. .. B . . B B @ 0:1

by 105 nonsymmetric matrix of the form 1 1:2 .. . .. . 0:3

. 0:2

0:1

0:3

..

.

..

C C C C C C C: C C 1:2 C C C 1:3 A 0:2

Table 1
Total operations for the determinant of a pentadiagonal matrix, where n (>= 3) denotes the order of the matrix

                    Sweet's algorithm    Evans's algorithm    Our algorithm
Total operations    24n - 59             22n - 50             14n - 28

Table 2
Numerical results of the determinants for Examples 1 and 2

                     Example 1            Example 2
Sweet's algorithm    8.82555077824680     -217.4331905328500
Evans's algorithm    8.82555077821375     -217.4331905328210
Our algorithm        8.82555077820632     -217.4331905301605


The results with Sweet’s, Evans’s, and out algorithms are shown in Table 2 on the right. Similar to the results shown in Example 1, our algorithm generated almost the same value as the others. 5. Concluding remarks In this paper, we derived a numerical algorithm for computing the determinant of a pentadiagonal matrix and showed that the computational cost is much less than those of two well-known algorithms, i.e. Sweet’s and Evans’s algorithms. From some numerical examples we have learned that our algorithm works as well as Sweet’s and Evans’s algorithms. Hence, it may become a useful tool for the determinant evaluation. References [1] M.T. Chu, F. Diele, S. Ragni, On the inverse problem of constructing symmetric pentadiagonal Toeplitz matrices from three largest eigenvalues, Inverse Probl. 21 (2005) 1879–1894. [2] M. El-Mikkawy, A fast algorithm for evaluating nth order tridiagonal determinants, J. Comput. Appl. Math. 166 (2004) 581–584. [3] D.J. Evans, A recursive algorithm for determining the eigenvalues of a quindiagonal matrix, Comput. J. 18 (1973) 70–73. [4] J. Monterde, H. Ugail, A general 4th-order PDE method to generate Be´zier surfaces from the boundary, Comp. Aided Geom. Design 23 (2006) 208–225. [5] T. Sogabe, On a two-term recurrence for the determinant of a general matrix, Appl. Math. Comput. 187 (2007) 785–788. [6] R.A. Sweet, A recursive relation for the determinant of a pentadiagonal matrix, Comm. ACM 12 (1969) 330–332.