End-correction integration formulae with optimized terminal sampling points


Computer Physics Communications 83 (1994) 236—244

Paolo Luchini
Dipartimento di Progettazione Aeronautica, Università di Napoli, P.le Tecchio 80, 80125 Napoli, Italy

Received 8 April 1992; revised 24 March 1994

Abstract

A new family of integration rules is presented that optimizes the choice of terminal sampling points in a Gregory-type end-correction formula. Besides applying the various methods to a number of test functions, as is traditionally done, a comparison between the new and the classical numerical integration rules is conducted by the Fourier method, which allows their approximation properties to be estimated not only in the limit of step size tending to zero but also for nonzero step size. From these tests our optimized rule turns out to surpass all the other methods, including past improvements of the Gregory rule such as the Gauss-Gregory and Hamming-Pinkham rules.

1. Introduction

Existing numerical integration rules may roughly be divided into two categories: composite polynomial-interpolation rules, which stem from the integration of a piecewise polynomial interpolation of the integrand function, and end-correction rules, which start from a uniform discrete summation of the integrand function and add suitable corrections for endpoint effects. A comparative analysis of the most widespread rules of both categories was performed in paper [1]. We recall that sampling the integrand function at constant spacing gives rise to Newton-Côtes integration, whereas a purposeful choice of the sampling positions within each interpolation interval yields the much more accurate Gauss rules. The rationale behind end-correction rules is in the observation that there is no reason why an

integration rule should use unequal point weights and/or spacings when it is meant to be applied to a spatially homogeneous class of functions (e.g., functions with bounded derivatives up to a certain order), except near the ends of the integration interval, where homogeneity is locally violated. Therefore end corrections are all that is really needed. In fact, it is easily realized that Newton-Côtes and Gauss methods could even allow discontinuities in the integrand function, not to mention its derivatives, provided these discontinuities occurred at subinterval boundaries: they are exact for discontinuous piecewise polynomials. It is from the position of the allowed-for discontinuities that the inhomogeneity in point spacing and weights arises. Nevertheless, the prototype of end-correction formulae, the Gregory formula of 1670, exhibits a disappointing performance, as appears from both standard tests and the Fourier analysis of Ref.


[1]. Modifications of the Gregory rules that use a different spacing near the ends, such as those bearing the names of Gauss [2,3] and Hamming-Pinkham [4], have achieved a very noticeable improvement; however, nobody to our knowledge [2-9] has yet tried to use optimized, unequally spaced, sampling points, just as is done in passing from the Newton-Côtes to the Gauss family of polynomial-interpolation based rules. This is what we set ourselves to do at the outset of this research.

2. Optimizing the Gregory rule

A general integration formula employing end corrections with unequal spacing may be written as

\int_0^{(M+2\delta)h} f(x)\,dx \simeq E_{(0,(M+2\delta)h)}(f) = h\left[\sum_{n=1}^{N} W_n f(hX_n) + \sum_{m=0}^{M} f((m+\delta)h) + \sum_{n=1}^{N} W_n f((M+2\delta-X_n)h)\right].   (1)

In words, we assign certain weights W_n to a predefined number N of points located at distances hX_n from each endpoint of the integration interval, and then unit weights to a sequence of equally spaced points, the first and last of which are at a distance δh from the endpoints.

The determination of the approximation properties of Eq. (1), from which the criterion for the choice of the numbers W_n, X_n and δ will follow, is easily provided by the Fourier method [1]. The first step is to calculate the transfer function of E_{(0,∞,h)}(f), by applying the rule (1) itself to the integration of the function e^{ikx} over a semi-infinite interval while pretending that k has an infinitesimal positive imaginary part:

E_{(0,\infty,h)}(k) = h\left[\sum_{n=1}^{N} W_n\, e^{iX_n kh} + \frac{e^{i\delta kh}}{1-e^{ikh}}\right].   (2)

This transfer function may be written in terms of the single variable kh by the identity h^{-1} E_{(0,∞,h)}(k) = E_{(0,∞,1)}(kh). If we wish the integration rule to be exact for polynomials of the highest possible order, or equivalently that its error for smooth functions should decrease asymptotically with the highest possible power of the spacing h, we must choose our free parameters in such a manner that E_{(0,∞,1)}(kh) may approximate -1/ikh (the transfer function of true integration) to as high an order as possible for kh → 0. We must therefore impose that as large as possible a number of derivatives of E_{(0,∞,1)}(kh) + 1/ikh vanish at kh = 0.

There is some simplification in the algebra if we adopt as new parameters δ′ = δ - 1/2 and X_n′ = δ′ - X_n, so that Eq. (2) may be rewritten as

e^{-i\delta' kh}\, E_{(0,\infty,1)}(kh) = \sum_{n=1}^{N} W_n\, e^{-iX_n' kh} + \frac{e^{ikh/2}}{1-e^{ikh}},   (3)

and restate the problem by asking that the r.h.s. of Eq. (3) approximate -e^{-i\delta' kh}/ikh as precisely as possible when kh → 0. On expanding both the r.h.s. of Eq. (3) multiplied by ikh and the function -e^{-i\delta' kh} in powers of s = ikh and equating term by term, we obtain the infinite set of equations

\sum_{n=1}^{N} W_n X_n'^{\,m} = \frac{\delta'^{\,m+1} - s_{m+1}}{m+1} \qquad (m = 0,\ldots,\infty),   (4)

where

s_m = \left.\frac{d^m}{ds^m}\,\frac{s\,e^{s/2}}{e^s-1}\right|_{s=0} = \left.\frac{d^m}{ds^m}\,\frac{s/2}{\sinh(s/2)}\right|_{s=0}.   (5)

(The first few values of the coefficients s_m are: s_0 = 1, s_2 = -1/12, s_4 = 7/240, s_6 = -31/1344, s_8 = 127/3840, s_10 = -2555/33792. All the odd-numbered coefficients are zero because the function s/sinh(s) is even.)

Having 2N + 1 free parameters at our disposal, we may hope to satisfy Eq. (4) for m = 0, ..., 2N; but let us initially try to satisfy Eq. (4) for m = 0, ..., 2N - 1 only, considering δ′ as given. Systems of this type (which also appear in the theory of Gaussian integration; see e.g. [4]) can be solved with the aid of a characteristic polynomial, defined as that polynomial of degree N which has X_1′, ..., X_N′ as its roots. Let c_0, ..., c_N be the coefficients of the characteristic polynomial; we have, for all m,

\sum_{p=0}^{N} c_p \sum_{n=1}^{N} W_n X_n'^{\,m+p} = \sum_{n=1}^{N} W_n X_n'^{\,m} \sum_{p=0}^{N} c_p X_n'^{\,p} = 0,

so that a linear combination with coefficients c_p of the l.h.s. of N + 1 consecutive equations extracted from the sequence (4) turns out to be zero. It follows that, for the system formed by the first 2N equations to be compatible, similar linear combinations of the r.h.s. must be zero as well, i.e.

\sum_{p=0}^{N} c_p\, \frac{\delta'^{\,m+p+1} - s_{m+p+1}}{m+p+1} = 0   (6)

must be true for m = 0, ..., N - 1. Eqs. (6) represent a system of N homogeneous equations in the N + 1 unknowns c_p, which may thereby be calculated up to an uninfluential multiplicative constant. Once the coefficients c_p of the characteristic polynomial are determined, the original unknowns X_n′ may be calculated as its roots. Subsequently, the solution of any N of Eqs. (4), viewed as a linear system in W_n, will yield these latter quantities. (The other N equations will be satisfied automatically because the characteristic polynomial has been chosen so as to make them linearly dependent on the first N.) Notice that it is not a priori guaranteed that the characteristic polynomial will have N real, as opposed to complex, roots; this turns out to be the case, however, in the examples considered below.

With δ′ at our disposal as an additional parameter, we may ask that Eq. (4) be satisfied up to order 2N, rather than 2N - 1. The method discussed above applies unchanged, but now the system of Eqs. (6) is formed by N + 1 homogeneous equations in N + 1 unknowns, and we must ensure that its determinant is zero in order to have a nonzero solution. In other words, the equation for δ′ is obtained by equating to zero the determinant of the system (6). After this equation is solved (it is an algebraic equation of degree (N + 1)^2 in δ′ and might have more than one acceptable solution) the other unknowns can be determined just as before.

The results of these calculations, that is the sampling positions and weights to be used in the integration formula (1), are reported in Table 1 for N up to 5. The approximation properties of the thus formed integration rule will be discussed in the next section.

Table 1
Coefficients and sampling points of the optimized rule
\int_0^{(M+2\delta)h} f(x)\,dx = h\big(\sum_n W_n f(hX_n) + \sum_{m=0}^{M} f((m+\delta)h) + \sum_n W_n f((M+2\delta-X_n)h)\big)

 N                           delta           X_n             W_n
 1                           1.2339449125    0.3102016197    0.7339449125
 2                           2.0060697370    0.2265822138    0.5583849317
                                             1.0192589986    0.9476848053
 3                           2.7938658441    0.1794407727    0.4493964259
                                             0.8559150049    0.8558024045
                                             1.7964687837    0.9886670137
 4                           3.5903863128    0.1490194659    0.3762262093
                                             0.7335612736    0.7633539946
                                             1.6082486064    0.9533705490
                                             2.5909325920    0.9974355599
 5                           4.3925426810    0.1276790147    0.3238394832
                                             0.6403684200    0.6823973361
                                             1.4451895006    0.9017489045
                                             2.3977009222    0.9851476173
                                             3.3926608949    0.9994093398
 Variant a (N = 1)           1               1/6             1/2
 Variant b (N = 2)           2               0.2245784980    0.5540781644
                                             1.0137193744    0.9459218356
 Variant c (N = 1, kh_0 = 2.15)
                             1.2472613690    0.3289055304    0.7472613690
 Variant d (N = 2, kh_0 = 1.565)
                             2.0513517321    0.2597245559    0.6136251472
                                             1.0715426623    0.9377265849
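The whole procedure can be checked numerically. The following Python sketch (ours, not the paper's; names such as `mu`, `dp` and the bisection bracket are our own choices) generates the coefficients s_m of Eq. (5) by power-series inversion, solves the N = 2 case of Eqs. (4)-(6) with the determinant condition, and recovers the corresponding row of Table 1; as a consistency check it then verifies that the resulting sixth-order rule integrates x^5 exactly.

```python
# Sketch of the procedure of Section 2 for N = 2 (not the paper's own code).
from fractions import Fraction
from math import factorial, sqrt

# Coefficients s_m of Eq. (5): (s/2)/sinh(s/2) = sum_m (s_m/m!) s^m,
# obtained by inverting the Taylor series of sinh(s/2)/(s/2).
K = 6
a = [Fraction(0)] * (K + 1)
for j in range(K // 2 + 1):
    a[2 * j] = Fraction(1, 4**j * factorial(2 * j + 1))  # sinh(s/2)/(s/2)
b = [Fraction(1)] + [Fraction(0)] * K                    # reciprocal series
for n in range(1, K + 1):
    b[n] = -sum(a[j] * b[n - j] for j in range(1, n + 1))
s = [factorial(m) * b[m] for m in range(K + 1)]          # s_2 = -1/12, ...

N = 2
def mu(m, dp):
    """Right-hand side of Eq. (4): (delta'^(m+1) - s_(m+1)) / (m+1)."""
    return (dp ** (m + 1) - float(s[m + 1])) / (m + 1)

def det(dp):
    """Determinant of the (N+1)x(N+1) homogeneous system (6)."""
    H = [[mu(m + p, dp) for p in range(N + 1)] for m in range(N + 1)]
    return (H[0][0] * (H[1][1] * H[2][2] - H[1][2] * H[2][1])
            - H[0][1] * (H[1][0] * H[2][2] - H[1][2] * H[2][0])
            + H[0][2] * (H[1][0] * H[2][1] - H[1][1] * H[2][0]))

lo, hi = 1.4, 1.6                  # bracket for the root delta' ~ 1.506
for _ in range(100):               # plain bisection on det = 0
    mid = 0.5 * (lo + hi)
    if det(lo) * det(mid) <= 0.0:
        hi = mid
    else:
        lo = mid
dp = 0.5 * (lo + hi)               # delta'

# Characteristic polynomial x^2 + c1*x + c0 from rows m = 0, 1 of Eq. (6).
D = mu(0, dp) * mu(2, dp) - mu(1, dp) ** 2
c0 = (mu(1, dp) * mu(3, dp) - mu(2, dp) ** 2) / D
c1 = (mu(1, dp) * mu(2, dp) - mu(0, dp) * mu(3, dp)) / D
r = sqrt(c1 * c1 - 4.0 * c0)       # the roots X'_n turn out real
Xp = [(-c1 + r) / 2.0, (-c1 - r) / 2.0]
W1 = (mu(1, dp) - mu(0, dp) * Xp[1]) / (Xp[0] - Xp[1])  # 2x2 Vandermonde
W = [W1, mu(0, dp) - W1]
X = [dp - xp for xp in Xp]         # back to X_n = delta' - X'_n
delta = dp + 0.5

def rule(f, Mb, h):
    """End-corrected rule (1) on [0, (Mb + 2*delta)*h]."""
    b_end = (Mb + 2.0 * delta) * h
    acc = sum(Wn * (f(h * Xn) + f(b_end - h * Xn)) for Wn, Xn in zip(W, X))
    acc += sum(f((m + delta) * h) for m in range(Mb + 1))
    return h * acc

# A sixth-order rule must integrate fifth-degree polynomials exactly.
b_end = (10 + 2.0 * delta) * 1.0
err = abs(rule(lambda x: x**5, 10, 1.0) - b_end**6 / 6.0) / (b_end**6 / 6.0)
print(delta, X, W, err)
```

The same scheme extends to any N once the determinant and the root bracket are generalized; only the explicit 3x3 determinant above is specific to N = 2.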

3. Examples and tests of the new rule

For N = 0 the only parameter to be calculated in the rule (1) is δ, and the procedure described in Section 2 gives δ = 1/2. The method then reduces to the midpoint rule, the best classical first-order method.

For N = 1 the (fourth-degree) equation for δ′ has the two real solutions δ′ = ±(2/√3 + 1)^{1/2}/2 (in addition to two imaginary solutions which we do not consider, although it is not in principle impossible to do so). The corresponding sampling point and weight are given by X_1′ = δ′/√3 and W_1 = δ′. (In fact, it follows from the vanishing of the odd-numbered coefficients s_m that at all orders the Eqs. (4) are invariant with respect to the symmetry transformation δ′ → -δ′, X_n′ → -X_n′, W_n → -W_n, so that the possible values of δ′ come in symmetrical pairs. This symmetry corresponds to the fact that from a method of integration valid for the interval (0, +∞) it is always possible to derive a method for (-∞, 0). We shall only consider the positive solution, which is characterized by the sampling points being located inside the integration interval rather than outside, but totally similar error estimates are valid for its specular counterpart.)

The spectral error of the resulting integration rule, defined according to Ref. [1] as

r(kh) = \left|\, ikh\, E_{(0,\infty,1)}(kh) + 1 \,\right|,

is plotted in Fig. 1, together with those of Euler's and Simpson's rules included for comparison purposes: it is asymptotically parallel to the curve of Simpson's rule, going with the same (fourth) power for kh → 0, but it stays consistently below it, reaching the same error level for a value of kh 1.4 times as large. It arrives at the 1% error level for kh = 1.57, a value slightly larger than that of the best Gaussian rule. (The 1% and 0.1% spectral ranges and the asymptotic behaviour of the error for kh → 0 are given in Table 2.)

Fig. 1. Relative spectral error plot of the optimized integration rules introduced in this paper. The curves corresponding to Euler's and Simpson's rules are also shown for comparison purposes.

For N = 2 the equation for δ′ is of the ninth degree; only three of its roots are real: δ′ = 0 and


Table 2
Characteristic spectral parameters

 Rule                        1% spectral range   0.1% spectral range   Asymptotic error
 Euler                       0.35                0.11                  (0.29 kh)^2
 Simpson                     1.11                0.64                  (0.27 kh)^4
 Newton-Côtes (N = 4)        1.12                0.82                  (0.36 kh)^6
 Gauss (N = 3)               1.50                1.10                  (0.27 kh)^6
 Gauss (N = 4)               1.47                1.27                  (0.29 kh)^8
 Hamming-Pinkham (N = 2)     0.98                0.46                  (0.22 kh)^3
 Hamming-Pinkham (N = 5)     2.25                1.60                  (0.17 kh)^6
 Optimized rule (N = 1)      1.57                0.89                  (0.20 kh)^4
 Optimized rule (N = 2)      2.28                1.56                  (0.20 kh)^6
 Optimized rule (N = 3)      2.75                2.06                  (0.21 kh)^8
 Optimized rule (N = 4)      3.09                2.44                  (0.21 kh)^10
 Optimized rule (N = 5)      3.35                2.74                  (0.21 kh)^12
 Optimized rule (N = 10)     3.99                3.50                  (0.21 kh)^22
 Variant a (N = 1)           1.11                0.52                  (0.19 kh)^3
 Variant b (N = 2)           2.28                1.56                  (0.10 kh)^5
 Variant c (N = 1)           2.35                0.35                  (0.09 kh)^2
 Variant d (N = 2)           3.28                0.73                  (0.05 kh)^2

δ′ = ±1.506. For δ′ = 0 (i.e. δ = 0.5) the sampling points and weights are complex. For δ′ = 1.506 the sampling points and weights are reported in Table 1. For δ′ = -1.506 specular values apply. The spectral error of this integration rule, which is asymptotically of sixth order (that is, exact for fifth-degree polynomials), may be seen in Fig. 1. It reaches the 1% level for kh = 2.28, a value very close to that of the best Hamming-Pinkham rule and more than twice the range of Simpson's rule.

A few more cases are considered in Table 2 and Fig. 1. It is interesting to observe that each curve is consistently lower than the previous one for all kh, although, as may be expected, there is less and less to be gained from each successive increment in N.

Finally, for the sake of showing how our method performs in a more traditional kind of comparison, Table 3 reports the relative errors committed by a few selected rules in three more or less arbitrarily chosen test integrals, for various values of the total number of sampling points M. All the conclusions of the Fourier analysis are confirmed: the present method attains the 1% and 0.1% error levels for a lower number of points than all the others (in particular, less than half that required by Simpson's rule), and the decrease of the error with increasing M is in general agreement with the asymptotic laws given in Table 2.

4. Variants

Just as is true of classical Gauss integration, the sampling positions in our modified end-correction method may be determined by minimizing the error under various possible constraints, rather than by aiming at the absolute highest order for h → 0 as was done in Section 3. Here we shall consider two possible variants: constraining δ to assume integral values only, so that the bulk sampling points sit at an integral number of steps from the endpoint of the integration interval, and having the spectral error vanish at a number of nonzero values of kh, rather than asymptotically for kh → 0.

4.1. Integral δ

If, for instance, one wished to choose the integration step by successive trial-and-error halvings, one would prefer that the end of the interval fell in the equally spaced set of points adopted for discretizing the bulk of the integral, so that the sampling values could be re-used on the next iteration. Therefore, a special interest is attached to integration rules with integral δ. Of course, if we satisfy Eq. (4) for m = 0, ..., 2N - 1 and not for m = 2N, the rule will only be exact for polynomials of degree up to 2N rather than 2N + 1.

For N = 1 the integral value of δ closest to the optimum is 1. The resulting integration rule (whose parameters are reported as variant a in Table 1) is very much like Euler's rule, having unit weights for all points except the terminal one, which has a weight of 0.5, with the only difference that this terminal sampling point is not located at the endpoint but a distance h/6 inwards from it. As may be seen from Fig. 2 and Table 2, this rule has a spectral error r(kh) considerably higher than the optimized 1-point rule, but it is still far better than Euler's rule, and


in fact comparable at the 1% error level with Simpson's rule.

For N = 2 luck has it that the optimal value of δ, i.e. 2.006, is already very close to an integer. The algorithm obtained by giving δ the value of exactly 2 (variant b in Table 1), although formally of a lower order for kh → 0, has an error curve indistinguishable within the graphical resolution of Fig. 2 from that of the optimized algorithm, and gives very nearly the same value for the range of kh within which the error stays below 1% or 0.1%.

4.2. Optimizing the transfer function for nonzero stepsize

Constructing a numerical integration formula essentially becomes in Fourier space an interpolation problem, namely that of approximating the function -1/ikh by a discrete combination of imaginary exponentials under various constraints. Rather than determining the free parameters in Eq. (2) so that the Taylor series of the interpolant coincides with that of the function to be interpolated for as many terms as possible, we may



Table 3
Relative error committed by a few selected rules in three test integrals, for total numbers of sampling points M = 5, ..., 19:

(1) \int_0^{10} e^x\,dx;   (2) \int_0^{10} (1+x)^{3/2}\,dx;   (3) \int_0^{10} \sin(x)/\log(1+x)\,dx.

Rules compared: Simpson; N.-C. (N = 4); Gauss (N = 3); Gauss (N = 4); H.-P. (N = 2); H.-P. (N = 5); O.r. (N = 2); O.r. (N = 5).

[The individual numerical entries are not legible in this copy.]

N.-C. = Newton-Côtes, H.-P. = Hamming-Pinkham, O.r. = optimized rule. M denotes the total number of sampling points, inclusive of any used for endpoint corrections.
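A comparison of the kind reported in Table 3 is easy to re-run. The sketch below (ours, not the paper's code) applies the N = 2 optimized rule, with the constants of Table 1, and composite Simpson's rule to test integral (1), \int_0^{10} e^x dx, giving both methods the same total of 13 sampling points:

```python
# Re-run of a Table-3-style comparison (our sketch, not the paper's code):
# test integral (1), int_0^10 exp(x) dx, with 13 sampling points each.
from math import exp

# N = 2 optimized rule, constants from Table 1.
DELTA = 2.0060697370
X = (0.2265822138, 1.0192589986)
W = (0.5583849317, 0.9476848053)

def optimized2(f, a, b, total):
    """Rule (1): total = (Mb + 1) bulk points plus 4 end points."""
    Mb = total - 5                       # bulk index runs m = 0..Mb
    h = (b - a) / (Mb + 2.0 * DELTA)
    acc = sum(Wn * (f(a + h * Xn) + f(b - h * Xn)) for Wn, Xn in zip(W, X))
    acc += sum(f(a + (m + DELTA) * h) for m in range(Mb + 1))
    return h * acc

def simpson(f, a, b, total):
    """Composite Simpson's rule; total must be odd."""
    n = total - 1
    h = (b - a) / n
    acc = f(a) + f(b)
    acc += sum((4 if m % 2 else 2) * f(a + m * h) for m in range(1, n))
    return acc * h / 3.0

exact = exp(10.0) - 1.0
err_opt = abs(optimized2(exp, 0.0, 10.0, 13) - exact) / exact
err_sim = abs(simpson(exp, 0.0, 10.0, 13) - exact) / exact
print(err_opt, err_sim)
```

At equal point counts the end-corrected rule is well over an order of magnitude more accurate than Simpson's rule on this integrand, as the Fourier analysis predicts.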


instead decide to seek a prefixed nonzero error level to be achieved with the least number of sampling points over a finite range of kh. In general, this is a complicated interpolation problem (which, nevertheless, might be solved once and for all numerically). An analytical solution becomes possible if we are willing to accept an interpolation that equals the interpolated function at a number of equally spaced points on the k-axis, rather than minimizing some integral error measure. In this case, in fact, the problem can be solved by Prony's exponential interpolation method [4]. Our goal is to impose that the r.h.s. of Eq. (2) be exactly equal to -1/ik for a number of regularly spaced values kh = m kh_0. Let us define a new function F(kh) as

F(kh) = -\frac{1}{ikh} - \frac{e^{i\delta kh}}{1-e^{ikh}}.   (7)

On putting p_n = e^{iX_n kh_0}, we may write the condition that we want Eq. (2) to satisfy as

\sum_{n=1}^{N} W_n\, p_n^m = F(m\,kh_0).   (8)

Eq. (8), written for several integral values of m, yields a system of the same form as Eq. (4), with p_n in the place of X_n′, and can be solved by a formally analogous technique. With 2N + 1 free parameters at our disposal, we may impose that Eq. (8) be satisfied for -N ≤ m ≤ N. On introducing the characteristic polynomial that has p_1, ..., p_N as its roots, we find that the coefficients c_p of this polynomial obey the linear system of equations

\sum_{p=0}^{N} c_p\, F((m+p)\,kh_0) = 0   (9)

for m = -N, ..., 0. The compatibility condition of this system determines δ, the solution of the same system gives the characteristic polynomial with roots p_n, and finally the linear system formed by any N of Eqs. (8) allows the calculation of the weights W_n.

Seemingly a difficulty exists because the values of F(m kh_0) are complex, but general complex values of W_n and p_n are not acceptable; in fact we want W_n to be real and p_n to be of unit modulus (so that its logarithm i kh_0 X_n is purely imaginary) for the values of X_n and W_n to be interpreted as sampling points and weights in an integration formula. However, it can be shown that because of the Hermitian symmetry of F(kh) (that is, the property that F(-kh) = F(kh)^*) real values of W_n and X_n result. First, we may observe that when F(-kh) = F(kh)^* the solution of Eq. (9) must have a similar symmetry property, namely

c_{N-p} = c_p^*,   (10)

for the replacement of F(kh) by F^*(-kh) and simultaneously of c_p by c^*_{N-p} leaves the system unchanged. Polynomials with the property (10) have a particular character to their roots, analogous in some sense to that which polynomials with real coefficients have. When the coefficients of P(p) are real it is immediate to show that if p is a root p^* is too, and it follows that the roots are either real or come in conjugate pairs. Similarly, when the coefficients of P(p) have the property (10) it is easily verified that if P(p) = 0 then also P(1/p^*) = 0, and therefore the roots must either be of unit modulus (so that p = 1/p^*) or come in "reciprocal conjugate" pairs with reciprocal moduli and the same argument.

Fig. 2. Relative spectral error plot for the integral-δ variants of Section 4.1. The error curves of the corresponding optimized-δ rules are also shown.


Although it is not known, a priori, which of these alternatives will occur, just as it is not known in general whether a polynomial with real coefficients will have real or complex conjugate roots, there are continuous "islands" of nonzero measure in coefficient space where all the roots are of unit modulus, and in practice in all our calculations with the present problem the roots did turn up of unit modulus, as in the two examples below. Second, once it is established that the numbers p_n are of unit modulus, it follows from Eq. (8) that W_n = W_n^* (because this transformation together with the change of m into -m leaves the system unchanged), i.e. the weights W_n are real. (More generally, if p_q and p_r were a pair of roots such that p_q = 1/p_r^*, it would be true that W_q = W_r^*.)

The above procedure yields a different set of sampling points and weights for every choice of the wave-number spacing kh_0, starting from the same values of Section 2, which are re-obtained in the limit for kh_0 → 0, and arriving at a singularity that makes the solution impossible for kh_0 = 2π/N. For example, suppose one aims at a 1% maximum error with N = 1 or N = 2; choosing kh_0 = 2.15 in the first case, and 1.565 in the second, gives the coefficients reported as variants c and d in Table 1 and the spectral error curves of Fig. 3, with a considerable improvement in spectral range with respect to the base formulae.

5. Concluding remarks

The purpose of this paper has been to present a novel numerical integration method obtained by optimizing à la Gauss the choice of the terminal sampling points in the Gregory-type formula (1). We have determined the set of sampling points and weights that maximizes the order of the method (the degree of polynomials for which the result is exact) and found real sampling points which are all located inside the integration interval. This last result was not a priori warranted: the roots of the characteristic polynomial might as well have fallen out of the integration interval or be outright complex, as happens, for instance, for Gaussian-type methods using the derivatives of the integrand function at the sampling points and for Gauss' central-difference variant of the Gregory rule [4].

In addition, it is worth noting that a pattern appears from Table 1: for large N the sampling points farther from the endpoint rapidly tend to assume very nearly unit spacing and weight, so that every line in the table is obtained from the one above by mild changes, and the dependence on N appears to be, in some sense, smooth. It would seem that, just as in the case of the Gregory rules, a definite limit is approached for N → ∞, although we have not yet been able to find it. They too appear, just as the standard

Fig. 3. Relative spectral error plots of the nonzero-kh_0 variants of Section 4.2.

Gregory formulae, to improve smoothly and indefinitely with increasing N. We do not know whether for N → ∞ their useful spectral range tends to 2π or to a lower limit (if one were to judge from the trend of the asymptotic error coefficients shown in Table 2, a limit in the neighbourhood of 4.6 would seem more plausible), but it is certainly greater than that of all the other known numerical integration formulae. Incidentally, it might be observed as a curiosity

that the theoretical limit of 2π is higher than the Nyquist limit posed by the sampling theorem to the spectral extension of a function that can be reconstructed from a finite-step sampling, which


in our variables is π. This is not a real contradiction, because we are not trying to reconstruct the whole function but only its integral, and also because we are locally violating the constraint of uniform spacing near the endpoints, but nevertheless it is amusing that in this restricted sense the Nyquist limit can be defeated. Some of the methods presented in Table 2 do indeed exhibit a 1% error spectral range greater than π.

Concluding, for general-purpose usage our formula with N = 2 is probably a good compromise between precision and ease of use. The variant presented as case b in Section 4 is practically as precise and even easier to use, requiring only two special terminal sampling points near each end and a sequence of unit-weight sampling points, which are equally spaced between them and from the endpoints of the integration interval; it needs less than half the computational effort of Simpson's rule to reach a similar precision, with no practical aggravation of programming. The only previously known rule that offers a similar performance is the five- or six-point Hamming-Pinkham rule.

As a replacement of Euler's rule, the formula presented as case a in Section 4 achieves a very considerable improvement in approximation by simply moving the first and last sampling points inwards by h/6.

For an even greater sampling step at the error level of 1%, the formula presented as case d in Section 4 might be considered. It allows a step half again as large as the basic 2-point rule, in fact comparable to that of the unmodified 5-point rule and 1.5 times as large as that of the best (six-point Hamming-Pinkham) method among those analysed in Ref. [1].

Acknowledgement

Work supported by the Italian Ministry of University and Research.

References

[1] P. Luchini, Comput. Phys. Commun. 83 (1994) 227, this issue.
[2] D.R. Hartree, Numerical Analysis (Oxford University Press, Oxford, 1958).
[3] J. Todd, Classical numerical analysis: quadrature and differentiation, in: Survey of Numerical Analysis, ed. J. Todd (McGraw-Hill, New York, 1962) pp. 59-74.
[4] R.W. Hamming, Numerical Methods for Scientists and Engineers (Dover, New York, 1986).
[5] K.S. Kunz, Numerical Analysis (McGraw-Hill, New York, 1957).
[6] V.I. Krylov, Approximate Calculation of Integrals, translated by A.H. Stroud (Macmillan, New York, 1962).
[7] D.K. Kahaner, Comparison of numerical quadrature formulas, in: Mathematical Software, ed. J.R. Rice, Proc. Mathematical Software Symp., Lafayette, Indiana, U.S.A., April 1-3, 1970 (Academic Press, New York, 1971) pp. 229-259.
[8] V.A. Dixon, Numerical quadrature. A survey of the available algorithms, in: Software for Numerical Mathematics, ed. D.J. Evans, Proc. Loughborough University of Technology Conf. of the Institute of Mathematics and its Applications, Loughborough, Leicestershire, U.K., April 1973 (Academic Press, London, 1974) pp. 105-137.
[9] A.H. Stroud, Bibliography on approximate integration, Math. Comput. 15 (1961) 52-80.