
Chemical Physics Letters — journal homepage: www.elsevier.com/locate/cplett

Computing polarizabilities without a Hamiltonian matrix

Alexander Wlotzka, Tucker Carrington Jr.*

Chemistry Department, Queen's University, Kingston, Ontario K7L 3N6, Canada

Article history: Received 17 September 2011; in final form 13 December 2011; available online 20 December 2011.

Abstract: Using techniques of numerical linear algebra it is possible to calculate polarizabilities and hyperpolarizabilities without computing or storing a Hamiltonian matrix. This is done by exploiting the structure of the basis to evaluate matrix–vector products. The ideas are tested on a one-dimensional Hubbard model. © 2011 Elsevier B.V. All rights reserved.

1. Introduction

The electrical properties of a molecule determine how it responds to an electric field [1–7]. They can be inferred from various spectroscopic experiments and are important for several reasons. For example, they determine the long-range interaction between molecules and play an important role in van der Waals complexes. The energy of a molecule in an electric field can be written as

E(F) = E(0) - \sum_i \mu_i F_i - \frac{1}{2} \sum_{ij} \alpha_{ij} F_i F_j - \frac{1}{6} \sum_{ijk} \beta_{ijk} F_i F_j F_k - \cdots    (1)

In this equation E(0) is the electronic energy of the molecule when no field is present, and F_i is a field component. The \alpha_{ij} and \beta_{ijk} are second and third derivatives of the energy with respect to field components and are called polarizabilities and hyperpolarizabilities. The polarizability is an important electrical property that characterizes the ability of an electric field to distort the electronic distribution of a molecule. The electronic polarizability (vibrational contributions are ignored in this Letter) is usually calculated using either a finite-field or a sum-over-states approach. To use the finite-field approach one computes the energy at a set of finite field values and extracts numerical derivatives [1,8]. This approach often works well, but it requires solving the Schroedinger equation many times. In addition, one must choose the field values carefully so that the difference of two energies is not too small (to avoid numerical error) and the difference of field strengths is not too large (to avoid contamination with higher derivatives). To use the sum-over-states approach one must compute a large number of solutions of the Schroedinger equation [1]. In this Letter, we instead calculate polarizabilities and hyperpolarizabilities by differentiating the Schroedinger equation [1]. Proceeding in this way means that to calculate the polarizability of a molecule in its ground state we need to solve the Schroedinger equation only once, and we need to compute only the ground state. Needing only the ground state is an advantage if iterative methods are used to solve the matrix eigenvalue problem. We test the ideas by applying them to a linear Hubbard model used in previous work [8]. We show that it is possible to directly compute derivatives of the energy without computing a Hamiltonian matrix. It is only necessary to evaluate matrix–vector products, and this can be done by storing (4 × 4) matrices representing factors of terms in the Hamiltonian. Similar ideas could be applied with other Hamiltonians and to Hamiltonians describing systems in two or three dimensions.

[* Corresponding author. Fax: +1 613 533 6669. E-mail addresses: [email protected] (A. Wlotzka), [email protected] (T. Carrington Jr.). doi:10.1016/j.cplett.2011.12.031. © 2011 Elsevier B.V. All rights reserved.]
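The finite-field procedure described above can be illustrated in a few lines. This is a minimal sketch in which the model energy E(F) is an invented polynomial with known \mu, \alpha and \beta, not an energy obtained by solving a Schroedinger equation:

```python
# Finite-field extraction of mu and alpha by central differences.
# E(F) here is a toy analytic energy with known derivatives; in a real
# calculation each E(F) would come from solving the Schroedinger equation.

def energy(F, mu=1.25, alpha=0.8, beta=0.3):
    # E(F) = E(0) - mu F - (1/2) alpha F^2 - (1/6) beta F^3, as in Eq. (1)
    return 2.0 - mu * F - 0.5 * alpha * F**2 - beta * F**3 / 6.0

h = 1e-3  # field step: small enough to suppress higher-order contamination,
          # large enough that E(+h) - E(-h) is not lost to round-off

mu_num = -(energy(h) - energy(-h)) / (2 * h)                  # mu    = -dE/dF
alpha_num = -(energy(h) - 2 * energy(0) + energy(-h)) / h**2  # alpha = -d2E/dF2

print(mu_num, alpha_num)  # close to 1.25 and 0.8
```

The tension the text describes is visible here: shrinking `h` reduces contamination from \beta, but makes the energy differences smaller and more sensitive to round-off.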

2. Determining polarizabilities by differentiating the Schroedinger equation

We consider a linear molecule aligned with a uniform electric field in the z direction, and compute \mu_z, \alpha_{zz}, \beta_{zzz}, \ldots. Once the ground-state energy and wavefunction have been determined, it is straightforward to calculate the permanent dipole moment by using the Hellmann–Feynman theorem,

\mu_z = -\left. \frac{dE_0}{dF_z} \right|_{F_z = 0} = -{}^0 E_0^z,    (2)

\frac{dE_0}{dF_z} = \langle \psi_0 | H^z | \psi_0 \rangle,    (3)

where H^z is the derivative of the Hamiltonian of the molecule, in the presence of the field, with respect to F_z; the left superscript on {}^0 E_0^z indicates that the derivative has been evaluated at F = 0; and |\psi_0\rangle is the (normalized) ground state of the field-free Hamiltonian, which we denote H_{FF}. Knowing \mu_z, one can compute |{}^0 \psi_0^z\rangle, the first derivative of |\psi_0\rangle evaluated at F = 0, from

(H_{FF} - E_0) \, |{}^0 \psi_0^z\rangle = ({}^0 E_0^z - H^z) \, |\psi_0\rangle.    (4)
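The content of Eqs. (2)–(4) can be checked numerically on a small dense model. The sketch below uses a random symmetric H_FF and a random symmetric H^z as stand-ins (not the Hubbard operators of Section 3): it evaluates the Hellmann–Feynman average of Eq. (3), solves the linear system of Eq. (4) for the wavefunction derivative, and compares the resulting second derivative of the energy with the textbook sum-over-states expression.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 8

# Stand-ins: field-free Hamiltonian H_FF and field-derivative operator H^z.
A = rng.standard_normal((n, n)); H_ff = (A + A.T) / 2
B = rng.standard_normal((n, n)); H_z = (B + B.T) / 2

# Ground state of H_FF.
evals, evecs = np.linalg.eigh(H_ff)
E0, psi0 = evals[0], evecs[:, 0]

# Eq. (3): first derivative of the energy (Hellmann-Feynman average).
E1 = psi0 @ H_z @ psi0

# Eq. (4): (H_FF - E0) |psi1> = (E1 - H^z) |psi0>.
# H_FF - E0 is singular (psi0 spans its null space), but the right-hand side
# is orthogonal to psi0, so the minimum-norm least-squares solution is the
# wavefunction derivative in the orthogonal complement of psi0.
rhs = E1 * psi0 - H_z @ psi0
psi1, *_ = np.linalg.lstsq(H_ff - E0 * np.eye(n), rhs, rcond=None)

# Second derivative of the energy from the linear-solve route ...
d2E_linear = 2 * psi0 @ H_z @ psi1
alpha = -d2E_linear                      # polarizability, alpha = -d2E/dF2

# ... agrees with the sum-over-states expression.
d2E_sos = 2 * sum((evecs[:, k] @ H_z @ psi0) ** 2 / (E0 - evals[k])
                  for k in range(1, n))

print(d2E_linear, d2E_sos)
```

The point of the comparison is the one made in the text: the linear-solve route needs only the ground state, while the sum-over-states route needs the full spectrum.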

Introducing a basis set converts this equation into a system of linear equations. Similar ideas can be used to determine higher derivatives [1]. To determine polarizabilities and hyperpolarizabilities


A. Wlotzka, T. Carrington Jr. / Chemical Physics Letters 524 (2012) 96–99

one therefore needs to: (1) compute the ground-state energy and ground-state wavefunction; (2) average the dipole moment operator; (3) solve systems of linear equations.

3. Hamiltonian and basis functions

We use a 1D Hubbard Hamiltonian,

H = \sum_{i=1}^{N_S} \sum_{\sigma} E_i \, c_{i,\sigma}^\dagger c_{i,\sigma} + \sum_{i=1}^{N_S} U_i \, c_{i,\uparrow}^\dagger c_{i,\downarrow}^\dagger c_{i,\downarrow} c_{i,\uparrow} + T \sum_{i=1}^{N_S - 1} \sum_{\sigma} \left( c_{i,\sigma}^\dagger c_{i+1,\sigma} + \mathrm{h.c.} \right) - e|F| \sum_{i=1}^{N_S} \sum_{\sigma} z_i \, c_{i,\sigma}^\dagger c_{i,\sigma}    (5)

for a linear chain A–X–A–X... [8–10]. The Hamiltonian is written in terms of creation and annihilation operators labelled by values of i and \sigma, where i labels an atom and \sigma denotes the spin of an electron and can be either up or down. We set E_{i=\mathrm{odd}} = 0, so that E_{i=\mathrm{even}} is the zeroth-order energy difference between neighbouring atoms A and X. T and U_i are parameters. The T term is the only term in the field-free Hamiltonian with non-zero off-diagonal matrix elements. The dipole moment operator is

\hat{\mu} = H^z = -e \sum_{i=1}^{N_S} \sum_{\sigma} z_i \, c_{i,\sigma}^\dagger c_{i,\sigma}.    (6)

The basis functions are products of factors, one for each atom,

|n_1 n_2 \cdots n_{N_S}\rangle,    (7)

where n_k can be -1, 0, 1, or 2, and N_S is the number of atoms in the chain. When n_k = -1 there is one electron with spin down on atom k; when n_k = 0 there are no electrons on atom k; when n_k = 1 there is one electron with spin up on atom k; when n_k = 2 there is one electron with spin up and one with spin down on atom k. The number of basis functions is 4^{N_S}. Allowed states are a sub-set of the full basis. If the number of spin-up electrons is n_\uparrow and the number of spin-down electrons is n_\downarrow, then the number of allowed states is

N_{\mathrm{allow}} = \frac{N!}{n_\uparrow! \, (N - n_\uparrow)!} \; \frac{N!}{n_\downarrow! \, (N - n_\downarrow)!}.    (8)

Although the number of allowed basis functions is much less than 4^{N_S}, it is still very large.

4. Computational methods

4.1. Computing the ground state energy and wavefunction for the field-free Hamiltonian

If the number of atoms is larger than about 5, the size of even the set of allowed basis functions is huge. This precludes using methods of direct linear algebra to calculate the ground state. We have used the Lanczos algorithm [11,12]. By doing M matrix–vector products one transforms the Hamiltonian matrix, H, into a smaller tridiagonal matrix, T_M, and among the eigenvalues of T_M are eigenvalues of H. The Lanczos algorithm is an efficient tool for computing the lowest eigenvalues of a matrix if it is possible to evaluate matrix–vector products efficiently. It is, of course, possible to do the matrix–vector products at a cost that scales as N_{\mathrm{allow}}^2. Although N_{\mathrm{allow}} < 4^{N_S}, this is still costly. In addition, this approach has two other disadvantages: (1) it requires identifying which of the 4^{N_S} direct-product functions are allowed; and (2) it requires computing and storing in memory the Hamiltonian matrix elements. For example, a problem with 10 atoms, 5 spin-up electrons, and 5 spin-down electrons would require 32 GB of memory to store the Hamiltonian matrix. Instead, we exploit the structure of the direct-product basis. This enables us to evaluate matrix–vector products without either computing or storing the Hamiltonian matrix elements. Similar ideas are used for doing matrix–vector products to compute vibrational spectra [13–17]. To use this approach we need to store two vectors with as many elements as the number of basis functions, and small matrices that represent, for a single atom, the creation and annihilation operators for spin-up and spin-down electrons in the basis |n = -1\rangle, |n = 0\rangle, |n = 1\rangle, |n = 2\rangle. These matrices are:

c_\uparrow^\dagger =
  [ 0 0 0 0 ]
  [ 0 0 0 0 ]
  [ 0 1 0 0 ]
  [ 1 0 0 0 ]

c_\downarrow^\dagger =
  [ 0 1 0 0 ]
  [ 0 0 0 0 ]
  [ 0 0 0 0 ]
  [ 0 0 1 0 ]

c_\uparrow =
  [ 0 0 0 1 ]
  [ 0 0 1 0 ]
  [ 0 0 0 0 ]
  [ 0 0 0 0 ]

c_\downarrow =
  [ 0 0 0 0 ]
  [ 1 0 0 0 ]
  [ 0 0 0 1 ]
  [ 0 0 0 0 ]    (9)

Matrix–vector products for the three terms of H_{FF} are done separately. The matrix–vector products for the diagonal terms are easy. The matrix–vector product for the T terms is not as simple. The T-term matrix–vector product that couples, for example, atoms 3 and 4 is evaluated as

w_{n'_1 n'_2 n'_3 n'_4 n'_5 \cdots n'_{N_S}} = T \sum_{n_3, n_4} \Big[ (c_\downarrow^\dagger)_{n'_3 n_3} (c_\downarrow)_{n'_4 n_4} + (c_\downarrow^\dagger)_{n'_4 n_4} (c_\downarrow)_{n'_3 n_3} + (c_\uparrow^\dagger)_{n'_3 n_3} (c_\uparrow)_{n'_4 n_4} + (c_\uparrow^\dagger)_{n'_4 n_4} (c_\uparrow)_{n'_3 n_3} \Big] v_{n'_1 n'_2 n_3 n_4 n'_5 \cdots n'_{N_S}}    (10)

= \sum_{n_3, n_4} M_{n'_3 n'_4, \, n_3 n_4} \; v_{n'_1 n'_2 n_3 n_4 n'_5 \cdots n'_{N_S}}    (11)
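The bookkeeping in Eqs. (10) and (11) amounts to reshaping the vector so that the indices of the coupled pair of atoms are exposed, and multiplying by the 16 × 16 matrix M. A sketch of this idea follows; any fermionic phase conventions are omitted, as in the 0/1 matrices of Eq. (9), and the check against an explicit Kronecker-product construction is self-consistent under that convention:

```python
import numpy as np

N = 5          # atoms; local basis |n=-1>, |n=0>, |n=1>, |n=2>
d = 4

# Single-atom matrices of Eq. (9) (phase factors omitted).
cu_dag = np.zeros((4, 4)); cu_dag[2, 1] = 1; cu_dag[3, 0] = 1   # create spin up
cd_dag = np.zeros((4, 4)); cd_dag[0, 1] = 1; cd_dag[3, 2] = 1   # create spin down
cu, cd = cu_dag.T, cd_dag.T                                     # annihilate

# The 16 x 16 matrix M of Eq. (11) for one hopping term (atoms i and i+1).
M = (np.kron(cd_dag, cd) + np.kron(cd, cd_dag)
     + np.kron(cu_dag, cu) + np.kron(cu, cu_dag))

def hop_matvec(v, i):
    """Apply the hopping term coupling atoms i and i+1 to vector v.

    Reshape v so the pair (n_i, n_{i+1}) becomes a single index of size 16,
    multiply by M, and reshape back: cost ~ 4**(N+2), not 4**(2N).
    """
    w = v.reshape(d**i, d * d, d**(N - i - 2))
    return np.einsum('ab,xby->xay', M, w).reshape(-1)

# Check against the explicit Kronecker-product matrix for atoms i, i+1.
i = 2
full = np.kron(np.kron(np.eye(d**i), M), np.eye(d**(N - i - 2)))
v = np.random.default_rng(1).standard_normal(d**N)
print(np.allclose(hop_matvec(v, i), full @ v))  # True
```

Only M (16 × 16) and the vector are ever stored; `full` is built here purely to verify the reshaping, and is exactly what a production calculation avoids forming.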

The cost of this matrix–vector product scales as 4^{N_S + 2}. In this fashion, we exploit the fact that each term in the T sum couples only basis functions associated with two atoms. If N = 10, this way of evaluating the matrix–vector product is many orders of magnitude less costly than multiplying rows of the Hamiltonian by the vector (for which the cost scales as 4^{2 N_S}).

It is quite simple to use the Lanczos algorithm to compute the ground-state energy. It is also possible to use Lanczos to determine the ground-state wavefunction, but this is a bit trickier. If one wishes only eigenvalues, the loss of orthogonality of the Lanczos vectors, due to numerical error, manifests itself only by the introduction of copies and spurious eigenvalues. Both are easy to deal with [12]. If one wishes eigenvectors, one must be careful to properly select the number of Lanczos vectors to combine [18,19]. Because we need to solve linear equations to obtain polarizabilities anyway, it is simple, knowing the ground-state energy, to also compute the ground-state wavefunction by solving linear equations. This we do using the conjugate gradient algorithm [20] and the ideas explained above for evaluating matrix–vector products.

4.2. Computing derivatives of wavefunctions by solving linear equations

To calculate the first and higher derivatives of wavefunctions required to obtain polarizabilities we must solve linear equations (see Eq. (4) and Ref. [1]). In all cases the matrix is symmetric positive definite, and we used the conjugate gradient algorithm [20]. To compute polarizabilities for excited states it would be necessary to solve linear equations with a matrix that is not positive definite. In this case the conjugate gradient algorithm might work poorly, but MINRES or QMR [20] could be used. We did some calculations with QMR to test the efficiency of diagonal preconditioning; in some cases it helped significantly. Other scientists have also solved linear equations to compute optical properties.
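Both steps — a Lanczos-type iteration for the ground state and conjugate gradient for the linear systems — need only a matrix–vector product routine. A minimal matrix-free sketch using SciPy's LinearOperator is given below; the operator is a generic symmetric tridiagonal stand-in, not the Hubbard matvec of Eqs. (10) and (11):

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, eigsh, cg

n = 512
rng = np.random.default_rng(2)
diag = np.sort(rng.uniform(1.0, 10.0, n))   # stand-in diagonal data
off = 0.1                                   # stand-in off-diagonal coupling

def matvec(v):
    # A structured matvec: diagonal term plus nearest-neighbour coupling,
    # evaluated without ever forming the n x n matrix.
    w = diag * v
    w[:-1] += off * v[1:]
    w[1:] += off * v[:-1]
    return w

H = LinearOperator((n, n), matvec=matvec, dtype=float)

# Lanczos-type iteration for the lowest eigenpair (the "ground state").
E0, psi0 = eigsh(H, k=1, which='SA')
E0, psi0 = E0[0], psi0[:, 0]

# Conjugate gradient for a shifted, symmetric positive-definite system
# (H - s) x = b, with the shift s below the spectrum of H.
s = 0.5
shifted = LinearOperator((n, n), matvec=lambda v: matvec(v) - s * v, dtype=float)
b = rng.standard_normal(n)
x, info = cg(shifted, b)

print(E0, info)  # info == 0 signals CG convergence
```

The design point mirrors the Letter: `eigsh` and `cg` never ask for matrix elements, only for the action of the operator on a vector, so the same `matvec` routine serves both the eigenproblem and the response equations.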


Table 1. Parameter values for the calculated cases. Units: hartree.

Case   E(even atom)   E(odd atom)   U(even atom)   U(odd atom)   T
1      3              0             2              6             1.5
2      3              0             2              5             2
3      1              0             4              7             1
4      2.5            0             4              4             1.43
5      2.25           0             3.5            3.5           1.48

Table 2. Seven atoms: spin conditions, ground energies and derivatives.

Case   n↑   n↓   Ground energy   First derivative   Second derivative   Third derivative   Fourth derivative
1      4    4    6.270           24                 4.816               0                  40.780
2      3    3    4.846           18                 4.601               0                  17.249
3      3    2    2.792           15                 7.341               0                  119.197
4      4    3    0.215           21                 3.523               0                  17.833
5      4    4    1.481           24                 3.992               0                  12.279

Table 3. Eight atoms: spin conditions, ground energies and derivatives.

Case   n↑   n↓   Ground energy   First derivative   Second derivative   Third derivative   Fourth derivative
1      4    4    3.603           27.576             2.674               3.387              53.230
2      4    4    1.646           26.946             3.765               2.760              28.729
3      4    3    1.151           24.209             12.926              9.023              885.876
4      4    3    1.270           23.358             6.536               0.056              19.123
5      4    4    0.650           26.436             4.376               4.011              35.190

Table 4. Nine atoms: spin conditions, ground energies and derivatives.

Case   n↑   n↓   Ground energy   First derivative   Second derivative   Third derivative   Fourth derivative
1      5    5    6.880           40                 9.616               0                  201.467
2      4    4    5.721           32                 9.249               0                  63.359
3      4    3    3.020           28                 15.833              0                  238.102
4      5    4    0.342           36                 7.689               0                  44.558
5      5    5    1.057           40                 7.063               0                  36.521

Table 5. More atoms: spin conditions, ground energies and derivatives.

No. atoms   Case   n↑   n↓   Ground energy   First derivative   Second derivative   Third derivative   Fourth derivative
10          1      5    5    4.328           44.445             4.635               9.441              200.994
10          2      5    5    2.336           43.516             6.915               7.038              102.559
11          1      6    6    7.526           60                 16.778              0                  749.106
11          2      5    5    6.543           50                 16.110              0                  165.937
12          1      6    6    5.050           65.310             7.258               21.876             598.550
12          2      6    6    3.032           64.027             11.402              14.225             285.562

For example, Soos et al. used Gauss–Seidel and conjugate gradient algorithms [21]. They exploit the sparsity of the Hamiltonian matrix. We, on the other hand, take advantage of the direct-product structure of the basis we use: Soos et al. must store all the non-zero elements of the matrix, whereas we store only 4 × 4 matrices.
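The storage advantage is easy to quantify. The following back-of-the-envelope check works through counts of the kind used in this Letter (double-precision vectors over the full 4^{N_S} direct-product basis versus an explicit N_allow × N_allow matrix), here for the 16-atom case quoted in the conclusion:

```python
from math import comb

N = 16                 # atoms
n_up, n_down = 8, 7    # spin-up and spin-down electrons

# Eq. (8): number of allowed basis functions.
n_allow = comb(N, n_up) * comb(N, n_down)

bytes_per = 8          # double precision
GB = 1024**3

vec_len = 4**N                                # full direct-product basis
vectors_gb = 3 * vec_len * bytes_per / GB     # a few Lanczos/CG vectors
matrix_gb = n_allow**2 * bytes_per / GB       # explicit Hamiltonian matrix

print(n_allow)             # 147232800
print(round(vectors_gb))   # 96 (GiB; the Letter quotes ~102 GB)
print(f"{matrix_gb:.2e}")  # ~1.6e8, i.e. of order 10^8 GB
```

The ratio between the two storage requirements, roughly six orders of magnitude, is the point of the comparison with Ref. [21]: vectors over a structured basis are affordable where the matrix itself is not.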

5. Results

We used the parameter sets of Ondrechen and Murga (OM), shown in Table 1 [8]. We use hartree units. The atoms of the chain are equally spaced, at z = 0, 1, 2, ....

One set of calculations was done for a chain with 7 atoms. The results shown in Table 2 are very close to those of OM. The first derivative is the dipole moment of the molecule. When the number of atoms is odd the odd derivatives must be exactly zero (but the first derivative is non-zero due to our choice of the origin of the z axis). Error estimates are obtained by doing calculations with different starting vectors. In most cases root-mean-square errors are about 10^{-12} (hartree units).

We have also done calculations for larger chains. Results with 8 atoms are given in Table 3, results with 9 atoms in Table 4, and results with 10, 11, and 12 atoms in Table 5. In all of these calculations the iterative linear solvers are terminated when the norm of the residual is less than 10^{-13}. We note that errors in the derivatives are cumulative, i.e., higher derivatives have larger errors than lower derivatives. Smaller errors can be achieved by using a stricter termination criterion.


6. Conclusion

The method we present in this Letter is based on two ideas. First, polarizabilities and hyperpolarizabilities can be computed by solving linear equations, without either solving the Schroedinger equation many times, as in the finite-field approach, or computing many solutions of the Schroedinger equation, as in the sum-over-states approach. This is explained well in Ref. [1]. The second idea makes it possible to use the first when the number of basis functions is large. In this case the Hamiltonian matrix is so large that it cannot be stored in memory; this makes the use of iterative linear algebra methods imperative. In this Letter, we present a technique for evaluating the matrix–vector products required to use iterative methods to compute polarizabilities (and hyperpolarizabilities) that obviates the need to build and store large matrices. It is necessary only to store a few vectors in memory. For the Hubbard model we use, this essentially removes the memory bottleneck. For example, for a problem with 16 atoms, 7 spin-down electrons, and 8 spin-up electrons, one needs only 102 GB to store the three vectors required to use the method proposed in this Letter, but to store the N_allow × N_allow matrix one would need 173 × 10^6 GB. The CPU cost of the calculations could easily be reduced by parallelizing the matrix–vector product. Very similar ideas could be applied to models with many different kinds of atoms and, importantly, in more than 1D. The method we propose is applicable to Hamiltonians that are written as a sum of products of creation and annihilation operators. Many Hamiltonians of quantum chemistry are of this form [22].

Acknowledgments

This work has been supported by the Natural Sciences and Engineering Research Council of Canada and the Deutscher Akademischer Austauschdienst.

References

[1] C.E. Dykstra, Ab initio Calculation of the Structures and Properties of Molecules, Elsevier, Amsterdam, 1988.
[2] D. Bishop, Rev. Mod. Phys. 62 (1990) 343.
[3] B. Kirtman, D.M. Bishop, Chem. Phys. Lett. 175 (1990) 601.
[4] D.M. Bishop, B. Kirtman, J. Chem. Phys. 95 (1991) 2646.
[5] D.M. Bishop, B. Kirtman, J. Chem. Phys. 97 (1992) 5255.
[6] A.D. Buckingham, J. Chem. Phys. 36 (1962) 3096.
[7] C.E. Dykstra, S.-Y. Liu, D.J. Malik, Adv. Chem. Phys. 75 (1990) 37.
[8] L.F. Murga, M.J. Ondrechen, J. Comput. Chem. 22 (2001) 468.
[9] J. Hubbard, Proc. R. Soc. Lond. A 276 (1963) 238.
[10] I.A. Shehadi, L.F. Murga, M.J. Ondrechen, J. Linderberg, Chem. Phys. Lett. 291 (1998) 325.
[11] C.C. Paige, J. Inst. Math. Appl. 10 (1972) 373.
[12] J.K. Cullum, R.A. Willoughby, Lanczos Algorithms for Large Symmetric Eigenvalue Computations, vol. 2, Birkhäuser, Boston, 1985.
[13] M.J. Bramley, T. Carrington Jr., J. Chem. Phys. 99 (1993) 8519.
[14] N.M. Poulin, M.J. Bramley, T. Carrington Jr., H. Kjaergaard, B. Henry, J. Chem. Phys. 104 (1996) 7807.
[15] S.-W. Huang, T. Carrington Jr., Chem. Phys. Lett. 312 (1999) 311.
[16] X.-G. Wang, T. Carrington Jr., J. Chem. Phys. 121 (2004) 2937.
[17] X.-G. Wang, T. Carrington Jr., J. Chem. Phys. 115 (2001) 9781.
[18] X.-G. Wang, T. Carrington Jr., J. Chem. Phys. 118 (2003) 6946.
[19] X.-G. Wang, T. Carrington Jr., J. Chem. Phys. 119 (2003) 101.
[20] G. Golub, C. van Loan, Matrix Computations, Wiley, New York, 1985.
[21] Z.G. Soos, S. Ramasesha, J. Chem. Phys. 90 (1989) 1067.
[22] P. Jorgensen, J. Simons, Second Quantization-Based Methods in Quantum Chemistry, Academic Press, Toronto, 1981.
