
Decision Aiding

Singular value decomposition in AHP

S.I. Gass a, T. Rapcsák b,*

a Robert H. Smith School of Business, University of Maryland, College Park, MD 20472, USA
b Laboratory of Operations Research and Decision Systems, Computer and Automation Institute, Hungarian Academy of Sciences, P.O. Box 63, Budapest, Hungary

Received 6 November 2001; accepted 25 September 2002

Abstract

The analytic hierarchy process (AHP) has been accepted as a leading multiattribute decision-aiding model both by practitioners and academics. The foundation of the AHP is Saaty's eigenvector method (EM) and the associated inconsistency index, which are based on the largest eigenvalue and associated eigenvector of an $(n \times n)$ positive reciprocal matrix. The elements of the matrix are the decision maker's (DM) numerical estimates of the preference of $n$ alternatives with respect to a criterion when they are compared pairwise using the 1–9 AHP fundamental comparison scale. The components of the normalized eigenvector provide approximations of the unknown weights of the criteria (alternatives), and the deviation of the largest eigenvalue from $n$ yields a measure of how inconsistent the DM is with respect to the pairwise comparisons. Singular value decomposition (SVD) is an important tool of matrix algebra that has been applied to a number of areas, e.g., principal component analysis, canonical correlation in statistics, the determination of the Moore–Penrose generalized inverse, and low rank approximation of matrices. In this paper, using the SVD and the theory of low rank approximation of a (pairwise comparison) matrix, we offer a new approach for determining the associated weights. We prove that the rank one left and right singular vectors, that is, the vectors associated with the largest singular value, yield theoretically justified weights. We suggest that an inconsistency measure for these weights is the Frobenius norm of the difference between the original pairwise comparison matrix and the one formed by the SVD-determined weights. How this measure can be applied in practice as a means of measuring the confidence the DM should place in the SVD weights is still an open question. We illustrate the SVD approach and compare it to the EM for some numerical examples.
© 2002 Elsevier B.V. All rights reserved.
Keywords: Analytic hierarchy process; Eigenvector method; Singular value decomposition; Consistency measures

* Corresponding author. E-mail address: [email protected] (T. Rapcsák).

1. Introduction

The analytic hierarchy process (AHP) [23] has been accepted as a leading multiattribute decision model both by practitioners and academics. The AHP can solve decision problems in various fields by the prioritization of alternatives. The heart of the most familiar version of the AHP is Saaty's eigenvector method (EM), which approximates an $(n \times n)$ positive reciprocal matrix $A = (a_{ij})$, $a_{ji} = 1/a_{ij}$, $i, j = 1, \ldots, n$, by a vector $w = (w_i) \in \mathbb{R}^n_+$, where $\mathbb{R}^n_+$ is the positive orthant of the $n$-dimensional

0377-2217/$ - see front matter © 2002 Elsevier B.V. All rights reserved. doi:10.1016/S0377-2217(02)00755-5


S.I. Gass, T. Rapcsak / European Journal of Operational Research 154 (2004) 573–584

Euclidean space $\mathbb{R}^n$, such that the matrix of the ratios $(w_i/w_j)$, $i, j = 1, \ldots, n$, is, in some sense, the best approximation to $A$. It is emphasized that the EM results in a priority vector $w = (w_i) \in \mathbb{R}^n_+$ and an inconsistency number $\lambda_{\max}$. However, the EM has been criticized from both the prioritization and the consistency points of view, and several new techniques have been developed.

There are two different approaches in AHP: deterministic and statistical (or stochastic). In the statistical approach, it is assumed that the preference judgments are random variables associated with an unknown probability distribution [3]. In the deterministic approach, the underlying assumption is that one can observe the preferences with certainty, and the only uncertainty lies in the elicitation of these preferences, which gives rise to inconsistency. The EM belongs to the deterministic approach. Other approaches include the least-squares method (LSM) and the weighted least-squares method (WLSM) proposed by Chu et al. [6]; the logarithmic least-squares method (LLSM), introduced and considered as a model for a regression estimation of priorities by De Jong [8] and Crawford and Williams [7]; and the chi-square method (CSM) [15], all of which are distance-minimizing methods defined in Table 1; as well as the logarithmic goal programming method (LGPM) [5] and the fuzzy programming method (FPM) [20]. The LSM was studied by Jensen [15,16], the WLSM by Blankmeyer [4] and the LLSM by Barzilai et al. [1]. In the case of consistent matrices, all these approaches yield the same solution. From a theoretical point of view, the LSM solution seems to be the most desirable, but the corresponding nonlinear optimization problem has no special tractable form and generally has multiple solutions; it is therefore difficult to solve numerically. To eliminate this drawback of the LSM, the other methods introduce various relaxations and modifications.

Some comparative analyses of the above scaling methods can be found in the literature, but the conclusions are often contradictory. Some authors, e.g., Crawford and Williams [7], Takeda et al. [31] and Barzilai [2], assert that the LLSM outperforms the EM. Saaty and Vargas [26,27] claim that the EM is superior to the LLSM. In [24], Saaty compares the EM with the LLSM and states that the EM captures the inconsistency of dominance, while the LLSM minimizes a distance function and searches for symmetry without an explicit attempt to capture dominance. But inconsistent dominance judgments are asymmetric. In this case, the EM and the LLSM give rise to different derived scales and, at times, to a different choice. Moreover, he summarized 10 reasons for not using the LLSM instead of the EM. In [6], the WLSM is compared with the EM. The WLSM has the advantage of involving only the solution of a system of linear algebraic equations and is thus conceptually easier to understand than the EM. Comparisons show that sums for the WLSM are less than those for the EM, and the dominant weight seems to be larger for the WLSM. An excellent comparative analysis of the commonly used methods for deriving priorities, the EM, the LSM, the WLSM and the LLSM, is given in Golany and Kress [12]. They conclude that no prioritization method is superior to the others in all cases.

Throughout Saaty's work only the right eigenvectors are used. However, theoretically, the use of left eigenvectors should be equally justified. This

Table 1
Distance-minimizing methods for prioritization

Method   Minimand                                                                         Constraints
LSM      $\sum_{i=1}^{n} \sum_{j=1}^{n} \left( a_{ij} - \frac{w_i}{w_j} \right)^2$        $\sum_{i=1}^{n} w_i = 1$, $w \in \mathbb{R}^n_+$
WLSM     $\sum_{i=1}^{n} \sum_{j=1}^{n} (a_{ij} w_j - w_i)^2$                             $\sum_{i=1}^{n} w_i = 1$, $w \in \mathbb{R}^n_+$
LLSM     $\sum_{i=1}^{n} \sum_{j=1}^{n} (\log a_{ij} - \log w_i + \log w_j)^2$            $\prod_{i=1}^{n} w_i = 1$, $w \in \mathbb{R}^n_+$
CSM      $\sum_{i=1}^{n} \sum_{j=1}^{n} \left( a_{ij} - \frac{w_i}{w_j} \right)^2 \frac{w_j}{w_i}$   $\sum_{i=1}^{n} w_i = 1$, $w \in \mathbb{R}^n_+$
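Of the methods in Table 1, the LLSM is the one with a closed-form solution: its minimizer under the normalization $\prod_i w_i = 1$ is the vector of row geometric means (a standard fact due to Crawford and Williams [7]). A minimal NumPy sketch, not from the paper, with the weights rescaled to sum to one for comparability with the other rows of the table:

```python
import numpy as np

def llsm_weights(A):
    """LLSM priorities: the minimizer of sum (log a_ij - log w_i + log w_j)^2
    is the vector of row geometric means; rescaled here to sum to 1."""
    g = np.exp(np.log(A).mean(axis=1))   # row geometric means
    return g / g.sum()

# On a consistent matrix a_ij = w_i / w_j, every method in Table 1 recovers w.
w_true = np.array([0.5, 0.3, 0.2])
A = np.outer(w_true, 1.0 / w_true)
w = llsm_weights(A)
```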


problem was first observed by Johnson et al. [17]. In [9], a new method, known as the modified AHP (MAHP), was claimed to reduce the right and left eigenvector inconsistency problem effectively. In [32], an attempt was made to compare the AHP and the MAHP by using 42 models comprising 294 reciprocal matrices. It was revealed that the MAHP is not better than the AHP. The geometry of the scaling problem is studied in Euclidean spaces in [34], and the scaling problem under the multiplicative and additive normalizations of the weights, in order to obtain a unique solution, in [2].

Singular value decomposition (SVD) is an important tool of matrix algebra that has been applied to a number of areas, for example, principal component analysis, canonical correlation in statistics, the determination of the Moore–Penrose generalized inverse, and low rank approximation of matrices; see Kennedy and Gentle [18], Eckart and Young [10], and Greenacre [13]. The matrix algebra and computational aspects of the SVD are discussed in Kennedy and Gentle [18], and statistical applications are described in Greenacre [13]. The aim of this paper is to show that the SVD seems to be a good tool to provide a theoretically well-founded solution in AHP both for the scaling and the consistency problems. First, the main features of the SVD will be presented; then the SVD of pairwise comparison matrices will be studied by developing further the results of Gass and Rapcsák [11], in order to derive the weight vector in a convenient and consistent way in AHP. Finally, some examples will be presented and the EM and SVD solutions will be compared.

2. EM, consistency index and ratio

A basic tool in the AHP methodology is the pairwise comparison method [23]. Here, the preferences of the DM are represented by the pairwise comparison matrices for a set of attributes and for all the alternatives with respect to every attribute. The pairwise comparisons by the DM are scaled from 1 to 9 and arrayed in $n \times n$ matrices, where $n$ denotes either the number of attributes or the number of alternatives. The elements $a_{ij}$, $i, j = 1, \ldots, n$, are the estimates of the dominance of object $i$ over object $j$, and satisfy $a_{ij} > 0$, $a_{ij} = 1/a_{ji}$, $i, j = 1, \ldots, n$. Thus, the pairwise comparison matrices are positive reciprocal matrices. If $A$ is such a matrix, the priority vector obtained by the EM is the normalized principal right eigenvector corresponding to the maximum eigenvalue $\lambda_{\max}$ of $A$. If $A$ is a consistent matrix, i.e., $a_{ij} a_{jk} = a_{ik}$ for all indices $i, j, k$, then $A$ is of rank one and $\lambda_{\max} = n$; moreover, a positive reciprocal matrix is consistent iff $\lambda_{\max} = n$. In practice, however, $A$ is generally inconsistent, and $\lambda_{\max} \ge n$ for a positive reciprocal matrix [23]. Further, since small variations in the $a_{ij}$ cause small variations in $\lambda_{\max}$, the difference $(\lambda_{\max} - n)$ can be used as a measure of consistency; Saaty normalized it to the consistency index $CI = (\lambda_{\max} - n)/(n - 1)$. Once the CI has been calculated, the result is compared with the same index of randomly generated reciprocal matrices from the scale 1 to 9. Table 2 [25] contains the average CI of randomly generated reciprocal matrices from the scale 1 to 9, called the random index (RI). The ratio of the CI to the average RI for a matrix of the same order is called the consistency ratio (CR). An argument can be made that the CR should be about 0.1 or less for an acceptable decision. If the CR is small, the estimates are accepted; otherwise, an attempt is made to improve consistency by obtaining additional information. In [33], a convergent method is given which results in a new positive reciprocal matrix with a reduced maximum eigenvalue and an improved CI. A disadvantage of this approach is the change of the judgments

Table 2
Random index (RI)

n    1   2   3     4     5     6     7     8     9     10    11    12    13    14    15
RI   0   0   0.52  0.89  1.11  1.25  1.35  1.40  1.45  1.49  1.51  1.54  1.56  1.57  1.58


required of the DM. In [22], a new consistency threshold is presented.
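The EM pipeline of this section, the principal eigenvector, the CI from $\lambda_{\max}$, and the CR via the RI of Table 2, can be sketched in a few lines. The following is an illustrative sketch (not from the paper), assuming NumPy; the judgment matrix is a hypothetical example:

```python
import numpy as np

def em_weights(A):
    """Saaty's EM: the principal right eigenvector of A, normalized to sum to 1.
    Returns (lambda_max, w)."""
    eigvals, eigvecs = np.linalg.eig(A)
    k = np.argmax(eigvals.real)            # lambda_max of a positive matrix is real (Perron)
    w = np.abs(eigvecs[:, k].real)
    return eigvals[k].real, w / w.sum()

def consistency_ratio(A):
    """CI = (lambda_max - n)/(n - 1); CR = CI / RI(n), RI values from Table 2."""
    RI = {3: 0.52, 4: 0.89, 5: 1.11, 6: 1.25, 7: 1.35, 8: 1.40, 9: 1.45}
    n = A.shape[0]
    lam, _ = em_weights(A)
    ci = (lam - n) / (n - 1)
    return ci, ci / RI[n]

# A mildly inconsistent 3x3 pairwise comparison matrix (hypothetical judgments).
A = np.array([[1.0, 2.0, 5.0],
              [0.5, 1.0, 3.0],
              [0.2, 1/3, 1.0]])
lam, w = em_weights(A)
ci, cr = consistency_ratio(A)
```

Since $a_{13} = 5$ while consistency would require $a_{12} a_{23} = 6$, the matrix is inconsistent but mildly so, and its CR falls well under the 0.1 acceptance threshold.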

3. Singular value decomposition

The SVD of a general matrix $A$ is a transformation into a product of three matrices, each of which has a simple special form and geometric interpretation. This SVD representation is given by the following theorem [13].

Theorem 3.1. Any real $(m \times n)$ matrix $A$ of rank $k$ $(k \le \min(m, n))$ can be expressed in the form

$$A = U D V^T, \qquad (3.1)$$

where $D$ is a $(k \times k)$ diagonal matrix with positive diagonal elements $\alpha_1, \ldots, \alpha_k$, $U$ is an $(m \times k)$ matrix and $V$ is an $(n \times k)$ matrix such that $U^T U = I$, $V^T V = I$, i.e., the columns of $U$ and $V$ are orthonormal in the Euclidean sense.

An equivalent formulation of (3.1) in terms of diads is

$$A = \sum_{i=1}^{k} \alpha_i u_i v_i^T, \qquad (3.2)$$

where $u_1, \ldots, u_k$ and $v_1, \ldots, v_k$ are the columns of $U$ and $V$, respectively. The diagonal numbers $\alpha_i$ of $D$ are called singular values, while the vectors $u_i$ and $v_i$, $i = 1, \ldots, k$, are termed the left and right singular vectors, respectively. The left and right singular vectors form an orthonormal basis for the columns and rows of $A$ in the $m$-dimensional and $n$-dimensional spaces, respectively. If the singular values $\alpha_{k^*+1}, \ldots, \alpha_k$ are small compared to $\alpha_1, \ldots, \alpha_{k^*}$ for some $k^* < k$, then by dropping the last $k - k^*$ terms of the right-hand side of (3.2), a good approximation of $A$ is obtained by a rank $k^*$ matrix.

One of the most frequently used matrix norms in numerical analysis is the Frobenius norm (F-norm) given by

$$\|A\|_F = \left( \sum_{i=1}^{m} \sum_{j=1}^{n} a_{ij}^2 \right)^{1/2}. \qquad (3.3)$$

The theorem of low rank approximation, first stated and proved by Eckart and Young [10], is as follows [13].

Theorem 3.2. Let

$$A_{[k^*]} = \sum_{i=1}^{k^*} \alpha_i u_i v_i^T \qquad (3.4)$$

be the $(m \times n)$ matrix of rank $k^*$ formed from the largest $k^*$ singular values and the corresponding singular vectors of $A$. Then, $A_{[k^*]}$ is the rank $k^*$ least squares approximation of $A$ in that it minimizes the function

$$\|A - X\|_F^2 = \sum_{i=1}^{m} \sum_{j=1}^{n} (a_{ij} - x_{ij})^2 \qquad (3.5)$$

over all matrices $X$ of rank $k^*$ or less.

Let the 2-norm of matrices be given by the formula

$$\|A\|_2 = \sup_{\|x\| = 1} \|Ax\|. \qquad (3.6)$$

Remark 3.1. The matrix $A_{[k^*]}$ is the rank $k^*$ least squares approximation of $A$ with respect to the 2-norm of matrices as well.

Theorem 3.3 [30]. Let $A$ be an $(m \times n)$ matrix having singular values $\alpha_1 \ge \alpha_2 \ge \cdots \ge \alpha_n$. Then,

$$\|A\|_2 = \alpha_1 \quad \text{and} \quad \|A\|_F^2 = \alpha_1^2 + \alpha_2^2 + \cdots + \alpha_n^2.$$

Theorem 3.3 shows that, for measuring consistency, the Frobenius norm of matrices seems to be better than the 2-norm, because the complete matrix is represented in it.
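Theorems 3.2 and 3.3 can be verified numerically; the sketch below (not from the paper, assuming NumPy) builds the best rank one approximation of an arbitrary positive matrix and checks both norm identities:

```python
import numpy as np

# A generic positive matrix; the entries are arbitrary, only the rank structure matters.
rng = np.random.default_rng(0)
A = rng.random((5, 4)) + 0.1

U, s, Vt = np.linalg.svd(A, full_matrices=False)   # A = U diag(s) V^T, s non-increasing

# Theorem 3.3: the 2-norm is the largest singular value, and the squared
# Frobenius norm is the sum of the squared singular values.
two_norm = np.linalg.norm(A, 2)
fro2 = np.linalg.norm(A, 'fro')**2

# Theorem 3.2 (Eckart-Young): the diad of the largest singular triplet minimizes
# ||A - X||_F over rank one matrices X, and the squared error equals the sum of
# the discarded squared singular values (cf. (4.5) below).
A1 = s[0] * np.outer(U[:, 0], Vt[0, :])
err2 = np.linalg.norm(A - A1, 'fro')**2
```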

4. Singular value decomposition of pairwise comparison matrices

In this section, the SVD of pairwise comparison matrices is discussed.

Theorem 4.1 [11]. The SVD of a positive, consistent matrix $A$ is the diad

$$A = \frac{c_1}{c_2} \begin{pmatrix} c_2 w_1 \\ c_2 w_2 \\ \vdots \\ c_2 w_n \end{pmatrix} \left( \frac{1}{c_1 w_1}, \frac{1}{c_1 w_2}, \ldots, \frac{1}{c_1 w_n} \right), \qquad w = (w_i) \in \mathbb{R}^n_+, \qquad (4.1)$$

where $c_1$ and $c_2$ are positive constants such that $c_1^2 = \sum_{i=1}^{n} 1/w_i^2$ and $c_2^2 = 1/\sum_{i=1}^{n} w_i^2$; the right singular vector is equal to the left eigenvector multiplied by a normalizing constant; the left singular vector is equal to the right eigenvector multiplied by a second normalizing constant; $\mathbb{R}^n_+$ is the positive orthant; and $c_1/c_2$ is the only singular value of $A$.

In AHP, the values $w_i$, $i = 1, \ldots, n$, are integers belonging to the interval $[1, 9]$.

Theorem 4.2. The upper bound of the Frobenius norm of an $n$-dimensional, positive and consistent matrix $A$ in AHP is equal to $(41/9)n$.

Proof. If an $n$-dimensional positive matrix $A$ is consistent, then by Theorems 3.3 and 4.1, the square of the Frobenius norm of $A$ is equal to

$$\|A\|_F^2 = \frac{c_1^2}{c_2^2} = \sum_{j=1}^{n} \frac{1}{w_j^2} \sum_{j=1}^{n} w_j^2, \qquad w \in \mathbb{R}^n_+, \qquad (4.2)$$

and the integers $w_i$, $i = 1, \ldots, n$, belong to the interval $[1, 9]$. By the Schweitzer inequality [21,28],

$$\left( \frac{1}{n} \sum_{j=1}^{n} a_j \right) \left( \frac{1}{n} \sum_{j=1}^{n} \frac{1}{a_j} \right) \le \frac{(M + m)^2}{4Mm}, \qquad (4.3)$$

where $0 < m \le a_j \le M$ for $j = 1, \ldots, n$. If $a_j = w_j^2$, $j = 1, \ldots, n$, then $m = 1$, $M = 81$, and by the Schweitzer inequality,

$$\|A\|_F^2 = \sum_{j=1}^{n} \frac{1}{w_j^2} \sum_{j=1}^{n} w_j^2 \le \frac{82^2}{4 \cdot 81} n^2,$$

from which we have

$$\|A\|_F \le \frac{82}{2 \cdot 9} n = \frac{41}{9} n. \qquad (4.4) \quad \square$$
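As a quick numerical check of Theorem 4.2 (a sketch assuming NumPy; the weight vector is a hypothetical choice of integers from [1, 9]):

```python
import numpy as np

w = np.array([1.0, 2.0, 5.0, 9.0])   # hypothetical integer weights from [1, 9]
A = np.outer(w, 1.0 / w)             # consistent matrix a_ij = w_i / w_j
n = len(w)

fro = np.linalg.norm(A, 'fro')       # equals sqrt(sum w_j^2 * sum 1/w_j^2), formula (4.2)
bound = 41.0 / 9.0 * n               # Theorem 4.2 upper bound
```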

By Theorem 4.2, an estimate is obtained for the upper bound of the Frobenius norm of experimental pairwise comparison matrices that depends only on their size. Standard [29] shows that the numbers 6, 7, 8, 9 are each used less than 10% of the time in real-world problems; thus we would expect the maximum of the matrix elements $a_{ij}$, $i, j = 1, \ldots, n$, to be 5 or less 75% of the time. If $M = \max_{i,j} a_{ij}^2$ in formula (4.3), then a tighter bound is obtained in formula (4.4). Table 3 and Fig. 1 show the Frobenius bounds of matrices as a function of the maximum element.

Let $A_{[1]}$ be the $(n \times n)$ matrix of rank one formed from the largest singular value and the corresponding left and right singular vectors of $A$.

Theorem 4.3. Let $A$ be an $(n \times n)$ matrix of rank $k$. Then,

$$\|A - A_{[1]}\|_F^2 = \sum_{i=2}^{k} \alpha_i^2, \qquad (4.5)$$

where the $\alpha_i$ are the singular values.

Proof. By formula (3.2),

Table 3
Frobenius bounds for $n \times n$ consistent matrices with max $a_{ij}$

n \ max a_ij      2         3         4         5         6         7         8         9
3              3.7500    5.0000    6.3750    7.8000    9.2500   10.7143   12.1875   13.6667
4              5.0000    6.6667    8.5000   10.4000   12.3333   14.2857   16.2500   18.2222
5              6.2500    8.3333   10.6250   13.0000   15.4167   17.8571   20.3125   22.7778
6              7.5000   10.0000   12.7500   15.6000   18.5000   21.4286   24.3750   27.3333
7              8.7500   11.6667   14.8750   18.2000   21.5833   25.0000   28.4375   31.8889
8             10.0000   13.3333   17.0000   20.8000   24.6667   28.5714   32.5000   36.4444
9             11.2500   15.0000   19.1250   23.4000   27.7500   32.1429   36.5625   41.0000

Fig. 1. Frobenius bounds as a function of the maximum element.

$$A - A_{[1]} = \sum_{i=2}^{k} \alpha_i u_i v_i^T. \qquad (4.6)$$

Based on the orthogonality properties $U^T U = I$ and $V^T V = I$, as well as the definition of the Frobenius norm,

$$(A - A_{[1]})(A - A_{[1]})^T = \left( \sum_{i=2}^{k} \alpha_i u_i v_i^T \right) \left( \sum_{i=2}^{k} \alpha_i v_i u_i^T \right) = \sum_{i=2}^{k} \alpha_i^2 u_i u_i^T,$$

and since

$$\|A - X\|_F^2 = \sum_{i=1}^{n} \sum_{j=1}^{n} (a_{ij} - x_{ij})^2 = \mathrm{trace}\,(A - X)(A - X)^T$$

for any $(n \times n)$ matrices $X$, the statement is obtained. $\square$

Although the rank one matrix $A_{[1]}$ is not a pairwise comparison matrix, it is the closest rank one matrix to $A$ in the least-squares sense. By Standard [29], the correlation between the CI and $\|A - A_{[1]}\|_F$ is good, and $\|A - A_{[1]}\|_F$ could be used to gauge consistency. The rank one matrix does not, however, appear to be useful in determining which element of $A$ should be adjusted to improve the consistency.

5. Deriving weights by SVD in AHP

In this part, it will be shown how to derive uniquely a positive weight vector based on the SVD in AHP. Though Saaty reduced this problem to a matrix eigenvalue problem, various techniques have been proposed and analyzed for approximating an inconsistent pairwise comparison matrix $A$ by a matrix of rank one. The LSM minimizes the Frobenius norm

$$\sum_{i=1}^{n} \sum_{j=1}^{n} \left( a_{ij} - \frac{w_i}{w_j} \right)^2 \qquad (5.1)$$

under some normalizing constraint on the weight vector, and tends to produce ratios $(w_i/w_j)$, for all $i, j$, within the range of the matrix $A$. From a theoretical point of view, the LSM seems to be more desirable than the others; on the other hand, the method requires nonlinear computations, so it is much more difficult to solve the problem numerically, and it may have multiple solutions if $A$ is very inconsistent. Here, this idea will be developed further.

Theorem 5.1. Let a pairwise comparison matrix $A$ be given, and let $u$ and $v$ be the left and right singular vectors belonging to the largest singular value of $A$, respectively. Then, the priority of the DM, based on $A$, can be approximated by the uniquely determined, normalized positive weight vector

$$w_i = \frac{u_i + \dfrac{1}{v_i}}{\sum_{j=1}^{n} \left( u_j + \dfrac{1}{v_j} \right)}, \qquad i = 1, \ldots, n, \qquad (5.2)$$

which is obtained by solving a distance-minimization problem and a measure-minimization problem, each with a unique solution.

Proof. By Theorems 3.1 and 3.2, the diad

$$A_{[1]} = \alpha_1 u v^T \qquad (5.3)$$

is the best rank one approximation of the matrix $A$ with respect to the Frobenius norm, i.e., the matrix $A_{[1]}$ is the global optimal solution of the distance-minimization problem

$$\min \sum_{i=1}^{n} \sum_{j=1}^{n} (a_{ij} - x_{ij})^2 \qquad (5.4)$$

over all matrices $X$ of rank one. By Remark 3.1, $A_{[1]}$ is the best rank one approximation of $A$ with respect to the 2-norm of matrices as well.

Next, we show that the rank one approximation $A_{[1]}$ of a pairwise comparison matrix $A$ is unique. By Theorem 3.1, any real $(n \times n)$ matrix $A$ of rank $k$ can be expressed in the form

$$A = U D V^T, \qquad (5.5)$$

where $D$ is a $(k \times k)$ diagonal matrix with positive diagonal elements $\alpha_1, \ldots, \alpha_k$, and $U$ and $V$ are $(n \times k)$ matrices such that $U^T U = I$ and $V^T V = I$. Based on formula (5.5), we obtain

$$A A^T u = \alpha_1^2 u, \qquad A^T A v = \alpha_1^2 v, \qquad (5.6)$$

from which it follows that the left and right singular vectors $u$ and $v$ belonging to the largest singular value $\alpha_1$ are the principal eigenvectors of the matrices $A A^T$ and $A^T A$, respectively. Since $A$ is a pairwise comparison matrix, the elements of the matrices $A$, $A A^T$ and $A^T A$ are positive numbers. By the Perron theorem, the maximal eigenvalue $\alpha_1^2$ of the matrices $A A^T$ and $A^T A$ is a simple root of their characteristic equations, and the corresponding eigenvector has only positive components and is unique up to a scalar multiple. Taking the orthonormality conditions of the SVD into account, problem (5.4) has only one solution.

In AHP, the purpose is not to find the best rank one approximation of $A$, but to determine a good approximation of the priority vector. In order to solve this problem, a strictly convex measure-minimization problem can be solved. Since the components of the vectors $u$ and $v$ are positive numbers, the measure

$$D(x \| y) = \sum_{i=1}^{n} x_i \log \frac{x_i}{y_i} - \sum_{i=1}^{n} x_i + \sum_{i=1}^{n} y_i, \qquad x, y \in \mathbb{R}^n_+, \qquad (5.7)$$

defined on the positive orthant $\mathbb{R}^n_+$, adapted from Kullback and Leibler's I-divergence information-theoretic approach for measuring the difference of two discrete probability distributions, used in statistics and in decision making (see, e.g., [19]), is chosen.

Let $e^T = (1, \ldots, 1)$ and $(1/v)^T = (1/v_1, \ldots, 1/v_n)$, $v \in \mathbb{R}^n_+$. Consider the following optimization problem:

$$\min \; D(u \| w) + D\!\left( \frac{1}{v} \,\Big\|\, w \right), \qquad e^T w = 1, \quad w \in \mathbb{R}^n_+, \qquad (5.8)$$

which is equivalent to

$$\min \; f(w) = \sum_{i=1}^{n} u_i \log \frac{u_i}{w_i} + \sum_{i=1}^{n} \frac{1}{v_i} \log \frac{1}{v_i w_i}, \qquad h(w) = \sum_{i=1}^{n} w_i - 1 = 0, \quad w \in \mathbb{R}^n_+. \qquad (5.9)$$

By the Lagrange multiplier rule, there exists a real number $\mu \in \mathbb{R}$ at a stationary point $w^* \in \mathbb{R}^n_+$ such that

$$\nabla f(w^*) = \mu \nabla h(w^*), \qquad e^T w^* = 1, \quad w^* \in \mathbb{R}^n_+. \qquad (5.10)$$

By (5.10), the following system of equations holds at the stationary point $w^*$:

$$\frac{u_i + \dfrac{1}{v_i}}{w_i} = \mu, \qquad i = 1, \ldots, n, \qquad \sum_{i=1}^{n} w_i = 1, \quad w \in \mathbb{R}^n_+. \qquad (5.11)$$

A consequence of (5.11) is that

$$\frac{w_j}{w_i} = \frac{u_j + \dfrac{1}{v_j}}{u_i + \dfrac{1}{v_i}}, \qquad \forall i, j, \qquad e^T w = 1, \quad w \in \mathbb{R}^n_+, \qquad (5.12)$$

and by summing the terms with respect to the index $j$, we obtain

$$\frac{1}{u_i + \dfrac{1}{v_i}} \sum_{j=1}^{n} \left( u_j + \frac{1}{v_j} \right) = \frac{1}{w_i}, \qquad i = 1, \ldots, n, \qquad (5.13)$$

from which

$$w_i = \frac{u_i + \dfrac{1}{v_i}}{\sum_{j=1}^{n} \left( u_j + \dfrac{1}{v_j} \right)}, \qquad i = 1, \ldots, n. \qquad (5.14)$$

Let

$$f_i(w_i) = u_i \log \frac{u_i}{w_i} + \frac{1}{v_i} \log \frac{1}{v_i w_i}, \qquad i = 1, \ldots, n, \quad w_i > 0. \qquad (5.15)$$

Then,

$$f_i''(w_i) = \frac{u_i + \dfrac{1}{v_i}}{w_i^2} > 0, \qquad i = 1, \ldots, n; \qquad (5.16)$$

thus, the $f_i$, $i = 1, \ldots, n$, are strictly convex functions, and the measure-minimization problem (5.8), with the separable and strictly convex objective function $f$, has the unique solution (5.14), which consists of positive components. $\square$

Corollary 5.1. For a positive consistent matrix $A$ with left singular vector $u = (u_i)$ and right singular vector $v = (v_i)$, we have

$$w_i = \frac{u_i}{\sum_{j=1}^{n} u_j} = \frac{1/v_i}{\sum_{j=1}^{n} 1/v_j}, \qquad i = 1, \ldots, n. \qquad (5.17)$$

Proof. As $A$ is consistent, it is of rank one and its SVD representation (3.2) is given by $A = \alpha u v^T$, or

$$A = \alpha \begin{pmatrix} u_1 \\ u_2 \\ \vdots \\ u_n \end{pmatrix} (v_1 \; v_2 \; \cdots \; v_n) = \alpha \begin{pmatrix} u_1 v_1 & u_1 v_2 & \cdots & u_1 v_n \\ \vdots & \vdots & \ddots & \vdots \\ u_n v_1 & u_n v_2 & \cdots & u_n v_n \end{pmatrix}, \qquad (5.18)$$

where $\alpha > 0$ is the only non-zero singular value. The matrix $A$, interpreted as an AHP pairwise comparison matrix, is given by

$$A = \begin{pmatrix} w_1/w_1 & w_1/w_2 & \cdots & w_1/w_n \\ w_2/w_1 & w_2/w_2 & \cdots & w_2/w_n \\ \vdots & \vdots & \ddots & \vdots \\ w_n/w_1 & w_n/w_2 & \cdots & w_n/w_n \end{pmatrix}, \qquad (5.19)$$

where $w_i > 0$ and $\sum_{i=1}^{n} w_i = 1$. Since all elements of (5.19) are positive, the elements of (5.18) are also positive, that is, $u = (u_i) > 0$ and $v = (v_i) > 0$. Equating the elements of the first columns of (5.18) and (5.19), we have

$$\alpha u_1 v_1 = w_1/w_1, \quad \alpha u_2 v_1 = w_2/w_1, \quad \ldots, \quad \alpha u_n v_1 = w_n/w_1.$$

Summing the above $n$ equations gives

$$\alpha v_1 \sum_{j=1}^{n} u_j = \sum_{j=1}^{n} \frac{w_j}{w_1} = \frac{1}{w_1},$$

or

$$w_1 = \frac{1}{\alpha v_1 \sum_{j=1}^{n} u_j}.$$

Clearly, for any column $i$,

$$w_i = \frac{1}{\alpha v_i \sum_{j=1}^{n} u_j}, \qquad i = 1, \ldots, n. \qquad (5.20)$$

In a similar fashion, equating the elements of the first rows of (5.18) and (5.19), we have

$$\alpha u_1 v_1 = w_1/w_1, \quad \alpha u_1 v_2 = w_1/w_2, \quad \ldots, \quad \alpha u_1 v_n = w_1/w_n.$$

Inverting the above $n$ equations and summing yields

$$\frac{1}{\alpha u_1} \sum_{j=1}^{n} \frac{1}{v_j} = \frac{1}{w_1},$$

or

$$w_1 = \frac{\alpha u_1}{\sum_{j=1}^{n} \dfrac{1}{v_j}}.$$

Then, for any row $i$, this generalizes to

$$w_i = \frac{\alpha u_i}{\sum_{j=1}^{n} \dfrac{1}{v_j}}, \qquad i = 1, \ldots, n. \qquad (5.21)$$

For the diagonal elements of the matrices (5.18) and (5.19), we have

$$w_i/w_i = \alpha u_i v_i = 1. \qquad (5.22)$$

In (5.20), substituting $\alpha v_i = 1/u_i$ gives

$$w_i = \frac{u_i}{\sum_{j=1}^{n} u_j}, \qquad i = 1, \ldots, n. \qquad (5.23)$$

Similarly, in (5.21), substituting $\alpha u_i = 1/v_i$ yields

$$w_i = \frac{1/v_i}{\sum_{j=1}^{n} 1/v_j}, \qquad i = 1, \ldots, n. \qquad (5.24) \quad \square$$

Expressions (5.23) and (5.24) give the same normalized weights $w_i$ in terms of the left and right singular vectors of the rank one SVD representation of the consistent matrix $A$. These $w_i$ are equal to the $w_i$ obtained by normalizing the eigenvector associated with the maximum (and only nonzero) eigenvalue of $A$. It is emphasized that the SVD approach and the EM give the same result in the case of positive and consistent matrices; the differences arise in the inconsistent cases.

6. Measuring consistency based on the SVD

In this part, it will be shown that the consistency of pairwise comparison matrices can be effectively measured on an absolute scale based on the SVD approach and the Frobenius norm.

Let $A$ be an $n \times n$ pairwise comparison matrix and $\tilde{A}$ an $n \times n$ positive and consistent matrix. A lower bound and an upper bound can be given for the Frobenius norm of the matrix $(A - \tilde{A})$, depending on the norm $\|A\|_F$ and the size of the matrix $A$.

Lemma 6.1. Let $A$ be an $(n \times n)$ matrix of rank $k$. Then,

$$0 \le \|A - \tilde{A}\|_F \le \|A\|_F + \frac{41}{9} n. \qquad (6.1)$$

Proof. It follows from the basic properties of the vector and matrix norms that

$$\big| \|A\|_F - \|\tilde{A}\|_F \big| \le \|A - \tilde{A}\|_F \le \|A\|_F + \|\tilde{A}\|_F. \qquad (6.2)$$

By using Theorem 4.2, the statement is proved. $\square$

Definition 6.1. The consistency measure of an $n \times n$ positive and consistent matrix $\tilde{A}$ with respect to the given $n \times n$ pairwise comparison matrix $A$, on the absolute scale $\left[ 0, \|A\|_F + \frac{41}{9} n \right]$, is given by $IM = \|A - \tilde{A}\|_F$.

If we use the weight vector of $A$ given by formula (5.2) to generate a positive and reciprocal matrix $\tilde{A}_{[1]}$ by setting its $(i, j)$ element to $(w_i/w_j)$ for every pair $(i, j)$, then a good approximation of the given matrix $A$ can be obtained, with the consistency measure $\|A - \tilde{A}_{[1]}\|_F$. Following Saaty's idea [17], let us examine the absolute differences between $a_{ij}$ and $(w_i/w_j)$ for every pair $(i, j)$. From these differences, the DM learns which elements of $A$ should be adjusted, and to what extent, to make $A$ more consistent.
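A sketch of the consistency measure (assuming NumPy; `svd_weights` re-implements formula (5.2), and the upper end of the absolute scale follows Definition 6.1 together with Lemma 6.1):

```python
import numpy as np

def svd_weights(A):
    """SVD priorities of formula (5.2), with signs fixed via absolute values."""
    U, s, Vt = np.linalg.svd(A)
    w = np.abs(U[:, 0]) + 1.0 / np.abs(Vt[0, :])
    return w / w.sum()

def im(A):
    """IM = ||A - A_tilde||_F, where A_tilde = (w_i / w_j) is the consistent
    matrix generated by the SVD weights."""
    w = svd_weights(A)
    return np.linalg.norm(A - np.outer(w, 1.0 / w), 'fro')

# Hypothetical mildly inconsistent judgments.
A = np.array([[1.0,   2.0, 5.0],
              [0.5,   1.0, 3.0],
              [0.2, 1/3.0, 1.0]])
scale_top = np.linalg.norm(A, 'fro') + 41.0 / 9.0 * A.shape[0]   # top of the absolute scale
```

For a consistent matrix the generated $\tilde{A}_{[1]}$ reproduces $A$ exactly, so the measure vanishes.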

7. Examples

In this part, some examples show the EM and the SVD weights. If

$$A = \begin{pmatrix} 1 & 9 & 9 & 9 \\ 0.1111 & 1 & 1 & 1 \\ 0.1111 & 1 & 1 & 2 \\ 0.1111 & 1 & 0.5 & 1 \end{pmatrix},$$


then the EM-weights are (0.742, 0.083, 0.1, 0.071), $CR = 0.02$, $IM = 2.263$, and the SVD-weights are (0.75, 0.08, 0.0972, 0.0755), $IM = 1.825$. If

$$A = \begin{pmatrix} 1 & 4 & 6 & 7 \\ 0.25 & 1 & 3 & 4 \\ 0.1667 & 0.3333 & 1 & 2 \\ 0.1429 & 0.25 & 0.5 & 1 \end{pmatrix},$$

then the EM-weights are (0.617, 0.224, 0.097, 0.062), $CR = 0.04$, $IM = 3.355$, and the SVD-weights are (0.6444, 0.1762, 0.1005, 0.07889), $IM = 2.652$. If

$$A = \begin{pmatrix} 1 & 7 & 7 & 7 \\ 0.1429 & 1 & 1 & 1 \\ 0.1429 & 1 & 1 & 1 \\ 0.1429 & 1 & 1 & 1 \end{pmatrix},$$

then the EM- and the SVD-weights are (0.7, 0.1, 0.1, 0.1), $CR = IM = 0$. In the first two cases, the matrices are inconsistent, while in the third case, the matrix is consistent. Example 2 is from Saaty [23, p. 38].

Let us consider the example investigated in [20]. Here,

$$A = \begin{pmatrix} 1 & 2 & 5 \\ 0.5 & 1 & 3 \\ 0.2 & 0.33 & 1 \end{pmatrix},$$

the EM-weights are (0.582, 0.309, 0.109), $CR = 0.003$, $IM = 0.3990$, and the SVD-weights are (0.5865, 0.3009, 0.1127), $IM = 0.3939$.

Let us consider the example of the wealth of nations, Saaty [23, p. 40], where

$$A = \begin{pmatrix}
1.0000 & 4.0000 & 9.0000 & 6.0000 & 6.0000 & 5.0000 & 5.0000 \\
0.2500 & 1.0000 & 7.0000 & 5.0000 & 5.0000 & 3.0000 & 4.0000 \\
0.1111 & 0.1429 & 1.0000 & 0.2000 & 0.2000 & 0.1429 & 0.2000 \\
0.1667 & 0.2000 & 5.0000 & 1.0000 & 1.0000 & 0.3333 & 0.3333 \\
0.1667 & 0.2000 & 5.0000 & 1.0000 & 1.0000 & 0.3333 & 0.3333 \\
0.2000 & 0.3333 & 7.0000 & 3.0000 & 3.0000 & 1.0000 & 2.0000 \\
0.2000 & 0.2500 & 5.0000 & 3.0000 & 3.0000 & 0.5000 & 1.0000
\end{pmatrix}.$$

Then, the EM-weights are (0.427, 0.230, 0.021, 0.052, 0.052, 0.123, 0.094), $CR = 0.08$, $IM = 13.573$, and the SVD-weights are (0.511, 0.155, 0.033, 0.060, 0.070, 0.099, 0.082), $IM = 11.631$.

8. Comparison of EM and SVD in AHP

In this part, a comparison is made between the EM and the SVD, each of which provides a priority vector and a consistency measure. Comparisons among different techniques can be found in [6,15,27,32]. In the EM, the priority vector is equal to the normalized principal right eigenvector corresponding to the maximal eigenvalue of $A$, i.e., the normalized weight vector $w$ is the solution to the homogeneous linear equations

$$A w = \lambda_{\max} w, \qquad e^T w = 1, \quad w \in \mathbb{R}^n_+, \qquad (8.1)$$

where $\lambda_{\max}$ is the maximal eigenvalue of the matrix $A$. The Perron theorem ensures that $\lambda_{\max}$ and the vector $w$ are real, positive and unique. The principal right eigenvector generates an invariant subspace of $A$, and though it can be considered a rank one approximation, it is not the best one. Theorem 3.2 states that the rank one approximation $A_{[1]}$ of $A$ is the best solution with respect to the Frobenius norm among all such approximations. This statement is valid with respect to the 2-norm of matrices as well, and, taking Theorem 4.3 into account, we can understand the meaning of the measure generated by the Frobenius norm. By Theorem 5.1, the priority vector is obtained from a strictly convex measure-minimizing problem based on the best rank one approximation.

The CI proposed by Saaty is related to the maximum eigenvalue of $A$, and, according to him, only the EM deals with the question of inconsistency as well. A disadvantage of this approach, it seems, is that the measure does not originate from a matrix norm. The Frobenius norm in the SVD, derived from the Euclidean metric, measures the distance from the global minimum and provides a direct analytical, not statistical, relation between the weight vector and the consistency measurement. The CR proposed by Saaty uses information on randomly generated matrices, while in the case of the SVD this does not seem to be necessary. By Standard [29], most of the random matrices were highly inconsistent, for which the relative measurement, the CR, might not be entirely precise. The absolute scale based on the SVD approach and the Frobenius norm of matrices seems to be better than the relative one.

From a theoretical point of view, the SVD approach has much to offer, as it stems directly from rank one considerations. Also, the associated Frobenius norm appears to be an appropriate metric that may be adaptable as an inconsistency measure. Further research along these lines is needed, as well as comparisons of the EM and SVD approaches on real-world applications.
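The comparison can be reproduced numerically. The sketch below (assuming NumPy, re-implementing both priority rules as above) runs the two methods on the first example of Section 7 and checks only qualitative agreement, since the printed weights are rounded:

```python
import numpy as np

def em_weights(A):
    """EM: normalized principal right eigenvector of A."""
    eigvals, eigvecs = np.linalg.eig(A)
    k = np.argmax(eigvals.real)
    w = np.abs(eigvecs[:, k].real)
    return w / w.sum()

def svd_weights(A):
    """SVD priorities of formula (5.2)."""
    U, s, Vt = np.linalg.svd(A)
    w = np.abs(U[:, 0]) + 1.0 / np.abs(Vt[0, :])
    return w / w.sum()

# First example of Section 7 (reciprocals rounded to four digits, as printed).
A = np.array([[1.0,    9.0, 9.0, 9.0],
              [0.1111, 1.0, 1.0, 1.0],
              [0.1111, 1.0, 1.0, 2.0],
              [0.1111, 1.0, 0.5, 1.0]])
w_em, w_svd = em_weights(A), svd_weights(A)
```

Both vectors are normalized and positive, and both put by far the largest weight on the first alternative, in line with the weights reported in Section 7.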

Acknowledgement This research was supported in part by the Hungarian National Research Foundation, Grant No. OTKA-T029572.

References

[1] J. Barzilai, W.D. Cook, B. Golany, Consistent weights for judgements matrices of the relative importance of alternatives, Operations Research Letters 6 (1987) 131–134.
[2] J. Barzilai, Deriving weights from pairwise comparison matrices, Journal of the Operational Research Society 48 (1997) 1226–1232.
[3] I. Basak, The categorical data analysis approach for ratio model of pairwise comparisons, European Journal of Operational Research 128 (2001) 532–544.
[4] E. Blankmeyer, Approaches to consistency adjustment, Journal of Optimization Theory and Applications 3 (1987) 479–488.


[5] N. Bryson, A goal programming method for generating priority vectors, Journal of the Operational Research Society 46 (1995) 641–648.
[6] A.T.W. Chu, R.E. Kalaba, K. Spingarn, A comparison of two methods for determining the weights of belonging to fuzzy sets, Journal of Optimization Theory and Applications 4 (1979) 531–538.
[7] G. Crawford, C. Williams, A note on the analysis of subjective judgment matrices, Journal of Mathematical Psychology 29 (1985) 387–405.
[8] P. De Jong, A statistical approach to Saaty's scaling method for priorities, Journal of Mathematical Psychology 28 (1984) 467–478.
[9] H.A. Donegan, F.J. Dodd, T.B.M. McMaster, A new approach to AHP decision-making, The Statistician 41 (1992) 295–302.
[10] C. Eckart, G. Young, The approximation of one matrix by another of lower rank, Psychometrika 1 (1936) 211–218.
[11] S.I. Gass, T. Rapcsák, A note on synthesizing group decisions, Decision Support Systems 22 (1998) 59–63.
[12] B. Golany, M. Kress, A multicriteria evaluation of methods for obtaining weights from ratio-scale matrices, European Journal of Operational Research 69 (1993) 210–220.
[13] M.J. Greenacre, Theory and Applications of Correspondence Analysis, Academic Press, London, Orlando, 1984.
[15] R.E. Jensen, Comparisons of eigenvector, least squares, chi square and logarithmic least squares methods of scaling a reciprocal matrix, Trinity University, Working Paper No. 127, 1984.
[16] R.E. Jensen, An alternative scaling method for priorities in hierarchical structures, Journal of Mathematical Psychology 28 (1984/2) 317–332.
[17] C.R. Johnson, W.B. Beine, T.J. Wang, Right-left asymmetry in an eigenvector ranking procedure, Journal of Mathematical Psychology 19 (1979) 61–64.
[18] W.J. Kennedy, J.E. Gentle, Statistical Computing, Marcel Dekker, New York, Basel, 1980.
[19] Cs. Meszaros, T. Rapcsák, On sensitivity analysis for a class of decision systems, Decision Support Systems 16 (1996) 231–240.
[20] L. Mikhailov, A fuzzy programming method for deriving priorities in the analytic hierarchy process, Journal of the Operational Research Society 51 (2000) 341–349.
[21] D.S. Mitrinovic, Analytic Inequalities, Springer-Verlag, Berlin, Heidelberg, New York, 1970.
[22] H. Monsuur, An intrinsic consistency threshold for reciprocal matrices, European Journal of Operational Research 96 (1996) 387–391.
[23] T.L. Saaty, The Analytic Hierarchy Process, McGraw-Hill, New York, 1980.
[24] T.L. Saaty, Eigenvector and logarithmic least squares, European Journal of Operational Research 48 (1990) 156–160.
[25] T.L. Saaty, The analytic hierarchy process: A 1993 overview, CEJORE 2 (1993) 119–137.


[26] T.L. Saaty, L.G. Vargas, Inconsistency and rank preservation, Journal of Mathematical Psychology 28 (1984/1) 205–214.
[27] T.L. Saaty, L.G. Vargas, Comparison of eigenvalue, logarithmic least squares, and least squares methods in estimating ratios, Mathematical Modelling 5 (1984/2) 309–324.
[28] P. Schweitzer, An inequality concerning the arithmetic mean, Mathematikai es Physikai Lapok 23 (1914) 257–261.
[29] S.M. Standard, Analysis of positive reciprocal matrices, Master of Arts Thesis, University of Maryland, 2000.
[30] G.W. Stewart, Introduction to Matrix Computations, Academic Press, New York, 1973.

[31] E. Takeda, K. Cogger, P.L. Yu, Estimating criterion weights using eigenvectors: A comparative study, European Journal of Operational Research 29 (1987) 360–369.
[32] S.L. Tung, S.L. Tang, A comparison of the Saaty's AHP and modified AHP for right and left eigenvector inconsistency, European Journal of Operational Research 106 (1998) 123–128.
[33] Z. Xu, C. Wei, A consistency improving method in the analytic hierarchy process, European Journal of Operational Research 116 (1999) 443–449.
[34] S. Zahir, Geometry of decision making and the vector space formulation of the analytic hierarchy process, European Journal of Operational Research 112 (1999) 373–396.