Dimensionality reduction in software development effort estimation


J. SYSTEMS SOFTWARE 1993; 21:187-196


Girish H. Subramanian
School of Business Administration, Pennsylvania State University at Harrisburg, Middletown, Pennsylvania

S. Breslawski
Computer and Information Sciences, Temple University, Philadelphia, Pennsylvania

During the last decade, an increasing number of researchers have attempted to address the problem of estimating software development effort. Recent empirical evaluations of the effort estimation models developed to date indicate two problems: model complexity and poor portability. A methodology that works on the notion of reduced dimensionality in the choice of a small set of site-relevant variables is presented. We contend that this methodology could incorporate model simplicity and site specificity in current estimation models. The working of our method is demonstrated and tested using the COCOMO data base. The ability of this method to estimate effort of projects within an acceptable relative error range is studied using discriminant analysis.


Software development effort estimation models have typically evolved from projects undertaken by various organizations to develop a quantitative model to aid in their effort estimation task. Consequently, the models often describe the characteristics of a particular class of software development projects germane to that organization. Examples of such models include COCOMO (constructive cost model) [1] designed at TRW Defense Systems, the Bailey and Basili model designed at the NASA Goddard Space Flight Center [2], the effort estimation model designed for the U.S. Army [3], the SLIM model [4], and the SOFTCOST model [5].

Address correspondence to Prof. Girish H. Subramanian, School of Business Administration, Penn State University, E-355 Olmsted Building, 777 W. Harrisburg Pike, Middletown, PA 17057.

© Elsevier Science Publishing Co., Inc., 655 Avenue of the Americas, New York, NY 10010

Developers typically develop and test these models using only the data base corresponding to the site of development. However, research indicates that these models perform poorly when tested on data bases from other sites [6-8]. Model complexity and poor model portability to other sites are the major problems mentioned in these empirical evaluations. This research presents a methodology to provide model simplicity through reduced dimensionality and enhance portability by including site-relevant variables, and demonstrates the methodology using the COCOMO data base [1, 9]. Model simplicity is based on the notion of reducing the dimensionality (using factor analysis) of the independent variable space used in contemporary effort estimation models, especially the COCOMO model. Poor portability is essentially due to the absence of flexibility in including/choosing variables relevant to the site to which the model is ported. In addition to the site-relevant variables, the model's functional form and coefficients need to be generated specifically for the site to which the model is ported. Site specificity refers to the flexibility (in a guided fashion) to include/choose site-relevant independent variables and build estimation models that are specific to the site. Not all models/methods are suitable for all kinds of projects. Hence, the ability of any model/method to estimate effort accurately needs to be predicted for any given project in order to determine its applicability. Discriminant analysis provides us with this capability and is used to predict the estimation ability of this proposed dimensionality reduction method.



G. H. Subramanian and S. Breslawski


Unfortunately, this use of discriminant analysis in effort estimation has not been attempted earlier, with the possible result that the right model/method has been applied to the wrong project in the past. The use of discriminant analysis provides a better fit between models/methods and projects.



To effectively use this methodology, the following steps must be taken:

Step 1: Attain model simplicity. Based on a large set of candidate variables, factor analysis is used to reduce the dimensionality of the variable space into a small set of underlying factors in a reduced space. The factors are labelled and interpreted subsequently.

Step 2: Attain site specificity and model portability. A subset of variables within each of the factors is chosen as a measure of the latent factor to be included in the building of the multiple regression model. The variables are chosen by using the correlations-based method, regression-based method, or other suitable methods. Using these chosen variables, the functional form and coefficients for the resulting model are generated specifically for the project site to enhance model portability to the site. On completion of steps 1 and 2, it is important to subject the methodology and its resulting effort estimates to testing using established evaluation criteria in the literature.

Step 3: Determine method/model applicability to the site. Discriminant analysis is used to validate the applicability (or ability) of the method/model to accurately predict effort for a given set of projects in the site.

The COCOMO model, which has many candidate variables, is used to illustrate these steps.
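The three steps can be sketched end to end on synthetic data. This is an illustrative stand-in only: principal components on the correlation matrix substitute for the paper's factor analysis, and all sizes, seeds, and thresholds are assumptions, not the COCOMO values.

```python
import numpy as np

rng = np.random.default_rng(0)

# Step 1: reduce six correlated variables to two underlying factors.
n = 63
latent = rng.normal(size=(n, 2))
X = np.column_stack(
    [latent[:, 0] + 0.1 * rng.normal(size=n) for _ in range(3)]
    + [latent[:, 1] + 0.1 * rng.normal(size=n) for _ in range(3)]
)
R = np.corrcoef(X, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(R)
order = np.argsort(eigvals)[::-1][:2]
loadings = eigvecs[:, order] * np.sqrt(eigvals[order])

# Step 2: pick the highest-loading variable per factor as its measure
# and regress effort on the chosen measures.
chosen = [int(np.argmax(np.abs(loadings[:, k]))) for k in range(2)]
effort = 2.0 * latent[:, 0] - latent[:, 1] + 0.05 * rng.normal(size=n)
A = np.column_stack([np.ones(n), X[:, chosen]])
coef, *_ = np.linalg.lstsq(A, effort, rcond=None)

# Step 3 (stand-in for discriminant analysis): flag projects whose
# relative error exceeds the 20% acceptability threshold.
rel_err = np.abs((A @ coef - effort) / effort)
applicable = rel_err <= 0.20
print(loadings.shape, len(chosen))
```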



The COCOMO model [1, 9] is a procedurally complete and thoroughly documented model for effort estimation. The model was developed from data corresponding to various TRW, Inc., software projects. Effort estimation via COCOMO proceeds through two steps. First, a nominal effort estimate is obtained using a formula derived by Boehm [1, 9] via least squares; size in lines of code is the only independent variable. Second, the nominal effort estimate is multiplied by a composite multiplier function, m(X), where X is the vector of 15 independent variables; the result is the final effort estimate. The 15 variables are referred to as cost drivers. The functional form of the model is given below:

E = a_i S^(b_i) m(X)


where S is the size in lines of code and m(X) is the multiplier function. The coefficients a_i and b_i are derived from a combination of the mode and the level. Boehm [1, 9] identifies three modes and three levels for classifying software projects. The modes are organic, semidetached, and embedded. Projects of the organic type are relatively small in size, require little innovation, have relaxed delivery requirements, and are developed in a stable in-house environment. Projects of the embedded type are relatively large, operate within tight constraints, have a high degree of hardware and customer interface complexities, and have a greater need for innovation. Projects of the semidetached mode fall somewhere between the other two. The three levels are basic, intermediate, and detailed. Detailed COCOMO differs from its intermediate counterpart by having an additional variable called requirements volatility. In addition, detailed COCOMO has phase distribution of effort and a three-level product hierarchy. Components for each of the cost drivers in the function m(X) are called multipliers. The product of the multipliers yields the function m(X). The choice of multipliers used in the model depends on whether the basic, intermediate, or detailed level is selected. While the basic level has its m(X) set to 1, the intermediate and detailed levels follow the multiplier table provided by Boehm [1, 9]. The COCOMO model has been refined and modified [10, 11]. In this demonstration, the data base corresponding to the intermediate COCOMO model is used. The presence of a few structural factors that underlie the variables used in effort estimation by the COCOMO model is demonstrated through factor analysis, a dimensionality reduction technique. These structural factors, which are few in number, can help estimate development effort equally well.
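The intermediate COCOMO computation described above can be sketched as follows. The mode coefficients (a, b) are Boehm's published values, but the cost-driver multiplier values in the example are illustrative, not entries from Boehm's full multiplier table.

```python
# Nominal-effort coefficients for intermediate COCOMO (Boehm [1]).
MODES = {
    "organic": (3.2, 1.05),
    "semidetached": (3.0, 1.12),
    "embedded": (2.8, 1.20),
}

def cocomo_effort(kloc, mode, multipliers):
    """Effort in person-months: E = a * S^b * m(X), where m(X) is the
    product of the cost-driver multipliers."""
    a, b = MODES[mode]
    m = 1.0
    for value in multipliers.values():
        m *= value
    return a * kloc ** b * m

# Illustrative cost-driver multiplier values (not Boehm's table).
drivers = {"RELY": 1.15, "CPLX": 1.30, "ACAP": 0.86, "TOOL": 0.91}
effort = cocomo_effort(32, "semidetached", drivers)
```

With an empty multiplier dictionary, m(X) = 1 and the function returns the nominal estimate, mirroring basic COCOMO.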
The COCOMO data base is chosen for this analysis as it is considered to be “the most complete and thoroughly documented of all the models for effort estimation” [6]. A sizeable portion of the COCOMO data base is used in deriving the factors and subsequent model construction. A small portion of the COCOMO data base, selected at random, is set aside for testing purposes. This testing portion of the data base is not used in model construction. Though the COCOMO



data base is used here, this dimensionality reduction method can be applied to any other estimation task and a data base associated with any estimation model, provided the task involves the use of a large independent variable space and the data base is sufficiently large for the use of multivariate techniques. This process can work equally well in the presence of an approach to estimate size such as function points [12] or estimation by analogy [13]. The COCOMO data base is a collection of project data on 63 projects, including business, scientific, process control, and other applications. The COCOMO data base has multiplier values for its 15 cost drivers. The cost drivers will be referred to as variables in this paper. Required software reliability (RELY), data base size (DATA), product complexity (CPLX), execution time constraint (TIME), main storage constraint (STOR), virtual machine volatility (VIRT), and computer turnaround time (TURN) have an increasing multiplier value as the ratings go from very low to extra high. The lower the multiplier value for these variables, the lesser the effort required for the project. Analyst capability (ACAP), applications experience (AEXP), programmer capability (PCAP), virtual machine experience (VEXP), language experience (LEXP), use of modern programming practices (MODP), use of software tools (TOOL), and required development schedule (SCED) have a decreasing multiplier value as the ratings go from very low to extra high. The lower the multiplier value for these variables, the lesser the effort required for the

project. Personnel continuity (CONT) is reflected as a rating of the turnover in projects ranging from low (1) to high (3). The mode of the projects (MODE) is classified as "embedded" (which is more complex and coded as 1), while "semidetached" is coded as 2 and "organic" as 3. Once the factors are identified and labelled through factor analysis, measurements are required for the factor labels, not data for all the variables of the model, thus ensuring model simplicity. These measurements can be obtained by selecting variables from each factor that would effectively capture the meaning of the factor label. The schemes to select the site-relevant variables provide the site specificity feature.

RESULTS OF THE FACTOR ANALYSIS

A factor analysis of the COCOMO data base using the principal components extraction and varimax rotation is presented in Table 1. Table 1 shows the factor loadings (of a magnitude > 0.3) of the variables with the corresponding factor. The structural factors identified from the analysis are application constraint (APPLN), experience in virtual machine and language constraint (EXPER), programming capability constraint (PROGR), and completion within schedule constraint (SCEDU). The naming of these factors and their interpretation are ex post facto rationalization of the results based on correlational analysis. As part of the dimensionality reduction method, the model developer needs to cross check

Table 1. Factor Loadings of COCOMO Data Base (primary loadings, by assigned factor)

APPLN: Main storage constraint (STOR) 0.84048; Required software reliability (RELY) 0.87886; Execution time constraint (TIME) 0.83394; Mode of software development (MODE) -0.6839; Product complexity (CPLX) 0.59124
EXPER: Virtual machine experience (VEXP) 0.87642; Language experience (LEXP) 0.80379; Virtual machine volatility (VIRT) 0.71592; Personnel continuity (CONT) -0.5795; Use of software tools (TOOL) 0.5478
PROGR: Programmer capability (PCAP) 0.5906; Analyst capability (ACAP) 0.5655; Computer turnaround time (TURN) 0.6831; Use of modern programming practices (MODP) 0.8567
SCEDU: Applications experience (AEXP) 0.8558; Required development schedule (SCED) 0.6694

Rules for assigning variables to factors (based on Dillon and Goldstein [15]):
1. Choose the highest factor loading for each variable and underline it.
2. The underlined factor loadings are significant if their magnitude is >= 0.50.
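The assignment rule above (highest loading per variable, significant if its magnitude is at least 0.50) can be expressed directly. The primary loadings below are taken from Table 1; the small cross-loadings are illustrative placeholders.

```python
def assign_to_factors(loadings, threshold=0.50):
    """Assign each variable to the factor with its highest-magnitude
    loading; the assignment is significant if that magnitude >= threshold."""
    assignment = {}
    for variable, row in loadings.items():
        factor = max(row, key=lambda name: abs(row[name]))
        assignment[variable] = (factor, abs(row[factor]) >= threshold)
    return assignment

# Primary loadings from Table 1; cross-loadings are illustrative.
loadings = {
    "STOR": {"APPLN": 0.84048, "EXPER": 0.10},
    "MODE": {"APPLN": -0.6839, "EXPER": 0.05},
    "VEXP": {"APPLN": 0.31, "EXPER": 0.87642},
}
print(assign_to_factors(loadings))
```

Note that MODE is assigned to APPLN by the magnitude of its negative loading, matching the rule's use of absolute values.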




the naming and interpretation with the project managers. Since this is a demonstration of the method, the correlation-based analysis will suffice. These factors effectively capture the four dimensions that underlie the variables used in the COCOMO model. These factors combined account for 73.2% of the variation in the independent variable space. These factors are expressed as constraints as they can be seen as constraints that determine the effort required for projects. APPLN consists of RELY, STOR, TIME, CPLX, and MODE. The effects of each of these variables on APPLN can be seen clearly from Table 1. The correlations for each of these factors are provided in Table 2. Since these five variables are the constraints imposed on the application to be developed, this factor is called application constraints. The lower the STOR, RELY, TIME, or CPLX, the lower the APPLN; hence, less effort is required for the project. Alternately, an increase in STOR, RELY, TIME, or CPLX results in an increase in APPLN; hence more effort is required for the project. Embedded projects result in increased APPLN, a fact made clear by the negative loading of MODE on APPLN. RELY, STOR, TIME, and CPLX exhibit strong positive correlations with each other. A reason for the strong correlation could be the strong interdependence between these variables. For example, required reliability is often affected by the storage constraint or the execution time constraint. MODE exhibits strong negative correlations with RELY, STOR, and TIME because embedded projects are often associated with higher constraints on RELY, STOR, and TIME.

Table 2. Pearson's Correlations within Each Factor



APPLN factor:
        RELY    STOR    TIME    CPLX    MODE
RELY    1.000   0.654   0.703   0.559  -0.583
STOR    0.654   1.000   0.678   0.518  -0.443
TIME    0.703   0.678   1.000   0.487  -0.578
CPLX    0.559   0.518   0.487   1.000  -0.260
MODE   -0.583  -0.443  -0.578  -0.260   1.000

EXPER factor:
        VEXP    LEXP    VIRT    CONT    TOOL
VEXP    1.000   0.797   0.698  -0.273   0.429
LEXP    0.797   1.000   0.692  -0.275   0.437
VIRT    0.698   0.692   1.000  -0.276   0.519
CONT   -0.273  -0.275  -0.276   1.000  -0.296
TOOL    0.429   0.437   0.519  -0.296   1.000

PROGR factor:
        PCAP    ACAP    TURN    MODP
PCAP    1.000   0.668   0.449   0.530
ACAP    0.668   1.000   0.130   0.384
TURN    0.449   0.130   1.000   0.499
MODP    0.530   0.384   0.499   1.000

SCEDU factor:
        AEXP    SCED
AEXP    1.000   0.352
SCED    0.352   1.000

Significance of correlations is reported at the 0.05 significance level.

EXPER consists of VEXP, LEXP, VIRT, CONT, and TOOL. Since these variables reflect the experience in hardware, software, and programming language, the factor is called experience in virtual machine and language constraint. As VEXP and LEXP have decreasing multiplier values from very low to extra high, a lower value on these variables indicates a higher level of experience. Hence, the lower the VEXP and LEXP (and the turnover, as measured by CONT), the lower the EXPER and the less the effort required. Boehm [9] refers to the virtual machine as "the complex of hardware and software (OS, DBMS, etc.) the software product calls on to accomplish its tasks." Volatility is the frequency of the major or minor changes made to the virtual machine. The lower the VIRT, the more stable the virtual machine environment and hence the lower the EXPER. VEXP, LEXP, and VIRT are strongly correlated, which demonstrates a strong bond between experience in the language and experience in the virtual machine environment. TOOL exhibits strong positive correlations with VEXP and LEXP as the use of software tools needs good experience in the language and virtual machine. The use of software tools increases with major changes in the virtual machine environment; hence the strong positive correlation of TOOL with VIRT. TOOL is negatively correlated with CONT as the use of software tools decreases with an increase in turnover of project personnel. PROGR consists of ACAP, PCAP, TURN, and MODP. This factor is called the programming capability constraint as these variables affect the capability of the programming task. Again, PCAP and ACAP have decreasing multiplier values as the rating goes from very low to extra high. Hence, a lower PCAP or ACAP indicates a higher level of capability. The lower the PCAP or ACAP constraint, the lower the PROGR and the less the effort required. PCAP and ACAP are strongly correlated, which indicates the necessity of both these capabilities for effective programming in projects. In addition, the use of MODP and TURN affects the programming capability. SCEDU consists of AEXP and SCED. This factor is called the completion within schedule constraint as it reflects the ability of the project team to deliver the application in time. Since AEXP and SCED have decreasing multiplier values as the ratings go from very low to extra high, the lower multiplier values indicate a higher level of applications experience and scheduling ability. The lower the AEXP or SCED, the lower the SCEDU and the less the effort required for the project. AEXP and SCED exhibit weak correlations with each other.

SITE SPECIFICITY





Site specificity is the flexibility to include/choose site-relevant independent variables in effort estimation. Nonlinear relationships are explicitly considered by using nonlinear transformations of the independent variables, an established procedure in mathematical modelling [14]. Detailed schemes (case RSQ and case COR) are used to choose this small set of site-relevant variables from the four factors obtained through dimensionality reduction. The variables chosen from a factor serve as a measure for the factor. Inclusion of additional variables is currently not possible as data on such variables for the 63 projects of the COCOMO data base are not available. In discussing these two cases, the following notation needs to be presented. Let p be the number of factors into which the m determinants of effort (independent variables) x_i, i = (1, 2, ..., m), are organized. In addition to considering the use of x_i as a measure for the factor with which it is associated, two nonlinear transformations of x_i, namely Log(x_i) and (x_i)^2, are also considered. The reason for including the transformations is that regression models, specifically cost-estimation models, often include nonlinear relationships between the dependent and independent variables. Let F_i, i = (1, 2, ..., p), be the ith of the p factors, q_i be the number of variables in factor F_i, and x_ijk, i = (1, 2, ..., p), j = (1, 2, ..., q_i), k = (1, 2, 3), be the kth transformation of the jth of the q_i variables assigned to factor F_i, where

k = 1 for no transformation,
k = 2 for the Log transformation, and
k = 3 for the square transformation.
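The candidate set {x, Log(x), x^2} indexed as x_ijk can be generated mechanically. The factor grouping and multiplier values below are abbreviated examples, not the full COCOMO assignment.

```python
import math

def candidate_transformations(value):
    """k = 1: untransformed, k = 2: log, k = 3: square
    (the log transformation requires a positive value)."""
    return {1: value, 2: math.log(value), 3: value ** 2}

# x_ijk indexing: factor i, variable j within the factor, transformation k.
factors = {1: ["RELY", "STOR"], 2: ["VEXP", "LEXP"]}   # abbreviated grouping
project = {"RELY": 1.15, "STOR": 1.06, "VEXP": 0.90, "LEXP": 0.95}
x = {
    (i, j, k): v
    for i, names in factors.items()
    for j, name in enumerate(names, start=1)
    for k, v in candidate_transformations(project[name]).items()
}
print(len(x))   # 4 variables x 3 transformations = 12 candidates
```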

Hence, the word "variables" in the two cases below encompasses the transformations in its meaning, i.e., the transformations and the variables are both subject to evaluation as likely candidates to be selected as measures for the factor.

R-square case (RSQ). Select variable(s) x_ijk from each factor F_i based on the contribution of the variable to the R-square value of the resulting model. Selection based on this limiting condition is facilitated by the statistical package SAS, which enters variables based on their contribution to the R-square value. An R-square value of 0.85 for the model obtained from this selection is chosen as the limiting condition. The choice under case RSQ can be made with the aid of the flow diagram in Figure 1. This choice process leads to a number of acceptable choices of variables. One of those choices is presented here. The variables chosen are (RELY)^2, Log(SCED), (PCAP)^2, TOOL, and (AEXP)^2. The model obtained using these variables is called model A. Thus, we have reduced the independent variable space by ~50%.

Figure 1. R-square case.

Correlations case (COR). Select variables based on the correlations of the variables with each other and with effort. In this case, variables are selected based on the flow diagram presented in Figure 2. This case presents a normative selection where we want to select the best representative variable(s) x_ijk from each factor F_i. The best representative is one that has maximal correlation with effort and minimal correlation with the other variables. In notational terms, a variable x_ijk from factor F_i is chosen where the correlation r_xx' of variable x_ijk with any other variable x_ij'k', where j and j' are two different variables within the same factor, is the least of all the possible correlations within the factor F_i, and the correlation of variable x_ijk with effort (E) is the maximum (i.e., x_ijk from F_i is selected such that Min r_xx' and Max r_xE is achieved). If a unique Min and Max situation is not possible, variable(s) are chosen based on the p and q values defined in the flow chart. Ideally, one variable for each factor is preferred. Applying the schematic diagram in case COR, the following is the choice of variables as measures for factors F1, F2, F3, and F4: MODE, Log(CONT), Log(PCAP), Log(TURN), and (AEXP)^2. The model obtained using these variables is called model B.

Figure 2. Correlations case.
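The core of the COR rule (maximal correlation with effort, minimal correlation with the other candidates in the factor) can be sketched as follows on synthetic data. A simple combined score stands in for the flow chart's p and q tie-breaking, so this is an assumption-laden sketch, not the paper's exact procedure.

```python
import numpy as np

def select_cor(candidates, effort):
    """From one factor's candidate columns, pick the variable with the
    highest |correlation with effort| minus its mean |correlation|
    with the other candidates in the factor."""
    names = list(candidates)
    X = np.column_stack([candidates[n] for n in names])
    r_effort = np.array(
        [abs(np.corrcoef(X[:, j], effort)[0, 1]) for j in range(len(names))]
    )
    R = np.abs(np.corrcoef(X, rowvar=False))
    np.fill_diagonal(R, 0.0)
    intra = R.sum(axis=1) / (len(names) - 1)
    return names[int(np.argmax(r_effort - intra))]

rng = np.random.default_rng(1)
effort = rng.normal(size=63)
candidates = {
    "MODE": effort + 0.3 * rng.normal(size=63),   # tracks effort closely
    "RELY": 0.5 * effort + rng.normal(size=63),   # weaker relationship
    "TIME": rng.normal(size=63),                  # unrelated noise
}
print(select_cor(candidates, effort))
```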

TEST OF THE DIMENSIONALITY REDUCTION METHOD

Until this point, the notion that the proposed dimensionality reduction methodology can be used to generate estimates that are not significantly less accurate than the estimates obtained through the COCOMO (basic, intermediate, or detailed) models has been conjecture. As the COCOMO data base is used in the demonstration, it is reasonable to compare the performance of our method with the COCOMO models. This contention is now tested using the following methodology. Testing is conducted using the two cases for selecting the set of variables to act as measures for the factors. The empirical results are presented and discussed. Using the criteria of Conte et al. [6], this methodology's performance is compared with the performance of the three COCOMO models on the COCOMO data base.

Propositions

In testing this method, its performance is tested:

1. using established tests for the goodness of fit and significance of a regression model obtained as a result of model construction, and
2. relative to the performance of COCOMO's basic, intermediate, and detailed models. The relative performance is determined on the test portion of the COCOMO data base kept aside exclusively for testing purposes.

Hence, the following propositions are studied:

Proposition 1. The models (A and B) obtained using the dimensionality reduction method are associated with high R-square (> 0.8) and significant F statistic.

Proposition 2. The models (A and B) obtained using the dimensionality reduction method perform better than Boehm's basic, intermediate, or detailed models based on the criteria of Conte et al. [6].

Testing the Propositions

Regardless of the case selected, the nonlinear relationship between size and effort needs to be confirmed. State-of-the-art estimation models posit a nonlinear relationship and identify size as a major predictor of development effort. A log-linear representation of size as a predictor of effort is used; the result confirms the state-of-the-art model:

Log(effort) = 4.521 + 10.42 log(size)

R-square = 0.6891; F = 108.59; p = 0.0001

This result indicates that variation in size alone accounts for ~69% (R-square value) of the variation in the dependent variable effort. This result concurs with Kemerer's [7] empirical study. Though the COCOMO cost drivers do not provide a predictive capability comparable to size, site-relevant ones are still important in predicting effort. In addition, the magnitude of the F statistic allows us to conclude that R-square is significant at a confidence level of 99.99% (p = 0.0001). Hence, the set of variables representing the factors is added to this log-linear representation based on either case RSQ or case COR. Our model can be denoted as

Effort = F(Size, F1, F2, F3, F4)

where F1, ..., F4 are the measures for the four factors APPLN, EXPER, PROGR, and SCEDU. These measures are essentially the selection of variables from each factor guided by case RSQ or case COR considerations.
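The log-linear size-effort fit of this kind can be reproduced on synthetic data with ordinary least squares. The coefficients and noise level below are made up for illustration; they are not the paper's values.

```python
import numpy as np

rng = np.random.default_rng(2)
size = rng.uniform(5, 500, size=63)          # project size in KLOC, illustrative
log_effort = 1.2 + 1.05 * np.log(size) + 0.3 * rng.normal(size=63)

# Fit Log(effort) = b0 + b1 * log(size) by ordinary least squares.
A = np.column_stack([np.ones_like(size), np.log(size)])
(b0, b1), *_ = np.linalg.lstsq(A, log_effort, rcond=None)

# R-square from residual and total sums of squares.
resid = log_effort - A @ np.array([b0, b1])
total = log_effort - log_effort.mean()
r_square = 1.0 - (resid @ resid) / (total @ total)
```

The recovered slope b1 approximates the generating exponent, and r_square reports the share of variation in log effort explained by log size alone.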


Proposition 1 Results

The models obtained through the choice of variables in both cases are tested for the assumptions of multivariate regression modelling. These assumptions are the absence of multicollinearity, serial correlation, and unequal error variances. The assumptions are satisfied for the models obtained through the dimensionality reduction method.

Model A and case RSQ. The model obtained through multivariate regression, with its functional form and coefficients, is

Log(EFFORT) = -8.48 + 1.174 log(SIZE) + 1.555 RELY^2 + 5.464 TOOL + 0.996 PCAP^2

R-square = 0.9135; F = 98.54; p = 0.0001

Note the magnitude of the R-square and F statistic values. The R-square values are high, indicating that there is no need for additional variables to effectively predict effort. The high F value at p = 0.0001 indicates that the R-square value is significant at the 99.99% confidence level. In addition, the coefficients are found to be significantly different from zero (as indicated by their T statistic values, not furnished here). The R-square value is > 0.8 and the F value is significant at p = 0.0001.

Model B and case COR. The model obtained from multivariate regression, with its functional form and coefficients, is provided below:

Log(EFFORT) = 3.814 + 0.8521 log(SIZE) - 0.765 MODE + 1.973 log(PCAP) + 1.345 AEXP^2 + 2.344 log(SCED) + 3.413 log(TURN)

R-square = 0.8667; F = 94.309; p = 0.0001

The coefficients of this model are significantly different from zero (as indicated by their T statistic values, not furnished here).

Table 3. Criteria Set for Evaluation

Mean Magnitude of Relative Error. The mean magnitude of relative error is MRE- = (1/n) Sum(i = 1 to n) MRE_i, where MRE_i = |(E_i - E*_i)/E_i| is the magnitude of relative error for project i; E_i is the actual effort and E*_i is the predicted effort.

Prediction at Level L. Let K be the number of projects in a set of N projects whose MRE <= L. Then PRED(L) = K/N. For example, if PRED(0.25) = 0.80, then 80% of the predicted values fall within 25% of their actual values.

Relative Root Mean Square Error. Given a set of n projects, the mean squared error is SE- = (1/n) Sum(i = 1 to n) (E_i - E*_i)^2. This measure pertains to regression models alone. It represents the mean value of the error minimized by the regression model. The square root of SE- is RMS = (SE-)^0.5, and the relative root mean square error is RMS- = RMS / ((1/n) Sum(i = 1 to n) E_i), where E_i is actual effort and E*_i is predicted effort.
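The three criteria (MRE, PRED, and relative RMS) can be computed directly from paired actual and predicted efforts. The sample values below are made up for illustration.

```python
def mre(actual, predicted):
    """Mean magnitude of relative error over paired actual/predicted efforts."""
    return sum(abs((a - p) / a) for a, p in zip(actual, predicted)) / len(actual)

def pred(actual, predicted, level=0.25):
    """Fraction of projects whose magnitude of relative error is <= level."""
    ok = sum(abs((a - p) / a) <= level for a, p in zip(actual, predicted))
    return ok / len(actual)

def rms_rel(actual, predicted):
    """Relative root mean square error: RMS divided by the mean actual effort."""
    n = len(actual)
    rms = (sum((a - p) ** 2 for a, p in zip(actual, predicted)) / n) ** 0.5
    return rms / (sum(actual) / n)

# Illustrative efforts in person-months (not COCOMO data base values).
actual = [10.0, 20.0, 40.0, 80.0]
predicted = [11.0, 18.0, 44.0, 76.0]
print(mre(actual, predicted), pred(actual, predicted, 0.25))
```

For these sample values, every relative error is within 25%, so PRED(0.25) = 1.0 and both MRE and relative RMS fall well under the 0.25 thresholds discussed below.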

Proposition 1. The models (A and B) obtained using the dimensionality reduction method are associated with high R-square (> 0.8) and significant F statistic.


Proposition 2 Results

To determine the effectiveness of this model, it must be evaluated based on Conte et al.'s [6] criteria set. The criteria set and its recommended values are presented in Tables 3 and 4. Table 5 indicates that the MRE- and RMS- values are < 0.15 and substantially < 0.25, which is the recommended threshold of Conte et al. [6]. The PRED(0.2) values are > 0.8 and definitely > 0.75, which is the recommended threshold of Conte et al. [6]. Hence, model A easily satisfies Conte's criteria. Model B performs quite well on Conte's criteria. The PRED(0.2) value is > 0.8 while the MRE- and RMS- values are < 0.2. The performance of the models on Conte's criteria under case RSQ and case COR supports the idea

suggested by Conte et al. [6] that "a simpler yet perhaps equally useful model could be constructed based on fewer parameters." In fact, models A and B perform better than the basic, intermediate, and detailed models on the COCOMO data base, as evidenced in Table 6.

Proposition 2. The models (A and B) obtained using the dimensionality reduction method perform better than Boehm's basic, intermediate, or detailed models based on Conte's criteria [6].

Table 4. Conk’s Criteria Set Come’s Criteria Set MREPRED(L) RMS-

Mean magnitude of relative error Number of projects with acceptable relative error Relative root mean square error

Acceptable Level < 0.25 > 0.75 < 0.25




Table 5. Performance on Conte's Criteria Set

Model     MRE-     PRED(0.2)   RMS-
Model A   0.0967   0.8889      0.1030
Model B   0.1287   0.8095      0.1042

Results shown in Table 6 confirm proposition 2. The performance of models A and B fits Conte's criteria better. The MRE- values for these models are < 0.2 and hence are better than the values reported for the three COCOMO models. Similarly, the PRED(0.2) values are > 0.8 and hence are far superior to the performance of the COCOMO models. The results presented so far confirm propositions 1 and 2. Model B needs to be compared with model A. Case COR provides a normative choice process and results in a comparably effective model based on the evaluation using Conte et al.'s [6] criteria. Model B is the more parsimonious of the two models in its choice of independent variables and scores better on model simplicity. However, it does not offer flexibility in the choice of variables in each factor (which case RSQ does) and loses out on site specificity and ease of data availability.

DISCRIMINANT ANALYSIS PREDICTIONS

Discriminant analysis, a statistical technique for classifying objects into mutually exclusive and exhaustive groups on the basis of a set of independent variables [15], is used. Stepwise discriminant analysis is a selection procedure used to determine, based on Wilks' lambda and partial F values, the subset of cost drivers in Boehm's data base that would act as predictors in classifying the projects into groups. The objective is to predict the ability of the dimensionality reduction method to yield an estimate with an acceptable relative error. Relative error is given as (Estimate - Actual)/Actual. Using Conte et al.'s [6] criteria, an acceptable relative error of 20% of the actual in either (over or under) direction is chosen. Hence, the two groups into which the projects are to be classified are those that

Table 6. Performance COCOMO Models

Basic COCOMO Intermediate Detailed Model A Model B


1. underestimate or overestimate, and 2. fall within 20%.

which is > 20%

Projects classified into group 2 are those for which the dimensionality method will provide an acceptable estimate. Hence, this method is applicable to projects in group 2. The projects that get classified into group 1 are those for which this method has a tendency to underestimate or overestimate beyond the acceptable level and so the method is not applicable. Stepwise analysis for models A and B yields these two subsets of cost drivers that act as predictors in classifying the projects of the COCOMO data base. These subsets are provided in table 7, where AEXP is applications experience, MODE is the software development mode, MODP is the use of modern programming practices, TIME is the execution time constraint, SCED is the schedule constraint, and STOR is the main storage constraint. This subset of variables is used to classify the COCOMO data base projects into the two groups mentioned earlier. For model A, dis~riminant analysis yields the classification depicted in table 8. This classification has 100% accuracy in predicting the projects for which the dimensionality reduction method is not applicable. The classification is quite conservative in its prediction, incorrectly predicting for only one out of seven projects that the method is not applicable. In other words, there is no risk of applying the method when it is not applicable. However, in about 13% of the projects, the prediction is conservative, as the method is not applied when it would have yielded an estimate within the acceptable 20% level. For model B, disc~minant analysis yields the classification depicted in table 9. This classification is 84% accurate in predicting the projects for which the dimensionality reduction method is not applicable. The classification is quite conservative in its prediction as it incorrectly predicts in N 28% of the projects that the method is not applicable. In other words, the risk of applying the method when it is not applicable is 3%. 
However, in 22% of the projects, the method is not applied when it would have yielded





Table 7. Subset of Variables Chosen in Stepwise Analysis

Models        Subset of Variables
Table 8. Discriminant Analysis Classification for Model A

                                              Discriminant Analysis Prediction
Actual Group    Group Codes and Meaning                1          2
1               Unacceptable estimates                 7          0
2               Within 20% estimates                   8         48
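The percentages quoted in the text can be recomputed directly from a confusion matrix of this shape. The counts below follow the model A classification as reconstructed from the scan, and the function and variable names are mine, not the paper's:

```python
def conservatism_metrics(matrix):
    """matrix[(actual, predicted)] -> project count; groups: 1 = unacceptable, 2 = within 20%."""
    total = sum(matrix.values())
    # Accuracy on group 1: share of truly unacceptable projects that are caught.
    group1_accuracy = matrix[(1, 1)] / (matrix[(1, 1)] + matrix[(1, 2)])
    # Risk: unacceptable projects for which the method would be applied anyway.
    risk = matrix[(1, 2)] / total
    # Conservatism: projects denied the method although it was within 20%.
    conservative = matrix[(2, 1)] / total
    return group1_accuracy, risk, conservative

# Counts as reconstructed for model A (rows: actual group, columns: predicted group).
model_a = {(1, 1): 7, (1, 2): 0, (2, 1): 8, (2, 2): 48}
acc, risk, cons = conservatism_metrics(model_a)
print(f"{acc:.0%} {risk:.0%} {cons:.0%}")  # prints "100% 0% 13%"
```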



an estimate within the acceptable 20% level. In both models A and B, discriminant analysis provides a reasonably good prediction of the applicability of the dimensionality reduction method. In addition, the discriminant analysis yields linear functions that can be used to classify a new addition to the COCOMO projects data base into one of the two groups and thus predict the applicability of the dimensionality reduction method.

CONCLUSION AND FUTURE DIRECTIONS


The major conclusion is that following a formal mathematical approach, providing a choice of cost drivers, and choosing appropriate functional forms represents a substantial improvement over state-of-the-art techniques, specifically the COCOMO [1] models. There is merit in following the dimensionality reduction and multiple regression methodology. Model simplicity is indeed a possible improvement to state-of-the-art estimation models. The intent of this article is also to show the ability to provide site specificity. Proposition results favor providing site specificity as a needed step in enhancing the portability of estimation models. In addition, different functional forms perform quite effectively, as evidenced in the different combinations of transformations of independent variables used in the models from cases RSQ and COR. This result supports the need for a model management/statistical expert system to formally evaluate and select the best functional form. This model management system could support multivariate techniques such as factor analysis, multiple regression, and discriminant analysis used here. In addition to cases RSQ and COR, the project manager can select/include variables as measures for the factors in a guided fashion. The correlation and other background information can be provided to help the manager make the choice. This choice will incorporate a mechanism to include the project manager's judgment in effort estimation.

There is definitely a need to provide decision support to project managers in their estimation tasks through descriptive and normative models. Cross-checking effort estimates using the descriptive and normative models will be facilitated through the integration of these two approaches. A mechanism to select relevant cost drivers for the project needs to be incorporated. Project managers make adjustments to the initial estimates based on their own experience and judgment. Adequate information support based on a data base of projects is needed to help project managers make the necessary adjustments to the initial estimate.

Table 9. Discriminant Analysis Classification for Model B

                                              Discriminant Analysis Prediction
Actual Group    Group Codes and Meaning                1          2
1               Unacceptable estimates                10          2
2               Within 20% estimates                  14         37
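The linear functions produced by the discriminant analysis can be applied as per-group classification functions, assigning a new project to the group whose function scores highest. The coefficients and the three cost drivers used below are hypothetical placeholders (the fitted COCOMO values are not reproduced here); only the classification mechanics are the point:

```python
# Hypothetical linear classification functions of the kind discriminant analysis
# yields; these coefficients are illustrative, not fitted COCOMO values.
GROUP_FUNCTIONS = {
    1: {"const": -4.2, "AEXP": 1.1, "MODP": 0.4, "SCED": 2.0},  # unacceptable estimates
    2: {"const": -2.8, "AEXP": 2.3, "MODP": 1.5, "SCED": 0.7},  # within 20% estimates
}

def classify(project):
    """Score each group's linear function on the project's cost-driver ratings
    and assign the group with the larger score."""
    def score(coeffs):
        return coeffs["const"] + sum(coeffs[v] * x for v, x in project.items())
    return max(GROUP_FUNCTIONS, key=lambda g: score(GROUP_FUNCTIONS[g]))

new_project = {"AEXP": 1.0, "MODP": 0.9, "SCED": 1.1}
print(classify(new_project))  # prints 2: the method is predicted applicable
```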

An earlier version of this research was reported in the Proceedings of the National Decision Sciences Institute Conference, New Orleans, Louisiana, 1989.

REFERENCES

1. B. W. Boehm, Software Engineering Economics, Prentice-Hall, Englewood Cliffs, New Jersey, 1981.
2. J. W. Bailey and V. R. Basili, A meta-model for software development resource expenditures, in Proceedings of the Fifth International Conference on Software Engineering, IEEE Computer Society Press, San Diego, California, 1981, pp. 107-116.
3. C. G. Wingfield, USACSC Experience with SLIM, Report UWAR 360-5, U.S. Army Institute for Research in Management Information and Computer Science, Atlanta, Georgia, 1982.
4. L. H. Putnam, A General Empirical Solution to the Macro Software Sizing and Estimation Problem, IEEE Trans. Software Eng. SE-4, 345-361 (1978).
5. D. J. Reifer, SoftCost-R: User Experiences and Lessons Learned at the Age of One, J. Syst. Software 7, 279-286 (1987).
6. S. D. Conte, H. E. Dunsmore, and V. Y. Shen, Software Engineering Metrics and Models, Benjamin/Cummings Publishing Co., Redwood City, California, 1986.
7. C. F. Kemerer, An Empirical Validation of Software Cost Estimation Models, Commun. ACM 30, 416-429 (1987).
8. B. A. Kitchenham and N. R. Taylor, Software Project Development Cost Estimation, J. Syst. Software 5, 267-278 (1985).
9. B. W. Boehm, Software Engineering Economics, IEEE Trans. Software Eng. SE-10, 4-21 (1984).
10. A. M. E. Cuelenaere, M. J. I. M. Van Genuchten, and F. J. Heemstra, Calibrating a Software Cost Estimation Model: Why and How, Info. Software Technol. 29, 558-567 (1987).
11. Y. Miyazaki and K. Mori, COCOMO evaluation and tailoring, in Proceedings of the 8th International Conference on Software Engineering, IEEE Computer Society Press, London, UK, 1985, pp. 292-299.
12. A. J. Albrecht and J. E. Gaffney, Jr., Software Function, Source Lines of Code, and Development Effort Prediction: A Software Science Validation, IEEE Trans. Software Eng. SE-9, 639-648 (1983).
13. A. J. C. Cowderoy and J. O. Jenkins, Cost estimation by analogy as good management practice, in Second IEE/BCS Conference on Software Engineering, Institution of Electrical Engineers, Liverpool, UK, 1988, pp. 80-84.
14. R. S. Pindyck and D. L. Rubinfeld, Econometric Models and Economic Forecasts, McGraw-Hill, New York, 1976.
15. W. R. Dillon and M. Goldstein, Multivariate Analysis: Methods and Applications, John Wiley & Sons, New York, 1984.