Interactive multiobjective optimization approach to the input–output design of opening new branches




European Journal of Operational Research 220 (2012) 530–538


Innovative Applications of O.R.

K. Sam Park (corresponding author), Dong Eun Shin
Korea University, Business School, Anam-5Ga, Seongbuk, Seoul 136-701, Republic of Korea

Article info

Article history: Received 22 January 2011; Accepted 6 February 2012; Available online 11 February 2012

Keywords: Multiobjective programming; Interactive approach; Nonlinear programming; Data envelopment analysis; Input–output design; New branches

Abstract

We demonstrate a real-world application of the interactive multiple objective optimization (MOO) approach to the simultaneous setting of input and output amounts for the opening of new branches. As illustrated by the case example, all the branches of a fast-food company employ multiple inputs to generate multiple outputs. The company launches several new branches each year and, therefore, needs to plan the quantities of inputs and outputs to be used and produced before their operations. Such input–output settings are a vital practical problem that arises whenever a new branch is opened in a host of different industries. In this paper, we show in detail the entire process of the application from modeling the case problem to generating its solution. In the modeling stage, a data envelopment analysis model and a statistical method are subsequently utilized to form a nonlinear MOO problem for the input–output settings. To solve this problem, we then develop and apply an interactive MOO method, which combines the two earlier interactive methods (Geoffrion et al., 1972; Zionts and Wallenius, 1976), while compensating for their drawbacks and capturing their positive aspects.

© 2012 Elsevier B.V. All rights reserved.

1. Introduction

Using a real-world case, this study explores the important issue of determining the amounts of both inputs and outputs with which a newly opened branch begins to operate. We examine in great detail a fast-food franchise restaurant company in Korea, which operates more than 180 branches across the nation. All of the existing branches employ multiple inputs to generate multiple outputs, as do the new branches that the company will launch to expand its market share. In fact, the company opens several new branches each year, and hence an issue of increasing importance is to plan the quantities of inputs that need to be utilized and the expected outputs to be produced at each of the new branches. One may regard this input setting as an operational design or a resource use plan, whereas the output setting is most accurately viewed as a production target setting. Both settings are closely related to one another in that the more resources are initially used, the more output production is anticipated. Therefore, we take into consideration the simultaneous determination of inputs and outputs for a new branch in the company.

* Corresponding author. Tel.: +82 2 3290 2611; fax: +82 2 922 1380. E-mail address: [email protected] (K.S. Park). 0377-2217/$ - see front matter © 2012 Elsevier B.V. All rights reserved. doi:10.1016/j.ejor.2012.02.004

In this study, we formulate a nonlinear multiple objective optimization (MOO) model for this input–output setting problem. The basic impetus for formulating this model is the availability of the input and output data generated by the existing branches. The data are the result of actual operations according to individual business conditions (including market demands), and thus each branch must determine its own appropriate resource use and output production protocols, depending on its idiosyncratic business situation. Now suppose that a new branch has a business situation similar to a subset of the existing branches. This set of observations can then be compiled to form a feasible set, and the new branch can therefore select a desirable input–output point from the prescribed feasible set. To accomplish the MOO model formulation, we employ a data envelopment analysis (DEA) model and a statistical method (for DEA, see Cooper et al., 2006). DEA is used to exclude the inefficient observations from the dataset of interest. Although the branches were operating in accordance with their own idiosyncratic business situations, some might prove efficient and some others might not. Relatively excessive uses of inputs and/or output shortfalls are reflective of inefficiency, and hence none of the new branches would like to employ inefficient observations. DEA can clearly identify the efficient branches and easily incorporate multiple inputs and multiple outputs. Based only on these efficient observations, a statistical method is subsequently employed to model functional equations between each of the output variables and the input variables, which form the objective functions to be maximized or minimized in the MOO model. In order to achieve the appropriate solutions, we employ the concept of the interactive MOO approach and develop an enhanced interactive procedure. During the past five decades, a variety of theories and methods have been developed to resolve MOO problems (Zeleny, 1982; Goicoechea et al., 1982; Haimes and


Chankong, 1985; Steuer, 1986; Miettinen, 1999) and still further studies have recently been actively conducted (Chinchuluun and Pardalos, 2007; Wallenius et al., 2008). The interactive approach is one of the most broadly used families of MOO techniques (Shin and Ravindran, 1991; Korhonen et al., 1992). This is based on a human–computer interaction process, in that the computer algorithm generates and provides a solution to the human decision maker (DM), after which she provides the algorithm with the necessary information. This process is repeated until the final solution satisfies the DM. The show-and-tell approach gradually articulates her preferences, and helps to provide preferences in a step-by-step fashion, unlike other families of the MOO techniques in which all such information is required at a single given moment. Furthermore, the DM may learn about the changing pattern of the generated solutions and may anticipate the next solution during the solution process, which is also of great help in the selection of a desirable solution. Typical examples of the interactive approach include STEM (Benayoun et al., 1971), the GDF (Geoffrion et al., 1972) method, the ZW (Zionts and Wallenius, 1976) method, the reference direction approach (Korhonen and Laakso, 1986), and NIMBUS (Miettinen, 1999). Each of these employs different computational algorithms and requires different types of preference information, and yet any of them might be employed to solve our MOO problem, either directly or indirectly. However, we focus primarily on the GDF and ZW methods and develop an enhanced method to compensate for their drawbacks and capture their positive aspects. In fact, these two methods have spawned a variety of applications and extensions. Our approach may also meet a basic and crucial requirement in the development of MOO techniques, as we describe below. 
The success of an interactive method depends heavily on its computational algorithm to generate a new solution at each iteration and, more importantly, the type of information it requires from the DM. Namely, it is important that the computational algorithm is easily understood by the DM. It is more important that she finds it easy and comfortable to provide the method with the necessary information. While the GDF and ZW methods both feature a fairly understandable algorithm, the ZW method features a remarkably simple dialogue because it requires only a ‘‘yes,’’ ‘‘no,’’ or ‘‘don’t know’’ answer, whereas many other methods necessitate specific numerical judgments. For example, the GDF method requires a marginal rate of substitution to trade-off among objective functions. Requiring such specific figures may impose a considerable burden on the DM, and frequently entails inconsistent judgments in the decision-making process (Tversky and Thaler, 1990). One more issue of rising importance addressed in this paper is whether a developed method can cope effectively with a class of nonlinear MOO problems, including ours. The ZW method was designed originally to solve linear problems, but it requires a simple choice-making type of information. The GDF method can effectively solve nonlinear problems, but requires a specific numerical judgment. Therefore, the principal idea underlying our development of an interactive method was to employ the ZW dialogue scheme and the GDF type of computation. Using the method developed herein, we solve our MOO problem, the operational design and the production target setting of new branches. It is also worth mentioning that there are various cases in which new branches are opened in a variety of different organizations or industries. Examples of this include restaurant branches, after-sale service offices, telecommunication service offices, and bank branches. 
Possible areas of application of our modeling theory and methodology are, therefore, numerous. Despite the importance of such input–output setting problems, no systematic approaches to these problems have been found in the literature.


We have also conducted a more thorough review of the advances made thus far in interactive methods relevant to ours. Zionts and Wallenius (1983) extended the original ZW method and made it possible to cope with nonlinear problems to some degree, via the approximation of nonlinear functions by linear ones. Roy and Wallenius (1992) subsequently extended these ZW methods to address nonlinear problems in a more direct fashion. However, their method, while flexible, necessitates the explicit use of a proxy value function when attempting to compute an efficient solution. As we will demonstrate in the application section, it may not prove possible to devise an improved solution, owing to a cycling problem arising during the solution process. Of course, Roy and Wallenius' method allows for switching the current proxy function to another if such a problem emerges. In practice, however, it may prove difficult to select an appropriate proxy function. Meanwhile, Sadagopan and Ravindran (1986) extended the GDF method and proposed two rough approaches for reducing the burden of its information requirements. One of these allows an upper and lower bound on the marginal substitution rate rather than an exact value, but still involves a numerical judgment. The other is a more interesting approach that allows for the ZW type of interaction within the GDF framework. However, its weakness lies in the identification of the direction vector along which a current solution is updated: it does not consider the magnitude and feasibility of direction vectors. In practical applications, speedy convergence to a preferred solution is a matter of vital importance, as it decreases the number of questions and thereby reduces the information burden on the DM. Additionally, there have been some attempts to combine several different interactive methods.
Gardiner and Steuer (1994) and Kaliszewski (2004) both emphasized the need to unify the interactive methods, and each devised different schemes for the hybridization of different methods. The principal objective of this hybrid approach is to enhance flexibility in providing preference information. As the existing methods require different types of information, this approach allows for switching from one type to another, depending on the DM's most preferred way of providing information during the solution process. Unlike the hybrid approach pursuing flexibility, our notion is to enhance ease in providing information. This is mainly because we believe that the choice-making type of information is easier to provide than numerical values. The remainder of the article is organized as follows. First, we describe the case problem, followed by the formulation of the MOO model. We then present our methodology and demonstrate its application to the MOO problem. Finally, this paper closes with a summary and a brief sketch of some further research opportunities.

2. Case example

In a nationwide fast-food restaurant company, all branches perform common tasks, namely the provision of food and beverage services to their customers. They each have two full-time equivalent managers and several part-time workers. Headquarters hires the branch managers, and makes decisions regarding advertising and the selection and supply of materials for every branch, in addition to the opening of new branches, a matter in which we are particularly interested. The branch managers hire part-time workers at their discretion, and are primarily concerned about the in-house management of employees, inventories, and service operations processes. All branches employ the same input factors to generate the same output factors. Table 1 summarizes the key input and output factors.



Table 1. Input and output factors in the case example.

Inputs:
- Unskilled worker: labor hours of workers except for the skilled workers
- Skilled worker: labor hours of workers who have more than 300 hours of experience in a similar business and are deemed to be skilful
- Operating cost: material costs and variable costs relevant to providing food and drink services

Outputs:
- Gross profit: sales minus operating cost and labor cost
- Product quality: food quality, including taste, savor, and freshness

The key inputs can be divided, roughly, into manpower and operating costs. Manpower represents part-time workers who serve customers in service contact points. In every branch, they are further classified into two groups, skilled workers and unskilled workers. The skilled workers are thought of as much more valuable resources than the unskilled workers in the service process, as they could provide customers with faster and more reliable service delivery, even when the branch is crowded and busy. Second, operating costs include the material costs and variable costs salient to the provision of food and beverage services. Labor costs are excluded because they are implicitly included in the workers. For the key outputs in Table 1, gross profit is the total sales minus the sum of operating costs and labor costs in a fiscal year. Next, product quality encompasses the taste, savor, and freshness of the product for sale. Headquarters provides all branches with the same menu, materials, and recipe for every food on the menu. Nevertheless, many branches evidence different quality in the food, which is caused primarily by the different degrees of adherence to the standard manual. The evaluation team at company headquarters thus evaluates this factor for all branches once per year, and assigns each branch a score between 0 and 100, based primarily on mystery shopping. The mystery shoppers hired by headquarters, who masquerade as customers, evaluate the overall quality of the food for each branch. To expand its market share, the company attempts to launch new branches every year and, in fact, opens several new branches. Once the location of a new branch is determined, the next step involves planning the quantity of inputs that need to be utilized and the quantity of outputs that must then be produced for the new branch. 
For example, the new branch managers determine how many workers need to be employed, and project their operating costs for the current year, prior to operating their branch. It is also important to anticipate the profits that can be earned by utilizing the planned resources during the year. Therefore, the problem we have is the simultaneous determination of the quantities of inputs and outputs for a new branch, a process which is referred to as the input–output setting. We have the year 2009 data of 72 branches for the input and output variables listed in Table 1. Why was data collected from only 72 branches out of more than 180 branch datasets? The input and output data of the existing branches depend heavily on their business situations, including market demands. Suppose that a new branch is similar in its business environment to a subset of the existing branches, like the 72 branches. These branch data were the result of their actual operations, adapting to the given environment. We can, therefore, regard the empirical data as appropriate resource uses and output productions in general, under the given environment. Therefore, we can employ the subset of observations as the admissible set from which the new branch can select a desirable input–output point.

3. Model formulation

How could we use the 72 branch data to address the problem of input–output setting? Because the data vary from one branch to another, the new branch managers may have numerous options when selecting from the dataset. We examine the possibility of reducing the original data set to a smaller set, using the concept of efficiency. Although the branches were operating in accordance with their own idiosyncratic business situations, some might prove efficient and some others might not. The typical definition of relative efficiency is the ratio of outputs to inputs, and thus relatively excessive uses of inputs and/or output shortfalls are reflective of inefficiency. From this perspective, none of the new branches would like to employ inefficient observations, and thus we exclude them from consideration as follows. As has been fairly well established, data envelopment analysis (DEA) is a linear programming approach to the evaluation of the relative efficiency of entities or organizations (Cooper et al., 2006). DEA may best be used to identify efficient branches for our case. However, many different DEA models exist, and they may generate different measures and different classifications of efficiency. We employed the free disposal hull (FDH) model (see the appendix of this paper; see also Cooper et al., 2006, pp. 117–120), evaluated the 72 branches, and determined that 41 branches were efficient. Table 2 shows the basic descriptive statistics of the 41 branch data for use in the MOO formulation. The FDH model compares actually observed branches alone and generates efficiency scores, whereas almost all other DEA models compare not only actual but also virtual branches, which are formed via linear combinations of two or more actual branches. In particular, the FDH model identifies the entire set of non-dominated branches as the efficient group.
However, other DEA models may exclude some of the non-dominated branches from the efficient group, which can be considered part of the decision candidates by the new branch managers. Now, employing the data shown in Table 2, we formulate a MOO model for the input–output setting of a new branch. In the case example, three inputs x = (x1, x2, x3) are utilized to produce two outputs y = (y1, y2). This can be viewed as an example of the multiple response design problem, which consists of multiple performance characteristics (outputs) that are influenced by a common set of multiple design factors (inputs). Although many approaches exist for the approximation of the relationship between outputs and inputs, response surface methodology is currently the most popular approach (Box and Draper, 1987; Myers and Montgomery, 1995). This methodology involves the approximation of the relationship between each output and the input variables via a quadratic function, as is shown in (1) and (2) below. The designer then attempts to determine the proper settings for the input variables that simultaneously optimize the estimated output functions, within a permissible region of the inputs.

Table 2. Descriptive statistics for the input–output data of efficient branches.

Factors (unit): Standard deviation / Minimum value / Maximum value
- Unskilled worker (x1: 100 hours labor/month): … / 0 / 12
- Skilled worker (x2: 100 hours labor/month): … / 1 / 10
- Operating cost (x3: 10,000 Won/month): … / 94 / 335
- Gross profit (y1: 10,000 Won/month): … / … / …
- Product quality (y2: out of 100 points): … / … / …

(Only the row and column labels of Table 2 survive in this copy; the input minima and maxima shown are recovered from the variable bounds of model (5).)
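The FDH screening that reduces the 72 observations to the 41 efficient branches amounts to a pairwise dominance filter: a branch is efficient if no other observed branch uses no more of every input while producing no less of every output, with at least one strict inequality. A minimal sketch in Python, with made-up data rather than the paper's dataset:

```python
import numpy as np

def fdh_efficient(inputs, outputs):
    """Return indices of FDH-efficient (non-dominated) observations.

    A branch is dominated if another branch uses no more of every input
    and produces no less of every output, with at least one strict
    inequality.
    """
    n = inputs.shape[0]
    efficient = []
    for i in range(n):
        dominated = False
        for j in range(n):
            if j == i:
                continue
            if (np.all(inputs[j] <= inputs[i]) and
                    np.all(outputs[j] >= outputs[i]) and
                    (np.any(inputs[j] < inputs[i]) or
                     np.any(outputs[j] > outputs[i]))):
                dominated = True
                break
        if not dominated:
            efficient.append(i)
    return efficient

# Illustrative data (three inputs, two outputs), not the paper's 72 branches
X = np.array([[5.0, 4.0, 200.0],
              [5.0, 4.0, 210.0],   # dominated: same outputs, higher cost
              [8.0, 2.0, 150.0]])
Y = np.array([[900.0, 80.0],
              [900.0, 80.0],
              [700.0, 85.0]])
print(fdh_efficient(X, Y))  # → [0, 2]
```

The full FDH model in the appendix also produces an efficiency score per branch; for the purpose of excluding dominated observations, the filter above yields the same efficient set, since FDH classifies exactly the non-dominated observations as efficient.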


Based on the response surface methodology and using multiple regression analysis, we estimated the following two output functions:

ŷ1(x) = 85.918 + 38.555 x1 + 1.018 x3 − 2.374 x1² + 0.668 x2² + 0.001 x3²   (1)

ŷ2(x) = 54.549 + 10.567 x2 − 0.085 x1² + 0.004 x1 x3 − 0.707 x2²   (2)

Putting (1)–(4) together, we can formulate the following MOO model:

max { f1(x) = ŷ1(x), f2(x) = ŷ2(x), f3(x) = −V1(x), f4(x) = −V2(x) }
s.t. 0 ≤ x1 ≤ 12, 1 ≤ x2 ≤ 10, 94 ≤ x3 ≤ 335   (5)
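Response-surface functions such as (1) can be estimated by ordinary least squares on a design matrix containing the retained quadratic terms. A sketch with synthetic stand-in data (the 41-branch observations are not reproduced in the paper), assuming the same terms as (1):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins for the 41 efficient-branch observations
x1 = rng.uniform(0, 12, 41)
x2 = rng.uniform(1, 10, 41)
x3 = rng.uniform(94, 335, 41)
# True relationship loosely patterned after Eq. (1), plus noise
y1 = 85.9 + 38.6 * x1 + 1.0 * x3 - 2.4 * x1**2 + 0.7 * x2**2 + rng.normal(0, 5, 41)

# Design matrix with the terms retained in Eq. (1):
# constant, x1, x3, x1^2, x2^2, x3^2
X = np.column_stack([np.ones(41), x1, x3, x1**2, x2**2, x3**2])
beta, _, _, _ = np.linalg.lstsq(X, y1, rcond=None)

resid = y1 - X @ beta
r2 = 1 - (resid @ resid) / ((y1 - y1.mean()) @ (y1 - y1.mean()))
print(beta.round(3), round(r2, 3))
```

In practice, insignificant terms would then be pruned (e.g., by stepwise regression), as the text recommends, before the fitted function is used in the MOO model.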
The coefficient of determination (R²) is 0.810 for (1) and 0.684 for (2). Both regression models are significant, with p-values of 0.000 as a result of ANOVA. All of the regression coefficients except for the constant terms are also significant, with p-values of less than 0.05. The two models represent the efficient frontiers for the two respective outputs, since only the efficient branch data are employed in these estimations. Note that, if one constructs such production frontiers using both efficient and inefficient data, then the constructed frontiers cannot be guaranteed to reflect efficiency. With regard to the construction of such models, one more lesson from the response surface methodology is that the presence of insignificant terms in a model increases the variance of an estimated output (Allen, 1974). Owing to the importance of the variance in the input–output setting (see the next two paragraphs), insignificant terms need to be removed from the model to whatever degree possible, using stepwise regression techniques or the method proposed by McKay (1977), who has previously elucidated the extent of the effects of including insignificant terms in regression models. The predicted functions in (1) and (2) each represent the expectations for the two outputs. Of course, they must be maximized within a certain boundary of interest, because of the nature of the two outputs (gross profit and product quality). However, without considering the prediction variances, we may arrive at a solution that maximizes the expectations but is unacceptable in terms of reliability. For a new branch, it is critically important to determine the input point that gives rise to a more reliable output point. Typically, a reliable solution means an input–output point that many branches of a similar business scale have customarily adopted during their operations.
The unusual data observed by few or notorious branches, while efficient, exemplify the unreliable solutions. The selection of a reliable solution would therefore prove more desirable and less risky from the perspective of the new branch. Recall the predicted function for the ith output, ŷi(x) = a xiᵀ, where a is the vector of estimated coefficients and xi is the vector of variables in the regression model. Let Xi be the model matrix corresponding to the variables in xi. Given a feasible input point x0, we denote by xi0 the xi vector evaluated at x0. Now define:

Vi(x0) = xi0 (XiᵀXi)⁻¹ xi0ᵀ

The prediction variance at x0 is known to be Vi(x0)σi², where the variance σi² is generally unknown but assumed to be constant (Draper and Smith, 1981). Therefore, the Vi measure indicates the impact of an input point upon the prediction variance of output i, which changes with changes in the input point. The input point x0 can be regarded as more reliable than another point x1 for output i when Vi(x0) is smaller than Vi(x1). Using the equation listed immediately above, we generate the following functions with regard to the prediction variances of the two outputs:

V1(x) = 0.568 − 0.092 x1 − 0.004 x3 + 0.051 x1² − 0.002 x2² + 0.001 x3² − 0.008 x1³   (3)

V2(x) = 0.549 − 0.396 x2 + … − 0.001 x1x… + … + 0.002 x1² x2   (4)
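The Vi values in (3) and (4) come from the standard scaled prediction-variance formula Vi(x0) = xi0(XiᵀXi)⁻¹xi0ᵀ. A sketch on a toy one-regressor model matrix (hypothetical numbers, not the paper's data):

```python
import numpy as np

def prediction_variance_factor(X_model, x0_row):
    """V(x0) = x0 (X^T X)^{-1} x0^T: the scaled prediction variance of a
    least-squares fit, evaluated at the model-term vector x0_row."""
    XtX_inv = np.linalg.inv(X_model.T @ X_model)
    return float(x0_row @ XtX_inv @ x0_row)

# Toy model matrix: intercept + one regressor observed at 4 design points
X = np.array([[1.0, 0.0],
              [1.0, 1.0],
              [1.0, 2.0],
              [1.0, 3.0]])
v_center = prediction_variance_factor(X, np.array([1.0, 1.5]))  # design center
v_edge   = prediction_variance_factor(X, np.array([1.0, 3.0]))  # design edge
print(v_center, v_edge)  # the variance factor grows away from the center
```

This is precisely why unusual input points, far from the bulk of the efficient observations, are penalized as unreliable in the model.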

The four objective functions are all nonlinear, but the set of constraints forms a polytope. The upper and lower bounds of all input variables in the constraints are provided by their maximum and minimum values in Table 2, respectively. The set of constraints may change according to the design scenarios or specific business situations encountered in the new branches.

Remark 1. For an input–output setting problem such as our case example illustrated in the previous section, one might consider the class of DEA model-based interactive MOO approaches (Joro et al., 2003; Halme et al., 1999). These approaches do not require the tasks we have carried out in this modeling section: eliminating the inefficient (or dominated) observations a priori and estimating regression functions such as (1)–(4). Rather, they commonly form a production possibility set based solely on the empirically observed data, as done in DEA, and seek to select a desirable input–output point in an interactive manner that incorporates the DM's preference information. While useful and still flexible, they do not consider the statistical nature of the data in terms of expectations and variances in their modeling stages. Therefore, their solutions cannot be guaranteed to reflect reliability. We have emphasized that the selection of a reliable solution is critically important for a new branch. An additional point to be noted is that one might use only the FDH model (without employing the subsequent statistical techniques) and provide the resulting efficient input–output data to the DM for her selection. However, the number of efficient data points could make it difficult to single out a final solution (in our case example, 41 out of 72 branches appear to be efficient). Moreover, no account would be taken of the statistical nature of the data.
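To make the later interactive steps concrete: given a tentative weight vector, a weighted-sum scalarization of (5) is an ordinary box-constrained nonlinear program. A sketch using only the fully stated functions (1) and (3), with assumed equal weights (in the actual method the weights come from the DM's tradeoff answers, not from a fixed choice):

```python
import numpy as np
from scipy.optimize import minimize

def y1_hat(x):
    # Eq. (1): expected gross profit
    x1, x2, x3 = x
    return (85.918 + 38.555 * x1 + 1.018 * x3
            - 2.374 * x1**2 + 0.668 * x2**2 + 0.001 * x3**2)

def v1(x):
    # Eq. (3): scaled prediction variance of the gross-profit model
    x1, x2, x3 = x
    return (0.568 - 0.092 * x1 - 0.004 * x3 + 0.051 * x1**2
            - 0.002 * x2**2 + 0.001 * x3**2 - 0.008 * x1**3)

w = (0.5, 0.5)  # assumed equal weights for the two objectives f1 and f3

def neg_scalarized(x):
    # maximize w1*f1 + w2*f3 = w1*y1_hat - w2*V1  ->  minimize the negative
    return -(w[0] * y1_hat(x) - w[1] * v1(x))

bounds = [(0, 12), (1, 10), (94, 335)]  # the box constraints of model (5)
res = minimize(neg_scalarized, x0=np.array([6.0, 5.0, 200.0]), bounds=bounds)
print(res.x.round(3))
```

With these weights the solver pushes x2 and x3 to their upper bounds and leaves x1 at an interior optimum, illustrating how a single weight vector collapses the multiobjective model into one solvable program.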

4. Methodology

Consider the following MOO problem:

max U[f(x)] = U[f1(x), f2(x), …, fk(x)]
s.t. x ∈ S = {x | Ax = b, x ≥ 0}   (6)


where x ∈ Rⁿ is a vector of decision variables, fi (i = 1, …, k) are continuously differentiable nonlinear objective functions to be maximized, and S ⊂ Rⁿ is the feasible region formed by m linear constraints (A is an m × n matrix, b ∈ Rᵐ) and the non-negativity condition (x ≥ 0). The ultimate objective of this problem is to maximize the value function of the DM, U: Rᵏ → R, subject to x ∈ S. Note that (5) is a special case of (6). Generally, the U function is not explicitly known. The earlier interactive approaches have assumed as their starting point that the DM has an implicit value function, also referred to as a proxy function. Examples include a linear function, a sum of exponentials, and a sum of powers (Oppenheimer, 1978). We employ a linear proxy (i.e., a weighted sum of all the fi). The linear proxy is one of the simplest functions and has been used most frequently, for example, in the original ZW method and its extensions cited in the introduction. If the linear function is regarded as a true or explicit value function, as in some of the non-interactive approaches, many issues can arise as to how to elicit and interpret the weights, among others (Miettinen, 1999: pp. 78–85). However, as in the previous interactive approaches, we do not regard the proxy as the



explicit value function, but do use it as a prototype of the value function to guide the search for a preferred solution. We now present our method to solve (6), taking full advantage of the GDF and ZW methods. Using the chain rule for the U function, we have

∇x U[f1(x), f2(x), …, fk(x)] = Σ (∂U/∂fi) ∇x fi(x)

where Σ signifies the summation from i = 1 to k. The resultant n equations can be rewritten as follows:

∂U/∂xj = Σ wi (∂fi/∂xj),   j = 1, …, n   (7)
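For concreteness, the reduced gradients that appear throughout this section (ri = ∇x fi(x) − ∇B fi(x)B⁻¹A, with zero components for the basic variables, as derived below) can be computed in a few lines; the matrices here are a hypothetical toy example, not drawn from the paper:

```python
import numpy as np

def reduced_gradient(grad_f, basic_idx, A):
    """Reduced gradient r = grad_f - grad_f[B] @ inv(B) @ A.

    grad_f    : gradient of one objective at the current solution (length n)
    basic_idx : indices of the m basic variables
    A         : m x n constraint matrix, partitioned as A = [B, N]
    """
    B = A[:, basic_idx]
    gB = grad_f[basic_idx]
    return grad_f - gB @ np.linalg.inv(B) @ A

# Toy example: 2 constraints, 4 variables; variables 0 and 1 are basic
A = np.array([[1.0, 0.0, 1.0, 2.0],
              [0.0, 1.0, 3.0, 1.0]])
g = np.array([2.0, 1.0, 0.5, -1.0])
r = reduced_gradient(g, [0, 1], A)
print(r.round(3))  # components for the basic variables come out zero
```

The nonbasic components of r are exactly the tradeoff figures later presented to the DM in Table 3.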

Weights w

w1 .. . wk Weighted

Objective functions f

f1 .. . fk sum

Basic variables xB

ri ¼ ½rBi ; rNi  ¼ rx fi ðxÞ  rB fi ðxÞB1 A ð8Þ

Therefore, the reduced gradients corresponding to the basic variables are all zero, and those for the nonbasic variables are expressed by rNfi(n)  rBfi(n)B1N. Table 3 shows a template into which these results are integrated, which is also useful in the next stage, in determining a weight vector via interaction with the DM. To determine the set of weights, w, Table 3 is presented to the DM. In practice, it should prove sufficient   to present the reduced gradients for the nonbasic variables rjN and the current solution (n), along with the objective function values (f(n) in the final column of Table 3). The ZW dialogue scheme can be employed in this stage. It recommends presenting only efficient or non-dominated vectors of the rjN vectors to the DM. For example, let the two vec-

Nonbasic variables xN







0 .. . 0 0

... .. . ... ...

0 .. . 0 0

r1(m+1) .. . rk(m+1)

... .. . ... ...

r1n .. . rkn Rw i rin

R wiri(m+1)


Given a linear proxy, we could denote wi = @U/@fi > 0 for all i. This term means the change in U according to the change in fi, and hence the extent to which the ith objective function contributes to the overall value of all objective functions. Therefore, we employ wi as the importance weight of objective function i, and denote w = (w1, . . . , wk) > 0. The second term, @fi/@xj, is a measure of the change in fi per unit change in decision variable xj, and hence is similar to the meaning of the reduced cost in linear programming, but referred to as the reduced gradient in nonlinear programming. Therefore, (7) implies that the change in xj affects the objective function values, thus altering the overall value. The architecture of our interactive method is as follows. Given a feasible solution xh 2 S at iteration h, we calculate the vectors of reduced gradients rj = (r1j, . . . , rkj) for all xj, where rkj dictates the change in fk per unit increase in xj. We present these data to the DM. She then determines whether each xj value should increase or decrease to improve her value function. Based on (7), we form a series of restrictions such that wrj > 0 for the xj to be increased (meaning that @U/@xj > 0) and wrj < 0 for the xj to be decreased (meaning that @U/@ xj < 0). We then compute the w vector satisfying all the restrictions and, further, calculate a good and usable direction vector to move from the current solution xh to the next solution xh + 1 2 S. This series of steps is repeated until the DM finally grasps a preferred solution. How are the reduced gradients calculated? Let x = n 2 S be a feasible solution. The x vector can be decomposed into x = [xB, xN], where xB is the set of m basic variables and xN the set of n–m nonbasic variables. We assume that nB > 0 and nNP 0. Similarly, the A matrix in (6) is partitioned as A = [B, N] such that BxB + NxN = b. 
In the same way, let rxfi(x) = [rBfi(x), rNfi(x)], where rBfi(x) is the vector of partial derivatives of fi(x) with respect to xB, and rNfi(x) the vector of those of fi(x) with respect to xN. Denote the vector of reduced gradients of fi(x) by ri = (ri1, . . . , rin). Note that the ri vector corresponds to the objective function-oriented arrangements of the individual reduced gradients, whereas the rj vector (defined in the immediately above paragraph) is a result of the decision variable-oriented arrangements. The ri vector can also be decomposed as ri = [rBi, rNi], where rBi and rNi are respectively associated with xB and xN. This is then calculated as follows (Bazaraa et al., 2006):

ri = [rBi, rNi] = [0, ∇N fi(x) - ∇B fi(x) B^{-1} N]    (8)
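For concreteness, (8) can be evaluated numerically. The sketch below (Python with NumPy; the function name is ours) computes the nonbasic part of the reduced gradient, solving with the basis matrix rather than forming its inverse:

```python
import numpy as np

def reduced_gradient_N(grad_B, grad_N, B, N):
    """Nonbasic part r_Ni of (8): grad_N f_i(x) - grad_B f_i(x) B^{-1} N.
    The basic part r_Bi is identically zero."""
    # Solve B^T y = grad_B instead of forming B^{-1} explicitly.
    y = np.linalg.solve(B.T, grad_B)
    return grad_N - y @ N

# Tiny example: one constraint x1 + x2 = 2 (x1 basic), f(x) = 3 x1 + x2.
B = np.array([[1.0]])
N = np.array([[1.0]])
r_N = reduced_gradient_N(np.array([3.0]), np.array([1.0]), B, N)
# Increasing the nonbasic x2 by one unit (adjusting x1 to stay feasible)
# changes f at rate r_N[0] = 1 - 3*1 = -2.
```

Solving with the basis rather than inverting it is the standard numerically stable choice; the arithmetic is otherwise exactly that of (8).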

Table 3 Presentation template of the reduced gradients at the current solution (one row per objective value f1(ξ), . . . , fk(ξ); one column per nonbasic variable, plus the current objective values f(ξ)).

For example, consider two tradeoff vectors, r_N^1 = (3, -1) and r_N^2 = (1, -1), reflecting tradeoffs between two objective functions. The first one dominates (or is more efficient than) the second one: the speed of increment in the first objective function is three times faster for a unit increase in x1 than for a unit increase in x2, whereas the speed of decrement in the second objective function remains identical in both cases. (See Zionts and Wallenius (1980) for the method of identifying such efficient vectors in more detail.) For each of the presented vectors, the DM is asked to answer whether or not the tradeoff is preferred. Her responses to the tradeoff questions subsequently generate the following restrictions:

w r_N^j ≥ ε,  if the tradeoff vector r_N^j is preferred;
w r_N^j ≤ -ε,  if the tradeoff vector r_N^j is not preferred    (9)

where ε signifies a small positive number. When the DM answers ''don't know'' to a tradeoff vector, the sign of w r_N^j is unclear. For this and several other reasons, the ZW method removes this category of responses from consideration. See Malakooti (1985) for different dialogue schemes. Using all of the generated restrictions, the following linear programming problem can be constructed (Roy and Wallenius, 1992):




max ε
s.t. all the restrictions generated so far in the form of (9),
Σ wi = 1,  wi ≥ ε, ∀i    (10)
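As an illustration (assuming SciPy is available; the variable layout and solver choice are ours), the instance of (10) that arises at the first iteration of the case study below, with two ''preferred'' restrictions and no ''not preferred'' ones, can be solved as:

```python
import numpy as np
from scipy.optimize import linprog

# Variables: z = (w1, w2, w3, w4, eps); maximize eps <=> minimize -eps.
preferred = [np.array([38.555, 0.376, 0.092, 0.094]),
             np.array([1.336, 9.153, 0.004, 0.122])]

A_ub, b_ub = [], []
for r in preferred:            # w.r >= eps  <=>  -w.r + eps <= 0
    A_ub.append(np.append(-r, 1.0)); b_ub.append(0.0)
for i in range(4):             # w_i >= eps  <=>  -w_i + eps <= 0
    row = np.zeros(5); row[i] = -1.0; row[4] = 1.0
    A_ub.append(row); b_ub.append(0.0)

res = linprog(c=[0, 0, 0, 0, -1],                  # maximize eps
              A_ub=A_ub, b_ub=b_ub,
              A_eq=[[1, 1, 1, 1, 0]], b_eq=[1],    # sum(w) = 1
              bounds=[(0, None)] * 5)
w, eps = res.x[:4], res.x[4]
# With only "preferred" restrictions binding nowhere, maximizing eps
# subject to w_i >= eps and sum(w) = 1 spreads the weights evenly:
# w = (0.25, 0.25, 0.25, 0.25), eps = 0.25.
```

A ''not preferred'' restriction would add a row of the form `np.append(r, 1.0) <= 0`, i.e. w·r + ε ≤ 0; an optimal ε of zero (or infeasibility, if ε is left free) signals inconsistent responses.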

Insofar as the optimal ε value is positive, a consistent set of weights is thereby determined as the result of the cumulative use of the preference information. In cases in which the DM provides a ''don't know'' answer to all the presented tradeoff vectors, no new vector of weights is yielded, and hence the current solution may become the final solution. To update the solution ξ = (ξ1, . . . , ξn) along an improving and feasible direction, d = (d1, . . . , dn), we use the w vector derived from (10). According to the partition x = [xB, xN] at ξ, the partition of d yields d = [dB, dN]. Because ∇x U[f(x)] = Σ wi ∇x fi(x), the d vector must satisfy both of the following conditions:

(a) Σ wi ∇x fi(ξ) d > 0  (improving direction)
(b) A d = 0, and dj ≥ 0 if ξj = 0  (feasible direction)

To fulfill Ad = 0, we write Ad = [B, N][dB, dN] = B dB + N dN = 0, thus yielding

dB = -B^{-1} N dN    (11)

Regarding (a), Σ wi ∇x fi(ξ) d = Σ wi [∇N fi(ξ) - ∇B fi(ξ) B^{-1} N] dN = [Σ wi rNi] dN. Therefore, we can determine the values of the individual dj variables in dN as follows:

dj = w r_N^j,  if w r_N^j ≥ 0
dj = w r_N^j,  if w r_N^j < 0 and ξj > 0
dj = 0,        if w r_N^j < 0 and ξj = 0    (12)
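Rules (12) and (11) can be sketched as follows. The example reproduces the first-iteration direction of the case study below (basis xB = (x2, x3, x4, x6, x8), tradeoff data as in Table 4, with the f3 entry of the x7 vector taken as negative); all names are ours:

```python
import numpy as np

def direction(w, r_N_list, xi_N, B, Nmat):
    """d_N via rule (12), then d_B = -B^{-1} N d_N via rule (11)."""
    d_N = np.empty(len(xi_N))
    for j, r in enumerate(r_N_list):
        s = w @ r
        # d_j = w r_N^j unless that is negative at a lower bound xi_j = 0.
        d_N[j] = 0.0 if (s < 0 and xi_N[j] == 0) else s
    d_B = -np.linalg.solve(B, Nmat @ d_N)
    return d_B, d_N

# Columns of the constraint matrix of (15), arranged as [B | N].
B = np.array([[0, 0, 1, 0, 0],      # basis columns: x2, x3, x4, x6, x8
              [1, 0, 0, 0, 0],
              [1, 0, 0, 1, 0],
              [0, 1, 0, 0, 0],
              [0, 1, 0, 0, 1]], dtype=float)
Nmat = np.array([[1, 0, 0],         # nonbasic columns: x1, x5, x7
                 [0, -1, 0],
                 [0, 0, 0],
                 [0, 0, -1],
                 [0, 0, 0]], dtype=float)
w = np.array([0.25, 0.25, 0.25, 0.25])
r_N_list = [np.array([38.555, 0.376, 0.092, 0.094]),   # x1
            np.array([1.336, 9.153, 0.004, 0.122]),    # x5
            np.array([1.206, 0.0, -0.184, 0.0])]       # x7
xi_N = np.array([0.0, 0.0, 0.0])
d_B, d_N = direction(w, r_N_list, xi_N, B, Nmat)
# d_N ~ (9.779, 2.654, 0.256) for (x1, x5, x7), and
# d_B ~ (2.654, 0.256, -9.779, -2.654, -0.256) for (x2, x3, x4, x6, x8).
```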


K.S. Park, D.E. Shin / European Journal of Operational Research 220 (2012) 530–538

The d = [dB, dN] vector determined using (12), first, and then (11) fulfills all the conditions in (a) and (b). Finally, we address the problem of how far to move from the current solution ξ in the direction d. This concept is employed in the computational part of the GDF method. We consider the following line search problem:

max U[f(ξ + s d)]  s.t. 0 ≤ s ≤ s_max    (13)


where s_max = min{-ξj/dj | dj < 0, j = 1, . . . , n}. Since the explicit form of U remains unknown, the choice of step size s necessitates an interaction with the DM. In the GDF method, this is conducted such that, for various possible s values, the DM is presented with a set of calculated objective value vectors, {f(ξ + s d)}, and asked to select an appropriate s value. We pursued this method, but encountered a practical problem: the s_max value varies from one solution to another (and may sometimes be a very large number), which renders the presentation of {f(ξ + s d)} difficult. Note that the line search problem in the GDF method is designed to restrict the possible step sizes to [0, 1]. However, denoting z = s_max d, we can reduce (13) to

max U[f(ξ + t z)]  s.t. 0 ≤ t ≤ 1    (14)


so that the step size t lies between 0 and 1. We refer to z as the rescaled direction of d. An appropriate choice of t yields the improved new solution, ξ + t z.

5. Summary and discussion

Our interactive procedure can be summarized as follows:

Step 1: Choose an initial solution, x^1 ∈ S, and set the iteration number to h = 1.
Step 2: Decompose the decision variables x, referring to the current solution x^h, into the basic and nonbasic parts, x = [xB, xN]. Similarly, partition A and ∇x f(x) as A = [B, N] and ∇x f(x) = [∇B f(x), ∇N f(x)]. Calculate the vector of reduced gradients for the nonbasic variables in xN, rNi, using (8) for all i.
Step 3: Present the DM the rNi values in the form of the (efficient) r_N^j vectors and, for each of these tradeoff vectors, ask the DM whether or not the tradeoff is preferred. Keep all the responses in the form of (9) but, in the case of a ''don't know'' response, exclude it from consideration. If the DM's response to all the tradeoff questions is ''don't know,'' then stop; the current solution may become the final solution.
Step 4: Construct and solve the linear program in (10) to obtain a weight vector, w.
Step 5: Denote an improving and feasible direction by d = (d1, . . . , dn), and let d = [dB, dN] according to the partition of x = [xB, xN] in Step 2. Calculate the dN vector using (12) and the dB vector using (11). Calculate the rescaled direction of the d vector, z = s_max d, using the definition of s_max in (13). Using the concept of (14), prepare a set of objective value vectors, {f(x^h + t z)}, for various possible values of t ∈ [0, 1].
Step 6: Present {f(x^h + t z)} to the DM, and ask her to select an optimal t value, t*. Then, x^{h+1} = x^h + t* z becomes the new improved solution and f(x^{h+1}) the new vector of objective values. If the DM is satisfied with the new solution, then stop. Otherwise, set h = h + 1 and return to Step 2.

As such, interactions with the DM occur in Steps 3 and 6.
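The rescaling in Step 5 can be sketched as follows (Python; the data are the first-iteration solution and direction of the case study below, with the signs of the direction components written out explicitly):

```python
import numpy as np

def rescale(xi, d):
    """s_max keeps xi + s d >= 0; only coordinates with d_j < 0 can bind."""
    s_max = min(-xi[j] / d[j] for j in range(len(d)) if d[j] < 0)
    return s_max, s_max * d

xi1 = np.array([0, 1, 94, 2.68, 0, 4.07, 0, 80.15])
d = np.array([9.779, 2.654, 0.256, -9.779, 2.654, -2.654, 0.256, -0.256])
s_max, z = rescale(xi1, d)
# s_max = 2.68 / 9.779 ~ 0.274, bound by x4 reaching zero; the DM then
# inspects f(xi1 + t z) for a grid of t in [0, 1] and picks t*.
```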
The other steps consist of rote calculations and only one linear programming step (in Step 4). The ZW dialogue scheme is employed


in Step 3, and the choice-making style of interaction in the GDF method is employed in Step 6. No specific preference figures are required. Therefore, we believe that our method, while dealing with a nonlinear MOO problem, features not only simple computation but also comfortable interaction with the DM. Moreover, as mentioned in the introduction, this comfortable interaction helps the DM provide consistent answers, which is one of the most important aspects of any interactive method. Basically, every method assumes that the DM's responses will be consistent during the solution process. Despite all this, what if the DM provides an inconsistent response? Our method detects this in Step 4, as is evident from the lack of a feasible weight solution in the linear program, and can allow the DM to correct it by going back and forth between Steps 3 and 4. If all the objective functions in (6) are assumed to be concave, then our method guarantees a preferred Pareto optimal solution within a finite number of iterations. This assumption of concavity implies the concavity of the U function due to the use of a linear proxy. This condition (coupled with the linearity of the constraints) ensures the convergence of the reduced gradient method to a Kuhn-Tucker point (Bazaraa et al., 2006: pp. 602-613). It follows, then, that the convergence of our method to a Kuhn-Tucker point is guaranteed, because the computational steps in our method are based on the reduced gradient method. Furthermore, Roy and Wallenius (1992) have demonstrated that an interactive method in which the ZW type of interaction and a linear proxy are employed can terminate finitely with a preferred solution among a set of Pareto solutions. Our method falls into this category of interactive methods. The nonlinear MOO problem with which we are currently dealing is composed of nonlinear objective functions and linear constraints.
However, the extension of our method to problems with nonlinear constraints is quite possible, while maintaining the conceptual framework and structure of our procedures. That is, only some of the computational components (in Steps 2 and 5) must be modified to treat a set of nonlinear constraints. This can be accomplished by replacing the reduced gradient method with the generalized reduced gradient method (Bazaraa et al., 2006: pp. 602-613). However, we did not pursue this further, mainly because no nonlinear constraints are involved in our case problem. See, rather, the studies of Sadagopan and Rivindran (1986) and Roy and Wallenius (1992) as to how the generalized reduced gradient method can be employed to solve a general version of nonlinear MOO problems. We recall the MOO problem in (5) and modify its constraints slightly, as follows:

max  U[f(x)] = U[f1(x), f2(x), f3(x), f4(x)]
s.t.  x1 + x4 = 2.68
      x2 - x5 = 1
      x2 + x6 = 5.07
      x3 - x7 = 94
      x3 + x8 = 174.15
      x = (x1, . . . , x8) ≥ 0    (15)

Five slack variables, x4 through x8, are introduced, and the upper bounds of the original (or structural) variables are changed from their maxima to the means listed in Table 2. It is assumed that the new branch should begin its operation with a relatively small input size. The set of constraints in (15) limits its business size up to the average size of the relevant set of observations. Again, such a design scenario may change across different branch openings, depending on their specific business situations. Our interactive procedure is now applied to the problem in (15), as follows:



Table 4 Reduced gradients for h = 1.

         x1        x5        x7      f(x^1)
f1    38.555     1.336     1.206    19.278
f2     0.376     9.153     0        64.409
f3     0.092     0.004    -0.184     9.026
f4     0.094     0.122     0         0.282

Table 6 Reduced gradients for h = 2.

         x4        x6        x8      f(x^2)
f1   -25.830    -2.308    -1.206   106.964
f2    -0.079    -8.125    -0.011    71.090
f3    -0.009    -0.007     0.184     9.001
f4    -0.076     0.158    -0.003     0.059
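The screening that drops the x7 tradeoff vector at h = 1 can be sketched as a pairwise dominance check, with the f3 entry of the x7 vector taken as negative. (The full efficient-vector test of Zionts and Wallenius (1980) is itself an LP; pairwise dominance is only a sufficient screen.)

```python
import numpy as np

def dominates(a, b):
    """a dominates b: componentwise no worse, strictly better somewhere
    (all objectives oriented so that larger is better)."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    return bool(np.all(a >= b) and np.any(a > b))

r_x5 = (1.336, 9.153, 0.004, 0.122)
r_x7 = (1.206, 0.0, -0.184, 0.0)
# r_x5 dominates r_x7 in every component, so the x7 tradeoff vector is
# inefficient and is dropped before questioning the DM.
```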

First iteration (h = 1)

Step 1: As an initial solution, we begin with the lower bounds of the three structural variables, so that x^1 = (0, 1, 94, 2.68, 0, 4.07, 0, 80.15). Then, f^1 = [f1(x^1), . . . , f4(x^1)] = [19.28, 64.41, 9.03, 0.28].

Step 2: Referring to solution x^1, the variable vector x is decomposed into xB = (x2, x3, x4, x6, x8) and xN = (x1, x5, x7). For the three nonbasic variables, the reduced gradient vectors are calculated and recorded in Table 4.

Step 3: The data in Table 4 are presented to the new branch managers (the DM), with the questions of whether or not each of the tradeoff vectors is preferred. In fact, the current solution is not a Pareto solution, because all of the objective values could be improved by increasing the x1 and/or x5 values. For these two variables, the DM would apparently say ''yes'' after being notified that the two tradeoff vectors are both efficient. The tradeoff vector of x7 is inefficient, and hence is dropped from consideration.

Step 4: We then construct the linear program,

max ε
s.t. 38.555 w1 + 0.376 w2 + 0.092 w3 + 0.094 w4 ≥ ε
     1.336 w1 + 9.153 w2 + 0.004 w3 + 0.122 w4 ≥ ε
     w1 + w2 + w3 + w4 = 1
     w1, w2, w3, w4 ≥ ε

and solve it to obtain the weight vector, w = (0.25, 0.25, 0.25, 0.25).

Step 5: The direction vector is calculated as d = (9.779, 2.654, 0.256, -9.779, 2.654, -2.654, 0.256, -0.256), together with s_max = 0.274. We then have z = s_max d and, for various possible values of t ∈ [0, 1], prepare a series of objective value vectors, as shown in Table 5.

Table 5 Sample of the possible objective values for h = 1.

 t        f1        f2       f3      f4
0.0     19.278    64.409   9.026   0.282
0.1     29.549    65.166   9.006   0.249
0.2     39.487    65.903   8.992   0.218
 .        .         .        .       .
0.8     92.098    69.911   8.992   0.082
0.9     99.698    70.510   8.997   0.069
1.0    106.964    71.090   9.001   0.059

Step 6: We present Table 5 to the DM and ask her to choose the most preferred vector of objective values. As can be readily seen, as the t value increases, all objective function values continue to improve, except for the minimal change in f3. Therefore, t* = 1.0, and the improved new solution becomes x^2 = x^1 + t* z = (2.68, 1.727, 94.07, 0, 0.727, 3.343, 0.07, 80.08), along with f^2 = [106.964, 71.09, 9.001, 0.059]. A comparison of f^1 and f^2 shows a noticeable improvement in f1, f2, and f4, but only a minor improvement in f3. Note that f1 and f2 (representing expectations) are to be maximized, whereas f3 and f4 (variances) are to be minimized. However, the demonstration of this improvement alone would be insufficient to stop, and the DM would like to know what changes are further possible in the objective values. If the possible changes are undesirable, then the DM may stop. Otherwise, she will continue to seek a more preferable solution.
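As a numerical check of this iteration, the constraint data of (15) can be written out and both the initial solution and the updated solution verified to satisfy Ax = b:

```python
import numpy as np

# Coefficient matrix and right-hand side of the constraints in (15).
A = np.array([
    [1, 0, 0, 1,  0, 0,  0, 0],   # x1 + x4 = 2.68
    [0, 1, 0, 0, -1, 0,  0, 0],   # x2 - x5 = 1
    [0, 1, 0, 0,  0, 1,  0, 0],   # x2 + x6 = 5.07
    [0, 0, 1, 0,  0, 0, -1, 0],   # x3 - x7 = 94
    [0, 0, 1, 0,  0, 0,  0, 1],   # x3 + x8 = 174.15
], dtype=float)
b = np.array([2.68, 1.0, 5.07, 94.0, 174.15])

x1 = np.array([0, 1, 94, 2.68, 0, 4.07, 0, 80.15])                 # initial
x2 = np.array([2.68, 1.727, 94.07, 0, 0.727, 3.343, 0.07, 80.08])  # t* = 1.0
# Both satisfy A x = b (up to the rounding of the reported figures),
# so the line search stays within the feasible set S.
```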

Second iteration (h = 2)

Steps 2 and 3: Given x^2, let xB = (x1, x2, x3, x5, x7) and xN = (x4, x6, x8). Table 6 shows the calculated reduced gradients for xN. (This confirms that the current solution x^2 is a Pareto solution in the actual objective function space, because it is impossible to improve all the objective values simultaneously by means of any changes in xN.) None of these tradeoff vectors would be desirable in itself. If the x6 value decreases, then we might expect a visible improvement in f1 and f2, but a slight deterioration in f4 (say ''scenario 1''). If the x8 value decreases, we might expect to see a visible improvement in f1, but a slight deterioration in f3 (''scenario 2''). Of course, the two scenarios are each subject to the changes that are possible in the other variables. Now suppose that the DM prefers scenario 1 to scenario 2 in this iteration.

Step 4: We then construct the linear program,

max ε
s.t. 38.555 w1 + 0.376 w2 + 0.092 w3 + 0.094 w4 ≥ ε
     1.336 w1 + 9.153 w2 + 0.004 w3 + 0.122 w4 ≥ ε
     -2.308 w1 - 8.125 w2 - 0.007 w3 + 0.158 w4 ≤ -ε
     w1 + w2 + w3 + w4 = 1
     w1, w2, w3, w4 ≥ ε

and solve it to obtain w = (0.25, 0.25, 0.25, 0.25).

Table 7 Sample of the possible objective values for h = 2.

 t        f1        f2       f3      f4
0.0    106.964    71.090   9.001   0.059
0.1    107.850    73.727   9.004   0.135
0.2    108.886    76.206   9.008   0.262
0.3    110.071    78.527   9.010   0.445
0.4    111.406    80.690   9.013   0.689
0.5    112.890    82.696   9.015   0.999
0.6    114.523    84.543   9.016   1.383
0.7    116.305    86.232   9.017   1.845
0.8    118.237    87.763   9.018   2.394
0.9    120.318    89.137   9.018   3.035
1.0    122.548    90.352   9.017   3.777

Steps 5 and 6: The direction vector is calculated as d = (0, 2.57, 0.259, 0, 2.57, -2.57, 0.259, -0.259), along with s_max = 1.301. We then calculate z = s_max d and prepare Table 7. As the t value increases, we observe continuous improvements in f1 and f2, minor changes in f3, and a slight deterioration in f4. If the DM selects t* = 0.7, then the improved new solution becomes x^3 = x^2 + t* z = (2.68, 4.067, 94.306, 0, 3.067, 1.003, 0.306, 79.844) and f^3 =



[116.305, 86.232, 9.017, 1.845]. If x^3 is the final solution, the most preferred or optimal input–output setting becomes (x1, x2, x3) = (2.68, 4.067, 94.306) and (y1, y2) = (116.305, 86.232). These output quantities are statistically anticipated, with predicted variances of (9.017, 1.845), if the determined levels of inputs are utilized.

Remark 2. We note the downside of the approach devised by Roy and Wallenius (1992). It necessitates (in its second step) the construction of a specific U function, and also requires solving the problem of maximizing U subject to x ∈ S, in order to generate an efficient solution for use in the interaction step. Given a linear proxy, the weight vector w becomes the set of parameters for the complete formation of the U function. This implies that the efficient solution does not change unless the w vector changes. As is apparent in Step 4 of this application, the w vector remains the same even though the iteration number h increases from 1 to 2. As the number of iterations increases, more preference information is accumulated, and a more preferable solution is demanded despite the identical w vector, as shown via our approach. We have experienced many such cases of the same w vector occurring during the solution of problem (15) for various design scenarios. Therefore, in their method, given a linear proxy, cycling may frequently occur. Although their method also allows for switching the current proxy to another when such cycling arises, the selection of an appropriate proxy function is rather difficult in practice.

6. Conclusion

We have demonstrated the input–output setting problem for the new branches of a fast-food restaurant company. The entire process, from initial modeling to the actual solving of the problem, is shown in great detail. In the modeling stage, a DEA model (i.e., the FDH model) and a statistical method (response surface methodology) are subsequently utilized to formulate a nonlinear MOO model.
The DEA model is applied to a set of the input–output data generated from the existing branches, in order to identify inefficient data and drop them from further consideration. Relatively excessive uses of inputs and/or output shortfalls are reflective of inefficiency, and hence no new branch would want to adopt inefficient observations. The set of efficient data is then compiled to form a feasible set, from which the new branch can select a desirable input–output point. However, the size of the efficient dataset makes it difficult to single out a final solution. Furthermore, the statistical nature of the data needs to be considered in order to achieve a desirable solution that maximizes the expected output functions and minimizes their variances simultaneously. We have emphasized that the selection of a reliable solution (obtainable from minimizing the variance functions) is particularly important for a new branch. Therefore, based only on the efficient observations, the response surface methodology is employed to model the expected output functions and their variance functions, in the form of functional equations between each of the output variables and the input variables. These form the objective functions to be maximized or minimized in the MOO model. To solve the formulated MOO problem, we have developed an enhanced interactive MOO method that takes full advantage of the GDF and ZW methods, in addition to nonlinear programming theories. This overcomes many of the drawbacks associated with the relevant interactive methods. In terms of computation, while dealing with a nonlinear MOO problem, the method consists of simple calculations and only one linear programming step. Its interaction steps are characterized by a show-and-selection dialogue, in that they first present several options to the DM, and then ask her to select one of them or to answer ''yes,'' ''no,'' or ''don't know'' to each of them. Therefore, we believe that our method features not only simple computation but also comfortable interaction with the DM.

The procedure of opening new branches is relevant to many different organizations and industries, including restaurant branches, after-sale service offices, telecommunication service offices, and bank branches. The possible areas in which our modeling idea and methods can be applied are, therefore, numerous. Some further research opportunities are as follows. In our modeling stage, we have considered both the mean of the outputs, through a regression function, and the associated uncertainty, via a variance measure. The consideration of correlations among the outputs would constitute another issue of uncertainty in the input–output setting. Next, we have dealt with a MOO problem in which the objective functions are nonlinear and the constraints are linear. An extension to a set of nonlinear constraints would be an interesting topic for future research. Finally, for the input–output setting, it should also prove interesting to develop a user-friendly software package, or a decision support system in general, that can help the DM formulate, solve, and interpret the solution in an interactive way.

Acknowledgments

This research was supported by the National Research Foundation of Korea, Grant No. KRF-2009-327-B00179. We thank Professor Jean-Charles Billaut, the Editor, and anonymous referees for their helpful comments and suggestions, which helped to improve this paper.

Appendix A

The free disposal hull (FDH) model used in this paper is as follows:



min  θ - ε(Σ_{i=1}^{m} s_i^- + Σ_{r=1}^{s} s_r^+)
s.t. Σ_{j=1}^{n} x_ij λ_j + s_i^- = θ x_io,  i = 1, . . . , m
     Σ_{j=1}^{n} y_rj λ_j - s_r^+ = y_ro,  r = 1, . . . , s
     Σ_{j=1}^{n} λ_j = 1
     s_i^-, s_r^+, λ_j ≥ 0, ∀i, r, j
     λ_j ∈ {0, 1}, ∀j

Here, xij and yrj respectively represent the amounts of the ith input (i = 1, . . . , m) and the rth output (r = 1, . . . , s) for the jth branch (j = 1, . . . , n). The xio, yro data represent the inputs and outputs of branch o, the branch to be evaluated. The symbol ε > 0 is a non-Archimedean infinitesimal. Next, the decision variable θ ≤ 1 represents the efficiency rating of branch o. The s_i^-, s_r^+ variables are the slacks for the inputs and outputs of branch o, respectively. The λj variables are the convex combinators for the xij data and for the yrj data. If the optimal θ value equals unity and all the optimal slacks equal zero, then branch o is efficient, or non-dominated. Otherwise, it is inefficient. Finally, we note that the FDH model has the bivalent condition, λj ∈ {0, 1} for all j, as in the last constraint. This guarantees that the FDH model identifies the entire set of non-dominated branches as the efficient group. If the bivalent condition



is removed from the model, then it becomes an ordinary DEA model. For more details about the DEA and FDH models, refer to Cooper et al. (2006).
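Because of the bivalent condition, FDH efficiency classification reduces to a pairwise non-dominance test against the other branches. A minimal sketch with toy data (names and data are ours):

```python
import numpy as np

def fdh_efficient(X, Y, o):
    """Branch o is FDH-inefficient iff some other branch uses no more of
    every input and produces no less of every output, strictly better in
    at least one dimension."""
    for j in range(len(X)):
        if j != o and np.all(X[j] <= X[o]) and np.all(Y[j] >= Y[o]) \
                and (np.any(X[j] < X[o]) or np.any(Y[j] > Y[o])):
            return False
    return True

# Toy data: rows = branches; 2 inputs, 1 output.
X = np.array([[2, 3], [2, 2], [4, 1]])
Y = np.array([[5], [5], [4]])
# Branch 0 is dominated by branch 1 (same output, less of input 2);
# branches 1 and 2 are FDH-efficient.
```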

References

Allen, D.M., 1974. The relationship between variable selection and data augmentation and a method for prediction. Technometrics 16, 125–127.
Bazaraa, M.S., Sherali, H.D., Shetty, C.M., 2006. Nonlinear Programming: Theory and Algorithms. John Wiley & Sons, New Jersey.
Benayoun, R., Montgolfier, J., Tergny, J., Larichev, O., 1971. Linear programming with multiple objective functions: step method (STEM). Mathematical Programming 1, 366–375.
Box, G.E.P., Draper, N.R., 1987. Empirical Model Building and Response Surfaces. Wiley, New York.
Chinchuluun, A., Pardalos, P.M., 2007. A survey of recent developments in multiobjective optimization. Annals of Operations Research 154, 29–50.
Cooper, W.W., Seiford, L.M., Tone, K., 2006. Data Envelopment Analysis: A Comprehensive Text with Models, Applications, References and DEA-Solver Software. Kluwer Academic Publishers, Boston.
Draper, N.R., Smith, H., 1981. Applied Regression Analysis. John Wiley & Sons, New York.
Gardiner, L.R., Steuer, R.E., 1994. Unified interactive multiple objective programming. European Journal of Operational Research 74, 391–406.
Geoffrion, A.M., Dyer, J.S., Feinberg, A., 1972. An interactive approach for multicriterion optimization, with an application to the operation of an academic department. Management Science 19, 357–368.
Goicoechea, A., Hansen, D.R., Duckstein, L., 1982. Multiobjective Decision Analysis with Engineering and Business Applications. John Wiley & Sons, New York.
Haimes, Y.Y., Chankong, V., 1985. Decision Making with Multiple Objectives. Lecture Notes in Economics and Mathematical Systems, vol. 242. Springer-Verlag, Berlin.
Halme, M., Joro, T., Korhonen, P., Salo, S., Wallenius, J., 1999. A value efficiency approach to incorporating preference information in data envelopment analysis. Management Science 45, 103–115.
Joro, T., Korhonen, P., Zionts, S., 2003. An interactive approach to improve estimates of value efficiency in data envelopment analysis. European Journal of Operational Research 149, 688–699.
Kaliszewski, I., 2004. Out of the mist – towards decision-maker-friendly multiple criteria decision making support. European Journal of Operational Research 158, 293–307.
Korhonen, P., Laakso, J., 1986. A visual interactive method for solving the multiple criteria problem. European Journal of Operational Research 24, 277–287.
Korhonen, P., Moskowitz, H., Wallenius, J., 1992. Multiple criteria decision support: a review. European Journal of Operational Research 63, 361–375.
Malakooti, B., 1985. Assessment through strength of preference. Large Scale Systems: Theory and Applications 8, 169–182.
McKay, R.J., 1977. Variable selection in multivariate regression: an application of simultaneous test procedures. Journal of the Royal Statistical Society: Series B 39, 371–380.
Miettinen, K.M., 1999. Nonlinear Multiobjective Optimization. Kluwer Academic Publishers, Boston.
Myers, R.H., Montgomery, D.C., 1995. Response Surface Methodology. John Wiley & Sons, New York.
Oppenheimer, K.R., 1978. A proxy approach to multi-attribute decision-making. Management Science 24, 675–689.
Roy, A., Wallenius, J., 1992. Nonlinear multiple objective optimization: an algorithm and some theory. Mathematical Programming 55, 235–249.
Sadagopan, S., Rivindran, A., 1986. Interactive algorithms for multiple criteria nonlinear programming problems. European Journal of Operational Research 25, 247–257.
Shin, W.S., Ravindran, A., 1991. Interactive multiple objective optimization: survey I – continuous case. Computers and Operations Research 18, 97–114.
Steuer, R.E., 1986. Multiple Criteria Optimization: Theory, Computation, and Application. John Wiley & Sons, New York.
Tversky, A., Thaler, R.H., 1990. Anomalies: preference reversals. Journal of Economic Perspectives 4, 201–211.
Wallenius, J., Dyer, J.S., Fishburn, P.C., Steuer, R.E., Zionts, S., Deb, K., 2008. Multiple criteria decision making, multiattribute utility theory: recent accomplishments and what lies ahead. Management Science 54, 1336–1349.
Zeleny, M., 1982. Multiple Criteria Decision Making. McGraw-Hill, New York.
Zionts, S., Wallenius, J., 1976. An interactive programming method for solving the multiple criteria problem. Management Science 22, 652–663.
Zionts, S., Wallenius, J., 1980. Identifying efficient vectors: some theory and computational results. Operations Research 28, 785–793.
Zionts, S., Wallenius, J., 1983. An interactive multiple objective linear programming method for a class of underlying nonlinear utility functions. Management Science 29, 519–529.