An interactive dynamic approach based on hybrid swarm optimization for solving multiobjective programming problem with fuzzy parameters


Applied Mathematical Modelling 38 (2014) 2000–2014

Contents lists available at ScienceDirect

Applied Mathematical Modelling journal homepage: www.elsevier.com/locate/apm

M.A. Abo-Sinna a, Y. Yousria Abo-Elnaga b,c, A.A. Mousa d,e,*

a Department of Mathematics, Faculty of Science, Princess Nora Bint Abdul Rahman University, Riyadh, Saudi Arabia
b Department of Mathematics, Faculty of Sciences, Tebah University, Saudi Arabia
c Department of Basic Science, Higher Technological Institute, Tenth of Ramadan City, Egypt
d Department of Basic Engineering Science, Faculty of Engineering, Menoufia University, Shebin El-Kom, Egypt
e Department of Mathematics and Statistics, Faculty of Sciences, Taif University, Saudi Arabia

Article info

Article history: Received 2 December 2011; Received in revised form 7 September 2013; Accepted 8 October 2013; Available online 25 October 2013

Keywords: Multiobjective programming; Dynamic programming; Swarm optimization; Genetic algorithm

Abstract: Real engineering design problems are generally characterized by the presence of many, often conflicting and incommensurable, objectives. Naturally, these objectives involve many parameters whose possible values may be assigned by experts. The aim of this paper is to introduce a hybrid approach combining three optimization techniques: dynamic programming (DP), genetic algorithms (GA) and particle swarm optimization (PSO). Our approach integrates the merits of both DP and artificial optimization techniques and has two characteristic features. Firstly, the proposed algorithm converts the fuzzy multiobjective optimization problem into a sequence of crisp nonlinear programming problems. Secondly, the proposed algorithm uses a hybrid swarm optimization algorithm (H-SOA) for solving each nonlinear programming problem; hence any problem possessing the required structure can be solved without demanding properties, such as differentiability and continuity, that traditional methods require. Finally, with different degrees α we obtain different α-Pareto optimal solutions of the problem. A numerical example is given to illustrate the results developed in this paper. © 2013 Elsevier Inc. All rights reserved.

1. Introduction

Multiobjective optimization (MOO) is an important research topic for both scientists and engineers. In MOO, a set of non-dominated solutions is usually produced instead of a single recommended solution. According to the concept of dominance, also referred to as Pareto optimality, a solution to a MOO problem is non-dominated, or Pareto optimal, if no objective can be improved without worsening at least one other objective. Naturally, the objective functions and constraints involve many parameters whose possible values may be assigned by experts. In the traditional approaches, such parameters are fixed at some values in an experimental and/or subjective manner through the experts' understanding of the nature of the parameters. In practice, however, the possible values of these parameters are often only ambiguously known to the experts, and it is natural to represent such fuzzy numerical data by means of fuzzy subsets [1] of the real line known as fuzzy numbers. Recently, Sakawa [2] introduced the concept of α-multiobjective programming based on the α-level sets of the fuzzy numbers, and several interactive decision-making methods were introduced

* Corresponding author at: Department of Basic Engineering Science, Faculty of Engineering, Menoufia University, Shebin El-Kom, Egypt. E-mail address: [email protected] (A.A. Mousa).
0307-904X/$ - see front matter © 2013 Elsevier Inc. All rights reserved. http://dx.doi.org/10.1016/j.apm.2013.10.013


[3]. In [4] a method was proposed for solving linear programming problems in which all the coefficients are, in general, fuzzy numbers. It offers the decision-maker (DM) the optimal solution for several different degrees of feasibility. With this information the DM is able to establish a fuzzy goal and build a fuzzy subset in the decision space whose membership function represents the balance between the feasibility degree of the constraints and the satisfaction degree of the goal. In [5] an effective particle swarm optimization algorithm was presented to find a good approximation of the Pareto frontier of the fuzzy multi-objective unrelated parallel machines scheduling problem; the proposed algorithm exploits new selection regimes for preserving global as well as personal best solutions. Moreover, a generalized dominance concept in a fuzzy environment is employed to find the locally Pareto-optimal frontier. In [6] a hybrid fuzzy multi-objective programming model including both quantitative and qualitative constraints and objectives is proposed to determine the optimal price markdown policy and aggregate production planning in a two-echelon supply chain. The model aims to maximize the total profit of the manufacturer and the total profit of the retailer while simultaneously improving service aspects of retailing. The use and development of heuristic-based optimization techniques have grown significantly. Since they use a population of solutions in their search, it is more likely that they find the global solution of a given problem. In recent years there have been many reported works focused on the hybridization of PSO with other heuristic-based optimization techniques [7–9]. In [10], the PSACO (particle swarm ant colony optimization) algorithm was proposed for highly nonconvex optimization problems. In [11], a GA was incorporated into PSO as a hybrid method combining two heuristic optimization techniques for the global optimization of nonlinear optimization problems.
In [12] an improved multi-objective particle swarm optimization algorithm was applied to cope with a stochastic multi-objective DEED problem. To enhance the overall performance and effectiveness of the particle swarm optimization, a fuzzy adaptive technique, theta-search, and a self-adaptive learning strategy for velocity updating are used to tune the inertia weight factor and to escape from local optima, respectively. In [13], a hybrid intelligent algorithm combining particle swarm optimization (PSO) with a chaos searching technique (CST) is presented for solving nonlinear bilevel programming problems, where particles at the front of the list are updated by PSO, while particles at the end of the list are updated by CST. The CST is used not only to enhance the particles but also to improve the diversity of the particle swarm so as to prevent PSO from being trapped in local optima. In [14] predator–prey optimization (PPO) is used as a base-level search in the global search space and Powell's method as a local search technique; the predator–prey model is based on particle swarm optimization (PSO). In [15] a hybrid multiobjective evolutionary algorithm combining two heuristic optimization techniques was proposed. It integrates the merits of both the genetic algorithm (GA) and particle swarm optimization (PSO), and has two characteristic features. Firstly, the algorithm is initialized by a set of random particles which is flown through the search space; in order to obtain approximate nondominated solutions, an evolution of these particles is performed. Secondly, a local search (LS) scheme is implemented as a neighborhood search engine to improve the solution quality. Dynamic programming is a useful mathematical technique for making a sequence of interrelated decisions. It provides a systematic procedure for determining the optimal combination of decisions. In contrast to linear programming, there does not exist a standard mathematical formulation of "the" dynamic programming problem.
Rather, dynamic programming is a general type of approach to problem solving, and the particular equations used must be developed to fit each situation. Therefore, a certain degree of ingenuity and insight into the general structure of dynamic programming problems is required to recognize when and how a problem can be solved by dynamic programming procedures. These abilities can best be developed by exposure to a wide variety of dynamic programming applications and a study of the characteristics that are common to all these situations. A large number of illustrative examples are presented for this purpose. Dynamic programming divides the problem to be solved into a number of subproblems and then solves each subproblem in such a way that the overall solution is optimal for the original problem. Dynamic programming is applicable when the subproblems are not independent, that is, when subproblems share sub-subproblems. A dynamic programming algorithm solves every subproblem just once and then saves its answer in a table, thereby avoiding the work of recomputing the answer every time the subproblem is encountered. It is typically applied to optimization problems: in such problems there are many possible solutions, each solution has a value, and we wish to find a solution with optimal (minimum or maximum) value. We call such a solution an optimal solution to the problem. Dynamic programming has a variety of applications, as follows:

1. Bioinformatics [16].
2. Molecular biology [17].
3. Control theory [18–20].
4. Information theory [21,22].
5. Operations research [23].
6. Computer science: theory, graphics, AI [24,25].

Unfortunately, dynamic programming has often been dismissed because it suffers from "the curse of dimensionality." In fact, there are up to three curses of dimensionality: the state space, the outcome space and the action space. The term curse of dimensionality was coined by Richard E. Bellman when considering problems in dynamic optimization [26,27]. In this paper, we present an interactive approach for generating an α-Pareto optimal solution of multiple objective mathematical programming problems with fuzzy parameters which possess a certain structure [1,28]. This method is, formally, a natural extension of the work already given by Osman et al. [28], which was based on an evolutionary genetic algorithm; the present approach is based on hybrid swarm optimization (H-SOA), which integrates the merits of both GA and PSO.


Also, for the degree α (specified by the decision maker, DM), the corresponding α-Pareto optimal solution can easily be obtained interactively by solving the functional equation of DP recursively, for which the fuzzy decision model is applicable.

2. Formulation of the problem

The fuzzy multiobjective nonlinear programming problem (FMNLPP) is represented as the following fuzzy vector minimization problem (FVMP) involving fuzzy parameters in the objective functions [28]:

(FVMP):

Minimize F_k(f_{k1}(x_1, ã_1), ..., f_{kN}(x_N, ã_N)), k = 1, ..., K, (K ≥ 2),
subject to
G_m(g_{m1}(x_1), ..., g_{mN}(x_N)) ≤ 0, m = 1, ..., M,
x_n ∈ X_n, n = 1, ..., N,

where X_n is a subset of R^{l_n}, x_n is an l_n-vector, the objective functions F_k(·), k = 1, ..., K, and the constraint functions G_m(·) are real-valued functions on R^N, f_{kn}(·) and g_{mn}(·) are real-valued functions on X_n, and ã = (ã_1, ..., ã_n) represents a vector of fuzzy parameters in the objective functions. These fuzzy parameters are assumed to be characterized as fuzzy numbers. A real fuzzy number p̃ forms a convex continuous fuzzy subset of the real line (Fig. 1) whose membership function μ_p̃(p) is defined by:

(1) a continuous mapping from R^1 to the closed interval [0, 1];
(2) μ_p̃(p) = 0 for all p ∈ (-∞, p_1];
(3) strictly increasing on [p_1, p_2];
(4) μ_p̃(p) = 1 for all p ∈ [p_2, p_3];
(5) strictly decreasing on [p_3, p_4];
(6) μ_p̃(p) = 0 for all p ∈ [p_4, +∞).

Assume that the parameters ã = (ã_1, ..., ã_n) in the (FVMP) are fuzzy numbers whose membership functions are μ_ã(a).
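As a concrete illustration of conditions (1)–(6), the following sketch (not from the paper) implements a trapezoidal fuzzy number with linear sides and the closed-form interval on which its membership exceeds a level α; the quadratic-sided shape used in the numerical example of Section 6 behaves analogously.

```python
# Sketch: a trapezoidal fuzzy number p~ = (p1, p2, p3, p4) with linear
# sides satisfies conditions (1)-(6); the shape is illustrative only.
def trapezoid_mu(a, p1, p2, p3, p4):
    """Membership: 0 outside (p1, p4), rising on [p1, p2],
    1 on the plateau [p2, p3], falling on [p3, p4]."""
    if a <= p1 or a >= p4:
        return 0.0
    if a < p2:
        return (a - p1) / (p2 - p1)
    if a <= p3:
        return 1.0
    return (p4 - a) / (p4 - p3)

def alpha_cut(alpha, p1, p2, p3, p4):
    """Interval {a : mu(a) >= alpha} for the linear trapezoid, 0 < alpha <= 1."""
    return (p1 + alpha * (p2 - p1), p4 - alpha * (p4 - p3))

lo, hi = alpha_cut(0.5, 1, 2, 4, 5)   # the interval [1.5, 4.5]
```

Raising α shrinks the cut toward the plateau [p_2, p_3], which is exactly the mechanism the interactive method later uses to tighten the parameter bounds.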

Definition 1 ([1] α-level set). The α-level set or α-cut of the fuzzy numbers ã = (ã_1, ..., ã_n) is defined as the ordinary set L_α(ã_n) on which the degree of their membership functions exceeds the level α ∈ [0, 1]:

L_α(ã_n) = {a | μ_ã(a_n) ≥ α}.

For a certain degree α, the (FVMP) can be represented as the following nonfuzzy α-vector minimization problem (α-VMP).

(α-VMP):

Minimize F_k(f_{k1}(x_1, a_1), ..., f_{kN}(x_N, a_N)), k = 1, ..., K, (K ≥ 2),
subject to
G_m(g_{m1}(x_1), ..., g_{mN}(x_N)) ≤ 0, m = 1, ..., M,
x_n ∈ X_n, a_n ∈ L_α(ã_n), n = 1, ..., N.

It should be emphasized here that the parameters a in the (α-VMP) are treated as decision variables rather than constants. In the following, definitions of separability and monotonicity of the problem functions are given and the nth subproblem is defined. Moreover, the concept of an α-Pareto optimal solution to the (α-VMP) is also defined.

Definition 2 [29] (Separability and monotonicity). The objective function F_k is said to be separable if there exist functions F^n_k, n = 1, ..., N, defined on R^n, and functions φ^n_k, n = 2, ..., N, defined on R^2, satisfying, for n = 2, ..., N,

F^n_k(f_{k1}(x_1, a_1), ..., f_{kn}(x_n, a_n)) = φ^n_k[F^{n-1}_k(f_{k1}(x_1, a_1), ..., f_{k,n-1}(x_{n-1}, a_{n-1})), f_{kn}(x_n, a_n)]
and
F^N_k(f_{k1}(x_1, a_1), ..., f_{kN}(x_N, a_N)) = F_k(f_{k1}(x_1, a_1), ..., f_{kN}(x_N, a_N)).

Fig. 1. Fuzzy number p̃.


Similarly, the constraint function G_m is separable if there exist functions G^n_m, n = 1, ..., N, defined on R^n, and ψ^n_m, n = 2, ..., N, defined on R^2, satisfying, for n = 2, ..., N,

G^n_m(g_{m1}(x_1), ..., g_{mn}(x_n)) = ψ^n_m[G^{n-1}_m(g_{m1}(x_1), ..., g_{m,n-1}(x_{n-1})), g_{mn}(x_n)]
and
G^N_m(g_{m1}(x_1), ..., g_{mN}(x_N)) = G_m(g_{m1}(x_1), ..., g_{mN}(x_N)).

If all objective and constraint functions are separable, we say that (α-VMP) is separable. Moreover, the functions φ^n_k and ψ^n_m are called the separating functions of F and G. Furthermore, the separation of (α-VMP) is said to be monotone if all functions φ^n_k and ψ^n_m are strictly increasing with respect to the first argument for each fixed second argument. Specifically, for each y ∈ R^N,

φ^n_k(z, y) > φ^n_k(z′, y) iff z > z′, and
ψ^n_m(z, y) > ψ^n_m(z′, y) iff z > z′,

for every k = 1, ..., K, m = 1, ..., M, n = 2, ..., N.

Assumption 1. (α-VMP) is separable and the separation is monotone.

Assumption 2. For every n, X_n ∩ L_α(ã_n) is compact, and F^n_k(·), k = 1, ..., K, and G^n_m(·), m = 1, ..., M, are continuous functions.

Definition 3 [29]. Assuming the separability of (α-VMP), the nth subproblem p_n(z)_α, for each n = 1, ..., N and each z = (z_1, ..., z_M) ∈ R^M, is defined as follows:

p_n(z)_α:

Minimize F^n_k(f_{k1}(x_1, a_1), ..., f_{kn}(x_n, a_n)), k = 1, ..., K,
subject to
G^n_m(g_{m1}(x_1), ..., g_{mn}(x_n)) ≤ z_m, m = 1, ..., M,
x_n ∈ X_n, a_n ∈ L_α(ã_n), n = 1, ..., N.

It is clear that, when n = N and z = 0, the above problem coincides with (α-VMP). The nth subproblem p_n(z)_α for each z ∈ R^M is again an α-vector minimization problem.

Definition 4 (α-Pareto optimal solution). Let (x*, a*) = ((x*_1, a*_1), ..., (x*_n, a*_n)) be a feasible solution of the problem p_n(z)_α. It is called an α-Pareto optimal solution if there exists no feasible ((x_1, a_1), ..., (x_n, a_n)) such that

F^n_k(f_{k1}(x_1, a_1), ..., f_{kn}(x_n, a_n)) ≤ F^n_k(f_{k1}(x*_1, a*_1), ..., f_{kn}(x*_n, a*_n)) for all k,
and
F^n_l(f_{l1}(x_1, a_1), ..., f_{ln}(x_n, a_n)) < F^n_l(f_{l1}(x*_1, a*_1), ..., f_{ln}(x*_n, a*_n))

for at least one index l ∈ {1, ..., K}, where the corresponding values of the parameters a*_n are called α-level optimal parameters. For each n = 1, ..., N and each z ∈ R^M, the set of α-Pareto optimal solutions of problem p_n(z)_α is denoted by E_n[z]. Therefore, to solve (α-VMP) may be interpreted as finding the set E_N[0].

Theorem 1 [29]. Suppose that Assumptions 1 and 2 hold. If ((x*_1, a*_1), ..., (x*_n, a*_n)) is any α-Pareto optimal solution of problem p_n(z)_α, then ((x*_1, a*_1), ..., (x*_{n-1}, a*_{n-1})) is an α-Pareto optimal solution of problem p_{n-1}[z^{n-1}((x*_n, a*_n), z)]_α, where

z^{n-1}(x_n, a_n, z) = [z^{n-1}_1(x_n, a_n, z), ..., z^{n-1}_M(x_n, a_n, z)], and
z^{n-1}_m(x_n, a_n, z) = sup{ξ ∈ R : ψ^n_m(ξ, g_{mn}(x_n)) ≤ z_m}, m = 1, ..., M, a_n ∈ L_α(ã_n), n = 1, ..., N.

Theorem 2 [29]. Suppose that Assumptions 1 and 2 hold. Let ((x*_1, a*_1), ..., (x*_n, a*_n)) be an α-Pareto optimal solution of the following α-multiobjective programming problem:

(α-MOP): Min F^n_k(f_{k1}(x_1, a_1), ..., f_{kn}(x_n, a_n)), k = 1, ..., K,
subject to
((x_1, a_1), ..., (x_n, a_n)) ∈ ∪ {E_{n-1}[z^{n-1}((x_n, a_n), z)] × {(x_n, a_n)} : x_n ∈ X_n, a_n ∈ L_α(ã_n), n = 1, ..., N}.

Then ((x*_1, a*_1), ..., (x*_n, a*_n)) constitutes an α-Pareto optimal solution of problem p_n(z)_α. Here E_{n-1}[z^{n-1}((x_n, a_n), z)] is the set of all α-Pareto optimal solutions of problem p_{n-1}[z^{n-1}((x_n, a_n), z)]_α.


Theorem 3 [29]. Suppose that Assumptions 1 and 2 hold. Then the following recursive relation holds for n = 2, ..., N:

Q_n(z) = minimal_{x_n ∈ X_n, a_n ∈ L_α(ã_n)} φ^n(Q_{n-1}[z^{n-1}((x_n, a_n), z)], f_n(x_n, a_n)),   (1)

where f_n(x_n, a_n) = [f_{1n}(x_n, a_n), ..., f_{kn}(x_n, a_n)]^T, Q_n(z) is the image of the set E_n(z) under the mapping F^n(f_1, ..., f_n) = [F^n_1, ..., F^n_L], and the right-hand side is understood to be the set of minimal points of the set

Y = {y ∈ R^K : y = φ^n(v, f_n(x_n, a_n)), v ∈ Q_{n-1}[z^{n-1}((x_n, a_n), z)], x_n ∈ X_n, a_n ∈ L_α(ã_n)}.

3. Fuzzy decision model (FDM)

To find an α-Pareto optimal solution of (α-VMP), we may use the dynamic programming principle to decompose (α-VMP) into subproblems [p_n(z)_α, n = 1, ..., N] and use the fuzzy decision model (FDM) for finding the α-Pareto optimal solution of each subproblem p_n(z)_α. The fuzzy version of problem p_n(z)_α is defined as follows:

F^n_k(f_{k1}(x_1, a_1), ..., f_{kn}(x_n, a_n)) ≲ F̃_k, k = 1, ..., K,
G^n_m(g_{m1}(x_1), ..., g_{mn}(x_n)) ≲ Z̃_m, m = 1, ..., M,   (2)

where F̃_k, Z̃_m denote aspiration levels of the decision maker, and x_1 ∈ X_1, ..., x_n ∈ X_n, a_1 ∈ L_α(ã_1), ..., a_n ∈ L_α(ã_n). Each of the (K + M) inequalities of (2) shall now be represented by a fuzzy set whose membership function is μ_i, such that

μ_i[F^n_k(f_{k1}(x_1, a_1), ..., f_{kn}(x_n, a_n)), G^n_m(g_{m1}(x_1), ..., g_{mn}(x_n))] = μ_i(H^n_i((x_1, a_1), ..., (x_n, a_n))) = 0   (3)

if the ith constraint of (2) is strongly violated, i.e., H^n_i((x_1, a_1), ..., (x_n, a_n)) > b_i + d_i;

0 < μ_i[H^n_i((x_1, a_1), ..., (x_n, a_n))] < 1, if b_i < H^n_i((x_1, a_1), ..., (x_n, a_n)) < b_i + d_i;   (4)

and:

μ_i[H^n_i((x_1, a_1), ..., (x_n, a_n))] = 1,   (5)

if H^n_i((x_1, a_1), ..., (x_n, a_n)) ≤ b_i. Here H = [H^n_i], i = 1, ..., K + M, is the vector formed by stacking the vectors [F^n_k, k = 1, ..., K] and [G^n_m, m = 1, ..., M]; i indicates the ith inequality of H^n_i; b_i is the right-hand side of the ith inequality; and d_i is a subjectively chosen constant expressing the limit of the admissible violation of the ith inequality. For example, using the following linear membership function:

μ_i[H^n_i((x_1, a_1), ..., (x_n, a_n))] =
  1,  for H^n_i((x_1, a_1), ..., (x_n, a_n)) ≤ b_i,
  1 - (H^n_i((x_1, a_1), ..., (x_n, a_n)) - b_i)/d_i,  for b_i ≤ H^n_i((x_1, a_1), ..., (x_n, a_n)) ≤ b_i + d_i,
  0,  for H^n_i((x_1, a_1), ..., (x_n, a_n)) > b_i + d_i.   (6)
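A sketch of the linear membership function (6), together with the max–min selection it feeds into in (7) and (8) below; the aspiration levels b_i, tolerances d_i and candidate set here are illustrative values, not taken from the paper.

```python
def mu_linear(H, b, d):
    """Eq. (6): 1 if H <= b, falling linearly to 0 over (b, b + d], 0 beyond."""
    if H <= b:
        return 1.0
    if H <= b + d:
        return 1.0 - (H - b) / d
    return 0.0

# The solution-set membership is the worst mu_i; the maximizing decision
# picks the candidate whose worst membership is largest.
def maximin(candidates, constraints):
    """constraints: list of (H_i, b_i, d_i) triples, H_i a function of x."""
    def worst(x):
        return min(mu_linear(H(x), b, d) for H, b, d in constraints)
    return max(candidates, key=worst)

cons = [(lambda x: x, 2.0, 4.0),          # H_1 = x,     b_1 = 2, d_1 = 4
        (lambda x: 6.0 - x, 1.0, 4.0)]    # H_2 = 6 - x, b_2 = 1, d_2 = 4
best = maximin([0.0, 2.0, 3.0, 5.0], cons)   # best = 3.0
```

Pushing H_1 down raises μ_1 but lowers μ_2, so the maximin choice balances the two violations rather than fully satisfying either one.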

The membership function of the solution set is then

μ[H^n((x_1, a_1), ..., (x_n, a_n))] = min_i μ_i[H^n_i((x_1, a_1), ..., (x_n, a_n))] ≥ 0,   (7)

and the maximizing decision is

max_{(X^n, A^n)} min_i μ_i[H^n_i((x_1, a_1), ..., (x_n, a_n))],   (8)

where (X^n, A^n) = ((x_1, a_1), ..., (x_n, a_n)).

Theorem 4 [29]. If ((x*_1, a*_1), ..., (x*_n, a*_n)) ∈ R^n is a solution of problem (2)–(8), then it is an α-Pareto optimal solution of p_n(z)_α.

Theorem 5. If ((x*_1, a*_1), ..., (x*_n, a*_n)) ∈ R^n is a unique solution of problem (2)–(8), then ((x*_1, a*_1), ..., (x*_n, a*_n)) ∈ R^n is an α-Pareto optimal solution of p_n(z)_α.

Following the fuzzy decision of Bellman and Zadeh [30], together with the linear membership functions, the problem of finding the maximum decision is to choose (X^n, A^n) such that:

μ_D(X^n, A^n) = max_{(X^n, A^n)} min_{i=0,...,K+M} μ_i[H^n_i((x_1, a_1), ..., (x_n, a_n))].

In other words, the problem is to find the (X^n, A^n) which maximizes the minimum membership function value. By introducing an auxiliary variable λ, problem (2)–(8) takes the form of the Tchebycheff approach and can be transformed into the following equivalent form:


(λ-NLP):

Maximize λ
subject to λ ≤ b̄_i - H̄^n_i((x_1, a_1), ..., (x_n, a_n)), i = 1, ..., (K + M),   (9)

where H̄^n_i = H^n_i / d_i and b̄_i = b_i / d_i; this is a nonlinear programming problem. This problem, (λ-NLP), can be solved by using the hybrid swarm optimization algorithm (H-SOA) to obtain the α-Pareto optimal solution (λ*, x*_n, a*_n).

4. Hybrid swarm optimization algorithm (H-SOA)

The proposed approach integrates the merits of both GA and PSO. In this section, firstly, the nonlinear programming problem (NLPP) is described briefly; secondly, we present the proposed PSO based on GA algorithm.

4.1. Nonlinear programming problem (NLPP)

Any population-based technique applied to a particular problem should address the issue of handling infeasible individuals. In general, a search space S consists of two disjoint subsets, the feasible and infeasible subspaces F and U, respectively. We do not make any assumptions about these subspaces; in particular, they need not be convex and they need not be connected (as is the case in the example in Fig. 2, where the feasible part F of the search space consists of two disjoint subsets). The general nonlinear programming problem [31] for continuous variables is to find x so as to:

Min f(x), x = (x_1, ..., x_n) ∈ R^n,   (10)

where x ∈ F ⊆ S. The set S ⊆ R^n defines the search space and the set F ⊆ S defines the feasible part of the search space. Usually, the search space S is defined as an n-dimensional rectangle in R^n (domains of variables defined by lower and upper bounds): left(i) ≤ x_i ≤ right(i), 1 ≤ i ≤ n, whereas the feasible set F is defined by the search space S and an additional set of constraints:

g_j(x) ≤ 0, for j = 1, ..., m.   (11)

Thus the nonlinear programming problem (NLPP) can be defined as follows:

NLPP: Min f(x)
s.t. F = {x ∈ R^n | g_i(x) ≤ 0, i = 1, 2, ..., k, and h_j(x) = 0, j = k + 1, ..., m},
S = {x ∈ R^n | l(x_i) ≤ x_i ≤ u(x_i), i = 1, 2, ..., n}.   (12)
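The sets S and F of Eq. (12) translate directly into a feasibility check; the bounds and constraint in the sketch below are illustrative, not from the paper.

```python
def in_search_space(x, lower, upper):
    """x in S: box bounds l(x_i) <= x_i <= u(x_i) of Eq. (12)."""
    return all(l <= xi <= u for xi, l, u in zip(x, lower, upper))

def is_feasible(x, lower, upper, ineqs=(), eqs=(), tol=1e-9):
    """x in F: inside S with g_i(x) <= 0 and h_j(x) = 0 (to tolerance)."""
    return (in_search_space(x, lower, upper)
            and all(g(x) <= tol for g in ineqs)
            and all(abs(h(x)) <= tol for h in eqs))

# Illustrative instance: S = [0, 4]^2 with one inequality g(x) = x1 + x2 - 4.
g = lambda x: x[0] + x[1] - 4
print(is_feasible((1.0, 2.0), (0, 0), (4, 4), ineqs=[g]))   # True
print(is_feasible((3.0, 2.0), (0, 0), (4, 4), ineqs=[g]))   # False
```

A check like this is what the repair and constriction steps of the proposed algorithm query each time a particle moves.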

4.2. The proposed PSO based on GA algorithm

In the proposed approach, the algorithm is initialized with a population of random solutions and searches for optima by traveling through the search space. During this travel, an evolution of the solutions is performed by integrating PSO and GA. The description diagram of the proposed algorithm is shown in Fig. 3, and it is described as follows:

Fig. 2. A search space and its feasible part.


Fig. 3. Diagram model for particles movement.

Step 1. Initialization: Initialize a population of particles with random positions and velocities in n dimensions in the problem space.
Step 2. Evaluation: Evaluate the desired optimization fitness function in n variables for each particle.
Step 3. Initializing Pbest and Gbest: Set the Pbest of each particle and its objective value equal to its current position and objective value, and set Gbest and its objective value equal to the position and objective value of the best initial particle.
Step 4. Updating the velocity and position: Update the velocity and position of each particle according to the velocity and position update equations.
Step 5. Evolution of particles: To restrict and control the velocity, some authors [32,33] use a constriction factor χ, which has a constant value, to improve the performance of PSO. We present a modified constriction factor (i.e., a dynamic constriction factor) to keep the particles feasible. For example, Fig. 4 shows the movement of particle i through the search space with and without the modified factor, where particle i starts at position x_i^k with velocity v_i^k in the feasible space; the new position x_i^{k+1} in Fig. 4 depends on the velocity v_i^{k+1}:

x_i^{k+1} = x_i^k + v_i^{k+1}.   (13)

Then, if v_i^{k+1} makes the particle lose its feasibility, we introduce a new modified factor χ such that:

χ = 2 / |2 - s - √(s² + s)|,   (14)

Fig. 4. The movement of the particle i through search space.


where s is the age of the infeasible particle (i.e., how long it has remained infeasible). The new modified position of the particle is then computed as:

x_i^{k+1} = x_i^k + χ v_i^{k+1}.   (15)
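A sketch of the dynamic constriction factor (14) and the repaired move (15); treating s as a simple count of failed trials is an illustrative reading of the scheme, not the authors' exact bookkeeping.

```python
import math

def chi(s):
    """Eq. (14): constriction factor driven by s, the age (number of
    failed trials) of an infeasible particle."""
    return 2.0 / abs(2.0 - s - math.sqrt(s * s + s))

def repaired_position(x, v, s):
    """Eq. (15): retry the move with the damped velocity chi(s) * v."""
    c = chi(s)
    return [xi + c * vi for xi, vi in zip(x, v)]
```

For s ≥ 2 the factor falls below 1 and keeps shrinking as s grows, so each failed trial damps the velocity further until the particle re-enters the feasible region.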

For each particle we check its feasibility; if it is infeasible, we apply the χ parameter to control its position and velocity using Eq. (15), where s is increased with the number of failed trials to restore the feasibility of the particle.
Step 6. Evaluation: Evaluate the desired optimization fitness function in n variables for each particle.
Step 7. Updating Pbest and Gbest: For each particle, compare its current objective value with the objective value of its Pbest. If the current value is better, then update Pbest and its objective value with the current position and objective value. Determine the best particle of the current swarm; if its objective value is better than the objective value of Gbest, then update Gbest and its objective value with the position and objective value of the current best particle.
Step 8. Ranking: Rank the individuals (particles) according to their objective values, and return a column vector containing the corresponding individual fitness values.
Step 9. Selection: Selection is an operator that selects two parent strings for generating new strings (i.e., offspring). In the selection, a string with a low fitness value has more chance to be selected as a parent than a string with a high fitness value. Parents are therefore not chosen by sheer random choice; the fitness value of each string is utilized in selecting the parent strings.
Step 10. Crossover: Crossover is an operator that generates new strings (i.e., offspring) from parent strings. Various crossover operators have been proposed for GAs.

Fig. 5. The pseudo code of the proposed algorithm.

Step 11. Mutation: Mutation is an operator that changes elements in a string generated by a crossover operator. Such a mutation operator can be viewed as a transition from a current solution to a neighborhood solution in local search algorithms [15].
Step 12. Elitist strategy (replacing): Randomly remove a string from the current population and add the best string of the previous population to the current one.
Step 13. Repairing: Repair the infeasible individuals of the population to be feasible. The idea of this technique is to separate the feasible individuals in a population from the infeasible ones by repairing the infeasible individuals. This approach co-evolves the population of infeasible individuals until they become feasible; the reader is referred to [7].

The pseudo code of the proposed algorithm is shown in Fig. 5.
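Steps 1–13 can be condensed into a toy sketch on an unconstrained test function. The selection, crossover, and mutation operators below are simplified stand-ins for the authors' operators, the repair step is omitted because the example is unconstrained, and the parameter values are illustrative; only the overall PSO-plus-GA loop structure follows the description above.

```python
import random

def hybrid_swarm(f, dim, n=20, iters=100, lo=-5.0, hi=5.0,
                 w=0.7, c1=1.5, c2=1.5):
    X = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n)]
    V = [[0.0] * dim for _ in range(n)]
    P = [x[:] for x in X]                      # Pbest positions (Step 3)
    g = min(P, key=f)[:]                       # Gbest (Step 3)
    for _ in range(iters):
        for i in range(n):                     # PSO phase (Steps 4-7)
            for d in range(dim):
                V[i][d] = (w * V[i][d]
                           + c1 * random.random() * (P[i][d] - X[i][d])
                           + c2 * random.random() * (g[d] - X[i][d]))
                X[i][d] = min(hi, max(lo, X[i][d] + V[i][d]))
            if f(X[i]) < f(P[i]):
                P[i] = X[i][:]
        # GA phase (Steps 8-12): rank Pbests, recombine the two best, mutate
        order = sorted(range(n), key=lambda i: f(P[i]))
        i0, i1, iw = order[0], order[1], order[-1]
        child = [(a + b) / 2 for a, b in zip(P[i0], P[i1])]   # crossover
        child = [min(hi, max(lo, c + random.gauss(0, 0.1))) for c in child]
        if f(child) < f(P[iw]):                # elitist replacement
            P[iw] = child[:]
        g = min(P, key=f)[:]
    return g

random.seed(1)
best = hybrid_swarm(lambda x: sum(xi * xi for xi in x), dim=2)
```

On this sphere function the GA phase mainly accelerates exploitation; the PSO phase alone would also converge, which is why the hybrid is most useful on the multimodal, nondifferentiable problems targeted by the paper.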

5. Hybrid swarm-dynamic programming algorithm (HSDPA)

In this section, we summarize the HSDPA for solving multiobjective programming problems with fuzzy parameters. The solution procedure of the proposed algorithm can be summarized in the following steps:


Fig. 6. Flowchart of HSDPA.


STEP 0: Formulate a fuzzy vector minimization problem (FVMP).
STEP 1: Interact with the decision maker (DM) to elicit a membership function for each of the fuzzy numbers (ã_1, ..., ã_n) in problem (FVMP).
STEP 2: Ask the DM to select the initial value of α ∈ [0, 1].
STEP 3: Formulate an α-vector minimization problem (α-VMP).
STEP 4: Assuming the separability of (α-VMP), decompose it into subproblems p_n(z)_α, n = 1, 2, ..., N, by using DP.
STEP 5: Use H-SOA to find a satisfying solution (λ*, x*_n, a*_n) of the problem p_n(z)_α by using the fuzzy decision model (2)–(8), which is an α-Pareto optimal solution of the original (FVMP).
STEP 6: If n = 1, stop; otherwise, obtain the subproblems p_{n-1}[z^{n-1}((x*_n, a*_n), z)]_α, 2 ≤ n ≤ N, by using the functional equation of DP (1).
STEP 7: Solve p_{n-1}[z^{n-1}((x*_n, a*_n), z)]_α by using H-SOA to find an α-Pareto optimal solution (λ*, x*_{n-1}, a*_{n-1}) by using the fuzzy decision model (2)–(8).
STEP 8: If n = N, go to step 9. Otherwise go to step 4.
STEP 9: Interact with the decision maker (DM); if he/she is satisfied with the obtained solution, then stop. Otherwise, ask the decision maker to update the degree α and return to step 2.
STEP 10: Stop.

It must be observed here that, if the DM is not satisfied with the current value of the degree α of the α-Pareto optimal solution, it is possible for the DM to choose another value of α ∈ [0, 1]; our problem is then converted to solving a sequence of nonlinear programming problems ((λ-NLP)_n, (λ-NLP)_{n-1}, ..., (λ-NLP)_1). For each value of α, the bounds of the decision variables a change, reducing or enlarging the search space of the NLPP. Fig. 6 shows the flowchart of the hybrid swarm-dynamic programming algorithm (HSDPA).

6. Numerical example

STEP 0. Formulate a fuzzy vector minimization problem. Consider the following three-objective nonlinear programming problem with fuzzy parameters in the objective functions, which is represented as the following fuzzy vector minimization problem.
(FVMP):

Minimize f_1(x, ã_1) = (x_1 - ã_11)² + x_2² + x_3²
Minimize f_2(x, ã_2) = (x_1 - 1)² + (x_2 - ã_22)² + (x_3 - 2)²
Minimize f_3(x, ã_3) = 2x_1 + x_2² + (x_3 - ã_33)²
subject to
M = {x ∈ R³ | x_1 + x_2 + x_3 ≤ 4, x_j ≥ 0, j = 1, 2, 3}.
STEP 1. Elicit a membership function for each of the fuzzy numbers (ã_1, ..., ã_n) in problem (FVMP). Let the fuzzy parameters be characterized by the following numbers:

ã_11 = (1, 2, 4, 5), ã_22 = (0, 1, 3, 5), ã_33 = (3, 5, 9, 10).

Assume that the membership function for each fuzzy number ã in problem (FVMP) is defined by:

μ_ã(a) =
  0,  for -∞ < a ≤ p_1,
  1 - ((a - p_2)/(p_1 - p_2))²,  for p_1 ≤ a ≤ p_2,
  1,  for p_2 ≤ a ≤ p_3,
  1 - ((a - p_3)/(p_4 - p_3))²,  for p_3 ≤ a ≤ p_4,
  0,  for p_4 ≤ a < ∞.
STEP 2 (α-level sets). Consider the α-cuts of the fuzzy numbers, given by:

μ_ã11(a_11) ≥ 0.36, then we get 1.2 ≤ a_11 ≤ 4.8;
μ_ã22(a_22) ≥ 0.36, then we get 0.2 ≤ a_22 ≤ 4.6;
μ_ã33(a_33) ≥ 0.36, then we get 3.4 ≤ a_33 ≤ 9.8.

STEP 3. Construct a nonfuzzy α-vector minimization problem


($\alpha$-VMP):

Minimize $\{f_1(x, a_1),\ f_2(x, a_2),\ f_3(x, a_3)\}$

subject to

$$S = \{x \in R^3,\ a_{kn} \in L_\alpha(\tilde a_{kn}) \mid x_1 + x_2 + x_3 \le 4,\ 1.2 \le a_{11} \le 4.8,\ 0.2 \le a_{22} \le 4.6,\ 3.4 \le a_{33} \le 9.8,\ x_1, x_2, x_3 \ge 0,\ k = 1, 2, 3,\ n = 1, 2, 3\},$$

where

$$f_1(x, a_1) = (x_1 - a_{11})^2 + x_2^2 + x_3^2,\quad f_2(x, a_2) = (x_1 - 1)^2 + (x_2 - a_{22})^2 + (x_3 - 2)^2,\quad f_3(x, a_3) = 2x_1 + x_2^2 + (x_3 - a_{33})^2.$$

STEP 4. Decompose ($\alpha$-VMP) into the following subproblems:

$$P_1[z_1((x_2, a_{22}), z_2(x_3, a_{33}, 4))]_\alpha:\quad \text{Minimize } [(x_1 - a_{11})^2,\ (x_1 - 1)^2,\ 2x_1]$$

subject to

$$S_1 = \{x_1 \in R^1,\ a_{11} \in L_\alpha(\tilde a_{11}) \mid x_1 \le [z_1((x_2, a_{22}), z_2(x_3, a_{33}, 4))],\ x_1 \ge 0\} = \{x_1 \in R^1,\ a_{11} \in L_\alpha(\tilde a_{11}) \mid x_1 \le 4,\ 1.2 \le a_{11} \le 4.8,\ x_1 \ge 0\};$$

$$P_2[z_2(x_3, a_{33}, 4)]_\alpha:\quad \text{Minimize } [(x_1 - a_{11})^2 + x_2^2,\ (x_1 - 1)^2 + (x_2 - a_{22})^2,\ 2x_1 + x_2^2]$$

subject to

$$S_2 = \{x_1, x_2 \in R^2,\ a_{22} \in L_\alpha(\tilde a_{22}) \mid x_1 + x_2 \le [z_2(x_3, a_{33}, 4)],\ x_1, x_2 \ge 0\} = \{x_1, x_2 \in R^2,\ a_{22} \in L_\alpha(\tilde a_{22}) \mid x_1 + x_2 \le 4,\ 0.2 \le a_{22} \le 4.6,\ x_1, x_2 \ge 0\};$$

$$P_3(4)_\alpha:\quad \text{is } (\alpha\text{-VMP}).$$

STEP 5. Find an $\alpha$-Pareto optimal solution of $P_3(4)_\alpha$. From the functional equation of DP (1), we obtain the following problem:

$$Q_3(4) = \text{Minimal } [(x_1 - a_{11})^2 + x_2^2 + x_3^2,\ (x_1 - 1)^2 + (x_2 - a_{22})^2 + (x_3 - 2)^2,\ 2x_1 + x_2^2 + (x_3 - a_{33})^2],\quad x_3 \in S,\ a_{33} \in L_\alpha(\tilde a_{33}),$$

where the individual minima and maxima over $S$ are

$$\begin{aligned}
\text{Minimize } [(x_1 - a_{11})^2 + x_2^2 + x_3^2] &= (x_1 - a_{11})^2 + x_2^2, & x_3 \in S,\ a_{11} \in L_\alpha(\tilde a_{11}),\\
\text{Maximize } [(x_1 - a_{11})^2 + x_2^2 + x_3^2] &= (x_1 - a_{11})^2 + x_2^2 + 16, & x_3 \in S,\ a_{11} \in L_\alpha(\tilde a_{11}),\\
\text{Minimize } [(x_1 - 1)^2 + (x_2 - a_{22})^2 + (x_3 - 2)^2] &= (x_1 - 1)^2 + (x_2 - a_{22})^2, & x_3 \in S,\ a_{22} \in L_\alpha(\tilde a_{22}),\\
\text{Maximize } [(x_1 - 1)^2 + (x_2 - a_{22})^2 + (x_3 - 2)^2] &= (x_1 - 1)^2 + (x_2 - a_{22})^2 + 4, & x_3 \in S,\ a_{22} \in L_\alpha(\tilde a_{22}),\\
\text{Minimize } [2x_1 + x_2^2 + (x_3 - a_{33})^2] &= 2x_1 + x_2^2, & x_3 \in S,\ a_{33} \in L_\alpha(\tilde a_{33}),\\
\text{Maximize } [2x_1 + x_2^2 + (x_3 - a_{33})^2] &= 2x_1 + x_2^2 + a_{33}^2, & x_3 \in S,\ a_{33} \in L_\alpha(\tilde a_{33}),
\end{aligned}$$

so that the membership functions of the three objectives are

$$\mu_1(x_3) = \begin{cases} 1, & x_3^2 \le 0,\\ 1 - \frac{x_3^2}{16}, & 0 \le x_3^2 \le 16,\\ 0, & x_3^2 \ge 16, \end{cases} \qquad
\mu_2(x_3) = \begin{cases} 1, & (x_3 - 2)^2 \le 0,\\ 1 - \frac{(x_3 - 2)^2}{4}, & 0 \le (x_3 - 2)^2 \le 4,\\ 0, & (x_3 - 2)^2 \ge 4, \end{cases} \qquad
\mu_3(x_3) = \begin{cases} 1, & (x_3 - a_{33})^2 \le 0,\\ 1 - \frac{(x_3 - a_{33})^2}{a_{33}^2}, & 0 \le (x_3 - a_{33})^2 \le a_{33}^2,\\ 0, & (x_3 - a_{33})^2 \ge a_{33}^2. \end{cases}$$
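These stage-3 membership functions can be written out directly from the individual minima and maxima (an illustrative sketch only; the clipping to $[0,1]$ via `max`/`min` mirrors the piecewise definitions):

```python
# Stage-3 membership functions built from the minima/maxima above.
def mu1(x3):
    return max(0.0, min(1.0, 1 - x3 ** 2 / 16))        # range of x3^2 over S is [0, 16]

def mu2(x3):
    return max(0.0, min(1.0, 1 - (x3 - 2) ** 2 / 4))   # range of (x3 - 2)^2 is [0, 4]

def mu3(x3, a33):
    return max(0.0, min(1.0, 1 - (x3 - a33) ** 2 / a33 ** 2))  # range [0, a33^2]
```

Each function equals 1 at the objective's individual minimum and falls to 0 at its individual maximum, as in the fuzzy decision model.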

To solve $Q_3(4)$, we formulate the following problem $(\lambda\text{-NLP})_3$:


$$P_3:\quad \text{Min } f(\lambda) = -\lambda$$

subject to

$$16\lambda \le 16 - x_3^2,\quad 4\lambda \le 4 - (x_3 - 2)^2,\quad a_{33}^2 \lambda \le a_{33}^2 - (x_3 - a_{33})^2,\quad x_1 + x_2 + x_3 \le 4,\quad 3.4 \le a_{33} \le 9.8,\quad x_1, x_2, x_3 \ge 0.$$

Using H-SOA to find a satisfactory solution, the $\alpha$-Pareto optimal solution of problem $P_3$ is

$$X^* = (x_1^*, x_2^*, x_3^*, a_{33}^*, \lambda^*) = (0.50, 1.65, 1.83, 3.40, 0.788),\quad f^* = 0.788.$$
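Since $(\lambda\text{-NLP})_3$ amounts to maximizing $\lambda = \min(\mu_1, \mu_2, \mu_3)$ over $x_3$ and $a_{33}$, the reported value can be cross-checked by a brute-force grid search (a sanity-check sketch, not the paper's H-SOA):

```python
# Brute-force maximin cross-check for (lambda-NLP)_3: maximize
# lambda = min(mu1, mu2, mu3) over 0 <= x3 <= 4, 3.4 <= a33 <= 9.8.
def stage3_lambda(x3, a33):
    return min(1 - x3 ** 2 / 16,
               1 - (x3 - 2) ** 2 / 4,
               1 - (x3 - a33) ** 2 / a33 ** 2)

best_lam, best_x3, best_a33 = -1.0, 0.0, 3.4
for i in range(2001):                       # x3 grid, step 0.002
    x3 = 4.0 * i / 2000
    for j in range(65):                     # a33 grid, step 0.1
        a33 = 3.4 + 6.4 * j / 64
        lam = stage3_lambda(x3, a33)
        if lam > best_lam:
            best_lam, best_x3, best_a33 = lam, x3, a33
```

The grid optimum lands near $\lambda \approx 0.789$, $x_3 \approx 1.84$, $a_{33} = 3.4$, in close agreement with the H-SOA result above.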

Then the $\alpha$-Pareto optimal solution is $\lambda^* = 0.788$, $x_3^* = 1.83$, $a_{33}^* = 3.40$.

STEP 6. Find an $\alpha$-Pareto optimal solution of $P_2[z_2(x_3, a_{33}, 4)]_\alpha$. From the functional equation of DP (1), we obtain the following problem:

$$Q_2(4) = \text{Minimal } [(x_1 - a_{11})^2 + x_2^2,\ (x_1 - 1)^2 + (x_2 - a_{22})^2,\ 2x_1 + x_2^2],\quad x_2 \in S_2,\ a_{22} \in L_\alpha(\tilde a_{22}),$$

where

$$\begin{aligned}
\text{Minimize } [(x_1 - a_{11})^2 + x_2^2] &= (x_1 - a_{11})^2, & x_2 \in S_2,\ a_{11} \in L_\alpha(\tilde a_{11}),\\
\text{Maximize } [(x_1 - a_{11})^2 + x_2^2] &= (x_1 - a_{11})^2 + 16, & x_2 \in S_2,\ a_{11} \in L_\alpha(\tilde a_{11}),\\
\text{Minimize } [(x_1 - 1)^2 + (x_2 - a_{22})^2] &= (x_1 - 1)^2, & x_2 \in S_2,\ a_{22} \in L_\alpha(\tilde a_{22}),\\
\text{Maximize } [(x_1 - 1)^2 + (x_2 - a_{22})^2] &= (x_1 - 1)^2 + a_{22}^2, & x_2 \in S_2,\ a_{22} \in L_\alpha(\tilde a_{22}),\\
\text{Minimize } [2x_1 + x_2^2] &= 2x_1, & x_2 \in S_2,\ a_{33} \in L_\alpha(\tilde a_{33}),\\
\text{Maximize } [2x_1 + x_2^2] &= 2x_1 + 16, & x_2 \in S_2,\ a_{33} \in L_\alpha(\tilde a_{33}),
\end{aligned}$$

and the membership functions are

$$\mu_1(x_2) = \begin{cases} 1, & x_2^2 \le 0,\\ 1 - \frac{x_2^2}{16}, & 0 \le x_2^2 \le 16,\\ 0, & x_2^2 \ge 16, \end{cases} \qquad
\mu_2(x_2) = \begin{cases} 1, & (x_2 - a_{22})^2 \le 0,\\ 1 - \frac{(x_2 - a_{22})^2}{a_{22}^2}, & 0 \le (x_2 - a_{22})^2 \le a_{22}^2,\\ 0, & (x_2 - a_{22})^2 \ge a_{22}^2, \end{cases} \qquad
\mu_3(x_2) = \begin{cases} 1, & x_2^2 \le 0,\\ 1 - \frac{x_2^2}{16}, & 0 \le x_2^2 \le 16,\\ 0, & x_2^2 \ge 16. \end{cases}$$

To solve $Q_2(4)$, we formulate the following problem $(\lambda\text{-NLP})_2$:

$$P_2:\quad \text{Min } f(\lambda) = -\lambda$$

subject to

$$16\lambda \le 16 - x_2^2,\quad a_{22}^2 \lambda \le a_{22}^2 - (x_2 - a_{22})^2,\quad x_1 + x_2 \le 4,\quad 0.2 \le a_{22} \le 4.6,\quad x_1, x_2 \ge 0.$$

Using H-SOA to find a satisfactory solution, the $\alpha$-Pareto optimal solution of problem $P_2$ is

$$X^* = (x_1^*, x_2^*, a_{22}^*, \lambda^*) = (2.71, 0.32, 0.35, 0.99),\quad f^* = 0.99.$$
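The reported point can be checked directly against the $(\lambda\text{-NLP})_2$ constraints; a quick verification sketch (nonnegative slacks mean the constraints hold):

```python
# Reported P2 solution from the text: (x1, x2, a22, lambda).
x1, x2, a22, lam = 2.71, 0.32, 0.35, 0.99

# Slack of each (lambda-NLP)_2 constraint; positive slack = satisfied.
slack_mu13 = (16 - x2 ** 2) - 16 * lam                      # 16*lam <= 16 - x2^2
slack_mu2 = (a22 ** 2 - (x2 - a22) ** 2) - a22 ** 2 * lam   # a22^2*lam <= a22^2 - (x2-a22)^2
slack_sum = 4 - (x1 + x2)                                   # x1 + x2 <= 4
in_bounds = (0.2 <= a22 <= 4.6) and x1 >= 0 and x2 >= 0
```

All slacks come out positive, so the reported solution is feasible for $(\lambda\text{-NLP})_2$.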

Then the $\alpha$-Pareto optimal solution is $\lambda^* = 0.99$, $x_2^* = 0.32$, $a_{22}^* = 0.35$.

STEP 7. Find an $\alpha$-Pareto optimal solution of $P_1[z_1((x_2, a_{22}), z_2(x_3, a_{33}, 4))]_\alpha$. From the functional equation of DP (1), we obtain the following problem:


$$Q_1(4) = \text{Minimal } [(x_1 - a_{11})^2,\ (x_1 - 1)^2,\ 2x_1],\quad x_1 \in S_1,\ a_{11} \in L_\alpha(\tilde a_{11}),$$

where

$$\begin{aligned}
\text{Minimize } [(x_1 - a_{11})^2] &= 0, & x_1 \in S_1,\ a_{11} \in L_\alpha(\tilde a_{11}),\\
\text{Maximize } [(x_1 - a_{11})^2] &= a_{11}^2, & x_1 \in S_1,\ a_{11} \in L_\alpha(\tilde a_{11}),\\
\text{Minimize } [(x_1 - 1)^2] &= 0, & x_1 \in S_1,\ a_{22} \in L_\alpha(\tilde a_{22}),\\
\text{Maximize } [(x_1 - 1)^2] &= 9, & x_1 \in S_1,\ a_{22} \in L_\alpha(\tilde a_{22}),\\
\text{Minimize } [2x_1] &= 0, & x_1 \in S_1,\ a_{33} \in L_\alpha(\tilde a_{33}),\\
\text{Maximize } [2x_1] &= 8, & x_1 \in S_1,\ a_{33} \in L_\alpha(\tilde a_{33}),
\end{aligned}$$

and the membership functions are

$$\mu_1(x_1) = \begin{cases} 1, & (x_1 - a_{11})^2 \le 0,\\ 1 - \frac{(x_1 - a_{11})^2}{a_{11}^2}, & 0 \le (x_1 - a_{11})^2 \le a_{11}^2,\\ 0, & (x_1 - a_{11})^2 \ge a_{11}^2, \end{cases} \qquad
\mu_2(x_1) = \begin{cases} 1, & (x_1 - 1)^2 \le 0,\\ 1 - \frac{(x_1 - 1)^2}{9}, & 0 \le (x_1 - 1)^2 \le 9,\\ 0, & (x_1 - 1)^2 \ge 9, \end{cases} \qquad
\mu_3(x_1) = \begin{cases} 1, & 2x_1 \le 0,\\ 1 - \frac{2x_1}{8}, & 0 \le 2x_1 \le 8,\\ 0, & 2x_1 \ge 8. \end{cases}$$

To solve $Q_1(4)$, we formulate the following problem $(\lambda\text{-NLP})_1$:

$$P_1:\quad \text{Min } f(\lambda) = -\lambda$$

subject to

$$8\lambda \le 8 - 2x_1,\quad 9\lambda \le 9 - (x_1 - 1)^2,\quad a_{11}^2 \lambda \le a_{11}^2 - (x_1 - a_{11})^2,\quad x_1 \le 4,\quad 1.2 \le a_{11} \le 4.8,\quad x_1 \ge 0.$$

Using H-SOA to find a satisfactory solution, the $\alpha$-Pareto optimal solution of problem $P_1$ is:

$$X^* = (x_1^*, a_{11}^*, \lambda^*) = (0.69, 1.20, 0.826),\quad f^* = 0.826.$$

Then the $\alpha$-Pareto optimal solution is $\lambda^* = 0.826$, $x_1^* = 0.69$, $a_{11}^* = 1.20$. Therefore the $\alpha$-Pareto optimal solution of ($\alpha$-VMP) is $(x_n^*, a_n^*) = ((0.69, 1.20), (0.32, 0.35), (1.83, 3.40))$ at the $\alpha$-level set $\alpha^* = 0.36$.

It must be observed here that, if the DM is not satisfied with the current degree $\alpha$ of the $\alpha$-Pareto optimal solution, the DM may continue the procedure in the same manner until a satisfactory degree $\alpha$ is reached.

In this section, a comparative study has been carried out to assess the proposed approach with respect to solving difficulty, Pareto optimality and the interactive procedure. On the one hand, dynamic programming decomposes a multistage decision problem into single-stage decision problems that are solved successively. On the other hand, multistage decision problems can also be solved by direct application of classical optimization techniques; however, this requires the number of variables to be small, the functions involved to be continuous and continuously differentiable, and the optimum points not to lie on the boundary. Furthermore, the problem has to be relatively simple so that the resulting set of equations can be solved either analytically or numerically. In contrast, our algorithm guarantees that the solutions obtained are always $\alpha$-Pareto optimal. Finally, changing the value of $\alpha$ when the DM is not satisfied changes only the bounds of the decision variables ($a$), i.e., it reduces or enlarges the search space of the NLPP; thus, different degrees $\alpha$ yield different $\alpha$-Pareto optimal solutions of the problem. The feasibility of using the proposed approach to handle MOP problems has also been verified computationally.
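Because $(\lambda\text{-NLP})_1$ involves only $x_1$ and $a_{11}$, the final stage is cheap to cross-check by exhaustive grid search (again an illustrative sketch, not the paper's H-SOA); the grid maximin value lands near the reported $\lambda^* \approx 0.826$:

```python
# Exhaustive maximin cross-check for (lambda-NLP)_1: maximize
# lambda = min(mu1, mu2, mu3) over 0 <= x1 <= 4, 1.2 <= a11 <= 4.8.
def stage1_lambda(x1, a11):
    return min(1 - 2 * x1 / 8,
               1 - (x1 - 1) ** 2 / 9,
               1 - (x1 - a11) ** 2 / a11 ** 2)

best_lam, best_x1, best_a11 = -1.0, 0.0, 1.2
for i in range(4001):                       # x1 grid, step 0.001
    x1 = 4.0 * i / 4000
    for j in range(37):                     # a11 grid, step 0.1
        a11 = 1.2 + 3.6 * j / 36
        lam = stage1_lambda(x1, a11)
        if lam > best_lam:
            best_lam, best_x1, best_a11 = lam, x1, a11
```

The search settles at $x_1 \approx 0.70$, $a_{11} = 1.2$, $\lambda \approx 0.825$, consistent with the H-SOA result within the grid resolution.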


7. Concluding remarks

In this paper, we have introduced an interactive dynamic approach, based on hybrid swarm optimization, for generating an $\alpha$-Pareto optimal solution of multiobjective mathematical programming problems with fuzzy parameters. These fuzzy parameters are assumed to be characterized as fuzzy numbers. The algorithm is based on the principle of optimality in dynamic programming (DP) together with hybrid swarm optimization, and an interactive fuzzy decision-making algorithm for generating an $\alpha$-Pareto optimal solution through the decomposition method is developed. The main features of the proposed algorithm can be summarized as follows:

(a) The proposed approach converts the multiobjective optimization problem into a sequence of $n$ nonlinear programming problems.
(b) The proposed algorithm uses H-SOA to solve nonlinear programming problems with fuzzy parameters; any complex problem with the required structure can be solved, without the differentiability and continuity properties that traditional methods demand.
(c) It combines DP, FDM, SO and GAs to solve FMONLP problems.
(d) The solutions obtained by this method are always $\alpha$-Pareto optimal.
(e) The proposed algorithm converts the fuzzy vector minimization problem (FVMP) into $n$ nonlinear programming problems, which can be solved efficiently using GAs.
(f) With different degrees $\alpha$, we obtain different $\alpha$-Pareto optimal solutions of the problem.
(g) Changing the value of $\alpha$, if the DM is not satisfied, changes only the bounds of the decision variables ($a$), i.e., it reduces or enlarges the search space of the NLPP.

Finally, an illustrative numerical example has been given to clarify the developed method and the proposed algorithm. As further work, we intend to study how the Pareto optimal solution changes with the parameter $\alpha$, in order to select stable Pareto optimal solutions.
Acknowledgments

The authors are grateful to the anonymous reviewers for their valuable comments and helpful suggestions, which greatly improved the quality of the paper.