
Applied Mathematics and Computation 215 (2009) 3029–3035
journal homepage: www.elsevier.com/locate/amc
doi:10.1016/j.amc.2009.09.051

A simulated annealing technique for multi-objective simulation optimization

Mahmoud H. Alrefaei a,*, Ali H. Diabat b

a Qatar University, Department of Mathematics and Physics, P.O. Box 2713, Doha, Qatar
b Masdar Institute of Science and Technology, P.O. Box 54224, Abu Dhabi, United Arab Emirates

* Corresponding author. Address: Jordan University of Science and Technology, Department of Mathematics and Statistics, Irbid 22110, Jordan. E-mail addresses: [email protected], [email protected] (M.H. Alrefaei), [email protected] (A.H. Diabat).


Abstract

In this paper, we present a simulated annealing algorithm for solving multi-objective simulation optimization problems. The algorithm is based on the idea of simulated annealing with constant temperature, and uses a rule for accepting a candidate solution that depends on the individual estimated objective function values. The algorithm is shown to converge almost surely to an optimal solution. It is applied to a multi-objective inventory problem; the numerical results show that the algorithm converges rapidly.

© 2009 Elsevier Inc. All rights reserved.

Keywords: Simulated annealing; Simulation optimization; Multi-objective simulation optimization

1. Introduction

In many practical optimization problems, the evaluation of the objective function values is subject to noise; i.e., the objective function has no exact formula. This type of problem arises in many complex stochastic systems, and simulation is the main tool used to estimate the performance of such systems. In this paper, we consider the following discrete stochastic optimization problem:

min_{θ∈Θ} J(θ) = min_{θ∈Θ} E[h(θ, Y_θ)],    (1)

where the feasible region Θ is the set of feasible solutions; θ is the decision parameter, consisting of the input parameters of the simulation; the objective function J : Θ → R is the expected system performance when the input parameter values are given by θ; and h is a deterministic, real-valued function that depends on θ and on a random variable Y_θ that also depends on θ. When the simulation optimization problem has more than one performance measure, i.e., the function J is an n-vector J = (J_1, …, J_n), where J_i(θ) = E[h_i(θ, Y_θ)] is a real-valued function whose evaluation encounters some noise, the problem becomes a multi-objective simulation optimization problem, for which an alternative that optimizes the vector J is desired. Of course, there may be no single θ that simultaneously minimizes all objectives. In this case, the various objectives are weighted by the decision maker and aggregated into a single-objective optimization problem (see Weigert et al. [1]):

J = Σ_{i=1}^{n} w_i J_i,   where   Σ_{i=1}^{n} w_i = 1,   w_i ≥ 0,    (2)

where w_i is the relative importance of objective i and J_i is the mean performance measure of objective i; one can then use an existing optimization method to solve the new optimization problem min_{θ∈Θ} J(θ). In many cases, the decision maker needs to know the relative importance of each performance measure before the optimization is done. This means that the best solution selected depends on the preferences of the decision maker: once the preferences are changed, the solution changes as well. When the feasible region is finite and relatively small, the multi-objective computing budget allocation (MOCBA) procedure developed by Lee et al. [2,3] can be used, which integrates Pareto optimality with ranking and selection (R&S) to find all non-dominated solutions. When the feasible region is large, they integrate the MOCBA approach with the nested partitions (NP) algorithm (see Ólafsson [4] for an example that uses NP with a two-stage R&S procedure).

One way of solving the aggregated optimization problem (2) is to simulate each alternative, obtain a precise estimate of each performance measure for each alternative, and then use a deterministic optimization method to solve the resulting approximate problem. However, simulation requires considerable computational time to produce a precise estimate of the performance measures. When the objective is to optimize a large-scale simulation problem, it becomes very costly to simulate all the alternatives to get an accurate estimate of the objective function for each alternative. Therefore, approaches that decrease the simulation time are needed.

One such approach is the simulated annealing (SA) algorithm. SA is a random search method proposed by Kirkpatrick et al. [5] for solving discrete optimization problems when exact objective values are available. It requires that the set of feasible solutions, Θ, be well structured. The idea of SA is to allow hill-climbing moves, so that the algorithm can escape from local minima. In iteration k, we compare the current alternative θ_k with a candidate alternative θ′_k, where θ′_k ∈ N(θ_k) (N(θ_k) is the set of all neighboring states of the current state θ_k). If J(θ′_k) < J(θ_k), then θ′_k is a better alternative than θ_k, and since we are trying to solve the minimization problem in Eq. (1), we set θ_{k+1} = θ′_k. On the other hand, if J(θ′_k) ≥ J(θ_k), then θ_k is a better alternative than θ′_k. Despite this, the algorithm will move to the alternative θ′_k with a certain probability, because good states might be hidden behind θ′_k. Thus, the algorithm will make a hill-climbing move to the worse alternative θ′_k with a probability that depends on the difference Δ_k J = J(θ′_k) − J(θ_k); this probability is given by:

P(Δ_k J) = exp( −(J(θ′_k) − J(θ_k)) / T_k ),

where {T_k} is a decreasing sequence of positive real numbers such that T_k → 0 as k → ∞. The sequence {T_k} is called a cooling schedule, as T_k is interpreted as a temperature. The initial value of T_k and the rate at which it is decreased play an important role in the convergence of the algorithm: if T_k is decreased too rapidly, the algorithm may end up in a local minimum, and if it is decreased too slowly, then progress is slow. More details can be found in Aarts and Korst [6].

A number of researchers have studied the use of simulated annealing to solve simulation-based optimization problems. Kushner [7] was the first to examine the behavior of SA when the value of the objective function can only be sampled via the Monte Carlo method. He presented a general approach by specifying a system that combines a stochastic approximation method with SA. Based on the theory of large deviations, and assuming Gaussian noise, he gives a full asymptotic analysis of the system under suitable conditions. Bulgak and Sanders [8] and Haddock and Mittenthal [9] proposed heuristic versions of simulated annealing and applied them to optimizing manufacturing systems. Gelfand and Mitter [10] and Gutjahr and Pflug [11] assumed that the noise is normally distributed with zero mean and small variance.

Other researchers have proposed and analyzed further variants of simulated annealing. Alrefaei and Andradóttir [12] proposed a method that, at termination, instead of returning the last solution visited, returns the solution visited most often throughout the run. They also consider the state with the best average function value over all samples taken as an estimate of the optimal solution. Because of this, the algorithm does not require convergence of the underlying Markov chain, and it can be run at a constant temperature. They showed that, under the condition of decreasing noise, the procedure converges almost surely to the set of optimal solutions.
Alrefaei [13] proposed an approach for solving simulation optimization problems using the standard clock simulation technique and used it for optimizing a voice multihop radio network [14]. Fox and Heine [15] also used simulated annealing for solving discrete stochastic optimization problems when the range of the objective function is bounded. The method of Alkhamis and Ahmed [16] combines two other approaches: it uses a constant temperature and returns the most often visited solution, as in Alrefaei and Andradóttir [12], and it also uses the lower bound of a confidence interval to make acceptance decisions, as in Alkhamis et al. [17]. Andradóttir and Prudius [18] presented an approach called BEES that strikes a balance between searching the global space and searching locally in promising subregions. They based their approach on the algorithm of Alrefaei and Andradóttir [12]; their numerical results show that the BEES approach indeed accelerates convergence. Ahmed and Alkhamis [19] combined SA with a ranking-and-selection procedure for solving discrete simulation-based optimization problems. Their algorithm divides the sampling for each solution into two stages; the number of samples taken in the second stage depends on two parameters as well as on the mean solution quality that results from the first stage. The method resembles the algorithm of Alrefaei and Andradóttir [12] in that it returns the solution visited most often throughout the run. Ahmed and Alkhamis [19] assume Gaussian noise with unknown variance and prove that their procedure converges almost surely to the globally optimal solution. Al-Aomar [20] used simulated annealing in control theory.

In this paper, we use simulated annealing for solving multi-objective optimization problems and apply it to an inventory management problem.
The paper is organized as follows: in Section 2, we discuss the use of simulated annealing for multi-objective optimization problems; in Section 3, we provide numerical results for our algorithm on an inventory problem; conclusions are presented in Section 4.
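As background, the classic SA scheme with a decreasing cooling schedule, as described in the introduction, can be sketched as follows. This is an illustrative Python sketch only: the objective J is assumed exact here (whereas this paper's setting provides only noisy estimates), and the geometric schedule T_k = T_0·α^k and all parameter defaults are assumptions for the example.

```python
import math
import random

def simulated_annealing(J, neighbors, theta0, T0=10.0, alpha=0.99, iters=5000):
    """Classic SA with a geometric cooling schedule T_k = T0 * alpha**k.

    J: exact objective to minimize; neighbors(theta): list of neighbor states.
    All names and defaults are illustrative, not taken from the paper.
    """
    theta, T = theta0, T0
    for _ in range(iters):
        cand = random.choice(neighbors(theta))
        delta = J(cand) - J(theta)
        # Always accept improving moves; accept worse ones with prob exp(-delta/T)
        if delta < 0 or random.random() < math.exp(-delta / T):
            theta = cand
        T *= alpha  # cooling: T_k decreases toward 0
    return theta
```

For instance, minimizing J(x) = (x − 7)² over the integers 0–20 with neighbors x ± 1 settles near x = 7 once the temperature has decayed, since hill-climbing moves are then rejected with high probability.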


2. Simulated annealing for multi-objective functions

We assume that the state space Θ contains a nonempty proper subset Θ* of optimal solutions, in the sense that there is at least one θ* ∈ Θ* and some 1 ≤ i ≤ n such that J_i(θ*) < J_i(θ) for all θ ∈ Θ \ Θ*, because otherwise the problem is trivial. We also assume that Θ is well structured in the sense that for each θ ∈ Θ there exists a subset N(θ) of Θ \ {θ}, called the set of neighbors of θ, such that for any θ, θ′ ∈ Θ, θ′ is reachable from θ; i.e., there exists a finite sequence {n_i}_{i=0}^{l}, for some l, such that

θ_{n_0} = θ,  θ_{n_l} = θ′,  and  θ_{n_{i+1}} ∈ N(θ_{n_i}),  i = 0, 1, 2, …, l − 1.

For each θ, θ′ ∈ Θ, let R(θ, θ′) be the probability of generating a candidate solution θ′ given that the current solution is θ. We assume that the transition matrix R is irreducible, i.e., we can move from any state θ to any other state θ′ after a finite number of transitions with positive probability. Furthermore, we assume that R satisfies:

R(θ, θ′) = R(θ′, θ),   ∀ θ, θ′ ∈ Θ.

Now we state the proposed simulated annealing algorithm for solving multi-objective optimization problems in the presence of noise. Note that V_k(θ) is the number of times the Markov chain {X_k} has visited state θ in the first k iterations, for all θ ∈ Θ, and X*_k is the state that maximizes V_k(·).

Algorithm 1.

Step 0: Select a starting point X_0 ∈ Θ. Let V_0(X_0) = 1 and V_0(θ) = 0 for all θ ∈ Θ, θ ≠ X_0. Let k = 0 and X*_k = X_0, and let L_k be the sample size at iteration k.

Step 1: Given X_k = θ, choose a candidate Z_k ∈ N(θ) with probability

P[Z_k = θ′ | X_k = θ] = R(θ, θ′),   ∀ θ′ ∈ N(θ).

Step 2: Given X_k = θ and Z_k = θ′, generate independent, identically distributed, and unbiased observations Y_{θ′}(1), Y_{θ′}(2), …, Y_{θ′}(L_k) of Y_{θ′} and Y_θ(1), Y_θ(2), …, Y_θ(L_k) of Y_θ. For all i = 1, …, n, compute Ĵ_i(θ′) and Ĵ_i(θ) as follows:

Ĵ_i(s) = (1/L_k) Σ_{l=1}^{L_k} h_i(s, Y_s(l))   for s = θ, θ′.    (3)

Step 3: Given X_k = θ and Z_k = θ′, generate U_k ~ U[0, 1], and set

X_{k+1} = Z_k  if U_k ≤ G_{θ,θ′}(k),  and  X_{k+1} = X_k  otherwise,

where

G_{θ,θ′}(k) = exp( −Σ_{i=1}^{n} [Ĵ_i(θ′) − Ĵ_i(θ)]⁺ / T ),    (4)

where T is a fixed number, and [·]⁺ is defined as follows:

[x]⁺ = x  if x > 0,  and  [x]⁺ = 0  otherwise.

Step 4: Let k = k + 1, V_k(X_k) = V_{k−1}(X_k) + 1, and V_k(θ) = V_{k−1}(θ) for all θ ∈ Θ, θ ≠ X_k. If V_k(X_k) > V_k(X*_{k−1}), then let X*_k = X_k; otherwise, let X*_k = X*_{k−1}. Go to Step 1.

The stochastic process {X_k} generated by Algorithm 1 is a time-inhomogeneous Markov chain whose transition probability matrices are given by

P′_k(θ, θ′) = P{X_{k+1} = θ′ | X_k = θ} =
    R(θ, θ′) P{U_k ≤ G_{θ,θ′}(k)}    if θ′ ∈ N(θ),
    1 − Σ_{g∈N(θ)} P′_k(θ, g)        if θ′ = θ,
    0                                otherwise,

for all k ∈ N, where U_k is a uniform random variable defined on the interval [0, 1] and G_{θ,θ′}(k) is defined in Eq. (4). Note that if θ′ ∈ N(θ), then

P′_k(θ, θ′) = R(θ, θ′) P{U_k ≤ G_{θ,θ′}(k)} = R(θ, θ′) E[ E[ 1_{{U_k ≤ G_{θ,θ′}(k)}} | G_{θ,θ′}(k) ] ] = R(θ, θ′) E[G_{θ,θ′}(k)].

Therefore, the transition probability matrices P′_k can be rewritten as:

P′_k(θ, θ′) = P{X_{k+1} = θ′ | X_k = θ} =
    R(θ, θ′) p′_{θ,θ′}(k)            if θ′ ∈ N(θ),
    1 − Σ_{g∈N(θ)} P′_k(θ, g)        if θ′ = θ,
    0                                otherwise,    (5)


for all k ∈ N, where p′_{θ,θ′}(k) = E[G_{θ,θ′}(k)] for all θ ∈ Θ, θ′ ∈ N(θ), and k ∈ N. Let P be the transition probability matrix defined by

P(θ, θ′) =
    R(θ, θ′) exp( −Σ_{i=1}^{n} [J_i(θ′) − J_i(θ)]⁺ / T )    if θ′ ∈ N(θ),
    1 − Σ_{g∈N(θ)} P(θ, g)                                  if θ′ = θ,
    0                                                       otherwise,    (6)

and let π_θ be defined as follows:

π_θ = Π_{i=1}^{n} exp(−J_i(θ)/T) / Σ_{g∈Θ} Π_{i=1}^{n} exp(−J_i(g)/T)    (7)

for all θ ∈ Θ. Then we have the following proposition:

Proposition 1. The transition probability matrix P given in Eq. (6) is irreducible and aperiodic and has stationary distribution π, where π is a vector whose entries π_θ are given in Eq. (7).

Proof. The proof of irreducibility follows directly from the structure of N(θ) and Eq. (6). For the proof that π is the stationary distribution of P, note that Σ_{θ∈Θ} π_θ = 1. Now if θ′ ∈ N(θ), then

π_{θ′} / π_θ = Π_{i=1}^{n} exp(−J_i(θ′)/T) / Π_{i=1}^{n} exp(−J_i(θ)/T) = Π_{i=1}^{n} exp(−[J_i(θ′) − J_i(θ)]⁺/T) / Π_{i=1}^{n} exp(−[J_i(θ) − J_i(θ′)]⁺/T) = P(θ, θ′) / P(θ′, θ),

where the second equality uses [x]⁺ − [−x]⁺ = x and the last equality uses the symmetry R(θ, θ′) = R(θ′, θ). Thus,

π_{θ′} P(θ′, θ) = π_θ P(θ, θ′).

This is also true if θ and θ′ are not neighbors, because in this case P(θ′, θ) = P(θ, θ′) = 0. Therefore,

Σ_{θ∈Θ} π_θ P(θ, θ′) = π_{θ′} Σ_{θ∈Θ} P(θ′, θ) = π_{θ′}.

This completes the proof that π is a stationary distribution of P. To prove aperiodicity, note that by the assumption that Θ* ≠ Θ, there exist θ* ∈ Θ* and θ ∈ N(θ*) with J_i(θ*) < J_i(θ) for some 1 ≤ i ≤ n. Therefore, from the definition of P given in Eq. (6), P(θ*, θ*) > 0, and therefore P is aperiodic. □

The proof of the following proposition is straightforward.

Proposition 2. The transition matrices P′_k(θ, θ′) → P(θ, θ′) as k → ∞ for all θ, θ′ ∈ Θ, where the Markov matrices P′_k and P are given in Eqs. (5) and (6), respectively.

Let Θ* be the set of all optimal solutions in the sense that

Θ* = { θ* : Σ_{i=1}^{n} J_i(θ*) ≤ Σ_{i=1}^{n} J_i(θ),  ∀ θ ∈ Θ },

then by Lemma 3.1 of Andradóttir [21], we have that

V_k(θ)/k = (1/k) Σ_{i=0}^{k} 1_{{X_i = θ}} → π_θ   almost surely as k → ∞

for all θ ∈ Θ. It is clear that π_θ > 0 for all θ ∈ Θ, so that Eq. (7) yields

π_{θ′} / π_θ = Π_{i=1}^{n} exp(−J_i(θ′)/T) / Π_{i=1}^{n} exp(−J_i(θ)/T) = exp( −(1/T) Σ_{i=1}^{n} [J_i(θ′) − J_i(θ)] )

for all θ, θ′ ∈ Θ. Therefore, π_{θ′} ≤ π_θ if and only if Σ_{i=1}^{n} J_i(θ′) ≥ Σ_{i=1}^{n} J_i(θ). This shows that

arg max_{θ∈Θ} π_θ = Θ*.    (8)
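Proposition 1 can also be checked numerically on a toy instance. The sketch below builds P from Eq. (6) and π from Eq. (7) for an assumed 4-state ring with two objectives (all numbers illustrative, not from the paper) and asserts detailed balance and stationarity:

```python
import math

T = 1.0
states = [0, 1, 2, 3]
# Two objective values J_1, J_2 per state; arbitrary illustrative numbers
J = {0: [1.0, 2.0], 1: [0.5, 1.0], 2: [2.0, 0.2], 3: [0.1, 0.1]}
neighbors = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}  # symmetric ring

def R(a, b):
    # Symmetric, irreducible generation matrix: uniform over neighbors
    return 1.0 / len(neighbors[a]) if b in neighbors[a] else 0.0

def P(a, b):
    # Transition matrix of Eq. (6)
    if b in neighbors[a]:
        return R(a, b) * math.exp(-sum(max(jb - ja, 0.0)
                                       for ja, jb in zip(J[a], J[b])) / T)
    if a == b:
        return 1.0 - sum(P(a, g) for g in neighbors[a])
    return 0.0

# Eq. (7): prod_i exp(-J_i(theta)/T), normalized over the state space
weight = {s: math.exp(-sum(J[s]) / T) for s in states}
Z = sum(weight.values())
pi = {s: weight[s] / Z for s in states}

for a in states:
    for b in states:
        # Detailed balance: pi_a P(a,b) = pi_b P(b,a)
        assert abs(pi[a] * P(a, b) - pi[b] * P(b, a)) < 1e-12
for b in states:
    # Stationarity: pi P = pi
    assert abs(sum(pi[a] * P(a, b) for a in states) - pi[b]) < 1e-12
```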

Now the proof of the following theorem is straightforward and mimics the proof of Theorem 1 of Alrefaei and Andradóttir [12].

Theorem 1. The sequence {X*_k} generated by Algorithm 1 converges almost surely to the set Θ*.
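Algorithm 1 can be sketched in code as follows. This is a hedged illustration: the function `estimate(theta, L)` is an assumed stand-in for Step 2 (it must return the n sample-mean estimates Ĵ_1(θ), …, Ĵ_n(θ) from L simulation samples), and all names and defaults are chosen for the example rather than taken from the paper.

```python
import math
import random

def algorithm1(estimate, neighbors, theta0, T=1.0, L=30, iters=1000):
    """Constant-temperature multi-objective SA returning the most
    visited state, in the spirit of Algorithm 1 (illustrative sketch)."""
    visits = {theta0: 1}
    theta, best = theta0, theta0
    for _ in range(iters):
        cand = random.choice(neighbors(theta))      # Step 1: draw Z_k
        j_cand = estimate(cand, L)                  # Step 2: fresh estimates
        j_cur = estimate(theta, L)
        # Step 3: acceptance uses the positive parts of the differences, Eq. (4)
        g = math.exp(-sum(max(jc - jt, 0.0)
                          for jc, jt in zip(j_cand, j_cur)) / T)
        if random.random() <= g:
            theta = cand
        # Step 4: update visit counts and the most visited state X*_k
        visits[theta] = visits.get(theta, 0) + 1
        if visits[theta] > visits.get(best, 0):
            best = theta
    return best
```

On a toy single-objective problem with a unique minimizer, the most visited state coincides with the optimum once the chain has mixed, which is exactly what Theorem 1 guarantees asymptotically.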


2.1. The best estimate scenario

The algorithm above generates a sequence of states, where the current state is taken as an estimate of the optimal solution. Here we consider another approach for estimating the optimal solution: select, out of all the states previously visited, the one with the best estimated objective function value, and use it as the estimate of the optimal solution. The algorithm is described below:

Algorithm 2.

Step 0: Select a starting point X_0 ∈ Θ. For all θ ∈ Θ, let A_0(θ) = 0 and C_0(θ) = 0. Let k = 0 and X*_k = X_0, and let L_k be the sample size at iteration k.

Step 1: Similar to Step 1 in Algorithm 1.

Step 2: Similar to Step 2 in Algorithm 1.

Step 3: Update A_k and C_k as follows: A_{k+1}(s) = A_k(s) + Σ_{i=1}^{n} Σ_{l=1}^{L_k} h_i(s, Y_s(l)) and C_{k+1}(s) = C_k(s) + L_k, for s = θ, θ′. Moreover, let A_{k+1}(θ″) = A_k(θ″) and C_{k+1}(θ″) = C_k(θ″) for all θ″ ≠ θ, θ′.

Step 4: Similar to Step 3 in Algorithm 1.

Step 5: Let k = k + 1. If A_k(X_k)/C_k(X_k) < A_k(X*_{k−1})/C_k(X*_{k−1}) (use the convention 0/0 = ∞), then let X*_k = X_k; otherwise, let X*_k = X*_{k−1}. Go to Step 1.
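The bookkeeping in Steps 3 and 5 amounts to maintaining running sums and counts per state and tracking the state with the smallest sample mean. A minimal Python sketch (names illustrative, not from the paper):

```python
def update_best_estimate(A, C, samples_by_state, best):
    """One combined Step-3/Step-5 update of Algorithm 2's running averages.

    A[s] accumulates the summed objective samples for state s and C[s]
    the sample counts; the current estimate of the optimum is the state
    with the smallest A[s]/C[s] among the candidates considered.
    """
    for s, samples in samples_by_state.items():
        A[s] = A.get(s, 0.0) + sum(samples)
        C[s] = C.get(s, 0) + len(samples)

    def avg(s):  # convention 0/0 = infinity for never-sampled states
        return A[s] / C[s] if C.get(s, 0) > 0 else float("inf")

    return min([best] + list(samples_by_state), key=avg)
```

Because the averages are cumulative, every visit to a state refines its estimate, which is what lets Algorithm 2 single out the state with the best estimated objective value at termination.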

3. Numerical results

In this section we evaluate the performance of our algorithm on a multi-objective (s, S) inventory model (see Axsäter [22] or Zipkin [23]). An (s, S) policy means that an order is triggered as soon as the inventory position declines to or below the reorder point s. The order size is chosen so that the inventory position increases to S. A stockout occurs if the sum of the undershoot and the total lead time demand exceeds s (see Fig. 1). Due to the difficulty in determining optimal values of (s, S), several approximations have been suggested. Baganha et al. [24] develop a simple algorithm to compute the undershoot and show that for most common demand distributions the mean and variance given in their paper are quite close to the actual values. Hill [25] examines the cost of ignoring the undershoot and develops an approximation with zero undershoot. The reader interested in a comprehensive comparison of several approximate (s, S) policies should refer to Porteus [26].

Fig. 1. The (s, S) inventory policy.

The example we consider in this section consists of three objectives that are to be minimized: the average ordering cost, the average holding cost (storage cost), and the average shortage cost. We assume that the time between successive demands is exponentially distributed with a mean interarrival time of λ days, and that the size of each demand is a discrete random variable with the following probabilities: D = 1 with probability 0.15; D = 2 with probability 0.25; D = 3 with probability 0.3; D = 4 with probability 0.1; D = 5 with probability 0.1; D = 6 with probability 0.05; and D = 7 with probability 0.05. The lead time (the time between placing an order and receiving it) is assumed to be uniformly distributed between 1 and 2 days. The inventory is reviewed every month. The setup cost to place an order is K = $50 and the cost of each item ordered is c = $5. The holding cost per item per month is h = $2, and the shortage cost per item per month is assumed to be p = $5. We are interested in finding the state that minimizes the total cost. We assume that the feasible solution set is Θ = {(s, S) : 20 ≤ s ≤ 115, 45 ≤ S ≤ 235, S − s ≥ 25, s and S multiples of 5}. Fig. 2 depicts the estimates of these costs for all designs.

Santé-Riveira et al. [27] used SA with a varying temperature to solve a multi-objective land-allocation optimization problem and compared their algorithm with three other approaches. We compared the results of Algorithms 1 and 2 with theirs. Each algorithm was run for 500 iterations; at each iteration, 30 samples were used to estimate the objective function values at the current θ and at candidate θ's. Our algorithms used a fixed temperature of T = 1 and a neighborhood structure N given by:

N(s, S) = {(s′, S′) ∈ Θ : |s − s′| ≤ 5, |S − S′| ≤ 5}.

We ran the algorithms for 100 replications. The average performance of the algorithms over the 100 replications is presented in Fig. 3, which plots the total cost against the iteration number. It is clear from Fig. 3 that our fixed-temperature SA algorithms converge much more rapidly than the variable-temperature SA algorithms. Note also that fixed-temperature SA with the best estimate scenario outperforms variable-temperature SA, even if the best estimate scenario is used for variable-temperature SA.
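The three cost estimates for a single (s, S) policy can be produced by a Monte Carlo simulator along the lines of the one described above. The sketch below is deliberately simplified: it reviews monthly, charges holding/shortage on the end-of-month level, and assumes orders arrive before the next review (so the 1–2 day lead time is ignored); the interarrival mean and these structural shortcuts are assumptions for illustration, not the paper's exact model.

```python
import random

def simulate_sS(s, S, months=120, mean_interarrival=1.0, seed=0):
    """Estimate (avg ordering, avg holding, avg shortage) cost per month
    for one (s, S) policy.  Cost parameters follow the text:
    K = 50, c = 5, h = 2, p = 5.  Simplified sketch, not the exact model."""
    rng = random.Random(seed)
    sizes = [1, 2, 3, 4, 5, 6, 7]
    probs = [0.15, 0.25, 0.3, 0.1, 0.1, 0.05, 0.05]
    K, c, h, p = 50.0, 5.0, 2.0, 5.0
    level = S
    order_cost = hold_cost = short_cost = 0.0
    for _ in range(months):
        # Demands over one 30-day month: exponential interarrival times
        t = rng.expovariate(1.0 / mean_interarrival)
        while t < 30.0:
            level -= rng.choices(sizes, probs)[0]
            t += rng.expovariate(1.0 / mean_interarrival)
        hold_cost += h * max(level, 0)      # end-of-month holding
        short_cost += p * max(-level, 0)    # end-of-month backlog
        if level <= s:                      # monthly review: order up to S
            order_cost += K + c * (S - level)
            level = S
    n = float(months)
    return order_cost / n, hold_cost / n, short_cost / n
```

Summing the three returned values gives the aggregated total monthly cost for one design, i.e., one noisy evaluation of the objective vector that Algorithms 1 and 2 consume.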

Fig. 2. Estimates of the different costs and the total cost for all designs.

Fig. 3. The average performance of the ﬁxed-temperature SA algorithm compared to the variable-temperature SA algorithms over 100 replications.


4. Conclusion

We have considered the optimization of multi-objective functions in the presence of noise. Our approach uses simulated annealing, which allows the algorithm to escape from local minima. It uses simulation to estimate objective function values when they are needed during the course of optimization, and uses a new rule for accepting a candidate solution that depends on the individual estimated objective function values. The algorithm is shown to converge rapidly on an inventory management problem.

Acknowledgment

The support of Jordan University of Science and Technology and Masdar Institute of Science and Technology (MIST) is gratefully acknowledged.

References

[1] G. Weigert, S. Werner, D. Hampel, H. Heinrich, W. Sauer, Multi-objective decision making – solution for the optimization of manufacturing processes, in: Proceedings of the 10th International Conference on Flexible Automation and Intelligent Manufacturing, 2000, pp. 487–496.
[2] L.H. Lee, E.P. Chew, S.Y. Teng, D. Goldsman, Optimal computing budget allocation for multiobjective simulation models, in: R.G. Ingalls, M.D. Rossetti, J.S. Smith, B.A. Peters (Eds.), Proceedings of the 2004 Winter Simulation Conference, 2004, pp. 586–594.
[3] L.H. Lee, S.Y. Teng, E.P. Chew, K.W. Lye, P. Lendermann, Application of multi-objective simulation-optimization techniques to inventory management problems, in: M.E. Kuhl, N.M. Steiger, F.B. Armstrong, J.A. Joines (Eds.), Proceedings of the 2005 Winter Simulation Conference, 2005, pp. 1684–1691.
[4] S. Ólafsson, Iterative ranking-and-selection for large-scale optimization, in: Proceedings of the 1999 Winter Simulation Conference, 1999, pp. 479–485.
[5] S. Kirkpatrick, C.D. Gelatt, M.P. Vecchi, Optimization by simulated annealing, Science 220 (1983) 671–680.
[6] E. Aarts, J. Korst, Simulated Annealing and Boltzmann Machines, John Wiley and Sons, New York, 1990.
[7] H.J. Kushner, Asymptotic global behavior for stochastic approximation and diffusions with slowly decreasing noise effects: global minimization via Monte Carlo, SIAM Journal on Applied Mathematics 47 (1) (1987) 169–185.
[8] A.A. Bulgak, J. Sanders, Integrating a modified simulated annealing algorithm with the simulation of a manufacturing system to optimize buffer sizes in automatic assembly systems, in: Proceedings of the 1988 Winter Simulation Conference, 1988, pp. 684–690.
[9] J. Haddock, J. Mittenthal, Simulation optimization using simulated annealing, Computers and Industrial Engineering 22 (1992) 387–395.
[10] S.B. Gelfand, S.K. Mitter, Simulated annealing with noisy or imprecise energy measurements, Journal of Optimization Theory and Applications 62 (1989) 49–62.
[11] W.J. Gutjahr, G.Ch. Pflug, Simulated annealing for noisy cost functions, Journal of Global Optimization 8 (1996) 1–13.
[12] M. Alrefaei, S. Andradóttir, A simulated annealing algorithm with constant temperature for discrete stochastic optimization, Management Science 45 (1999) 748–764.
[13] M. Alrefaei, Stochastic optimization using the standard clock simulation, International Journal of Applied Mathematics 8 (2002) 317–333.
[14] M. Alrefaei, A simulated annealing applied for optimizing a voice multihop radio network, Applied Mathematics and Computation 167 (2005) 496–507.
[15] B.L. Fox, G.W. Heine, Probabilistic search with overrides, Annals of Applied Probability 5 (1995) 1087–1094.
[16] T.M. Alkhamis, M.A. Ahmed, Simulation-based optimization using simulated annealing with confidence interval, in: R.G. Ingalls et al. (Eds.), Proceedings of the 2004 Winter Simulation Conference, 2004, pp. 514–519.
[17] T.M. Alkhamis, M.A. Ahmed, V.K. Tuan, Simulated annealing for discrete optimization with estimation, European Journal of Operational Research 116 (3) (1999) 530–544.
[18] S. Andradóttir, A.A. Prudius, Balanced explorative and exploitative search with estimation for simulation optimization, INFORMS Journal on Computing 21 (2009) 193–208.
[19] M.A. Ahmed, T.M. Alkhamis, Simulation-based optimization using simulated annealing with ranking and selection, Computers and Operations Research 29 (4) (2002) 387–402.
[20] R. Al-Aomar, Designing machine operating strategy with simulated annealing and Monte Carlo simulation, Journal of the Franklin Institute 343 (2006) 372–388.
[21] S. Andradóttir, A method for discrete stochastic optimization, Management Science 41 (1995) 1946–1961.
[22] S. Axsäter, Inventory Control (International Series in Operations Research and Management Science), Springer, New York, 2006.
[23] P. Zipkin, Foundations of Inventory Management, McGraw-Hill/Irwin, New York, 2000.
[24] A. Baganha, D. Pyke, G. Ferrer, The undershoot of the reorder point: tests of an approximation, International Journal of Production Economics 45 (1996) 311–320.
[25] R. Hill, Stock control and the undershoot of the re-order level, Journal of the Operational Research Society 39 (1988) 173–181.
[26] E. Porteus, Numerical comparisons of inventory policies for periodic review systems, Operations Research 33 (1985) 134–152.
[27] I. Santé-Riveira, M. Boullón-Magán, R. Crecente-Maseda, D. Miranda-Barrós, Algorithm based on simulated annealing for land-use allocation, Computers and Geosciences 34 (2008) 259–268.
