Expert Systems With Applications, Vol. 4, pp. 323-328, 1992. Printed in the USA.
0957-4174/92 $5.00 + .00 © 1992 Pergamon Press Ltd.
Optimization of Job Scheduling on Parallel Machines by Simulated Annealing Algorithms*

ZHEN-PING LO AND B. BAVARIAN
University of California, Irvine, Irvine, CA 92717, USA
Abstract--In this paper, we consider the problem of scheduling a set of simultaneously available jobs on several parallel machines. Specifically, we formulate the minimization of the time to finish all the jobs assigned to all machines under job deadline constraints for the n-job, m-machine problem. The simulated annealing and fast-simulated annealing algorithms are reviewed and adapted to the scheduling problem. Large numbers of simulations are carried out that provide an empirical basis for comparing the application of classical simulated annealing and fast-simulated annealing algorithms to the scheduling problem.
1. INTRODUCTION

THE MACHINE-SCHEDULING problem has been studied extensively over the past several decades (Baker, 1974; Panwalkar, Dudek & Smith, 1973; Rogers, 1987). In this paper, we consider a job-scheduling problem on parallel machines. The parallel-machine problem has been investigated by several researchers from different points of view. Cheng (1989) has investigated job scheduling on parallel machines using a heuristic for common due-date assignment. So (1990), Wittrock (1986), and Tang and Wittrock (1985) have investigated parallel-machine scheduling with set-ups (i.e., considering the case of machines which switch from processing some jobs to others requiring set-up times). These papers assume all the machines are identical and have a fixed capacity, which limits their application in real situations. In this paper, we consider the problem of scheduling a set of simultaneously available jobs to be processed on m parallel servers. These servers may not be identical. The scheduling problem is to find the sequence and timing of each job on each machine so that some given performance criterion is optimized (e.g., minimizing the time required to complete all of the jobs). The problem captures the fundamental computational complexity of the central problem of sequencing jobs on machines, divorced of many side issues. This problem has been shown to be NP-complete (i.e., the amount of computation time required to solve such problems increases combinatorially with the size of the problem). The problem may be considered a special case of a deterministic job-shop problem in which each job needs only one operation. For problems involving a small number of jobs, optimal sequences may be determined by three approaches: branch-and-bound search, heuristic procedures, and integer linear programming. However, the enormous computation time and computer-memory requirements associated with these methods limit their application to large problems. Artificial neural networks have also been investigated for solving the scheduling problem (Lo & Bavarian, 1990, 1991). The key approach is to formulate the optimization problem with many first-order differential or difference equations and solve them in a parallel distributed fashion. Each differential equation represents the characteristics of a neuron or processing element. The final equilibrium state of the network contains near-optimal schedules for each machine. The equations are obtained by differentiating an energy function which incorporates the optimization criteria. In this paper, we provide an alternative approach to the problem of scheduling n jobs on m machines, using the simulated annealing method to minimize the time to finish all the jobs assigned to all machines (or the completion time of all jobs) under the deadline constraints. The remainder of the paper is organized as follows. In section 2, two versions of the simulated annealing algorithm are reviewed. The formulation of the problem and the simulation algorithms are presented in section 3. Section 4 contains the concluding remarks and our current and future work on the problem.

* This work is supported in part by a grant from Hughes Aircraft Company, a subsidiary of GM Hughes Electronics, and UC MICRO. Requests for reprints should be sent to Behnam Bavarian, Department of Electrical and Computer Engineering, University of California, Irvine, Irvine, CA 92717.
2. SIMULATED ANNEALING

Simulated annealing is a stochastic strategy for selecting the minimal configuration of the states of a system. It
was first proposed by Metropolis, Rosenbluth, Rosenbluth, Teller, and Teller (1953). Kirkpatrick, Gelatt, and Vecchi (1983) first successfully applied the approach to optimization problems. The goal is to minimize a desired objective or energy function. Starting from an initial solution, the algorithm unconditionally accepts any solution which results in a smaller energy value than the last solution. A new solution with a larger energy value is accepted or rejected based on the value of a probability function. The ability to occasionally accept degenerate solutions is what separates simulated annealing from a gradient-descent method. In gradient descent, once a local minimum of the objective function is reached, there can be no further improvement. In simulated annealing, occasional "hill climbing" is done over local minima. Simulated annealing has been proven to converge asymptotically to the global minimum (Lundy & Mees, 1986) as the number of iterations goes to infinity. However, it may also find near-optimal solutions in relatively few iterations. Recently, Szu (1986) proposed a fast-simulated annealing (FSA) algorithm, which uses a D-dimensional Cauchy probability distribution for generating the network states. This enables occasional long jumps during local sampling of the state space, which results in faster convergence than classical simulated annealing (CSA). Thus, it is possible to find a near-optimum solution in relatively few iterations with an appropriate simulated annealing method. Simulated annealing involves a control parameter that is referred to as temperature. The temperature decreases during each iteration, which in turn affects the acceptance of a new state. As the temperature decreases, it becomes less likely that a degenerate state is accepted. At low temperature, the algorithm approaches a gradient-descent method. In general, the algorithm may be summarized in the following steps:
Step 1: Choose an initial state randomly and assign an initial temperature.
Step 2: Generate a new state over the sample size at temperature T.
Step 3: Calculate the new state energy.
Step 4: Compare the energy of the new state and the old state. If the new state energy is less than the previous one, then accept the new state; otherwise accept the new state only with a certain probability.
Step 5: Decrease the temperature.
Step 6: If the energy is less than a certain value, then stop; otherwise go to step 2.

The initial state of the system may be generated randomly or selected using any heuristic method. A proper choice of initial temperature is a value sufficiently high to accept the first few selected states. There is a lot of flexibility in choosing a generation scheme. The acceptance probability decreases as the temperature decreases. For example, one of the commonly used acceptance probability functions is:

P = e^(-ΔE/T),  ΔE = E(V_new) - E(V_old)

where T is the temperature and E is the energy function of the system calculated at states V_new and V_old. For larger ΔE (i.e., when the new state is extremely undesirable) the probability of acceptance diminishes, and when ΔE is non-positive, the new state is always accepted. As T decreases, the probability of acceptance decreases. Another acceptance-probability function, first proposed by Szu (1986), is the Cauchy distribution; the one-dimensional Cauchy form is:

P = T / (T^2 + ΔE^2)

There are many different functional forms for decreasing the temperature reported in the literature. In all cases, the initial temperature T0 must be high enough to assure the acceptance of the first few states. Three possible cooling schedules are:

T = T0 / (1 + log(t))

T = c^t T0 + T∞

T = T0 / (1 + t) + T∞

where t is the number of iterations so far, T∞ is the lowest temperature value, and c is a constant between 0 and 1. The first cooling schedule is too slow to be practical. The second was suggested by Kirkpatrick et al. (1983). The third, suggested by Szu (1986), is known as the fast-simulated annealing cooling schedule. The second and third schedules will be used in the simulations in this paper.
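As a concrete illustration, the two acceptance rules and the three cooling schedules above can be sketched in Python (a minimal sketch; the function names are ours, and the paper's own implementation was in C):

```python
import math
import random

def accept(delta_e, T, fast=False):
    """Accept or reject a candidate state given the energy change delta_e.

    CSA uses the exponential rule P = exp(-delta_e / T); FSA uses the
    one-dimensional Cauchy form P = T / (T^2 + delta_e^2).
    """
    if delta_e <= 0:
        return True  # improving moves are always accepted
    p = T / (T * T + delta_e * delta_e) if fast else math.exp(-delta_e / T)
    return random.random() < p

def temperature(t, T0, T_inf, c=0.9, schedule="csa"):
    """Temperature at iteration t (t >= 1) for the three schedules above."""
    if schedule == "log":                 # too slow to be practical
        return T0 / (1 + math.log(t))
    if schedule == "csa":                 # Kirkpatrick et al. (1983)
        return (c ** t) * T0 + T_inf
    return T0 / (1 + t) + T_inf           # FSA schedule (Szu, 1986)
```

At high T, accept() approves almost any move; as T falls toward T_inf the procedure degenerates to greedy descent, matching the gradient-descent limit described above.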
3. THE PROBLEM REPRESENTATION AND SIMULATIONS

The problem is to schedule n jobs on m machines so that all jobs are completed within their deadlines and in minimum time. The machines' processing rates, the job lengths, and the deadlines may all differ. Each job has
only one operation; that is, a job only needs to be processed by one machine. This is different from job-shop problems, for which each job has m operations (i.e., each job is processed on m machines). The problem is formulated by defining a set of machines with processing rates M_i, i = 1, 2, ..., m, a set of jobs with processing lengths L_k, k = 1, 2, ..., n, and a set of deadlines d_k, k = 1, 2, ..., n, associated with each job. The schedule is represented by a two-dimensional array N_ij which contains the job numbers. Index i is the machine number and j is a sequence slot on machine i. For example, for N_24 = 10, the 10th job will be the 4th job executed by machine 2. Notice that j is not a time parameter, but dictates the queue position of the jobs that machine i will execute. Unlike using j as a time parameter (Lo & Bavarian, 1990), here jobs start only after the preceding job in the sequence ends. The array N_ij represents the state of the scheduling system. Figure 1 shows an example of the job parameters, machine parameters, and the initial schedule state for a 10-job, 3-machine problem. The quantities shown in Figure 1 and used in this paper represent generic units of time, processing rates, and requirements of the jobs and machines. For example, the machine rate can be bytes/sec of digital-information processing; the job length is then given in bytes to be processed and the deadline is in seconds. At time 1 the first machine will start executing job 5 which, upon completion, is followed by job 6, and so on. Also at time 1, the second machine will start job 7 and the third machine will start job 10. The solution state has to be evaluated by a cost function. Here, we want to minimize the time to finish all the jobs assigned to all machines under the deadline constraint. An appropriate cost function must increase rapidly as a job passes its deadline and decrease incrementally as a job finishes earlier. We define the cost as
E = Σ_k E_k

E_k = a (f_k - d_k)   if f_k > d_k
E_k = f_k - d_k       if f_k ≤ d_k

where a is a constant, f_k is the finishing time of job k, and d_k is the deadline of job k. The f_k is calculated by considering τ_ki, which is the time it takes job k to be executed on machine i:

τ_ki = L_k / M_i

where L_k and M_i are defined above. Simply stated, for every time unit a job finishes after its deadline, the system cost function increases by the factor a, and for every time unit a job ends before its deadline, the cost decreases by one unit. This ensures that the deadlines are accommodated before any minimum-time considerations.

Machine information:
  Machine processing rate:  2  3  3
Job information:
  Job number:    1  2  3  4  5  6  7  8  9  10
  Job deadline:  2  3  3  4  5  2  10  8  6  9
  Job length:    4  3  3  4  2  2  10  8  6  9
Example of a random initial schedule state:
  Machine 1:  5  6  2  8  4
  Machine 2:  7  3  9
  Machine 3:  10  1
FIGURE 1. An example of the problem formulation for 10 jobs and 3 machines.

Based on the general simulated annealing algorithm discussed earlier, we devise the classical simulated annealing (CSA) and fast-simulated annealing (FSA) algorithms for solving the job-scheduling problem. In the following algorithm, the classical simulated annealing and fast-simulated annealing are presented simultaneously, in order to demonstrate their similarities and differences.

3.1. Job Scheduling With CSA and FSA Algorithms
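Before stepping through the algorithm, the cost function above can be sketched in Python (a minimal sketch with our names; the schedule is given as one job queue per machine, following the Figure 1 conventions, with τ_ki = L_k / M_i):

```python
def schedule_cost(schedule, rates, lengths, deadlines, a=10.0):
    """Energy E = sum_k E_k for a schedule given as one job queue per machine.

    E_k = a * (f_k - d_k) when job k misses its deadline (f_k > d_k),
    E_k = f_k - d_k otherwise, so each time unit of earliness lowers the cost.
    Jobs are numbered from 1 as in the paper; tau_ki = L_k / M_i.
    """
    E = 0.0
    for i, queue in enumerate(schedule):
        t = 0.0
        for k in queue:                     # jobs run back to back on machine i
            t += lengths[k - 1] / rates[i]  # finishing time f_k of job k
            late = t - deadlines[k - 1]
            E += a * late if late > 0 else late
    return E
```

For the initial state of Figure 1, `schedule_cost([[5, 6, 2, 8, 4], [7, 3, 9], [10, 1]], [2, 3, 3], [4, 3, 3, 4, 2, 2, 10, 8, 6, 9], [2, 3, 3, 4, 5, 2, 10, 8, 6, 9])` evaluates the random starting schedule.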
Step 1: Initialization. Assign I_max, T0, and T∞, where I_max is the maximum number of iterations the algorithm runs before exiting. T0 is the initial temperature; the higher the initial temperature, the more global the search for the best state. T∞ is the lowest temperature value; at low temperatures, the algorithm approaches a gradient-descent method, so the search becomes local.
Step 2: Choose a random initial state.
Step 3: Loop on I_max: for t = 1 to I_max, where t is the iteration index, do:
Step 4: If T > T∞ then T = c^t T0 + T∞ (for fast-simulated annealing, T = T0/(1 + t) + T∞), where T is the current temperature.
Step 5: Generate a new state, V_new, for the system. A new state is generated by choosing randomly: (1) a job to move, (2) a machine on which to place this job, and (3) a slot at which to insert this job into the sequence of that machine. For example, if the three random numbers are 8, 2, and 5, job 8 will be moved from wherever it is to machine 2 and will be the 5th job in the queue. Notice that the same three random numbers picked in a different order produce different results. This generation scheme ensures a global search for new states.
Step 6: Calculate the energy of the new state according to the cost function defined above.
Step 7: If E(V_new) < E(V_old) then V_old = V_new and go to step 10; otherwise
Step 8: set r = a random number between 0 and 1;
Step 9: If

r > e^(-(E(V_new) - E(V_old))/T)

or, for fast-simulated annealing, if

r > T / (T^2 + ΔE^2)

then restore the old state of the system (V_old), else V_old = V_new;
Step 10: Next t.
Step 11: Report the minimum state and stop.

The algorithm is implemented as a C program on a Sun workstation. The sizes of the runs we have simulated range from 4 jobs and 2 machines to 50 jobs and 10 machines. The processing times for jobs on each machine are pre-generated by a program which yields a valid solution (i.e., the machine rates and job lengths are selected in such a way that solutions meeting the deadlines exist). The initial schedule is randomly assigned. The performance of the simulated and fast-simulated annealing algorithms is tested by comparing the resulting schedules with an optimum solution found by exhaustive search. Since it is computationally demanding to find the optimal solution to a given problem, we only conducted simulations for small-scale problems (small numbers of jobs and machines). Six different cases (different numbers of machines or jobs, with different machine processing rates, job lengths, and deadlines) are studied here. Every case is simulated 100 times with different random initial schedule states with both the simulated annealing and fast-simulated annealing algorithms. Table 1 shows the summary of the simulation results of the 1200 runs.

TABLE 1
Simulation Results for Evaluation of the Algorithms for Six Different Job-Scheduling Cases; Each Case Is Simulated 100 Times

Case | Number of Machines | Number of Jobs | FSA Error (α_f) | CSA Error (α_c) | Optimal Solution of Total Finishing Time
  1  |    3  |  10 |  3.05% |  3.80% | 20
  2  |    —  |  10 | 10.04% |  7.83% | 23
  3  |    —  |   8 |  5.15% |  3.00% | 13
  4  |    —  |   — |  6.21% |  3.21% | 14
  5  |    —  |   — |  2.69% |  0.00% |  —
  6  |    —  |   — |  4.36% |  0.32% |  —

Columns 4 and 5 in Table 1 show the error performance of the fast-simulated annealing (FSA), α_f, and classical simulated annealing (CSA), α_c. These percentage errors are calculated statistically for the 100 runs by:

α_f = (1/100) Σ_{i=1}^{100} [(FSA solution length of the ith run - optimal solution length) / optimal solution length] × 100%

α_c = (1/100) Σ_{i=1}^{100} [(CSA solution length of the ith run - optimal solution length) / optimal solution length] × 100%
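The percentage-error statistics above reduce to a short routine (a sketch with our names):

```python
def percentage_error(run_lengths, optimal_length):
    """Mean percentage deviation of the run results from the optimum,
    matching the alpha_f / alpha_c formulas in the text."""
    deviations = [(L - optimal_length) / optimal_length for L in run_lengths]
    return sum(deviations) / len(run_lengths) * 100.0
```

For example, two runs with total finishing times 20 and 22 against an optimum of 20 give a 5% mean error.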
The optimal solution of the total finishing time is found by exhaustive search for the given case, and is shown in column 6 of Table 1. The performance indices α_f and α_c show the convergence of the FSA and CSA algorithms to optimal and near-optimal solutions, ranging from 0% in the best case up to about 11% in the worst case. Furthermore, these indices show that the classical simulated annealing algorithm yields slightly better results than the fast-simulated annealing algorithm. This observation is important in the context of the popularity of the fast-simulated annealing algorithm. One interpretation may be that the fast-simulated annealing algorithm has a greater tendency to be trapped in local minima than the classical simulated annealing algorithm, due to its fast cooling temperature. On the other hand, the convergence of the fast-simulated annealing is significantly faster than that of the classical simulated annealing. Here, we discuss the simulations of the case of 3 machines and 10 jobs, shown in Figure 1. Three examples of the final schedules are shown in Figure 2. The completion times of these examples are 21, 20, and 20 time units. In the 100 simulations, most converge to a completion time of 21 units, which is near the 20-time-unit optimal solution. The details of the 100 simulations for both algorithms for this case are tabulated in Tables 2 and 3. These tables show the number of simulations that converged to an optimal or near-optimal solution within different ranges of iterations. For example, in Table 2, which is for the fast-simulated annealing, the first row shows that 28
simulations converge within the first 100 iterations, and the second row shows that 38 simulations converge within 100 to 200 iterations, and so on. Table 3 shows the same for the classical simulated annealing. In Tables 2 and 3, the first column (number of iterations) shows that convergence is much faster using the fast-simulated annealing. The mean number of iterations for convergence is 833.9 for the fast-simulated annealing algorithm and 5617.12 for the classical simulated annealing, so the former is almost 7 times faster. Also, in general, about 60% of the simulations converge within the first 200 iterations for the fast-simulated annealing, while for the classical simulated annealing the convergence is spread out over 5000 iterations. Another observation is that, for both fast-simulated and classical simulated annealing, a few of the simulations may take longer than exhaustive search to converge to an optimal or near-optimal solution, as also pointed out by Lundy and Mees (1986). This number is very small: from the simulations we carried out, only 3% of the simulations may need a longer time to converge to an optimal or near-optimal solution. Since this is only a very small number, we can add a step to the algorithm to check for this risk: if the number of iterations exceeds a certain bound, we restart the algorithm.

Example 1:  Machine 1: 5 4 7    Machine 2: 3 6 2 8    Machine 3: 1 9 10
Example 2:  Machine 1: 6 5 4    Machine 2: 3 2 10 8   Machine 3: 1 9 7
Example 3:  Machine 1: 1 4      Machine 2: 5 6 9 7    Machine 3: 3 2 10 8
FIGURE 2. Examples of final schedules of 10 jobs and 3 machines with different initial states.

TABLE 2
The Number of Iterations for Convergence of the 100 Simulations for the Fast-Simulated Annealing Algorithm Applied to the 3-Machine, 10-Job Problem

Number of Iterations | Number of Simulations Converging Out of 100
0-100       | 28
100-200     | 38
200-300     |  7
300-400     | 10
400-500     |  —
600-700     |  —
700-1200    |  —
20000-35500 |  —

TABLE 3
The Number of Iterations for Convergence of the 100 Simulations for the Classical Simulated Annealing Algorithm Applied to the 3-Machine, 10-Job Problem

Number of Iterations | Number of Simulations Converging Out of 100
0-1000          | 12
1000-2000       | 14
2000-3000       | 11
3000-4000       | 10
4000-5000       |  8
5000-6000       |  7
6000-7000       |  3
7000-8000       |  4
8000-9000       |  —
10000-11000     |  —
12000-13000     |  —
23000 and above |  —
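Putting the pieces together, the CSA/FSA loop of section 3.1 can be sketched as follows (a sketch under our assumptions, not the authors' C implementation; the parameter defaults are illustrative only):

```python
import copy
import math
import random

def anneal(rates, lengths, deadlines, T0=100.0, T_inf=0.01,
           c=0.95, i_max=2000, fast=False, a=10.0):
    """Schedule n jobs on m machines by simulated annealing.

    State: one job queue per machine. A move picks a random job, a random
    machine, and a random slot, as in step 5; cooling and acceptance follow
    steps 4 and 7-9 (CSA: exponential rule; FSA: Cauchy rule).
    """
    m, n = len(rates), len(lengths)

    def cost(sched):
        E = 0.0
        for i, q in enumerate(sched):
            t = 0.0
            for k in q:
                t += lengths[k - 1] / rates[i]
                late = t - deadlines[k - 1]
                E += a * late if late > 0 else late
        return E

    state = [[] for _ in range(m)]
    for job in range(1, n + 1):               # random initial schedule
        state[random.randrange(m)].append(job)
    e_old = cost(state)
    best, best_e = copy.deepcopy(state), e_old

    T = T0
    for t in range(1, i_max + 1):
        if T > T_inf:                         # step 4: cooling
            T = T0 / (1 + t) + T_inf if fast else (c ** t) * T0 + T_inf
        new = copy.deepcopy(state)            # step 5: random move
        job = random.randrange(1, n + 1)
        for q in new:
            if job in q:
                q.remove(job)
        q = new[random.randrange(m)]
        q.insert(random.randrange(len(q) + 1), job)
        d = cost(new) - e_old                 # steps 6-9: accept or restore
        p = T / (T * T + d * d) if fast else math.exp(min(0.0, -d) / T)
        if d < 0 or random.random() < p:
            state, e_old = new, e_old + d
            if e_old < best_e:
                best, best_e = copy.deepcopy(state), e_old
    return best, best_e
```

For instance, `anneal([2, 3, 3], [4, 3, 3, 4, 2, 2, 10, 8, 6, 9], [2, 3, 3, 4, 5, 2, 10, 8, 6, 9], fast=True)` searches the Figure 1 problem and returns the best schedule found together with its energy.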
4. CONCLUSION

In this paper, we have reviewed the simulated annealing approach for optimization problems. We have developed an algorithm for the deterministic n-job, m-machine scheduling problem and compared fast-simulated annealing with classical simulated annealing. We have carried out a large number of simulations and computed statistical performance measures to compare the results. In general, the fast-simulated annealing provides the best performance in producing optimal or near-optimal solutions within a few hundred iterations. However, the convergence to the optimal solution for the classical simulated annealing algorithm was slightly better than for the fast-simulated annealing. In both cases, there is a risk that convergence to an optimal or near-optimal solution may take an extensive number of iterations. For example, for small-scale cases (small numbers of jobs and machines) the convergence may require more computation time than exhaustive search. However, the risk probability is small, and a simple upper-bound test and restart will solve the problem. The fast-simulated annealing runs in the equivalent of a few seconds on the computer, which makes the scheduling algorithm a very good candidate for a general expert-system package used as a decision-support tool in a manufacturing environment. The effect of the initial state and the perturbation scheme on the final state is currently under investigation. The extension of this approach to scheduling in distributed computer systems and communication-network routing schedules, in combination with artificial neural networks, is also under further study.
REFERENCES

Baker, K. (1974). Introduction to sequencing and scheduling. New York: Wiley.
Cheng, T.C.E. (1989). A heuristic for common due-date assignment and job scheduling on parallel machines. Journal of the Operational Research Society, 40(12), 1129-1135.
Kirkpatrick, S., Gelatt, C.D. Jr., & Vecchi, M.P. (1983). Optimization by simulated annealing. Science, 220, 671-680.
Lo, Z.P., & Bavarian, B. (1990). A modified Hopfield neural network for optimization of manufacturing task scheduling. In M. Jamshidi & M. Saif (Eds.), ASME Press Series: Robotics and Manufacturing, Vol. 3 (pp. 1001-1006).
Lo, Z.P., & Bavarian, B. (1991). Scheduling with neural networks for flexible manufacturing systems. Proceedings of the 1991 IEEE International Conference on Robotics and Automation, 1, 818-823.
Lundy, M., & Mees, A. (1986). Convergence of an annealing algorithm. Mathematical Programming, 34, 111-124.
Metropolis, N., Rosenbluth, A., Rosenbluth, M., Teller, A., & Teller, E. (1953). Equation of state calculations by fast computing machines. Journal of Chemical Physics, 21, 1087-1092.
Panwalkar, S.S., Dudek, R.A., & Smith, M.L. (1973). Sequencing research and the industrial scheduling problem. In Lecture Notes in Economics and Mathematical Systems: Symposium on the Theory of Scheduling and Its Applications. New York: Springer-Verlag.
Rogers, R.V. (1987). Generalization of the machine scheduling problem. Unpublished doctoral dissertation, University of Virginia, Department of Systems Engineering, Charlottesville, VA.
So, K.C. (1990, April). Some heuristics for scheduling jobs on parallel machines with setups. Management Science, 36(4), 467-475.
Szu, H. (1986). Fast simulated annealing. In J. Denker (Ed.), Neural Networks for Computing (pp. 420-425). New York: American Institute of Physics.
Tang, C.S., & Wittrock, R.J. (1985). Parallel machines scheduling with major and minor setups (RC-11412). Yorktown Heights, NY: IBM T.J. Watson Research Center.
Wittrock, R.J. (1986). Scheduling parallel machines with setups (RC-11740). Yorktown Heights, NY: IBM T.J. Watson Research Center.