Scheduling of serial multiproduct batch processes via simulated annealing


Computers chem. Engng, Vol. 14, No. 12, pp. 1351-1362, 1990. Printed in Great Britain. All rights reserved.

0098-1354/90 $3.00 + 0.00. Copyright © 1990 Pergamon Press plc

SCHEDULING OF SERIAL MULTIPRODUCT BATCH PROCESSES VIA SIMULATED ANNEALING

H. DAS, P. T. CUMMINGS and M. D. LE VAN

Department of Chemical Engineering, University of Virginia, Charlottesville, VA 22903-2442, U.S.A.

(Received 4 December 1989; final revision received 7 June 1990; received for publication 8 August 1990)

Abstract-We present an in-depth study of the simulated annealing approach to the scheduling of serial multiproduct batch processes under the assumption of a permutation schedule. Four versions of the simulated annealing algorithm are studied based on two move acceptance criteria, the Metropolis algorithm and the Glauber algorithm, and two annealing schedules, the exponential schedule and the Aarts and van Laarhoven schedule. Makespan minimization is performed for unlimited intermediate storage (UIS), no intermediate storage (NIS), zero wait (ZW), finite intermediate storage (FIS) and mixed intermediate storage (MIS) flowshop problems using simulated annealing and also the idle matrix search (IMS) heuristic. Of the four versions of the simulated annealing algorithm studied, the Metropolis algorithm with the Aarts and van Laarhoven annealing schedule is found to give the best results, with all four versions giving significantly better results than the IMS heuristic. The Metropolis algorithm with the Aarts and van Laarhoven annealing schedule is studied in more detail for further comparison with the IMS heuristic in terms of the computational effort expended by the simulated annealing algorithm and the solution quality obtained.

1. INTRODUCTION

Even though continuous operation is prevalent and desirable in many chemical processes, batch processing is still very important and has a permanent place in the chemical process industry. A recent survey on the use of batch processes indicates that only 6% of the batch processes identified are likely to be replaced by continuous processes (Parakrama, 1985). An extremely appealing feature of batch processes is their flexibility in producing multiple products in a single plant through the sharing of process equipment. Batch processes are economically desirable when small amounts of high value added products are produced or when a large number of products are made using similar production paths. In general, in terms of production rates, batch processes are usually desirable for plants having a capacity of less than 10^6 lb yr^-1 whereas continuous processes are usually desirable for plants having a capacity of greater than 10^7 lb yr^-1 (Douglas, 1988). In addition, the appropriateness of batch processes is also determined by market forces such as seasonal demand for products and by operational problems such as frequent cleaning requirements for equipment. Batch scheduling is the methodology that determines the order in which products are to be processed in each of the units of a plant so as to optimize some suitable economic or systems performance criterion (Reklaitis, 1982). This is required when a production system is used to produce multiple products by sharing the available production time between products. In the case of batch chemical plants, batch

scheduling involves scheduling the order in which a variety of products are made subject to equipment and other constraints. A number of reactors, separators, storage tanks, etc. may exist which need to be utilized to optimally process different products in a required sequence. Mathematically, the batch scheduling problem reduces to specifying the following elements: (i) a set of N products which must be processed; (ii) a set of M processing units which are available for this purpose; (iii) a matrix of fixed processing times t_ij associated with each product i and unit j; (iv) a matrix of fixed transfer times a_ij associated with the transfer of each product i out of each unit j; (v) a matrix of fixed sequence-dependent set up times s_ijk associated with the set up time required for processing product j after product i in unit k; (vi) a performance criterion with respect to which the schedule is optimized; and (vii) a set of rules which govern the production process including the order in which the operations must be performed for each product and the manner in which intermediate storage between processing units is regulated. The problem is virtually always assumed to be deterministic (that is, all of the problem parameters are known in advance) and static (that is, the production requirements remain unchanged for the duration of the schedule). It is evident that a large number of combinations of these variables is possible, making the problem a non-trivial one. Some of the key factors involved in the optimization problem are the performance criteria and the concept of intermediate storage. The common performance criteria are: (i) minimize makespan, the total time required to produce all


products; (ii) minimize flowtime, the average residence time of all products in the system; and (iii) minimize the maximum or average tardiness, the difference between the completion times of products and their due dates. Serial unit processing systems are the most prevalent in chemical process industries. Serial processes involving batch units are very common configurations in multiproduct plants used in the food, pharmaceutical, polymer and specialty chemical industries (Loonkar and Robinson, 1970; Solem, 1974). In serial unit networks only one of each type of unit exists so parallel processing of different products on similar units is not possible; that is, serial processing of the products through the different units is required. In general, a non-preemptive operation is followed where an ongoing processing stage of a product cannot be interrupted temporarily and resumed later on. Often, in a batch chemical plant, all of the products require the same processing sequence through the same units with only the individual processing times being different for each product. This is true when the products belong to a common family. As an example, all products would first require processing in a particular reactor followed by separation in a particular distillation column, then processing in another reactor followed by separation in another separation unit and so on. All products would, therefore, encounter the different units in the same sequence. A serial process with this characteristic is known as a multiproduct process or a flowshop (Baker, 1974). In view of the staged nature of flowshop problems, the status of intermediate products between successive units needs to be considered. This gives rise to the concept of intermediate storage. The different intermediate storage criteria between units are unlimited intermediate storage (UIS), no intermediate storage (NIS), zero wait processing (ZW), finite intermediate storage (FIS) and mixed intermediate storage (MIS).
The MIS criterion is a combination of the other four intermediate storage criteria, such that between any two given units in the batch production facility either one of UIS, NIS, ZW or FIS may be in effect. Different intermediate storage criteria could, therefore, be in effect between different pairs of units, thereby making MIS the most general and all encompassing intermediate storage criterion. It is common to consider only solutions where each unit encounters the different products in the same sequence. This is a simplifying approximation and is known to be optimal only for arbitrary problems with three or fewer units (Conway et al., 1967). Schedules obtained with this approximation are called permutation schedules. The advantage of considering only permutation schedules is that it simplifies and reduces the solution search space for flowshop problems to the determination of just a single-product sequence.
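Under the permutation-schedule assumption the search space is just the set of N! product orderings, so an exact benchmark reduces to enumerating permutations. A minimal sketch of such an enumerative search (illustrative code, not the authors' implementation; `makespan` stands in for whichever completion-time algorithm applies):

```python
from itertools import permutations

def best_permutation_schedule(products, makespan):
    """Exact enumerative search over all N! permutation schedules.
    `makespan` is any function mapping a product sequence to its
    makespan.  Feasible only for small N, since the cost grows as N!."""
    best_seq, best_val = None, float("inf")
    for seq in permutations(products):
        val = makespan(seq)
        if val < best_val:
            best_seq, best_val = list(seq), val
    return best_seq, best_val
```

This exponential-time search is what bounds the exactly solvable problem sizes (up to eight products) used for comparison later in the paper.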

This single-product sequence is then the sequence in which each of the different units encounters all of the products. The particular batch scheduling problem that has received a great amount of interest in chemical process applications is makespan minimization of the serial multiproduct or flowshop problem under the assumption of a permutation schedule. To solve this problem two interlinked subproblems need to be considered--determination of completion times and sequencing. The determination of completion times subproblem deals with calculating the detailed operation schedule for a given product sequence. The sequencing subproblem deals with determining the optimal product sequence that gives the overall minimum makespan. The solution of the sequencing subproblem is specified by providing a sequence of the products to be processed which, for the permutation schedule, is the same for all of the units. The known exact methods for solving the sequencing subproblem are exponential in time, meaning that the computer time required to solve the problem increases exponentially with the problem size. A notable exception is Johnson's algorithm (Johnson, 1954) which is an exact polynomial time algorithm for the two-unit UIS problem. Several suboptimal polynomial time algorithms exist based on heuristics. The idle matrix search (IMS) heuristic (Rajagopalan and Karimi, 1987) is the best known heuristic available at present for solving the sequencing subproblem. Some exploratory work has been done on the application of simulated annealing as a sequencing algorithm. Simulated annealing has been demonstrated to be a possible technique for the makespan minimization of ZW flowshops (Dolan, 1988). The use of simulated annealing for the makespan minimization of UIS flowshops has been investigated (Ku and Karimi, 1988).
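Johnson's algorithm mentioned above can be stated compactly. The following is a standard rendering of the 1954 rule for the two-unit UIS flowshop (the function name is ours, and this is a sketch rather than the paper's code):

```python
def johnson_sequence(times):
    """Johnson's algorithm for the two-unit UIS flowshop (Johnson, 1954).
    times[i] = (a_i, b_i), the processing times of product i on units 1
    and 2.  Products with a_i <= b_i are scheduled first in increasing
    a_i; the rest are scheduled last in decreasing b_i.  The resulting
    permutation minimizes makespan, in O(N log N) time."""
    first = sorted((i for i, (a, b) in enumerate(times) if a <= b),
                   key=lambda i: times[i][0])
    last = sorted((i for i, (a, b) in enumerate(times) if a > b),
                  key=lambda i: times[i][1], reverse=True)
    return first + last
```

For example, products with unit-1/unit-2 times (3, 7), (6, 2) and (5, 5) are sequenced as product 0, then 2, then 1.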
The application of simulated annealing has also been investigated for minimizing an economic objective function with constraints on due dates and inventory (Ku and Karimi, 1989; Malone, 1989). Although exploratory in nature, encouraging results have been obtained in all of these cases. An in-depth study of the simulated annealing approach to the sequencing problem for makespan minimization in serial multiproduct batch plants under the assumption of a permutation schedule is reported here. Following our previous work (Das et al., 1989) we perform a comparative study of different versions of the simulated annealing algorithm for the makespan minimization of a variety of flowshop problems. We consider relative merits of different versions of the simulated annealing algorithm for the makespan minimization of UIS, NIS, ZW, FIS and MIS flowshop problems. A comparison with the performance of the IMS heuristic is also provided.

2. SIMULATED ANNEALING

Simulated annealing is an algorithm for single-objective multivariable optimization based on the Monte-Carlo method for simulation of physical systems (Metropolis et al., 1953). The theory of Markov chains provides a rigorous mathematical basis for simulated annealing (Allen and Tildesley, 1987). Simulated annealing provides an approximate solution to NP-complete problems which is asymptotically exact. In practice, it yields a polynomial time solution (which in general is suboptimal) to an exponential time problem. Results of the polynomial time algorithm can in principle be made to approach the exponential time algorithm asymptotically as the degree of the bounding polynomial is increased. Simulated annealing has been successfully applied to a number of single-objective multivariable optimization problems in operations research, a notable application being the solution of the traveling salesman problem (Kirkpatrick et al., 1983). Among its practical applications, simulated annealing is used in the design of very large-scale integrated circuits (Vecchi and Kirkpatrick, 1983). Recently, simulated annealing has been applied for the first time to design problems in chemical engineering. It has been successfully applied to the design of heat exchanger networks (Dolan et al., 1987, 1989a,b) and pipeline networks (Dolan et al., 1989b). In addition, as mentioned earlier, some work has also been done on the application of simulated annealing to batch scheduling problems (Dolan, 1988; Ku and Karimi, 1988a, 1989; Malone, 1989; Das et al., 1989). The general simulated annealing algorithm is the same for all single-objective multivariable optimization problems.
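The two move acceptance criteria and two annealing schedules discussed in this section combine into the four algorithm versions studied in the paper. They can be sketched together as follows (illustrative Python with names of our choosing, not the authors' implementation; the acceptance probabilities and temperature updates follow equations (1)-(4) of the text):

```python
import math
import random
import statistics

# Move acceptance criteria (equations (1) and (2)).
def metropolis_prob(dc, beta):
    """Accept every downhill move; uphill with probability e^(-dc/beta)."""
    return 1.0 if dc <= 0 else math.exp(-dc / beta)

def glauber_prob(dc, beta):
    """p = e^(-dc/beta) / (1 + e^(-dc/beta)); never exceeds 0.5 uphill."""
    e = math.exp(-dc / beta)
    return e / (1.0 + e)

# Annealing schedules (equations (3) and (4)).
def exponential_update(beta, costs, alpha=0.89):
    """beta' = alpha * beta; `costs` is unused but kept for a common API."""
    return alpha * beta

def aarts_van_laarhoven_update(beta, costs, delta=0.3):
    """beta' = beta * [1 + ln(1 + delta) * beta / (3 * sigma(beta))]^(-1),
    where sigma(beta) is the spread of objective values at this level."""
    sigma = statistics.pstdev(costs)
    if sigma == 0.0:          # no spread: leave the temperature unchanged
        return beta
    return beta / (1.0 + math.log(1.0 + delta) * beta / (3.0 * sigma))

def anneal(x0, cost, move, accept_prob, update, beta0,
           moves_per_level, levels=200, rng=random):
    """Generic annealing loop; pairing an acceptance criterion with a
    schedule yields any of the four versions studied in the paper."""
    x, c = x0, cost(x0)
    best_x, best_c = x, c
    beta = beta0
    for _ in range(levels):
        costs = []
        for _ in range(moves_per_level):
            y = move(x)
            cy = cost(y)
            if rng.random() < accept_prob(cy - c, beta):
                x, c = y, cy
                if c < best_c:
                    best_x, best_c = x, c
            costs.append(c)
        beta = update(beta, costs)
    return best_x, best_c
```

A toy run, minimizing (x - 3)^2 over the integers with Metropolis acceptance and the exponential schedule, converges to x = 3.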
The basic requirements for simulated annealing to be applicable are that a given set of values of the multiple variables defines a unique system configuration and that the objective function can be calculated as a function of the system configuration. The simulated annealing algorithm can be expressed by the following simple sequential steps: (a) make a random move to change the existing overall configuration; (b) either accept or reject the move made in (a) based on certain acceptance criteria; (c) go to (a), periodically reducing the temperature or temperature analog by a small amount. The final configuration and the corresponding value of the objective function are obtained when a specified termination condition is satisfied. In the statistical mechanical study of molecules, a random move is made by picking a molecule at random and moving it in a random direction by a random distance. The change in the objective function (which in this case is the change in energy, ΔE) is calculated after each attempted move. The move is then accepted or rejected based on certain acceptance criteria which determine the type of


Markov chain propagated. The move acceptance criteria are based on a function of e^(-ΔE/kT), where k is the Boltzmann constant and T is the system temperature. It should be noted that the argument of this exponential function is dimensionless. The move acceptance criteria and, therefore, the nature of the Markov chain propagated are dependent not only on ΔE but also strongly on T. The temperature T is gradually reduced during the annealing process and the nature of this reduction is determined by the annealing schedule used. The simulated annealing algorithm can be applied to any general single-objective multivariable optimization problem. Some characteristic objective function C would have to be chosen instead of the energy function E, the goal being to minimize this objective function. Instead of temperature T a temperature analog β has to be chosen which will be reduced gradually during the annealing process. The nature of the random move is problem specific and very strongly influences the efficiency of the algorithm. The move acceptance criteria are a function of e^(-ΔC/β). In order to keep the argument of the exponential function dimensionless the temperature analog β should be chosen such that it has the same dimensions as the objective function C. We have studied two different types of move acceptance criteria--the Metropolis algorithm (Metropolis et al., 1953) and the Glauber algorithm (Glauber, 1963). The Metropolis algorithm for move acceptance accepts all moves with ΔC < 0; that is, all downhill moves in objective function are accepted with a probability of 1. Uphill moves (moves resulting in ΔC > 0) will be accepted with a probability between 1 and 0, with this probability decreasing with a decrease in β. Mathematically, the acceptance probability p of any move can be expressed as:

    p = 1               for ΔC <= 0,
    p = e^(-ΔC/β)       for ΔC > 0.                             (1)

The Glauber algorithm for move acceptance does not necessarily accept all downhill moves in addition to not accepting all uphill moves. The probability of accepting any move can be expressed as:

    p = e^(-ΔC/β) / (1 + e^(-ΔC/β)).                            (2)

This implies that at very high values of β all moves, whether uphill or downhill, will be accepted with a probability of 0.5. As β is reduced, the probability of accepting an uphill move decreases from 0.5 to 0 whereas the probability of accepting a downhill move increases from 0.5 to 1. The Glauber algorithm is designed to make the Monte-Carlo simulation of condensed systems with discrete (rather than continuous) energies mimic the dynamics of the system more closely than the Metropolis algorithm (Glauber, 1963). It is clearly a more conservative move acceptance criterion than the Metropolis algorithm, since not all downhill moves


are automatically accepted. Thus, once in the vicinity of the global minimum, the Glauber algorithm is certain to approach the global minimum less rapidly than the Metropolis algorithm. On the other hand, the Metropolis algorithm may lead to the acceptance of a move which puts the system into a local minimum and which the Glauber algorithm might reject. Thus, it is unclear a priori which of the two algorithms will more efficiently search the available search space. Two basic types of annealing schedules are usually used--the exponential annealing schedule (Kirkpatrick et al., 1983) and the Aarts and van Laarhoven annealing schedule (Aarts and van Laarhoven, 1985). In the exponential annealing schedule, each new temperature is obtained by multiplying the current temperature by a specified constant multiplicative factor α, where 0 < α < 1. This can be expressed as:

    β' = αβ,                                                    (3)

where β' is the new temperature and β is the current temperature. The smaller the value of α, the faster is the annealing. This annealing schedule is, therefore, determined by the choice of α. In the Aarts and van Laarhoven annealing schedule, a non-constant multiplicative factor is used to obtain the new temperature from the current temperature. The value of this multiplicative factor depends on the closeness to equilibrium obtained at the current temperature level. This implies that the closer the system is to equilibrium at the current temperature level, the lower is the value of the multiplicative factor and the faster is the annealing. The basic idea is to anneal at a rate that keeps the system close to equilibrium at each temperature level. This annealing schedule can be mathematically expressed as:

    β' = β [1 + ln(1 + δ) β / (3σ(β))]^(-1),                    (4)

where σ(β) is the standard deviation of the objective function at the current temperature β and δ is a specified value which is a measure of the desired closeness to equilibrium. The smaller the specified value of δ, the greater is the desired closeness to equilibrium and the slower is the annealing. The higher the standard deviation of the objective function, the further the system is away from equilibrium and the slower is the annealing. This annealing schedule is, therefore, determined by the choice of δ. Aarts and van Laarhoven also make suggestions about the choice of the initial temperature, the number of attempted moves at each temperature level and the termination criterion. The initial temperature is chosen to be about 10 times the maximum value of ΔC between any two successive configurations when all moves are accepted. This corresponds to an initial temperature that gives about a 90% acceptance probability for the move giving the maximum value of ΔC for the Metropolis move acceptance criterion and about a 47.5% acceptance probability for the move giving the maximum value of ΔC for the Glauber move acceptance criterion. The objective of selecting the initial value of β in this manner is to set it high enough so that a large number of moves are accepted initially. This helps in randomizing the system and removing solution dependence on the starting system configuration. The number of attempted moves at each temperature level is dependent on the nature of the random move as it is set equal to the number of different configurations the system can reach from any given configuration by making a single random move. The algorithm is made to terminate when the derivative of the smoothed average value of the objective function at a temperature level with respect to the temperature is less than a specified value of a parameter ε, where 0 < ε < 1. This termination condition can be expressed as:

    (d<C>_s/dβ)(β/<C>(β0)) < ε,                                 (5)

where <C>_s is the smoothed average value of the objective function at a particular temperature level based on smoothing over a fixed number of previous temperature levels. The smoothing is done to reduce the effect of fluctuations in the average value of the objective function <C> over successive temperature levels. <C>(β0) is the average value of the objective function at the initial temperature level β0. The term β/<C>(β0) is used to make the termination condition dimensionless so that it is compatible with the dimensionless termination parameter ε.

3. RESULTS AND DISCUSSION

The different types of problems solved here by simulated annealing are the makespan minimization of UIS, NIS, ZW and FIS flowshops with zero transfer times and set up times and MIS flowshops with non-zero transfer times and sequence-dependent set up times. These problems are solved under the assumption of a permutation schedule. Efficient completion time algorithms to calculate the makespan are available for all of these problems (Ku and Karimi, 1988b; Rajagopalan and Karimi, 1989). The MIS flowshops represent much more general and realistic problems than do the UIS, NIS, ZW and FIS flowshops. The same simulated annealing algorithm is applied for the solution of each of these types of problems by using the appropriate completion time algorithms for makespan calculation. The simulated annealing solutions are compared with exact solutions obtained by a complete enumerative search for problem sizes of up to eight products. The exponential time computational effort involved in conducting an enumerative search fixes an upper bound on the problem sizes that can be solved exactly. Sensitivity of the simulated annealing algorithm to different parameters is studied based on these comparisons with exact solutions.
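As one concrete example of the completion-time calculations involved, the makespan of a permutation schedule in the UIS case follows a standard dynamic-programming recurrence. This is the textbook UIS recurrence shown for illustration (the function name is ours; the paper relies on the completion-time algorithms cited above):

```python
def uis_makespan(t, seq):
    """Makespan of a permutation schedule for a UIS flowshop.
    t[i][j] is the processing time of product i on unit j and seq is the
    product sequence.  With unlimited intermediate storage, a product
    starts on unit j as soon as the unit is free and the product has
    left unit j - 1, giving:
        C[i][j] = max(C[i-1][j], C[i][j-1]) + t[seq[i]][j]."""
    n, m = len(seq), len(t[0])
    C = [[0] * m for _ in range(n)]
    for i in range(n):
        for j in range(m):
            ready_unit = C[i - 1][j] if i > 0 else 0   # unit j becomes free
            ready_prod = C[i][j - 1] if j > 0 else 0   # product leaves unit j-1
            C[i][j] = max(ready_unit, ready_prod) + t[seq[i]][j]
    return C[-1][-1]
```

For two products with times [[3, 7], [6, 2]], sequence [0, 1] gives a makespan of 12 while [1, 0] gives 16, so even this tiny instance rewards the right ordering.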

We first consider a version of the simulated annealing algorithm with the Metropolis algorithm for move acceptance and the exponential annealing schedule. Different values of the initial temperature β0 and the exponential annealing schedule parameter α and different types of random moves were tried in an attempt to optimize this set of parameters for the batch scheduling problem. The initial temperature was determined by specifying a desired acceptance probability for a move which gives a maximum change in the objective function (which in this case is the makespan), ΔC_max. The value of ΔC_max was determined by initially making a number of random moves with all moves being accepted and monitoring the ΔC values for the moves. In general, the acceptance probability p for the Metropolis algorithm is given by:

    p = e^(-ΔC/β).                                              (6)

Therefore, the initial temperature β0 can be determined from ΔC_max and the specified initial acceptance probability p0 by the following expression:

    β0 = -ΔC_max / ln p0.                                       (7)

A value of p0 = 0.9 was found to be high enough to randomize the system. This corresponds to a value of β0 that is about 10 times the value of ΔC_max, which is consistent with the suggestion of Aarts and van Laarhoven noted above. The value of β0 obtained in this manner was determined to be high enough so that the final solution is quite insensitive to the choice of β0 and to the starting schedule. This allows a random starting schedule to be used since all such schedules are feasible solutions and the initial randomization process of the system at the chosen value of β0 removes solution dependence on the starting schedule. This value of p0 = 0.9 was then used to determine β0 for all problems. The actual value of β0 will, however, be different for different problems as it will depend on the ΔC_max value for each problem. The advantage of this method of calculating β0 is that it is very general and problem independent. The optimality of the final solution when using the exponential annealing schedule depends on the value of α used (Dolan et al., 1989b). We tried a number of different values of α in the exponential annealing schedule ranging from 0.99 to 0.6. A higher value of α implies a slower annealing and a closer asymptotic approach to exactness of the solution but it also implies a higher computation time. An inherent trade-off, therefore, exists in the choice of α. A value of α = 0.89 was found to be appropriate for this application as this also results in a search space similar in size to that explored with the Aarts and van Laarhoven annealing schedule, as discussed below. Five different kinds of random moves were considered and the effect of each on the efficiency of the algorithm was investigated. Figure 1 illustrates the five moves considered. In some cases a single move was used at all temperature levels and in other cases two types of moves were used, one used for the upper half of temperature levels and the other used for the lower half of temperature levels. The number of moves attempted at each temperature level was set equal to the number of possible configurations accessible from any given configuration by the acceptance of a single random move. Based on the experimentation done with the different combinations of moves in determining their effect on the quality of the solution, move 2 was found to be the most appropriate when used by itself at all of the temperature levels and not in combination with any other move. In view of the nature of move 2, N(N - 1) moves were attempted at each temperature level as this corresponds to the number of possible configurations that can be reached from a given configuration by a single move. The termination condition given in equation (5) was used with ε = 10^-5. This value was found to be small enough to achieve termination at an appropriate stage in the algorithm and further reductions in ε were not found to improve solutions. Table 1 shows results obtained by using the Metropolis algorithm with the exponential annealing schedule (ME) for UIS, NIS, ZW and FIS flowshops with zero transfer times and set up times and MIS flowshops with non-zero transfer times and sequence-dependent set up times. The results are compared with exact solutions obtained from an enumerative search method in terms of the percentage of times that the exact global optimum was obtained, the percentage offset from the exact global optimum and the percentage of the enumerative search space explored. The results are based on 100 randomly generated problems for each case. The random problems were generated by picking each processing time t_ij from a uniform distribution of integers in the range of 1 to 30. For the FIS flowshops, the number of intermediate storage units between any two consecutive processing units was picked at random from a uniform distribution of integers in the range of 0 to N - 1, N being the number of products. This corresponds to a range from NIS to UIS operation since N - 1 intermediate storage units between any two consecutive processing units implies UIS operation. In the case of the MIS flowshops, each transfer time a_ij was picked from a uniform distribution of integers in the range of 1 to 5 and each sequence-dependent set up time s_ijk was picked from a uniform distribution of integers in the range of 0 to 5. In addition, for the MIS flowshops, a single block of a random number of consecutive processing units was picked to operate as a ZW block and for the remaining processing units the intermediate storage criterion between any two consecutive processing units was selected at random with equal probability to be one of UIS, NIS or FIS. For the FIS criterion, the number of intermediate storage units between any two consecutive processing units was picked at random from a uniform distribution of integers in the range of 1 to N - 2 for an N-product problem.
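Move 2, the move found most effective, repositions a single product within the sequence. A minimal sketch (illustrative code, not the authors' implementation) that also makes the N(N - 1) neighborhood count concrete:

```python
import random

def move_single_product(seq, rng=random):
    """Move 2: remove one randomly chosen product and reinsert it at a
    different randomly chosen position.  From a sequence of N products
    there are N*(N - 1) such moves, which is why N*(N - 1) moves are
    attempted per temperature level."""
    n = len(seq)
    i = rng.randrange(n)          # index of the product to move
    j = rng.randrange(n - 1)      # new position, skipping the old one
    if j >= i:
        j += 1
    rest = seq[:i] + seq[i + 1:]  # sequence with the product removed
    return rest[:j] + [seq[i]] + rest[j:]
```

Each call returns a genuinely different permutation of the same products, so every attempted move explores a new candidate schedule.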

H. Dti et al.

1356

Move

1

: Subsequencereversal

Move 2

1

2

13

4

5

1

2

I6

5

4

61

7

31

7

: Position of a single Product is moved +

7 I

2

3

4

s1

6

7

5

2

3

4

6

7

6

7

Move 3 : Position of a subsequence is moved I‘----$

1

2

13

1

2

6

3

7

4

51

Move 4 : Two products are interchanged I

2

1

2

d 3

5

7

6

4

5

6

7

5

6

7

Move 5 : Two adjacent products are interchanged 1

2

1

2

37 4

3

Fig. I. Types of random moves. N -

2 for an N-product

The random num-

problem.

ber generator used for problem actual implementation of algorithm was GGUBFS

the

generation and the simulated annealing

from the International Mathematical and Statistical Library (IMSL). The numbers within parentheses next to each problem type refer to the number of products and number of units; for example, UIS (8, 6) refers to a UIS flowshop with 8 products and 6 units. The percentage of the times the global optimum solution is obtained is computed as: % global opt = number of problems attaining global optimum total number of problems > x 100. Table

I

Metropolis

(8)

algorithm,

Global UlS(8,6) NIS(8.6) ZW(8.6) FIS(8.6) MIS(8.6)

(%) 94 94 91 ;:

calculated

as:

% offset from global opt = opt, - global opt, global opt,

(

from

annealing

global

opt

schedule

(ME)

(%) Search

opt MeatI 0.0454 0.0734 0.1193 0.0544 0.057 1

>

x 100,

(9)

where opt, is the optimum makespan obtained by simulated annealing for problem i and global opti is the exact global optimum makespan obtained by the enumerative search for the same problem. The mean, variance and worst percentage offset are based on the percentage offset obtained for all of the problems considered in the sample. The percentage of the enumerative search space explored is given by:

exponential

offset Problem

The percentage offset from the global optimum is

VWiatlCC

Worst

(%)

0.0490 0.1288 0.2047 0.0708 0.0441

1.887 2.844 3.241 1.596

8.61 8.50 8.41 8.51 8.38

1.361

space

Scheduling of serial multiproduct batch processes % search space =

and set up times and MIS flowshops with non-zero transfer times and sequence-dependent set up times. The results are again compared with exact solutions obtained from an enumerative search method in terms of the percentage of times that the exact global optimum was obtained, the percentage offset from the exact global optimum and the percentage of the enumerative search space explored. The results are based on the same 100 randomly generated problems used for the Metropolis algorithm with the exponential annealing schedule (ME). The Glauber algorithm was then studied with both the exponential annealing schedule and the Aarts and van Laarhoven annealing schedule. The same type of move, number of moves at each temperature, annealing schedule parameters and termination condition were used as with the Metropolis algorithm. The initial value of fl was again backed out from a specified acceptance probabihty for a move with AC,,, . In view of the difference between the Metropolis algorithm and the Glauber algorithm, the expression for determining the initial value of /3from the acceptance probability is different. In general, the acceptance probability p by the Glauber algorithm is given by:

moves attempted by simulated annealing ( total number of possible configurations > x 100.

(10)

Since a permutation schedule is considered, the total number of possible configurations examined by enumerative search is given by N! for a problem consisting of N products. The actual search space considered by the simulated annealing algorithm is less than the number of moves attempted because some configurations will be examined more than once due to the randomness of the moves. Equating the search space to the number of attempted moves overestimates the actual search space explored but it is nevertheless a good measure of the computational effort expended by the algorithm. It should be noted that by keeping track of all of the currently existing best makespan solutions at any stage during the annealing process, multiple optima can be obtained from the simulated annealing algorithm. This may be useful if certain product sequences are more preferred than other sequences with the same makespan for some operational reasons. The final selection of the desired sequence from these multiple optima sequences obtained from the simulated annealing algorithm can then be made accordingly. A version of the simulated annealing algorithm using the Metropolis algorithm for move acceptance and the Aarts and van Laarhoven annealing schedule was also studied. As in the case of the exponential annealing schedule, the initial temperature PO was determined based on an initial acceptance probability of p0 = 0.9. A number of different values of 6 were tried ranging from 0.01 to 0.8. A value of 6 = 0.3 was found to be appropriate for this application. The termination condition given by equation (5) was again used with E = 10e5. The type of random move used was also the same as that found to be the most appropriate in the exponential annealing schedule; that is, move 2 as shown in Fig. 1 was used by itself at all of the temperature levels and not in combination with any other moves, with N(N - 1) such moves being attempted at each temperature level. 
Thus, the only difference between the two cases studied was the annealing schedule. Table 2 shows the results obtained by using the Metropolis algorithm with the Aarts and van Laarhoven annealing schedule (MAvL) for UIS, NIS, ZW, FIS and MIS flowshops with zero transfer times.

Table 2. Metropolis algorithm, Aarts and van Laarhoven annealing schedule (MAvL)

                Global      Offset from global opt (%)      Search
Problem         opt (%)    Mean      Variance    Worst      space (%)
UIS(8,6)          99       0.0047    0.0022      0.469       8.23
NIS(8,6)          97       0.0430    0.0622      1.778       8.22
ZW(8,6)           94       0.0647    0.0922      2.381       8.36
FIS(8,6)         100       0.0000    0.0000      0.000       8.19
MIS(8,6)          95       0.0346    0.0312      1.347       8.57

In the Glauber algorithm, the acceptance probability of a move with objective function change ΔC at temperature analog β is given by:

    p = e^(-ΔC/β) / [1 + e^(-ΔC/β)].    (11)

Therefore, the initial temperature β0 can be determined from ΔC_max and the specified initial acceptance probability p0 by:

    β0 = ΔC_max / ln[(1 - p0)/p0].    (12)

Table 3. Glauber algorithm, exponential annealing schedule (GE)

                Global      Offset from global opt (%)      Search
Problem         opt (%)    Mean      Variance    Worst      space (%)
UIS(8,6)          87       0.1406    0.2124      2.885       8.58
NIS(8,6)          84       0.1805    0.2952      3.211       8.50
ZW(8,6)           90       0.1238    0.2151      2.917       8.44
FIS(8,6)          89       0.1020    0.1304      2.564       8.51
MIS(8,6)          86       0.0949    0.0794      1.792       8.44

It should be noted that in the Glauber algorithm the maximum possible value of p0 is 0.5, as opposed to 1 in the Metropolis algorithm, because the maximum acceptance probability of an uphill move in the Glauber algorithm is 0.5 whereas in the Metropolis algorithm it is 1. A value of p0 = 0.475 was found to be high enough to randomize the system; this value was used for all problems. As in the case of the Metropolis algorithm, this corresponds to a value of β0 that is about 10 times the value of ΔC_max. This also gives a value of β0 high enough that the final solution is quite insensitive to the choice of β0 and to the starting schedule. Table 3 shows results obtained by using the Glauber algorithm with the exponential annealing schedule (GE) and Table 4 shows results obtained by

using the Glauber algorithm with the Aarts and van Laarhoven annealing schedule (GAvL). The same randomly generated problems used for the Metropolis algorithm were solved and compared with the exact solutions. Table 5 shows the results obtained by using the IMS heuristic for the same set of randomly generated problems solved by each of the four versions of the simulated annealing algorithm. Results are again compared with exact solutions obtained by an enumerative search.

Larger MIS flowshop problems, with 12 products and 6 units and with 16 products and 6 units, with non-zero transfer times and sequence-dependent set-up times were also solved by simulated annealing. All of these problems were solved by each of the four versions of the simulated annealing algorithm discussed; that is, they were solved by the Metropolis and Glauber algorithms with the exponential and the Aarts and van Laarhoven annealing schedules. In addition, these problems were also solved by the IMS heuristic. The results for these larger MIS problems could not be compared with exact solutions because of the prohibitively large computational effort required for the exact solutions.

The relative merits of the four versions of the simulated annealing algorithm and the IMS heuristic were studied by comparing the minimum makespan obtained for a given problem by each of the five algorithms. The ability of an algorithm to obtain the best of the makespan values found by these five algorithms for a given problem was used as a measure of its relative merit. Table 6 shows the relative performance of the four versions of the simulated annealing algorithm and the IMS heuristic.
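The search-space percentage of equation (10) is straightforward to compute. In this sketch the move count of 3318 is our own back-calculation from the roughly 8.2% coverage reported for the 8-product problems, used purely for illustration; it is not a figure quoted in the text.

```python
from math import factorial

def search_space_pct(moves_attempted, n_products):
    """Equation (10): attempted moves as a percentage of the N!
    permutation schedules an enumerative search would examine."""
    return 100.0 * moves_attempted / factorial(n_products)

# ~3300 attempted moves cover about 8% of the 8! = 40320 schedules,
# but the same effort is a vanishing fraction of the 16! schedules.
eight = search_space_pct(3318, 8)
sixteen = search_space_pct(3318, 16)
```

This is why the percentage of the search space examined by simulated annealing collapses as N grows even though its CPU time grows only polynomially.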
The relative comparison is based on the percentage of best makespan solutions obtained by a given algorithm, the percentage offset from this best makespan, and the percentage of the total search space examined, where the total search space corresponds to the search space that would need to be


examined by a full enumerative search. This relative comparison is performed not only for the larger MIS problems but also for the smaller UIS, NIS, ZW, FIS and MIS problems that were compared earlier with the exact solutions.

On the basis of the results presented in Tables 1-6, it is evident that the Aarts and van Laarhoven annealing schedule is superior to the exponential annealing schedule. In addition, the Metropolis algorithm for move acceptance is found to be better than the Glauber algorithm. The search space sampled by all four versions of the simulated annealing algorithm is about the same, so the comparison of results is an accurate reflection of the efficiency of the different versions. The conclusion to be drawn from these results is that the Metropolis algorithm with the Aarts and van Laarhoven annealing schedule is the best of the four versions of the simulated annealing algorithm.

Each of the four versions of the simulated annealing algorithm is found to give significantly better results than the IMS heuristic. Note, however, that each of these four versions examines a higher percentage of the total search space than the IMS heuristic. As the problem size increases, the percentage of the total search space examined by each of these four versions decreases significantly, since simulated annealing is a polynomial time algorithm while enumerative search is an exponential time algorithm.

For a more detailed study of the simulated annealing algorithm, the version employing the Metropolis algorithm with the Aarts and van Laarhoven annealing schedule (MAvL) was chosen, since it was found to be the best of the four versions studied. The quality of the simulated annealing solution was compared to the IMS heuristic solution for a wide range of problem types and problem sizes.
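The two acceptance criteria compared above differ only in how an uphill move of size ΔC is treated; a sketch, with equation numbers from the text:

```python
import math

def metropolis_accept(dc, beta):
    """Metropolis criterion: downhill moves always accepted,
    uphill moves accepted with probability exp(-dC/beta)."""
    return 1.0 if dc <= 0 else math.exp(-dc / beta)

def glauber_accept(dc, beta):
    """Glauber criterion, equation (11):
    p = exp(-dC/beta) / (1 + exp(-dC/beta)), capped at 0.5."""
    x = math.exp(-dc / beta)
    return x / (1.0 + x)

def glauber_beta0(dc_max, p0=0.475):
    """Equation (12): beta0 = dC_max / ln((1 - p0)/p0)."""
    return dc_max / math.log((1.0 - p0) / p0)
```

With p0 = 0.475, glauber_beta0 gives a starting temperature of about 10 ΔC_max, matching the factor quoted in the text; the Glauber probability never exceeds 0.5, which is why its p0 cannot reach the Metropolis value of 1.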
Table 7 shows this comparison for UIS, NIS, ZW, FIS and MIS problems with problem sizes ranging between 8 products, 6 units and 24 products, 6 units. For all of the

Table 4. Glauber algorithm, Aarts and van Laarhoven annealing schedule (GAvL)

                Global      Offset from global opt (%)      Search
Problem         opt (%)    Mean      Variance    Worst      space (%)
UIS(8,6)          96       0.0223    0.0132      0.862       7.95
NIS(8,6)          94       0.0632    0.0866      1.896       7.91
ZW(8,6)           96       0.0376    0.0457      1.667       8.15
FIS(8,6)          95       0.0352    0.0312      1.415       7.72
MIS(8,6)          90       0.0702    0.0577      1.504       7.98

Table 5. Idle matrix search (IMS)

                Global      Offset from global opt (%)      Search
Problem         opt (%)    Mean      Variance    Worst      space (%)
UIS(8,6)          27       2.37      5.97         8.98       0.554
NIS(8,6)          30       1.83      4.12        10.90       0.540
ZW(8,6)           68       0.41      0.86         6.98       0.595
FIS(8,6)          23       2.37      5.24         8.98       0.575
MIS(8,6)          27       1.52      2.43         6.55       0.576

All computations were performed on a Sun 3/60 workstation; the 16 product, 6 unit MIS problem was the most computationally intensive of all the problems studied. An 8 product, 6 unit MIS problem required 3 min of CPU time per problem, whereas an 8 product, 6 unit UIS problem required only 12 s of CPU time per problem. These results indicate that the CPU time is determined not only by the problem size but also by the computational complexity of the completion time algorithm for the problem being considered. The completion time algorithm for a UIS problem is computationally the least intensive, whereas that for an MIS problem is computationally the most intensive. The high CPU times, therefore, are not solely a manifestation of the computational demands of the actual simulated annealing algorithm but are also a strong reflection of the computational complexity
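For the UIS case the completion time calculation reduces to the classical permutation flowshop recursion; a sketch assuming zero transfer and set-up times, where t[i][j] is the processing time of product i on unit j:

```python
def uis_makespan(seq, t):
    """Completion-time recursion for unlimited intermediate storage (UIS)
    with zero transfer and set-up times: a product starts on unit j as soon
    as it leaves unit j-1 and unit j is free, i.e.
    C[i][j] = max(C[i-1][j], C[i][j-1]) + t[product i][j]."""
    m = len(t[0])                 # number of units
    c = [0.0] * (m + 1)           # c[j+1]: latest completion seen on unit j
    for p in seq:                 # products in sequence order
        for j in range(m):
            c[j + 1] = max(c[j + 1], c[j]) + t[p][j]
    return c[m]
```

Each evaluation costs O(NM) operations; the NIS, ZW, FIS and, above all, MIS recursions (Rajagopalan and Karimi, 1989) must additionally track storage and transfer constraints, which is the source of the CPU time differences noted above.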

Table 6. Relative performance of different algorithms (unreadable entries in the source are marked "-")

                          Best      Offset from best opt (%)        Search
Problem       Method     opt (%)   Mean     Variance   Worst        space (%)
UIS(8,6)      ME            95     0.0407   0.0472     1.887        8.61
              MAvL         100     0.0000   0.0000     0.000        8.23
              GE            88     0.1359   0.2115     2.885        8.58
              GAvL          97     0.0178   0.0112     0.862        7.95
              IMS           27     2.3631   5.9818     8.980        0.554
NIS(8,6)      ME             -     0.0734   0.1288     2.844        8.50
              MAvL           -     0.0431   0.0622     1.778        8.22
              GE            84     0.1805   0.2952     3.211        8.50
              GAvL          94     0.0632   0.0866     1.896        7.91
              IMS           30     1.8323   4.1199    10.90         0.540
ZW(8,6)       ME            91     0.1193   0.2047     3.241        8.41
              MAvL           -     0.0647   0.0922     2.381        8.36
              GE             -     0.1238   0.2151     2.917        8.44
              GAvL          96     0.0376   0.0457     1.667        8.15
              IMS           68     0.4127   0.8558     6.977        0.595
FIS(8,6)      ME            95     0.0544   0.0708     1.596        8.51
              MAvL         100     0.0000   0.0000     0.000        8.19
              GE            89     0.1020   0.1304     2.564        8.51
              GAvL          95     0.0352   0.0312     2.374        7.72
              IMS           23     2.3737   5.2382     8.980        0.575
MIS(8,6)      ME            91     0.0571   0.0441     1.361        8.38
              MAvL          95     0.0346   0.0312     1.347        8.57
              GE            86     0.0949   0.0794     1.792        8.44
              GAvL           -     0.0702   0.0577     1.504        7.98
              IMS            -     1.5234   2.4301     6.545        0.576
MIS(12,6)     ME            58     0.1844   0.2727     2.260        1.78 x 10^-3
              MAvL          68     0.1081   0.1781     1.579        2.09 x 10^-3
              GE            43     0.2075   0.3747     1.663        1.78 x 10^-3
              GAvL          65     0.1289   0.2042     1.977        2.07 x 10^-3
              IMS            3     2.5932   2.4542     7.813        1.43 x 10^-4
MIS(16,6)     ME            34     0.5020   0.4130     4.149        7.59 x 10^-8
              MAvL          48     0.3730   0.2423     2.326        1.07 x 10^-7
              GE            26     0.6940   0.5242     3.644        7.61 x 10^-8
              GAvL          42     0.4235   0.2535     1.976        1.03 x 10^-7
              IMS            2     2.6461   2.1325     7.246        6.96 x 10^-9

problems, simulated annealing is consistently found to be superior to the IMS heuristic, with no sign of solution quality degradation with an increase in problem size. Of all of the problems considered, the IMS heuristic appears to perform best for ZW problems. This is consistent with what has been reported for the IMS heuristic (Rajagopalan and Karimi, 1987), namely that its best solutions are obtained for systems with limited or no storage.

In terms of computational effort measured by CPU time, our implementation of simulated annealing scales as a fourth-order polynomial in the number of products. It requires about two orders of magnitude more CPU time than the IMS heuristic. The simulated annealing solution of a 16 product, 6 unit MIS problem required 53 min of CPU time per problem.

Table 7. MAvL vs IMS, quality of solution (unreadable entries in the source are marked "-")

                            MAvL                                                IMS
              Best    Offset from best opt (%)    Search         Best    Offset from best opt (%)    Search
Problem      opt (%)  Mean    Variance  Worst     space (%)     opt (%)  Mean    Variance  Worst     space (%)
UIS(8,6)      100     0.0000  0.0000    0.000     8.23            27     2.3631  5.9818     8.980    0.554
NIS(8,6)       98     0.0088  0.0077    0.881     8.22            31     1.7975  4.0865    10.90     0.540
ZW(8,6)       100     0.0280  0.0577    2.381     8.36            70     0.3759  0.8363     6.977    0.595
FIS(8,6)       99     0.0000  0.0000    0.000     8.19            23     2.3737  5.2382     8.980    0.575
MIS(8,6)       99     0.0101  0.0101    1.010     8.57            27     1.4986  2.4129     6.545    0.576
UIS(12,6)      96     0.0158  0.0248    1.581     2.11 x 10^-3     9     2.9008  6.3239    11.987    1.35 x 10^-4
NIS(12,6)      99     0.0631  0.1460    3.463     2.05 x 10^-3    11     2.0360  2.6834     7.850    1.39 x 10^-4
ZW(12,6)       90     0.0695  0.0694    1.989     2.15 x 10^-3    47     0.6521  0.7975     3.812    1.49 x 10^-4
FIS(12,6)      97     0.0269  0.0409    1.976     2.10 x 10^-3    10     3.1041  6.6987    11.439    1.32 x 10^-4
MIS(12,6)      99     0.0025  0.00060   0.246     2.09 x 10^-3     4     2.2743  2.4155     7.333    1.43 x 10^-4
UIS(16,6)      98     0.0153  0.0152    1.201     1.00 x 10^-7    13     2.8112  5.9368    11.250    5.67 x 10^-9
NIS(16,6)      95     0.0256  0.0157    0.955     1.02 x 10^-7     6     2.0392  2.3338     9.211    6.52 x 10^-9
ZW(16,6)       81     0.1301  0.0962    1.348     1.08 x 10^-7    32     0.8922  1.0529     5.528    7.34 x 10^-9
FIS(16,6)      98     0.0187  0.0172    0.971     1.01 x 10^-7     9     3.0133  5.5196    11.250    5.83 x 10^-9
MIS(16,6)      97     0.0178  0.0136    1.006     1.07 x 10^-7     4     2.2851  2.3365     7.246    6.96 x 10^-9
UIS(20,6)      99     0.0025  0.00061   0.248     1.49 x 10^-12    9     2.2147  3.4912     9.685    8.54 x 10^-14
NIS(20,6)      94     0.0567  0.0881    2.477     1.57 x 10^-12    6     2.1480  1.7438     5.411    1.00 x 10^-13
ZW(20,6)        -     0.0816  0.0753    1.840     1.70 x 10^-12   22     1.1835  1.0841     4.124    1.14 x 10^-13
FIS(20,6)      99     0.0292  0.0727    2.703     1.53 x 10^-12    7     2.5809  3.8497     9.685    8.67 x 10^-14
UIS(24,6)      97     0.0022  0.00047   0.219     9.12 x 10^-18    7     2.4300  2.8471     7.555    4.63 x 10^-19
NIS(24,6)      87     0.0262  0.0301    1.447     9.80 x 10^-18    4     2.4487  2.4387     7.075    6.18 x 10^-19
ZW(24,6)       99     0.0833  0.0890    1.764     1.09 x 10^-17   20     1.2467  0.9009     3.559    6.82 x 10^-19
FIS(24,6)       -     0.0022  0.00047   0.217     9.39 x 10^-18    6     2.4891  3.5315    11.332    4.76 x 10^-19

inherent in the completion time algorithms that are associated with the implementation of the simulated annealing algorithm.

In order to make simulated annealing more competitive with IMS from a computational effort point of view, a faster version of the simulated annealing algorithm was studied. As before, the version employing the Metropolis algorithm with the Aarts and van Laarhoven annealing schedule (MAvL) was used. The annealing schedule parameters were, however, modified to obtain a much faster algorithm (fast MAvL). The objective in tuning the annealing schedule parameter values was to obtain a simulated annealing algorithm that sampled approximately the same percentage of the total search space as the IMS heuristic; the comparison of the solution quality obtained by simulated annealing and the IMS heuristic is more meaningful under these circumstances. A value of δ = 10 was now used instead of δ = 0.3, and N(N - 1)/4 moves were attempted at each temperature level instead of the N(N - 1) moves attempted earlier. As was the case previously, the initial temperature β0 was based on p0 = 0.9 and the termination condition was based on ε = 10^-5.
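The effect of raising δ is easy to see numerically. This sketch assumes the standard statistical-cooling form of the Aarts and van Laarhoven update, β' = β / (1 + β ln(1 + δ)/(3σ)), with an arbitrary illustrative σ; the step counts are illustrative, not the paper's.

```python
import math

def avl_next_beta(beta, sigma, delta):
    """One Aarts-van Laarhoven cooling step (standard statistical-cooling
    form, assumed here): beta' = beta / (1 + beta*ln(1+delta)/(3*sigma))."""
    return beta / (1.0 + beta * math.log(1.0 + delta) / (3.0 * sigma))

def levels_until(beta, sigma, delta, beta_end):
    """Count cooling steps until the temperature analog drops below beta_end."""
    n = 0
    while beta > beta_end:
        beta = avl_next_beta(beta, sigma, delta)
        n += 1
    return n

# With a fixed cost spread sigma, delta = 10 reaches a low temperature
# in far fewer levels than delta = 0.3.
slow = levels_until(100.0, 5.0, 0.3, 1.0)
fast = levels_until(100.0, 5.0, 10.0, 1.0)
```

Combined with attempting only N(N - 1)/4 moves per level, the larger δ is what shrinks the sampled search space to roughly the IMS level.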


Table 8 shows a comparison of the quality of the fast simulated annealing solution with the IMS heuristic solution. MIS problems are considered, with problem sizes ranging between 8 products, 6 units and 36 products, 6 units, because they are by far the most general and of the most practical interest for real scheduling. The simulated annealing solution quality has deteriorated compared to the slower version of the algorithm considered in Table 7, but for all of the problems considered in Table 8 the fast simulated annealing algorithm is still found to be consistently superior to the IMS heuristic, again with no sign of solution quality degradation with an increase in problem size.

The computational effort has been reduced significantly by employing the faster version of the simulated annealing algorithm, as is evident from the percentage of the total search space explored. For example, the CPU time required for a 16 product, 6 unit MIS problem has now been reduced to 5 min per problem on a Sun 3/60 workstation, as opposed to 53 min per problem previously. The largest problem that has been considered, the 36 product, 6 unit MIS problem, takes 158 min of CPU time per problem when

Table 8. Fast MAvL vs IMS, quality of solution (unreadable entries in the source are marked "-")

                         Fast MAvL                                               IMS
              Best    Offset from best opt (%)    Search          Best    Offset from best opt (%)    Search
Problem      opt (%)  Mean    Variance  Worst     space (%)      opt (%)  Mean    Variance  Worst     space (%)
MIS(8,6)       73     0.4353  1.0872    6.931     0.685            45     1.0341  2.0317    6.545     0.576
MIS(12,6)       -     0.3052  0.5108    3.608     1.47 x 10^-4     31     1.4093  2.1836    6.388     1.43 x 10^-4
MIS(16,6)       -     0.2760  0.4514    4.979     6.93 x 10^-9     32     1.2545  1.7541    5.925     6.96 x 10^-9
MIS(20,6)      84     0.1466  0.1914    2.681     1.05 x 10^-13    18     1.4125  1.7603    5.230     1.08 x 10^-13
MIS(24,6)      77     0.1787  0.1645    1.811     6.21 x 10^-19    26     1.3464  1.9750    5.649     7.06 x 10^-19
MIS(28,6)      82     0.1068  0.0981    1.956     1.81 x 10^-24    23     1.4331  1.8366    5.621     2.18 x 10^-24
MIS(32,6)      86     0.1348  0.1545    2.007     2.93 x 10^-30    14     1.7522  1.7760    5.825     3.36 x 10^-30
MIS(36,6)      89     0.0783  0.0942    2.108     2.86 x 10^-36    13     1.6942  1.6608    4.728     3.13 x 10^-36

employing the fast version of the simulated annealing algorithm. In general, the fast version of the simulated annealing algorithm requires only one order of magnitude more CPU time than the IMS heuristic, as opposed to two orders of magnitude more for the slower version.

The reason that the fast version of the simulated annealing algorithm still requires more CPU time than the IMS heuristic, in spite of the similar percentage of the total search space explored, is the computational complexity of the exact makespan calculations performed in the simulated annealing algorithm after each move is attempted. The IMS heuristic includes as part of the algorithm a technique for estimating the change in the approximate idle time between two different schedules, which is used in lieu of exact makespan calculations for each schedule. This accounts for the lower execution time of the IMS algorithm while searching the same total search space as simulated annealing. This suggests that if a computationally fast method can be devised for calculating the change in makespan associated with two different schedules, then the total computational effort of the simulated annealing algorithm can be further reduced without affecting the quality of the solution, since this will not affect the percentage of the total search space explored. Until such a method can be devised, the most meaningful comparison of simulated annealing with the IMS heuristic is to require both methods to explore approximately the same percentage of the total search space. Under these circumstances, the algorithm exhibiting the better solution quality of the two methods (i.e. simulated annealing) can be regarded as the more effective algorithm.

It is also interesting to note that a trade-off can be obtained between the speed of the simulated annealing algorithm and the quality of the solution simply by changing some parametric values in the annealing schedule.
This suggests that, depending on the quality of the solution required, an appropriately fast or slow version of the simulated annealing algorithm can be used, implying an inherent flexibility in the simulated annealing algorithm for different applications.

4. CONCLUDING REMARKS

The strength of simulated annealing lies in the fact that it can be used to sequence any flowshop problem for which a completion time algorithm is available for makespan calculation. For different types of flowshops the makespan calculation method differs, but the basic simulated annealing algorithm is the same. This is in contrast to some of the heuristic sequencing methods, which are more suitable for certain types of problems than for others.

Comparisons with exact solutions indicate that simulated annealing gives very good results. The Metropolis algorithm gives better results than the Glauber algorithm, and the Aarts and van Laarhoven annealing schedule gives better results than the exponential annealing schedule. Of the four versions of the simulated annealing algorithm studied, the Metropolis algorithm with the Aarts and van Laarhoven annealing schedule gives the best results. Each of the four versions of the simulated annealing algorithm, as implemented here, gives significantly better results than the IMS heuristic while examining a larger search space.

A faster version of the simulated annealing algorithm employing the Metropolis algorithm with the Aarts and van Laarhoven annealing schedule was also studied. It explores about the same amount of search space as the IMS heuristic and still gives significantly better results than the IMS heuristic, although they are not as good as those of the slower version. An inherent trade-off exists between the effort expended by the simulated annealing algorithm and the quality of the solution, leading to a flexibility in the manner in which simulated annealing can be applied. Further reduction in the computational effort, but not at the expense of solution quality, is possible if computationally fast methods to calculate the change in makespan between two different schedules can be developed. The polynomial time nature of the simulated annealing algorithm allows it to be used for solving large problems. Therefore, simulated annealing holds great promise as a solution technique for the scheduling of serial multiproduct batch processes.

NOMENCLATURE

a_ij = Transfer time of product i out of unit j
C = Objective function
ΔC = Change in the objective function after a given move
ΔC_max = Maximum change in the objective function after a given move
C̄ = Average value of the objective function at a given temperature level
C_s = Smoothed average value of the objective function over a number of temperature levels
E = Energy function
ΔE = Change in the energy function after a given move
k = Boltzmann constant
M = Number of units
N = Number of products
P = Number of problems
p = Acceptance probability of a move
p0 = Acceptance probability of the highest uphill move that determines the temperature level at which annealing is started
s_ijk = Set-up time for product j on unit k after processing product i
T = Temperature
t_ij = Processing time of product i on unit j

Greek symbols

α = Parameter in the exponential annealing schedule
β = Temperature analog
β' = New value of the temperature analog
β0 = Value of β at which annealing is started
δ = Parameter in the Aarts and van Laarhoven annealing schedule
ε = Termination parameter
σ = Standard deviation of the objective function at a given value of β

Abbreviations

CPU = Central processing unit
FIS = Finite intermediate storage
GAvL = Glauber algorithm with the Aarts and van Laarhoven annealing schedule
GE = Glauber algorithm with the exponential annealing schedule
IMS = Idle matrix search
MAvL = Metropolis algorithm with the Aarts and van Laarhoven annealing schedule
ME = Metropolis algorithm with the exponential annealing schedule
MIS = Mixed intermediate storage
NIS = No intermediate storage
UIS = Unlimited intermediate storage
ZW = Zero wait processing

REFERENCES

Aarts E. H. L. and P. J. M. van Laarhoven, Statistical cooling: a general approach to combinatorial optimization problems. Philips J. Res. 40, 193 (1985).
Allen M. P. and D. J. Tildesley, Computer Simulation of Liquids. Oxford University Press, New York (1987).
Baker K. R., Introduction to Sequencing and Scheduling. Wiley, New York (1974).
Conway R. W., W. L. Maxwell and L. W. Miller, Theory of Scheduling. Addison-Wesley, Reading, Mass. (1967).
Das H., P. T. Cummings and M. D. LeVan, Applications of simulated annealing to scheduling of serial multiproduct batch processes. AIChE Annual Meeting, San Francisco (1989).
Dolan W. B., Private Communication (1988).
Dolan W. B., P. T. Cummings and M. D. LeVan, Heat exchanger network design by simulated annealing. Foundations of Computer Aided Process Operations (G. V. Reklaitis and H. D. Spriggs, Eds). CACHE Corporation, Elsevier, New York (1987).
Dolan W. B., P. T. Cummings and M. D. LeVan, HENSSA: a CAD package for heat exchanger network synthesis by simulated annealing. Proc. 26th National Heat Transfer Conf., ASME HTD 108, 153 (1989a).
Dolan W. B., P. T. Cummings and M. D. LeVan, Process optimization via simulated annealing: application to network design. AIChE Jl 35, 725 (1989b).
Douglas J. M., Conceptual Design of Chemical Processes. McGraw-Hill, New York (1988).
Glauber R. J., Time-dependent statistics of the Ising model. J. Math. Phys. 4, 294 (1963).
Johnson S. M., Optimal two and three stage production schedules with set-up times included. Naval Res. Logist. Q. 1, 61 (1954).
Kirkpatrick S., C. D. Gelatt Jr and M. P. Vecchi, Optimization by simulated annealing. Science 220, 671 (1983).
Ku H. and I. A. Karimi, Evaluation of a new method for scheduling batch process operations. AIChE Spring National Meeting, New Orleans (1988a).
Ku H. and I. A. Karimi, Scheduling in serial multiproduct batch processes with finite interstage storage: a mixed integer linear program formulation. Ind. Engng Chem. Res. 27, 1840 (1988b).
Ku H. and I. A. Karimi, Scheduling algorithms for serial multiproduct batch processes with due date penalties. AIChE Annual Meeting, San Francisco (1989).
Loonkar Y. R. and J. D. Robinson, Minimization of capital investment for batch processes. Ind. Engng Chem. Process Des. Dev. 9, 625 (1970).
Malone M. F., Batch sequencing by simulated annealing. AIChE Annual Meeting, San Francisco (1989).
Metropolis N., A. Rosenbluth, M. Rosenbluth, A. Teller and E. Teller, Equation of state calculations by fast computing machines. J. Chem. Phys. 21, 1087 (1953).
Parakrama R., Improving batch chemical processes. The Chem. Engr 417, 24 (1985).
Rajagopalan D. and I. A. Karimi, Scheduling in serial mixed-storage multiproduct processes with transfer and set-up times. Foundations of Computer Aided Process Operations (G. V. Reklaitis and H. D. Spriggs, Eds). CACHE Corporation, Elsevier, New York (1987).
Rajagopalan D. and I. A. Karimi, Completion times in serial mixed-storage multiproduct processes with transfer and set-up times. Computers Chem. Engng 13, 175 (1989).
Reklaitis G. V., Review of scheduling of process operations. AIChE Symp. Ser. No. 214 78, 119 (1982).
Solem O., Contribution to the solution of sequencing problems in process industry. Int. J. Prod. Res. 12, 55 (1974).
Vecchi M. P. and S. Kirkpatrick, Global wiring by simulated annealing. IEEE Trans. Comput. Aided Des. CAD-2, 215 (1983).