Reducing simulation models for scheduling manufacturing facilities

European Journal of Operational Research 161 (2005) 111–125

Reducing simulation models for scheduling manufacturing facilities

André Thomas *, Patrick Charpentier
CRAN, Faculté des sciences, BP239, 54506 Vandoeuvre-lès-Nancy, France
Available online 3 December 2003

Abstract

Within the framework of the application of their industrial management system, companies compile a Master Production Schedule (MPS). However, once the MPS is released, daily events may call it into question. The use of reduced models within the framework of dynamic flow simulation enables quick decision-making while maximizing the use of resources and minimizing risk. The article shows the advantage of model reduction and how we achieve it. We then analyze the influence of the model factors by highlighting the differences between the simulation results and the MPS. Finally, we show the circumstances in which dynamic flow simulation with reduced models is relevant.
© 2003 Elsevier B.V. All rights reserved.

Keywords: Simulation; Modeling systems; Re-scheduling; MPS

1. Introduction

Increasing both productivity and reactivity has become a prime objective for managers of manufacturing systems. Productivity implies paying very close attention to external variables such as clients, sales, etc., as well as to resources (machines, labour, etc.). Reactivity requires extreme flexibility in planning, an excellent awareness of the expectations expressed by external actors (customers, socio-economic context, . . .) and a perfect awareness of the expectations and activities within the organisation. This gives rise to increasingly pressing problems: being aware of the situation of the production system at any moment, measuring gaps and changes and being aware of their causes, and knowing and measuring the changes in the factors involving employees. New elements must therefore be added to the production management system in order to increase productivity and reactivity, based upon improved knowledge of what happens on the shop floor.

* Corresponding author. E-mail address: andre.thoma[email protected] (A. Thomas).

0377-2217/$ - see front matter © 2003 Elsevier B.V. All rights reserved. doi:10.1016/j.ejor.2003.08.042


The system of planning and re-planning should thus take into account all the events related to production resources whilst at the same time maintaining their meaning. Within the framework of the application of their industrial management system, companies compile a daily, weekly or monthly Master Production Schedule (MPS), weekly being the general rule. The choice of a particular MPS gives rise to a "predictive scheduling" of a group of Manufacturing Orders (MO), viewed statically at this level as a "complete entity". "Predictive scheduling" is the function that concerns the MPS initially established with the Manufacturing Planning and Control System (MPCS), as opposed to "reactive scheduling", which produces the new MPS established after disruptions during the week in question. Each MO deals with the manufacture of an article, has a manufacturing routing and hence relates to a list of Work Centers (WC). At this level of planning, load/capacity equilibrium is obtained via the "management of critical capacity" function, or Rough-Cut Capacity Planning (RCCP), which essentially concerns bottlenecks [21]. Goldratt and Cox, in "The Goal" [6], put forward the Theory of Constraints (TOC). This approach seeks to identify capacity constraints, attempts to exploit them as fully as possible and subordinates everything else to the exploitation of these constraints. The aim of the TOC is to maximise bottleneck production. When reached, equilibrium means that the MPS is validated and goes into operation for the period (P) in question. This validation is made after the evaluation of different scenarios by the master scheduler. However, once the MPS is released, daily events may call it into question: this is the problem of rescheduling. The real-time systems performing manufacturing checks (production reporting) input information very rapidly into the management systems [13]. The decision-making process (Fig. 1) includes the establishment of scenarios, the evaluation of these scenarios and finally the decision. The ever-present problem today is the speed at which decisions can be made when faced with all this information, which may appear to have lost all meaning [15,16]. Indeed, when the decision takes even a few hours, the situation on the shop floor has already changed.

Following predictive scheduling, the MPCS proposes a placing of the MO that we have said to be "static". Effectively, their load has been calculated using the cumulated mean processing time per MO taken as a fixed time entity (mean unit run time multiplied by the number of items per MO, plus the mean setup time). At this level, the manager seeks to saturate the bottlenecks (this is the essential role of Rough-Cut Capacity Planning). Following the release of this "MO portfolio", the time variables relative to the arrival intervals, λ, and service durations, μ, on the work centers (queuing systems theory) will create temporary overloads and under-loads. In this context, the apparent balance emanating from the predictive scheduling will in fact be an imbalance that the manager has to handle. Here it is useful to use dynamic (discrete-event) simulation of flows, not only to highlight possible overloads but also to locate and deal with possible residual capacity (temporary under-load). A rescheduling following a problem will inevitably lead to an even more critical situation on the bottlenecks. It becomes important to "retrieve" all the "time spaces" that have been freed by the unpredictable phenomena influencing production flows. This is the main problem of this decision-making process. The master scheduler looks for the first schedule that attains these objectives.
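The "static" load calculation described above can be written down directly; a minimal sketch (the function name and the numeric figures are ours, for illustration only):

```python
def static_mo_load(run_time_per_item, items, setup_time):
    """Cumulated mean processing time of a Manufacturing Order, taken as a
    fixed time entity: mean unit run time x number of items + mean setup time."""
    return run_time_per_item * items + setup_time

# Illustrative figures (not from the paper): 0.25 h per item, 120 items, 1.5 h setup
print(static_mo_load(0.25, 120, 1.5))   # 31.5 h of load on the work center
```

This is exactly the quantity the MPCS treats as fixed; the dynamic simulation discussed next relaxes it by letting arrival intervals and service durations vary.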

Fig. 1. The decision-making process (scenarios, evaluation, decision).


2. The model reduction problem

2.1. The scenario evaluation

Experience has shown us that to meet these reactivity objectives efficiently, an extremely flexible, rapid and user-friendly system needs to be developed. As it is necessarily located in the very short term, the volume of the "MO portfolio" to consider for re-scheduling will always be low in comparison to that of the initial scheduling. This in fact enables re-planning simulations to be rapid [18]. The classical approach is to represent reality by a model: such simulation models represent all the work centers (WC) and all the characteristics of the real production system, and the simulation software generates in this model the events which reveal the "reality" of the workshop. However, neither the time spent creating the machine model with the dynamic flow simulation software nor that spent performing the simulations should be penalising: the scenario evaluation must be done very quickly. We have therefore put forward the hypothesis that there is, in this context, a generic model (Fig. 2) representing all flows resulting from a manufacturing process, which saves time in construction, simulation and parameterizing but which is sufficiently significant for the risk of error to be minimized [4].

2.2. Literature review on model reduction

Some research work has attempted to put forward model reduction techniques. Amongst various authors, Zeigler was the first to deal with this problem [22]. In his view, the complexity of a model is relative to the number of elements, connections and model calculations. He distinguished four ways of simplifying a discrete simulation model, among them replacing part of the model by a random variable, coarsening the range of values taken by a variable, and grouping parts of a model together. Belz et al. [2] developed a prototype decision support system for short-term rescheduling in manufacturing. They coupled expert systems and simulation, but did not use reduced models in their simulation. Innis et al. [9] first listed 17 simplification techniques for general modelling. Their approach comprised four steps: hypotheses (identifying the important parts of the system), formulation (specifying the model), coding (building the model) and experiments. Leachman [10] proposed a model that considers cycle times in production planning models, especially for the semi-conductor industry, which uses cycle time as an indicator. Brooks and Tobias [3] suggest a "simplification of models" approach, an eight-stage procedure for those cases where the indicators to be followed are average throughput rates. The reduced model can then be very simple: an analytical solution becomes feasible and the dynamic simulation redundant. Their work is interesting but valid only where the required results are averages and the aim is to measure throughput; it does not allow the various events taking place in the WC to be followed. However, as our objectives were to maximize the bottleneck utilization rate and to reschedule the remaining MO to be performed on the bottlenecks, we had to leave in our reduced model the means to carry out these analyses. Hung and Leachman [7] propose a technique for model reduction applied to large wafer fabrication facilities. They use "total cycle time" and "equipment utilization" as decision-making indicators to do away

Fig. 2. The primary theoretical model (MO, pre block, bottleneck, post block).


with the WC. In their case, these WC have a low utilization rate and a fixed service level (they use the standard deviation of batch waiting time as a decision-making criterion). These are the most relevant indicators for the problem we have presented. Indeed, as the group of MO to be re-scheduled and the time range are fixed, the manager might accept the first solution enabling him to perform the residual work. Our work has shown that the variance of μ (the service level on a WC, referred to below as the "μ variance") has a significant effect on the decision-making risk, as the internal waiting queues at the aggregated blocks are no longer negligible in this case. We will show that the "μ variance" is an indicator for the decision to reduce the model (Fig. 10). Tseng [20] compares regression techniques applied to an "aggregate model" (macro) using the "flow time" indicator. He suggests reducing the model by mixing "macro" and "micro" approaches so as to minimise errors in the case of complex models. Here again, for the "macro" view, he only deals with the estimation of flow time as a whole. For the "micro" approach, he constructs an individual regression model for each stage of the operation to estimate its individual flow time; the order flow time estimate is then the sum of the individual operation flow time estimates. He then tries to mix the macro and micro approaches. Li et al. [11] and Li and Shaw [12] proposed rescheduling expert simulation systems that integrate simulation techniques, artificial neural networks, expert knowledge and dispatching rules. Their systems deal with fuzzy and random disturbances such as incorrect work, machine breakdowns, rework due to quality problems and rush orders. Their work could be of interest for our research on rescheduling rules. In work on Petri nets we find some attempts to simplify network structures which use "macro-places" representing complex activities associated with function groups.
Hwang et al. [8] use a principle similar to that adopted in the GRAI method [5], developed in Visual C.

3. Our model reduction method

Unlike the above-mentioned authors, we have sought first of all to reduce the manufacturing routings of the MO in order to construct a reduced model enabling them to be simulated. In reality, we do not model a problem completely so as to reduce it later: we directly build a simple model based on reduced routings. Our hypothesis has led us to put forward a reduced model (Fig. 3 explains its principle) in which we find the bottlenecks and the "blocks", which are "aggregates" of the work centers required by the released MO. The reduced model will thus have fewer elements, connections and calculations. Hence, it is possible to put forward the following simple indicators:

Model Reduction Ratio = (number of elements necessary for the complete model) / (number of elements in the new reduced model),

Time Save Ratio = (time to parameterize and simulate the whole model) / (time to parameterize and simulate the reduced model).
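For concreteness, the two indicators can be computed directly. A minimal sketch (the function names are ours), using the module counts reported later in Section 4 (275 basic modules for the complete model, 79 for the reduced one):

```python
def model_reduction_ratio(n_complete, n_reduced):
    """Number of elements necessary for the complete model over the
    number of elements in the new reduced model."""
    return n_complete / n_reduced

def time_save_ratio(t_complete, t_reduced):
    """Time to parameterize and simulate the whole model over the time
    to do the same with the reduced model."""
    return t_complete / t_reduced

# Module counts reported in Section 4: 275 basic modules vs. 79 after reduction
print(round(model_reduction_ratio(275, 79), 1))   # 3.5, as in Section 4.3
```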

Fig. 3. Reduced model principle: three WC are aggregated into one block (labels in the figure: EB1, XTS, XT4).


The WC remaining in the model are either conjunctural or structural bottlenecks, or WC which are vital to the synchronization of the MO. All other WC are "aggregated blocks" upstream or downstream of the bottlenecks. By "conjunctural bottleneck" we mean a WC which, for the MPS and predictive scheduling in question, is saturated, that is to say it uses all its available capacity. By "structural bottleneck" we mean a WC which has often been in such a condition in the past. Effectively, for one specific portfolio (one specific MPS) there is only one bottleneck, the most loaded WC, but this WC may differ from the traditional bottlenecks. Theoretically, only the conjunctural bottleneck is useful for our objectives; but to improve the possibilities of the decision-making process in rescheduling, we chose to model the most frequent bottleneck too. We call a "synchronization work center" one or several resources enabling the planning of the MO with bottlenecks and those without to be synchronized. To minimize the number of these synchronization work centers, we need to find the WC most commonly shared amongst the MO of the portfolio not using bottlenecks, and which figure in the routing of at least one MO using them. The reduction algorithm shown below highlights these so-called "synchronization" work centers. Indeed, the MO using structural or conjunctural bottlenecks may be synchronized and scheduled with respect to one another thanks to the scheduling of these bottlenecks; but for the MO that do not use them, synchronization WC are needed.

Reduction algorithm: variables and parameters

Let R be the set of resources: R = {r_i | i ∈ [1, m]}, where m is the number of resources. We denote by r_{k,j} the resource used for MO k, operation j. r_bs ∈ R is the structural bottleneck of the system; r_bc ∈ R is the conjunctural bottleneck of the system.

Let M be the set of manufacturing orders: M = {m_k | k ∈ [1, n]}, where n is the number of manufacturing orders in the considered Master Production Schedule. Each manufacturing order m_k is defined by a set of attributes:

G_k: set of ordered operations for m_k, G_k = {g_{k,j} | k ∈ [1, n], j ∈ [1, q_k]}, where j is the sequencing order and q_k is the number of operations in routing k;
r_{k,j}: the resource used for g_{k,j};
s_{k,j}: the setup time for operation j of m_k;
p_{k,j}: the processing time of one part of m_k for operation j;
μ_{k,j}: service level of one part of m_k for operation j (μ_{k,j} = 1/p_{k,j});
λ_k: arrival rate on the system for m_k.

Let O be the subset of manufacturing orders that do not use the bottlenecks: O = {o_l | ∀l ∈ [1, n], (r_{l,j} ≠ r_bs) and (r_{l,j} ≠ r_bc)}.


The algorithm objective is to provide R' = {r'_i | i ∈ [1, m']}, where m' is the number of new resources to determine. We denote by r'_{k,j} the block used for MO k, operation j, and by G'_k the set of ordered operations for m'_k. R' and G'_k represent respectively all the new resources to be integrated into the reduced model and the routings of each of the MO on these blocks. Each of these sets keeps the same attributes as the equivalent sets of the complete model; the notations are identical, the prime character (') simply indicating that an attribute belongs to the reduced model. All variables not defined here are intermediate and temporary variables used in the algorithm.

Algorithm

(1) Initialisation of R' with the bottleneck resources:
    R' ← {r_bs, r_bc}; O' ← O; C' ← R − R'

(2) Search for synchronisation resources:
    do
        obtain r_z ∈ C', the resource most commonly used by the MO of O'
        X ← set of MO using r_z
        if X ≠ ∅
            R' ← R' + {r_z}          'r_z is added to R'
            O' ← O' − X              'the MO using r_z are removed from O'
        else
            C' ← C' − {r_z}          'the resource cannot be synchronised: it is removed and the operation repeated'
        endif
    while O' ≠ ∅ and C' ≠ ∅

(3) Generation of reduced MO and generation of new routings:
    for k = 1 to n                   'for all the manufacturing orders
        flag ← 0, j' ← 1
        for j = 1 to q_k             'for each operation
            (3.1) if r_{k,j} ∈ R'    'the operation is realised on one of the elements of R'
                'a new operation is defined for the reduced routing, and its attributes updated
                g'_{k,j'} ← g_{k,j}; r'_{k,j'} ← r_{k,j}; s'_{k,j'} ← s_{k,j}; p'_{k,j'} ← p_{k,j}; j' ← j' + 1
            (3.2) else
                if flag = 0          'block not yet created
                    flag ← 1         'the creation of the block is marked
                    g'_{k,j'} ← g_{k,j}; r'_{k,j'} ← r_{k,j}; s'_{k,j'} ← s_{k,j}; p'_{k,j'} ← p_{k,j}
                else                 'the current block is extended
                    r'_{k,j'} ← r'_{k,j'} ∪ r_{k,j}; s'_{k,j'} ← s'_{k,j'} + s_{k,j}; p'_{k,j'} ← p'_{k,j'} + p_{k,j}
                endif
                if j = q_k or r_{k,j+1} ∈ R'   'the operation is the last, or the following belongs to R'
                    R' ← R' + {r'_{k,j'}}      'the new block is added to R'
                    flag ← 0, j' ← j' + 1      'marking the end of the block
                endif
            endif
        next j
    next k


Explanations concerning the reduction algorithm:
(1) Initialization: the bottleneck resources are placed in R'.
(2) Search for synchronization resources. This phase consists in placing in R' the resources enabling the MO that do not use bottlenecks to be synchronized with those that do. We seek the resource most commonly shared amongst all the MO not using bottlenecks; this resource becomes a synchronization work center if it is also used by at least one MO using the bottlenecks.
(3) Generating the reduced MO and the new routings. Here R' is completed with "aggregated blocks" that can be used for several operations in the same routing. For the different operations of each MO, the various required aggregated blocks are added to R'. Two cases become apparent:
(3.1) The operation is performed on one of the elements already in R': a new operation of the routing in progress is defined and its attributes updated.
(3.2) The operation is performed on a resource not belonging to R'. A new block is begun (or the current block extended) whilst gradually updating its attributes (integrated resources, setup time, processing time). The block is closed, and added to R', when the last operation of the MO is reached or when the following operation is performed on a resource already belonging to R'.
We used normal laws for the arrival frequencies on the WC and exponential laws for the service levels. The sum of different exponential laws gives an Erlang law, but we can only add processing times if we neglect the queue times between the WC inside the blocks.
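The reduction steps above can be sketched in code. The following is our reading of the algorithm, under stated assumptions: routings are lists of (resource, setup, processing) triples, a candidate synchronization WC is accepted when it also appears in at least one MO that uses the bottlenecks, and queue times inside blocks are ignored (as in the paper). The function names and the demo data are illustrative, not taken from the paper.

```python
from collections import Counter

def reduce_routings(routings, bottlenecks):
    """Sketch of the routing-reduction algorithm.
    routings    : {mo_id: [(resource, setup_time, processing_time), ...]}
    bottlenecks : set of structural/conjunctural bottleneck resources
    Returns (kept, reduced): `kept` is R' (bottlenecks + synchronization WC);
    `reduced` maps each MO to its reduced routing, in which consecutive
    operations on non-kept WC are aggregated into one block (a frozenset of
    resources, with setup and processing times summed)."""
    # (1) initialisation: R' starts with the bottleneck resources
    kept = set(bottlenecks)
    # O': the MO that use no bottleneck at all
    pending = {k for k, ops in routings.items()
               if not any(r in bottlenecks for r, _, _ in ops)}
    candidates = {r for ops in routings.values() for r, _, _ in ops} - kept

    # (2) search for synchronization resources
    while pending and candidates:
        counts = Counter(r for k in pending for r, _, _ in routings[k]
                         if r in candidates)
        if not counts:
            break
        rz = counts.most_common(1)[0][0]   # most commonly used by the MO of O'
        anchored = any(r == rz for k, ops in routings.items()
                       if k not in pending for r, _, _ in ops)
        if anchored:                        # rz also serves an MO with bottlenecks
            kept.add(rz)
            pending -= {k for k in pending
                        if any(r == rz for r, _, _ in routings[k])}
        else:                               # rz cannot synchronize anything
            candidates.discard(rz)

    # (3) generation of the reduced MO routings
    reduced = {}
    for k, ops in routings.items():
        new_ops, block = [], None
        for r, s, p in ops:
            if r in kept:                   # operation kept as-is
                if block is not None:
                    new_ops.append(block)   # close the current block
                    block = None
                new_ops.append((r, s, p))
            elif block is None:             # open a new aggregated block
                block = (frozenset([r]), s, p)
            else:                           # extend the current block
                res, bs, bp = block
                block = (res | {r}, bs + s, bp + p)
        if block is not None:
            new_ops.append(block)
        reduced[k] = new_ops
    return kept, reduced

# Illustrative demo: XTS is the bottleneck; TNB becomes the synchronization WC
demo = {
    "MO1": [("TNB", 1, 2), ("XTS", 2, 10), ("EB1", 1, 3)],
    "MO2": [("TNB", 1, 2), ("RC3", 3, 1), ("RI3", 2, 4)],
    "MO3": [("TNB", 1, 1), ("EB1", 1, 2)],
}
kept, reduced = reduce_routings(demo, {"XTS"})
print(sorted(kept))     # ['TNB', 'XTS']
print(reduced["MO2"])   # TNB op kept; RC3 and RI3 merged into one block (setup 5, processing 5)
```

In the demo, MO2 and MO3 use no bottleneck, TNB is their most shared resource and also appears in MO1 (which uses XTS), so TNB is kept as the synchronization WC while RC3/RI3 and EB1 collapse into aggregated blocks.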

4. Illustration

4.1. The elements of the problem

The case described below concerns the manufacture of an assembled finished product whose parts are machine-made. This real case has been simplified for the sake of clarity. The routing file in Fig. 4 shows that this case study puts 17 work centers into operation, where Tu is the run time and Ts the setup time. The realization of the routing of the furs is subject to the production of the shirt ranges with their packing. The problem is the same for the distributor of the furs, the secondary bushel and the bushel. By hypothesis, the re-planning period considered concerns a group of ten MO for these six items, resulting from a predictive MPS (for example, it was not possible to complete 4 MO in the last period).

4.2. Experimentation

First, the load in hours must be calculated for the period. Table 1 illustrates this. It highlights the conjunctural bottleneck "XTS". The structural bottleneck XT4 is also heavily loaded. In this study the available maximum capacity is 120 h. First of all, we construct a finite-capacity schedule using MPCS software (we used the Prelude Production software). Secondly, we build a so-called "complete" model enabling us to dynamically simulate this schedule (using the ARENA software), so as to determine whether we obtain the same results for the utilization rates on the bottlenecks and the production time for the 10 MO in question. The same MO portfolio was used with ARENA, with the same schedule and the same priority rules. We then simulate successively using constant, then variable, parameters. This case is a reflection of


Fig. 4. The distributor's simplified routing (routing chart for the six items: shirt, wrapper, secondary bushel, bushel, furs, distributor; for each item, the ordered sequence of work centers with the run time Tu and setup time Ts of each operation; Tu = run time, Ts = setup time).
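The load figures in Table 1 follow from such routings: for each operation, load = quantity × Tu + Ts, accumulated per work center. A minimal sketch (the routing values below are illustrative, of the same kind as those in Fig. 4, but do not reproduce an exact routing from the paper):

```python
def wc_loads(routing, quantity):
    """Load in hours placed on each work center by one MO:
    quantity * Tu + Ts per operation, accumulated per resource."""
    loads = {}
    for wc, tu, ts in routing:
        loads[wc] = loads.get(wc, 0.0) + quantity * tu + ts
    return loads

# Illustrative routing: (resource, run time Tu, setup time Ts) in hours
routing = [("TNB", 0.18, 0.25), ("EB1", 0.10, 0.25), ("XTS", 1.0, 1.5)]
print(wc_loads(routing, 10))
```

For a batch of 10 parts this puts 11.5 h on XTS, which is how a single MO can dominate the bottleneck column of the load table.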

what actually takes place in the shops. Indeed, as maximizing the bottleneck utilization rate is the number one criterion for decision making, the "static MO" schedules proposed by the MPCS (Fig. 5) are no longer sufficient. On this software-generated Gantt chart we notice that the system proposes a schedule which apparently saturates the XTS bottleneck completely (120 h); Fig. 5 shows a smaller load (112 h) on XT4. If problems arise during the period separating two MPS, the manager's role is to reschedule the tasks remaining to be completed, the main aim being to saturate the bottlenecks. This is the only way to maximize the work performed in the finite time period at his disposal until the next MPS. The discrete-event simulation of flows enables the time variables to be visualized. We denote by "μ variance" the parameter characterizing the variability of the service level on the WC of the MO routings. In fact, simulation enables us to show that the load distribution visible on these WC is totally theoretical: the time variables create temporary overloading and under-loading on the bottlenecks. The complete model is made up of 275 basic ARENA modules. This is explained by the fact that it is not sufficient to model the resources: distinctions must be drawn in the model between manufacturing and setup, and the MO for which a machine has been set up must be carried out before another MO begins with its own setup at this WC. The machines used for this process represent 17 WC. The complete


Table 1
Load table (hours per work center and per MO; "Qty" is the MO quantity, Σ the row/column totals)

       Qty  TNB  LH3  AJ0  RE3  EB1  EB2  EB4  XTS  RI1  XT4  EA6  CM0  RC3  RI3  GC1  PE0  PS1  Σ
MO1    10   2.1  1    0    3    2.9  0    0    11   1.8  0    0    0.1  0    0    0    0    0    21.9
MO2    10   2.3  0    0    0    1    0    0    0    0    0    0    0.1  2.7  3.1  0    0    0    9.2
MO3    10   1.7  4.5  0    2    1.8  0    0    11   4.5  15   0    0.1  0    0    0    0    0    40.6
MO4    10   1.8  0    0    1.5  1.7  0    0    11   0    15   0    0.1  0    0    2.6  0    1.2  34.9
MO5    10   0    1    1    8.5  1.2  1.1  0    0    0    0    2.6  0    0    0    0    0    0    15.4
MO6    27   5.1  1.9  0    5.6  7.2  0    0    29   2.3  0    0    0.3  0    0    0    0    0    51.4
MO7    27   5.7  0    0    0    2.1  0    0    0    0    0    0    0.3  3    5    0    0    0    16.1
MO8    27   4    9.6  0    3.7  4    0    0    29   9.6  41   0    0.3  0    0    0    0    0    101
MO9    27   4.3  0    0    2.4  3.7  0    0    29   0    41   0    0.3  0    0    4.5  0    2.2  87.4
MO10   10   0    2.4  0    3.7  2.2  0    0.7  0    0    0    0    0    0    0    0    13   0    22
Σ           27   20.4 1    30.4 27.8 1.1  0.7  120  18.2 112  2.6  1.6  5.7  8.1  7.1  13   3.4

Fig. 5. MPCS work center schedule.

model is thus made up of 17 times the 14 basic ARENA modules necessary for parameterizing the events relative to a WC, plus a part representing the modeling of MO release, composed of 30 basic ARENA modules. The mean operation time for the 10 MO using the complete model is 478 h, as against 232 h according to the MPS resulting from the MPCS (the overall lead time exhibits a large difference, 478 h rather than 232 h; this is due to the fact that we chose a large batch size and a high frequency of service). The variability of the "overall lead time" indicator is 10 h. We thus detect a considerable influence of the time data relative to the frequency parameters controlling the waiting queues in the system (Table 2), and we see the interest of dynamic simulation for optimizing production schedules. Simulation on a complete model with constant parameters obviously provides the same results as the MPCS. However, the model with variable parameters highlights waiting-queue phenomena.


Table 2
MPCS vs. complete model comparison

Criteria               MPCS    Complete model          Complete model
                               (constant parameters)   (variable parameters)
Utilisation rate XTS   89%     89%                     86%
Utilisation rate XT4   82%     82%                     70%
Global lead time       232 h   232 h                   478 h
XT4 load               112 h   112 h                   108.4 h
XTS load               120 h   120 h                   132.3 h
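The gap between the constant-parameter and variable-parameter columns of Table 2 is a pure queueing effect. A toy single-server model of our own (not the ARENA model of the paper) reproduces it: with the same mean service time and the same planned utilization, exponential ("variable") service inflates flow times relative to deterministic ("constant") service.

```python
import random

def mean_flow_time(n_jobs, mean_interarrival, mean_service, variable, seed=42):
    """Single-server FIFO queue. Returns the mean flow time (wait + service).
    `variable` switches between exponential and constant service times
    with the same mean."""
    rng = random.Random(seed)
    t_arrival = 0.0     # arrival instant of the current job
    server_free = 0.0   # instant at which the server becomes available
    total_flow = 0.0
    for _ in range(n_jobs):
        t_arrival += rng.expovariate(1.0 / mean_interarrival)
        service = (rng.expovariate(1.0 / mean_service) if variable
                   else mean_service)
        start = max(t_arrival, server_free)
        server_free = start + service
        total_flow += server_free - t_arrival
    return total_flow / n_jobs

# Same 80% planned utilization in both cases; only the service variability differs
constant = mean_flow_time(20000, 1.25, 1.0, variable=False)
variable = mean_flow_time(20000, 1.25, 1.0, variable=True)
print(constant < variable)   # variable service times inflate flow times
```

This is the "μ variance" effect the paper measures: the visible load balance of the MPCS is theoretical, and variability alone creates the temporary overloads that dynamic simulation reveals.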

Table 3
MPCS, complete model and reduced model comparison

Criteria           MPCS    Complete model          Reduced model           Reduced model
                           (constant parameters)   (constant parameters)   (variable parameters)
Util rate XTS      89%     89%                     99%                     98%
Util rate XT4      82%     82%                     91%                     91%
Global lead time   232 h   232 h                   478 h                   483 h
XT4 load           112 h   112 h                   120 h                   120.2 h
XTS load           120 h   120 h                   130 h                   129.3 h

4.3. Implementing the reduced model

The remaining difficulty in this problem is the time required to:
1. create or modify the model,
2. perform simulations,
3. analyze and decide on re-planning.

Hence the interest of the reduced model (Table 3). The modeling and simulation of the reduced model are quicker (time save ratio of 1.65) than those of the complete model (remember that the "time save ratio" reflects the gain in both the time to simulate and the time to parameterize the model). In fact the model only contains 79 modules (the reduction ratio is then 3.5). The average implementation time of the MO portfolio is 483 h and the standard deviation is 12.06 h. It would thus appear that the reduced model leads us, even more than with predictive scheduling, to underestimate the MO implementation time, but only minimally. The lead time of 483 h is indeed only slightly greater: the level of error is very low (only a 5 h difference).

5. Analysis of the simulation parameters

The interest of such a model is the scope it provides to perform more simulations. If the organization allows it, the user is free to vary the size of the transfer batches and to see instantly the impact on the true dynamics of the parts. The model also enables the service level to be varied if required (temporary operator, or any production problem), and/or any other parameter reflecting the true dynamics of the workshop, in view of the information the user receives describing the events that have taken place in this very short period. We have sought to highlight the effect of the factors that appear essential to us in the parameterizing of the model, i.e. the variability in time frequencies, the number of work centers and the size of the batches. For


this we have experimented with complete and reduced models, using experimental designs, in particular Taguchi's orthogonal designs [14,17]. Taguchi perfected a method that uses fractional, orthogonal tables to build experimental designs and study the effects of the factors of a system (Fig. 6). In cases where the system under study is subject to the effect of external and uncontrollable variables, he recommends the use of "orthogonal" or "product" plans, which can be optimized through the use of a signal-to-noise ratio (S/N).

Fig. 6. Taguchi experimental designs (control factors and input data feed the studied system; the variation of the control factors and the system's response feed the Taguchi method, which yields the factor effects on the response).

We implemented plans having three, then four, modalities for the factors: variability of the service level (noted "μ variance"), batch size and number of WC. Indeed, as the expected responses were imagined to be non-linear, we had to define more than two modalities per factor. We implemented four complete models and one reduced model: a single model corresponding to the reduction criteria defined above does exist, so the different complete models were reduced into the same (reduced) model using the same method. The responses studied were global lead time and the bottleneck load rate. Fig. 7a and b show the graphs of the effects of the factors on the mean lead time. In this paper we do not show the interactions between factors: we were only seeking parameter combinations leading to extremums and, in this case, we wish to analyze the evolution of the indicators with regard to the evolution of the parameters, not to find the value of the extremums (lead time, load rate, . . .). So as to analyze the variances in repetition, each experiment was performed 20 times using the PlanExpert software. The use of repetition enabled us to highlight the fact that the reduced model performs differently to the MPS but similarly to the complete model (Table 4).

Fig. 7. Effects of the factors on the (a) mean and (b) variance (effect curves for μ variance, Nbr of WC and Q/lot over the four modalities, for the complete and reduced models).

Table 4
Studied factors summary

Factors        Modality 1   Modality 2   Modality 3   Modality 4
μ variance     0.5σ         1σ           2σ           3σ
Nbr of WC      10           12           17           22
Quantity/lot   5            10           30           50

The analysis of the responses and the construction of the models in both cases enabled us to show that the "batch size" and "μ variance" factors are significant for the mean of the responses (Fig. 7a). Indeed, the graph relating to the factor "number of WC" exhibits variations of little significance compared to those of the other factors. This is valid for the four modalities. We also performed a variance analysis to test the significance of the factors (Table 5). However, all three factors have a considerable influence on the variation of the responses (Fig. 7b and Table 6); the effect of the "number of WC" does remain minimal with the reduced model. Finally, we were able to show that this factor has little effect on the differences between the average indicators of global lead time and bottleneck load level obtained by dynamic simulation on the one hand and by the predictive scheduling of the MPCS on the other. However, the variance of these mean differences is clearly influenced by all three factors.


Table 5
Variance analysis, effect on the mean

Actions     Square sum      DOF   Variance       F        Q(F)     Significance
Variance    736947.5342     3     245649.1781    6.8869   0.0227   S
Nbr WC      35351.0524      3     11783.6841     0.3304   0.8042
Q/lot       1036938.0830    3     345646.0276    9.6903   0.0102   S
Residuals   214014.6709     6     35669.1118
Total       2023251.3400    15
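The F-ratios of Table 5 can be checked directly from the sums of squares: each factor's variance (mean square) is its square sum divided by its 3 degrees of freedom, and F is that mean square divided by the residual mean square. A minimal sketch reproducing the table's values:

```python
# Sums of squares and degrees of freedom taken from Table 5 (effect on the mean).
ss_factor = {"mu_variance": 736947.5342,
             "nbr_wc": 35351.0524,
             "q_lot": 1036938.0830}
ss_residual, dof_factor, dof_residual = 214014.6709, 3, 6

ms_residual = ss_residual / dof_residual  # about 35669.11, as in the table

# F = factor mean square / residual mean square
f_ratio = {name: (ss / dof_factor) / ms_residual
           for name, ss in ss_factor.items()}
# f_ratio: mu_variance ~ 6.887, nbr_wc ~ 0.330, q_lot ~ 9.690, matching Table 5
```

Comparing each F against the F(3, 6) distribution gives the Q(F) column; at the usual 5% threshold only "mu_variance" and "q_lot" are significant, as flagged by the S marks in the table.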

Table 6
Variance analysis, effect on the variance

Actions         Square sum      DOF   Variance       F          Q(F)     Significance
Variance        3684737.6710    3     1228245.890    287.0081   0.0000   S
Nbr WC          176755.2619     3     58918.4206     13.6777    0.0000   S
Q/lot           5184690.4140    3     1728230.138    403.8410   0.0000   S
Residuals       1343690.1810    70    19199.4312
Total           10390143.530    79
Used variance   273886.8267     64    4279.4817

This means that between the initial MPS resulting from the MPCS and the results of the simulation on a reduced model, the variability of the results can in some cases be considerable (Fig. 8). This, however, is not due to the model being reduced. In Fig. 8, which corresponds to the case of 22 WC, a batch size of 50 (the largest) and a μ Variance equal to 3σ (the largest), we note that there are no significant differences between the models that have been dynamically simulated; the difference lies between the dynamic simulations and the results predicted by the MPCS (which is normal). A model reduced in this way thus has the following advantages:

1. On the critical centers and on the synchronization centers it retains the qualities of the complete model.
2. It enables simulations to be performed more quickly.
3. It reduces decision-making time.

Fig. 8. Comparison of the results. [Chart "MPS/Simulations Comparison": lead time (LT, 0 to 1600) over 16 cases, comparing the MPS values with the complete and reduced simulated LTs.]


Fig. 9. Advantages and risk of the reduced model simulation.

Fig. 10. Interest for the simulation. [Surface chart: the difference (in seconds, up to 2500) between the MPS schedule and the simulation, plotted against the order quantity (1 to 1000) and the service-time variability, ranging from a fixed time of 1 minute to a normal(1 min, 4 s) distribution.]

Moreover, the major advantage appears, on the one hand, when the released batches are large and, on the other, when the MOs relate to products from the same technological family (and thus largely share the same manufacturing process): the simulation then highlights the differences between the plan and the simulated reality. What is more, it is when these released batches are large and the variance of the service and inter-arrival times of the parts is high that the risk in decision making is at its highest (Fig. 9). The analysis of these different parameters has enabled us to highlight an area for which simulation has a definite interest compared to the scheduling decisions of the MPS (Fig. 10) [19]. The circled area shows the difference (expressed in seconds) between the MPS schedule obtained from the MPCS and the simulation, brought about by changes in the values of the WC service-time parameters and of the batch size.
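The region highlighted in Figs. 9 and 10 suggests a simple screening rule: re-simulate only when both the released batches and the service-time variability are large. The sketch below encodes that idea; the threshold values are purely hypothetical placeholders of our own, since the real frontier must be read off the experimental surface of Fig. 10.

```python
# Hypothetical thresholds (NOT from the paper) marking the region of Fig. 10
# where the gap between the MPS schedule and the simulation becomes large.
BATCH_SIZE_THRESHOLD = 100   # illustrative order quantity
VARIABILITY_THRESHOLD = 1.0  # illustrative ratio of service-time std dev to mean

def simulation_worthwhile(batch_size: int, service_cv: float) -> bool:
    """Crude screening rule: reduced-model simulation pays off when the
    released batches are large AND service/arrival variability is high
    (the risk region of Fig. 9)."""
    return (batch_size >= BATCH_SIZE_THRESHOLD
            and service_cv >= VARIABILITY_THRESHOLD)
```

With these placeholder thresholds, `simulation_worthwhile(500, 2.0)` holds while `simulation_worthwhile(10, 0.1)` does not; in practice the thresholds would be calibrated on experiments such as those behind Fig. 10.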

6. Conclusion

We have shown in this article that the dynamic simulation of flows on reduced models can, in certain cases, be useful and advantageous. After describing our method for constructing reduced models, we demonstrated the effects of the significant factors on the scheduling results of an MPS. In particular, we were able to reveal an area for which the dynamic simulation of flows was not appropriate.


We have observed a certain genericity in reduced models. We wish to study this aspect of the problem in more detail so as to show that it may be possible to create data banks of standard reduced models. We believe that the way the parameters of the "aggregated blocks" are calculated may differ according to the case under study and that, consequently, the blocks might behave differently in the model. Indeed, according to the type of statistical law used to define the behavior of the WC, the calculation of the parameters of the aggregated blocks may differ. For example, if the service times of the WC are constant or follow "symmetrical" distributions, it is sufficient to sum them to arrive at the service time of the block. Other investigations are required in this field. Finally, thought needs to be given to relevant methods for rapidly re-planning the MPS concerned. Indeed, when a problem arises in the shop, the planner should, with the aid of the dynamic simulation of flows performed on these reduced models, be able to propose a new schedule for the MOs remaining to be made before the next MPS is calculated. This problem is similar to the "shifting bottleneck" heuristic put forward by Adams et al. [1], in which a single-machine scheduling problem with release times is solved: a schedule complying with a certain number of constraints must be found on the bottleneck machine, saturating the critical bottlenecks whilst meeting the due dates.
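The summation rule mentioned above can be sketched as follows for a block of WCs in series. The independence assumption and the (mean, standard deviation) parametrization are ours: the paper only states that constant or symmetric service times may simply be summed.

```python
import math

def aggregate_block(wc_service_times):
    """Service-time parameters of an 'aggregated block' replacing WCs in series.

    wc_service_times: list of (mean, std_dev) pairs, one per work centre.
    Assuming independent, symmetric (e.g. normal) service times, the block's
    service time has mean = sum of the means and variance = sum of the
    variances (std devs do NOT add directly).
    """
    block_mean = sum(mu for mu, _ in wc_service_times)
    block_std = math.sqrt(sum(sd * sd for _, sd in wc_service_times))
    return block_mean, block_std

# Three WCs aggregated into one block (illustrative figures):
mu, sd = aggregate_block([(10.0, 1.0), (5.0, 2.0), (8.0, 2.0)])
# mu -> 23.0, sd -> 3.0
```

For constant service times (std dev 0) the rule degenerates to a plain sum, which is the simplest case evoked in the text.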

References

[1] J. Adams, E. Balas, D. Zawack, The shifting bottleneck procedure for job-shop scheduling, Management Science 34 (3) (1988).
[2] R. Belz, P. Mertens, Combining knowledge-based systems and simulation to solve rescheduling problems, Decision Support Systems 17 (1996) 141–157.
[3] R.J. Brooks, A.M. Tobias, Simplification in the simulation of manufacturing systems, International Journal of Production Research 38 (2000) 1009–1027.
[4] P. Charpentier, A. Thomas, Model reducing method for the scheduling decision-making, IEPM2001, Quebec, 2001.
[5] M. Doumeing, Conception de systèmes flexibles de production, Laboratoire GRAI, 1989.
[6] E.M. Goldratt, J. Cox, Le but, Ed. AFNOR Gestion, 1986.
[7] Y.F. Hung, R.C. Leachman, Reduced simulation models of wafer fabrication facilities, International Journal of Production Research 37 (1999) 2685–2701.
[8] J.S. Hwang, S. Hsieh, H.C. Chou, A Petri-net based structure for AS/RS operation modelling, International Journal of Production Research 36 (1999) 3323–3346.
[9] G.S. Innis, E. Rexstad, Simulation model simplification techniques, Simulation 41 (1983) 7–15.
[10] R.C. Leachman, Preliminary design and development of a corporate-level production planning system for the semiconductor industry, Optimization in Industry, Chichester, UK, 1986.
[11] H. Li, Z. Li, L.X. Li, B. Hu, A production rescheduling expert simulation system, European Journal of Operational Research 124 (2000) 283–293.
[12] Y.C.E. Li, W.H. Shaw, Simulation modeling of a dynamic job shop rescheduling with machine availability constraints, CIE 35 (1998) 117–120.
[13] M. Khouja, An aggregate production planning framework for the evaluation of volume flexibility, Production Planning and Control 9 (2) (1998) 127–137.
[14] M. Pillet, Introduction aux plans d'expériences par la méthode Taguchi, Éditions d'Organisation Université, Paris, 1992.
[15] A. Pritsker, K. Snyder, Simulation for planning and scheduling, APICS (1994).
[16] P. Roder, Visibility is the key to scheduling success, APICS, Planning and Scheduling (1994) 53.
[17] G. Taguchi, Orthogonal Arrays and Linear Graphs, American Supplier Institute Press, 1986.
[18] A. Thomas, Pour un pilotage dynamique et intégré, Revue Logistique et Management 7 (1999) 43–55.
[19] A. Thomas, P. Charpentier, Pertinence de modèles réduits pour la prise de décision en réordonnancement, CPI2001, Fès, 2001.
[20] T.Y. Tseng, T.F. Ho, R.K. Li, Mixing macro and micro flowtime estimation model: wafer fabrication, International Journal of Production Research 37 (1999) 2447–2461.
[21] T.E. Vollmann, W.L. Berry, D.C. Whybark, Manufacturing Planning and Control Systems, Business One Irwin, 1992.
[22] B.P. Zeigler, Theory of Modelling and Simulation, Wiley, New York, 1976.