
Predictive machinability models for a selected hard material in turning operations A.M.A. Al-Ahmari ∗ Industrial Engineering Department, College of Engineering, King Saud University, P.O. Box 800, Riyadh 11421, Saudi Arabia Received 22 August 2006; received in revised form 1 December 2006; accepted 16 February 2007

Abstract

In this paper, empirical models for tool life, surface roughness and cutting force in turning operations are developed. Process parameters (cutting speed, feed rate, depth of cut and tool nose radius) are used as inputs to the machinability models. Two important data mining techniques are used: response surface methodology and neural networks. Data from 28 experiments on turning austenitic AISI 302 are used to generate, compare and evaluate the proposed tool life, cutting force and surface roughness models for the considered material.
© 2007 Elsevier B.V. All rights reserved.

Keywords: Neural networks; Response surface methodology; Machinability models

∗ Tel.: +966 1 4676664; fax: +966 1 4678657. E-mail address: [email protected]

0924-0136/$ – see front matter © 2007 Elsevier B.V. All rights reserved. doi:10.1016/j.jmatprotec.2007.02.031

1. Introduction

Predicting machinability models and determining the optimum values of process parameters in manufacturing systems have long been areas of interest for researchers and manufacturing engineers. Machinability database systems are essential for selecting optimum process parameters during the process planning stage, which is an important component of Computer Integrated Manufacturing (CIM) [1]. The machining economics problem of single-pass and multipass turning operations has been considered by many researchers; see for example [2–11]. Most of these researchers have used generalized empirical equation systems that utilize expanded Taylor tool life equations to determine proper machining conditions based on three criteria: minimum production cost, maximum production rate or maximum profit rate per item. These criteria have been considered in both constrained and unconstrained formulations of the machining economics problem.

Tool life, surface roughness and cutting forces affect the cost, time, design features and quality of manufacturing operations. In addition, these machinability models are critical constraints for process parameter selection in process planning systems. Therefore, an accurate representation of these three factors in empirical models is very important. These models cannot be predicted without experimental work on a specific material under specific conditions. The tool life (t) equation generally used by researchers is the extended Taylor equation [12]:

v t^x f^y d^z = C    (1)

where v, f and d are the cutting speed, feed rate and depth of cut, respectively, x, y and z are the tool life exponents, and C is a constant. Researchers also use the following formula to determine the surface roughness:

Ra = f^2 / (32r)    (2)

where r is the tool nose radius. The following formula is used to determine the cutting force:

Fc = K f^n1 d^n2    (3)

where K is a constant, n1 and n2 are the cutting force equation exponents, and d is the depth of cut. The above three machining functions (tool life, surface roughness and cutting force) are developed as functions of the process parameters using different data mining techniques as model building methods. The relationships between machining functions and parameters are commonly approximated by polynomial functions. Multiple linear regression analysis (RA), response surface methodology


A.M.A. Al-Ahmari / Journal of Materials Processing Technology 190 (2007) 305–311

(RSM), and computational neural networks (CNN) are used to predict models of these process functions. A considerable number of studies have investigated the general effects of process parameters (cutting speed, feed rate, depth of cut, etc.) on process functions (tool life, cutting force and surface roughness) [1,13]. Most of these models are based on RA; very few papers have used CNN [14–19]. In this paper, data from a factorial design for the three responses (tool life, cutting force and surface roughness) when turning austenitic AISI 302 are used to predict the machinability models using RSM and CNN. The obtained machinability models are compared against each other using relative error analysis, descriptive statistics and hypothesis testing. Four machining parameters are considered: cutting speed (v, m/min), feed rate (f, mm/rev), depth of cut (d, mm) and tool nose radius (r, mm).

This paper therefore makes the following contributions. First, it applies response surface methodology (RSM) to improve the tool life, surface roughness and cutting force models using Minitab-14 [20]; the impact of each parameter, the interactions, and the quadratic terms of these parameters on the three models are examined and evaluated. Second, neural network (NN) models are developed to predict the tool life, cutting force and surface roughness using MATLAB software. Finally, the resulting models are compared against each other to identify the most appropriate approach for predicting the empirical models of the considered responses.
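As a quick illustration of Eqs. (1)–(3), the sketch below evaluates the textbook relations for tool life, surface roughness and cutting force. The exponent and constant values used here are arbitrary placeholders for demonstration, not the fitted values reported later in the paper.

```python
def tool_life(v, f, d, x, y, z, C):
    """Solve the extended Taylor equation v*t^x*f^y*d^z = C for t (Eq. (1))."""
    return (C / (v * f**y * d**z)) ** (1.0 / x)

def surface_roughness(f, r):
    """Ideal surface roughness Ra = f^2 / (32 r) (Eq. (2))."""
    return f**2 / (32.0 * r)

def cutting_force(f, d, K, n1, n2):
    """Cutting force Fc = K * f^n1 * d^n2 (Eq. (3))."""
    return K * f**n1 * d**n2

# Placeholder constants, chosen only to demonstrate the calls.
print(tool_life(v=60.0, f=0.25, d=0.5, x=2.0, y=0.75, z=0.4, C=3000.0))
print(surface_roughness(f=0.25, r=0.8))   # for f in mm/rev and r in mm
print(cutting_force(f=0.25, d=0.5, K=2000.0, n1=0.7, n2=0.6))
```

Solving Eq. (1) for t in this way is what tool life optimization routines do repeatedly when scanning candidate cutting conditions.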

Table 2
Design of experiment and data for model constructions

Exp. no.   v    f    d    r    T       Fc     Ra
1         −1   −1    0    0    25.76    470    1.1
2          1   −1    0    0     4.5     350    0.8
3         −1    1    0    0    13.5    1520   13.8
4          1    1    0    0     2.7    1400   14.2
5          0    0   −1   −1    13       410    4.1
6          0    0    1   −1     9.67   1020    3.7
7          0    0   −1    1    14.83    540    1.35
8          0    0    1    1    10.5    1220    1.2
9         −1    0    0   −1    19.33    700    4.5
10         1    0    0   −1     3.5     540    3.7
11        −1    0    0    1    20       750    2.35
12         1    0    0    1     2.88    600    1.6
13         0   −1   −1    0    17.83    520    1.6
14         0    1   −1    0    11.88   1840   16
15         0   −1    1    0    13.1     720    1.3
16         0    1    1    0     9.67   2640   15.4
17         0   −1    0   −1    13       410    1.1
18         0    1    0   −1     5.8    1400   29
19         0   −1    0    1    14.5     530    0.6
20         0    1    0    1     6.75   1520   10.8
21        −1    0   −1    0    26.5     420    4.8
22         1    0   −1    0     3.17    380    3.2
23        −1    0    1    0    17.17   1350    3.1
24         1    0    1    0     2.52   1140    2.1
25         0    0    0    0     9.87    680    2.8
26         0    0    0    0    10.73    460    3.2
27         0    0    0    0     8.33    600    2.7
28         0    0    0    0    10.17    520    3.5
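The coded levels in Table 2 map to physical machining values through the factor levels of Table 1. A small helper (a sketch; the level dictionaries simply transcribe Table 1) decodes a coded run into physical units:

```python
# Factor levels from Table 1 (coded level -> physical value).
LEVELS = {
    "v": {-1: 25,   0: 60,   1: 144},   # cutting speed, m/min
    "f": {-1: 0.1,  0: 0.25, 1: 0.7},   # feed, mm/rev
    "d": {-1: 0.25, 0: 0.5,  1: 1.6},   # depth of cut, mm
    "r": {-1: 0.4,  0: 0.8,  1: 1.6},   # nose radius, mm
}

def decode(run):
    """Convert a coded run, e.g. {'v': -1, 'f': -1, 'd': 0, 'r': 0}, to physical units."""
    return {factor: LEVELS[factor][code] for factor, code in run.items()}

# Experiment 1 of Table 2: v = -1, f = -1, d = 0, r = 0
print(decode({"v": -1, "f": -1, "d": 0, "r": 0}))
# -> {'v': 25, 'f': 0.1, 'd': 0.5, 'r': 0.8}
```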

2. Predictive methods of machinability models

2.1. Design of experiment and RA models

Mozher [21] conducted 28 experiments on austenitic AISI 302 bars of 50 mm diameter and 500 mm length; the details of the experimental setup are given in [21]. He considered four machining parameters: cutting speed (v, m/min), feed rate (f, mm/rev), depth of cut (d, mm) and tool nose radius (r, mm). The considered outputs are tool life (T, min), cutting force (Fc, N) and surface finish (Ra, μm). The four process parameters were considered at three levels (−1, 0 and 1), as shown in Table 1. The experiments were conducted based on a Box–Behnken design, which is rotatable and consists of blocks in an orthogonal arrangement. The inputs and responses are given in Table 2. The experiments were performed on a 16K20 lathe using throw-away carbide inserts IN.2, IN.3 and IN.4, selected according to the design levels of the tool nose radius, with toolholder T2.

Table 1
Factors and levels of the experiment for model constructions

Level        Cutting speed, v   Feed, f   Depth of cut, d   Nose radius, r
Low (−1)     25                 0.1       0.25              0.4
Center (0)   60                 0.25      0.5               0.8
High (+1)    144                0.7       1.6               1.6

Mozher constructed three models of tool life, cutting force and surface finish using the RA technique and the logarithmic transformation of their first-order models, and evaluated these models using the analysis of lack of fit. The three models are as follows:

T = 406.423 r^0.038 / (v^1.051 f^0.289 d^0.219)    (4)

Fc = 4182.220 f^0.621 d^0.563 r^0.116 / v^0.108    (5)

Ra = 31.025 f^1.347 / (v^0.159 d^0.159 r^0.605)    (6)
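The fitted RA models (4)–(6) can be evaluated directly. The sketch below computes the three predictions at the design center point (v = 60 m/min, f = 0.25 mm/rev, d = 0.5 mm, r = 0.8 mm, from Table 1); the placement of each exponent in the numerator or denominator is read from Eqs. (4)–(6).

```python
def T_ra(v, f, d, r):
    """Tool life model, Eq. (4)."""
    return 406.423 * r**0.038 / (v**1.051 * f**0.289 * d**0.219)

def Fc_ra(v, f, d, r):
    """Cutting force model, Eq. (5)."""
    return 4182.220 * f**0.621 * d**0.563 * r**0.116 / v**0.108

def Ra_ra(v, f, d, r):
    """Surface roughness model, Eq. (6)."""
    return 31.025 * f**1.347 / (v**0.159 * d**0.159 * r**0.605)

center = dict(v=60.0, f=0.25, d=0.5, r=0.8)
print(T_ra(**center), Fc_ra(**center), Ra_ra(**center))
```

At the center point these give roughly T ≈ 9.5 min, Fc ≈ 749 N and Ra ≈ 3.2 μm, in line with the center-point runs of Table 2.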

The major drawback of the RA models is that they did not apply the factorial experimentation approach to design the experiments [17]. Therefore, the data and conclusions obtained could be biased, as factor interactions were not clearly examined. In addition, the method is limited to a small number of parameters [17]. In the next sections, therefore, the machinability models are developed using response surface methodology and computational neural network methods.

2.2. Response surface method

Response surface methods are used to examine the relationship between one or more response variables and a set of quantitative experimental variables or factors. These methods are often employed after the controllable factors have been identified, when the objective is to find the factor settings that optimize the response. Designs of this type are usually chosen when curvature in the response surface is suspected. It is clear from the literature that the tool life, cutting force and surface finish equations are not linear, and they can be predicted using the response surface method. The initial RSM analysis includes all parameters and their interactions; the models are then reduced by eliminating terms that have no significant effect on the responses. The revised RSM analysis is given in Table 3.

Table 3
Response surface analysis

Response surface regression: T vs. v, f, d (uncoded units)^a
Estimated regression coefficients for T

Term       Coefficient   S.E. coefficient   T         p
Constant   53.8187       2.97541            18.088    0.000
v          −0.4811       0.03782            −12.721   0.000
f          −51.9747      7.92187            −6.561    0.000
d          −40.9756      6.15585            −6.656    0.000
v×v        0.0016        0.00018            8.435     0.000
f×f        34.2576       8.16431            4.196     0.001
d×d        22.3558       4.26304            5.244     0.000
v×f        0.1263        0.03229            3.912     0.001
v×d        0.0680        0.02693            2.525     0.021
f×d        9.7773        5.21588            1.875     0.077

Analysis of variance for T

Source           d.f.   Seq SS    Adj SS     Adj MS    F       p
Regression       9      1127.13   1127.128   125.236   73.06   0.000
  Linear         3      933.83    315.423    105.141   61.33   0.000
  Square         3      153.47    154.108    51.369    29.97   0.000
  Interaction    3      39.82     39.823     13.274    7.74    0.002
Residual error   18     30.86     30.856     1.714
  Lack-of-fit    9      23.68     23.679     2.631     3.30    0.045
  Pure error     9      7.18      7.177      0.797
Total            27     1157.98

Response surface regression: Fc vs. f, d (uncoded units)^b
Estimated regression coefficients for Fc

Term       Coefficient   S.E. coefficient   T        p
Constant   733.3         261.2              2.808    0.010
f          −462.2        898.3              −0.515   0.612
d          −1549.0       691.3              −2.241   0.035
f×f        2185.6        963.6              2.268    0.033
d×d        1615.8        502.4              3.216    0.004
f×d        1379.3        625.0              2.207    0.038

Analysis of variance for Fc

Source           d.f.   Seq SS      Adj SS      Adj MS      F       p
Regression       5      7,671,607   7,671,607   1,534,321   61.97   0.000
  Linear         2      7,208,112   124,336     62,168      2.51    0.104
  Square         2      342,928     336,422     168,211     6.79    0.005
  Interaction    1      120,567     120,567     120,567     4.87    0.038
Residual error   22     544,689     544,689     24,759
  Lack-of-fit    3      371,552     371,552     123,851     13.59   0.000
  Pure error     19     173,138     173,138     9,113
Total            27     8,216,296

Response surface regression: Ra vs. f, r (uncoded units)^c
Estimated regression coefficients for Ra

Term       Coefficient   S.E. coefficient   T        p
Constant   −5.459        1.833              −2.978   0.007
f          45.590        5.071              8.990    0.000
r          2.780         1.911              1.455    0.159
f×r        −21.095       5.391              −3.913   0.001

Analysis of variance for Ra

Source           d.f.   Seq SS    Adj SS     Adj MS    F       p
Regression       3      1055.47   1055.472   351.824   74.61   0.000
  Linear         2      983.26    767.360    383.680   81.36   0.000
  Interaction    1      72.22     72.215     72.215    15.31   0.001
Residual error   24     113.18    113.177    4.716
  Lack-of-fit    5      104.19    104.190    20.838    44.05   0.000
  Pure error     19     8.99      8.987      0.473
Total            27     1168.65

a S = 1.309, R^2 = 97.3%, R^2(adj) = 96.0%.
b S = 157.3, R^2 = 93.4%, R^2(adj) = 91.9%.
c S = 2.172, R^2 = 90.3%, R^2(adj) = 89.1%.

From Table 3, the empirical models of the tool life, cutting force and surface finish are as follows:

T = 53.819 − 0.481v − 51.975f − 40.976d + 0.002v^2 + 34.258f^2 + 22.356d^2 + 0.126vf + 0.068vd + 9.777fd    (7)

Fc = 733.3 − 462.2f − 1549.0d + 2185.6f^2 + 1615.8d^2 + 1379.3fd    (8)

Ra = −5.459 + 45.590f + 2.780r − 21.095fr    (9)

Fig. 1. Two-hidden-layer ANN structure. Symbols: P = input vector; R = number of inputs = 4; the superscript indicates the layer order (first layer, second layer, etc.); S = number of neurons (S^1 = 73, S^2 = 45 and S^3 = 3); a = outputs; F^1 and F^2 are hyperbolic tangent sigmoid (tansig) functions, i.e. a_i = f(n_i) = (e^(n_i) − e^(−n_i))/(e^(n_i) + e^(−n_i)); W = weights; b = biases.
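As a numerical check, the second-order models (7)–(9) can be evaluated at the design center point (v = 60, f = 0.25, d = 0.5, r = 0.8 in uncoded units). This is only a sketch of how the fitted polynomials are used, with the coefficients taken from Table 3:

```python
def T_rsm(v, f, d):
    """Tool life, Eq. (7) (uncoded units, coefficients from Table 3)."""
    return (53.8187 - 0.4811*v - 51.9747*f - 40.9756*d
            + 0.0016*v**2 + 34.2576*f**2 + 22.3558*d**2
            + 0.1263*v*f + 0.0680*v*d + 9.7773*f*d)

def Fc_rsm(f, d):
    """Cutting force, Eq. (8)."""
    return 733.3 - 462.2*f - 1549.0*d + 2185.6*f**2 + 1615.8*d**2 + 1379.3*f*d

def Ra_rsm(f, r):
    """Surface roughness, Eq. (9)."""
    return -5.459 + 45.590*f + 2.780*r - 21.095*f*r

print(T_rsm(60, 0.25, 0.5), Fc_rsm(0.25, 0.5), Ra_rsm(0.25, 0.8))
```

At the center point the models give roughly T ≈ 10.1 min, Fc ≈ 556 N and Ra ≈ 3.9 μm, comparable to the center-point replicates in Table 2.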

2.3. Neural network models

Computational neural networks possess a number of attractive features for modeling complex manufacturing operations: universal function approximation capability, tolerance of noisy or missing data, accommodation of multiple non-linear variables with unknown interactions, and good generalization capability [22]. The neural network used here for data mining is a feed-forward backpropagation (BP) multilayer network. It consists of two hidden hyperbolic tangent sigmoid (tansig) layers followed by a linear (purelin) output layer of fully connected processing elements, which map inputs from the continuous domain to outputs normalized to the range (−1.0 to 1.0). This neural network is chosen because it can be trained to approximate most functions arbitrarily well [23–25]. The two tansig hidden layers have 73 and 45 neurons, and the purelin output layer has three neurons for the three outputs. The number of hidden layers and the number of neurons in each hidden layer depend on the complexity of the mapping, computer memory, computation time and the desired data control effect. Too many neurons waste computer memory and computation time, while too few neurons may not provide the desired data control effect; the more complicated the input–output mapping, the more neurons are needed. Due to the non-linearity of the given data set, two hidden layers are selected, as shown in Fig. 1. In Fig. 1, each neuron consists of the following:

1. An input vector: P = (P1, P2, P3, P4, ..., Pn).
2. A weighting matrix W = (W1, W2, W3, W4, ..., Wn) of interconnection weights.
3. A biasing vector: b = (b1, b2, b3, b4, ..., bn).
4. A combining function n_i that is the weighted sum of the inputs to that neuron, i.e.

   n_i = Σ_k W_ik P_k    (10)

   where k = 1, 2, 3, ..., S and S is the number of neurons.
5. An output or activation function f(n_i) that computes a single output value (a_i).

In computing the activation value of each neuron, all the weights in the network are first set to random values (W^1, b^1 and W^2, b^2). The conjugate gradient backpropagation algorithm is used for training the artificial neural network (ANN) because it is faster than the ordinary backpropagation algorithm developed in [25]. This algorithm is summarized in 11 steps in Appendix A [26]. The input–output pairs required for training are obtained by adding normally distributed noise to the given data to create a training set; the non-noisy data are then used as the testing data. The Neural Network Toolbox software package is used for training and testing the given data [27], as shown in Fig. 2.

Fig. 2. Training error.
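The structure in Fig. 1 (4 inputs -> 73 tansig -> 45 tansig -> 3 linear outputs) can be sketched as a plain forward pass. This is an illustrative reimplementation in pure Python, not the MATLAB toolbox model; the small random weights stand in for trained values.

```python
import math
import random

random.seed(0)

def layer(weights, biases, inputs, activation):
    """One fully connected layer: a_i = activation(sum_k W[i][k]*inputs[k] + b[i])."""
    return [activation(sum(w * x for w, x in zip(row, inputs)) + b)
            for row, b in zip(weights, biases)]

def init(n_out, n_in):
    """Small random weights and biases for one layer (placeholder, untrained)."""
    return ([[random.uniform(-0.5, 0.5) for _ in range(n_in)] for _ in range(n_out)],
            [random.uniform(-0.5, 0.5) for _ in range(n_out)])

# Layer sizes from Fig. 1: R = 4 inputs, S1 = 73, S2 = 45, S3 = 3.
W1, b1 = init(73, 4)
W2, b2 = init(45, 73)
W3, b3 = init(3, 45)

def forward(p):
    a1 = layer(W1, b1, p, math.tanh)        # first tansig hidden layer
    a2 = layer(W2, b2, a1, math.tanh)       # second tansig hidden layer
    return layer(W3, b3, a2, lambda n: n)   # purelin output layer

out = forward([-1.0, -1.0, 0.0, 0.0])       # a coded run from Table 2
print(len(out))                             # three outputs: T, Fc, Ra (normalized)
```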

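The damped weight update used in training (Appendix A, step 9, Eq. (21): Δx_k = −(JᵀJ + μ_k I)⁻¹Jᵀv) can be illustrated on a tiny linear least-squares problem, where for μ = 0 a single step from x = 0 recovers the exact solution. This is a pure-Python sketch for a two-parameter model y = x1 + x2·u; the problem data are invented for the demonstration.

```python
def update_step(J, v, mu):
    """Damped step dx = -(J^T J + mu*I)^(-1) J^T v for a 2-parameter problem."""
    # Form the 2x2 normal matrix A = J^T J + mu*I and the vector g = J^T v.
    a = sum(r[0] * r[0] for r in J) + mu
    b = sum(r[0] * r[1] for r in J)
    c = sum(r[1] * r[1] for r in J) + mu
    g0 = sum(r[0] * e for r, e in zip(J, v))
    g1 = sum(r[1] * e for r, e in zip(J, v))
    det = a * c - b * b
    # Solve A*dx = -g with the explicit 2x2 inverse.
    return [-(c * g0 - b * g1) / det, -(a * g1 - b * g0) / det]

# Data generated from y = 1 + 2u; residuals v = model(x) - y at x = (0, 0).
us = [0.0, 1.0, 2.0, 3.0]
ys = [1.0, 3.0, 5.0, 7.0]
x = [0.0, 0.0]
J = [[1.0, u] for u in us]                       # d(residual)/dx for a linear model
v = [x[0] + x[1] * u - y for u, y in zip(us, ys)]
print(update_step(J, v, mu=0.0))                 # with mu = 0: exact step [1.0, 2.0]
```

Increasing μ shrinks the step toward a small gradient-descent move, which is exactly the role μ plays in steps 9–10 of the appendix.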

3. Evaluation and comparisons of the models

The three methods of predicting machinability models are evaluated and compared to determine the prediction method that provides the highest accuracy and best results. The relative error, in percentage terms, between the fitted values predicted by the three methods (RA, RSM and CNN) and the observed values of the three outputs is computed using the following formula:

Relative error (%) = |model predicted value − observed value| / observed value × 100

Table 4 gives the relative errors for the three modeling techniques. Descriptive statistics of the errors are calculated for the three models, as shown in Table 5: the error mean, standard deviation, median, S.E. of the mean, trimmed mean, minimum and maximum. It is evident that the CNN models provide better prediction capability, because they generally offer the ability to model more complex non-linearities and interactions than RA and RSM. RSM is better than RA for the tool life and cutting force models. To compare the goodness of fit of the RA, RSM and CNN models, some representative hypothesis tests are conducted for the model construction process and the results are shown in

Table 4
The relative error (%) for the three modeling techniques

           RA                       RSM                      CNN
Exp. no.   T      Fc     Ra         T      Fc     Ra         T      Fc     Ra
1          21.50  4.51   9.31       4.15   13.33  133.05     0.55   0.49   2.57
2          10.63  6.32   5.44       14.84  16.39  145.45     1.40   8.77   4.82
3          31.61  3.38   12.37      9.47   4.80   22.21      1.45   0.00   2.69
4          4.67   6.93   17.19      58.23  13.78  18.77      6.22   1.98   0.67
5          16.61  12.32  34.09      11.84  35.19  20.51      1.13   4.02   11.53
6          17.25  1.46   19.20      0.11   14.31  33.54      0.14   1.36   2.27
7          22.97  0.16   67.93      1.96   2.65   44.33      1.09   0.48   2.72
8          19.69  3.24   51.55      8.01   4.43   62.38      0.50   4.35   11.90
9          20.82  6.74   25.66      2.96   20.54  9.80       0.31   1.65   2.35
10         6.13   14.72  15.89      24.82  3.00   33.54      5.80   5.36   1.65
11         23.05  17.00  0.78       0.49   25.84  17.09      0.54   3.37   3.77
12         35.92  21.26  10.51      8.63   7.30   21.78      23.82  0.10   7.65
13         18.51  46.80  39.38      6.75   12.09  122.73     0.19   1.70   0.16
14         30.58  47.36  5.77       13.25  21.97  5.41       0.47   0.68   0.90
15         18.13  16.15  40.15      0.35   26.90  127.97     3.13   5.30   14.31
16         37.04  19.93  21.46      9.77   4.81   9.51       5.34   0.03   0.98
17         6.46   8.02   22.98      9.58   0.65   157.44     2.45   2.29   12.64
18         19.01  5.69   27.49      20.42  13.78  25.31      10.62  3.27   0.99
19         11.63  16.44  7.03       1.75   23.14  71.20      2.49   0.34   18.93
20         7.76   2.02   19.71      3.47   4.80   32.63      8.06   2.36   0.62
21         5.30   30.50  15.54      5.97   31.97  17.84      0.18   1.62   1.83
22         40.01  19.59  3.93       77.43  45.87  23.23      2.87   9.82   8.83
23         19.96  11.39  4.91       6.30   13.63  27.21      3.08   0.55   0.39
24         30.01  13.00  17.44      99.55  2.28   87.79      33.59  7.33   13.23
25         3.13   8.43   12.93      2.51   18.20  40.84      5.34   39.64  29.75
26         10.89  60.28  1.19       5.70   20.92  23.23      3.10   10.77  13.54
27         14.78  22.88  17.11      21.46  7.30   46.06      24.82  31.59  34.56
28         5.99   41.79  9.66       0.51   6.96   12.67      2.24   21.07  3.80

Table 5
Descriptive statistics for error comparisons

Variable   Model   N    Mean    Median   Tr mean   S.D.    S.E. mean   Minimum   Maximum
T          RA      28   18.22   18.32    17.96     10.5    1.98        3.13      40.01
           RSM     28   15.37   7.38     12.72     23.83   4.5         0.11      99.55
           CNN     28   5.39    2.47     4.51      8.32    1.57        0.14      33.59
Fc         RA      28   16.73   12.66    15.69     15.52   2.93        0.16      60.28
           RSM     28   14.89   13.7     14.24     11.2    2.12        0.65      45.87
           CNN     28   6.08    2.33     5.02      9.55    1.8         0         39.64
Ra         RA      28   19.16   16.5     18        15.66   2.96        0.78      67.93
           RSM     28   49.77   29.92    47.33     45.81   8.66        5.41      157.44
           CNN     28   7.5     3.25     6.74      8.8     1.66        0.16      34.56
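The entries of Table 4 and the summary statistics of Table 5 follow directly from the relative-error definition above. A minimal sketch (the sample pairs below are illustrative, not rows of Table 4):

```python
def relative_error(predicted, observed):
    """Relative error (%) = |predicted - observed| / observed * 100."""
    return abs(predicted - observed) / observed * 100.0

def describe(errors):
    """Mean, median and maximum of a list of errors, as reported in Table 5."""
    s = sorted(errors)
    n = len(s)
    median = s[n // 2] if n % 2 else (s[n // 2 - 1] + s[n // 2]) / 2.0
    return {"mean": sum(s) / n, "median": median, "max": s[-1]}

# Illustrative (predicted, observed) pairs.
pairs = [(9.5, 10.17), (11.0, 10.73), (8.0, 8.33), (10.4, 9.87)]
errors = [relative_error(p, o) for p, o in pairs]
print(describe(errors))
```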


Table 6
Hypothesis testing to compare the prediction approaches (95% confidence level)

                  p-value (tool life)       p-value (cutting force)   p-value (surface finish)
                  RA     RSM    CNN         RA     RSM    CNN         RA     RSM    CNN
Mean
  Paired t-test   0.815  0.141  0.404       0.406  0.998  0.190       0.291  0.999  0.372
  F-test          0.428  0.767  0.979       0.225  0.860  0.889       0.246  0.793  0.944
Variance
  Levene's test   0.815  0.840  0.959       0.448  0.851  0.920       0.720  0.921  0.959

Table 6. The paired t-test, F-test and Levene's test are used to test the results. All p-values are greater than 0.05, which means that the null hypotheses cannot be rejected. The p-values in Table 6 thus indicate that there is no significant evidence that the observed data and the data obtained from the RA, RSM and CNN models differ. Therefore, all three prediction models have a statistically satisfactory goodness of fit from the modeling point of view. From the above comparisons it can be concluded that CNN is better than the RA and RSM methods in predicting machinability models.
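The paired t-test in Table 6 can be reproduced in outline: for paired samples the statistic is the mean of the pairwise differences divided by its standard error. A pure-Python sketch (the two samples are illustrative, not columns of Table 4):

```python
import math

def paired_t_statistic(a, b):
    """Paired t statistic: mean difference over its standard error."""
    n = len(a)
    diffs = [x - y for x, y in zip(a, b)]
    mean_d = sum(diffs) / n
    var_d = sum((d - mean_d) ** 2 for d in diffs) / (n - 1)   # sample variance
    return mean_d / math.sqrt(var_d / n)

# Illustrative paired samples (e.g. observed vs. model-predicted values).
observed = [10.2, 9.9, 10.7, 8.3, 9.5]
predicted = [10.0, 10.1, 10.4, 8.8, 9.2]
t = paired_t_statistic(observed, predicted)
print(t)   # a small |t| gives no evidence that the means differ
```

The resulting statistic is compared against the t distribution with n − 1 degrees of freedom to obtain the p-values reported in Table 6.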

4. Conclusion

In this paper, empirical machinability models (tool life, cutting force and surface roughness) have been developed based on cutting experiments. Response surface methodology and neural networks are used to construct new machinability models, and the methodologies of RSM and CNN are discussed. The developed neural networks predict the tool life, cutting force and surface roughness together. The three model building methods (RA, RSM and CNN) are compared and evaluated using descriptive statistics and hypothesis testing. It has been found that the CNN models are better than the RA and RSM models. Also, the RSM models are better than the RA models for predicting tool life and cutting force. The developed machinability models can be utilized to formulate an optimization model for the machining economics problem, to determine the optimal values of the process parameters for the selected material. In addition, modern soft computing techniques that integrate the machinability models with optimization methods would be a field of interest for future research.

Appendix A

The conjugate gradient backpropagation algorithm:

1. Present all inputs to the network and compute the corresponding network outputs a_q^m, where the sensitivities used below are defined by the equations:

   ∂F̂/∂n_i^m ≡ ∂(e_q^T e_q)/∂n_i^m    (11)

   S̃_{i,h}^m ≡ ∂e_{k,q}/∂n_{i,q}^m = ∂v_h/∂n_{i,q}^m    (12)

   S_i^m = ∂F̂/∂n_i^m    (13)

2. Compute the errors, using the relation e_q = t_q − a_q^m.
3. Compute the sum of squared errors over all inputs, F(x), by the equation:

   F(x) = Σ_{q=1}^{Q} (t_q − a_q)^T (t_q − a_q) = Σ_{q=1}^{Q} e_q^T e_q = Σ_{q=1}^{Q} Σ_{j=1}^{S^m} e_{j,q}^2 = Σ_{i=1}^{N} v_i^2    (14)

4. Compute the Jacobian matrix, whose rows run over the errors e_{k,q} and whose columns run over the weights and biases:

   J(x) = [ ∂e_{1,1}/∂w_{1,1}^1    ∂e_{1,1}/∂w_{1,2}^1    ···   ∂e_{1,1}/∂w_{S^1,R}^1    ∂e_{1,1}/∂b_1^1    ··· ;
            ∂e_{2,1}/∂w_{1,1}^1    ∂e_{2,1}/∂w_{1,2}^1    ···   ∂e_{2,1}/∂w_{S^1,R}^1    ∂e_{2,1}/∂b_1^1    ··· ;
            ⋮ ;
            ∂e_{S^m,1}/∂w_{1,1}^1  ∂e_{S^m,1}/∂w_{1,2}^1  ···   ∂e_{S^m,1}/∂w_{S^1,R}^1  ∂e_{S^m,1}/∂b_1^1  ··· ;
            ∂e_{1,2}/∂w_{1,1}^1    ∂e_{1,2}/∂w_{1,2}^1    ···   ∂e_{1,2}/∂w_{S^1,R}^1    ∂e_{1,2}/∂b_1^1    ··· ;
            ⋮ ]    (15)

5. Initialize the computation of the sensitivities with the equation:

   S̃_q^m = −Ḟ^m(n_q^m)    (16)

6. Calculate the sensitivities with the recurrence relation:

   S̃_q^m = Ḟ^m(n_q^m)(W^{m+1})^T S̃_q^{m+1}    (17)

7. Augment the individual matrices into the sensitivities:

   S̃^m = [S̃_1^m  S̃_2^m  ···  S̃_Q^m]    (18)

8. Compute the elements of the Jacobian matrix by the equations:

   [J]_{h,l} = ∂v_h/∂x_l = ∂e_{k,q}/∂w_{i,j}^m = (∂e_{k,q}/∂n_{i,q}^m)(∂n_{i,q}^m/∂w_{i,j}^m) = S̃_{i,h}^m ∂n_{i,q}^m/∂w_{i,j}^m = S̃_{i,h}^m a_{j,q}^{m−1}    (19)

   [J]_{h,l} = ∂v_h/∂x_l = ∂e_{k,q}/∂b_i^m = (∂e_{k,q}/∂n_{i,q}^m)(∂n_{i,q}^m/∂b_i^m) = S̃_{i,h}^m ∂n_{i,q}^m/∂b_i^m = S̃_{i,h}^m    (20)

9. Solve to obtain Δx_k using the equation:

   Δx_k = −[J^T(x_k)J(x_k) + μ_k I]^{−1} J^T(x_k) v(x_k)    (21)

10. Re-compute the sum of squared errors using x_k + Δx_k. If this new sum of squares is smaller than that computed in step 1, divide μ by some factor α > 1 (e.g. α = 10), let x_{k+1} = x_k + Δx_k, and go back to step 1. If the sum of squares is not reduced, then multiply μ by α and go back to step 3.
11. The algorithm is assumed to have converged when the norm of the gradient

    ∇F(x) = 2J^T(x)v(x)    (22)

is less than some predetermined value, or when the sum of squares has been reduced to some error goal.

References

[1] I. Coudhury, M. El-Baradie, Machinability assessment of inconel 718 by factorial design of experiment coupled with response surface methodology, J. Mater. Process. Technol. 95 (1999) 30–39.
[2] B. Arezoo, K. Ridgway, A.M.A. Al-Ahmari, Selection of cutting tools and conditions of machining operations using an expert system, Comput. Ind. 42 (2000) 43–58.
[3] M. Cakir, A. Gurarda, Optimization of machining conditions for multi-tool milling operations, Int. J. Prod. Res. 38 (2000) 3537–3552.
[4] D. Ermer, S. Kromodihardjo, Optimization of multipass turning with constraints, Trans. ASME J. Eng. Ind. 10 (1981) 462–468.
[5] R. Gupta, J. Batra, G. Lal, Determination of optimal subdivisions of depth of cut in multipass turning with constraints, Int. J. Prod. Res. 33 (1995) 2555–2565.
[6] B. Gopalakrishnan, F. Al-Khayyal, Machine parameter selection for turning with constraints: an analytical approach based on geometric programming, Int. J. Prod. Res. 29 (1991) 1897–1908.
[7] B. Lambert, A. Walvekar, Optimization of multipass machining operations, Int. J. Prod. Res. 16 (1978) 259–265.
[8] H. Lee, B. Yang, K. Moon, An economic machining process model using fuzzy non-linear programming and neural network, Int. J. Prod. Res. 37 (1999) 835–847.


[9] Y. Shin, Y. Joo, Optimization of machining conditions with practical constraints, Int. J. Prod. Res. 30 (1992) 2907–2919.
[10] I. Yellowley, E. Gunn, The optimal subdivision of cut in multipass machining operations, Int. J. Prod. Res. 27 (1989) 1573–1579.
[11] A.M.A. Al-Ahmari, Mathematical model for determining machining parameters in multipass turning operations with constraints, Int. J. Prod. Res. 39 (2001) 3367–3376.
[12] K. Taraman, Utilization of design of experiments for reduced cost and added reliability, SME (1981).
[13] L. Ozler, A. Ozel, Theoretical and experimental determination of tool life in hot machining of austenitic manganese steel, Int. J. Mach. Tool Manufact. 41 (2001) 163–172.
[14] U. Zuperl, F. Cus, B. Mursec, T. Ploj, A hybrid analytical-neural network approach to the determination of optimal cutting conditions, J. Mater. Process. Technol. 175 (2004) 82–90.
[15] S. Wong, A. Hamouda, Machinability data representation with artificial neural network, J. Mater. Process. Technol. 138 (2003) 538–544.
[16] N. Tosun, L. Ozler, A study of tool life in hot machining using artificial neural networks and regression analysis method, J. Mater. Process. Technol. 124 (2002) 99–104.
[17] C. Feng, X. Wang, Digitizing uncertainty modeling for reverse engineering applications: regression vs. neural networks, J. Intell. Manufact. 13 (2002) 189–199.
[18] T. Ozel, Y. Karpat, Predictive modelling of surface roughness and tool wear in hard turning using regression and neural networks, Int. J. Mach. Tool Manufact. 45 (2005) 467–479.
[19] W. Chien, C. Chou, The predictive model for machinability of 304 stainless steel, J. Mater. Process. Technol. 118 (2001) 441–447.
[20] Minitab Inc., Meet MINITAB Release 14, Minitab Inc., 2003.
[21] J. Mozher, Development of Machinability Models for High Strength Materials, Master Thesis, Industrial Engineering Department, KSU, Saudi Arabia, 1998.
[22] D. Coit, B. Jackson, A. Smith, Static neural network process models: considerations and case studies, Int. J. Prod. Res. 36 (1998) 2953–2967.
[23] P.J. Werbos, Backpropagation through time: what it does and how to do it, Proc. IEEE 78 (1990) 1550–1560.
[24] B. Widrow, M.A. Lehr, Thirty years of adaptive neural networks: perceptron, madaline, and backpropagation, Proc. IEEE 78 (1990) 1415–1444.
[25] D.E. Rumelhart, G.E. Hinton, R.J. Williams, Learning internal representations by error propagation, in: D. Rumelhart, J. McClelland (Eds.), Parallel Distributed Processing, MIT Press, Cambridge, MA, 1986.
[26] M.T. Hagan, H.B. Demuth, M. Beale, Neural Network Design, PWS Publishing Company, Boston, MA, 1996.
[27] H. Demuth, M. Beale, Neural Network Toolbox for Use with MATLAB, The MathWorks Inc., 1994.
Smith, Static neural network process models: considerations and case studies, Int. J. Prod. Res. 36 (1998) 2953–2967. [23] P.J. Werbos, Backpropagation through time: what it does and how to do it, Proc. IEEE 78 (1990) 1550–1560. [24] B. Widrow, M.A. Lehr, Thirty years of adaptive neural networks: perceprtron, madaline, and backpropagation, Proc. IEEE 78 (1990) 1415–1444. [25] D.E. Rumelhart, G.E. Hinton, R.J. Williams, Learning internal representation by error propagation, in: D. Rumelhart, J. MeClelland (Eds.), Parallel Data Processing, MIT Press, Cambridge, MA, 1986. [26] M.T. Hagan, H.B. Demuth, M. Beale, Neural Network Design, PWS Publishing Company, Boston, MA, 1996. [27] H. Demuth, M. Beale, Neural Network TOOLBOX for use with Matlab, The MathWorks Inc., 1994.