# Nonlinear Integer Programming by Simulated Annealing

Copyright © IFAC Large Scale Systems, London, UK, 1995

Peng TIAN, Huanchen WANG, and Dongme ZHANG

Institute of Systems Engineering, Shanghai Jiao Tong University, Shanghai 200030, P.R. China

Abstract: The Simulated Annealing (SA) algorithm is a stochastic iterative technique for combinatorial optimization. This paper proposes and implements a simulated annealing algorithm for Nonlinear Integer Programming (NIP), a problem class with an extensive theoretical and practical background. The algorithm is proved to converge asymptotically to the set of global optima and to run in polynomial time. Evaluations on examples show that the algorithm is efficient. This work provides a new way of studying and solving nonlinear integer programming.

Keywords: Nonlinear programming, Algorithm, Global optimization, Asymptotic analysis, Polynomial methods

1. INTRODUCTION

Many programming problems of interest are computationally intractable, i.e. their decision versions are NP-complete (Garey and Johnson, 1979). A direct consequence of NP-completeness is that optimal solutions cannot be obtained in reasonable amounts of computation time and memory. One option for constructing appropriate algorithms is the class of approximation algorithms, also often called heuristic algorithms.

Consider a problem of Nonlinear Integer Programming (NIP), which can be expressed mathematically as

$$\min\ (\text{or } \max)\quad f(x) \qquad (1)$$
$$\text{s.t.}\quad a \le x \le b,\quad x \in Z^n$$

where $x = (x_1, \ldots, x_n)$ is the vector of variables or unknowns of the programming problem, $Z^n$ is the set of integer $n$-dimensional vectors, and $a = (a_1, \ldots, a_n) \in Z^n$ and $b = (b_1, \ldots, b_n) \in Z^n$ are constant $n$-dimensional vectors with $a \le b$. Let $S = \{x : a \le x \le b,\ x \in Z^n\}$ denote the solution space; then $f : S \to R$ is the cost function of the programming problem. Because of its generality, a remarkably rich variety of problems can be represented by this model, and some of them can be viewed as integer and combinatorial optimization problems. There have been significant success stories in the analysis and solution of the problems discussed here, especially in integer and combinatorial optimization.
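
To make the model (1) concrete, the following sketch enumerates the solution space $S$ of a small instance to find its global minimum by brute force. The objective used here is an arbitrary illustrative example, not one from this paper, and enumeration is only feasible for tiny boxes:

```python
from itertools import product

def enumerate_nip(f, a, b):
    """Brute-force a small NIP instance: minimize f over
    S = {x : a <= x <= b, x integer} by full enumeration."""
    # S is the Cartesian product of the integer ranges [a_i, b_i].
    ranges = [range(ai, bi + 1) for ai, bi in zip(a, b)]
    best_x, best_f = None, float("inf")
    for x in product(*ranges):
        fx = f(x)
        if fx < best_f:
            best_x, best_f = x, fx
    return best_x, best_f

# Example objective (nonlinear, nonconvex in general); purely illustrative.
f = lambda x: (x[0] - 3) ** 2 * (x[1] + 1) + (x[1] - 2) ** 2
x_opt, f_opt = enumerate_nip(f, a=(0, 0), b=(6, 5))
```

Since $|S| = \prod_i (b_i - a_i + 1)$ grows exponentially with $n$, such enumeration quickly becomes impossible, which is exactly what motivates the SA approach below.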

The Simulated Annealing (SA) algorithm, introduced by Kirkpatrick et al. (1983) and Cerny (1985), is a stochastic iterative procedure for solving combinatorial optimization problems. Proofs of convergence to a global optimum, based on the theory of finite Markov chains, and successful experimental implementations have led to the widespread use of SA and its acceptance as a viable general method of combinatorial optimization (Aarts and Korst, 1989; Laarhoven van and Aarts, 1987). Advantages of SA are its potential to find near-optimal solutions, its general applicability, its flexibility, and its ease of implementation. Furthermore, it is possible to give a polynomial upper bound on the computation time and memory for some implementations of the algorithm.


For the NIP problems above, a new solving approach by SA is presented in this paper and described in Section 2. Section 3 contains results on its global convergence and its computational complexity. The results of experimental evaluation appear in Section 4. At the end of the paper, there are some conclusions and remarks.

2. SIMULATED ANNEALING ALGORITHM FOR NONLINEAR INTEGER PROGRAMMING

Given an instance $(S, f)$ of the NIP formulated by (1), let $S_{opt}$ denote the set of optimums, i.e. $S_{opt} = \{x_{opt} : f(x_{opt}) \le f(x),\ x \in S\}$, and let $N(x) \subset S$ denote the neighbourhood of the solution $x$. We can describe the SA algorithm for the NIP (1) as follows:

Algorithm: SA-NIP
Step 1: Given an initial solution $x \in S$.
Step 2: Given an initial control parameter $c > 0$.
Step 3: While not yet 'frozen', do the following.
3-1: While not yet in 'Metropolis equilibrium', perform the following. Generate a random neighbour $x' \in N(x)$; if the 'acceptance criterion' is satisfied, i.e. $\min\{1, \exp((f(x) - f(x'))/c)\} > \eta$, where $\eta \in [0, 1)$ is a uniform random number, then set $x = x'$.
3-2: Set $c = c - \Delta c$, $\Delta c > 0$ (reduce the control parameter).
Step 4: Return $x_{opt} = x$. End.

The following procedure generates a random neighbour $x' \in N(x)$; it is also called the integer sampling procedure.

Procedure: Integer Sampling (IS)
Step 1: Given a solution $x = (x_1, \ldots, x_n)$.
Step 2: Set $r = \mathrm{Randint}[1, n]$.
Step 3: If $x_r = b_r$, then set $k = 1$, $d = \mathrm{Randint}[4, 5]$, goto Step $d$; if $x_r = a_r$, then set $k = 0$, $d = \mathrm{Randint}[4, 5]$, goto Step $d$; else set $k = \mathrm{Randint}[0, 1]$, goto Step 5.
Step 4: Set $x'_r = x_r$, goto Step 6.
Step 5: Set $x'_r = x_r + (-1)^k$.
Step 6: Set $x' = (x_1, \ldots, x'_r, \ldots, x_n)$. Return.

where $\mathrm{Randint}[n_1, n_2]$ is a procedural function that generates a uniformly distributed random integer over $[n_1, n_2]$.
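
The IS procedure can be transcribed directly into code. The sketch below follows the step structure above (boundary components may either stay put or move inward, each with probability 1/2; interior components step $\pm 1$), with Python's `random` module playing the role of $\mathrm{Randint}[\cdot,\cdot]$:

```python
import random

def integer_sampling(x, a, b):
    """Generate a random neighbour x' of x per the IS procedure."""
    x = list(x)
    n = len(x)
    r = random.randrange(n)            # Step 2: pick a component r (0-based)
    if x[r] == b[r]:                   # Step 3: upper boundary -> only decrement
        if random.random() < 0.5:      # Step 4 chosen: x'_r = x_r (no move)
            pass
        else:                          # Step 5 chosen: x'_r = x_r - 1
            x[r] -= 1
    elif x[r] == a[r]:                 # Step 3: lower boundary -> only increment
        if random.random() < 0.5:
            pass
        else:
            x[r] += 1
    else:                              # interior: increment or decrement
        x[r] += random.choice((-1, 1))
    return tuple(x)
```

By construction the returned point differs from $x$ in at most one component, by at most one unit, and never leaves the box $a \le x \le b$.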

The implementation of the algorithm adopts the control schedule designed by the author (Tian, 1993), which includes the following distinct items:

(a) The initial control parameter is given by

$$c_0 = \Delta f_{max} \qquad (2)$$

(b) The function for reducing the control parameter is given by

$$c_{k+1} = \frac{c_k}{1 + c_k \cdot \ln(1 + \delta)/\Delta f_{max}}, \quad k = 0, 1, 2, \ldots \qquad (3)$$

(c) The criterion for 'frozen' is

$$c_k \le \frac{\Delta f_{min}}{\ln(|S|/\theta)} \qquad (4)$$

(d) 'Metropolis equilibrium' means

$$L_k = L = \rho \cdot \max_{x \in S}\{|N(x)|\}, \quad k = 0, 1, 2, \ldots \qquad (5)$$

The definitions of the relevant parameters above can be found in Tian (1993).
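
Putting the pieces together, a minimal SA-NIP loop might look as follows. This is a simplified sketch, not the paper's exact implementation: the neighbour generator mirrors the IS idea but always moves (whereas IS may also return $x$ itself at a boundary), and a plain geometric cooling rule `c <- alpha*c` stands in for Tian's schedule (2)-(5):

```python
import math
import random

def neighbour(x, a, b):
    """IS-style move: change one random component by +/-1 within bounds."""
    x = list(x)
    r = random.randrange(len(x))
    steps = [s for s in (-1, 1) if a[r] <= x[r] + s <= b[r]]
    x[r] += random.choice(steps)
    return tuple(x)

def sa_nip(f, a, b, c0=10.0, alpha=0.95, c_min=1e-3, chain_len=None):
    """Simulated annealing for min f(x), a <= x <= b, x integer (sketch)."""
    n = len(a)
    x = tuple(random.randint(a[i], b[i]) for i in range(n))   # Step 1
    best, fbest = x, f(x)
    c = c0                                                    # Step 2
    L = chain_len or 2 * n    # chain length ~ max |N(x)| = 2n (i.e. rho = 1)
    while c > c_min:                                          # 'frozen' test
        for _ in range(L):                       # 'Metropolis equilibrium'
            xp = neighbour(x, a, b)              # random x' in N(x)
            d = f(x) - f(xp)
            # Metropolis acceptance: min{1, exp((f(x)-f(x'))/c)} > eta
            if d >= 0 or random.random() < math.exp(d / c):
                x = xp
                if f(x) < fbest:
                    best, fbest = x, f(x)
        c *= alpha                               # reduce the control parameter
    return best, fbest
```

A call such as `sa_nip(f, (0, 0, 0), (5, 5, 5))` returns the best integer point visited and its objective value; returning the best-so-far rather than the final state is a common practical safeguard.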

3. GLOBAL CONVERGENCE AND COMPUTATIONAL COMPLEXITY

Obviously, given a neighbourhood structure, the SA algorithm can be viewed as a stochastic iterative algorithm that continuously attempts to transform the current solution into one of its neighbours. This mechanism is mathematically best described by means of a Markov chain, so the analysis of the SA algorithm for the NIP is based on the theory of finite Markov chains. For the homogeneous Markov chain associated with the algorithm, we can prove that it converges asymptotically to the set of global optimums and that it is a polynomial algorithm.

Definition 1. Given an instance $(S, f)$ of the NIP (1), the one-step transition probabilities $P_{xx'}(1)$ for the finite homogeneous Markov chain $\xi(c) = \{\xi_c(k);\ k \in Z^+\}$ associated with the SA algorithm are defined as follows: $\forall x, x' \in S,\ k \in Z^+$:

$$P\{\xi_c(k+1) = x' \mid \xi_c(k) = x\} = P_{xx'}(1) = P_{xx'}(c) = \begin{cases} G_{xx'}(c)\,A_{xx'}(c) & x' \ne x \\[4pt] 1 - \displaystyle\sum_{y \in S,\ y \ne x} P_{xy}(c) & x' = x \end{cases} \qquad (6)$$

where $G_{xx'}(c)$ denotes the generation probability, i.e. the probability of generating a solution $x'$ from a solution $x$; $A_{xx'}(c)$ denotes the acceptance probability of accepting the solution $x'$ once it is generated from the solution $x$; and $c$ is the corresponding control parameter.

Definition 2. The generation probabilities $G_{xx'}(c)$ are defined by $\forall x, x' \in S$:

$$G_{xx'} = \begin{cases} 1/|N(x)| & x' \in N(x) \\ 0 & x' \notin N(x) \end{cases} \qquad (7)$$

Definition 3. The acceptance probabilities $A_{xx'}(c)$ are defined to be identical to the Metropolis criterion and are given by $\forall x, x' \in S$:

$$A_{xx'}(c) = \begin{cases} 1 & f(x') \le f(x) \\[4pt] \exp\left(\dfrac{f(x) - f(x')}{c}\right) & f(x') > f(x) \end{cases} \qquad (8)$$

Theorem 1. (Aarts and Korst, 1989) Let $(S, f)$ denote an instance of the NIP (1) and let $P_{xx'}(1)$ denote the transition probabilities of the Markov chain $\xi(c)$ associated with the SA algorithm, defined by (6), (7), and (8). Furthermore, let the following condition be satisfied:

$$\forall x, x' \in S,\ \exists m \ge 1,\ \exists x_0, x_1, \ldots, x_m \in S,\ \text{with } x_0 = x,\ x_m = x',\ \text{and}\ G_{x_k x_{k+1}} > 0,\quad k = 0, 1, \ldots, m-1 \qquad (9)$$

Then the Markov chain $\xi(c)$ has a stationary distribution $q(c)$, whose components are given by $\forall x, x' \in S$:

$$q_x(c) = \lim_{k \to \infty} P\{\xi_c(k) = x \mid \xi_c(0) = x'\} = \frac{|N(x)| \exp(-f(x)/c)}{\displaystyle\sum_{x' \in S} |N(x')| \exp(-f(x')/c)} \qquad (10)$$

Moreover, we have

$$\lim_{c \to 0} q_x(c) = q_x^* = \begin{cases} \dfrac{1}{|S_{opt}|} & x \in S_{opt} \\[4pt] 0 & x \notin S_{opt} \end{cases} \qquad (11)$$

Hence, we finally obtain

$$\lim_{c \to 0} \lim_{k \to \infty} P\{\xi_c(k) = x\} = \lim_{c \to 0} q_x(c) = q_x^* \qquad (12)$$

Theorem 1 concludes that under condition (9), or in other words, if the generation probabilities $G_{xx'}$ satisfy condition (9), the SA algorithm for the NIP (1) converges asymptotically to the set of global optimal solutions, i.e. it finds a global optimal solution with probability one.

Lemma 1. The generation probabilities $G_{xx'}$ corresponding to the SA algorithm for the NIP (1) are given by $\forall x, x' \in S$:

$$G_{xx'} = \begin{cases} \dfrac{1}{2n - n_x} & x' \in N(x) \\[4pt] 0 & x' \notin N(x) \end{cases} \qquad (14)$$

where $n_x$ is the number of components of the vector $x$ that are boundary points.

Proof. In fact, a neighbourhood $N(x) \subset S$ of a solution $x \in S$ is a subset of $S$ close to $x$, and here it is constituted by

$$N(x) = \{x' \in S : x' \text{ is obtained through a single random sampling from } x\} \qquad (15)$$

The integer sampling procedure IS randomly picks a component $x_r$ among the $n$ components of $x$, and $x_r$ has two possible new values, by increment and by decrement; if $x_r$ is a boundary point (i.e. $x_r = a_r$ or $x_r = b_r$), then $x_r$ has only one such value, by increment or by decrement. Thus we have

$$|N(x)| = 2n - n_x \qquad (16)$$

From (7), the expression (14) is obtained. This completes the proof of the lemma.

Theorem 2. The generation probabilities $G_{xx'}$ given in (14) satisfy the condition (9) on global asymptotic convergence, i.e. the relevant SA algorithm for the NIP (1) converges asymptotically to the set of global optimal solutions.

Proof. Through the integer sampling procedure IS, a picked component $x_r$ of $x$ becomes a random walk particle with two reflecting walls on the state space $I_r = \{a_r, a_r + 1, \ldots, b_r - 1, b_r\}$, $r = 1, 2, \ldots, n$, and its one-step transition probabilities are given by

$$p_{a_r,\,a_r} = p_{a_r,\,a_r+1} = \frac{1}{2}, \quad p_{b_r,\,b_r} = p_{b_r,\,b_r-1} = \frac{1}{2}, \quad p_{i,\,i-1} = p_{i,\,i+1} = \frac{1}{2}\ \ (a_r < i < b_r) \qquad (17)$$

Referring to Isaacson and Madson (1976) and Seneta (1981), a Markov chain corresponding to a random walk particle with two reflecting walls is irreducible and aperiodic, and the procedure IS picks every component of $x$ with probability $1/n$. Thus we have $\forall x, x' \in S$, $\exists m \ge 1$, $\exists x_0, x_1, \ldots, x_m \in S$, with $x_0 = x$, $x_m = x'$, and

$$G_{x_k x_{k+1}} = \frac{1}{2n - n_{x_k}} > 0, \quad k = 0, 1, \ldots, m-1 \qquad (18)$$

which completes the proof of the theorem.

Theorem 3. Let $(S, f)$ denote an instance of the NIP (1), and let the SA algorithm be implemented with the control schedule (2)-(5). Then the computation time C.T. of the algorithm is given by

$$C.T. = O(\tau \cdot L \cdot \ln|S|) \qquad (19)$$

and the computation storage space C.S. is

$$C.S. = O(n) \qquad (20)$$

where $\tau$ is the computation time required to carry out a single transition, and $L$ denotes the length of the individual Markov chains.

Proof. Referring to Tian (1993).
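
Equation (16) is easy to confirm by direct counting: enumerate all single-component $\pm 1$ moves of $x$ that stay inside the box and compare the count with $2n - n_x$. A verification sketch (assuming $a_r < b_r$ for every component):

```python
def neighbourhood_size(x, a, b):
    """|N(x)|: count single-component +/-1 moves of x that stay in [a, b]."""
    count = 0
    for r in range(len(x)):
        for step in (-1, 1):
            if a[r] <= x[r] + step <= b[r]:
                count += 1
    return count

def boundary_components(x, a, b):
    """n_x: number of components of x sitting on a box boundary."""
    return sum(1 for r in range(len(x)) if x[r] == a[r] or x[r] == b[r])
```

Each interior component contributes two neighbours and each boundary component one, which is exactly $2(n - n_x) + n_x = 2n - n_x$.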

For the NIP (1), we have

$$\ln|S| = \ln\left(\prod_{i=1}^{n}(b_i - a_i + 1)\right) = \sum_{i=1}^{n} \ln(b_i - a_i + 1) \le n \cdot \ln(b_M - a_m + 1) \qquad (21)$$

where $a_m = \min_i\{a_i\}$ and $b_M = \max_i\{b_i\}$, and

$$L = \rho \cdot \max_{x \in S}|N(x)| = \rho \cdot 2n \qquad (22)$$

and $\tau$ is a quantity which increases linearly with the size $(n,\ b_i - a_i)$ of an instance. Furthermore, we finally obtain

$$C.T. = O(\tau \cdot L \cdot \ln|S|) = O(n^3 \cdot (b_M - a_m) \cdot \ln(b_M - a_m + 1)) \qquad (23)$$

and

$$C.S. = O(|N(x)|) = O(n) \qquad (24)$$

The analysis results (23) and (24) show that the SA algorithm for the NIP (1) is a polynomial algorithm, i.e. it runs in polynomial time and memory.

4. EXPERIMENTAL EVALUATIONS

Consider the following problems of the NIP (1).

Example 1.

$$\min f(x) = (x_1 - 3)^2 \cdot \cos(\pi \cdot x_1) + (x_2 - 6) \cdot \sin\left(\frac{\pi}{2} \cdot x_2\right) + (x_3 - 2.5)^2 \cdot (x_2 + 2)^{-1} + (x_3 + 2)^3 \cdot e^{-x_4}$$
$$\text{s.t.}\quad a_i \le x_i \le b_i,\quad x_i \in Z,\quad i = 1, 2, 3, 4 \qquad (25)$$

Example 2.

$$\min f(x) = (x_1 - 2.5)^2 \cdot (x_2 + 12.6)^2 \cdot (x_3 + 25.4) + (x_3 - 4.5)^2 \cdot (x_4 + 18.4)^{-1} + x_4 \cdot (x_5 + 10.8)^2 \cdot e^{(x_6 - 6.5)} + \sin\left(\frac{\pi}{10} \cdot (x_6 + 1) \cdot x_5\right)$$
$$\text{s.t.}\quad a_i \le x_i \le b_i,\quad x_i \in Z,\quad i = 1, 2, \ldots, 6 \qquad (26)$$

Because there is no other approach that yields exact optimal solutions of the examples above, the results of enumerating their instances are given in Table 1 and Table 2. We then carried out experimental evaluations of the SA algorithm on the examples; the results obtained are listed in Table 3, where the performance of the SA algorithm is quantified by the following quantities:

$$\text{Global Opt.}(\%) = \frac{Num_{opt}}{Num} \times 100 \qquad (27)$$

where $Num_{opt}$ is the number of runs attaining the global optimum and $Num$ is the total number of runs;

$$\text{Offset from Global Opt.}(\%) = \frac{|\bar{f} - f_{opt}|}{|f_{opt}|} \times 100 \qquad (28)$$

where $\bar{f}$ is the optimal objective function value found by the algorithm and $f_{opt}$ is the exact global optimal value;

$$\text{Search Space}(\%) = \frac{|S_{solving}|}{|S|} \times 100 \qquad (29)$$

where $S_{solving}$ is the set of solutions attempted by the algorithm.

5. CONCLUSIONS

The analysis and solution of nonlinear integer programming have very important significance for theory and applications. For these NP-complete problems, there are no generally effective solving approaches at present. This paper introduces and implements the simulated annealing algorithm for nonlinear integer programming, and reaches the following conclusions: (a) the algorithm converges asymptotically to the set of global optimal solutions and is a polynomial one; (b) the results of experimental evaluations show that the algorithm is efficient; (c) this study provides a new, general, and effective approach for the analysis and solution of nonlinear integer programming.

REFERENCES

Aarts, E. and J. Korst (1989). Simulated Annealing and Boltzmann Machines. John Wiley & Sons, New York.

Cerny, V. (1985). Thermodynamical approach to the travelling salesman problem: an efficient simulation algorithm. J. of Optimization Theory and Applications, 45, 41-51.

Garey, M. R. and D. S. Johnson (1979). Computers and Intractability: A Guide to the Theory of NP-Completeness. W. H. Freeman and Company, San Francisco.

Isaacson, D. L. and R. W. Madson (1976). Markov Chains: Theory and Applications. John Wiley & Sons, New York.

Kirkpatrick, S., C. D. Gelatt, Jr. and M. P. Vecchi (1983). Optimization by simulated annealing. Science, 220(4598), 671-680.

Laarhoven van, P. J. M. and E. H. L. Aarts (1987). Simulated Annealing: Theory and Applications.