Copyright © IFAC Computer Aided Design, Indiana, USA 1982
INTERACTIVE SYSTEM ANALYSIS AND SYSTEM DESIGN USING SIMULATION AND OPTIMIZATION

P. P. J. van den Bosch
Laboratory for Control Engineering, Delft University of Technology, Postbox 5031, 2600 CA Delft, The Netherlands
Abstract. The use of simulation and optimization for system analysis and system design is discussed. This approach is compared with other, mathematically-oriented analysis and design methods. Although a simulation and optimization approach requires more calculation time, this analysis and design method offers much more flexibility, if suitable interactive software is available. An interactive, interpreter-like, block-oriented simulation program will be described that can be used for this purpose.

Keywords. Computer aided design, Computer software, Control system analysis, Control system synthesis, Identification, Parameter estimation, Simulation.
INTRODUCTION

In this paper the use of simulation for system analysis and system design will be discussed. This approach will be compared with other, mathematically-oriented analysis and design methods. These latter methods make use of, for example, the linearity of the system so that mathematical solution techniques are available. Then a quantitative judgement may exist on the range of validity of the results, for both this and other comparable systems. In general, systems do not satisfy assumptions such as linearity, order information, etc., so that either a mathematical approach cannot be applied or a simplified, linear model has to be used.

An analysis or design approach based on simulation and optimization is much more flexible and can deal with nearly any system description, linear or nonlinear, continuous or discrete, or any mixture of differential, difference, algebraic or logical equations. This flexibility has to be paid for by extra calculation time. Suitable software has to be available in order to use such an approach. In this paper we will describe the proposed simulation and optimization approach for system analysis and system design. We will compare it with other, mathematically-oriented methods, derive requirements for software and describe a simulation program, PSI, that has been designed to realize nearly all these requirements.

IDENTIFICATION

Identification methods are based on an analysis of the input and output signals of the system that has to be identified. Estimates of the parameters of a model, whose structure and order are determined in advance, can be calculated. Depending on the identification method, a priori information, based on noise characteristics, can also be included in the algorithm in order to obtain better estimates. Let us briefly consider the Least Squares method (LS), as illustrated in Fig. 1.

Fig. 1. The Least Squares method estimates the discrete transfer function H(z) = A(z)/B(z).

Due to the linear model H(z) = A(z)/B(z), with A(z) = a0 + a1 z + ... and B(z) = b0 + b1 z + ..., the LS method can calculate the parameters a_i and b_i very fast by solving a set of linear equations with n unknown parameters a_i and b_i.
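As a minimal, self-contained sketch of this normal-equation idea (in Python, not a PSI fragment; the first-order model and all numbers below are illustrative assumptions, not taken from the paper):

```python
import numpy as np

# Least-squares fit of a hypothetical first-order ARX model
#   y(k) = a1*y(k-1) + b1*u(k-1)
# analogous to estimating the parameters of a discrete transfer function.

rng = np.random.default_rng(0)
a1_true, b1_true = 0.8, 0.5

u = rng.standard_normal(200)            # input signal
y = np.zeros(200)
for k in range(1, 200):                 # simulate the "system" (noise-free)
    y[k] = a1_true * y[k - 1] + b1_true * u[k - 1]

# Regression matrix: one row per sample, columns [y(k-1), u(k-1)];
# the LS estimate is the solution of the resulting linear equations.
Phi = np.column_stack([y[:-1], u[:-1]])
theta, *_ = np.linalg.lstsq(Phi, y[1:], rcond=None)
a1_hat, b1_hat = theta
```

With noise-free data the fit is exact; with colored measurement noise the estimates become biased, which is exactly the condition discussed next.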
The LS method estimates unbiased parameters a_i and b_i only if the noise n(k) satisfies several conditions; for example, n(k) has to be colored noise, arising from white noise filtered by means of 1/B(z). If this condition is not met, the noise characteristics have to be estimated too. This can be achieved by, for example, the Extended Matrix Estimator (Talmon and Van den Boom (1973)). This extension introduces an iterative solution procedure which increases the calculation time considerably. Simulation and optimization can also use, in addition to the input and output signal(s), some a priori information about the system, for example, some knowledge of the internal structure, some known parameters, in some cases the shape or even the exact values of a non-linearity, etc. This a priori information can be obtained from additional measurements or from the understanding of the physical laws that describe the system under consideration. This additional a priori information can be very useful in finding an appropriate model of the system. However, it is difficult or even impossible to use this information together with, for example, the LS algorithm.
Identification via simulation and optimization.
By means of simulation and optimization we can calculate the "best-fit" model of the system. Linearity assumptions are no longer necessary. Such an approach is illustrated in Fig. 2. A criterion is defined, based on the error between the output of the system and the output of the model, which receives the same input signal as the system. The output of the model is obtained by using simulation techniques. Therefore, the model may be described by continuous parts, discrete parts, non-linear or logical elements or any combination of these. Then an optimization algorithm is able to find optimal model parameters of a (non-)linear model with a user-defined structure and a user-defined criterion. For example, if we know in advance that the system under consideration has two time constants (and thus two real poles), this knowledge can be used in the identification scheme of Fig. 2, but not in the LS method. This flexibility
is achieved at the expense of calculation time. Optimization is inherently a non-linear, iterative procedure. Each iteration requires a complete simulation run, so that more calculation time is needed than with the LS method. For system analysis this is not a real limitation. Real-time identification for adaptive control poses very strict limitations on the calculation-time requirements, so that the proposed identification method cannot always be used. For interactive use of this facility, the number of parameters has to be limited to a maximum of about 5 to 10. Each non-linear optimization procedure can only find a local minimum of the criterion. There can be no guarantee that the global optimum has been found. The minimum that will be found depends not only on the optimization algorithm but also on the start values of the parameters of the model, so that choosing other start values may yield a better optimum. It should be stressed that both approaches attempt to find a model whose output is as close as possible to the output measured. It turns out that there are many models with different orders and completely different sets of parameters that offer about the same time responses. So, if we want to determine the time constants or gains of a system, we may obtain, after identification with the LS method, values that are not correct or are even completely erroneous, although the time response is quite appropriate. In some cases, especially when a model has to be obtained in order to design a controller for the system, this situation may be acceptable as long as the calculated model and the system have about the same dynamical behavior. In case we want to know the accurate values of some parameters, for instance, to find certain physical parameters from some experiments, this situation may give rise to serious problems.
In such a case a priori information may improve the results, so that the simulation and optimization approach is highly preferable when compared with the LS method, as illustrated by Mota (1979).
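The scheme of Fig. 2 can be sketched in a few lines (Python; the first-order model, the grid search standing in for a real optimizer such as the Simplex method, and all parameter values are assumptions for illustration only):

```python
import numpy as np

# "Measured" response of the unknown system: here a first-order lag
#   tau * dy/dt = K*u - y,  with u a unit step.
# The true parameters tau=1.0, K=1.5 play the role of the unknown system.

def simulate(tau, K, n=100, dt=0.05):
    y = np.zeros(n)
    for k in range(1, n):                 # Euler integration of the model
        y[k] = y[k - 1] + dt * (K * 1.0 - y[k - 1]) / tau
    return y

y_meas = simulate(1.0, 1.5)               # system output (noise-free here)

# Criterion: sum of squared output errors between model and system.
# A crude grid search stands in for the optimizer of Fig. 2.
best = (None, None, np.inf)
for tau in np.linspace(0.2, 2.0, 50):
    for K in np.linspace(0.5, 2.5, 50):
        J = np.sum((simulate(tau, K) - y_meas) ** 2)
        if J < best[2]:
            best = (tau, K, J)

tau_hat, K_hat, J_hat = best
```

Because the model is simulated, nothing in this loop changes if the model is made non-linear; only the criterion evaluation becomes more expensive.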
SYSTEM DESIGN

There are many ways to design a system such that it satisfies pre-defined design requirements. In general, some control structure has to be implemented to improve the system behavior. In designing such a controller we can use several graphical representations of the system in order to study its dynamic behavior and to find ways to define controllers such that the system behavior will improve. Linear single-input single-output systems can be designed by using the Bode or Nyquist diagrams or root loci. Linear multivariable systems can be treated by graphic design methods such as the Inverse Nyquist Array method, the Characteristic Locus Design method, etc. For non-linear systems the describing function method and the circle criterion are available, although they are rather conservative. These graphic design
methods offer much qualitative and quantitative information about the system behavior. Nevertheless, if the system is complex, much experience and knowledge is required to be able to design an appropriate controller which satisfies the design requirements. Another approach to designing systems is to formulate the design problem in terms of an optimization problem: formulate a criterion, parameters of a controller that have to be optimized, and constraints. The criterion has to satisfy two requirements, namely it has to express the design objectives and it has to be easy to calculate. In choosing a mathematically-oriented criterion the optimization process can be quite fast, but the link with the design objectives, such as overshoot, rise time, damping etc., may be weak or even non-existent. For example, the linear optimal state feedback matrix, according to the quadratic functional J:
J = ∫₀^∞ (xᵀQx + uᵀRu) dt        (1)

taking into account the state x and the input u, can be easily found by solving a Riccati equation. There also exist fast algorithms for pole placement, etc. Output feedback, instead of state feedback, complicates the optimization considerably. Then, relatively simple expressions exist to calculate both functional J (1) and its gradient with respect to the coefficients of the feedback matrix. Hirzinger (1975) has proposed a usable output-feedback configuration for multivariable systems. His dynamic controller has both feedback and feedforward. The design requirements placed on dynamic behavior and decoupling are expressed in a parallel reference model, which causes an unconstrained optimization problem to arise with functional J (1) as criterion. The dimension of the state now becomes the sum of the dimensions of the states of the original system, of the controller and of the parallel model. The value of J and its gradient are calculated by solving Lyapunov equations. Another way to calculate an arbitrary functional of the output(s) or states of a system is to use the fast numeric inverse Laplace method as proposed by Zakian (1970). The system still has to be linear, but now the criterion can be any linear or nonlinear functional of, for example, the output(s). These approaches deal with unconstrained optimization problems. All design objectives have to be implemented in the criterion. Constraints are incorporated into the criterion via penalty functions. Another way to deal with constraints is to incorporate them directly into the problem formulation. Then, Zakian's Method of Inequalities (Zakian (1979)) or the more powerful optimization as proposed by Mayne and Polak (1982) become attractive. The latter, in particular, can handle infinite-dimensional constraints such as

max_t y(t) ≤ 1.05 y(∞)

This constraint limits the overshoot to a maximum of 5%. So, Mayne and Polak's design method based on optimization with infinite-dimensional constraints offers increased flexibility. An important requirement is that both the criterion and the constraints be continuously differentiable and that their gradients be available for the optimization algorithm. These requirements restrict the application of their design procedure.
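For a concrete (hypothetical) stable closed loop, the Lyapunov route to J can be checked against brute-force simulation. The sketch below evaluates J = ∫ xᵀQx dt for ẋ = A_cl x, x(0) = x0, from the Lyapunov equation A_clᵀP + P A_cl + Q = 0, with J = x0ᵀ P x0 (all numbers are illustrative):

```python
import numpy as np

A_cl = np.array([[0.0, 1.0], [-2.0, -3.0]])   # an illustrative stable matrix
Q = np.eye(2)
x0 = np.array([1.0, 0.0])

# Solve the Lyapunov equation via Kronecker products.
# With numpy's row-major vec: vec(AXB) = kron(A, B.T) vec(X), so
# A_cl.T P + P A_cl = -Q becomes a plain linear system in vec(P).
n = A_cl.shape[0]
I = np.eye(n)
M = np.kron(A_cl.T, I) + np.kron(I, A_cl.T)
P = np.linalg.solve(M, -Q.reshape(-1)).reshape(n, n)
J_lyap = x0 @ P @ x0

# Cross-check by brute-force Euler simulation over a 20 s horizon:
dt, x, J_sim = 1e-3, x0.copy(), 0.0
for _ in range(20000):
    J_sim += (x @ Q @ x) * dt
    x = x + dt * (A_cl @ x)
```

The agreement between `J_lyap` and `J_sim` is the calculation-time trade-off in miniature: one small linear solve versus thousands of integration steps.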
Fig. 3. System design using simulation and optimization.
By using simulation and optimization a more flexible, but also more time consuming, design method can be defined, as illustrated in Fig. 3. Simulation techniques are used to calculate the error signal e due to the controller and the system. A criterion can be defined, based on this error signal and/or the output, which can be optimized with respect to the parameters of the controller. So, any (non-)linear system and any controller configuration can be used with any criterion. Finite or infinite-dimensional constraints can be included, via penalty functions, in the criterion. Even the combination of a discrete controller which controls a (non-)linear continuous system offers no problems. Yet, optimization is just a design tool. It does not directly give the required controller parameters. First the criterion, the controller structure and the free controller parameters have to be defined. The solution of this specific optimization problem offers optimal controller settings.
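A sketch of this design loop (Python; the plant 1/(s(s+1)), the proportional controller, the grid search and the penalty weight are all illustrative assumptions, with overshoot beyond 5% handled as a penalty function):

```python
import numpy as np

# Closed loop: hypothetical plant 1/(s(s+1)) with proportional gain k.
# States: position y, velocity v;  error e = 1 - y;  v' = -v + k*e, y' = v.
def step_response(k, dt=0.01, n=3000):
    y = v = 0.0
    ys = np.empty(n)
    for i in range(n):
        e = 1.0 - y
        v += dt * (-v + k * e)
        y += dt * v
        ys[i] = y
    return ys

def criterion(k, dt=0.01):
    ys = step_response(k)
    ise = np.sum((1.0 - ys) ** 2) * dt                 # integral squared error
    overshoot = max(ys.max() - 1.0, 0.0)
    return ise + 100.0 * max(overshoot - 0.05, 0.0)    # penalty beyond 5%

gains = np.linspace(0.1, 3.0, 30)
k_best = min(gains, key=criterion)
```

The penalty term steers the search away from gains whose step response overshoots more than 5%, even though those gains would give a smaller integral squared error.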
Calculation-Time Comparison

Several design approaches, all based on the optimization of a certain criterion, have briefly been discussed. In this subsection we
will roughly compare the calculation-time requirements of the linear output-feedback controller based on the solution of (1) by means of Lyapunov equations and the simulation and optimization approach. The following assumptions are made: Suppose we want to calculate J (1) of the linear system
ẋ = Ax + Bu
y = Cx + Du        (2)
The calculation time is determined mainly by solving the associated Lyapunov equation. If the iterative algorithm of Smith (1971) is used with 8 iterations, N1 floating-point operations (FPO) are necessary:

N1 ≈ 8(5n³ + 10n² + 4n + 4) + n² + n FPO

Other solution methods for Lyapunov equations have different numbers for the FPO, but they all suffer from at least an expression with n³. Considering simulation to calculate the criterion J (1) of system (2), we find that the derivative calculation requires, with fully populated A and B matrices, 2n² + 2nm FPO and the integration requires 4n FPO per cycle of the numerical integration. Suppose the integration interval is one fifth of the smallest time constant and the time horizon is five times the largest time constant (appropriate for step responses) and the ratio between the largest and smallest time constant is 50; then 5*5*50 = 1250 integration steps are necessary. With these figures, N2 = 1250(2n² + 2nm + 4n) FPO.
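Taking these operation counts at face value (and assuming N2 = 1250(2n² + 2nm + 4n) FPO for one simulation run, i.e. the per-step cost times the 1250 steps; both counts are rough estimates, not measurements), the ratio N2/N1 can be tabulated:

```python
# N1: Smith's Lyapunov algorithm with 8 iterations; N2: one simulated
# criterion evaluation. Both formulas are taken from the text above.
def n1(n):
    return 8 * (5 * n**3 + 10 * n**2 + 4 * n + 4) + n**2 + n

def n2(n, m):
    return 1250 * (2 * n**2 + 2 * n * m + 4 * n)

# The ratio N2/N1 shrinks roughly like 1/n: simulation loses its
# disadvantage for large system orders n (here m = 2 inputs, assumed).
ratios = {n: n2(n, 2) / n1(n) for n in (5, 10, 20, 30)}
```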
Fig. 4. Ratio N2/N1 of the calculation-time requirements between simulation and output feedback.

In Figure 4 the ratio N2/N1 is given for several values of n and m. Especially for large values of n, mathematically-oriented design methods based on output feedback lose their relative advantage with respect to calculation-time requirements. As stated earlier, this ratio N2/N1 is only a rough indication of the relative calculation-time requirements. It is important to recognize that both approaches have features that may change this ratio, namely:
- Output feedback can calculate the gradient of J (1) by solving one additional Lyapunov equation. Then, the application of advanced non-linear-optimization methods, e.g. variable metric algorithms, can be advantageous.
- When using a direct-search optimization algorithm, a considerable saving of calculation time can be obtained by stopping a simulation run as soon as the value of the criterion becomes larger than the maximum value in the simplex of the Simplex method (Nelder and Mead (1965)) or larger than the current lowest value of the criterion found during a Random Search (Schwefel (1981)). Another time-saving mechanism in using simulation is a variable-step integration method.
- In general, the A and B matrices will not be fully populated. Then calculation time under simulation is directly proportional to the number of non-zero elements of both matrices. This reduction cannot be obtained as easily in output feedback.
- Functional J (1) is not flexible, whereas with simulation any criterion can be used.
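The early-stopping trick for direct-search methods can be sketched as follows (Python; the first-order loop and all numbers are illustrative):

```python
import numpy as np

# Running-cost evaluation that aborts as soon as the accumulated criterion
# exceeds a known bound, e.g. the current best value in a direct search.
def criterion(k, abort_above=np.inf, dt=0.01, n=2000):
    # hypothetical loop: first-order plant y' = -y + k*(1 - y),
    # criterion: integral of the squared error (1 - y)^2.
    y, J = 0.0, 0.0
    for _ in range(n):
        J += (1.0 - y) ** 2 * dt
        if J > abort_above:            # already worse: stop integrating
            return np.inf
        y += dt * (-y + k * (1.0 - y))
    return J

J_full = criterion(2.0)                          # one complete run
J_aborted = criterion(0.1, abort_above=J_full)   # weaker gain: aborts early
```

Because the integral criterion is non-decreasing in time, an aborted run can never have been the new optimum, so no information is lost.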
In the preceding section the use of simulation and optimization for system analysis and system design has been described and illustrated. In this section we wish to concentrate on the requirements that have to be put on the simulation facility in order to use this approach. Both digital and hybrid computers can be used. Due to the many advantages of the digital computer when
compared with a hybrid one (price, availability, size of the problem, etc.) we shall focus our attention on the requirements for simulation programs intended for digital computers.

Integration Methods

Simulation programs calculate the solution of sets of linear or non-linear differential and/or difference equations. Digital computers calculate a variable only as a sequence of values at discrete time intervals, determined by the integration interval. Therefore, the continuous integrator has to be approximated. The accuracy with which this approximation can be realized determines the accuracy of the simulation and depends both on the integration method and the integration interval. With a small integration interval and a complex, higher-order integration method more accurate results can be expected than with a larger integration interval and a simpler integration method. But both a small integration interval and a higher-order integration method increase calculation time. So, a compromise can and has to be made between calculation time and accuracy. The following example illustrates this point. The solution y(t) of the second-order differential equation
ÿ(t) + y(t) = 0,    ẏ(0) = 1,  y(0) = 0
is compared with the analytical solution sin(t). The absolute, average error J between y(t) and sin(t) over a period of 10 seconds is calculated.
Five numerical integration methods have been selected to calculate y(t) with several values of the integration interval. The error J and the time Tc needed to calculate y(t) and J are measured. Fig. 5 illustrates the mutual dependence between the calculation time Tc and the accuracy as determined by the error functional J. This example clearly indicates the value of complex, higher-order integration methods. With 2 seconds of calculation time Tc available for each method, the results obtained with Runge Kutta 4 are about 100, 500 and 10,000 times more accurate than those obtained with the Runge Kutta 2/Simpson, Adams Bashfort 2 and Euler methods, respectively. Another example may yield different numbers, but, in general, the higher-order integration methods such as Runge Kutta 4 and Simpson will offer the best accuracy/calculation-time performance. This example also illustrates the importance of appropriately choosing the integration interval T. When T is too large for a given
Fig. 5. The error due to numerical integration method calculated as a function of the calculation time Tc.
problem, large errors will occur. When T is too small, the calculation time is wasted because the accuracy does not increase with decreasing values of T. The limited accuracy with which the digital computer represents numbers causes an accumulation of errors. There are some rules of thumb for determining, for a given simulation model, an appropriate value of T. However, it is also possible to use variable-step-size integration methods that determine the step size or integration interval at each time interval. These integration methods adjust the step size according to some error criterion. By defining a small or less small error, accurate (but time consuming) or less accurate (but fast) results, respectively, can be expected. For many systems, and especially for optimization, the variable-step-size integration methods offer the best performance. In each iteration of the optimization the system dynamics will be different, making an a priori selection of a "good" integration interval T rather difficult.
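The accuracy/cost trade-off can be reproduced for the test equation ÿ = -y with exact solution sin(t) (Python sketch; the step size and the two methods compared are chosen for illustration):

```python
import numpy as np

def integrate(method, T=0.05, t_end=10.0):
    # state s = [y, ẏ] of ÿ = -y, y(0) = 0, ẏ(0) = 1; exact y(t) = sin(t)
    f = lambda s: np.array([s[1], -s[0]])
    s = np.array([0.0, 1.0])
    errs, t = [], 0.0
    while t < t_end - 1e-12:
        if method == "euler":
            s = s + T * f(s)
        else:                                   # classical Runge Kutta 4
            k1 = f(s)
            k2 = f(s + T / 2 * k1)
            k3 = f(s + T / 2 * k2)
            k4 = f(s + T * k3)
            s = s + T / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        t += T
        errs.append(abs(s[0] - np.sin(t)))
    return np.mean(errs)                        # average absolute error

err_euler = integrate("euler")
err_rk4 = integrate("rk4")
```

At the same step size, the fourth-order method is several orders of magnitude more accurate for roughly four times the derivative evaluations per step, which is the essence of Fig. 5.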
Algebraic Loops

A second problem arises in solving a parallel-defined system with a sequentially oriented digital computer. This problem can be solved by using a proper sorting procedure, except when there is an algebraic loop (an equation in which a variable is an algebraic function of its own value), for example, x = sin(x) + y. It is always advisable to avoid algebraic loops. If they cannot be avoided, they have to be solved with the aid of time-consuming, iterative algorithms, which can be used not only for the solution of algebraic loops, but also for the solution of any general, non-linear algebraic equation.
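For the loop x = sin(x) + y, one such iterative algorithm is a Newton iteration on the residual of the loop equation (Python sketch; the start value and tolerance are illustrative):

```python
import math

# Solving the algebraic loop x = sin(x) + y for x by Newton iteration,
# as a simulation program must do at every time step when a loop exists.
def solve_loop(y, x0=1.0, tol=1e-12, max_iter=50):
    x = x0
    for _ in range(max_iter):
        g = x - math.sin(x) - y          # residual of the loop equation
        dg = 1.0 - math.cos(x)           # its derivative with respect to x
        step = g / dg
        x -= step
        if abs(step) < tol:
            break
    return x

x = solve_loop(0.5)
```

This is why algebraic loops are expensive: the iteration has to converge anew at every integration step.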
There is an important distinction between preprocessor-like programs (in general batch-oriented) and interpreter-like programs (in general interactive) for simulation purposes. The former allow statements of a high-level programming language to be included in the simulation model description. Therefore, these programs can be made as flexible as, for example, a Fortran program. Interpreter-like programs lack this facility, so that special measures have to be taken to realize, for example, multi-run facilities such as optimization, comparison of variables between different runs, etc.

User Interface

If a simulation program has very attractive mathematical aids and perfect multi-run facilities, it may still be inferior with respect to control system design if the interaction between the program and the user is not accepted by the user. This interaction is determined by a number of factors, but especially by the communication between the user and the program and the presentation of graphic information (Van den Bosch and Bruijn (1979)). Only an interactive program can support a designer-oriented environment. Either a command language or a question/answer approach can take care of the communication between the program and the user. A command language offers much more flexibility but lacks the guidance available in a question/answer approach. In designing control systems, graphic representations of the system behavior are of paramount importance. Although numbers are much more exact, design considerations mainly deal with graphic representations of a system. For example, linear optimal state, output feedback or pole placement are well-established, mathematically-oriented methods for control system design. However, whether or not such a design meets the ultimate design requirements cannot be judged by looking at only the value of the criterion or at the feedback matrix.
In general, only time (or frequency) responses offer enough information to make a judgement of the ultimate system behavior possible. So, a graphics display, which is very fast, or a plotter is almost unavoidable when analyzing or designing systems.

FACILITIES AND LIMITATIONS OF PSI
Facilities

Up to now, facilities enabling a simulation program to be used for interactive system analysis and system design have been discussed. At the Laboratory for Control Engineering an Interactive Simulation Program (PSI) (Van den Bosch (1979, 1981)), which satisfies almost all stated requirements, has
been designed and realized. This interpreter-like, block-oriented simulation program offers, for example, the following facilities:
- Six numerical integration methods are available, namely four fixed-step methods (Euler, Runge Kutta 2, Simpson and Runge Kutta 4) and two variable-step-size methods (Runge Kutta 2 and 4).
- Solution of algebraic equations is realized by a fast Newton-Raphson algorithm. If this procedure fails, a more reliable, although slower, optimization algorithm is used.
- Optimization with scaling and constraints is supported. In PSI the user can define the output of an arbitrary block as the criterion and up to four arbitrary parameters of the simulation model as parameters of the optimization. The parameters that offer the smallest value of the criterion will be accepted as the solution of the optimization procedure. The Simplex method of Nelder and Mead (1965) has been selected as the minimization procedure, due to its robustness and lack of a line-minimization procedure. Although the Simplex method adjusts its search step size according to the "shape" of the criterion, improvement of the speed of convergence can be obtained by using scaling. Scaling can make each parameter about equally important for the optimization algorithm. Not only is scaling supported by PSI, but constraints are also allowed. Each parameter may have an upper and a lower limit. The optimization algorithm will only search for an optimum in the feasible region of the parameter space.
- Multi-run facilities are available, for example, run-control blocks, comparison of signals between several runs, etc.
- Extensive tests on all user-supplied information are implemented. Each error is indicated by a meaningful error message, of which there are about 60.
- About 40 powerful block types are available. Fortran programming in a non-interactive mode is required to define new block types.
The user only needs to write a subroutine in which the output is defined as a function of the input(s) and parameter(s), compile it and, after a link step, his block is available.
- There are four memories to store up to four signals during a simulation run. These signals can be studied after the simulation run, can be saved on disk or can be used as inputs for future runs. These signals can be redrawn on the screen, as responses or as phase trajectories, after which a cursor, controlled by keyboard commands, can "walk along" these responses. The numerical values appear directly on the screen, so that overshoot, rise time or accuracy can be determined both quantitatively and qualitatively.
- Symbolic block names can be used. Instead of numbers, each block or variable can be assigned a user-selected name of up to eight characters. So blocks can get meaningful names like PRESSURE, SPEED or OUTPUT instead of abstract numbers like block 13, 91 or 512, etc.
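The effect of scaling and parameter bounds can be mimicked outside PSI as follows (Python; the bounds, the quadratic stand-in criterion and the random search are all illustrative assumptions, and PSI itself uses the Simplex method, not a random search):

```python
import numpy as np

# The optimizer works in a normalized space [0, 1]^n; the wrapper maps
# each point to physical units and rejects points outside the bounds.
lower = np.array([0.1, 10.0])        # e.g. a gain and a time constant
upper = np.array([5.0, 500.0])

def raw_criterion(params):           # stand-in for a simulation-based cost
    k, tau = params
    return (k - 2.0) ** 2 + ((tau - 100.0) / 100.0) ** 2

def scaled_criterion(z):
    if np.any(z < 0.0) or np.any(z > 1.0):
        return np.inf                # infeasible: outside the parameter box
    return raw_criterion(lower + z * (upper - lower))

# In the scaled space both parameters are equally important to the search,
# even though their physical ranges differ by two orders of magnitude.
zs = np.random.default_rng(1).random((4000, 2))
best_z = min(zs, key=scaled_criterion)
k_best, tau_best = lower + best_z * (upper - lower)
```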
This section has described a number of facilities which make programs such as PSI highly suited to system analysis and system design. PSI is able to solve (non-linear) differential, difference, algebraic and logical, Boolean equations or any mixture of them. Moreover, an attractive and powerful interaction is realized between the user and the program.
A real-time version of PSI can be used to realize a real-time process or a real-time controller which, by means of AD- and DA-converters, is connected with some laboratory equipment or another computer. So, in a flexible and interactive way, quite complex processes and controllers can be easily realized.

Limitations

Still, PSI has limitations. These limitations arise from the minimum hardware requirements and the bounded facilities supported by PSI. As a consequence of the choice of Fortran as programming language, the many tests on input data and the extensive error messages, PSI has become quite large. Therefore, PSI has to run in a 28k (16 bits) computer in an overlay environment, so that a fast background memory, for example a floppy disk or hard disk, is a prerequisite. The minimum hardware configuration consists of a processor with a 28k (16 bits) memory, a terminal and a floppy-disk unit. A display is not necessary but very valuable. Moreover, an operating system with a Fortran compiler has to be available.

Like most other interactive, block-oriented simulation programs, PSI does not support special facilities to solve partial differential equations, stiff systems and polynomial or matrix equations. These programs deal with single-valued variables, and consequently not with vectors and matrices. The solution of the Riccati equation of a second-order system is possible, but the solution of this equation for higher-order systems cannot be obtained easily.

EXAMPLE

The example given here illustrates the use of the combination of simulation and optimization in designing a controller for the attitude control system of a satellite. The attitude control system of a satellite has two main tasks, namely to realize large angle maneuvers to point the satellite from one position in space to another ("slewing") and to direct the satellite as accurately as possible at some object in space ("pointing"). In Van den Bosch and Jongkind (1979) a model-reference adaptive attitude control system has been described for a three-axes slew.

In the pointing mode another type of controller has been implemented. Each axis is designed separately. The basic control scheme is illustrated in Fig. 6. The reaction wheel delivers a control torque that can compensate for the disturbance torque Td. This torque Td partly originates from the gyroscopic coupling between the three axes of the satellite/reaction wheels and from external disturbances. The satellite, the reaction wheel and the sensor dynamics are continuous. The controller is implemented in an on-board computer and realizes a discrete PID-control algorithm. In addition, both the sensor and the controller introduce quantization effects, while the maximum input signal to the reaction wheel is limited. The reaction wheel has many non-linearities arising from friction (static as well as dynamic) and saturation effects.

This mixed continuous-discrete non-linear system cannot be analyzed with an analytical design method without eliminating the quantization effects and the non-linearities, which play an important role in the pointing mode. Simulation offers the possibility and the flexibility to design this attitude control system. Optimization has been carried out to find optimal PID settings for an integral error-squared criterion.

The simulation and optimization approach is a nice design method, although some pitfalls have to be avoided.
Fig. 6. Attitude control system for one axis of a satellite.
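A much-simplified sketch of the loop of Fig. 6 (Python; the double-integrator axis model, sample time, quantization step, torque limit, disturbance and PID gains are all assumed numbers for illustration, not those of the real satellite):

```python
# One-axis pointing loop: satellite axis as a double integrator, a discrete
# PID in the on-board computer, sensor quantization and torque saturation.
def simulate_axis(kp, ki, kd, t_end=50.0, dt=1e-3, ts=0.1,
                  q=1e-4, t_max=0.01, td=2e-4):
    theta, omega = 1e-2, 0.0          # initial pointing error [rad] and rate
    integ, prev_err, u = 0.0, 0.0, 0.0
    t, next_sample = 0.0, 0.0
    while t < t_end:
        if t >= next_sample:          # discrete PID, sample time ts
            err = -q * round(theta / q)          # quantized measurement
            integ += err * ts
            u = kp * err + ki * integ + kd * (err - prev_err) / ts
            u = max(-t_max, min(t_max, u))       # reaction-wheel saturation
            prev_err = err
            next_sample += ts
        omega += dt * (u + td)        # td: constant disturbance torque
        theta += dt * omega           # continuous double-integrator axis
        t += dt
    return theta

theta_final = simulate_axis(kp=0.5, ki=0.05, kd=1.0)
```

Wrapping `simulate_axis` in a criterion (e.g. the integral of the squared pointing error) and handing the three gains to an optimizer is exactly the design loop of Fig. 3 applied to this mixed continuous-discrete system.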
- A nice result was obtained for the attitude control system by using 100 seconds of simulation time. This controller satisfies the requirements for speed and accuracy. Yet, it turned out that this system was unstable. This could not be detected by the optimization due to the short simulation time (100 seconds) and the non-linearities of the system.
- There may be room for discussion about the criterion. Nearly all criteria based on the integral of the error offer reasonable responses. Sometimes it seems profitable to use a more complex criterion. Then undesired responses may be found. For example, taking into account only the error at the end of the simulation run, i.e. e(T), may offer a very oscillatory system in which, indeed, e(T) = 0. Using a criterion that punishes overshoot may result in responses with or without some overshoot but with a considerable undershoot, etc.
- The choice of the input or disturbance signal is very important. These signals have to reflect the desired characteristics, or else undesired results may occur.

CONCLUSIONS

The value of simulation and optimization for system analysis and system design has been discussed. It appears that many systems can only be studied by using simulation techniques. But even when analytical methods are available, simulation and optimization have their own unique merits. Yet, it has to be stressed that a user of these facilities should be aware of the potentialities as well as of the limitations and pitfalls of the proposed analysis and design method. Facilities which allow the use of both simulation and optimization in an interactive way have to be available. It has been illustrated that interactive simulation programs such as PSI are very well suited to use in interactive system analysis and system design.

REFERENCES

Bosch, P.P.J. van den (1979). PSI - An Extended, Interactive Block-Oriented Simulation Program. Proceedings IFAC Symposium on Computer Aided Design of Control Systems, Zurich (223-228).
Bosch, P.P.J.
van den and P.M. Bruijn (1979). Requirements and Use of CAD Programs for Control System Design. Proceedings IFAC Symposium on Computer Aided Design of Control Systems, Zurich (459-464).
Bosch, P.P.J. van den and W. Jongkind (1980). Model Reference Adaptive Satellite Attitude Control. In: Methods and Applications in Adaptive Control (H. Unbehauen, editor), Springer Verlag (209-218).
Bosch, P.P.J. van den (1981). Manual of PSI. Laboratory for Control Engineering, Delft University of Technology (64 pages).
Hirzinger, G. (1975). Decoupling Multivariable Systems by Optimal Control Techniques. Int. J. of Control, vol 22, no 2 (157).
Mayne, D.Q., E. Polak and A. Sangiovanni-Vincentelli (1982). Computer Aided Design via Optimization: A Review. Automatica, vol 18, no 2 (147-154).
Mota, M.M. (1979). A Contribution to the Development of Flow-Through Clean-Up Systems in Chemical Analysis. PhD Thesis, Utrecht University, The Netherlands.
Nelder, J.A. and R. Mead (1965). A Simplex Method for Function Minimization. The Computer Journal, vol 7 (308-312).
Schwefel, H.P. (1981). Numerical Optimization of Computer Models. John Wiley and Sons, Chichester.
Sirisena, H.R. and Choi (1977). Minimal Order Compensators for Decoupling and Arbitrary Pole-Placement in Linear Multivariable Systems. Int. J. of Control, vol 25, no 5.
Smith, P.G. (1971). Numerical Solution of the Matrix Equation AX + XAᵀ + B = 0. IEEE Trans. on Autom. Control, June (278-279).
Talmon, J.L. and A.J.W. van den Boom (1973). On the Estimation of the Transfer Function of Process and Noise Dynamics Using a Single Estimator. Proc. IFAC Symposium on Identification and Process Parameter Estimation, The Hague.
Zakian, V. (1970). Optimisation of Numerical Inversion of Laplace Transforms. Elec. Letters, vol 6, no 21 (677-679).
Zakian, V. (1979). New Formulation for the Method of Inequalities. Proc. IEE, vol 126, no 6 (579-584).