Accepted Manuscript

Title: Graph Representation and Decomposition of ODE/Hyperbolic PDE Systems
Authors: Manjiri Moharir, Lixia Kang, Prodromos Daoutidis, Ali Almansoori
PII: S0098-1354(17)30276-4
DOI: http://dx.doi.org/10.1016/j.compchemeng.2017.07.005
Reference: CACE 5861
To appear in: Computers and Chemical Engineering
Received date: 16-3-2017
Revised date: 6-6-2017
Accepted date: 15-7-2017

Please cite this article as: Manjiri Moharir, Lixia Kang, Prodromos Daoutidis, Ali Almansoori, Graph Representation and Decomposition of ODE/Hyperbolic PDE Systems, (2017), http://dx.doi.org/10.1016/j.compchemeng.2017.07.005. This is a PDF file of an unedited manuscript that has been accepted for publication. As a service to our customers we are providing this early version of the manuscript. The manuscript will undergo copyediting, typesetting, and review of the resulting proof before it is published in its final form. Please note that during the production process errors may be discovered which could affect the content, and all legal disclaimers that apply to the journal pertain.

Highlights

• Equation graph representation of convection-reaction system variables is proposed.

• Structural interactions among variables are calculated from the equation graphs.

• The chemical network is decomposed based on the structural interaction measure.

• Optimal distributed control structure is obtained based on a modularity measure.


Graph Representation and Decomposition of ODE/Hyperbolic PDE Systems

Manjiri Moharir (a), Lixia Kang (b), Prodromos Daoutidis (a,*), Ali Almansoori (c)

(a) Department of Chemical Engineering and Materials Science, University of Minnesota, MN, 55455, USA
(b) Department of Chemical Engineering and Technology, Xi'an Jiaotong University, Shaanxi, China
(c) Department of Chemical Engineering, The Petroleum Institute, Abu Dhabi, United Arab Emirates

* Corresponding author: Prodromos Daoutidis (URL: http://research.cems.umn.edu/daoutidis/)

Abstract

This paper deals with the decomposition of process networks consisting of distributed parameter systems modeled by first-order hyperbolic partial differential equations (PDEs) and lumped parameter systems modeled by ordinary differential equations (ODEs) into compact, weakly interacting subsystems. A structural interaction parameter (SIP) generalizing the concept of relative degree in ODE systems to first-order hyperbolic PDE systems is defined. An equation graph representation of these systems is developed for efficient calculation of SIPs. An agglomerative (bottom-up) hierarchical clustering algorithm and a divisive (top-down) algorithm are used to obtain hierarchical decompositions based on the SIPs. Modularity maximization is used to select the optimal decomposition. A network of two absorbers and two desorbers serves as a case study. The optimal decompositions of this network obtained from both algorithms illustrate the effectiveness of the graph-based procedure in capturing key structural connectivity properties of the process network.


Keywords: distributed control, distributed parameter system, graph theory, process network decomposition

1. Introduction

Chemical plants are complex, integrated networks of process systems that can be broadly classified as lumped parameter systems or LPS's, which are described by ordinary differential equations (ODEs) (e.g., well-mixed reactors, staged separators, etc.), and distributed parameter systems or DPS's (e.g., heat exchangers, plug-flow reactors, etc.), which are described by partial differential equations (hyperbolic or parabolic PDEs). The individual process systems' dynamics and the emerging dynamic behavior due to the interconnections among the
systems limit the effectiveness of fully decentralized control of the network [1]. Also, fully centralized control has practical limitations in terms of design and tuning of the controller [2]. Distributed control, a middle ground between fully decentralized and fully centralized control, is commonly used for the control of networked systems [3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13]. It involves the decomposition of the network into smaller subsystems, which are controlled individually with some transfer of information among the controllers of each subsystem.

The decomposition of a network into sub-networks for distributed control is an open and challenging problem [13]. Some methods have been proposed in the literature [14, 15], but there is no generically applicable framework for computing decompositions which are optimal in a well-defined sense [13]. Recently, methods from network theory have been used for systematic decomposition based on the strength of the variable interactions. The contributions in [16, 17, 18, 19, 20, 21, 22, 23] highlight relative degree as a measure of structural connectivity among a system's variables that can be used to address this problem. The relative degree of a controlled variable with respect to a manipulated variable for continuous-time ODE systems is the smallest order of the time derivative of the output that directly depends on the input. It therefore captures how direct the effect of an input is on an output, and how sluggish the corresponding response is. In [16], it was shown that relative degrees could be calculated from an equation graph capturing the connectivity among the variables in an ODE system, as the shortest path length between the input and the output nodes. Building on this idea, hierarchical clustering procedures [19, 20] have been developed to determine input/output clusters for process networks modeled by ODE systems at various levels of decentralization based on structural closeness (in a relative degree sense) within and among clusters. A notion of modularity [24] was used in [21] to evaluate the resulting hierarchy of clusters and identify the optimal decomposition. This approach results in the identification of input-output clusters that are allocated to separate controllers, but does not explicitly allocate state variables to the resulting clusters; yet state variables represent information sharing in a distributed control framework. An algorithm proposed in [25] considers the network equation graph, which is bisected in a hierarchical manner into smaller blocks so as to minimize input-state and state-state interactions among the resulting clusters. The change in the modularity value [26] with each bisection is maximized to obtain the optimal decomposition. The optimal decomposition thus obtained has been shown to allow for computationally more efficient solutions of the distributed model predictive control design problem without compromising closed-loop performance relative to centralized model predictive control [27].

The above-mentioned algorithms, taking advantage of well-established graph theory algorithms, are generically applicable to and well suited for large process networks. They are, however, limited to networks modeled by ODE systems. Chemical plants often comprise interconnected LPS's and DPS's. Currently there exists no method to represent networks of a combination of ODE and PDE systems on a common equation graph that captures the interconnections among all the network variables.
A very commonly encountered class of DPS's is that of convection-reaction processes (e.g., heat exchangers, plug flow reactors, adsorption and absorption columns, etc.), which are modeled by first-order hyperbolic PDEs. This paper proposes an extension of the aforementioned methods to include variables of such first-order hyperbolic PDE systems.


Specifically,

• we propose a structural interaction parameter (SIP) which quantifies the interactions among the variables of first-order hyperbolic PDE systems in a way analogous to relative degree in ODE systems;

• we propose an equation graph representation for hyperbolic PDE systems and establish the graph theoretic interpretation of the structural interaction parameter in terms of the length of input-output paths; and

• we generalize the methods in [21, 19] and [25] to allow for decomposition of process networks modeled by ODE and hyperbolic PDE systems based on the structural interaction parameters.

A small network with two reactors is used to illustrate the construction of the equation graphs. The application of the proposed process network decomposition methods is illustrated via a case study on an absorption-desorption network.

2. Graph Representations of Hyperbolic PDE Systems

We consider linear, first-order hyperbolic PDE systems of the form:
\[
\frac{\partial x}{\partial t} = A\,\frac{\partial x}{\partial z} + B\,x + G\,u(z,t) \qquad (1)
\]

where x(z,t) = [x_1(z,t) · · · x_{n_x}(z,t)]^T ∈ R^{n_x} is the vector of distributed state variables, n_x is the number of state variables, z ∈ [0, L] ⊂ R is the spatial coordinate (L being the length of the spatial domain), t ∈ [0, ∞) is the time, and A, B, and G are constant matrices of conforming dimensions. The matrix A is diagonal and contains the velocities v_i contributing to the convective transport in the system. Non-linear first-order hyperbolic PDEs can be linearized around a steady state to the generic form described above. The boundary conditions are:
\[
K_1\,x(0,t) + K_2\,x(L,t) = R(t) \qquad (2)
\]
where K_1 and K_2 are constants and R(t) is a smooth function of time. The initial conditions are:
\[
x(z,0) = x_0(z) \qquad (3)
\]
The manipulated inputs to the system could be of the following types:

• Velocity inputs: the elements of the diagonal matrix A.

• Distributed inputs: the elements of the vector u(z,t).

• Boundary inputs: the elements of the vector R(t).

The controlled outputs of the system could be of the following types:


• Distributed outputs: functions of state variables that are dependent on space and time:
\[
y_d(z,t) = H_d\,x \qquad (4)
\]
where H_d is a constant matrix of conforming dimensions.

• Boundary outputs: functions of the values of the state variables at a certain spatial location (typically a boundary), that are functions of time alone, e.g.,
\[
y_b(t) = H_b\,x|_{z=L} \qquad (5)
\]
where H_b is a constant matrix of conforming dimensions.

Boundary inputs and outputs can be treated as a special case of distributed ones, as will be discussed in the subsequent sections.

Figure 1: Types of input-output combinations: (a) Velocity input - Boundary output, (b) Distributed input - Distributed output, (c) Boundary input - Boundary output

Figure 1 shows combinations of different types of inputs and outputs that are commonly observed in distributed chemical processes modeled by first-order hyperbolic PDEs. The goal is to develop a method to represent on a graph the structural interactions among the input, state and output variables of multiple DPS's of the above form possibly connected to LPS's. The first step to this end is the selection of parameters that quantify the structural proximity among the variables. In what follows, we define the structural interaction parameter (SIP) for specific, typical combinations of inputs and outputs, allowing for a natural generalization of the concept of relative degree as a measure of structural interactions and a unified graph representation of such interactions for hyperbolic PDE systems.


2.1. Structural Interaction Parameters for Hyperbolic PDEs

1. Velocity Input - Boundary Output Case: In this case (figure 1-(a)), if the fluid is assumed to be incompressible, then despite the distributed nature of the system, the input and output variables depend solely on time (as in the case of an ODE system). In [28], a concept analogous to relative degree was postulated for such systems under the assumption that there is a single stream of an incompressible fluid flowing in the system. For such systems, the SIP of an output with respect to the manipulated flow velocity can be defined as this relative degree analog, i.e. the smallest order time derivative of the output which explicitly depends on the manipulated flow velocity. For example, if the first time-derivative of the output y_j (j ∈ {1, n_y}), given by:
\[
\frac{dy_j}{dt} = \sum_{i=1}^{n_x} h^{b}_{ji}\, v_i \left.\frac{\partial x_i}{\partial z}\right|_{z=L} + \left.\left( h^{b,T}_{j} B x \right)\right|_{z=L} \qquad (6)
\]
where h^b_{ji} is the (j,i)th element of H_b and h^{b,T}_j is the jth row of H_b, explicitly depends on the velocity v_i, that is:
\[
h^{b}_{ji} \left.\frac{\partial x_i}{\partial z}\right|_{z=L} \neq 0 \qquad (7)
\]
then the SIP of y_j with respect to v_i is equal to 1. If equation (7) is not satisfied, then the SIP is greater than 1, and can be determined by further differentiation of the output variable. This concept can be directly extended to systems with multiple flow velocities (e.g., counter-current heat-exchangers, absorber columns, etc.).

2. Distributed Input - Distributed Output Case: In this case, the input and the output are functions of space and time as shown in figure 1-(b). Consider a single-input single-output (SISO) first-order hyperbolic PDE system as described by equation (1). In the case of distributed actuation, it is customary to assume that the distributed input variable, u(z,t), is of the form of a finite number (n_z) of actuators distributed over the length of the DPS, and each actuator acts on a spatial interval (e.g., [z_{k-1}, z_k] ∀ k ∈ {1, n_z}) [29]. Mathematically, this implies that:
\[
u(z,t) = p_k(z)\,\hat{u}_k(t) \qquad (8)
\]
where p_k(z) is a smooth function defined over the kth spatial interval (z ∈ [z_{k-1}, z_k]). Also, the variable y_d(z,t) = h_d^T x(z,t) is usually controlled over a finite number of spatial intervals ([z_{k-1}, z_k] ∀ k ∈ {1, n_z}) by considering n_z spatially averaged output variables given by:
\[
\hat{y}_k(t) = \mathcal{C}_k\, y(z,t) \qquad (9)
\]
where \mathcal{C}_k is a bounded operator defined over the interval [z_{k-1}, z_k] of the form:
\[
\mathcal{C}_k\, y(z,t) = \int_{z_{k-1}}^{z_k} c_k(z)\, y(z,t)\, dz \qquad (10)
\]

with c_k(z) a smooth function. Hence, the variables whose interactions are relevant are:
\[
\hat{u}(t) = [\hat{u}_1(t) \ldots \hat{u}_{n_z}(t)]^T \qquad (11)
\]
\[
\hat{y}(t) = [\hat{y}_1(t) \ldots \hat{y}_{n_z}(t)]^T \qquad (12)
\]
These are finite dimensional vectors, which depend solely on time. For these finite dimensional input and output vectors, the concept of characteristic index σ^(k) of ŷ_k with respect to û_k was defined in [29] as the smallest order of time derivative of ŷ_k in which û_k appears explicitly. This is equivalent to determining the smallest integer σ^(k) for which the following equation is satisfied:
\[
h_d^T \left( A\,\frac{\partial}{\partial z} + B \right)^{\sigma^{(k)} - 1} g \neq 0 \qquad (13)
\]
Typically, the characteristic indices for all pairs {û_k/ŷ_k} are equal, and the corresponding value (σ^(1) = σ^(2) = ... = σ^(n_z) = σ) can be viewed as the characteristic index of ŷ(t) with respect to û(t) [29]. This concept was defined for a SISO system in [29], but it can be directly extended to a multiple-input multiple-output (MIMO) system. This concept of characteristic index will be used as the SIP in this case of distributed actuation.

3. Boundary Input - Boundary Output Case: This can be considered as a specific instance of the distributed input - distributed output case, wherein the input has the form:

\[
u(z,t) = \delta(z)\, x_m(z,t), \qquad m \in \{1, n_x\} \qquad (14)
\]
and the output has the form:
\[
y(z,t) = \delta(z-L)\, h_b^T x \qquad (15)
\]
where δ(z) is the Dirac delta function. Note that the input and output variables above are at different spatial locations (noncollocated). If the SIP definition from the distributed input - distributed output case is directly applied to this case, the SIP of the output with respect to the input would be ∞ [30]. To overcome this problem, the contribution in [31] defined a modified output variable as:
\[
\hat{y}(t) = \int_0^L c(z)\, h_b^T x\, dz \qquad (16)
\]
where c(z) is a smooth shaping function defined on [0, L]. The first time-derivative of the new output is given by:
\[
\frac{d\hat{y}(t)}{dt} = \int_0^L c(z)\, h_b^T \frac{\partial x}{\partial t}\, dz \qquad (17)
\]
Substituting for ∂x/∂t from the system dynamics with the input given by equation (14), we get:
\[
\frac{d\hat{y}(t)}{dt} = \int_0^L c(z)\, h_b^T \left( A\,\frac{\partial x}{\partial z} + B\,x + G\,\delta(z)\, x_m(z,t) \right) dz \qquad (18)
\]
The characteristic index of ŷ(t) with respect to input u(z,t) will be finite if and only if c(z) is chosen such that:
\[
\int_0^L c(z)\,\delta(z)\, dz \neq 0 \qquad (19)
\]
Hence, any smooth function c(z) that approximates δ(z − L) but also satisfies c(0) ≠ 0 can be chosen for the new output to "collocate" the input and output variables [30]. We adopt this definition of the characteristic index as the SIP for the case of boundary input - boundary output. This definition can be directly extended to a MIMO boundary input - boundary output case too.
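As a simple illustration of how the criterion in equation (13) is evaluated (the scalar system below and the symbols v, b, g, p(z), c(z) are our own, not taken from the case studies of this paper), consider one state variable with a single distributed actuator and a single spatially averaged output:

% Illustrative scalar convection-reaction system (our example, not from the paper).
\[
\frac{\partial x}{\partial t} = -v\,\frac{\partial x}{\partial z} + b\,x + g\,p(z)\,\hat{u}(t),
\qquad
\hat{y}(t) = \int_0^L c(z)\, x(z,t)\, dz .
\]
% First time derivative of the averaged output:
\[
\frac{d\hat{y}}{dt} = \int_0^L c(z)\left( -v\,\frac{\partial x}{\partial z} + b\,x \right) dz
\;+\; \hat{u}(t)\, g \int_0^L c(z)\, p(z)\, dz .
\]
% If g \neq 0 (and c(z), p(z) overlap), \hat{u} appears in the first derivative, so the
% characteristic index / SIP is \sigma = 1; equivalently h_d (A\,\partial_z + B)^0 g = g \neq 0
% in the notation of equation (13). On the corresponding equation graph the shortest
% input-output path is \hat{u} \to \hat{x} \to \hat{y}, of length l = 2 = \sigma + 1,
% consistent with the graph-theoretic results stated in the next subsection.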

2.2. Equation Graphs of a Network of Hyperbolic PDE Systems

We now propose a graph representation of a network comprising hyperbolic PDE systems. This representation can be seamlessly integrated with a standard equation graph representation of ODE systems, thus allowing the graph theoretic analysis of networks comprising both lumped and distributed parameter systems. For this purpose, the following graph representation of a network is proposed:

• There is a node for every input variable, which could be a manipulated velocity v_i(t), a finite dimensional distributed input û_i(t) (equation (11)), or a boundary manipulation x_i(z_b, t) acting at spatial location z_b = 0 or L.

• There is a node for every output variable, which could be the finite dimensional distributed output ŷ_j(t) (equation (12)), or a spatially averaged output ŷ(t) defined to capture the boundary outputs (equation (16)).

• There is a node for every spatially averaged state variable defined by:
\[
\hat{x}_m(t) = \int_0^L c_m(z)\, x_m(z,t)\, dz \qquad (20)
\]
where c_m(z) is a smooth shaping function defined on [0, L]. The spatial averaging for the state variables is performed for consistency, since every input and output variable on the equation graph is a function of time alone. The weight functions (c_m(z) in equation (20)) are consistent with those used for the output variables.

• There is an edge from v_i(t) to x̂_m(t) if v_i(t) contributes to the convective transport of state variable x_m(z,t).

• There is an edge from û_i(t) to x̂_m(t) if û_i(t) appears in the first-order time derivative of x̂_m(t).

• There is an edge from x_i(z_b, t) to x̂_m(t) if x_i(z_b, t) is the boundary condition of x_m(z,t).

• There is an edge from x̂_m(t) to x̂_n(t) if x̂_m(t) appears in the first-order time derivative of x̂_n(t).

• There is an edge from x̂_m(t) to any output variable node if the output variable is a linear function of the state variable x̂_m(t).

In this equation graph,

• A path is defined as a sequence of edges such that an edge terminates at the node from which the succeeding edge of the sequence begins [16]. An input-output path begins at an input node and terminates at an output node.

• The number of edges contained in a path is called the length of the path.

M

Theorem 1: In the equation graph of a linear first-order hyperbolic PDE system with velocity manipulation vi (t) and a boundary output yˆj (t), let the length of the shortest path from vi (t) to yˆj (t) be lij and the corresponding SIP be σij . Then (21)

ed

σij = lij − 1

The proof of Theorem 1 is given in the Appendix A.

Ac ce

pt

Theorem 2: In the equation graph of a linear first-order hyperbolic PDE system with a ˆ i (t) and a distributed output y ˆj (t), let the length of the shortest path from distributed input u ˆ i (t) to output y ˆj (t) be lij and the corresponding SIP be σij . Then input u σij = lij − 1

(22)

The proof of Theorem 2 is given in Appendix B. Corollary 1: In the equation graph of a linear first-order hyperbolic PDE system with a boundary input ui (t) and a boundary output yˆj (t), let the length of the shortest path from input ui (t) to output yˆj (t) be lij and the corresponding SIP be σij . Then σij = lij − 1

(23)

Corollary 1 follows from Theorem 2, because a boundary input - boundary output pair is a special case of a distributed input - distributed output pair; its proof is therefore omitted.

8

Page 9 of 28

us

cr

ip t

2.3. Illustrative Example

The proposed graph representation is now illustrated by an example comprising both lumped and distributed parameter systems. Specifically, consider the network shown in figure 2: an ideal continuous stirred tank reactor (CSTR, an ODE system) connected in series with a plug flow reactor (PFR, a first-order hyperbolic PDE system).

Figure 2: CSTR and PFR in series

The fluid in these reactors is incompressible. A reactant at concentration C_0(t) is fed to the CSTR, where it reacts at concentration C_1(t). The temperature in the CSTR is T_1(t). In the PFR, the concentration profile is C_2(z,t) and the temperature profile is T_2(z,t). The rate constant of the reaction follows the Arrhenius equation, k(T) = k_0 e^{-E_A/RT}, and the enthalpy of the reaction is ΔH. The manipulated variables in the network are:

• v(t): the flow velocity of the fluid in the system.

• Q(t): the heat input given to the CSTR.

• T_S(z,t): the temperature profile in the jacket of the PFR.

The controlled outputs in the network are:

• T_1(t): the temperature in the CSTR.

• T_2(L,t): the temperature at the exit of the PFR.

• C_2(L,t): the concentration of the reactant at the exit of the PFR.

The CSTR has volume V_1 and residence time τ_1. The PFR has space-time τ_2. The fluid has density ρ and specific heat capacity C_p. The material and energy balance equations, with the boundary inputs incorporated using the Dirac delta function, are given by:
\[
\frac{dC_1}{dt} = \frac{C_0 - C_1}{\tau_1} - k(T_1)\,C_1 \qquad (24)
\]
\[
\frac{dT_1}{dt} = \frac{T_0 - T_1}{\tau_1} + \frac{Q}{\rho C_p V_1} + \frac{-\Delta H}{\rho C_p}\, k(T_1)\,C_1 \qquad (25)
\]
\[
\frac{\partial C_2}{\partial t} = -v\,\frac{\partial C_2}{\partial z} - k(T_2)\,C_2 + v\,\delta(z)\,C_1 \qquad (26)
\]
\[
\frac{\partial T_2}{\partial t} = -v\,\frac{\partial T_2}{\partial z} + \frac{hA\,(T_s - T_2)}{\rho C_p} - \frac{\Delta H}{\rho C_p}\, k(T_2)\,C_2 + v\,\delta(z)\,T_1 \qquad (27)
\]
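For concreteness, the PFR subsystem of equations (26)-(27) can be cast in the generic form (1). The working below is our own (it assumes linearization around a steady-state profile and holds the flow velocity at its nominal value); k' denotes dk/dT evaluated at the steady state:

% Deviation variables around a steady state (\bar{C}_2(z), \bar{T}_2(z)) of the PFR:
\[
x = \begin{bmatrix} C_2 - \bar{C}_2 \\ T_2 - \bar{T}_2 \end{bmatrix}, \qquad
A = \begin{bmatrix} -v & 0 \\ 0 & -v \end{bmatrix}, \qquad
B = \begin{bmatrix}
 -k(\bar{T}_2) & -k'(\bar{T}_2)\,\bar{C}_2 \\[2pt]
 -\tfrac{\Delta H}{\rho C_p}\, k(\bar{T}_2) & -\tfrac{hA}{\rho C_p} - \tfrac{\Delta H}{\rho C_p}\, k'(\bar{T}_2)\,\bar{C}_2
\end{bmatrix},
\]
\[
G\,u(z,t) = \begin{bmatrix} v\,\delta(z)\,C_1(t) \\[2pt] \tfrac{hA}{\rho C_p}\, T_S(z,t) + v\,\delta(z)\,T_1(t) \end{bmatrix} .
\]
% Thus the jacket temperature T_S acts as a distributed input, while the CSTR states
% C_1 and T_1 enter as boundary inputs through the \delta(z) terms, i.e. exactly the
% input classes listed in Section 2.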

M

Figure 3: Equation graph of CSTR-PFR in series

The equation graph for this network, after reformulating the variables as discussed previously, is given in figure 3. It can be seen from the equation graph that the length of the path from v(t) to T_1(t) is 2. Equivalently, v(t) appears in the first time derivative of T_1(t), so the relative degree of T_1(t) with respect to v(t) is 1. The same holds for all other input-output interactions. The SIPs can thus be calculated from the equation graph, using the theorems and the corollary stated above, as:

           T_1(t)   Ĉ_2(t)   T̂_2(t)
  v(t)   [    1        1        1   ]
  Q(t)   [    1        3        2   ]
  T̂_S(t) [    ∞        2        1   ]
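A minimal sketch of how such an equation graph can be assembled in software and the SIPs read off as shortest path lengths, using the open-source networkx library (the node names and edge list below are our own rendering of figure 3, not code from the paper):

```python
import networkx as nx

# Directed equation graph for the CSTR-PFR network (our rendering of figure 3).
# Inputs: v, Q, Ts_hat; averaged states: C1, T1, C2_hat, T2_hat; output nodes: y_*.
G = nx.DiGraph()
G.add_edges_from([
    ("v", "C1"), ("v", "T1"),                    # v appears in both CSTR balances
    ("v", "C2_hat"), ("v", "T2_hat"),            # and in the PFR convective terms
    ("Q", "T1"),                                 # heat input in the CSTR energy balance
    ("Ts_hat", "T2_hat"),                        # jacket temperature in the PFR energy balance
    ("C1", "T1"), ("T1", "C1"),                  # reaction couples the CSTR states
    ("C1", "C2_hat"), ("T1", "T2_hat"),          # CSTR states are PFR boundary conditions
    ("C2_hat", "T2_hat"), ("T2_hat", "C2_hat"),  # reaction couples the PFR states
    ("T1", "y_T1"), ("C2_hat", "y_C2"), ("T2_hat", "y_T2"),  # output nodes
])

inputs, outputs = ["v", "Q", "Ts_hat"], ["y_T1", "y_C2", "y_T2"]
for u in inputs:
    # BFS shortest paths (equivalent to Dijkstra's algorithm with unit edge weights)
    lengths = nx.single_source_shortest_path_length(G, u)
    sips = [lengths[y] - 1 if y in lengths else float("inf") for y in outputs]
    print(u, sips)
# Expected, matching the SIP matrix above:
#   v      [1, 1, 1]
#   Q      [1, 3, 2]
#   Ts_hat [inf, 2, 1]
```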

3. Process Network Decomposition

Two algorithms are proposed for the decomposition, adopted from the corresponding algorithms for pure ODE systems. The first algorithm considers the input-output interaction strength as captured by the SIP values, and determines hierarchical decompositions of only the input and the output variables [19]. This algorithm follows a bottom-up approach; that is, starting from a fully decentralized control configuration, the variables are clustered hierarchically up to a fully centralized configuration. The optimal decomposition is selected based on the modularity values of the decompositions at the various hierarchical levels [21]. The second algorithm considers the interactions among the input, state and output variables, and determines the optimal decomposition of these variables based on modularity maximization [25]. This algorithm follows a top-down approach; that is, starting from a fully centralized configuration, the clusters of input, output and state variables are bisected until further decomposition does not yield an increase in modularity.

3.1. Input/Output Clustering

Step 1: Structural Interaction Matrix Calculation

We consider a network of interconnected systems modeled by ODEs and first-order hyperbolic PDEs. From the network model, the equation graph of the network is obtained. The shortest path lengths from each input variable to each output variable are calculated from the equation graph using Dijkstra's algorithm, which determines the shortest path between any two vertices of a graph. We form the structural interaction matrix (SIM), M, as a matrix whose elements σ_ij are the SIPs (if it is a PDE system) or the relative degrees (if it is an ODE system) of the output y_j with respect to the input u_i.
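A hedged sketch of this step as a reusable routine (the function name and the array layout are ours): given any equation graph built as in Section 2.2, the SIM follows from one shortest-path computation per input node.

```python
import numpy as np
import networkx as nx

def structural_interaction_matrix(G, inputs, outputs):
    """Build the SIM M of Step 1: M[i, j] is the SIP (or relative degree) of
    output j with respect to input i, i.e. shortest path length minus one.
    G is the directed equation graph; unreachable outputs get np.inf."""
    M = np.full((len(inputs), len(outputs)), np.inf)
    for i, u in enumerate(inputs):
        # BFS shortest path lengths from the input node (unit edge weights, so
        # this coincides with Dijkstra's algorithm mentioned in the text).
        lengths = nx.single_source_shortest_path_length(G, u)
        for j, y in enumerate(outputs):
            if y in lengths:
                M[i, j] = lengths[y] - 1
    return M
```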

Step 2: Decentralized Decomposition

An optimal fully decentralized decomposition is obtained by maximizing the value of [19]:
\[
J_{DC} = \sum_{i=1}^{n_u} \sum_{j=1}^{n_u} (\sigma_{ij} - \sigma_{ii}) \qquad (28)
\]
i.e., pairing inputs and outputs so that the differences between the off-diagonal and diagonal SIPs are maximized. n_u is the number of input and output variables in the network (assumed equal). We define the structural decoupling matrix (SDM) whose ij-th element is given by [21]:
\[
s_{ij} = \max\left\{ \sum_{k=1}^{n_u} \sigma_{ik} + \sum_{k=1}^{n_u} \sigma_{kj} - (n_u + n_u)\,\sigma_{ij},\; 0 \right\} \qquad (29)
\]
s_ij is a measure of how decoupled an input-output pair is from all other inputs and outputs in the network. Using s_ij, the optimization problem can be reformulated as an integer program by re-defining the objective function to be maximized as:
\[
J_{DC} = \sum_{i=1}^{n_u} \sum_{j=1}^{n_y} s_{ij}\, p_{ij} \qquad (30)
\]
where p_ij is a binary variable which has the value 1 if u_i and y_j are selected as an input-output pair and the value 0 otherwise. The optimal solution obtained from this integer programming problem yields an n_u × n_u matrix P_opt containing all the p_ij's. The SIM of the optimal decentralized-control decomposition is given by:
\[
M_{opt} = P_{opt}^T\, M \qquad (31)
\]
The diagonal elements of M_opt are the SIPs corresponding to each input-output pair. It is possible that there are multiple such optimal decompositions.


In the case where there are more inputs than outputs (n_u > n_y), the above procedure results in a square n_y × n_y matrix M_opt; the n_u − n_y inputs that are not associated with any output are not considered further.
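One practical way to solve the pairing problem of equations (28)-(31) is as a linear assignment problem, since exactly one output is assigned to each input. The sketch below, using SciPy, and its variable names are our own, not the authors' implementation:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def decentralized_pairing(M):
    """Return P_opt and M_opt of equations (29)-(31) for a square SIM M.
    Infinite SIPs (unreachable outputs) are replaced by a large finite value."""
    M = np.asarray(M, dtype=float)
    n = M.shape[0]
    big = M[np.isfinite(M)].max() + n          # finite surrogate for inf entries
    Mf = np.where(np.isinf(M), big, M)
    # Structural decoupling matrix, eq. (29): row sums + column sums - 2n*sigma_ij
    S = np.maximum(Mf.sum(axis=1, keepdims=True) + Mf.sum(axis=0, keepdims=True)
                   - 2 * n * Mf, 0.0)
    # Maximize sum of s_ij over one-to-one pairings (eq. (30)): assignment problem
    rows, cols = linear_sum_assignment(-S)
    P = np.zeros((n, n), dtype=int)
    P[rows, cols] = 1
    M_opt = P.T @ Mf                           # eq. (31); diagonal holds the paired SIPs
    return P, M_opt

# Applied to the case-study SIM of equation (39), this should recover one of the
# optimal decentralized pairings reported in Section 4.1 (ties are possible).
```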

Step 3: Agglomerative Clustering Procedure

Starting from the optimal decentralized decomposition, clusters of input-output pairs are formed in a hierarchical manner until a single cluster is obtained. This agglomerative clustering procedure is as follows [19]:

• Two input-output pairs {u_i, y_i}, {u_j, y_j} are structurally close if the {u_i, y_j} and {u_j, y_i} interactions are comparable to the {u_i, y_i}, {u_j, y_j} interactions (i.e., the off-diagonal SIPs σ_ij and σ_ji are comparable to the diagonal SIPs σ_ii and σ_jj). The following measures of distance among input-output pairs are considered:

  1. d_ij = max{σ_ij, σ_ji} − σ_ii + max{σ_ij, σ_ji} − σ_jj
  2. δ_ij = min{σ_ij, σ_ji} − σ_ii + min{σ_ij, σ_ji} − σ_jj
  3. Δ_ij = max{σ_ij, σ_ji}

  d_ij captures the largest difference between off-diagonal and diagonal SIPs for the two input-output pairs. δ_ij is the smallest difference between off-diagonal and diagonal SIPs. Δ_ij is simply the largest off-diagonal SIP.

• A distance among input-output clusters is then defined. The following are the corresponding measures of the distance among clusters:

  1. d(A, B) = max{d(a, b) : a ∈ A, b ∈ B}
  2. δ(A, B) = min{δ(a, b) : a ∈ A, b ∈ B}
  3. Δ(A, B) = max{Δ(a, b) : a ∈ A, b ∈ B}

  where A and B are input-output clusters, and a and b are individual inputs or outputs in the clusters.

• For clustering, the following procedure is followed (see the sketch after this list):

  1. At each level of clustering, compute the matrices D_d, D_δ and D_Δ for all input-output clusters (or pairs, in the case of the first level of clustering); these matrices have as elements the distance measures d, δ and Δ, respectively, between two clusters.
  2. Determine the pair of input-output clusters for which the distance d is minimum, and merge them.
  3. If there is more than one such pair of clusters, compare the values of δ and Δ of the candidate cluster pairs, and merge the ones with the minimum distance.
  4. If there is more than one pair of clusters with identical distance measures d, δ and Δ, create separate configurations for further clustering.
  5. Continue steps 1-4 until a single cluster is obtained.
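A compact sketch of the merge loop (our own simplified rendering: it tracks a single configuration and breaks ties by the lexicographic order of (d, δ, Δ) rather than branching into separate configurations as in step 4):

```python
import itertools
import numpy as np

def agglomerate(M_opt):
    """Hierarchical clustering of input-output pairs from the paired SIM M_opt.
    Pair i is (u_i, y_i); sigma[i, j] = SIP of y_j w.r.t. u_i. Returns the list
    of clusterings, one per level, from fully decentralized to fully centralized."""
    sigma = np.asarray(M_opt, dtype=float)
    n = sigma.shape[0]

    def pair_dist(i, j):
        hi, lo = max(sigma[i, j], sigma[j, i]), min(sigma[i, j], sigma[j, i])
        d = 2 * hi - sigma[i, i] - sigma[j, j]       # d_ij
        delta = 2 * lo - sigma[i, i] - sigma[j, j]   # delta_ij
        return d, delta, hi                          # (d_ij, delta_ij, Delta_ij)

    def clus_dist(A, B):
        ds = [pair_dist(a, b) for a, b in itertools.product(A, B)]
        # d(A,B) = max d, delta(A,B) = min delta, Delta(A,B) = max Delta
        return (max(x[0] for x in ds), min(x[1] for x in ds), max(x[2] for x in ds))

    clusters = [frozenset([i]) for i in range(n)]
    levels = [list(clusters)]
    while len(clusters) > 1:
        A, B = min(itertools.combinations(clusters, 2), key=lambda ab: clus_dist(*ab))
        clusters = [c for c in clusters if c not in (A, B)] + [A | B]
        levels.append(list(clusters))
    return levels
```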


The algorithms used for the calculation of the SIM and the agglomerative clustering are polynomial and linearithmic time algorithms respectively, and hence scale well with the size of the network [21].

Step 4: Optimal Process Network Decomposition

A notion of modularity [24] is used to compare the process network decompositions at all levels of the hierarchical clustering. A decomposition with a high modularity value is such that the variables within a sub-network have strong interactions among themselves, and weak interactions with the variables outside the sub-network. The decomposition with the highest value of modularity is optimal. The modularity matrix B is expressed as [21]:
\[
B = \left[\, b_{ij} \;\middle|\; b_{ij} = \frac{\tilde{m}_{ij}}{\sum_{ij}\tilde{m}_{ij}} - \frac{\sum_{k}\tilde{m}_{ik}\,\sum_{k}\tilde{m}_{kj}}{\left(\sum_{ij}\tilde{m}_{ij}\right)^{2}} \,\right] \qquad (32)
\]
where m̃_ij is the inverse of m_ij, the ij-th entry in M_opt. δ_ijk is a binary variable defined as
\[
\delta_{ijk} = \begin{cases} 1, & \text{if } u_i \text{ and } y_j \text{ are included in the } k\text{th cluster} \\ 0, & \text{otherwise} \end{cases} \qquad (33)
\]
The overall modularity of the network with n_k clusters is:
\[
Q = \sum_{k=1}^{n_k} \sum_{i,j} \left( \frac{\tilde{m}_{ij}}{\sum_{ij}\tilde{m}_{ij}} - \frac{\sum_{k}\tilde{m}_{ik}\,\sum_{k}\tilde{m}_{kj}}{\left(\sum_{ij}\tilde{m}_{ij}\right)^{2}} \right) \delta_{ijk} \qquad (34)
\]
The modularities of all decompositions are calculated and the one with the maximum modularity is selected as the optimal.
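A sketch of the modularity evaluation of equations (32)-(34) for a candidate clustering (the helper below and its cluster encoding are our own; clusters are given as lists of paired input-output indices, and SIPs are assumed to be at least 1):

```python
import numpy as np

def io_modularity(M_opt, clusters):
    """Modularity Q of eq. (34) for a decomposition of input-output pairs.
    M_opt: paired SIM (diagonal = SIPs of the chosen pairs); clusters: e.g.
    [[0, 1, 4], [2, 3, 5]] meaning pairs {u_i, y_i} grouped by index."""
    m_tilde = 1.0 / np.asarray(M_opt, dtype=float)   # inverse SIPs; an infinite SIP -> weight 0
    total = m_tilde.sum()
    # Modularity matrix of eq. (32)
    B = m_tilde / total - np.outer(m_tilde.sum(axis=1), m_tilde.sum(axis=0)) / total**2
    Q = 0.0
    for members in clusters:
        idx = np.asarray(members)
        Q += B[np.ix_(idx, idx)].sum()   # sum b_ij over pairs inside the same cluster
    return Q

# Usage sketch: io_modularity(M_opt, clustering) is evaluated for every level of the
# hierarchy from Step 3 and the most modular level is kept; for the case study of
# Section 4 the reported maximum is 0.178.
```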

3.2. Modularity Based Decomposition of Equation Graph

The network is modeled as interconnected LPS's and DPS's. From the model, the equation graph of the network is obtained. The equation graph is used to determine the adjacency matrix A (a matrix such that the element A_ij is 1 if there is an edge from node j to node i, and 0 otherwise). The value of the modularity is [25, 26]:
\[
Q = \frac{1}{4m}\, s^T (B + B^T)\, s \qquad (35)
\]
where B, the modularity matrix, is a matrix whose elements are given by:
\[
B_{ij} = A_{ij} - \frac{k_i^{in}\, k_j^{out}}{m} \qquad (36)
\]
where m is the total number of edges in the equation graph of the network, A_ij is the ij-th element of A, and k_i^in and k_i^out are the in-degree and out-degree of node i, respectively. The vector s is the bisection vector, whose elements are given by:
\[
s_i = \begin{cases} 1 & \text{if node } i \text{ belongs to community 1} \\ -1 & \text{if node } i \text{ belongs to community 2} \end{cases} \qquad (37)
\]
In order to determine s (representing the decomposition) that maximizes the modularity, the following approach is followed [26]:

• Determine the largest positive eigenvalue λ_max of (B + B^T) and the corresponding eigenvector v̄_max.

• Assign node i to community 1 if the ith element of v̄_max is positive, and to community 2 if it is negative.

• Transfer each node to the other community one at a time and check if the modularity increases by doing so. If it does, update the configuration.

Further bisections of the smaller communities are obtained by maximizing the change in modularity when a community is bisected, which is given by:
\[
\Delta Q = \frac{1}{4m}\, s^T \left( B_{(c)} + B_{(c)}^T \right) s \qquad (38)
\]
where B_(c) is the modularity matrix of the bisected community. The decomposition in which no further community bisection increases the modularity is the optimal one. This algorithm has an average running time of O(n^2 log n), where n is the total number of nodes in the equation graph [25].
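A hedged sketch of one bisection step of this procedure (leading-eigenvector split of B + B^T followed by greedy single-node moves; the helper below is our own and omits the recursive subdivision with the B_(c) matrices used for subsequent splits):

```python
import numpy as np
import networkx as nx

def bisect_by_modularity(G):
    """One modularity-maximizing bisection of a directed equation graph G.
    Returns a node -> +/-1 community label map (eq. (37)) and Q (eq. (35))."""
    nodes = list(G.nodes())
    A = nx.to_numpy_array(G, nodelist=nodes).T       # A[i, j] = 1 if edge j -> i
    m = A.sum()                                      # total number of edges
    k_in, k_out = A.sum(axis=1), A.sum(axis=0)       # in- and out-degrees
    B = A - np.outer(k_in, k_out) / m                # modularity matrix, eq. (36)
    Bs = B + B.T
    # Leading-eigenvector split
    w, V = np.linalg.eigh(Bs)
    s = np.where(V[:, np.argmax(w)] >= 0, 1.0, -1.0)
    Q = s @ Bs @ s / (4 * m)
    # Greedy refinement: move single nodes while the modularity improves
    improved = True
    while improved:
        improved = False
        for i in range(len(nodes)):
            s[i] = -s[i]
            Q_new = s @ Bs @ s / (4 * m)
            if Q_new > Q:
                Q, improved = Q_new, True
            else:
                s[i] = -s[i]                         # undo the move
    return dict(zip(nodes, s)), Q
```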

4. Case Study: Absorption-Desorption Network

Consider the process network shown in figure 4 comprising two absorber columns and two stripper columns. The gaseous stream entering the absorber column A-01 contains a component that is absorbed in the liquid solvent stream. The spent solvent is sent to the stripper column S-01, in which the absorbed component is removed using steam coming from another stripper column S-02 and a reboiler, and the regenerated solvent is recycled back to the absorber column A-01. The gaseous stream from A-01, with a fraction of the component absorbed, is sent to the second absorber column A-02 in which the remaining component is absorbed using another solvent stream. The spent solvent from A-02 is regenerated in S-02 using steam from a reboiler. The absorber columns (A-01 and A-02) are connected via the gaseous stream, which contains the component that is absorbed by the solvent. The first stage absorber and stripper columns (A-01 and S-01) are connected by a stream containing the solvent. The second stage absorber and stripper columns (A-02 and S-02) are connected by another stream containing the solvent. The two stripper columns are connected by a stream containing steam used for regenerating the solvent. The partially cleansed gaseous stream leaving from A-01 is used to heat the incoming gaseous stream. The spent solvent from absorber A-02 is heated using the regenerated solvent from stripper S-02. The four columns and the two heat exchangers are all distributed parameter systems modeled by first-order hyperbolic PDEs (see Appendix C). Networks like the one described above are widely used in natural gas sweetening and flue gas processing [32, 33, 34] to reduce acid gas discharge into the atmosphere.

Figure 4: Absorption-desorption network

The manipulated inputs for this network are:

u1(t): Gaseous stream temperature entering the first heat exchanger
u2(t): Flow velocity of solvent in the first stage absorber and stripper
u3(t): Flow velocity of solvent in the second stage absorber and stripper
u4(t): Steam flow-rate in the second stage stripper
u5(t): Steam flow-rate in the first stage stripper
u6(t): Temperature of steam entering the second stage stripper

and the controlled outputs are:

y1(t): Component concentration in gaseous stream exiting the first stage absorber
y2(t): Component concentration in solvent stream entering the first stage stripper
y3(t): Temperature of the solvent stream exiting the first absorber
y4(t): Temperature of the gaseous stream exiting the second stage absorber
y5(t): Component concentration in the solvent exiting the second stage stripper
y6(t): Temperature of the solvent stream entering the second stage stripper

following SIM is calculated:  6 5 5 5  1 2  2 4  7 7 2 4

(39)

ip t

From the digraph of the network (figure 6), the  4 5 3 5 2 1 1 3  6 4 5 2 M = 5 3 4 4  4 2 3 5 5 3 4 4

cr

Using the algorithms described in the previous section, we obtain the optimal process network decompositions discussed below.

us

4.1. Input/Output Clustering The following four optimal decentralized decompositions are obtained:

an

1.{u2 , y1 }, {u5 , y2 }, {u1 , y3 }, {u4 , y4 }, {u6 , y5 }, {u3 , y6 } 2.{u1 , y1 }, {u5 , y2 }, {u2 , y3 }, {u4 , y4 }, {u6 , y5 }, {u3 , y6 }

M

3.{u1 , y1 }, {u5 , y2 }, {u2 , y3 }, {u6 , y4 }, {u4 , y5 }, {u3 , y6 } 4.{u2 , y1 }, {u5 , y2 }, {u1 , y3 }, {u6 , y4 }, {u4 , y5 }, {u3 , y6 }

ed

For each of these decompositions, the value of the objective function JDC is identical (= 97). When the agglomerative clustering procedure is applied to all four decompositions, the unique optimal decomposition on the basis of maximum modularity is determined to be {u1 , u2 , u5 , y1 , y2 , y3 }, {u3 , u4 , u6 , y4 , y5 , y6 },

pt

with the modularity of 0.1783. The optimal decomposition is shown in figure 5.

Ac ce

4.2. Equation Graph Decomposition The following clusters are obtained from the modularity based decomposition algorithm: g g g g l l l l h c {u1 , u2 , u5 , CA1 , CA1 , TA1 , TA1 , CS1 , CS1 , TS1 , TS1 , TH1 , TH1 , y1 , y2 , y3 } g g g g l h c l l l {u3 , u4 , u6 , CA2 , CA2 , TA2 , TA2 , CS2 , CS2 , TS2 , TS2 , TH2 , TH2 , y4 , y5 , y6 }

where C represents concentration of the component being absorbed, T is the stream temperature, subscripts Ai, Si and Hi represent that the variable is associated with the absorber, stripper or heat exchanger of index i, and the superscripts l and g represent the liquid or gaseous phase of the stream.

16

Page 17 of 28

ip t cr us an

Ac ce

pt

ed

M

Figure 5: Optimal decomposition of the absorption-desorption network

Figure 6: System equation graph with the clusters and inter-cluster interactions

It can be seen that the decomposition obtained from this algorithm is identical to that obtained from the input/output clustering, except that it provides more information regarding the decomposition (figure 6). The first cluster obtained contains all the state, input and output variables associated with the first-stage absorber and stripper columns, and the sec17

Page 18 of 28

ond one contains all the state, input and output variables associated with the second-stage capture absorber and stripper column. These are integrated via four pairs of variables: g g g g g g g h {TH01 , TA2 }, {CA1 , CA2 }, {CS2 , CS1 }, {TS2 , TS1 }

y4 3 5 5 4 4 2

y5 5 7 6 2 2 1

y6 5 7  5  4  4 2

us

y3 1 3 3 4 4 5

an

y2 1 2 5 3 3 4

pt

ed

M

u1 u5 u2 u4 u6 u3

y1 2 4  4  5  5 6

cr

ip t

4.3. Implications of the Decomposition To provide more insight on the decompositions obtained, figure 7 shows the hierarchical clustering levels obtained from the input/output clustering algorithm, starting from the first optimal decentralized decomposition with a SIM given by:

Ac ce

Figure 7: Hierarchical clustering of the first decentralized decomposition

The modularities corresponding to each level of clusters are listed below:

Level   Modularity
1       0.052
2       0.116
3       0.153
4       0.178
5       1.04×10^-16

The modularity of the decomposition at level 1 is much smaller than that of the decompositions at levels 2, 3 and 4, indicating that the interactions among inputs and outputs are too strong to favor a fully decentralized decomposition. In level 3, the inputs and outputs related to the second stage of absorption-stripping are clustered together, causing the modularity to almost double. The modularity rises further as the variables associated with the first stage of absorption-stripping are clustered together in level 4. However, it is observed that the modularity of level 5 (fully centralized control) is much smaller than the modularity of level 4 (the optimal decomposition). This can be attributed to the large values of the SIPs between outputs related to one of the stages of absorption-stripping and inputs related to the other stage, as marked in the optimal SIM above.

The optimal decomposition obtained from both algorithms implies that the process units connected through a liquid solvent stream (such as the first stage absorber and stripper columns) have closer interactions than those connected via other streams (such as the two absorbers). The variables associated with the units connected by the solvent stream (such as {u1, u2, u5, y1, y2, y3}, which are all associated with the first stage absorber and stripper columns) thus belong to the same cluster. This is because the reaction rates and the heat released due to reaction depend on the bulk phase concentrations of the components in the liquid phase (solvent). The reactions affect the composition of the streams, and the heat released in the reactions has a dominating effect on the temperatures of the streams in the columns. The effect of the solvent stream on the degree of interaction among the process variables is not necessarily obvious from the process flow sheet. Both algorithms successfully capture this effect and include it in determining the optimal decomposition.

5. Conclusions


This paper addresses the problem of obtaining control-relevant process network decompositions based on the structural interconnections among the process variables for networks containing systems modeled by first-order hyperbolic PDEs interconnected with those modeled by ODEs. A parameter called the structural interaction parameter is defined that quantifies the strength of the structural interactions among the variables of a first-order hyperbolic PDE system. An equation graph representing the network is developed, which aids in the calculation of the SIPs. From an input/output clustering algorithm, which is a bottom-up procedure, several candidate decompositions are obtained, from which an optimal decomposition is selected based on a modularity measure. In an equation graph decomposition algorithm, which is a top-down approach, hierarchical repeated bisection of the network variables into smaller blocks with increased modularity results in an optimal decomposition. The latter also captures the interaction among the blocks by identifying variable interactions across clusters. On a case study of an absorber-desorber network, these two algorithms are used to obtain the optimal process network decomposition. Both algorithms produce a decomposition that separates the two stages of absorption-desorption, identifying the structural proximities caused by a shared solvent stream. The proposed approach is generically applicable, scales well with the size of the network, and is straightforward to implement and automate.

This paper focused on obtaining the decomposition of networks of ODE systems interconnected solely with first-order hyperbolic PDE systems, which are derived under the assumption that diffusion/dispersion effects are not significant. In future work, we plan to generalize these results to include parabolic PDEs, which model general transport-reaction processes. Future research will also address the model-based distributed control of generic chemical process networks, for example using the well-developed geometric control methods for ODEs and hyperbolic PDEs, or distributed MPC approaches.


Acknowledgment


Partial financial support from the Petroleum Institute and the National Science Foundation is gratefully acknowledged. Lixia Kang was supported by the China Scholarship Council.


References


[1] H. Cui, E. W. Jacobsen, Performance limitations in decentralized control, Journal of Process Control 12 (4) (2002) 485-494.
[2] A. Kumar, P. Daoutidis, Nonlinear dynamics and control of process systems with recycle, Journal of Process Control 12 (4) (2002) 475-484.
[3] E. Camponogara, D. Jia, B. H. Krogh, S. Talukdar, Distributed model predictive control, IEEE Control Systems 22 (1) (2002) 44-52.
[4] R. Olfati-Saber, R. M. Murray, Distributed cooperative control of multiple vehicle formations using structural potential functions, IFAC Proceedings Volumes 35 (1) (2002) 495-500.
[5] F. Bullo, J. Cortés, S. Martinez, Distributed control of robotic networks: a mathematical approach to motion coordination algorithms, Princeton University Press, 2009.
[6] G. Weiss, Multiagent systems: a modern approach to distributed artificial intelligence, MIT Press, 1999.
[7] K. M. Passino, Biomimicry of bacterial foraging for distributed optimization and control, IEEE Control Systems 22 (3) (2002) 52-67.
[8] J. Liu, D. Muñoz de la Peña, P. D. Christofides, Distributed model predictive control of nonlinear process systems, AIChE Journal 55 (5) (2009) 1171-1184.
[9] M. J. Tippett, J. Bao, Dissipativity based distributed control synthesis, Journal of Process Control 23 (5) (2013) 755-766.
[10] R. Scattolini, Architectures for distributed and hierarchical model predictive control - a review, Journal of Process Control 19 (5) (2009) 723-731.
[11] B. T. Stewart, A. N. Venkat, J. B. Rawlings, S. J. Wright, G. Pannocchia, Cooperative distributed model predictive control, Systems & Control Letters 59 (8) (2010) 460-469.
[12] G. Antonelli, Interconnected dynamic systems: An overview on distributed control, IEEE Control Systems 33 (1) (2013) 76-88.
[13] P. D. Christofides, R. Scattolini, D. Muñoz de la Peña, J. Liu, Distributed model predictive control: A tutorial review and future research directions, Computers & Chemical Engineering 51 (2013) 21-41.
[14] N. Motee, B. Sayyar-Rodsari, Optimal partitioning in distributed model predictive control, in: Proceedings of the American Control Conference, 2003, Vol. 6, IEEE, 2003, pp. 5300-5305.
[15] W. Al-Gherwi, H. Budman, A. Elkamel, Selection of control structure for distributed model predictive control in the presence of model errors, Journal of Process Control 20 (3) (2010) 270-284.
[16] P. Daoutidis, C. Kravaris, Structural evaluation of control configurations for multivariable nonlinear processes, Chemical Engineering Science 47 (5) (1992) 1091-1107.
[17] X. Yin, K. Arulmaran, J. Liu, J. Zeng, Subsystem decomposition and configuration for distributed state estimation, AIChE Journal 62 (6) (2016) 1995-2003.
[18] K. Hangos, Z. Tuza, Optimal control structure selection for process systems, Computers & Chemical Engineering 25 (11) (2001) 1521-1536.
[19] S. Heo, W. A. Marvin, P. Daoutidis, Automated synthesis of control configurations for process networks based on structural coupling, Chemical Engineering Science 136 (2015) 76-87.


[20] S. Heo, P. Daoutidis, Control-relevant decomposition of process networks via optimization-based hierarchical clustering, AIChE Journal 62 (9) (2016) 3177-3188.
[21] L. Kang, W. Tang, Y. Liu, P. Daoutidis, Control configuration synthesis using agglomerative hierarchical clustering: A graph-theoretic approach, Journal of Process Control 46 (2016) 43-54.
[22] T. Schné, K. M. Hangos, Decentralised controller structure design and retrofit of process systems based on graph theory, International Journal of Systems Science 42 (6) (2011) 1023-1033.
[23] M. Ellis, P. D. Christofides, Selection of control configurations for economic model predictive control systems, AIChE Journal 60 (9) (2014) 3230-3242.
[24] M. E. Newman, M. Girvan, Finding and evaluating community structure in networks, Physical Review E 69 (2) (2004) 026113.
[25] S. Jogwar, P. Daoutidis, Community-based synthesis of distributed control architectures for integrated networks, Submitted to Chemical Engineering Science, 2016.
[26] E. A. Leicht, M. E. Newman, Community structure in directed networks, Physical Review Letters 100 (11) (2008) 118703.
[27] D. B. Pourkargar, A. Almansoori, P. Daoutidis, The impact of decomposition on distributed model predictive control: A process network case study, Submitted to Industrial & Engineering Chemistry Research, 2017.
[28] P. K. Gundepudi, J. C. Friedly, Velocity control of hyperbolic partial differential equation systems with single characteristic variable, Chemical Engineering Science 53 (24) (1998) 4055-4072.
[29] P. D. Christofides, P. Daoutidis, Feedback control of hyperbolic PDE systems, AIChE Journal 42 (11) (1996) 3063-3086.
[30] A. Maidi, M. Diaf, J.-P. Corriou, Boundary control of a parallel-flow heat exchanger by input-output linearization, Journal of Process Control 20 (10) (2010) 1161-1174.
[31] A. Maidi, J.-P. Corriou, Boundary control of nonlinear distributed parameter systems by input-output linearization, IFAC Proceedings Volumes 44 (1) (2011) 10910-10915.
[32] A. Bedelbayev, T. Greer, B. Lie, Model based control of absorption tower for CO2 capturing, Proceedings, 49th International Conference of Scandinavian Simulation Society (SIMS 2008).
[33] B. Mandal, S. S. Bandyopadhyay, Simultaneous absorption of CO2 and H2S into aqueous blends of N-methyldiethanolamine and diethanolamine, Environmental Science & Technology 40 (19) (2006) 6076-6084.
[34] D. A. Glasscock, G. T. Rochelle, Approximate simulation of CO2 and H2S absorption into aqueous alkanolamines, AIChE Journal 39 (8) (1993) 1389-1397.


Appendix A. Theorem 1 Proof

For notational simplicity, we drop the subscripts of the SIP (σij ) and the length of the path (lij ). The length of the shortest path in the digraph from an input vβ to an output yα is l, if the path traverses exactly l − 1 state variables between vβ to yα and hence, there exist a set of distinct indices {α, k2 , . . . , kl−1 , β} such that: ∂xβ 6= 0 (A.1) hαk2 bk2 k3 . . . bkl−1 β ∂z z=L and for any p < l − 1, for all possible sets of indices {α, k1 , . . . , kp , β}, ∂xβ hαk1 bk1 k2 . . . bkp β =0 ∂z z=L

(A.2)

21

Page 22 of 28

Given that the output is y = Hx|z=L

(A.3)

∂x ∂x =A + Bx ∂t ∂z

(A.4)

and the system dynamics are

yα =

nx X

ip t

the αth element of y, yα is given by hαk1 xk1 |z=L

(A.5)

cr

k1 =1

us

The first time derivative of yα is given by nx X ∂xk1 dyα = hαk1 dt ∂t z=L k =1

an

1

(A.6)

Substituting for ∂xk1 /∂t in equation (A.6) from equation (A.4), we get (A.7)

∂xk1 6= 0 ∀k1 ∈ {1, nx } ∂z z=L

(A.8)

M

  nx nx X dyα ∂xk1 X hαk1 vk1 bk1 k2 xk2 = + dt ∂z z=L k =1 k =1 1

2

ed

Assuming,

pt

the SIP of yα with respect to xβ is 1 if

hαβ 6= 0

(A.9)

Ac ce

If hαβ = 0, then we calculate the second time derivative of yα : X  nx nx X d2 yα ∂xk2 = hαk1 bk1 k2 vk2 + other terms dt2 ∂z z=L k =1 k =1 1

(A.10)

2

The SIP of yα with respect to vβ is 2 if ∂xβ hαk1 bk1 β 6= 0 ∂z z=L

(A.11)

Generalizing from the above derivations, the SIP of yα with respect to vβ is σ if there is a set of distinct indices {k2 , ...., kσ } such that, ∂xβ hαk2 bk2 k3 . . . bkσ β 6= 0 (A.12) ∂z z=L 22

Page 23 of 28

But then, from equation (A.1) it can be concluded that the length of the shortest path is σ + 1, which establishes the result. Appendix B. Theorem 2 Proof

n

(B.1)

us

∂xi X + bij xj fi (x) = aii ∂z j=1

cr

ip t

Consider a SISO first-order hyperbolic PDE system with distributed actuation for which the input is u(z, t) and the output is y(z, t). Hence, G and H are a column and a row vector respectively, henceforth represented as g and hT . Defining an operator fi (x) as:

an

where aii are the diagonal elements of A, and bij are the (i, j)th elements of B, yields the following representation of the system dynamics: ∂xi = fi (x) + gi u(z, t) ∂t

∀i ∈ {1, n}

M

y(z, t) = hT x

(B.2)

pt

ed

It must be noted that, as was the case for velocity actuation, if the length of the shortest ˆ (t) to y ˆ (t) is l, then the path traverses exactly l − 1 state variables path in the digraph from u ˆ (t) to y ˆ (t) and hence, there exists a set of distinct indices {µ1 , µ2 , µ3 , . . . , µl−1 } between u such that: ∂fµl−1 ∂fµl−2 ∂fµ2 ... gµ 6= 0 hµl−1 (B.3) ∂xµl−2 ∂xµl−3 ∂xµ1 1

and

Ac ce

ˆ (t) and u ˆ (t) is σ if [29], The SIP between y

h

T



h

T



σ−1 ∂ g 6= 0 A +B ∂z

σ−k ∂ A +B g=0 ∂z

∀ k ∈ {2, σ}

From the definitions of A, B and equation (B.1), a matrix J can be defined as:  ∂f1 ∂f1 ∂f1  . . . ∂x ∂x1 ∂x2 n    ∂f2 ∂f2 ∂f2  ∂  ∂x1 ∂x2 . . . ∂xn  J= A +B = . .. ..  .. ∂z  .. . . .  ∂fn ∂fn ∂fn . . . ∂xn ∂x1 ∂x2

(B.4)

(B.5)

(B.6)

23

Page 24 of 28

Substituting J into equations (B.4) and (B.5), implies that if the SIP is σ then: hT (J)σ−1 g 6= 0 hT (J)σ−k g = 0

(B.7)

∀ k ∈ {2, σ}

(B.8)

cr

ip t

Initially, it is assumed that only one element in h and g is non-zero. That is, there are indices α and β such that h = [0 0 . . . hα . . . 0 0 ]T , g = [0 0 . . . gβ . . . 0 0 ]T . This implies that the manipulated input affects the dynamics of only one state variable (namely, xβ (z, t)), and the controlled output is a function of only one state variable (xα (z, t)). It is subsequently demonstrated that the results are true for arbitrary h and g too.

an

us

If for the input-output pair {u(z, t), y(z, t)}, the SIP σ = p + 1, for an arbitrary p, then from the definition of SIP, and equations (B.7) and (B.8), the following equations are satisfied: hT (J)p g 6= 0 (B.9) and hT (J)p−k g = 0



k ∈ {1, p}

(B.10)

M

Lemma: The pth power of an nx × nx matrix of the form of J is such that its (i, j)th term is given by: nx nx X nx nx nx X X X X ∂fkp−1 ∂fkp ∂fi ∂fk2 (B.11) ... ··· ∂xk2 ∂xk3 ∂xkp ∂xj k =1 k =1 =1 =1 k k =1 k p−1

p−2

3

2

ed

p

Ac ce

pt

The lemma is proved using mathematical induction. For p=2,  ∂f1 ∂f1 ∂f1   ∂f1 ∂f1 . . . ... ∂x1 ∂x2 ∂xn ∂x1 ∂x2 ∂f ∂f ∂f ∂f ∂f 2 2 2 2 2   . . . ∂xn     ∂x1 ∂x2 . . . J 2 =  ∂x. 1 ∂x. 2 .  ..   . .. .. .. ..  .. . .  .. . ∂fn ∂fn ∂fn ∂fn ∂fn . . . ∂xn ... ∂x1 ∂x2 ∂x1 ∂x2 P Pn Pn n ∂f1 ∂fk2 ∂f1 ∂fk2 ... k2 =1 ∂xk2 ∂x1 k2 =1 ∂xk2 ∂x2 k2 =1 Pn Pn Pn ∂f2 ∂fk2 ∂f2 ∂fk2  ... k2 =1 ∂xk2 ∂x2 k2 =1  k =1 ∂x ∂x =  2 . k2 1 . ..  .. .. . P Pn Pn n ∂fn ∂fk2 ∂fn ∂fk2 . . . k2 =1 ∂xk ∂x2 k2 =1 k2 =1 ∂xk ∂x1 2

2

∂f1  ∂xn ∂f2  ∂xn 

..  . 

∂fn ∂xn

∂f1 ∂xk2 ∂f2 ∂xk2

.. .



∂fk2 ∂xn ∂fk2   ∂xn 

∂fn ∂fk2 ∂xk2 ∂xn

  

(B.12)

Hence, the (i, j)th term of J2 is

n X ∂fi ∂fk2 ∂xk2 ∂xj k =1 2

So the lemma is, indeed, true for p=2.

24

Page 25 of 28

Assuming that the lemma is true for an arbitrary p, implies that the (i, j)th term of Jp is n n X X

n X

kp =1 kp−1 =1 kp−2

n X n X ∂fkp−1 ∂fkp ∂fi ∂fk2 ... ··· ∂xk2 ∂xk3 ∂xkp ∂xj =1 k =1 k =1 3

(B.13)

2

∂x1  ∂f2  ∂x1

Jp × J = Jp ×  .  ..

∂fn ∂x1

∂f1 ∂x2 ∂f2 ∂x2

... ... .. .

∂f1  ∂xn ∂f2  ∂xn 

∂fn ∂x2

...

∂fn ∂xn

.. .

..  . 

cr

 ∂f1

ip t

Then, Jp+1 =

us

From the rules of matrix multiplication, it can be concluded that the (i, j)th term of Jp+1 is n n n X X X

3

2

(B.14)

an

kp+1 =1 kp =1 kp−1

n X n X ∂fkp ∂fkp+1 ∂fi ∂fk2 ··· ... ∂xk2 ∂xk3 ∂xkp+1 ∂xj =1 k =1 k =1

So, if the lemma is true for an arbitrary p, it is true for p + 1. Hence, it must be true for all p > 2. Therefore, the (i, j)th term of Jp is given by:

kp =1 kp−1 =1 kp−2

3

M

n n X X ∂fkp−1 ∂fkp ∂fi ∂fk2 ··· ... ∂xk2 ∂xk3 ∂xkp ∂xj k =1 k =1 =1

n X

n n X X

(B.15)

2



(B.16)

pt 3

2

Ac ce

and hα

nx nx X X ∂fkp−1 ∂fkp ∂fα ∂fk2 ··· ... gβ 6= 0 ∂xk2 ∂xk3 ∂xkp ∂xβ k =1 k =1 =1

nx nx X X kp =1 kp−1

ed

For σ = p + 1, equations (B.9)-(B.11) imply the following:

nx X

nx X nx X ∂fkp−l−1 ∂fkp−l ∂fα ∂fk2 ··· ... gβ = 0 ∂xk2 ∂xk3 ∂xkp−l ∂xβ =1 k =1 k =1

nx X

kp−l =1 kp−l−1

3



l ∈ {1, p} (B.17)

2

Equation (B.17) implies that, in equation (B.16) the indices (k2 , k3 , . . . , kp , α, β) are all distinct. Since the left-hand-side of equation (B.16) is a summation of several terms, and is not equal to zero, hence, there is at least one term on the left-hand-side which is non-zero. This implies that there exists at least one set of distinct indices (k2 , k3 , . . . , kp ) such that: hα

∂fkp−1 ∂fkp ∂fα ∂fk2 ... gβ 6= 0 ∂xk2 ∂xk3 ∂xkp ∂xβ

(B.18)

It is observed that equation (B.18) is identical to equation (B.3). But equation (B.3) implies that the shortest path from the input to the output variable has length l, and there are l − 1 25

Page 26 of 28

ip t

nodes corresponding to state variables in the path from the input node to the output node. Since the left hand side of equation (B.18) involves nodes corresponding to p + 1 state ˆ to variables, namely, xα , xβ , xk2 , xk3 , . . . , xkp , it is concluded that the shortest path from u ˆ is of length p + 2. Hence, σ = l − 1, for σ = p + 1. But since p is arbitrary, it can be y concluded that, for any value of σ, σ =l−1 (B.19)

cr

Note that, it is possible that for some systems there are several sets of indices (k2 , k3 , . . . , kp ) which satisfy equation (B.18), but do not satisfy equation (B.16), since the sum of all those terms is zero. This corresponds to a singularity which does not invalidate the generic validity of the result.

nx X nx X

hα [(α, β)th term of Jσ−1 ]gβ

an

hT (J)σ−1 g =

us

Remark: In the case of general forms of h and g, the right hand side of equation (B.7) will be as below:

α=1 β=1

In case of a MIMO system given by:

M

∂x ∂x =A + Bx + Gu(z, t) ∂t ∂z

(B.20)

(B.21)

hi (J)

σij −1

ed

the right hand side of equation (B.7) will be as below: gj =

nx nx X X

hiα [(α, β)th term of Jσ−1 ]gβj

(B.22)

pt

α=1 β=1

Ac ce

where hi is the ith row of the matrix H and gj is the j th column of matrix G. The rest of the proof remains identical. Appendix C. Absorber-Desorber Network Model The following are the model equations of the process systems in the absorber-desorber network: Absorber Column: ∂CAg i ∂CAg i gi = −vgi + awi NAi (C.1) ∂t ∂z ∂C l ∂C l li Ai = vli Ai − awi NAi (C.2) ∂t ∂z ∂Tig ∂TIg awi NAi ∆Hi = −vgi + (C.3) gi ∂t ∂z mgi cˆpgi

26

Page 27 of 28

∂CAg i ∂CAg i gi = −vgi + awi NAi ∂t ∂z ∂CAl i ∂CAl i = vli − awi NAi li ∂t ∂z ∂Tig ∂Tig awi NAi ∆Hi gi = −vgi + ∂t ∂z mgi cˆpgi ∂Til ∂T l aw NA ∆Hi = vli i − i i ∂t ∂z mli cˆpli

(C.5) (C.6)

(C.7)

us

li

(C.4)

ip t

Stripper Column:

∂T l aw NA ∆Hi ∂Til = vli i − i i ∂t ∂z mli cˆpli

cr

li

Heat Exchanger:

(C.9) (C.10)

M

an

∂THi U Aˆ ∂THi = vHi − (THi − TCi ) ∂t ∂z mH cˆpH i U Aˆ ∂TCi ∂TCi = −vCi + (THi − TCi ) ∂t ∂z mC cˆpC i

(C.8)

Ac ce

pt

ed

where CA is the concentration of component A, T is the temperature, v is the flow velocity, aw is the interfacial area per unit volume available for mass transfer, N is the molar flux,  is the voidage, ∆H is the heat of mass transfer (due to reaction in the liquid phase, if any), U is the overall heat transfer coefficient, Aˆ is the area available for heat transfer or cross sectional area of tank, F is the volumetric flow-rate, z is the spatial coordinate, t is the time, mˆ cp is the heat capacity of the fluid in the stream. The superscript l stands for the liquid side, g for the gaseous side, and the subscript H for the hot stream in the heat exchanger, C for cold stream, and i for the index of the unit.

27

Page 28 of 28