
Computing invariants in graphs of small bandwidth

André Pönitz*
Hochschule Mittweida, Fachbereich Mathematik/Physik/Informatik, Technikumplatz 17, Mittweida D-09648, Germany

Abstract

Many graph invariants (chromatic number, rook polynomial, Tutte polynomial, etc.) are known to be computable for general graphs in exponential time only. Algorithms for their computation usually depend on special properties of the invariants and are not extendable to slightly different problems. A general framework (called the composition method) for the construction of algorithms for the solution of graph related problems has been developed during the last two years. In this paper we demonstrate the power of this method by automating it and applying it to a class of well-known problems covering e.g. the chromatic and the Tutte polynomial. The obtained algorithms are comparable to existing hand optimized ones in running time and memory usage. © 1999 IMACS/Elsevier Science B.V. All rights reserved.

Keywords: Chromatic polynomial; Recurrence relations; Composition method; Polynomial invariants

1. Preface

In everyday life we often encounter problems that can be described mathematically as graphs. Solutions are often obtained by computing a certain invariant of the graph, i.e. some value mainly depending on the structure of the graph. If this invariant happens to be well known (e.g. the edge connectivity of a graph) we are able to access a rich library of standard algorithms, and the problem reduces to just picking the right one. Unfortunately, if we are to solve some ``real world problem'' the corresponding graph invariant tends to be non-standard. Standard algorithms are no longer applicable and most often not expandable. Usually, we would either start the lengthy and costly development of a completely new algorithm or simplify the model. Both approaches might not be admissible. The composition method sets a general framework for the development of graph related algorithms, especially for non-standard cases. The user of the method only has to fill in the gaps in some algorithm template. In this paper we focus on invariants where even this can be done automatically, given some set of recurrence relations describing the invariant in question.

* Corresponding author.


A. Pönitz / Mathematics and Computers in Simulation 49 (1999) 179-191

Table 1
Some polynomial invariants and their describing recurrence relations

Invariant | Recurrence relation | Termination
Rook polynomial | u_G(x) = u_{G-v}(x) + x·u_{G-N[v]}(x) | u_∅(x) = 1
Residual connectedness reliability | S_G = (1-p_v)·S_{G-v} + p_v·S_{Gv̇} + p_v·Π_{u∈N(v)}(1-p_u)·(Π_{w∉N[v]}(1-p_w) - S_{G-N[v]}) | S_{K1} = 1
Chromatic polynomial | χ_G(x) = χ_{G-e}(x) - χ_{G/e}(x) | χ_{K̄n}(x) = x^n
Tutte polynomial | T_G(x,y) = y·T_{G-e}(x,y) if e is a loop; x·T_{G/e}(x,y) if e is a bridge; T_{G-e}(x,y) + T_{G/e}(x,y) otherwise | T_{K̄n}(x,y) = 1
Reliability polynomial | R_G(x) = x·R_{G/e}(x) + (1-x)·R_{G-e}(x) | R_{K̄n}(x) = 0 if n ≥ 2; R_{K1}(x) = 1
All-terminal reliability | R_G = r_e·R_{G/e} + (1-r_e)·R_{G-e} | R_{K̄n} = 0 if n ≥ 2; R_{K1} = 1

Such recurrence relations have to exhibit a certain structure to be usable by the composition method: They must relate the invariant of a graph to some linear combination of the same invariant in certain minors, usually obtained from the original graph by deletion or contraction of edges or nodes in some local neighborhood. Additionally, some stopping rule giving the invariant for any edgeless graph K̄n = ({v1, ..., vn}, ∅) must be known. See [11,13] and Tables 1 and 2 for some examples. Algorithms generated with the composition method are especially fast when applied to graphs of small bandwidth, provided a bandwidth realization is given. This is not surprising given other recent results in the field (cf. [2,15] and others).

2. Definitions

Generally, naming and notation in this paper follow recent literature. Since the letter P is heavily used by many authors to denote e.g. the chromatic polynomial, the reliability, the reliability polynomial, and probabilities and partitions in general, it is replaced in this paper by other unique notations to avoid unnecessary confusion.

2.1. Graphs

A graph G = (V, E) is an ordered pair consisting of a set V of nodes with cardinality n and a set E of m edges. For ease of presentation we assume our graphs to be simple and undirected, i.e.

Table 2
Operations used in Table 1

Operation | Meaning
G-e | Delete edge e
G/e | Contract edge e
G-v | Delete node v
G-N[v] | Delete the closed neighborhood of v
Gv̇ | Delete node v, contract all neighbors
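For concreteness, the five operations of Table 2 can be sketched on a tiny graph representation (a node set plus a set of two-element frozensets); the representation and the function names below are ours, not the paper's:

```python
def delete_edge(nodes, edges, e):
    # G - e
    return set(nodes), set(edges) - {e}

def contract_edge(nodes, edges, e):
    # G / e: drop u, reattach its edges to v, discard resulting loops
    u, v = sorted(e)
    kept = {frozenset(v if w == u else w for w in f) for f in set(edges) - {e}}
    return set(nodes) - {u}, {f for f in kept if len(f) == 2}

def delete_node(nodes, edges, v):
    # G - v
    return set(nodes) - {v}, {f for f in edges if v not in f}

def closed_neighborhood(edges, v):
    # N[v] = N(v) plus v itself
    return {v} | {w for f in edges if v in f for w in f}

def delete_closed_neighborhood(nodes, edges, v):
    # G - N[v]
    for w in closed_neighborhood(edges, v):
        nodes, edges = delete_node(nodes, edges, w)
    return nodes, edges

def contract_neighbors(nodes, edges, v):
    # Gv̇: delete v, then merge all former neighbors into one node
    nb = closed_neighborhood(edges, v) - {v}
    nodes, edges = delete_node(nodes, edges, v)
    if len(nb) > 1:
        rep = min(nb)                      # arbitrary representative
        edges = {frozenset(rep if w in nb else w for w in f) for f in edges}
        edges = {f for f in edges if len(f) == 2}
        nodes = (nodes - nb) | {rep}
    return nodes, edges

K3 = ({1, 2, 3}, {frozenset({1, 2}), frozenset({1, 3}), frozenset({2, 3})})
```

On the triangle K3, for instance, deleting node 1 leaves a single edge, while deleting the closed neighborhood of any node leaves the empty graph.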


E ⊆ {{u, v} : u, v ∈ V, u ≠ v}, although the method can be extended easily to some more general settings. The generic notation for an invariant of a graph G is f_G. The complement of a graph G = (V, E) is the graph Ḡ = (V, {{u, v} : u, v ∈ V} - E). The set N(v) = {u ∈ V : {u, v} ∈ E} of nodes adjacent to v is called the open neighborhood of v; the set N[v] := N(v) ∪ {v} is called the closed neighborhood of v. M(v) := {e ∈ E : v ∈ e} denotes the set of incident edges of node v. The deletion of an edge e ∈ E in a graph G = (V, E) is an operation resulting in the graph G-e = (V, E - {e}) (Table 2). Contracting an edge e = {u, v} ∈ E in a graph G = (V, E) results in the graph G/e = (V - {u}, (E - M(u) - M(v)) ∪ {{v, x} : x ∈ (N(v) ∪ N(u)) - {u, v}}). H is referred to as a minor of G if H is the result of a series of edge deletions or contractions in G. This is denoted by H ≼ G. If G = (V, E) is a graph and J ⊆ V a subset of its node set then the contraction of J in G is the graph G|J obtained by contracting all nodes of J to a single node.

2.2. Node order and bandwidth

Let G = (V, E) be a graph. A bijective function φ: V → {1, ..., n}, v ↦ φ(v), from the vertex set of a graph to a suitable set of integers is referred to as a node order. The bandwidth of G is defined as

bw(G) := min_φ max_{{u,v}∈E} |φ(u) - φ(v)|.

A node order φ* that realizes the minimum will be called a bandwidth realization. We are aware that this definition of ``bandwidth'' may differ from others found elsewhere by a constant of 1. Since we use bandwidth only as a concept to describe ``well behaved'' graphs with respect to the performance of the method, the exact definition does not really matter.

2.3. Partitions

A set partition (partition for short) p of a base set U is a set of disjoint subsets J1, ..., J_|p| of U such that J1 ∪ ... ∪ J_|p| = U. The Jk are called blocks of the partition. The set of all partitions of some set U is denoted by P(U). A partition of a proper subset of U is referred to as an incomplete partition of U. The block of p containing a certain element v ∈ U is denoted by p_v. Let p be a partition of a set U and J ∈ p a block of p. By p-J we denote the partition p - {J} of U-J. If I is a set and U ∩ I = ∅ then p|I denotes the partition p ∪ {I} of U ∪ I. Define p|∅ := p. Let p be a partition of U and v ∈ U. The removal p-v of an element v from a partition p is a partition of U-{v} defined as p-v := (p - p_v)|(p_v - {v}). The fusion p(uv) of two elements u, v ∈ U is a partition of U defined as p(uv) := (p - {p_u, p_v})|(p_u ∪ p_v). If u and v are nodes incident to some edge e, we may write p(e) instead of p(uv). The insertion p_{v→J} of an element v ∉ U into a block J ∈ p is the partition (p-J)|(J ∪ {v}) of U ∪ {v}.

3. Method

In the examples in Table 1 we notice two kinds of recurrence relations: one kind of rather ``node oriented'' ones (using e.g. node deletion and contraction of entire neighborhoods) and a kind of ``edge


oriented'' recurrence (using e.g. edge deletion and contraction). Likewise, the general composition method comes in two guises: node composition and edge composition. Since the former can be considered a special case of the latter, we will mainly discuss edge composition and conclude with a remark on node composition.

3.1. RMA

Assume an edge oriented problem for the time being. The rich man's approach¹ to compute an invariant in some graph given one of our recurrence relations is to pick an edge at a first stage, create two new graphs by deleting and contracting this edge, respectively, thus arriving at a second stage, and to pick new edges in each of these two ``children'' until at stage m = |E_G| no edge is left. The invariants of all of the 2^m children in stage m can be computed using the stopping rule. Computation now ascends in the decomposition tree, resulting in the invariant of the graph when arriving at the root of the tree. Thus the whole process basically consists of the creation of 2^(m+1) - 2 graph minors in the decomposition step and 2^m - 1 evaluations of the recurrence rule in the composition step. If m is moderately sized (say about 40), there is no chance either to store all the minors or to perform the evaluations in the naive way. This algorithm will be referred to as RMA from now on in order to sound more technical.

3.2. IRMA

On the left-hand side of Fig. 1, RMA is used to compute the chromatic polynomial of the graph K3. We observe the following: going down the decomposition tree leads to classes of isomorphic graphs. Since the invariants of isomorphic graphs are identical, performance can be increased by grouping minors into classes of isomorphic minors. Unfortunately, establishing graph isomorphism is not a trivial task in general, but as we will see in the next section it is fairly easy in our special environment.
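As a concrete illustration (our own sketch, not code from the paper), RMA is just the plain deletion-contraction recursion; here for the chromatic polynomial, with polynomials stored as coefficient lists in ascending powers of x:

```python
from itertools import combinations

def poly_sub(a, b):
    # coefficient-wise subtraction, ascending powers
    n = max(len(a), len(b))
    a = a + [0] * (n - len(a))
    b = b + [0] * (n - len(b))
    return [x - y for x, y in zip(a, b)]

def chromatic(nodes, edges):
    """chi_G = chi_{G-e} - chi_{G/e}; chi of the edgeless graph on n nodes is x^n."""
    edges = {frozenset(e) for e in edges if len(set(e)) == 2}
    if not edges:
        return [0] * len(nodes) + [1]          # x^n: stopping rule
    e = next(iter(edges))
    u, v = tuple(e)
    deleted = edges - {e}
    # contract e: replace v by u everywhere, dropping loops
    contracted = {frozenset(u if w == v else w for w in f) for f in deleted}
    contracted = {f for f in contracted if len(f) == 2}
    return poly_sub(chromatic(nodes, deleted),
                    chromatic(nodes - {v}, contracted))

# chromatic polynomial of K3: x^3 - 3x^2 + 2x
print(chromatic({1, 2, 3}, combinations([1, 2, 3], 2)))  # [0, 2, -3, 1]
```

Every call spawns two children, so the running time is the exponential 2^m behaviour described above.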
Based on this idea, an improved RMA, called IRMA, can be implemented along the following lines: The computation again splits into two parts: decomposition and composition. The decomposition part builds up a (directed) graph of rules R (it will not be a tree anymore!) in a top-down, level-by-level fashion. In the composition part, R is traversed bottom-up to perform the actual computation of the invariant. Each node of R is associated with three different items:
1. A class of isomorphic minors, given e.g. by some representative H of this class.
2. During the decomposition: a rule for how to compute the invariant of the minors associated with the node. Currently there are only two possibilities: in the first m-1 levels of R we will use the recurrence relation to compute the invariant given those of the minors associated with the children of the node; in the last level we will use the stopping rule. Of course we will not store a whole operation or equation in every node of R; we rather store them just once and use a tag (requiring O(1) memory) to indicate which operation is meant.

¹ Originally named ``layman's approach'' until we discovered that no layman will ever invest in supercomputers just for fun.


Fig. 1. RMA and IRMA to compute the chromatic polynomial of K3.

3. During the composition: an invariant. This is actually the invariant of the minors associated with the node. Depending on the actual nature of the invariant, this of course may take more than O(1) memory.

The decomposition part can be implemented as follows. For l < m, let M_l be the set of nodes at depth l in R. This level corresponds to some edge e_l ∈ E that is processed in this level. M_{l+1} can be obtained from M_l as follows: Initially, M_{l+1} is empty. Loop through M_l; let H be the minor associated with the current node. Tag H with the tag for ``use the recurrence relation''. Create from H a new minor H' := H-e_l and check whether it is isomorphic to some member of M_{l+1}. If not, create a new node of R and associate it with H' as the representative of a new class of isomorphic minors. Insert an edge from H to H' in R indicating that H' is the first child of H. Repeat this process with a second minor H'' := H/e_l. If all H ∈ M_l are processed, proceed at level l+1. When level m is reached, simply tag all nodes with the tag for ``use the stopping rule''.

The composition part simply parses the rule graph R in reverse order: Every time a node corresponding to some minor H (read ``class of minors'') is hit, the invariants of its children are known, so using the operation indicated by the tag of the node, f_H can be computed. The tag is no longer needed, but f_H needs to be stored in some way. So we drop the tag and allocate memory for f_H.

Consider memory requirements: Note that when some level l+1 is reached, no minor associated with levels up to l-1 will be used again in any way, so the memory for storing them could in fact be freed.
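The effect of grouping minors can be imitated in a few lines by caching on the minor itself. A true IRMA groups whole isomorphism classes; this conservative sketch (ours, not the paper's) only merges identical representations, which already collapses many branches of the decomposition tree:

```python
from functools import lru_cache
from itertools import combinations

@lru_cache(maxsize=None)
def chrom(nodes, edges):
    """Deletion-contraction with a cache over minors.

    nodes: frozenset of node labels; edges: frozenset of 2-element
    frozensets. Caching merges identical minors; IRMA proper would
    merge whole isomorphism classes instead.
    """
    if not edges:
        return tuple([0] * len(nodes) + [1])   # x^n, ascending coefficients
    e = next(iter(edges))
    u, v = tuple(e)
    deleted = edges - {e}
    contracted = frozenset(f for f in (frozenset(u if w == v else w for w in g)
                                       for g in deleted) if len(f) == 2)
    a, b = chrom(nodes, deleted), chrom(nodes - {v}, contracted)
    n = max(len(a), len(b))
    a = a + (0,) * (n - len(a))
    b = b + (0,) * (n - len(b))
    return tuple(x - y for x, y in zip(a, b))

def chromatic_poly(n_nodes, edge_pairs):
    return list(chrom(frozenset(range(1, n_nodes + 1)),
                      frozenset(frozenset(p) for p in edge_pairs)))

# complete graph K4: x(x-1)(x-2)(x-3) = x^4 - 6x^3 + 11x^2 - 6x
print(chromatic_poly(4, combinations(range(1, 5), 2)))
```

Keeping only two consecutive levels alive, as described above, corresponds to evicting cache entries level by level instead of retaining them all.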


Thus in the decomposition process we need minors from at most two consecutive levels M_l and M_{l+1} in addition to the constructed graph of rules R. Each minor is at most the same size O(n + m) as the original graph. The tag requires O(1) memory. Thus maximum memory utilization during the decomposition part is

mem_decomp = |R|·O(1) + max{|M_l| + |M_{l+1}| : 1 ≤ l < n}·O(n + m).

If the bandwidth of G is bounded, an appropriate decomposition order is chosen and the invariant obeys some general restrictions, |M_l| can be proven to be bounded by a constant. Since |R| = O(Σ_l |M_l|), |R| is O(m) in this case and thus

mem_decomp = O(n + m).

During the composition part we similarly have to store f_H at most for all minors in two consecutive levels. Assuming O(f_H) ≤ O(f_G) for all H ≼ G we obtain

mem_comp ≤ mem_decomp + O(f_G).

Since numerical invariants are O(1) memory-wise, univariate polynomial invariants are O(n + m) and the Tutte polynomial is O(n·m), the computation of all invariants mentioned can be performed in polynomial memory.² Similar considerations lead to the same result concerning running times. Fig. 1 should give some visual impression of the performance gain when using IRMA instead of RMA.

3.3. Edge composition

The composition method provides an easy way to check minor isomorphism while guaranteeing a constant bound for |M_l| in graphs of bounded bandwidth with a given appropriate node order, thus guaranteeing an overall O(n + m) bound on memory usage during the decomposition, by amending IRMA in only a few details. (1) We need to derive a couple of functions from the given set of recurrence relations. This includes a function INS: f_{(V_H, E_H)} ↦ f_{(V_H ∪ {v}, E_H)} that expresses the invariant of a graph with a node v of degree 0 in terms of the invariant of the graph with this node removed. Strictly speaking, we need this function only for a special set of minors of G, but there is no harm in a larger domain. The name INS stems from ``Insert Node as Singleton''. (2) We fix the order in which edges are processed.
We choose a node order φ that is a bandwidth realization of the graph and associate with each node v the set E(v) of edges incident to v and to nodes u with a higher number φ(u) > φ(v). As mentioned before, any node order is sufficient, but choosing anything other than a bandwidth realization usually decreases performance. (3) Edges are now processed group by group, starting with E(v1). The order of edges within a group is arbitrary. The processing of a whole group will be called a metastage; that of a single edge is referred to as a stage. This yields a one-to-one correspondence between edges and stages as well as a correspondence between nodes and metastages.

² Actually, this is only true for fixed size coefficients. Normally, we would use arbitrary precision integer arithmetic for the coefficients, so there is another (but at most polynomial) factor in the memory/time bound.


(4) We create slightly different sets of minors of G during the decomposition step. Note that a contraction basically deletes an edge and identifies its terminal nodes with each other. So after processing all edges in E(v1), these all have been deleted, and certain subsets of nodes of the closed neighborhood of v1 have been identified, creating equivalence classes of nodes. The set of these classes can be represented as a partition p of N[v1], where the blocks of p represent the classes of identified nodes. In the single case where all edges in E(v1) are deleted and none is contracted we get the minor H0 := (V_G, E_G - E(v1)). Node v1 is isolated in H0, and thus f_{H0} = INS(f_{H0-v1}), where H0-v1 = (V_G - {v1}, E_G - E(v1)) ≼ G-v1. In all other cases at least one contraction has been performed. Therefore the equivalence class J of nodes to which v1 belongs has more than one element. Now with G' := G-v1 and J' := J-{v1} we have G|J = G'|J' and thus f_{G|J} = f_{G'|J'}. So we are able to use the invariants of minors of G' for the computation of f_G instead of minors of G as we did in IRMA. Of course we have to keep track of when to apply INS and when to merely ``copy'' invariants somehow. We will handle these operations by introducing a special ``node removal stage'' at the end of each metastage. Repeating the whole process for each metastage, we obtain minors of G_k := G - {v1, ..., vk} containing only the nodes v_{k+1}, ..., v_n at the end of metastage k. (5) We use a compact form to store minors. Instead of storing each minor as a whole graph structure, we will in metastage k use a partition of the active node set A_k ⊆ V_G defined by A_k := (∪_{i=1}^{k} N_G(v_i)) - {v1, ..., vk}. Note that a bounded bandwidth of G implies the existence of a sequence of active node sets of likewise bounded sizes. A partition p ∈ P(A_k) is related to a minor H ≼ G_k as follows: Start with G_k. For each block J of p contract all nodes in J to a single node. The resulting graph is the minor H.
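The partition operations of Section 2.3, on which this compact coding of minors rests, can be written as small pure functions; the representation (a partition as a frozenset of frozenset blocks) is our choice:

```python
def block(p, v):
    """The block p_v containing v."""
    return next(b for b in p if v in b)

def remove(p, v):
    """p - v: drop v from its block, deleting the block if it empties."""
    b = block(p, v)
    rest = p - {b}
    return rest if b == frozenset([v]) else rest | {b - {v}}

def fuse(p, u, v):
    """p(uv): merge the blocks of u and v (no-op if they coincide)."""
    bu, bv = block(p, u), block(p, v)
    return p - {bu, bv} | {bu | bv}

def insert(p, v, b):
    """p_{v->J}: put a new element v into an existing block b."""
    return p - {b} | {b | {v}}

p = frozenset({frozenset({1, 2}), frozenset({3})})
q = fuse(p, 2, 3)      # {{1, 2, 3}}
r = remove(p, 1)       # {{2}, {3}}
```

Fusion is exactly what an edge contraction does to the partition-coded minor, and removal is the node removal stage.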
Conversely, not every minor of G_k is related to a partition of A_k, but, and this is most important, every minor of G_k created in the decomposition process at the end of a metastage is. This is easy to see: At the end of metastage k, no edge of G_k has been deleted, so every minor of G_k constructed in the decomposition process must contain all edges of G_k (so the minor is a ``contraction only'' minor). Furthermore, nodes not adjacent to nodes in {v1, ..., vk} (i.e. nodes not in A_k) cannot have participated in any contraction so far. This information does not have to be stored explicitly. The set of the remaining nodes (those adjacent to {v1, ..., vk} but not v1, ..., vk themselves) equals A_k by definition. So the minor can be described by the equivalence classes of nodes in A_k and thus by a partition of A_k. Given G_k and some partition p we will use G_k|p to denote H. (6) We expand this result from the end of each metastage to the end of each stage by considering partitions of A_k ∪ A_{k-1} throughout metastage k, introducing an auxiliary stage at the beginning of each metastage that creates partitions of A_k ∪ A_{k-1} out of partitions of A_{k-1} simply by adding blocks containing single nodes from A_k - A_{k-1}, and consequently working with this extended active node set. We have thus reached the situation depicted on the left-hand side of Fig. 2. Blocks of partitions of the extended active node sets are surrounded by solid curves in the figure. Decomposition goes top-down on the left-hand side, thereby creating the computation rules in the middle columns. Composition goes bottom-up on the right-hand side, leading to the final result χ_{K3}(x) = x³ - 3x² + 2x. (7) So far the choice of an edge order has been a bit inflexible, since it had to be derived from a node order.
In order to use ``edge order generating'' preprocessors like spectral methods based on Fiedler's bisection [8] or the algorithm of Kernighan and Lin [9], we extend the edge composition method by allowing any edge order while computing the respective node order ``on the fly''. To do so, we loop


Fig. 2. Basic edge composition for computing χ_{K3}(x) and improvements by introducing the special operations SS, SI and SD.

in the decomposition part through all edges of the graph (with respect to the order given) and check whether the edge's incident nodes are already in the set of active nodes. If so, everything is fine. If not, we activate each such node by simply putting it into the set of active nodes and including a block containing this node only into every active partition, thereby implementing the auxiliary stage mentioned under (6). (8) Further improvements concern a reduction of the number of operations during both the decomposition and composition parts. Note that some node removal stages are preceded by an edge stage. Such a pair of stages can be combined into an extended edge stage, thus circumventing the creation of lots of intermediate partitions (during decomposition) and invariants (during composition) typical at this place. Three situations may occur depending on the actual minor/partition, which may be handled using the special operations SS, SI and SD, respectively. Let e = {u, v} be the edge processed in the edge stage, v the node to be dropped and p some active partition.
1. Case SS (``Special: insert edge, when p_v is a singleton''): |p_v| = 1. To compute f_{G|p} for some partition p we need f_{(G-e)|p} and f_{(G-e)|p(e)}, and since e was the last edge in this metastage, we can compute f_{(G-e)|p} = INS(f_{(G-v)|(p-v)}) and f_{(G-e)|p(e)} = INB(f_{(G-v)|(p-v)}). Thus we are able to express f_{G|p} in terms of f_{(G-v)|(p-v)}. The function establishing this relation is referred to as SS.
2. Case SI (``Special: insert edge, when p_u and p_v are identical''): p_u = p_v. To compute f_{G|p} we need f_{(G-e)|p} and f_{(G-e)|p(e)}, but p = p(e) in this case, so we can relate f_{G|p} directly to f_{(G-v)|(p-v)}.
3. Case SD (``Special: insert edge, when p_u and p_v are different''): p_u ≠ p_v and |p_v| > 1. Similarly, we relate f_{G|p} to f_{(G-v)|(p-v)} and f_{(G-v)|(p(e)-v)}.


With all these changes edge composition can be implemented as follows:³ (1) Input is a simple, connected, undirected graph G given as a list of its edges. Additionally we are given four ``external'' functions that provide all necessary information on the invariant. The functions take zero, one or two parameters. Parameters and return values are invariants of minors of G:
1. START() is to be derived from the stopping rule and simply returns f_{K1}.
2. IE(p, q) can be derived from the recurrence relation immediately by substituting p for f_{G-e} and q for f_{G/e}.
3. INS(p) can be derived from the stopping rule. It expresses f_{G+v}, where v is an additional isolated node, in terms of f_G.
4. INB(p) is usually the identity function. In the case of the Tutte polynomial this extra degree of freedom can be used to work around the case distinction in the recurrence relation. Since this part is a bit tricky and cannot be done automatically, the Tutte polynomial should actually have been excluded from this paper, although it fits in nicely everywhere else.
(2) In order to take full advantage of the improvements by using the special functions mentioned above, one can provide hand-optimized functions SS, SI and SD. If they are not provided, they are derived automatically from IE, INS and INB (but possibly in some non-optimal fashion) as follows:

SS(p) := IE(INS(p), INB(p)); SI(p) := IE(INB(p), INB(p)); SD(p, q) := IE(INB(p), INB(q)).

Note that the composition method does not depend on any special data structure for storing invariants. In fact, all operations using the invariants themselves are encapsulated in the external functions START, IE, INS and INB (and of course SS, SI and SD, if provided). This feature can be used to store the Tutte polynomial as an ordered pair of some polynomial t(x, y) and an integer d where T(x, y) = t(x, y)·(y - 1)^d, thus approximately halving memory usage when computing T_{K13}. (3) Initialize the extended active node set A := ∅ and the stage number k := 0.
The graph of rules R starts with a single node v0 associated with the empty partition: R := ({v0}, ∅). (4) Make the first edge of the edge list the current edge e and drop e from the edge list. Let Z be the set of active rule nodes, i.e. the nodes in the last level of R constructed so far. Start a new level of nodes in R and make it the current level of R. (5) If e ⊄ A then loop over the active rule nodes and replace the associated partitions by partitions of A ∪ e, inserting the nodes of e - A as singletons. Make A := A ∪ e the new extended active node set. (6) Let B be the subset of nodes of A that are not incident to any edge left in the edge list. This is the set of nodes that can be dropped from the extended active node set after the edge e has been taken care of. (7) If B = ∅, we are in an ``ordinary'' edge stage. Loop through the set of all active partitions (i.e. partitions associated with active rule nodes of R). Call the rule node at the current position of the iterator w and let p be the partition associated with w. If there is no node corresponding to p in the current level of R,

³ Be aware of the fact that there are two different sets of active nodes in the algorithm that must not be confused: the set of active rule nodes, which are nodes of the graph of rules, and the extended set of active nodes, which are nodes of G.
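For the chromatic polynomial, the four external functions of step (1), and the automatically derived SS, SI and SD of step (2), might look as follows on ascending coefficient lists (this is our reading of Table 1, not code from the paper):

```python
def START():            # f_{K1}: one node, x colorings
    return [0, 1]

def IE(p, q):           # chi_G = chi_{G-e} - chi_{G/e}
    n = max(len(p), len(q))
    p = p + [0] * (n - len(p))
    q = q + [0] * (n - len(q))
    return [a - b for a, b in zip(p, q)]

def INS(p):             # an extra isolated node multiplies by x
    return [0] + p

def INB(p):             # identity for the chromatic polynomial
    return p

# default derivations of the special operations
def SS(p):
    return IE(INS(p), INB(p))

def SI(p):
    return IE(INB(p), INB(p))

def SD(p, q):
    return IE(INB(p), INB(q))

# chi_{K2} from chi_{K1} in one SS step: x*x - x = x^2 - x
print(SS(START()))  # [0, -1, 1]
```

Note that SI always returns the zero polynomial here, which matches the fact that a loop forces the chromatic polynomial of the minor to vanish.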


add an isolated node corresponding to p to the current level of R. Repeat this if necessary for the contracted partition p(e). Add edges from w to the nodes w' and w'' associated with p and p(e) at the current level of R, respectively. Tag w with the tag IE to indicate that during composition the invariant of w is obtained by calling IE with the invariants of w' and w''. (8) Otherwise, B ≠ ∅, and we are in an ``extended edge stage''. Let w and p be as in step (7). Create a partition x of A - B by removing the nodes of B from p. Construct y likewise from p(e). Add isolated nodes corresponding to x and y to the current level of R if necessary. Let w' and w'' be the nodes corresponding to x and y, respectively. Deduce which special rule applies: If p_v = {v} then tag w with SS and add an edge from w to w' in R. If p_u = p_v tag w with SI and add an edge from w to w' in R. Otherwise we have p_u ≠ p_v: tag w with SD and add edges from w to w' and w'' in R. (9) Increment the stage number k := k + 1. If the edge list is not empty go to step (4) for the next stage. (10) At this point there is exactly one untagged node v in the current level of R. All other nodes of R are tagged with tags from the set {IE, SS, SI, SD}. Set the invariant of v to the invariant of an empty graph (obtained by calling START()). (11) Work through R in any suitable order, using in each node the function indicated by the tag of the node to combine the invariants of its children. In the end, v0 is associated with the invariant of G, and we are done.

Some points remain to be clarified: What is a ``suitable order''? In order to compute the invariant of some node, those of its children must be known, i.e. R has to be traversed in a child-before-parent manner. This is always possible since R is acyclic; moreover, there is a considerable amount of freedom in choosing this order.
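Putting the steps above together for the chromatic polynomial, the rule graph can be replaced by a single forward sweep that carries coefficient polynomials along the states. This is an equivalent but simplified reading of the algorithm; the state representation and all names are ours:

```python
from collections import defaultdict

def padd(a, b):
    n = max(len(a), len(b))
    return [(a[i] if i < len(a) else 0) + (b[i] if i < len(b) else 0)
            for i in range(n)]

def block_of(p, w):
    return next(b for b in p if w in b)

def chromatic_by_composition(edge_list):
    """Edge composition for the chromatic polynomial (simplified sketch).

    A state is a partition of the extended active node set (frozenset of
    frozenset blocks) carrying a coefficient polynomial in ascending
    powers of x. Deleting an edge keeps the partition, contracting it
    fuses two blocks, and dropping a node that is left as a singleton
    block applies INS, i.e. multiplies by x."""
    edges = [tuple(e) for e in edge_list]
    last = {}                        # last stage touching each node
    for i, (u, v) in enumerate(edges):
        last[u], last[v] = i, i
    states = {frozenset(): [1]}      # empty partition, polynomial 1
    for i, (u, v) in enumerate(edges):
        nxt = defaultdict(lambda: [0])
        for p, c in states.items():
            q = set(p)               # auxiliary stage: activate u and v
            for w in (u, v):
                if not any(w in b for b in q):
                    q.add(frozenset([w]))
            bu, bv = block_of(q, u), block_of(q, v)
            if bu == bv:
                continue             # e is a loop here: both branches cancel
            q = frozenset(q)
            nxt[q] = padd(nxt[q], c)                       # delete e
            r = frozenset(q - {bu, bv} | {bu | bv})
            nxt[r] = padd(nxt[r], [-t for t in c])         # contract e
        for w in (u, v):             # node removal stage
            if last[w] != i:
                continue
            dropped = defaultdict(lambda: [0])
            for p, c in nxt.items():
                b = block_of(p, w)
                if b == frozenset([w]):
                    c, r = [0] + c, frozenset(p - {b})     # isolated: INS
                else:
                    r = frozenset(p - {b} | {b - {w}})
                dropped[r] = padd(dropped[r], c)
            nxt = dropped
        states = dict(nxt)
    return states[frozenset()]

print(chromatic_by_composition([(1, 2), (1, 3), (2, 3)]))  # [0, 2, -3, 1]
```

For graphs of small bandwidth the active set stays small, so the number of simultaneous states (set partitions of the active set) is bounded, which is exactly the source of the performance reported in Section 4.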
Unfortunately we have not been able to use this freedom to gain performance compared to the standard implementation, where the nodes are traversed according to the reversed order of their creation. The latter strategy can be used to apply a couple of tricks to save memory, the bottleneck of any ``serious application'' of the composition method. Working Maple and C code including these tricks can be found at http://mathematik.htwm.de.

3.4. Node composition

Node composition usually leads to simpler but not necessarily faster algorithms. It is not as widely applicable as edge composition since recurrence relations in non-standard cases are more difficult to construct. Still, most of the ideas mentioned in the last sections are applicable. The basic difference is that all edges within a group are processed together with their associated node in a single stage or metastage (the terms stage and metastage coincide when talking about node composition). Node composition allows us to take care of some graph operations not mentioned so far, like node deletion and contraction of neighborhoods, automatically. These new operations are implemented using incomplete partitions: a node contained in the active set but not in the partitioned subset corresponds to a deleted node.

3.5. Outlook

In some situations even hybrid node/edge composition algorithms are imaginable. Actually, both node and edge composition originally stem from an algorithm to compute the K-terminal reliability in communication networks that uses edge stages to take care of edge failures and node stages to handle node failures and the terminal property.


Edge composition can even be extended to tackle the flow probability problem in graphs with up to 50 edges, a problem often waived as being too hard in graphs of that size [1,10,12]. Some basic ideas of the method can probably be extended to more general settings. Invariants described by logic formulae as in [5] might offer an interesting line of research. Improved performance should be expected when deviating from our strict level-by-level policy and/or introducing more complex ``special functions''. The method heavily depends on the node order chosen. Since finding an optimal order is hard, some time could be spent investigating how difficult it is to find nearly optimal node orders. There are some signs that the method might be extensible to graphs of small treewidth instead of bandwidth without too much overhead by exploiting recent publications in this field like [4,3] or [7].

4. Experimental results

Table 3 shows running times and memory usage of an implementation of the composition method using C on a Pentium 200 machine with 512 MBytes of RAM running Linux. The first column indicates a test graph, Kn being the complete graph on n nodes, Gn the n×n grid, and Bn the graphs shown in Fig. 3. This set of examples covers the typical cases of very dense graphs and planar structures. The next columns indicate running time (including the search for a suitable edge order) and memory consumption for four edge-oriented problems and one node-oriented problem. The polynomial invariants (columns 2-4 and 6) are obtained by applying the standard technique, in the case of the Tutte polynomial using the compact version. Unfortunately, even using this compact version, our C code is less efficient in storing polynomials than Maple, so we cannot compute T_{K14} within 512 MBytes of RAM. We conjecture a running time of about 40 min in 2.5 GBytes of RAM in C (compared to 14 days and 410 MBytes in Maple), thus reaching about the same performance as Sekine et al. [14].
Table 3
Running times and memory usage (in MBytes) for some examples. A hyphen indicates a memory usage of more than 512 MBytes; the asterisks indicate a result obtained with Maple rather than C

Graph | Nodes | Edges | Tutte polynomial (Time / Memory) | Chromatic polynomial (Time / Memory) | Reliability polynomial (Time / Memory) | Reliability (Time / Memory) | Rook polynomial (Time / Memory)
K10 | 10 | 45 | 4 s / 4 | 1.3 s / 1.3 | 3 s / 3 | 1.5 s / 1.3 | <0.1 s / <1
K11 | 11 | 55 | 18 s / 15 | 5.7 s / 4 | 18 s / 12 | 1.7 s / 1.4 | <0.1 s / <1
K12 | 12 | 66 | 90 s / 72 | 26 s / 15 | 110 s / 61 | 2.0 s / 1.5 | <0.1 s / <1
K13 | 13 | 78 | 450 s / 390 | 125 s / 68 | 750 s / 463 | 2.3 s / 1.6 | <0.1 s / <1
K14 | 14 | 91 | 14 d* / 410* | 630 s / 270 | - / - | 2.7 s / 1.6 | <0.1 s / <1
G8 | 64 | 112 | 100 s / 48 | 23 s / 4.5 | 34 s / 7 | 15 s / 1.2 | 0.2 s / <1
G9 | 81 | 144 | 910 s / 309 | 83 s / 18 | 75 s / 27 | 47 s / 3.2 | 0.2 s / 1
G10 | 100 | 180 | - / - | 0.2 h / 72 | 0.2 h / 107 | 160 s / 9.0 | 0.3 s / 1
G11 | 121 | 220 | - / - | 0.5 h / 327 | 1 h / 450 | 580 s / 28 | 0.4 s / 1
B1 | 46 | 121 | 38 s / 24 | 6.5 s / <1 | 12 s / 4 | 0.8 s / <1 | <0.1 s / <1
B2 | 52 | 104 | 19 s / 10 | 9 s / <1 | 10 s / 1.2 | 8 s / <1 | 0.2 s / 1

Fig. 3. Two example graphs.

Note however that RAM consumption of any edge composition could be reduced by grouping partitions in

each stage corresponding to their number of blocks. In order to compute all partitions with b blocks at some stage, we only need the partitions with b or b-1 blocks from the previous stage, so all intermediate results concerning partitions with fewer than b-1 or more than b blocks could be kept in external memory. In the T_{K14} case, this could cut down the RAM usage to a conjectured 800 MBytes. Column 5 contains computations of an approximation of the all-terminal reliability using a modified edge composition. The basic differences are a more complicated node handling to allow for non-terminal and failing nodes (a feature not used in the tests below, but allowing us to tackle the K-terminal reliability successfully), and the fact that the algorithm is allowed to drop states of very low probability. The total probability of dropped states in our examples is less than 10^-10, leading immediately to an accuracy of 10 decimal digits in the final result, probably more than enough for any practical application. Nevertheless, we note a dramatic speed-up while using less memory in comparison to the ``exact'' result, and we easily surpass the results of Carlier and Lucet given in [6].

References

[1] K.K. Aggarwal, Y.C. Chopra, J.S. Bajwa, Capacity considerations in reliability analysis of communication systems, IEEE Transactions on Reliability 31 (2) (1982) 177-181.
[2] A. Andrzejak, A polynomial-time algorithm for computation of the polynomials of graphs of bounded treewidth.
[3] H.L. Bodlaender, A linear time algorithm for finding tree-decompositions of small treewidth.
[4] H.L. Bodlaender, Treewidth: Algorithmic techniques and results.
[5] R.B. Borie, R.G. Parker, C.A. Tovey, Automatic generation of linear-time algorithms from predicate calculus descriptions of problems on recursively constructed graph families, Algorithmica 7 (5-6) (1992).
[6] J. Carlier, C. Lucet, A decomposition algorithm for network reliability evaluation, Discrete Applied Mathematics 65 (1996) 121-138.
[7] N. Chandrasekharan, S.T. Hedetniemi, T.V. Wimer, Enumeration techniques for certain k-terminal families of graphs, Journal of Combinatorics, Information and System Science 19 (3-4) (1994) 121-138.
[8] M. Fiedler, Laplacian of graphs and algebraic connectivity, Combinatorics and Graph Theory, Banach Center Publications 25 (1989).
[9] B.W. Kernighan, S. Lin, An efficient heuristic procedure for partitioning graphs, The Bell System Technical Journal 49 (2) (1970) 291-307.
[10] S.H. Lee, Reliability evaluation of a flow network, IEEE Transactions on Reliability R-29 (2) (1980) 24-25.
[11] S. Negami, Polynomial invariants of graphs, Transactions of the American Mathematical Society 299 (2) (1987).


[12] W.J. Rüger, Reliability analysis of networks with capacity constraints and failures at branches and nodes, IEEE Transactions on Reliability R-35 (5) (1986) 523-528.
[13] A. Satyanarayana, R. Tindell, Chromatic polynomials and network reliability, Discrete Mathematics 67 (1987) 57-79.
[14] K. Sekine, H. Imai, S. Tani, Computing the Tutte polynomial of a graph and the Jones polynomial of a knot of moderate size, extended abstract. http://naomi.is.s.u-tokyo.ac.jp/~sekine/.
[15] J.A. Telle, A. Proskurowski, Algorithms for vertex partitioning problems on partial k-trees, SIAM Journal on Discrete Mathematics 10 (4) (1997) 259-555.