
Application of regional earthquake mitigation optimization

Atsuhiro Dodo a,∗, Rachel A. Davidson b, Ningxiong Xu b, Linda K. Nozick b

a Swiss Reinsurance Company, 1-5-1 Otemachi, Chiyoda-ku, Tokyo 100-0004, Japan
b School of Civil and Environmental Engineering, Hollister Hall, Cornell University, Ithaca, NY 14853-3501, USA

Available online 14 March 2006

Abstract

A linear program was developed to help seismically active communities decide: (1) how much to spend on pre-earthquake mitigation that aims to reduce future losses versus waiting until after an event and paying for reconstruction, and (2) which of the many possible mitigation activities to fund so as to minimize overall risk. The mitigation alternatives considered are structural upgrading policies for groups of buildings. Benefits of mitigation are losses avoided in future earthquakes, including structural, non-structural, contents, and time-related losses, and casualties. The model is intended to be used as a tool to support the public regional mitigation planning process. In realistic applications, the model includes millions of variables, thus requiring a special solution method. This paper focuses on two efficient solution algorithms to solve the model: a Dantzig–Wolfe decomposition algorithm and a greedy heuristic algorithm. A comprehensive numerical study compares the two algorithms in terms of solution quality and solution time. The study shows that, compared to the Dantzig–Wolfe algorithm, the heuristic algorithm is much faster, as expected, and provides comparable solution quality.
© 2005 Elsevier Ltd. All rights reserved.

Keywords: Optimization; Investment planning; Earthquake mitigation; Decomposition algorithms

1. Introduction

There are many ways to manage regional earthquake risk, including, for example, upgrading buildings to reduce their vulnerability to damage, applying land use zoning that limits construction in hazardous areas, and waiting until an earthquake happens and then paying for reconstruction. The best combination of strategies to manage risk depends on the magnitude and character of the regional risk, the costs and benefits associated with each possible mitigation alternative, the available budget, and the specific regional objectives for risk management. Dodo et al. [1] developed a linear program to help communities at risk undertake systematic mitigation planning to decide: (1) how much to spend on pre-event mitigation that aims to reduce future losses versus waiting until after an event and paying for reconstruction, and (2) which of the many possible mitigation activities to fund so as to minimize overall risk. The mitigation alternatives considered in the optimization model are structural upgrading policies for groups of buildings, where each group is defined by its census tract location, structural type (e.g., mid-rise steel braced frame), occupancy type (e.g., single-family dwelling, hospital), and design level (i.e., built to a low, moderate, or high seismic code). One of the main challenges associated with the model is that it includes an extremely large number of decision variables. For example, for the 86 sq mi area of Central and Eastern Los Angeles, the model requires more than 7 million variables. For a realistic application, such as Los Angeles County, the model would require approximately 56 million variables.

∗ Corresponding author: A. Dodo.

0305-0548/$ - see front matter © 2005 Elsevier Ltd. All rights reserved. doi:10.1016/j.cor.2005.09.016

A. Dodo et al. / Computers & Operations Research 34 (2007) 2478 – 2494


Further, the model is intended to be used as a planning tool by running it many times under different assumptions (e.g., different budgets, values of life, and discount rates), so speed is important. Although linear programs are relatively easy to solve, an application that requires so many decision variables and must be run multiple times demands a special solution technique. Dodo et al. [1] introduced the linear program and applied it to a small, illustrative example. This paper focuses on (1) a more realistic case study including life loss, soil amplification, and a larger study area, and (2) developing and comparing two efficient solution algorithms that make it possible to apply the model to realistic situations. The solution algorithms are developed by exploiting the model's special decomposable structure and its properties related to mitigation investment returns. The decomposable structure allows the use of the Dantzig–Wolfe decomposition algorithm [2], which restructures the model into a master problem and numerous subproblems, thus greatly reducing the storage requirements. Given the nature of the optimization, it is also possible to construct a greedy heuristic algorithm. The next section describes previous work in resource allocation for natural disaster risk management. This is followed by a discussion of potential users and uses of the model, and the formulation of the regional earthquake mitigation problem. An example application of the model to Central and Eastern Los Angeles illustrates the model and the type of results it can provide. The two solution algorithms are described in the following section. A computational study is used to systematically compare the algorithms based on solution quality and computational time. The paper concludes with a discussion of the algorithms, insights offered by the model, and possible future extensions of it.

2. Literature review

Previous work relating to resource allocation for natural disaster risk management can be grouped into four main areas: deterministic net present value (NPV) analysis, stochastic NPV analysis, multiattribute utility models, and optimization models. Most previous studies compare a small number of predefined mitigation alternatives to choose the single best alternative. Few have used an optimization modeling approach in which one can choose the set of mitigation alternatives that maximizes the expected NPV or some other specified objective subject to a limited budget or other constraints. Shah et al. [3] used integer programming with a budget constraint to maximize the NPV of earthquake mitigation investment. Four structural mitigation alternatives for each of 15 buildings at Stanford University were considered. Benefits were estimated assuming a single earthquake scenario. The group also developed a dynamic investment optimization model with three two-year stages. Augusti et al. [4] used dynamic programming to select structural mitigation alternatives for bridges in a highway network in Italy. The objective was to maximize highway system reliability, conditional on a given seismic intensity and expressed as the connectivity of one source node and one destination node, given a limited budget. Researchers at the International Institute for Applied Systems Analysis (IIASA) developed a spatial-dynamic stochastic optimization model to select the insurance policy design (e.g., insurance premiums) that best satisfies stated decision criteria, such as maximizing insurance company profits and minimizing the probability of insurance company insolvency [5]. Versions of the model have been applied to the case of earthquake risk in Irkutsk, Russia [6], earthquake risk in Italy [7], and flood risk near the Upper Tisza River, Hungary [8].
The problem the IIASA researchers address is heavily focused on financial instruments, and their formulation yields an intractable analytical structure for the objective. They rely, therefore, on simulation for their analysis. Our model centers on mitigation decisions and a public decision-making perspective. This yields a very different model formulation, one with a much closer tie to regional loss estimation methodologies. The structure of this model makes it amenable to analytical treatment. Hayek and Ghanem [9] also aim to maximize insurance company profit while minimizing the risk of company insolvency or instability. In their model, the decision variables are coverage ratios (the percentage of property value to be insured) for each location and structural type. While their formulation is more similar to the model presented in this paper, Hayek and Ghanem [9] deal with the large size of the problem by aggregating decision variables, thus compromising the quality of the solution. The paper does not demonstrate how to solve the problem for realistic cases. With the exception of Shah et al. [3], past studies have not considered decision timing, assuming instead that all investments are made at the present time. Dynamic investment scheduling is preferable because risk managers make investments periodically, and the return on the investments will depend on when during the time horizon earthquakes occur (if they do). The optimization model that is the focus of this paper is novel as an optimization-based approach to


public earthquake risk management resource allocation decisions that incorporates spatial correlation among spatially distributed mitigation alternatives, the range of possible earthquakes with their associated probabilities, and decision timing.

3. Potential model users and uses

This model is intended to be used as a tool to support the public regional mitigation planning process. First, some pre-earthquake mitigation investment is desirable because, especially when the full range of losses is considered, avoiding damage can often be less expensive than repairing it. In addition, mitigation can help minimize human losses, whereas post-earthquake spending cannot. Nevertheless, if no earthquake occurs in a reasonable time horizon, the pre-earthquake investment may have been wasted, in the sense that the benefits are never realized (although there may be psychological benefits to feeling safer). Even if an earthquake occurs, if it happens in the distant future, the discounting effect may render pre-earthquake investment less desirable than post-earthquake spending. Therefore, neither unlimited investment nor zero investment in pre-earthquake mitigation is likely to be optimal. The optimization model can help determine the optimal balance between the two. Second, disaggregating the model results could help decision-makers understand how the many relevant factors interact to determine the relative appeal of different mitigation strategies. When different possible structural upgrading efforts are compared, for example, those factors include the ground shaking each structure is expected to experience over time, the vulnerability of that structural type to different types of damage, the losses experienced if it is damaged, the square footage available to be mitigated, the extent to which the mitigation would achieve a reduction in damage, the cost to mitigate, and the decision-maker's specific objectives. The model's solutions can also provide insight into which strategy is optimal for a given community. In addition to general earthquake risk reduction resource allocation support, the model aims to help local and state risk managers undertake specific mitigation planning tasks.
The Disaster Mitigation Act of 2000 now requires that, to be eligible for Hazard Mitigation Grant Program (HMGP) funds, each state and local government must submit a mitigation plan to the Federal Emergency Management Agency (FEMA) describing how it is prioritizing mitigation actions so that its overall mitigation strategy is cost-effective and maximizes overall wealth [10,11]. The model could provide guidance as governments respond to this mandate. After an earthquake, many applications for HMGP grants are submitted and need to be evaluated quickly and rationally. By systematically comparing regional strategies (e.g., upgrade all unreinforced masonry buildings in a census tract), the model can help streamline administration of the HMGP by providing a mechanism for "block grant" approval, just as the RAMP program did after the 1994 Northridge earthquake [12]. The Small Business Administration's Pre-Disaster Mitigation Loan Program makes low-interest, fixed-rate loans to small businesses so they can implement mitigation measures. To be eligible, the business must "conform to the priorities and goals of the mitigation plan for the community, as defined by FEMA, in which the business is located" [13]. The model could help the local or state mitigation official decide which applications to approve. Berkeley, California's Seismic Retrofit Incentive Program provides financial incentives to encourage homeowners to seismically retrofit their homes [14]. The model might be useful to local governments that are similarly trying to encourage mitigation by helping to identify where they should focus their efforts.

4. Model formulation

4.1. Objectives and scope

The optimization model was designed to be compatible with HAZUS [15], the Federal Emergency Management Agency's publicly available, standardized national loss estimation modeling software, so that HAZUS could be used to calculate regional earthquake losses when necessary (avoided losses are the benefits of mitigation investment). HAZUS combines hazard, structural inventory, and vulnerability information in a geographic information systems (GIS) environment to estimate the expected loss in a region due to a specified earthquake. The mitigation alternatives considered in the model are structural upgrading policies for groups of buildings. Buildings are grouped into categories based on their census tract locations, structural types (e.g., mid-rise steel braced frame, low-rise concrete shear wall),


occupancy types (e.g., single-family dwelling, hospital), and design levels (i.e., built to a low, moderate, or high seismic code). For example, one mitigation alternative considered is to upgrade some square footage of buildings of a particular structural and occupancy type in a census tract from one design level to another. The set of mitigation alternatives considered is created by all possible combinations of structural types, occupancy types, census tracts, and design levels. The unit of census tract is used because it is the smallest one available in HAZUS, but another area unit, such as block group, could be used without altering the model formulation or the solution procedures. Square footage was chosen as the metric of study because regional loss estimation models, including HAZUS, typically use square footage, not the number of buildings, to calculate loss. Square footage, therefore, is most appropriate for the data available. From a computational perspective, modeling the decision variables as continuous variables (square footage) instead of integers (number of buildings) produces a much simpler model. In the remainder of this section, the equations that make up the linear program are developed.

4.2. Formulation development

Let x^c_{ijkt} be the square footage of buildings at the end of period t in census tract k that is of structural type i and occupancy type j designed to seismic code c. To simplify the notation, we define m as a class of buildings in census tract k that are of structural type i and occupancy type j, so x^c_{ijkt} then becomes x^c_{mt}. A key decision to be made in each time period is which buildings should be mitigated to a higher seismic design level. If z^{cc'}_{mt} is the square footage of buildings in class m that were in seismic design code c at the end of period t-1 and were mitigated to seismic code c' at the beginning of period t, the following flow balance equation must hold:

x^c_{m,t-1} = \sum_{c'} z^{cc'}_{mt} \quad \forall m, t, c.  (1)

Mitigation has occurred only when c' > c. Eq. (2) requires that the square footage in building class m that has been mitigated to seismic code c' at the beginning of period t equals the square footage in building class m designed to seismic code c' at the end of time period t:

\sum_{c} z^{cc'}_{mt} = x^{c'}_{mt} \quad \forall m, t, c'.  (2)

Finally, suppose that in each period, a limited budget is available to be allocated to mitigation investments. This is represented by the following constraint:

\sum_{m} \sum_{c} \sum_{c' > c} F^{cc'}_{mt} z^{cc'}_{mt} \le B_t \quad \forall t,  (3)

where B_t is the user-defined maximum mitigation budget to be spent in period t, and F^{cc'}_{mt} is the per square foot cost to mitigate buildings in class m in time period t that were upgraded from seismic code c to code c'. All costs are assumed to be present value. The following nonnegativity requirements must also hold for the decision variables:

x^c_{mt} \ge 0 \quad \forall m, t, c,  (4)

z^{cc'}_{mt} \ge 0 \quad \forall m, t, c, c'.  (5)
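As a concreteness check, the flow-balance and budget constraints can be verified for a toy mitigation schedule. The sketch below uses a single building class and invented square footages, unit costs, and budget (none of these numbers come from the case study); it only illustrates the bookkeeping the constraints enforce.

```python
# Toy feasibility check for constraints (1)-(3): one building class,
# codes 0=low, 1=moderate, 2=high; all numbers are hypothetical.

T = 3                          # planning periods
codes = [0, 1, 2]
x0 = {0: 100_000.0, 1: 50_000.0, 2: 0.0}   # initial sq ft by code level

# z[t][(c, cp)]: sq ft moved from code c to code cp at the start of period t+1
z = [{(c, cp): 0.0 for c in codes for cp in codes if cp >= c} for _ in range(T)]
z[0][(0, 1)] = 40_000.0        # upgrade 40k sq ft low -> moderate in period 1
z[0][(0, 0)] = 60_000.0        # the rest stays at low code
z[0][(1, 1)] = 50_000.0        # moderate-code stock stays put

F = {(0, 1): 20.0, (0, 2): 40.0, (1, 2): 20.0}   # $/sq ft mitigation costs
B = 1_000_000.0                # per-period budget

x_prev = dict(x0)
for t in range(1):             # check the first period only, for brevity
    # Constraint (1): outflow from each code level equals last period's stock.
    for c in codes:
        out = sum(f for (a, b), f in z[t].items() if a == c)
        assert abs(out - x_prev[c]) < 1e-6
    # Constraint (2): the new stock at code cp is the total inflow to cp.
    x_new = {cp: sum(f for (a, b), f in z[t].items() if b == cp) for cp in codes}
    # Constraint (3): mitigation spending (upgrades only) is within budget.
    spend = sum(F[(a, b)] * f for (a, b), f in z[t].items() if b > a)
    assert spend <= B
    x_prev = x_new

print(x_prev)   # {0: 60000.0, 1: 90000.0, 2: 0.0}
```

The same loop, run over all T periods, is exactly the state evolution the linear program's constraint matrix encodes.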

The objective of the model is to minimize the present value of the total mitigation and expected post-earthquake reconstruction expenditures. To calculate those reconstruction expenditures, we first define the expenditures associated with property loss, then expand the definition to include deaths and injuries as well. Let a^{drc}_m be the per-period probability that buildings in class m that were designed to seismic code c enter damage state d of damage type r. For example, damage types may include structural and drift-sensitive nonstructural damage, and damage states may include none, slight, and moderate. The expected square footage of buildings in damage state d of damage type r that were designed to seismic code c is then given by the following expression:

a^{drc}_m x^c_{mt}.  (6)


The event that no earthquake has occurred can be represented as an earthquake for which all buildings are in the damage state "no damage." Assuming all damage is repaired to the pre-earthquake code level, the reconstruction expenditure is equivalent to what a loss estimation model like HAZUS would call loss [15]. Let R^{drc}_{mt} be the present value of the per square foot reconstruction cost for buildings in class m during time period t that were designed to seismic code c, enter damage state d of damage type r, and are repaired to the same condition. Reconstruction cost in class m in time period t designed to seismic code c after mitigation is

\sum_{r} \sum_{d} R^{drc}_{mt} a^{drc}_m x^c_{mt}.  (7)

Total reconstruction cost in class m in time t is \sum_{c} \sum_{r} \sum_{d} R^{drc}_{mt} a^{drc}_m x^c_{mt}. From Eq. (2), this reconstruction cost is \sum_{c} \sum_{c'} \sum_{r} \sum_{d} R^{drc'}_{mt} a^{drc'}_m z^{cc'}_{mt}. The total mitigation and expected reconstruction expenditures for buildings is then

\sum_{t} \sum_{m} \sum_{c} \sum_{c'} z^{cc'}_{mt} \left( F^{cc'}_{mt} + \sum_{r} \sum_{d} R^{drc'}_{mt} a^{drc'}_m \right).  (8)
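Expressions (7) and (8) are straightforward to evaluate numerically. The sketch below computes the expected reconstruction cost for one building class and one period, using invented damage probabilities and unit costs rather than HAZUS values, and then adds a single mitigation term to form one summand of Expression (8).

```python
# Illustrative evaluation of Expressions (7)-(8) for one building class and
# one period; all rates and costs below are made-up numbers, not HAZUS data.

x = 100_000.0                                  # sq ft at code c after mitigation
a = {("struct", "slight"): 0.02,               # damage probabilities a (per period)
     ("struct", "moderate"): 0.005,
     ("nonstruct", "slight"): 0.03}
R = {("struct", "slight"): 4.0,                # reconstruction costs R ($/sq ft)
     ("struct", "moderate"): 25.0,
     ("nonstruct", "slight"): 2.0}

# Expression (7): expected reconstruction cost = sum over (r, d) of R * a * x.
recon = sum(R[k] * a[k] * x for k in a)

# One term of Expression (8): mitigation cost plus expected reconstruction cost.
F_mit, z_mit = 20.0, 40_000.0                  # $/sq ft mitigation cost, sq ft mitigated
total = F_mit * z_mit + recon
print(recon, total)
```

Summing such terms over all classes, periods, and code pairs reproduces the full double sum of Expression (8).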

To account for deaths and injuries in addition to property loss, we define G^{drc}_{mt} to be a modified reconstruction cost that includes a cost for life loss. The modified reconstruction cost is

G^{drc}_{mt} = R^{drc}_{mt} + K_t \rho_m \sum_{s} \gamma^s \pi^{sdr}_m,

where K_t is the assumed cost per life lost in time t, \rho_m is the initial population density (people per sq ft) in buildings in class m, \pi^{sdr}_m is the probability that a person enters casualty severity level s when residing in buildings in class m that have damage type r in damage state d, and \gamma^s \in [0, 1] is a multiplication factor that indicates the cost of casualty severity s relative to death. A life lost obviously cannot be "reconstructed" and we do not presume to know the value of a life. There are many ways to estimate the value of life (e.g., willingness to pay, foregone earnings). The various methods can produce very different estimates, and there are widely discussed challenges associated with each method (e.g., [16]). By including in the analysis avoided deaths and injuries as a benefit of earthquake mitigation, we simply aim to enable the model user to examine the effect of varying the assumed value of life if desired. The objective function with life loss is then

\min \sum_{t} \sum_{m} \sum_{c} \sum_{c'} z^{cc'}_{mt} \left( F^{cc'}_{mt} + \sum_{r} \sum_{d} G^{drc'}_{mt} a^{drc'}_m \right).  (9)
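The modified reconstruction cost can be illustrated with the casualty-cost fractions the case study later assumes (0, 5%, 10%, and 100% of the cost of a death); the remaining inputs below (property cost, population density, casualty rates) are invented for illustration only.

```python
# Sketch of the modified reconstruction cost
#   G = R + K * rho * sum_s(gamma_s * pi_s)
# for one (damage type, damage state, code) combination.
# gamma follows the case-study assumption; everything else is hypothetical.

R = 25.0            # $/sq ft property reconstruction cost
K = 5_000_000.0     # assumed cost per life lost ($)
rho = 0.001         # initial population density (people per sq ft)
pi = {"minor": 0.01, "hosp": 0.002, "life_threat": 0.0005, "death": 0.0001}
gamma = {"minor": 0.0, "hosp": 0.05, "life_threat": 0.10, "death": 1.0}

G = R + K * rho * sum(gamma[s] * pi[s] for s in pi)
print(round(G, 4))   # 26.25
```

Note how a small death probability (0.0001 per person) adds $1.25 per sq ft to a $25 per sq ft property cost once the $5 million value of life is applied, which is why high-occupancy classes attract more mitigation when life loss is included.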

The ﬁnal optimization model is a linear program in which the objective is given in Expression (9) and the constraints are given in Expressions (1)–(5). The model results indicate how to allocate the mitigation budget among mitigation alternatives, and what the total mitigation and expected post-earthquake reconstruction expenditures will be.

5. Case study

5.1. Scope

This section presents a case study to illustrate the input required and the type of results the optimization model can provide. The analysis was conducted for the 86 sq mi Central and Eastern part of the City of Los Angeles, as defined by the Los Angeles Area Planning Commission [17] (Fig. 1). HAZUS was used as the basis for most of the required data. Seventeen residential, commercial, governmental, and educational occupancy types were selected for the analysis. Eleven structural types appear in these occupancy types in the 201 census tracts that make up this case study area. Within these 201 census tracts, there are about 167 million sq ft of building space, of which about 16%, 56%, and 28% are designed to low, moderate, and high seismic code, respectively. The most prevalent structural types are low-rise reinforced masonry bearing wall buildings with wood or metal deck diaphragms (RM1L), low-rise concrete shear wall buildings (C2L), and precast concrete tilt-up wall buildings (PC1), which make up 26%, 21%, and 13% of the building area, respectively. The most prevalent occupancy types are multi-family dwelling (RES3), wholesale trade (COM2), and retail trade (COM1), which make up 28%, 17%, and 15% of the building area, respectively. The case study


Fig. 1. Case study census tract locations in Los Angeles County.

considered the same design levels (low, moderate, high); damage types (structural, nonstructural drift-sensitive, and nonstructural acceleration-sensitive damage); damage states (no, slight, moderate, extensive, and complete damage); and injury severity levels (injury not requiring hospitalization, injury requiring hospitalization but not life-threatening, life-threatening injury, death) defined by HAZUS. Structural, non-structural, contents, inventory, and time-related losses, and deaths and injuries are considered in the estimation of reconstruction costs, but not, for example, indirect economic losses. Time-related losses include relocation expenses, loss of proprietor's income, rental income loss, and inventory loss. This example requires more than 7 million variables.

5.2. Input data

In the case study analysis, the regional seismicity was assumed to be represented by a set of 47 earthquake scenarios, each with an associated annual "hazard-consistent" probability of occurrence [18]. Each scenario is defined by an epicentral location and magnitude, or as the maximum credible earthquake on a specified fault. The "hazard-consistent" probability of a particular earthquake scenario represents the likelihood of earthquakes like that scenario occurring, where "earthquakes like that scenario" are similar in terms of the severity and spatial distribution of ground motion they cause. The analysis thus assumes that only those 47 earthquakes are possible and that together they approximate the regional seismicity caused by all possible earthquakes. Chang et al. [18] developed this method of representing regional seismicity and identified these 47 particular scenarios for a risk analysis in Los Angeles and Orange Counties. While the Chang et al. [18] approach is sound and appropriate for use with this optimization model, if another suitable method becomes available, another set of earthquake scenarios could easily be substituted in the optimization.
Soil data was based on Wills et al. [19], assuming the soil classes defined as combinations of the five NEHRP-UBC categories have modification factors that are the average of those for the two NEHRP-UBC categories. It was assumed that the time step is one year, the valuation period is 30 years, the annual budget is $5 million, the annual interest rate is 5%, and the value of a life is $5 million. The costs to mitigate floor area from low to moderate code and from moderate to high code are assumed to be half of the cost to mitigate from low to high code. The case study uses HAZUS default inventory data, which is from 1994. The mitigation costs (F^{cc'}_{mt}), reconstruction costs (R^{drc}_{mt}), damage probabilities (a^{drc}_m), casualty rates (\pi^{sdr}_m), and initial indoor population densities (\rho_m) were estimated based on information from HAZUS data tables and simulation results. In particular, the a^{drc}_m values were estimated by determining annual damage probabilities for each of the 47 earthquakes and for no earthquake using HAZUS, then taking a weighted average of them, where the weights are the earthquakes' "hazard-consistent" probabilities of occurrence. The population in a given occupancy type was estimated as a weighted average of the number of people in that occupancy type at different time periods of the day (e.g., night, day, commuting), where the weights are the lengths of each time period.
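The hazard-consistent weighting of annual damage probabilities can be sketched as follows. The three scenarios and all probabilities below are invented for illustration; they are not the 47 scenarios of Chang et al. [18].

```python
# Sketch of the hazard-consistent weighting used to estimate the a values:
# annual damage probabilities under each scenario (and under "no earthquake",
# which contributes zero damage) are combined using the scenarios' annual
# occurrence probabilities. All numbers are hypothetical.

scenarios = [        # (annual occurrence probability, P[damage state d | scenario])
    (0.010, 0.30),
    (0.004, 0.60),
    (0.001, 0.90),
]
p_none = 1.0 - sum(p for p, _ in scenarios)   # "no earthquake" year

# Weighted annual probability of a building entering damage state d.
a = sum(p * pd for p, pd in scenarios) + p_none * 0.0
print(a)
```

Repeating this for every damage type, damage state, code level, and building class yields the full table of a^{drc}_m inputs.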

[Fig. 2: mitigation expenditures ($M, left axis) and expected reconstruction expenditures ($M, right axis) versus year, with and without life loss.]
Fig. 2. Mitigation and reconstruction expenditures over time (with and without life loss).

[Fig. 3: present value of mitigation expenditure ($M) and per sq. ft. mitigation expenditure ($/sq. ft.), with and without life loss, by structure type (S1L, S2L, S3, C1L, C2L, PC1, PC2L, RM1L, RM2L).]
Fig. 3. Recommended mitigation expenditures by structural type.

It was assumed that the costs of the injury severity levels are 0, 5%, 10%, and 100% of the cost of a death, respectively. Mitigation and reconstruction costs are estimated for the year 2003. All costs are discounted to the present.

5.3. Results

The linear program was solved using the Dantzig–Wolfe decomposition algorithm described in the next section. This section illustrates the type of results it can provide. We first present results assuming the value of life is $5 million, then describe the effect of omitting life loss by assuming the value of life is $0. Fig. 2 shows the recommended mitigation expenditures and the expected reconstruction expenditures over time. Mitigation spending occurs in years 1–5. After year 5, the model recommends no mitigation spending, since there are no remaining mitigation alternatives that would provide positive net benefit. Most of the mitigation, about 0.5 million sq ft per year, upgrades from low to moderate seismic code. A very small portion of the square footage is mitigated from moderate to high code, and there is no mitigation from low to high code, since the mitigation cost for that upgrade is assumed to be relatively high. Under the optimal mitigation strategy, the present value of the expected reconstruction expenditure is $7.7 million less than when no mitigation is done. Fig. 3 shows the total mitigation expenditure and the per square foot mitigation expenditure recommended for each structural type. More mitigation spending is recommended for reinforced masonry bearing walls with wood or metal deck diaphragms, RM1L ($8.5 million), and low-rise concrete shear wall buildings, C2L ($4.6 million), than for any other structural types. This is mostly because these two types have large initial low code floor areas available for mitigation.

[Fig. 4: present value of mitigation expenditure ($M) and per sq. ft. mitigation expenditure ($/sq. ft.), with and without life loss, by occupancy class (RES3, RES4, RES5, RES6, COM1, COM2, COM3, COM5, COM6, COM7, COM8, COM9, EDU1, EDU2).]
Fig. 4. Recommended mitigation expenditures by occupancy type.

Per square foot mitigation expenditure is highest for low-rise steel braced frames, S2L ($0.61 per sq ft), and low-rise steel moment frames, S1L ($0.47 per sq ft), because according to the HAZUS model, their vulnerability to damage is reduced by a relatively large amount if they are mitigated from low to moderate or high code. Similarly, the mitigation strategy recommended by the model can be disaggregated by occupancy type (Fig. 4). The results indicate that most mitigation funds should be spent on the entertainment and recreation occupancy type, COM8 ($16 million), because mitigating it brings large benefits from avoided time-related reconstruction cost and there is a lot of it available to mitigate. Entertainment and recreation (COM8), hotels (RES4), and nursing homes (RES6) are recommended for the largest per square foot mitigation expenditures, $2.1, $1.0, and $0.8 per sq ft, respectively. They provide large benefits if mitigated because they have relatively high life-loss costs. The structural and occupancy types not shown in Figs. 3 and 4, respectively, are not mitigated at all in the recommended mitigation strategy. Examining maps of recommended mitigation expenditures by census tract suggests that high total mitigation expenditures occur in tracts that have both large initial low code areas and large per square foot mitigation benefit. The highest per square foot mitigation expenditures are found in tracts that include occupancy types that will experience large benefits from mitigation and that experience severe ground motion. By comparing the model results with assumed values of life of $5 million and zero, we can begin to see the effect of considering deaths and injuries. As Fig. 2 demonstrates, when life loss is not considered, the optimization recommends mitigation spending for only 3 years, rather than the 5 years when life loss is included.
As expected, considering avoided deaths and injuries makes mitigation more appealing, and therefore results in more mitigation spending and more savings in reconstruction expenditures. While mitigation leads to a $7.7 million decrease in the present value of the expected reconstruction expenditures when life loss is considered, it results in only a $6.1 million decrease when life loss is not included. The total recommended mitigation spending increases 36%, from $16.5 million to $22.5 million, when life loss is included. Most of that increased spending goes towards mitigating multi-family dwellings (RES3) and hotels (RES4), which have relatively high densities of people (Fig. 4). Without life loss, 94% of the total mitigation expenditures go to entertainment and recreation (COM8), and no more than 3% to any other occupancy type. With life loss, 71% goes to entertainment and recreation, 13% to multi-family, and 9% to hotel. On a per square foot basis, nursing homes (RES6), which represent a small amount of total area, become the third highest value when life loss is considered (Fig. 4). In terms of structural types, the increase in mitigation spending caused by including life loss goes primarily to low-rise concrete shear wall buildings (C2L) and reinforced masonry bearing walls with wood or metal deck diaphragms (RM1L).

6. Solution algorithms

This section describes two efficient solution algorithms developed to solve the linear program: the Dantzig–Wolfe decomposition algorithm and the greedy heuristic algorithm.


6.1. Dantzig–Wolfe decomposition algorithm

The model formulation has a special structure that suggests that Dantzig–Wolfe decomposition [2] is an appropriate solution procedure. Notice that if the budget constraint (Expression (3)) were relaxed, the formulation would decompose into M × C independent linear programs, where M and C are the total numbers of building classes and seismic code levels, respectively. This structure can be exploited to create an efficient solution procedure. The idea of Dantzig–Wolfe decomposition is that, rather than representing the impact of each of these independent linear programs on the optimization through the constraints, as described in the formulation, one can explicitly use the set of extreme points to define the feasible region generated by those constraints. Since the feasible region of a bounded linear program (as ours is, because of the budget constraint) is a convex set, any feasible point can be represented as a convex combination of the extreme points of that set. Dantzig–Wolfe decomposition specifies the use of a master problem and a series of subproblems. There is one subproblem for each class, m, and initial seismic design code, c. The master problem is composed of the linking constraints and a collection of additional variables that represent a subset of the extreme points of each of the subproblems. In each iteration, if appropriate, a new extreme point is generated for each of the subproblems and these are added to the master problem. The algorithm terminates when the optimal solution has been identified. That optimal solution is expressed as a convex combination of the extreme points that satisfies the linking constraints. The advantage of the decomposition algorithm is that it requires far less storage than a standard LP solver. The algorithm is described in the following four steps.

Step 1. Initialize the solution. For all classes m, let z_mt^{cc,1} = x_m0^c and z_mt^{cc',1} = 0 for c' ≠ c. The notation z_mt^{cc'n} represents the decision in the nth solution to the subproblem for class m that, at time t, z_mt^{cc'n} square feet should be mitigated from seismic code c to code c'. This initial basic feasible solution for each subproblem corresponds to no mitigation.

Step 2. At iteration n, there are n extreme points {z_mt^{cc'h} : ∀m, t, c, c'} for h = 1, ..., n that were obtained in past iterations. Solve the master problem described in Expressions (10)–(13). The dual prices π_t and μ corresponding to the constraints in Expressions (11) and (12) are obtained and used to modify the coefficients in the subproblems in the next iteration:

min Σ_{h=1}^{n} λ_h Σ_m Σ_t Σ_c Σ_{c'} (F_mt^{cc'} + Σ_r Σ_d G_mt^{drc'} a_m^{drc'}) z_mt^{cc'h}    (10)

such that

Σ_{h=1}^{n} λ_h Σ_m Σ_c Σ_{c'} F_mt^{cc'} z_mt^{cc'h} ≤ B_t    ∀t,    (11)

Σ_{h=1}^{n} λ_h = 1,    (12)

λ_h ≥ 0    ∀h.    (13)

Step 3. At iteration n, solve the subproblem described by Expressions (15)–(18) for each class m and initial seismic code c̃:

min Σ_m Σ_c̃ W_m^c̃ − μ    (14)

where

W_m^c̃ = min_z Σ_t Σ_c Σ_{c'} (F_mt^{cc'} + Σ_r Σ_d G_mt^{drc'} a_m^{drc'} − π_t F_mt^{cc'}) z_mt^{cc'n}    (15)

such that

x_{m,t−1}^c = Σ_{c'} z_mt^{cc'n}    ∀t, c,    (16)

Σ_c z_mt^{cc'n} = x_mt^{c'}    ∀t, c',    (17)

z_mt^{cc'n} ≥ 0    ∀t, c, c'.    (18)


[Fig. 5. Minimum cost network flow structure of the optimization. Nodes are the seismic code levels (low, moderate, high) at the start of periods t = 1, 2, 3; arcs represent keep-or-upgrade decisions.]

Note that while Expressions (15)–(18) are written as a single optimization, the formulation is separable by class m and initial seismic code c̃. In the case study, there are about (201 × 3 × 17 × 11) = 112,761 subproblems. Each subproblem can be viewed as a minimum cost network flow problem. Fig. 5 shows the network created by a single class m for which all square footage is built to moderate code at the beginning of the planning horizon (t = 0). Nodes in the network represent a specific seismic code level at the beginning of a particular time period, arcs represent decisions of whether or not to upgrade, and the flow through the network is the square footage. For example, suppose there were 100,000 sq ft in this class m, all designed to moderate code at the beginning of the planning horizon, and half were upgraded to high code in the first period. The arc from the node moderate at t = 1 to high at t = 2 would have a flow of 50,000 sq ft. The other 50,000 sq ft would flow through the arc from moderate at t = 1 to moderate at t = 2. Find the nth new extreme points z_mt^{cc'n} that will be added to the master problem in the next iteration. When Expression (14) is negative, the mitigation decisions obtained from the nth subproblems might lower the objective value of the master problem, so the process continues by returning to Step 2. If Expression (14) is positive, the decisions do not lower the objective value and the optimal solution has been found; stop and go to Step 4.

Step 4. The optimal solution is given by z_mt^{cc'} = Σ_{h=1}^{n} λ_h z_mt^{cc'h} for all m, t, c, c'.

It is useful to notice that there is an economic interpretation to the subproblems. The dual prices π_t are the marginal benefit of an extra dollar of mitigation budget in period t. These dual prices are then used as a "hurdle" in the subproblems.
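Because the network of Fig. 5 has no arc capacities, every square foot can follow the cheapest code trajectory independently, so the subproblem reduces to a shortest-path dynamic program over (time, code) nodes. The sketch below illustrates this on a three-period network with hypothetical per-square-foot arc costs (expected loss of the destination code plus an upgrade cost when the code changes); the cost numbers and the three-period horizon are invented, not taken from the paper.

```python
# Shortest code trajectory through the Fig. 5-style network, by forward DP.
CODES = ["low", "moderate", "high"]
T = 3  # illustrative three-period horizon

def arc_cost(c_from, c_to):
    # Hypothetical per-sq-ft arc cost: each period incurs the expected loss of
    # the destination code level, plus an upgrade cost when the code changes.
    loss = {"low": 10.0, "moderate": 6.0, "high": 3.0}
    upgrade = {("low", "moderate"): 5.0, ("low", "high"): 9.0,
               ("moderate", "high"): 4.0}
    if CODES.index(c_to) < CODES.index(c_from):
        return float("inf")  # downgrading is not an option
    return loss[c_to] + upgrade.get((c_from, c_to), 0.0)

# All square footage starts at moderate code at t = 0, as in the Fig. 5 example.
best = {c: (float("inf"), []) for c in CODES}
best["moderate"] = (0.0, ["moderate"])
for t in range(T):
    nxt = {}
    for c_to in CODES:
        cost, c_from = min((best[c][0] + arc_cost(c, c_to), c) for c in CODES)
        nxt[c_to] = (cost, best[c_from][1] + [c_to])
    best = nxt
cost, path = min(best.values(), key=lambda v: v[0])
print(path, cost)
```

With these invented costs the cheapest trajectory upgrades to high code immediately and stays there, which is the kind of corner-point solution the subproblems feed back to the master problem.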
For a given building class and initial seismic code in each period, each mitigation investment selected will produce cost savings that exceed this "hurdle" rate. The solution to the subproblem will be those investments that create the largest return. As the dual prices change, the threshold that makes an investment worthwhile changes, producing different corner point solutions. Once there is a sufficient reduction in loss across the subproblems for the budget available, the procedure ends. The Dantzig–Wolfe subproblems described in Expressions (15)–(18) can be solved faster by modifying the algorithm to eliminate the use of an LP solver for some of the subproblems. The modified algorithm uses the investment return criterion of the heuristic algorithm described in the next section: the LP solver is applied only to those subproblems for which there is a mitigation strategy with a positive return. This pre-calculation of returns is effective because, based on the data in HAZUS, in many realistic applications mitigation has a negative return for most classes m and initial seismic codes c. For example, in the case study in the last section, only about 5% of the subproblems required an LP solver; no mitigation is optimal for the rest. Note that the objective value of the master problem is non-increasing over the iterations, and that the Dantzig–Wolfe decomposition algorithm is guaranteed to terminate in a finite number of iterations, but it may still require a long time to converge for large-scale problems. For a problem with a very large number of decision variables, the limitation in available physical RAM may become an issue because the algorithm requires storage of all the past extreme points {z_mt^{cc'h} : ∀m, t, c, c'} for h = 1, ..., n to find the optimal solution. Hence, dropping extreme points that have not been used in many iterations may be important.

6.2. Greedy heuristic algorithm

The greedy heuristic algorithm developed in this paper tries to achieve the minimum objective value in Expression (9) by starting from the beginning of the planning horizon and moving forward through time, selecting the mitigation alternatives with the highest returns on investment. The return on investment, RE_mt^{cc'}, for mitigating class m from seismic code c to code c' in time t, assuming mitigation takes place immediately and is effective until the end of planning


period T, is defined as

RE_mt^{cc'} = (L_mt^{cc} − L_mt^{cc'}) / F_mt^{cc'},    (19)

where L_mt^{cc'} = P_mt^{cc'} + Σ_{t'=t+1}^{T} P_mt'^{c'c'} for all m, t, c, c' is the sum of the net contributions to the objective value from time t to T of mitigating class m from seismic code c to code c' in time t; and P_mt^{cc'} = F_mt^{cc'} + Σ_r Σ_d G_mt^{drc'} a_m^{drc'} for all m, t, c, c' is the net contribution to the objective value in time t of mitigating class m from seismic code c to code c' in time t. Using this definition of return, the algorithm can be described in the following three steps:

Step 1. Initialize the solution. For all m, t, c, c', initial feasible solutions are assigned such that z_mt^{cc'} = 0 for c ≠ c' and z_mt^{cc} = x_m0^c. This solution corresponds to no mitigation for all periods.

Step 2. Select mitigation alternatives. For time t = 1, ..., T, set b = B_t and y_m^c = x_{m,t−1}^c for all classes m and codes c. The annual budget and the floor area for each code level are initialized, and the values are reduced as mitigation dollars are spent on mitigation options. Calculate RE_mt^{cc'} for all m, c, and c'.
a. Select mitigation class and codes. If b > 0, select for mitigation the combination of class m, code c, and code c' that satisfies RE_mt^{cc'} > 0 and y_m^c > 0 and has the highest RE_mt^{cc'}. If all RE_mt^{cc'} < 0, there is no mitigation class that can be used to reduce the objective value, so go to Step 3.
b. Calculate mitigation floor area. If F_mt^{cc'} y_m^c ≤ b, the available budget is adequate to mitigate all of the floor area in class m from code c to code c'. Assign mitigation z_mt^{cc'} = y_m^c. If F_mt^{cc'} y_m^c > b, the current budget is not sufficient to mitigate all the floor area. Assign mitigation z_mt^{cc'} = b/F_mt^{cc'}. The floor area is partially mitigated.
c. Update the available budget and floor area. The budget available in the current period is reduced by the mitigation cost incurred in Step 2b, so the budget becomes b = b − F_mt^{cc'} z_mt^{cc'}. The floor area in class m and code c is reduced by y_m^c = y_m^c − z_mt^{cc'}.
d. Return to Step 2a until b = 0 or all classes and codes have been examined. Assign no mitigation to the remaining area (z_mt^{cc} = x_{m,t−1}^c − Σ_{c'≠c} z_mt^{cc'}), and calculate the initial floor area for time t + 1 (x_mt^{c'} = Σ_c z_mt^{cc'}). Move to the next period t = t + 1 and go to the beginning of Step 2. If t = T, then move to Step 3.

Step 3. Return the decision variables z_mt^{cc'} for all t, m, c, and c'.
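The steps above can be sketched compactly in code. The sketch below makes simplifying assumptions: the returns RE are given and held fixed across periods, whereas the paper recomputes them from Eq. (19); the class names, per-square-foot costs, floor areas, and budgets are all hypothetical.

```python
# A compact sketch of the greedy heuristic of Section 6.2 (illustrative data).
def greedy_mitigation(classes, budgets):
    """classes: dict (m, c, c') -> {"RE": return on investment,
    "F": mitigation cost per sq ft, "area": remaining floor area (mutated)}.
    budgets: list of per-period budgets B_t. Returns dict (t, m, c, c') -> sq ft."""
    plan = {}
    for t, B_t in enumerate(budgets):
        b = B_t
        # Step 2a: consider candidates in order of decreasing return
        for key in sorted(classes, key=lambda k: classes[k]["RE"], reverse=True):
            alt = classes[key]
            if b <= 0 or alt["RE"] <= 0:
                break                 # budget exhausted or no positive-return option left
            cost_all = alt["F"] * alt["area"]
            # Step 2b: mitigate all the floor area if the budget allows, else part of it
            area = alt["area"] if cost_all <= b else b / alt["F"]
            if area <= 0:
                continue
            plan[(t,) + key] = area
            b -= alt["F"] * area      # Step 2c: update the remaining budget
            alt["area"] -= area       # and the remaining floor area
    return plan

# Hypothetical data: three (class, from-code, to-code) alternatives, two periods.
classes = {("m1", "low", "high"):      {"RE": 1.2,  "F": 2.0, "area": 100.0},
           ("m2", "low", "moderate"):  {"RE": 0.5,  "F": 1.0, "area": 50.0},
           ("m3", "moderate", "high"): {"RE": -0.2, "F": 3.0, "area": 80.0}}
plan = greedy_mitigation(classes, [150.0, 150.0])
print(plan)
```

With these numbers, the first period's budget covers only part of the best alternative's floor area (partial mitigation, Step 2b), the second period finishes it and funds the next-best alternative, and the negative-return alternative is never selected.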

This heuristic procedure is very similar to the subproblems in the decomposition. The subproblems maximize the total "return" subject to a "minimum allowable" return on each investment, whereas the heuristic orders the mitigation alternatives by return, as calculated in Eq. (19). To better understand how the heuristic works and interpret the results it provides, it is useful to notice two of its key characteristics. If we assume that mitigation and reconstruction costs are constant over time, we can prove that: (1) the return on investment for mitigating class m from code c to c' decreases over time (i.e., RE_mt^{cc'} > RE_{m,t+1}^{cc'} ∀t < T); and (2) if mitigating class m_1 from code c_1 to c_1' is better than mitigating class m_2 from code c_2 to c_2' at some time t, then it is better at all t (i.e., if RE_{m_1 t}^{c_1 c_1'} > RE_{m_2 t}^{c_2 c_2'} for t < T, then RE_{m_1,t+1}^{c_1 c_1'} > RE_{m_2,t+1}^{c_2 c_2'}). See the Appendix for proofs. These two characteristics explain why the heuristic algorithm often works well. The constant cost assumption is reasonable because it is impossible to predict cost changes over the long planning horizon in the model (i.e., 30 years), and even if there were a visible long-term trend in costs, it could be included in the model by adjusting the discount rate (not by changing costs).

7. Computational study

To investigate application of the two solution algorithms, a computational study was conducted. The scope and input data are the same as those described in the Case Study section, except that the study includes some smaller test cases. The algorithms were programmed and implemented in Matlab 6.0 on a computer with a 2.4 GHz processor and 620 MB of physical RAM. The LP solver used was the standard Matlab solver. Matlab is a relatively easy language to use, but it is slower than CPLEX. The comparisons that follow are meaningful because both algorithms were programmed in the same language, and the relative speeds should be approximately the same in another language. The development of the test instances used in the study is discussed in the next section, and the two algorithms are compared in terms of solution quality and computational time in the following sections.


Table 1
Generated test instances

Test case                  I      II     III    IV     V
Number of tracts           10     10     50     50     201
Annual budget ($ million)  0.5    1      2.5    5      5

Table 2
Mitigation cost rate sets

Mitigation cost rate   Set A   Set B   Set C   Set D
r^{low,moderate}       0.75    0.50    0.75    0.25
r^{moderate,high}      0.75    0.50    0.25    0.75

7.1. Test case definitions

Five test cases were used for the computational experiments (Table 1). Three characteristics were thought to potentially affect the solution quality or computational time and were therefore used to define the test cases: (1) number of census tracts, (2) annual budget, and (3) relative cost of mitigating to higher codes. All other characteristics are the same across test instances. There are three types of problem instances with respect to the number of tracts: 10, 50, and 201 tracts. For each of the 10- and 50-tract test cases, 10 samples were created by randomly choosing tracts from the 1652 tracts in Los Angeles County. The random samples were used to minimize the influence of the specific tract selections in comparing test cases, since different census tracts have different building inventories x_m0^c and damage probabilities a_m^{drc}. The 201-tract test case is the same Central and Eastern Los Angeles area described in the Case Study section. Two values of annual budget are used for the 10-tract test case ($0.5M, $1M), two for the 50-tract test case ($2.5M, $5M), and one for the 201-tract test case ($5M).

The effect of the relative costs of mitigating to higher seismic code levels is explored by defining two terms called the mitigation cost rates, r^{low,moderate} and r^{moderate,high}, such that for all m and t, F_mt^{low,moderate} = r^{low,moderate} F_mt^{low,high} and F_mt^{moderate,high} = r^{moderate,high} F_mt^{low,high}. They describe the costs of mitigation from low to moderate code and from moderate to high code as fractions of the cost of mitigating from low to high code. For example, r^{low,moderate} = 0.75 indicates that the cost of mitigating a building from low to moderate code is 75% of the cost of mitigating it from low to high. The mitigation cost rates range from zero to one, and are constant over time and across building classes. The four sets of mitigation cost rates used in the computational study are shown in Table 2. They represent a range of reasonable values to illustrate the sensitivity of the algorithms to the relative mitigation costs. Each test case in Table 1 is run for all four mitigation cost rate sets in Table 2. Comparisons of Test Cases I and II, or III and IV, provide insight into the algorithms' sensitivity to the annual budget. Comparing the results of Test Cases I, III, and V shows the algorithms' sensitivity to the number of tracts. These test cases create large-scale optimization problems. For instance, with 10 tracts, 30 periods, 6 code combinations (c ≠ c'), and 50 Dantzig–Wolfe iterations, there are over 18 million decision variables z_mt^{cc'n}.

7.2. Solution quality comparison

The solutions obtained by the heuristic and Dantzig–Wolfe algorithms are compared using the quality ratio (QR), defined as the average over samples of QR_s = (U_s^0 − U_s^Sub)/(U_s^0 − U_s^DW), where U_s^0 is the objective value without mitigation for sample s, U_s^DW is the final optimal objective value achieved by the Dantzig–Wolfe algorithm for sample s, and U_s^Sub is the objective value of a suboptimal solution for sample s. The value U_s^Sub can be either the final solution obtained by the heuristic, U_s^H, or the best solution found by Dantzig–Wolfe within the time it takes the heuristic to run, U_s^{DW(H)}. The QR shows the suboptimal reduction in objective value as a percentage of the final Dantzig–Wolfe reduction in objective value. Table 3 includes average QR values for all test cases and mitigation cost rate sets, for suboptimal


Table 3
Average solution quality ratio (QR) for the best Dantzig–Wolfe solution found within the time taken by the heuristic (QR^{DW(H)}) and for the final heuristic solution (QR^H), in %

Test case^a        Set A (0.75/0.75)       Set B (0.5/0.5)         Set C (0.75/0.25)       Set D (0.25/0.75)
                   QR^{DW(H)}  QR^H        QR^{DW(H)}  QR^H        QR^{DW(H)}  QR^H        QR^{DW(H)}  QR^H
I (10, $0.5M)      64.5        99.7        41.5        98.9        26.6        98.7        35.8        99.9
II (10, $1M)       89.4        99.3        76.5        99.2        53.3        99.2        75.6        100
III (50, $2.5M)    38.3        99.7        26.5        99.3        16.5        98.2        23.9        100
IV (50, $5M)       71.7        99.8        53.3        99.8        32.4        99.6        49.5        100
V (201, $5M)       19.1        100         15.7        100         12.2        99.9        8.0         100

^a Test case defined by (number of tracts, annual budget). ^b Mitigation cost rate sets defined by (r^{low,moderate}/r^{moderate,high}).

Table 4
Solution time in seconds: average and (minimum)

                   Heuristic, mitigation cost rate set^b       Dantzig–Wolfe, mitigation cost rate set^b
Test case^a        A        B        C        D                A           B             C             D
I (10, $0.5M)      5 (3)    10 (7)   14 (10)  19 (15)          74 (36)     200 (95)      368 (215)     473 (297)
II (10, $1M)       5 (3)    11 (7)   15 (11)  21 (16)          52 (26)     133 (69)      258 (135)     283 (182)
III (50, $2.5M)    25 (21)  54 (44)  76 (58)  112 (105)        633 (385)   1520 (1148)   2843 (1644)   3752 (3059)
IV (50, $5M)       27 (21)  58 (47)  85 (70)  122 (114)        416 (303)   1106 (654)    2163 (1432)   2469 (1734)
V (201, $5M)       112      336      388      758              3647        17,751        14,137        64,610

^a Test case defined by (number of tracts, annual budget). ^b Mitigation cost rate sets defined by (r^{low,moderate}/r^{moderate,high}): A (0.75/0.75), B (0.5/0.5), C (0.75/0.25), D (0.25/0.75).

solutions U_s^H and U_s^{DW(H)}, called QR^H and QR^{DW(H)}, respectively. The best solutions found by Dantzig–Wolfe within the time taken by the heuristic vary greatly across test cases. They are better for test cases with larger budgets (II and IV) and for test cases with fewer census tracts (I and II). For all test cases, the final heuristic and Dantzig–Wolfe solutions are quite similar. The Dantzig–Wolfe algorithm provides slightly better quality for mitigation cost rate Sets B (0.5/0.5) and C (0.75/0.25), and, within each set, when there is a smaller annual budget. These results confirm that the heuristic algorithm works well.

7.3. Computational time comparison

A comparison of solution times between the two algorithms provides insight into computational cost. Table 4 lists the average and minimum solution times (in seconds), by test case and mitigation cost rate set, for the heuristic and Dantzig–Wolfe algorithms. For both algorithms, the solution time is longer when there are more census tracts, because there are more subproblems to consider. For the Dantzig–Wolfe algorithm, it is also longer when the annual budget is smaller, because the cases with smaller budgets choose mitigation alternatives in later planning periods. For both algorithms, the solution time is also longer for Sets C and D than for the others, because when at least one of the mitigation cost rates is 0.25, there is a larger number of possible mitigation candidates with a positive return and a correspondingly larger number of subproblems to solve.

[Fig. 6. Convergence of the Dantzig–Wolfe algorithm for Test Cases (a) I, (b) II, (c) III, and (d) IV: convergence ratio (0–100%) versus run time in seconds, for mitigation cost rate Sets A–D.]

In each case, the heuristic algorithm finds a final solution 10–100 times faster than the Dantzig–Wolfe algorithm, even after eliminating the use of an LP solver for most of the subproblems in Dantzig–Wolfe. The ratio of Dantzig–Wolfe solution time to heuristic solution time is greatest for cases with more census tracts and for mitigation cost rate Set C. The coefficient of variation among samples is two to eight times larger for the Dantzig–Wolfe algorithm than for the heuristic algorithm.

While the Dantzig–Wolfe algorithm clearly has longer solution times than the heuristic, examining its convergence behavior shows how the improvement in solution quality evolves over time. The convergence of the Dantzig–Wolfe algorithm is summarized by the relationship between the reduction of the objective value and the execution time. Fig. 6 plots the convergence ratio (CR_τ) versus run time for Test Cases I–IV and for all mitigation cost rate sets. The convergence ratio CR_τ is defined as the average of CR_{τs}, where CR_{τs} = (U_s^0 − U_s^{DW(τ)})/(U_s^0 − U_s^{DW}) and U_s^{DW(τ)} is the objective value obtained by the Dantzig–Wolfe algorithm at time τ for sample s. The parameter CR_τ, therefore, represents the average percentage reduction in objective value achieved by time τ over all test case samples. The nonlinear relation between computational time and CR_τ indicates that the rate of convergence slows as time passes. Fifty percent convergence is achieved in an average of 10–14% of the total time. Convergence of 90% and 99% is achieved in less than 30% and 50% of the time, respectively. Thus, half of the total time is spent achieving the last 1% of the total reduction in the objective value. The useful computational time of the Dantzig–Wolfe algorithm depends, therefore, on how much suboptimality the decisionmaker finds acceptable.
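The quality ratio QR_s and convergence ratio CR_{τs} share the same form: the fraction of the full Dantzig–Wolfe objective reduction achieved by some intermediate or heuristic solution. A tiny helper, with invented objective values, illustrates the computation:

```python
def reduction_ratio(U0, U_partial, U_dw):
    """(U0 - U_partial) / (U0 - U_dw): fraction of the optimal objective
    reduction achieved by a partial or heuristic solution."""
    return (U0 - U_partial) / (U0 - U_dw)

# Illustrative (hypothetical) values: no-mitigation objective 1000,
# intermediate solution 905, final Dantzig-Wolfe optimum 900.
print(round(100 * reduction_ratio(1000.0, 905.0, 900.0), 1))  # 95.0 (%)
```

A value near 100% means the candidate solution has already captured essentially all of the loss reduction the exact algorithm would eventually deliver.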
For both budget groups, mitigation cost rate Set D (0.25/0.75) requires the longest time to converge, because its inexpensive cost of mitigating from low to moderate code generates more mitigation candidates with positive returns than the other mitigation cost rate sets do. The test cases with the smaller budgets (Test Cases I and III) take more time to converge than those with larger budgets (Test Cases II and IV), since balancing mitigation spending over longer periods requires a larger number of iterations. The test cases with large numbers


of census tracts (Test Cases III–V) require significantly more time to converge due to the larger number of mitigation alternatives with positive returns. Starting the Dantzig–Wolfe algorithm with the solution to the heuristic produces substantial reductions in computation time; for Test Case IV, the algorithm converged to the optimal solution in about one-fifth of the time. Therefore, if it is important to identify the actual optimal solution, or to prove that the optimal solution has been obtained, seeding the Dantzig–Wolfe algorithm with the heuristic solution is a potentially viable option.

8. Conclusions

This paper describes two solution methods, one based on Dantzig–Wolfe decomposition and one a greedy heuristic, for a linear program that models regional earthquake mitigation investment decisions. For realistic applications, the model includes a very large number of variables and therefore requires a special solution method; Los Angeles County, for example, would require approximately 56 million variables. Computational experiments presented in this paper illustrate that the heuristic developed could be effectively used to solve that problem. The model and solution methods offer a new tool to support regional earthquake mitigation investment decisions related to both how much to spend on pre-earthquake mitigation (versus post-earthquake reconstruction) and how to allocate a specified budget among structural upgrading mitigation alternatives. The model is novel as an optimization-based approach to public sector earthquake risk management resource allocation decisions that incorporates spatial correlation among mitigation alternatives, the range of possible earthquakes with their associated probabilities, and decision timing.
There are many possible future extensions to the model, such as incorporating other objectives (e.g., minimizing the probability of an unacceptably large loss or maximizing equity), including other mitigation alternatives besides structural upgrading (e.g., non-structural upgrading, buying insurance), or applying the approach to other hazards (e.g., hurricanes).

Notation

a_m^{drc}    per-period probability that buildings in class m that were designed to seismic code c enter damage state d of damage type r
B_t          maximum mitigation budget to be spent in period t
c            index of seismic design code
C            number of seismic design code levels
d            index of damage state
F_mt^{cc'}   per square foot cost to mitigate buildings in class m in time period t from seismic code c to code c'
G_mt^{drc}   modified per square foot reconstruction cost that includes a cost for life loss
h            index of extreme points in the solution iteration
i            index of structural type
j            index of occupancy type
k            index of census tract
K_t          assumed cost per life lost in time t
m            index of class of buildings in census tract k that are of structural type i and occupancy type j
M            number of building classes
n            index of solution iteration
r            index of damage type
r^{cc'}      mitigation cost rate for mitigating from seismic code c to code c'
R_mt^{drc}   per square foot reconstruction cost for buildings in class m during time period t that were designed to seismic code c, enter damage state d of damage type r, and are repaired to the same condition c
t            index of period
T            number of periods in planning horizon
x_mt^c       square footage of buildings in class m at the end of period t that are designed to seismic code c
z_mt^{cc'}   square footage of buildings in class m that were in seismic design code c at the end of period t − 1 and were mitigated to seismic code c' at the beginning of period t
ρ_m          initial population density (people per sq ft) in buildings in class m
φ_m^{sdr}    probability that a person enters casualty severity level s when residing in buildings in class m that have damage type r in damage state d
ω_s          multiplication factor that indicates the cost of casualty severity s relative to death
Acknowledgements

The authors would like to thank the National Science Foundation (CMS-0074686 and CMS-00408577) for partial financial support of this research. This support is gratefully acknowledged, but the authors take sole responsibility for the content of the paper.

Appendix

Assume that mitigation and modified reconstruction costs are constant over time. The assumption gives F_{m,t+1}^{cc'} = vF_mt^{cc'}, G_{m,t+1}^{drc} = vG_mt^{drc}, and W_{m,t+1}^{cc'} = vW_mt^{cc'} for all m, t, c, c', where v = 1/(1 + γ), γ is an effective interest rate, and

W_mt^{cc'} = Σ_r Σ_d (G_mt^{drc} a_m^{drc} − G_mt^{drc'} a_m^{drc'})  for all m, t, c, c'.

W_mt^{cc'} is the difference in the per square foot reconstruction cost, averaged over damage states, between seismic code c and code c' for a class m in time t. Then the return becomes

RE_mt^{cc'} = [−F_mt^{cc'} + (W_mt^{cc'} + · · · + W_mT^{cc'})] / F_mt^{cc'} = [−F_mt^{cc'} + W_mt^{cc'}(1 + v + · · · + v^{T−t})] / F_mt^{cc'}.

A mitigation project can thus be regarded as a capital budgeting process that requires an initial investment of F_mt^{cc'} and provides a stream of constant payments W_mt^{cc'}, starting immediately in time t and continuing until time T in annuity form.

Proposition 1. Mitigation alternatives become less appealing over time: if W_mt^{cc'} > 0 for some t < T, then RE_mt^{cc'} > RE_{m,t+1}^{cc'} ∀t < T.

Proof 1. Note that for t < T,

RE_mt^{cc'} − RE_{m,t+1}^{cc'} = (Σ_{t'=t}^{T} W_mt'^{cc'}) / F_mt^{cc'} − (Σ_{t'=t+1}^{T} W_mt'^{cc'}) / F_{m,t+1}^{cc'} = (Σ_{t'=t}^{T} W_mt'^{cc'}) / F_mt^{cc'} − (Σ_{t'=t}^{T−1} vW_mt'^{cc'}) / (vF_mt^{cc'}) = W_mT^{cc'} / F_mt^{cc'}.

Since W_mt^{cc'} > 0 for some t < T, W_mT^{cc'} > 0. This proves the proposition.

Proposition 2. Mitigation priorities stay the same over time: if RE_{m_1 t}^{c_1 c_1'} > RE_{m_2 t}^{c_2 c_2'} for t < T, then RE_{m_1,t+1}^{c_1 c_1'} > RE_{m_2,t+1}^{c_2 c_2'}.

Proof 2. Note that

RE_{m_1 t}^{c_1 c_1'} − RE_{m_2 t}^{c_2 c_2'} = (Σ_{t'=t}^{T} W_{m_1 t'}^{c_1 c_1'}) / F_{m_1 t}^{c_1 c_1'} − (Σ_{t'=t}^{T} W_{m_2 t'}^{c_2 c_2'}) / F_{m_2 t}^{c_2 c_2'} = [W_{m_1 t}^{c_1 c_1'} / F_{m_1 t}^{c_1 c_1'} − W_{m_2 t}^{c_2 c_2'} / F_{m_2 t}^{c_2 c_2'}] (1 + v + · · · + v^{T−t}) > 0.

The last inequality follows from the hypothesis of the proposition. Then

W_{m_1 t}^{c_1 c_1'} / F_{m_1 t}^{c_1 c_1'} − W_{m_2 t}^{c_2 c_2'} / F_{m_2 t}^{c_2 c_2'} > 0.    (20)

Also note that

RE_{m_1,t+1}^{c_1 c_1'} − RE_{m_2,t+1}^{c_2 c_2'} = (Σ_{t'=t}^{T−1} vW_{m_1 t'}^{c_1 c_1'}) / (vF_{m_1 t}^{c_1 c_1'}) − (Σ_{t'=t}^{T−1} vW_{m_2 t'}^{c_2 c_2'}) / (vF_{m_2 t}^{c_2 c_2'}) = [W_{m_1 t}^{c_1 c_1'} / F_{m_1 t}^{c_1 c_1'} − W_{m_2 t}^{c_2 c_2'} / F_{m_2 t}^{c_2 c_2'}] (1 + v + · · · + v^{T−1−t}) > 0.    (21)

This proves the proposition.

References

[1] Dodo A, Xu N, Davidson RA, Nozick LK. Optimizing regional earthquake mitigation investment strategies. Earthquake Spectra 2005;21(2):305–27.
[2] Bertsimas D, Tsitsiklis JN. Introduction to linear optimization. Belmont, MA: Athena Scientific; 1997.
[3] Shah HC, Bendimerad MF, Stojanovski P. Resource allocation in seismic risk mitigation. In: Proceedings, 10th world conference on earthquake engineering, vol. 4. Madrid, Spain; 1992. p. 6007–11.
[4] Augusti G, Borri A, Ciampoli M. Optimal allocation of resources in reduction of the seismic risk of highway networks. Engineering Structures 1994;16(7):485–97.
[5] Ermoliev YM, Ermolieva TY, MacDonald GJ, Norkin VI, Amendola A. A system approach to management of catastrophic risks. European Journal of Operational Research 2000;122:452–60.
[6] Amendola A, Ermoliev Y, Ermolieva TY, Gitis V, Koff G, Linnerooth-Bayer J. A systems approach to modeling catastrophic risk and insurability. Natural Hazards 2000;21:381–93.
[7] Amendola A, Ermoliev Y, Ermolieva TY. Earthquake risk management: a case study for an Italian region. In: Proceedings, EuroConference on global change and catastrophic risk management: earthquake risks in Europe. Laxenburg, Austria: International Institute for Applied Systems Analysis; 2000.
[8] Ermolieva TY, Fischer G, Obersteiner M. Integrated modeling of spatial and temporal heterogeneities and decisions induced by catastrophic events. IIASA Interim Report IR-03-023, International Institute for Applied Systems Analysis, Laxenburg, Austria; 2003.
[9] Hayek C, Ghanem R. Natural hazards: optimal insurance portfolio and coverage resolution. In: Proceedings, international conference on structural safety and reliability ICOSSAR'05. Rotterdam: Millpress; 2005.
[10] Federal Emergency Management Agency (FEMA). State and local plan interim criteria under the Disaster Mitigation Act of 2000. Washington, DC: FEMA; 2002.
[11] Federal Emergency Management Agency (FEMA). State and local mitigation planning how-to guide. Report No. 386-3. Washington, DC: FEMA; 2003.
[12] Seligson HA, Blais NC, Eguchi RT, Flores PJ, Bortugno E. Regional benefit–cost analysis for earthquake hazard mitigation: application to the Northridge earthquake. In: Proceedings, 6th US national conference on earthquake engineering. Seattle, Washington: EERI; 1998.
[13] US Small Business Administration (US SBA). http://www.sba.gov/; 2003 [accessed November 2003].
[14] Association of Bay Area Governments (ABAG). Berkeley's Seismic Retrofit Incentive Program. http://www.abag.ca.gov/bayarea/eqmaps/berkeley.html; 2005 [accessed July 9, 2005].
[15] Federal Emergency Management Agency (FEMA). HAZUS earthquake loss estimation methodology. Technical Manual, Service Release 1. Washington, DC: FEMA; 1999.
[16] Graham J, Vaupel J. Value of life: what difference does it make? Risk Analysis 1981;1(1):89–95.
[17] Los Angeles Department of City Planning Demographics Research Unit (LADRU). Statistical information. http://cityplanning.lacity.org/dru/HomeC2K.htm; 2004 [accessed October 12, 2004].
[18] Chang S, Shinozuka M, Moore II J. Probabilistic earthquake scenarios: extending risk analysis methodologies to spatially distributed systems. Earthquake Spectra 2000;16(3):557–72.
[19] Wills CJ, Petersen M, Bryant WA, Reichle M, Saucedo GJ, Tan S, Treiman J. A site-conditions map for California based on geology and shear-wave velocity. Bulletin of the Seismological Society of America 2000;90(6B):S187–208.