Life-cycle cost optimal design of passive dissipative devices

Structural Safety 31 (2009) 508–522. doi:10.1016/j.strusafe.2009.06.010

Alexandros A. Taflanidis a,*, James L. Beck b

a Department of Civil Engineering and Geological Sciences, University of Notre Dame, 156 Fitzpatrick Hall, Notre Dame, IN 46556, United States
b Engineering and Applied Science Division, California Institute of Technology, 1200 E California Blvd, MC 104-44, Pasadena, CA 91125, United States


Article history: Received 2 June 2008; received in revised form 7 January 2009; accepted 30 June 2009

Keywords: Life-cycle cost optimization; Seismic design; Robust stochastic design; Viscous dampers; Performance-based engineering; Stochastic subset optimization

Abstract: The cost-effective performance of structures under natural hazards such as earthquakes and hurricanes has long been recognized to be an important topic in the design of civil engineering systems. A realistic comprehensive treatment of such a design requires proper integration of (i) methodologies for treating the uncertainties related to natural hazards and to the structural behavior over the entire life-cycle of the building, (ii) tools for evaluating the performance using socioeconomic criteria, as well as (iii) algorithms appropriate for stochastic analysis and optimization. A systematic probabilistic framework is presented here for detailed estimation and optimization of the life-cycle cost of engineering systems. This framework is a general one, but the application of interest here is the design of passive dissipative devices for seismic risk mitigation. A comprehensive methodology is initially presented for earthquake loss estimation; this methodology uses the nonlinear time-history response of the structure under a given excitation to estimate the damage at a detailed, component level. A realistic probabilistic model is then presented for describing the ground motion time history for future earthquake excitations. In this setting, the life-cycle cost is uncertain and can be quantified by its expected value over the space of the uncertain parameters for the structural and excitation models. Because of the complexity of these models, calculation of this expected value is performed using stochastic simulation techniques. This approach, though, involves an unavoidable estimation error and significant computational cost, features which make efficient design optimization challenging. A highly efficient framework, consisting of two stages, is discussed for this stochastic optimization. An illustrative example is presented that shows the efficiency of the proposed methodology; it considers the seismic retrofitting of a four-story non-ductile reinforced-concrete building with viscous dampers. © 2009 Elsevier Ltd. All rights reserved.

1. Introduction

The knowledge about a planned system in engineering design applications is never complete. For an effective design, all uncertainties associated with future excitation events, as well as the modeling of the system, should be explicitly accounted for. A probability logic approach [1] provides a rational and consistent framework for quantifying these uncertainties. In this approach, probability models are interpreted as a means of describing uncertainty due to missing information about the system under consideration. They characterize the relative plausibility of the different possible values of parameters and variables that define properties of the system and its future excitations. Because a set of possible models is used to represent the system behavior, with each model weighted by its probability (i.e. relative plausibility), a robust predictive analysis results and the system design process is therefore called robust stochastic system design. The associated design optimization problem is called stochastic design optimization [2,3]. Such

design problems are challenging because of the complex coupling between (i) system modeling (including quantification of model uncertainties), (ii) performance quantification, (iii) stochastic analysis (propagation of uncertainties), and (iv) design optimization. This general stochastic framework can be used to implement the concept of performance-based engineering, which has emerged as a tool for evaluation and design of civil engineering structures to meet objectives that are meaningful for the structure's stakeholders (expressed in socio-economic terms, such as deaths or losses from repairs and downtime) [4]. In this case a probabilistic treatment is necessary for describing the uncertainties related to future natural hazards (wind, earthquakes, etc.) when evaluating the damage and repair costs over the lifetime of the structure. The consideration of both this lifetime cost and the initial cost of the structure leads to the so-called life-cycle cost stochastic design. The optimal design configuration is then the one that balances these two types of cost according to the utility function that has been selected for expressing the overall performance of the built system. In the last few decades, many researchers have developed design approaches that consider, explicitly or implicitly, the life-cycle cost performance. Initial efforts focused on design problems for


which future costs due to external loads could be approximated as a linear function of the system probability of failure (see, for example, [5,6]). This is an approach that is frequently adopted even for more complex design problems (see, for example, [7]). Other studies developed more detailed approaches in terms of estimating the life-cycle cost. Kong and Frangopol [8] discussed the optimal design of the maintenance schedule for bridges and estimated the life-cycle cost performance using a general reliability framework. Ang and Lee [9] analyzed reinforced concrete buildings and, for each level of intensity of the ground motion, described repair costs using a median global (over the whole structure) Park–Ang damage index. The design aspect of their study was limited to a comparison between a small number of different conventional designs; it did not involve an explicit design optimization. Liu et al. [10] discussed the life-cycle cost optimal seismic design of steel frame buildings. They used genetic algorithms to perform a multi-objective optimization, treating initial cost and lifetime cost as different objectives. Complex design/construction topics were taken into account for estimating the initial cost, but to quantify future earthquake losses they adopted an approximate approach. This approach uses the static pushover response to describe the nonlinear behavior of the structure and considers a small number of hazard levels for addressing the uncertainty in the excitation. This approximation was deemed necessary by the authors in order to reduce the computational cost associated with the optimization process. A similar methodology in terms of estimating repair losses was adopted by Fragiadakis et al. [11]. Most of the life-cycle cost design methodologies suggested so far include some level of approximation with respect to (i) estimating repair costs, (ii) taking into account the uncertainties of the external excitation, and (iii) performing an explicit optimization with respect to the design variables for the system. Although such approximations are generally adequate for preliminary design purposes, their accuracy, and thus their effectiveness for final design decisions, cannot be known a priori and, in general, depends on the characteristics of each design problem. Since the design process involves an optimization for the "best" design configuration, errors in the evaluation of the performance objective, due to approximations in the system modeling, the evaluation of the performance, or the uncertainty quantification, may lead to a design choice that does not correspond to a favorable solution with respect to the behavior of the actual system. Such a design choice will only correspond to an optimum for the approximate design problem. Preferably, the designer should carefully select models and methodologies that are appropriate for the problem considered; models that strike a balance between computational tractability and modeling accuracy. A systematic stochastic design framework is presented here for performance-based earthquake engineering that explicitly optimizes the expected life-cycle cost of infrastructure systems. The basis of this framework is a novel approach to stochastic design problems that has been recently developed [2]. This approach uses stochastic simulation for evaluation of the system performance and a highly efficient framework for performing the design optimization that significantly reduces the computational burden for this task.
These features ultimately allow for consideration of more complex, and potentially more accurate, models for evaluating the structural performance. In this paper, attention is restricted to the optimal design of passive dissipative devices for earthquake risk mitigation. Under extreme loading conditions these devices are inherently nonlinear because of the constraints related to their force capabilities. Thus, their effect on the life-cycle cost performance of the structure cannot be adequately described by many of the existing approximate techniques. The contribution of the current work is that it presents a complete probabilistic framework for a detailed estimation and


optimization of the life-cycle cost for such applications. All important aspects of the design process are considered. A comprehensive methodology is presented for estimating the repair cost due to future damage from earthquake excitations; this methodology uses the nonlinear response of the structure under a given seismic excitation to estimate the damage at a detailed, component level. A realistic probabilistic model is presented for describing earthquake excitations. This model establishes a direct link between the probabilistic seismic hazard description of the structural site and the acceleration time history of future ground motions, and can efficiently describe all important characteristics of the latter. In this setting, the uncertain life-cycle cost is quantified by its expected value over the space of the uncertain parameters for the structural and excitation models. Because of the complexity of the models, this expected value is evaluated by means of stochastic simulation techniques. This approach, though, involves an unavoidable estimation error and significant computational cost, and both of these features make the associated design optimization challenging. An efficient framework, consisting of two stages, is presented here for the optimization in such stochastic design problems. The first stage implements a novel approach, called stochastic subset optimization (SSO), for iteratively identifying a subset of the original design space that has high plausibility of containing the optimal design variables [2,3]. The second stage, if needed, adopts some other stochastic optimization algorithm to pinpoint the optimal design variables within that subset. Note that the methodologies described here, especially with regard to loss estimation and stochastic optimization, can be extended to other types of applications. In the next section the formulation of the design problem is discussed. This involves selection of stochastic structural and excitation models and a loss estimation methodology that lead to quantification of the objective function, the expected life-cycle cost. This cost is expressed as a stochastic, multi-dimensional integral over the uncertain model parameter space. Computational tools are then presented in Section 3 for evaluating the objective function and performing the challenging design optimization, so that the design problem can be efficiently solved. Finally, an illustrative example is presented that shows the efficiency of the proposed methodology. This example considers the seismic retrofitting of a four-story non-ductile reinforced-concrete building with viscous dampers.

2. Life-cycle cost evaluation: stochastic system model and performance quantification

2.1. Stochastic system model

The life-cycle cost optimal design methodology that is described in this study is based on a knowledge-based interpretation of probability [1] that considers all system uncertainties. In this framework, there is no invocation of the unprovable concept of inherently random events. Instead, the probability models characterize the relative plausibility of the different possible values of parameters and variables that define properties of the system and its future excitations. This interpretation of probability was put on a firm foundation by Cox [12,13], who derived the probability axioms by extending Boolean logic, which assumes complete information, to the case of incomplete information. Jaynes also made an important contribution with his Principle of Maximum (Shannon) Entropy for the selection of probability models as state-of-knowledge models that give the largest amount of uncertainty conditional on what we want to specify as known [1]; for example, if only the mean and variance of an uncertain parameter are prescribed (i.e. are treated as known), then the maximum entropy probability distribution function is Gaussian.


In this setting, consider a structural system that includes some controllable parameters that define the system design, referred to herein as design variables, $u = [u_1, u_2, \ldots, u_{n_u}] \in U \subset \mathbb{R}^{n_u}$, where U denotes the bounded admissible design space. For the applications discussed later, u consists of the design characteristics of the passive dissipative devices. Let M represent the model class chosen to describe the system and its environment (representing future excitations). M is comprised of the individual system, excitation and performance evaluation models, as indicated in Fig. 1. Each model in the class is specified by an $n_\theta$-dimensional vector $\theta = [\theta_1, \theta_2, \ldots, \theta_{n_\theta}]$ lying in $\Theta \subset \mathbb{R}^{n_\theta}$, the set of possible values for the model parameters. Because there is uncertainty in which model in this set best represents the system behavior, a PDF (probability density function) p(θ|u), which incorporates our available prior knowledge about the system and its environment, is chosen as part of M; all probabilistic results are implicitly conditional on the chosen model class M. Nonparametric modeling uncertainties may also be incorporated into the system description as a model prediction error, i.e. an error between the response of the actual system and that of a chosen model. This error can be modeled probabilistically [14] and augmented into θ to form an uncertain parameter vector comprised of both the uncertain model parameters and the model prediction error. The favorability of the system response, given the values of the model parameters, is evaluated by the performance measure $h(u, \theta): \mathbb{R}^{n_u} \times \mathbb{R}^{n_\theta} \rightarrow \mathbb{R}^+$, representing the expected utility from a decision-theoretic point of view. In this study the performance measure is:

$$h(u, \theta) = C_{in}(u, \theta) + C_{lif}(u, \theta) \qquad (1)$$

where $C_{in}(u, \theta)$ corresponds to the initial cost and $C_{lif}(u, \theta)$ to the additional cost over the lifetime of the structure. The expected life-cycle cost C(u) is then simply given by:

$$C(u) = E_\theta[h(u, \theta)] = \int_\Theta h(u, \theta)\, p(\theta|u)\, d\theta \qquad (2)$$

where $E_\theta[\cdot]$ denotes expectation with respect to the PDF for the model parameter vector θ. The expected value in (2) takes into account uncertainties about the initial status or design of the structural system, as well as about its uncertain properties and performance over the specified lifetime, including the uncertainties about the characteristics of natural hazards and of the structural system. If the vector $\theta_1$ denotes all the model parameters that are pertinent to the initial status or design of the structural system, then Eq. (2) may be simplified to:

$$C(u) = \int_{\Theta_1} C_{in}(u, \theta_1)\, p(\theta_1|u)\, d\theta_1 + \int_\Theta C_{lif}(u, \theta)\, p(\theta|u)\, d\theta \qquad (3)$$
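As discussed in Section 3, this expected value is evaluated in this work by stochastic simulation. The following is a minimal sketch of such a Monte Carlo estimator of (2); the callables sample_theta, c_init and c_lif are hypothetical placeholders for the model-specific components described in the remainder of this section.

```python
import numpy as np

def expected_life_cycle_cost(u, sample_theta, c_init, c_lif, N=1000):
    """Monte Carlo estimate of Eq. (2): C(u) = E_theta[C_in + C_lif].

    sample_theta(u) draws one sample from p(theta|u); c_init and c_lif
    evaluate the two cost components for a given (u, theta) pair.
    """
    costs = np.empty(N)
    for i in range(N):
        theta = sample_theta(u)
        costs[i] = c_init(u, theta) + c_lif(u, theta)
    c_hat = costs.mean()
    # Coefficient of variation of the estimator, used later (Section 3.1)
    # to judge whether N is "large enough".
    cov = costs.std(ddof=1) / (np.sqrt(N) * c_hat)
    return c_hat, cov
```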

Finally, the life-cycle cost design problem requires determination of the optimal design u*:

$$u^* = \arg\min_{u \in \mathbb{R}^{n_u}} \{C(u)\ |\ f_C(u) \geq 0\} \qquad (4)$$

where $f_C(u)$ is a vector of deterministic constraints, related, for example, to location or space constraints for the dampers. Note that

in this formulation, performance requirements against future natural hazards are directly incorporated in the life-cycle cost of the structure (the objective function) as described later and do not need to be separately considered. Finally, the constraints in (4) may be incorporated into the definition of admissible design space U, which leads to the simplified expression:

$$u^* = \arg\min_{u \in U} C(u) = \arg\min_{u \in U} E_\theta[h(u, \theta)] \qquad (5)$$

For calculation of the objective function in (5), estimation of the two cost components, $C_{in}(u, \theta)$ and $C_{lif}(u, \theta)$, is required. Evaluation of the first one for passive dissipative devices is relatively straightforward; it can be performed using data for existing commercial devices. The second cost component requires evaluation of the structural response and implementation of appropriate methodologies for quantification of economic losses based on that response, as indicated in Fig. 1. The complexity of the models and methodologies that need to be considered for this task depends on the characteristics of the design problem. For the specific application discussed here, it should first be acknowledged that most types of dissipative devices used for earthquake applications (for example, viscous dampers, tuned mass dampers and so forth) have limits on their capabilities (related to force saturation, maximum stroke and so forth) that inherently make their behavior nonlinear under extreme loading conditions. Their effect on the structural performance can be accurately described only in terms of the nonlinear time-history response. This indicates that the seismic vulnerability needs to be estimated by time-history analysis, and so one must appropriately select the structural model, the loss estimation methodology and the type of excitation model. The selections for the latter two are discussed in detail next. The choice of the structural model depends on the available computational tools for structural analysis; for example, one can use a fiber model, a plastic hinge model [4] and so forth. The model selected needs to give information about all response variables relevant to the loss estimation methodology, which may include peak values, such as peak transient drifts, peak acceleration and maximum plastic hinge rotation, or cumulative values, such as the energy dissipated by a specific element. Thus, one criterion for selecting the structural model should be the type of response quantities needed for performing the loss estimation. This depends of course on the structural configuration, contents and usage of the building, since these characteristics define which response variables are important for estimating, in detail, its vulnerability. Additionally, it is important that the structural model describes adequately the nonlinear structural behavior under high-intensity ground motions, since this behavior is expected to have an important contribution to the expected life-cycle cost. Finally, all model properties that influence the structural performance need to be probabilistically quantified, so that their influence on the lifetime performance of the structure is appropriately evaluated. The study by Porter et al. [15] provides a detailed discussion of how major structural uncertainties influence the life-cycle cost performance; the results reported in that study can be used as guidelines about which types of uncertainties are important.

Fig. 1. Augmented system model.

2.2. Loss estimation methodology

Various approaches have been developed for estimation of the lifetime cost in earthquake engineering. Initial methodologies expressed this cost in terms of the global reliability of the structural system. Recent advances in performance-based engineering quantify, more accurately, economic losses, casualties, and downtime in relation to the structural response, using fragility curves to develop such a relationship. In this context, approaches have been developed that approximately describe the nonlinear structural


behavior by the static pushover response (for example [11] or the popular HAZUS model [16]) and/or estimate earthquake losses in terms of global response characteristics (for example [9]). Other researchers (e.g., [17,4]) have developed more thorough analytical tools to evaluate seismic vulnerability using nonlinear time-history analyses and estimate damage at a more detailed, component level. As discussed in the previous section, the latter approach is preferred for estimating earthquake losses for the design problem considered here, so the comprehensive methodology described in [17] and [4] is adopted. According to this methodology, the nonlinear time-history response of the structure under seismic excitation is used to calculate the damage at a component level, as indicated in Fig. 2. This methodology is briefly summarized next.

Repair and replacement cost: For estimating these direct losses, the components of the structure are grouped into $n_{as}$ damageable assemblies. Each assembly consists of components of the structural system that have common characteristics with respect to their vulnerability and repair cost. Such assemblies may include, for example, beams, columns, wall partitions, contents of the building, and so forth. For each assembly j = 1, ..., $n_{as}$, $n_{d,j}$ different damage states are designated and a fragility function is established for each damage state $d_{k,j}$, k = 1, ..., $n_{d,j}$. These functions quantify the probability $P_e[d_{k,j}|EDP_j, u, \theta]$ that the component has reached or exceeded its kth damage state, conditional on some engineering demand parameter ($EDP_j$) which is related to the time-history response of the structure (for example, peak transient drift, peak acceleration, maximum plastic hinge rotation, etc.). Damage state 0 is used to denote an undamaged condition. A repair cost $C_{k,j}$ is then assigned to each damage state, which corresponds to the cost needed to repair the component back to the undamaged condition. The expected losses in the event of an earthquake are given by:

$$L(u, \theta) = \sum_{j=1}^{n_{as}} \sum_{k=1}^{n_{d,j}} P[d_{k,j}|u, \theta]\, C_{k,j} \qquad (6)$$

where $P[d_{k,j}|u, \theta]$ is the probability that assembly j will be in its kth damage state, and the explicit dependence on $EDP_j$ has been dropped since, in the framework considered here, knowledge of the design and model parameter values leads to estimation of $EDP_j$. The probability $P[d_{k,j}|u, \theta]$ may be readily obtained from the information in the fragility curves:

$$P[d_{k,j}|u, \theta] = P_e[d_{k,j}|u, \theta] - P_e[d_{k+1,j}|u, \theta], \quad P[d_{n_{d,j},j}|u, \theta] = P_e[d_{n_{d,j},j}|u, \theta] \qquad (7)$$
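The following sketch illustrates the loss computation of (6) and (7) for a single damageable assembly, assuming, as in the example of Section 4, lognormal fragility functions with median $x_m$ and logarithmic standard deviation $\beta_m$; the numerical values in the usage comment are only illustrative.

```python
import numpy as np
from scipy.stats import norm

def assembly_loss(edp, x_m, beta_m, repair_cost):
    """Expected repair loss for one damageable assembly, Eqs. (6)-(7).

    x_m, beta_m: medians and log-standard deviations of the lognormal
    fragility curves for damage states 1..n_d (ordered, increasing x_m).
    repair_cost: cost C_k to restore the assembly from damage state k.
    """
    x_m, beta_m, cost = map(np.asarray, (x_m, beta_m, repair_cost))
    # P_e[d_k | EDP]: probability of reaching or exceeding each damage state
    p_exceed = norm.cdf(np.log(edp / x_m) / beta_m)
    # Eq. (7): P[d_k] = P_e[d_k] - P_e[d_{k+1}]; last state keeps its P_e
    p_state = p_exceed - np.append(p_exceed[1:], 0.0)
    return float(np.dot(p_state, cost))

# e.g. partitions (illustrative values), peak inter-story drift of 0.5%:
# assembly_loss(0.005, x_m=[0.0033, 0.007], beta_m=[0.2, 0.25],
#               repair_cost=[180, 800])
```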

This approach can be extended to evaluating the cost of injuries and fatalities, though in this case estimating the "unit repair and replacement" cost is less trivial.

Downtime cost: This cost refers to loss of use/revenue while the building is being repaired. For this purpose the structure is separated into $n_u$ operational units. These units correspond to parts of the building that can be considered as independent pieces when evaluating the contributions to its total worth. Independence here refers both to the type of income of the unit and to its vulnerability, i.e. how its functionality is influenced by damage to the structure. The time $R_m$ needed to bring each of these units to functional condition is then calculated based on the type and extent of the damage to it. The latter is calculated using information from the structural response. Note that the time $R_m$ does not necessarily correspond to the time required to completely repair all the damage to the mth operational unit, since in some cases functionality may be established while the repairs continue. Some examples of how this time can be estimated may be found in [17,18]. Finally, losses due to downtime are calculated as:

$$L(u, \theta) = \sum_{m=1}^{n_u} U_m R_m \qquad (8)$$

where $U_m$ is the daily income from operational unit m.

2.3. Ground motion model

Structural performance in this study is evaluated using time-history analysis. This requires development of a probabilistic model of the entire ground motion time history that will adequately describe the uncertainty in future earthquake events. There are two common alternatives for representing this uncertainty [19]. The first one is based on adopting a parameter (or vector of parameters [20]) known as an intensity measure (IM) that represents the dominant features of the ground motion. A probabilistic seismic hazard analysis is then performed to characterize IMs for different hazard levels; this analysis takes into account all important sources of modeling uncertainty for the ground motions. Typically only a small number of hazard levels are considered. Then, for each of these levels, ground motion records consistent with the corresponding IMs are selected from some strong-motion database by performing a seismic hazard disaggregation [21]. These recorded ground motions are taken to represent samples of possible future ground motions for each hazard level (each value of IM) and are used in structural analyses to give samples of the structural response, in our case the $EDP_j$ needed in the fragility functions. A probability model, usually lognormal, is then fit to these samples. The probabilistic structural performance is assessed by combining these probability distributions conditional on IM with a site-specific estimation of the probability that ground shaking with a given value of the IM occurs. The advantage of this approach is that the probabilistic structural performance is established using a fairly small number of recorded earthquake time histories. However, any selected IM will only partially describe the future ground motion characteristics, and the response uncertainty due to the remaining features is somewhat arbitrarily described by a lognormal probability model which is estimated from the computed response to only a few samples of recorded ground motions. Alternatively, a probabilistic description of the ground motion can be established by adopting a stochastic ground motion model that includes, as model parameters, the properties of the regional seismicity and local site conditions.

Fig. 2. Steps for the loss estimation methodology (given the model parameters): generate a time-history ground motion record (excitation model); perform nonlinear structural analysis (structural model); calculate the important response variables at the locations of interest (peak transient drifts and accelerations, energy dissipation, ...); estimate the damage state for each assembly and the repair time for each operational unit; estimate the repair cost and the losses because of downtime (loss estimation model).

Quantification of our knowledge about these latter properties, employing a probability logic approach, then leads to a complete probabilistic ground motion description and hence to an implied probabilistic structural response description. This methodology was proposed in [22]. Jalayer and Beck [19] provide a thorough comparison between the probabilistic structural response under these two ground motion representations. In the current study we adopt the second methodology, since it gives a more complete probabilistic description of the ground motion time history and it does not involve significant approximation in describing the uncertainty about future excitations. In particular, the stochastic ground motion model described in detail in [23], along with the point-source spectrum developed in [24] for California seismicity, are selected. According to this model, the time history for a specific event magnitude, M, and source-to-site distance, r, is obtained by modulating a white-noise sequence $Z_w = [Z_w(i\Delta t): i = 1, 2, \ldots, N_T]$ through a time-domain envelope function and a frequency-domain amplitude spectrum. This is established through the following steps, implemented in the sketch below: (i) the sequence $Z_w$ is multiplied by an envelope function e(t;M,r); (ii) this modified sequence is then transformed to the frequency domain; (iii) it is normalized by the square root of the mean square of the amplitude spectrum; (iv) the normalized sequence is multiplied by a radiation spectrum A(f;M,r); and finally (v) it is transformed back to the time domain to yield the desired acceleration time history. The characteristics for A(f;M,r) and e(t;M,r) used in this study are presented in Appendix A. For given local site conditions, the model parameters consist of the seismological parameters $\theta_g = [M, r]$ and the high-dimensional white-noise sequence $Z_w$, so $\theta_q = [M, r, Z_w]$. Fig. 3a shows the functions A(f;M,r) and e(t;M,r) for different values of M and r = 15 km. It can be seen that as the moment magnitude increases, the duration of the envelope function also increases and the spectral amplitude becomes larger at all frequencies, with a shift of dominant frequency content towards the lower-frequency regime. Fig. 3b shows a sample ground motion for M = 6.7 and r = 15 km.
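A minimal sketch of steps (i)–(v) follows. The envelope e(t;M,r) and radiation spectrum A(f;M,r) are passed in as callables, since their specific forms (given in Appendix A) are not reproduced here; the discrete-time parameters are illustrative assumptions.

```python
import numpy as np

def ground_motion(M, r, envelope, spectrum, dt=0.01, NT=2048, rng=None):
    """Synthesize an acceleration time history by spectral modulation
    of white noise (steps i-v of Section 2.3)."""
    rng = np.random.default_rng() if rng is None else rng
    t = np.arange(NT) * dt
    # (i) modulate a white-noise sequence by the time-domain envelope
    z = envelope(t, M, r) * rng.standard_normal(NT)
    # (ii) transform to the frequency domain
    Z = np.fft.rfft(z)
    f = np.fft.rfftfreq(NT, dt)
    # (iii) normalize by the root of the mean square of its amplitude spectrum
    Z /= np.sqrt(np.mean(np.abs(Z) ** 2))
    # (iv) apply the radiation (amplitude) spectrum A(f; M, r)
    Z *= spectrum(f, M, r)
    # (v) transform back to the time domain
    return t, np.fft.irfft(Z, n=NT)
```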

3. Life-cycle cost optimal design: stochastic analysis and optimization

3.1. Stochastic analysis: probabilistic performance evaluation

The system, excitation and performance evaluation models described earlier lead to quantification of the performance measure h(u,θ) (as in Fig. 1), conditional on the values of the model parameters θ and the design configuration u. The optimization in (5) requires, additionally, the evaluation of the integral corresponding to the objective function. Since the models considered are complex and include a large number of uncertain model parameters, this high-dimensional integral cannot be calculated, or efficiently approximated, analytically. Rather, estimation through stochastic simulation techniques is considered. Using a finite number, N, of

random samples of θ drawn from p(θ|u), an estimate for the objective function is given by performing the stochastic analysis:

$$\hat{C}(u) = \hat{E}_\theta[h(u, X_N)] = \frac{1}{N}\sum_{i=1}^{N} h(u, \theta_i) \qquad (9)$$

where $X_N = [\theta_1, \ldots, \theta_N]$ is defined as the sample set, and the vector $\theta_i$ denotes the sample of the uncertain parameters used in the ith simulation. Note that each stochastic analysis requires N evaluations of the performance measure, that is, N simulations of the nonlinear structural model response. N could be chosen so that the coefficient of variation of the estimate in (9) is "small enough" (for example, smaller than 5%), since this indicates that the estimate is a good approximation of the actual objective function [25]. Polak and Royset [26] provide an adaptive scheme for choosing N in general stochastic optimization problems, whereas discussion of the a priori selection of N for some special types of stochastic problems is given in [27]. Finally, using the estimate in (9), the optimal design choice is then given by the stochastic optimization:

$$u^* = \arg\min_{u \in U} \hat{E}_\theta[h(u, X_N)] \qquad (10)$$

Fig. 3. (a) Radiation spectrum and envelope function for various M and r = 15 km and (b) sample ground motion for M = 6.7, r = 15 km.

The estimate in (9) involves an unavoidable estimation error. The existence of this error, which may be considered as random noise in the objective function, contrasts with classical deterministic optimization where it is assumed that one has perfect information. Such problems, encountered in decision making under uncertainty, are typically referred to as stochastic optimization problems. In [3,28], a thorough review of appropriate techniques for these problems is given, such as usage of common random numbers and exterior sampling approaches, along with optimization algorithms such as simultaneous-perturbation gradient-based methods and genetic algorithms. All of these algorithms require, though, a large number of evaluations of the objective function, that is, a large number of stochastic analyses. Since each stochastic analysis requires N potentially time-consuming simulations of the nonlinear structural model, performing these analyses a large number of times would be a challenging task. An efficient two-stage framework [2] for performing this stochastic optimization is discussed next. The first stage of the framework uses the novel algorithm presented recently in [2], called stochastic subset optimization (SSO), for efficiently converging to a smaller subset of U involving the optimal design variables, while the second stage pinpoints more precisely the optimal solution (within the identified subset) using a different optimization algorithm.

3.2. Stochastic subset optimization

SSO is a highly efficient approach for exploring the sensitivity of $E_\theta[h(u, \theta)]$ to u using a small number of stochastic analyses. The basic idea in SSO is the formulation of an augmented stochastic problem where the design variables are artificially considered as uncertain with uniform distribution $p(u) = 1/V_U$ over the design


space U. In the context of this augmented stochastic design problem, an auxiliary PDF π(u,θ) is defined as:

$$\pi(u, \theta) = \frac{h(u, \theta)\, p(u, \theta)}{E_{u,\theta}[h(u, \theta)]}, \quad \text{where } E_{u,\theta}[h(u, \theta)] = \int_U \int_\Theta h(u, \theta)\, p(u, \theta)\, d\theta\, du \qquad (11)$$

with p(u,θ) = p(u)p(θ|u). The integral in the denominator of π(u,θ) corresponds to the expected value in the augmented uncertain space. Knowledge of this integral is not explicitly needed (it is simply a normalization constant). In the context of the augmented stochastic problem, the objective function $E_\theta[h(u, \theta)]$ may be expressed as:

$$E_\theta[h(u, \theta)] = E_{u,\theta}[h(u, \theta)]\, \frac{\pi(u)}{p(u)}, \quad \text{where } \pi(u) = \int_\Theta \pi(u, \theta)\, d\theta \qquad (12)$$

Since $E_{u,\theta}[h(u, \theta)]$ and p(u) are constants, the marginal PDF π(u) expresses the sensitivity of the objective function $E_\theta[h(u, \theta)]$ with respect to the design variables. Samples of this PDF can be obtained through stochastic simulation techniques [29]; for example, by using direct Monte Carlo simulation (MCS) or Markov Chain Monte Carlo (MCMC) sampling. Appendix B provides a brief description of two such algorithms. These algorithms give sample pairs [u,θ] that are distributed according to the joint distribution π(u,θ), and their u component corresponds to samples from the marginal distribution π(u). A sensitivity analysis can be efficiently performed by exploiting the information in these samples; this is established in the SSO setting by looking at the ratio of the average values of $E_\theta[h(u, \theta)]$ over any subset of the design space I ⊂ U and over the initial design space U. This ratio H(I), as well as an estimate of it based on the samples from π(u) obtained as described previously, are given, respectively, by:

$$H(I) = \frac{\int_I E_\theta[h(u, \theta)]\, du / V_I}{\int_U E_\theta[h(u, \theta)]\, du / V_U} = \frac{V_U}{V_I} \int_I \pi(u)\, du; \qquad \hat{H}(I) = \frac{N_I / V_I}{N_U / V_U} \qquad (13)$$

where $N_I$ and $N_U$ denote the number of samples from π(u) belonging to the sets I and U, respectively, and $V_I$ is the volume of the subset I. The coefficient of variation (c.o.v.) of the estimate of H(I) depends on the simulation technique used for obtaining the samples from π(u). For a broad class of sampling algorithms this c.o.v. may be expressed as:

$$\text{c.o.v.}\ \hat{H}(I) \approx \sqrt{\frac{1 - N_I/N_U}{N\,(N_I/N_U)}} \qquad (14)$$

where $N = N_U/(1 + c)$, c ≥ 0, is the effective number of independent samples. If direct Monte Carlo techniques are used then c = 0, but if Markov Chain Monte Carlo (MCMC) sampling is selected then c > 0 because of the correlation of the generated samples (an estimate for c when the MCMC algorithm described in Appendix B is used may be found in [22]). A subset sensitivity analysis can be performed by considering the stochastic subset optimization:

$$\hat{I} = \arg\min_{I \in A} \hat{H}(I) \qquad (15)$$

where A is a family of admissible subsets of U, defined by each subset having to satisfy a specified geometrical shape and size constraint [2]. Optimization (15) identifies the subset that belongs in A and has the smallest estimated average value of $E_\theta[h(u, \theta)]$. This subset has the largest likelihood, in terms of the information available through the obtained samples, of including u*. This likelihood defines the quality of the identification and ultimately depends on H(I); taking into account the fact that the average value of $E_\theta[h(u, \theta)]$ in the neighborhood of the optimal solution is expected to be the smallest in U (if the characteristics of A are appropriately chosen [3]), it is evident that smaller values of H(Î) correspond to greater plausibility for the set Î to include u*. Note that a value of H(Î) close to unity implies nearly the same average value of the objective function for subsets I and U. This additionally indicates small sensitivity of the objective function to the design choices within the search space. Since only estimates of H(Î) are available in the stochastic identification, the quality depends, ultimately, on both (a) the estimate Ĥ(Î) and (b) its coefficient of variation (defining the accuracy of that estimate). Smaller values of these parameters correspond to better quality of identification. An important issue for the efficiency of SSO is the appropriate selection of the size of the class of admissible subsets. A detailed discussion is provided in [2]. A choice that allows for direct control of the c.o.v. of Ĥ(Î) (see (14)), and thus of the quality of SSO, is $A_q = \{I \subset U: q = N_I/N_U\}$. In this scheme, the volume (size) of the admissible subsets is adaptively chosen so that the ratio of samples in the identified set is equal to q. Also, instead of trying to pinpoint a small region for the optimal solution in one step, an iterative process is applied that greatly improves the efficiency of the algorithm. At iteration k, additional samples in $\hat{I}_{k-1}$ (where $\hat{I}_0 = U$) that are distributed according to π(u) are obtained. A region $\hat{I}_k \subset \hat{I}_{k-1}$ for the optimal design parameters is then identified. The samples from π(u) (and π(u,θ)) available from the previous iteration that are inside set $\hat{I}_{k-1}$ are exploited for improving the efficiency of the sampling process. In terms of the algorithms described in Appendix B, this may be established, for example, by (a) forming better proposal PDFs or (b) using the samples already available as seeds for MCMC simulation (since these samples already follow the target distribution). The SSO algorithm consists of the following steps (Fig. 4 illustrates some important steps when the shape of admissible subsets is chosen as ellipses):

Initialization: Define the bounded design space U and the desired geometrical shape (for example, hyper-rectangles or, better still, hyper-ellipses) for the subsets I. Decide on the desired number of samples N and on the value of the constraint q (a value of 0.1–0.2 leads to good efficiency).

Step k: Use some sampling procedure, such as MC simulation for the first step and MCMC simulation for subsequent steps, in order to simulate N samples from π(u) inside the subset $\hat{I}_{k-1}$. Identify subset $\hat{I}_k$ as:

$$\hat{I}_k = \arg\min_{I \in A_{q,k}} \hat{H}(I) = \arg\min_{I \in A_{q,k}} N_I/V_I, \quad A_{q,k} = \{I \subset \hat{I}_{k-1}: q = N_I/N\} \qquad (16)$$

Keep only the $N_{\hat{I}_k}$ samples that belong to the subset $\hat{I}_k$ (exploit these samples in the next step).

Stopping criteria: At each step, estimate the ratio:

$$\hat{H}(\hat{I}_k) = \frac{N_{\hat{I}_k}/V_{\hat{I}_k}}{N_{\hat{I}_{k-1}}/V_{\hat{I}_{k-1}}} \qquad (17)$$

Based on this ratio and the desired quality of the identification (see discussion next), decide whether to stop or proceed to step k+1.

In a small number of iterations, this algorithm will adaptively converge to a sub-region containing the optimal design variables u* that has small sensitivity with respect to all design variables (value of Ĥ(Î) close to unity), and thus consists of near-optimal design solutions. SSO also gives information about the local behavior of the objective function. As long as the shape of the admissible subsets is close to the contours of the objective function near the optimal design point, the subset identified in the last stage of SSO provides a good estimate of these contours. Note that for each iteration of the SSO algorithm, a single stochastic analysis is performed; thus the computational cost is comparable to the cost required for a single evaluation of the objective function for the stochastic optimization (10). But in SSO, information about the global behavior (over the whole set of the design variables U) of the objective function is obtained. This is what contributes to the high efficiency of SSO compared to other stochastic optimization algorithms [2]. Additionally, it can be proven that, as long as optimization (15) can be efficiently performed (more details about this sub-optimization may be found in [3]), the computational burden of SSO increases only linearly with the dimension of the design variables.

Fig. 4. Illustration of some key steps in SSO for an example with a two-dimensional design space. The X in the plots corresponds to the optimal solution.

A sensitivity analysis of the performance objective to the model parameters is also established by SSO. The concept is similar to the sensitivity analysis with respect to the design variables; the samples available from π(u,θ) are projected in this case onto the space of the model parameters Θ. These samples are distributed according to the marginal distribution π(θ) (defined similarly to π(u) in (12)), that is, proportionally to the integrand of the objective function when the design variables lie in $I_{SSO}$. Their distribution, compared to the prior distribution p(θ|u), expresses the sensitivity of the performance measure to the model parameters; bigger discrepancies between these distributions indicate greater significance of the corresponding model parameters in affecting the system performance. Such information can be used to get a better understanding of the importance of the model parameters to the expected life-cycle cost of the building.

3.3. Two-stage framework for stochastic optimization

When SSO has converged, all designs in $I_{SSO}$ give nearly the same value of the objective function and so can be considered near-optimal. The center of the set $I_{SSO}$, $u_{SSO}$, can be taken as an approximation for u*. There is, though, no rigorous measure of the quality of the identified solution, i.e. how close $u_{SSO}$ is to u*, that can be directly established through the SSO results. To pinpoint more precisely the optimal solution within the $I_{SSO}$ subset, a second optimization stage can be implemented. This leads to a two-stage optimization framework. Optimization (10) is solved in this second stage using any appropriate stochastic optimization algorithm, exploiting all information available from SSO about the sensitivity with respect to both the design variables and the uncertain model parameters (see discussion later on).
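Before detailing the second stage, it may help to make the identification step (16) of the first stage concrete. The sketch below assumes axis-aligned hyper-rectangular subsets and a crude random search for the sub-optimization in (16), which [2,3] treat far more carefully; samples of u distributed according to π(u) are assumed to be already available.

```python
import numpy as np

def sso_identification(u_samples, bounds, q=0.2, n_trials=5000, rng=None):
    """Sketch of Eq. (16): among axis-aligned boxes I inside the current
    bounds that contain a fraction q of the available pi(u)-distributed
    samples, pick the one minimizing N_I / V_I, i.e. the box of lowest
    sample density, which minimizes H_hat(I) of Eq. (13)."""
    rng = np.random.default_rng() if rng is None else rng
    N = len(u_samples)
    lo, hi = np.asarray(bounds[0], float), np.asarray(bounds[1], float)
    best_box, best_density = None, np.inf
    for _ in range(n_trials):
        a = rng.uniform(lo, hi)          # random candidate box [a, b]
        b = rng.uniform(a, hi)
        inside = np.all((u_samples >= a) & (u_samples <= b), axis=1)
        n_I = inside.sum()
        if abs(n_I / N - q) > 0.02:      # sample-fraction constraint of A_q
            continue
        density = n_I / np.prod(b - a)   # proportional to H_hat(I)
        if density < best_density:
            best_box, best_density = (a, b), density
    return best_box  # new subset; resample pi(u) inside it and iterate
```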

The specific algorithm selected for the second stage determines the level of quality that should be established in the SSO identification. If a method is chosen that is restricted to search only within $I_{SSO}$ (typically characteristic of gradient-free methods), then better quality is needed. The iterations of SSO should stop at a larger size set, to establish greater plausibility that the identified set includes the optimal design point. If, on the other hand, a method is selected that allows for convergence to points outside the identified set, lower quality may be acceptable in the identification. The information about the sensitivity of the objective function to the design variables available from the SSO stage can be used to readily tune the characteristics of the algorithms used in the second stage. The search may be restricted only within $I_{SSO}$, which has smaller size (volume) than the original design space U. Also, it is established that the sensitivity of the objective function with respect to all components of u is small and, depending on the selection of the shape of admissible subsets, the correlation, or interaction, between the design variables may be identified by the orientation and relative size of $I_{SSO}$ in U. This allows for efficient normalization of the design space (in selecting step sizes and blocking criteria), selection of the starting point for iterative algorithms (the center $u_{SSO}$ can be selected for this purpose), or choice of interpolating functions (for example, for response surface methodologies). In particular, the combination of SSO with the efficient gradient-based simultaneous-perturbation stochastic approximation (SPSA) [28,30] algorithm has been reported [2,3] to be highly efficient; usually the second stage requires only a small number of objective function evaluations to converge to the optimal solution. SPSA is an efficient gradient-based algorithm for stochastic search. It uses numerical differentiation for obtaining derivative information, which makes it appropriate for complex design problems where analytic differentiation might be impractical, but avoids the high computational cost associated with finite difference methods. It is based on the observation that one properly chosen simultaneous random perturbation in all components of u provides as much information for optimization purposes in the long run as a full set of one-at-a-time changes of each component. Thus, it uses only two evaluations of the objective function, in a direction randomly chosen at each iteration, to form an approximation to the gradient


vector. It also implements the notion of stochastic approximation, which can significantly improve the computational efficiency of stochastic search methods. This can be established by applying an equivalent averaging across the iterations of the algorithm (by appropriate selection of step sizes) instead of establishing higher-accuracy estimates (for the "noisy" objective function) at each iteration. More details about the SPSA algorithm, and about how the characteristics of the $I_{SSO}$ set can be used to fine-tune its parameters, are provided in Appendix C. Additionally, the information available from SSO about the sensitivity of the performance objective to the model parameters can be exploited to reduce the variance of the stochastic-simulation based estimate of the objective function $E_\theta[h(u, \theta)]$ at the second stage of the optimization framework. This can be established by introducing an importance sampling (IS) density $p_{is}(\theta)$ [29] to simulate more samples in regions of the model parameter space where the integrand of the objective function (stochastic integral) has higher values. Since the samples of θ available from SSO are distributed proportionally to that integrand, they can be used to create such a density to improve the efficiency of the estimation of the integral by stochastic simulation. A method for creating IS densities using samples of the model parameters is presented in [31]. Since the set $I_{SSO}$ is relatively small, the suggested IS densities are expected to be efficient for all u in $I_{SSO}$. The estimate of the objective function is then:

$$\hat{E}_\theta[h(u, X_N)] = \frac{1}{N}\sum_{i=1}^{N} h(u, \theta_i)\, \frac{p(\theta_i|u)}{p_{is}(\theta_i)} \qquad (18)$$

where the samples $\theta_i$ are simulated according to $p_{is}(\theta)$. If the importance sampling densities are efficiently selected, then application of IS may lead to significant reduction of the c.o.v. of the estimate in (18); thus, accurate estimates of the objective function may be established using smaller values of N. For problems with a high-dimensional vector θ, IS densities should be formulated only for the set of the influential model parameters [32] (in terms of the discussion in Appendix B, this corresponds to the set $\Theta_2$). This circumvents problems that may occur when establishing IS densities for estimation of high-dimensional stochastic integrals (see [33] for a detailed discussion).
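A minimal sketch of the importance sampling estimator (18), with hypothetical callables for the performance measure and the two densities:

```python
import numpy as np

def is_estimate(u, thetas, h, p_prior, p_is):
    """Importance sampling estimate of Eq. (18). thetas are samples drawn
    from the IS density p_is; p_prior(theta, u) and p_is(theta) return
    density values; h(u, theta) is the performance measure."""
    w = np.array([p_prior(th, u) / p_is(th) for th in thetas])
    vals = np.array([h(u, th) for th in thetas]) * w
    est = vals.mean()
    # c.o.v. of the weighted estimate, used to check the IS efficiency
    cov = vals.std(ddof=1) / (np.sqrt(len(vals)) * est)
    return est, cov
```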

4. Illustrative example: retrofitting of an office building

The retrofitting of a symmetric four-story office building with linear viscous dampers is considered. The building is a non-ductile reinforced-concrete, perimeter moment-frame structure. The plan dimensions of the building are 45 × 45 m and the height of each story is 3.9 m. The perimeter frames in the two building directions are separated from each other, which allows structural analysis in each direction to be done separately. Because of the symmetry of the building, analysis of only one of the directions is necessary. The design variables in this problem correspond to the maximum force capacities of the dampers in each story, $F_{ud,i}$, i = 1, ..., 4. The viscosity of these dampers is selected assuming that the maximum force capacity is established at a velocity of 0.2 m/sec. Probability models are established for all model parameters of both the structural system and the excitation models. As discussed in Section 2.1, these probability models are chosen to reflect our available prior knowledge about the behavior and properties of the true system and its environment over their entire life cycle. For most model parameters, knowledge of only their mean value and variance is assumed, so Gaussian probability models are chosen (based on the maximum entropy principle discussed earlier).

4.1. Structural model

A class of shear-building frame models with hysteretic behavior and deteriorating stiffness and strength is chosen to represent the structure. A distributed-element model is used for the hysteretic deteriorating behavior of the restoring force of each story, with piecewise-linear restoring force and breaking elements (similar to the ideas discussed in [34]). According to this model, the restoring force per story is modeled as a bilinear function with a maximum force capacity, while the reduction of strength during excessive loading of a story is attributed to the breaking of a fraction of the distributed elements of the story (that fraction is proportional to the fractional strength reduction). The failed elements no longer contribute to the stiffness and strength of the story for future unloading and loading cycles. Additionally, a residual strength is assumed equal to 10% of the maximum strength. Fig. 5 illustrates some of the characteristics of the model, including the backbone of the restoring force and an example of a hysteresis loop for excessive drift. The model parameters that define the backbone curve for the restoring force of the ith story are selected as: the elastic inter-story stiffness $k_i$, the strain-hardening coefficient $\alpha_i$, the over-strength factor $\gamma_i$, the yield displacement $\delta_{y,i}$, the displacement coefficient $\eta_i$ and the stiffness deterioration coefficient $\beta_i$ (see also Fig. 5). Based on these parameters, the following resultant characteristics can be defined: yield force $F_{y,i}$; ultimate force $F_{u,i}$; displacement $\delta_{p,i}$ at which the restoring force reaches maximum strength; capping displacement $\delta_{u,i}$, which is defined as the maximum ductility at peak strength; strain-hardening stiffness $k_{h,i}$; and the initial post-capping stiffness $k_{p,i}$. These characteristics are given, respectively, by the following relationships:

$$F_{y,i} = \delta_{y,i} k_i, \quad F_{u,i} = (1 + \gamma_i)\,\delta_{y,i} k_i$$
$$\delta_{p,i} = \left(1 + \frac{\gamma_i}{\alpha_i}\right)\delta_{y,i}, \quad \delta_{u,i} = \left(1 + \frac{\gamma_i}{\alpha_i} + \eta_i\right)\delta_{y,i} \qquad (19)$$
$$k_{h,i} = \alpha_i k_i, \quad k_{p,i} = \beta_i k_i$$
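For concreteness, a small helper evaluating the resultant characteristics (19) for one story; the usage comment employs the mean-value parameters reported below and is only illustrative.

```python
def backbone(k, alpha, gamma, d_y, eta, beta):
    """Resultant backbone characteristics of Eq. (19) for one story."""
    return {
        "F_y": d_y * k,                          # yield force
        "F_u": (1 + gamma) * d_y * k,            # ultimate force
        "d_p": (1 + gamma / alpha) * d_y,        # displacement at peak strength
        "d_u": (1 + gamma / alpha + eta) * d_y,  # capping displacement
        "k_h": alpha * k,                        # strain-hardening stiffness
        "k_p": beta * k,                         # initial post-capping stiffness
    }

# mean-value story parameters from Section 4.1 (d_y = 0.22% of the 3.9 m
# story height, k in N/m): backbone(700.0e6, 0.1, 0.3, 0.0022 * 3.9, 2.0, 0.2)
```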

The lumped mass of the top story is 935 metric tons, while it is 1215 metric tons for each of the bottom three. The initial inter-story stiffnesses $k_i$ of all the stories are parameterized by $k_i = \hat{k}_i \theta_{k,i}$, i = 1, ..., 4, where $[\hat{k}_i]$ = [700.0, 616.1, 463.6, 281.8] MN/m are the most probable values and $\theta_{k,i}$ are non-dimensional uncertain parameters, modeled as correlated Gaussian variables with mean value one and covariance matrix with elements $\Sigma_{ij} = (0.1)^2 \exp[-(i - j)^2/2^2]$, which roughly imply significant correlation between inter-story stiffnesses within two stories apart and a coefficient of variation (c.o.v.) of 10%. For each story, the post-yield stiffness coefficient $\alpha_i$, stiffness deterioration coefficient $\beta_i$, over-strength factor $\gamma_i$, yield displacement $\delta_{y,i}$, and displacement coefficient $\eta_i$ have mean values 0.1, 0.2, 0.3, 0.22% of story height, and 2, respectively. All these parameters are treated as independent Gaussian variables with coefficient of variation (c.o.v.) 10%. The structure is assumed to be modally damped. The damping ratios for all modes are treated similarly as Gaussian variables with mean values 5% and c.o.v. 10%. Note that the main variables identified in [15] as influencing the earthquake loss estimation are probabilistically taken into account here. The nominal fundamental period of the linear structural model, i.e. corresponding to the most probable values of the model parameters, is calculated as 0.76 sec.

Fig. 5. Nominal structural model assumed in the study: retrofitting scheme with viscous dampers, backbone curve of the ith story restoring force, and illustration of the deteriorating stiffness and strength characteristics.

4.2. Probabilistic seismic hazard and ground motion models

Seismic events are assumed to occur following a Poisson distribution and so are independent of previous occurrences. The uncertainty in moment magnitude M is modeled by the Gutenberg–Richter relationship truncated on the interval

$[M_{min}, M_{max}]$ = [5.5, 8], leading to the PDF and expected number of events per year given, respectively, by [35]:

$$p(M) = \frac{b\, \exp(-b M)}{\exp(-b M_{min}) - \exp(-b M_{max})}, \quad \nu = \exp(a - b M_{min}) - \exp(a - b M_{max}) \qquad (20)$$
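Magnitudes can be sampled from (20) by inverting the truncated Gutenberg–Richter CDF; a minimal sketch, using the seismicity factor adopted below as a default:

```python
import numpy as np

def sample_magnitude(n, b=0.9 * np.log(10), m_min=5.5, m_max=8.0, rng=None):
    """Draw magnitudes from the truncated Gutenberg-Richter PDF of Eq. (20)
    by inverse-CDF sampling."""
    rng = np.random.default_rng() if rng is None else rng
    u = rng.uniform(size=n)
    e_min, e_max = np.exp(-b * m_min), np.exp(-b * m_max)
    # CDF(M) = (e_min - exp(-b M)) / (e_min - e_max), inverted for M
    return -np.log(e_min - u * (e_min - e_max)) / b
```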

Only events with magnitude greater than 5.5 are considered, since earthquakes with smaller magnitude are not expected to lead to significant damage to the structure and thus will not contribute significantly to the expected life-cycle cost (see also the later discussion of the formulation of IS densities). The regional seismicity factors are selected as b = 0.9 log_e(10) and a = 4.35 log_e(10), leading to ν = 0.25. For the uncertainty in the event location, the logarithm of the epicentral distance, r, for the earthquake events is assumed to follow a Gaussian distribution with mean log(20) km and standard deviation 0.4. Fig. 10a illustrates the PDFs for M and r. For describing the ground motion time history, the stochastic-model approach described in Section 2.3 is adopted. These seismicity and ground motion models completely define the probabilistic seismic hazard for the structural site. In earthquake engineering terminology, this hazard can also be represented in terms of the rate of occurrence of some intensity measure (IM) of interest, rather than the abstract PDF and stochastic model characterization. To provide a better understanding of this hazard, that representation is also provided here; the mean total rate of seismic events for which the intensity measure IM exceeds some threshold im may be calculated as:

$$\nu_{IM}(im) = \nu \int_{\Theta_q} P[IM \geq im\,|\,\theta_q]\, p(\theta_q)\, d\theta_q \qquad (21)$$

where $\theta_q = [M, r, Z_w]$. Because of the Poisson model for earthquake occurrences, the probability that at least one seismic event with IM ≥ im will occur over a time period $t_{dur}$ is:

$$P[IM \geq im\,|\,t_{dur}] = 1 - \exp(-\nu_{IM}(im)\, t_{dur}) \qquad (22)$$
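A minimal sketch of the stochastic simulation used to evaluate (21) and (22); im_of is a hypothetical hook that generates a ground motion for a sampled θ_q and returns the intensity measure of interest (for structural-response IMs it would also run the structural analysis):

```python
import numpy as np

def hazard_curve(im_grid, im_of, sample_theta_q, nu=0.25, N=10_000,
                 t_dur=60.0):
    """Monte Carlo evaluation of Eqs. (21)-(22): mean rate nu_IM(im) of
    events with IM >= im, and the probability of at least one such event
    over t_dur."""
    ims = np.array([im_of(sample_theta_q()) for _ in range(N)])
    rate = nu * np.array([(ims >= im).mean() for im in im_grid])  # Eq. (21)
    prob = 1.0 - np.exp(-rate * t_dur)                            # Eq. (22)
    return rate, prob
```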

Fig. 6 shows the mean rate for IM ≥ im (Fig. 6a) and the probability of occurrence for $t_{dur}$ = 1 year (Fig. 6b) and $t_{dur}$ = 60 years (Fig. 6c) for two families of intensity measures. The first family corresponds to the peak ground acceleration and the second to the peak spectral acceleration of a SDOF oscillator with damping ratio 5% and four different periods: 1 sec, 0.76 sec (equal to the period of the linear structural model), 0.5 sec, and 0.25 sec. Stochastic simulation has been used for evaluating the integral in (21). Note that only events in the range $[M_{min}, M_{max}]$ = [5.5, 8] are considered for these plots.

Fig. 6. (a) Mean occurrence rate, and probability of occurrence for (b) t_dur = 1 year and (c) t_dur = 60 years, for PGA (peak ground acceleration) and PSA (peak spectral acceleration).

4.3. Expected life-cycle cost

The uncertain parameter vector in this design problem consists of the structural model parameters, $\theta_s$, the seismological

PGA T=1sec T=0.76sec T=0.5sec T=0.25sec

0.2 0.15

IM=

0.1 0.05 0

Z

(a) Rate of occurrence of IM=im

0.25

vIM(im)

events, IM P im, corresponding to some intensity measure IM that exceeds some threshold im, may be calculated as:

0

0.1

0.2

0.3

0.4

0.5

0.6

0.7

PSA

0.8

0.9

1

im=acceleration (g) (b) Probability of occurrence for tdur=1 year (c) Probability of occurrence for tdur=60 years P[IM im⏐tdur]

P[IM im⏐tdur]

100

10-1 10-2 10-3 10-4 10-5

0

0.5

1

im=acceleration (g)

1.5

100

10-1 10-2 10-3 10-4 10-5

0

0.5

1

1.5

im=acceleration (g)

Fig. 6. (a) Mean occurrence rate, and probability of occurrence for (b) tdur = 1 year and (c) tdur = 60 years, for PGA (peak ground acceleration) and PSA (peak spectral acceleration). Legend is common for all plots.


The initial cost corresponds to the cost of the dampers, Cinit(u) = Cd(u), which is estimated based on their maximum force capacity Fud,i as Cd,i(u) = $80(Fud,i)^0.8. This simplified relationship comes from fitting a curve to the cost of some commercially available dampers. The lifetime cost corresponds to the present value of the losses from future earthquake excitations, which is given by:

$$C_{lif}(u,\theta) = L(u,\theta)\, \nu t_{life} \left[ \frac{1 - e^{-r_d t_{life}}}{r_d t_{life}} \right] \quad (23)$$

where rd is the discount rate (taken here as 2.5%), tlife is the lifetime of the structure (taken here as 60 years) and L(u, θ) is the expected cost given the occurrence of an earthquake event of interest and the pair [u, θ]. The term in brackets is the present worth factor, used to calculate the present value of the expected future earthquake losses [36]. The earthquake damage and loss are calculated assuming that after each event the building is quickly restored to its undamaged state.
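As a quick numerical check of Eq. (23) (the arithmetic here is our own illustration, not from the paper), the values adopted in this study give

$$\nu t_{life}\,\frac{1-e^{-r_d t_{life}}}{r_d t_{life}} = (0.25)(60)\,\frac{1-e^{-0.025 \cdot 60}}{0.025 \cdot 60} \approx 15 \times 0.518 \approx 7.77,$$

so the present value of the lifetime losses is roughly $C_{lif}(u,\theta) \approx 7.77\,L(u,\theta)$; discounting reduces the contribution of each future event of interest to about half of its nominal expected loss.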

Table 1
Characteristics of fragility functions and expected repair costs for each story.

  dk,j                     xm                  bm     nel       $/nel
  Structural components
  1                        1.4 δy,i            0.2    22        2000
  2                        (δy,i + δp,i)/2     0.35   22        9625
  3                        δp,i                0.4    22        18,200
  4                        δu,i                0.4    22        21,600
  5                        3%                  0.5    22        34,300
  Contents
  1 (Damage)               0.6 g               0.3    100       3000
  Partitions
  1 (Patch)                0.33%               0.2    500       180
  2 (Replace)              0.7%                0.25   500       800
  Acoustical ceiling
  1 (Damage)               1 g                 0.7    10³ m²    25
  Paint
  1 (Damage)               0.33%               0.2    3500 m²   25
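The fragility evaluation implied by Table 1 can be illustrated as follows (a sketch under our own assumptions: damage states are sequential, so state probabilities follow by differencing consecutive exceedance probabilities; function names are ours):

```python
import numpy as np
from scipy.stats import norm

def p_exceed(edp, x_m, b_m):
    # Log-normal fragility: P[DS >= k | EDP = edp], median x_m, log-std b_m
    return norm.cdf(np.log(edp / np.asarray(x_m)) / np.asarray(b_m))

def expected_repair_cost(edp, x_m, b_m, cost_per_el, n_el):
    p = p_exceed(edp, x_m, b_m)                # exceedance, states 1..K
    p_in_state = p - np.append(p[1:], 0.0)     # P[DS = k] by differencing
    return n_el * np.sum(p_in_state * np.asarray(cost_per_el))

# Example: the partitions assembly of Table 1 at 0.5% inter-story drift
cost = expected_repair_cost(0.005, [0.0033, 0.007], [0.2, 0.25], [180, 800], 500)
```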

Finally, the life-cycle cost is given by:

$$C(u) = C_d(u) + \int_{\Theta} L(u,\theta)\, \nu t_{life} \left[ \frac{1 - e^{-r_d t_{life}}}{r_d t_{life}} \right] p(\theta|u)\, d\theta \quad (24)$$

The earthquake losses are calculated according to the methodology described in Section 2.2. Losses because of (a) fatalities and (b) building downtime are ignored in this study; thus, only the repair cost due to damage is considered in evaluating L(u,θ). Each fragility function is a conditional cumulative log-normal distribution with median xm and logarithmic standard deviation bm, as presented in Table 1. This table also includes the expected cost per element, $/nel, where nel is the number of elements belonging to each damageable assembly in each direction of each floor. For the structural contents and the acoustical ceiling, the maximum story absolute acceleration is used as the EDP; for all other assemblies the maximum inter-story drift ratio is used. For estimating the total wall area requiring a fresh coat of paint, the simplified formula developed in [4] is adopted. According to this formula, a percentage of the undamaged wall area is also repainted, reflecting the desire of the owner to achieve a uniform appearance. This percentage depends on the extent of the damaged area and is chosen here based on a lognormal distribution with median 0.25 and logarithmic standard deviation 0.5.

The fragility curves adopted are similar to the ones selected in [4] for all damageable assemblies apart from the structural components. For the latter, the fragility curves have been chosen in the current study according to the characteristics of the backbone curve for the restoring force in each story. In this setting, a direct link is established between the fragility curves and the stiffness and strength characteristics of the corresponding structural model, considering their associated uncertainties. Fig. 7 illustrates this concept for the nominal values of the structural model parameters: Fig. 7a shows the fragility curves for each damage state of the structural components (light, moderate, significant, severe, collapse), with the characteristics reported in Table 1, and Fig. 7b indicates the location of the median of each damage state on the backbone curve for the story restoring force (expressed as a fraction of the ultimate force). The comparison between the two plots reveals the connection between the fragility of the structural components and the strength/stiffness characteristics of each story. Such a link of the damage states of the structural components to the strength and stiffness characteristics of the model has not been established in most other studies that use similar methodologies for estimating earthquake repair costs (for example, in [4]).

Fig. 7. (a) Fragility functions for structural components and (b) restoring force-drift relationship (based on nominal parameter values).

The expected repair cost for all damageable assemblies is selected based on the repair-cost characteristics reported in [18] and [4]. In particular, for the damageable structural components, the frame at each story is assumed to consist of beams and columns with possible repair methods {epoxy injection, jacketed repair, replacement} (depending on the extent of the damage) and associated mean repair costs {$8000, $22,500, $34,300}, respectively, as discussed in detail in [36]. For each of the damage states of the "structural components" damageable assembly reported in Table 1, the expected repair cost has been estimated assuming that a specific portion (chosen based on engineering judgment) of the beams and columns needs one of the three aforementioned repair methods. The chosen relationship between the damage state of the structural components and the percentage of the beams and columns that need each repair method is shown in Table 2.

Fig. 8 shows the mean cost for the drift-sensitive (Fig. 8a) and acceleration-sensitive (Fig. 8b) components of the structure. These two plots define the utility function (performance measure) used to evaluate the favorability of the dynamic response of the structure under a given ground motion excitation. Finally, Fig. 9 plots the expected repair cost for each of the drift-sensitive damageable assemblies. The acceleration-sensitive assemblies have a much smaller contribution to the total cost, as will be demonstrated later, and so are not presented here. The figure shows that the repair cost for the paint is more important at smaller drifts but quickly converges to a plateau. On the contrary, the repair cost for the structural components is low at smaller drifts but increases monotonically and becomes dominant at larger drifts. For intermediate drifts the repair cost for the partitions is relatively more important. Also, the maximum expected repair cost for the paint is significantly smaller than the repair cost for the other drift-sensitive components. These remarks illustrate that under smaller excitations the cost of repainting the damaged rooms is the major contributor to the total expected cost, primarily because of the assumption that the owner, desiring a uniform appearance, will repaint a greater wall area than the one actually damaged. For moderate excitations the total repair cost increases considerably, due primarily to the cost of replacing the damaged partitions and secondarily to repairing the structural components. The cost of the structural components dominates the total repair cost for large excitations, which lead to nonlinear structural behavior and thus to large inter-story drifts.

4.4. Design optimization

The maximum force capacities of the dampers in each story are the four design variables u = [Fud,i : i = 1,...,4]. The initial design space for each variable is set to [0, 13,000] kN for Fud,1 and Fud,2 and [0, 8000] kN for Fud,3 and Fud,4. For the SSO algorithm, the parameter selections are ρ = 0.2 and N = 2000. The shape of the admissible sets I is selected as a hyper-rectangle, and the adaptive identification is stopped when Ĥ(Îk) becomes larger than 0.85. The framework suggested in Section 3.3 is adopted. In just six iterations, SSO converges to a set that has small sensitivity (Ĥ(Îk) > 0.85) with respect to all components of the design vector.
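The SSO set identification is more elaborate than can be reproduced here (see [2,3] for the actual algorithm); purely as a loose caricature of one iteration, under an independence simplification of our own, the following sketch shrinks a hyper-rectangular set around samples of the design vector so that it contains roughly a fraction ρ of them within a small volume:

```python
import numpy as np

def shrink_hyper_rectangle(U, rho=0.2):
    # U: (N, n_u) array of design-vector samples distributed proportionally
    # to the objective integrand (the "augmented" samples used by SSO). Each
    # coordinate is shrunk to its shortest interval containing a fraction
    # rho**(1/n_u) of the samples, so the box holds roughly rho*N samples.
    N, n_u = U.shape
    m = int(np.ceil(rho ** (1.0 / n_u) * N))
    bounds = []
    for j in range(n_u):
        x = np.sort(U[:, j])
        widths = x[m - 1:] - x[:N - m + 1]    # widths of all m-sample windows
        i = int(np.argmin(widths))            # shortest interval with m samples
        bounds.append((x[i], x[i + m - 1]))
    return np.array(bounds)                   # (n_u, 2) lower/upper bounds
```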

Table 2
Relationship between damage states for damageable structural components and type of repair needed for the beams and columns.

  Damage state for the             Percentage of beams and columns of the frame that needs each type of repair
  "structural components"
  assembly                         No repair (%)   Epoxy injection (%)   Jacketed repair (%)   Replacement (%)
  1 (Light)                        80              20                    0                     0
  2 (Moderate)                     25              50                    25                    0
  3 (Significant)                  0               50                    25                    25
  4 (Severe)                       0               33.3                  33.3                  33.3
  5 (Collapse)                     0               0                     0                     100

Fig. 8. Cumulative expected repair cost as a function of the EDP for (a) drift-sensitive and (b) acceleration-sensitive assemblies (per story).

Fig. 9. Expected repair cost as a function of the EDP (drift) for the drift-sensitive assemblies (paint, partitions, structural components; per story).

[Fig. 10: (a) histograms of the initial samples of M and r together with the PDFs p(M) and p(r); (b) histograms of the samples from SSO together with the importance sampling PDFs pis(M) and pis(r).]

Fig. 10. Details about the formulation of the importance sampling densities.

A second optimization stage is then implemented to pinpoint the exact optimal solution u*; the algorithm selected for this stage is the simultaneous perturbation stochastic approximation (SPSA) described in Appendix C. Forty iterations were needed in the second stage of the framework, using a sample size of N = 1000 for each evaluation of the objective function (note that two evaluations are required per iteration). The results of the optimization are presented in Table 3, which reports the set identified at the end of the SSO stage, ISSO, its center uSSO, the optimal solution u*, and the value of the objective function at these two points. Following the discussion in Section 3.3, importance sampling densities are established for the second stage of the optimization for the structural model parameters and the seismological parameters, M and r, but not for the high-dimensional white-noise sequence. Fig. 10b illustrates this concept for M and r. A truncated lognormal distribution is selected as the IS PDF for M (with median 7 and logarithmic standard deviation 0.1) and a lognormal for r (with median 15 km and logarithmic standard deviation 0.4). Note that the IS PDF for M is significantly different from its initial distribution; since M is expected to have a strong influence on h(u,θ), this difference between the distributions is expected to have a big effect on the accuracy of the estimation. The respective difference between the PDFs for r is much smaller. For the structural model parameters this difference is negligible, and the IS PDFs were approximated as Gaussian distributions, like p(θs), with slightly shifted mean values but the same variance. These remarks show that the loss estimation is much more sensitive to the seismological parameters, that is, to the stochastic excitation characteristics, than to the structural model characteristics. Additionally, the distribution of the samples for the moment magnitude in Fig. 10b shows that earthquake events with magnitude smaller than 5.5 have a negligible contribution to the repair cost of the building (at least when u belongs to ISSO). With respect to the efficiency of the IS scheme discussed here, the coefficient of variation (c.o.v.) of the estimate Êθ,N(u, XN) for a sample size N = 1000 is 16% without IS and 4% when IS is used. Since this c.o.v. varies according to 1/√N [29], the sample size for direct estimation of the objective function with the same level of accuracy as when IS is applied would be approximately 16 times larger. This illustrates the efficiency increase that can be established by the IS scheme.

The results in Table 3 indicate that SSO efficiently identifies the set ISSO containing u* and leads to a significant reduction of the size of the search space; the mean reduction per design variable (last column of Table 3) is 85%. Additionally, the converged optimal solution of the second stage, u*, is close to the center uSSO of the set identified by SSO, and the objective function at that point, Eθ[h(uSSO,θ)], is not significantly different from the optimal value Eθ[h(u*,θ)]. Thus, selecting uSSO as the design choice leads to a sub-optimal design that is nevertheless close to the true optimum in terms of both the design vector and its corresponding performance. These characteristics, along with the small computational burden needed to converge to ISSO (only 6 × 2000 = 12,000 system simulations), illustrate the effectiveness and quality of the set identification in SSO. For the second optimization stage, 80,000 system analyses were needed to converge to the optimal solution when the combined framework was used; when SPSA was implemented without exploiting the information from the SSO stage, this number was 612,000. This comparison shows that the information from SSO (a better starting point for the algorithm, better normalization of the search space, and the ability to form importance sampling densities) leads to a significant reduction of the computational cost of the SPSA algorithm.
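A sketch (ours, not the authors' code) of the importance sampling estimator and the coefficient of variation used above to judge its efficiency:

```python
import numpy as np

def is_estimate(h_vals, p_vals, pis_vals):
    # h_vals: h(u, theta_i) for samples theta_i drawn from the IS density;
    # p_vals, pis_vals: target and IS density values at those samples.
    w = h_vals * p_vals / pis_vals              # importance weights
    est = np.mean(w)                            # estimate of E_theta[h(u, .)]
    cov = np.std(w, ddof=1) / (np.sqrt(len(w)) * est)  # c.o.v. of the estimate
    return est, cov
```

Since the c.o.v. of the estimator scales as 1/√N, the reduction from 16% to 4% reported above is indeed equivalent to the stated 16-fold saving in sample size.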

4.5. Efficiency of seismic retrofitting

The expected lifetime cost for the structure in each direction without the dampers is estimated as $1.1 million. The expected lifetime cost of the retrofitted system is $430,000, so the addition of the viscous dampers significantly improves the performance of the structural system. Of this amount, $267,000 corresponds to the cost of installing the viscous dampers and $163,000 to the present worth of the expected repair cost for damage from future earthquakes. The maximum force capacities of the dampers at each story under the optimal design are reported in Table 3.

Table 3
Cumulative results for the stochastic optimization.

          ISSO (kN)       uSSO (kN)   Êθ[h(uSSO,θ)]    u* (kN)   Êθ[h(u*,θ)]     (V_ISSO/V_U)^(1/nu)
  Fud,1   [5857, 6980]    6418                          6420
  Fud,2   [4539, 6045]    5292        0.438 × 10⁶ $     5195      0.430 × 10⁶ $   0.131
  Fud,3   [4085, 5517]    4801                          4481
  Fud,4   [1841, 2959]    2400                          2060


[Fig. 11 pie charts: (a) life-cycle cost for the structure without retrofitting (total $1,100,000: structural components 32%, partitions 36%, paint 29%, contents 2%, ceiling 1%); (b) life-cycle cost for the structure with dampers (total $430,000: dampers 62.1%, structural components 11.7%, paint 11.7%, partitions 10.7%, contents 1.1%, ceiling 0.4%); (c) repair cost for the structure with dampers (total $160,000: structural components 33%, paint 33%, partitions 30%, contents 3%, ceiling 1%).]

Fig. 11. Details about expected life-cycle cost.

They are also illustrated in Fig. 12. These values seem reasonable compared to current commercial applications of viscous dampers. Fig. 11 shows the decomposition of the expected life-cycle cost and the lifetime repair cost into their different components for both the initial structure and the retrofitted structure. Only minor changes occur in the distribution of the total cost over its different components. Note that the relative importance of the repair cost for acceleration-sensitive assemblies (for example, the building contents) increases with the addition of the dampers, as expected, but this cost remains small and practically negligible. The improvement in the performance of the structure is clearly evident in these charts. Also, the comparison between the plots in Fig. 11a and c, taking additionally into account the distribution of the repair cost in Fig. 9, clearly shows that for the retrofitted structure the small drift response accounts for the smaller total repair cost, since the relative importance of the repair cost becomes larger for the paint and smaller for the structural components.

Finally, Fig. 12 shows the decomposition of the repair cost and of the cost of the dampers over the four stories for both the initial structure and the retrofitted structure; the life-cycle cost of the retrofitted structure is also reported. For the initial structure the earthquake damage is slightly larger in the first story but is in general characterized by an almost uniform distribution along the height of the building. This is not the case for the retrofitted structure, where the earthquake damage is concentrated in the lower part of the structure, primarily the first story. If the additional cost of the dampers is considered, the distribution along the height becomes more uniform but is still higher for the lower part of the building. This may be attributed, ultimately, to the fact that the earthquake forces at the higher floors are smaller (smaller shear force); smaller dampers are thus able to alleviate the undesirable performance more easily and prevent yielding at these floors without reaching their ultimate force capacity, which would reduce their efficiency. Of course, this behavior is also connected to the distribution of nonlinear phenomena along the height of the building: nonlinear behavior is much stronger at the lower floors, and the energy dissipation provided by this hysteretic behavior reduces the forces acting on the higher floors of the retrofitted structure.

[Fig. 12: (a) for the structure without retrofitting, repair cost per story (top story to first story) $0.288 × 10⁶, $0.276 × 10⁶, $0.221 × 10⁶ and $0.315 × 10⁶; (b) for the structure with dampers, maximum damper force capacities (top story to first story) 2060, 4481, 5195 and 6420 kN, with damper cost [$0.035, $0.066, $0.075, $0.089] × 10⁶ and repair cost [$0.012, $0.028, $0.048, $0.075] × 10⁶ per story, giving story totals $0.047 × 10⁶, $0.095 × 10⁶, $0.123 × 10⁶ and $0.164 × 10⁶.]

Fig. 12. Distribution of life-cycle cost between different stories.

5. Conclusions

Robust stochastic performance-based design in earthquake engineering was discussed, with the life-cycle cost adopted as the performance objective. Attention was focused on the optimal design of passive dissipative devices for earthquake risk mitigation, although the methodologies discussed can be extended to other types of problems. A complete probabilistic framework was presented for detailed estimation and optimization of the life-cycle cost for such applications. In this framework, many common approximations in the life-cycle design process, involving system modeling, performance quantification, stochastic analysis and optimization, are avoided. Stochastic simulation was adopted for evaluating the performance of the system model. A highly efficient two-stage framework was presented for the challenging design optimization; it is based on the novel SSO algorithm, which can establish a global sensitivity analysis for the performance objective using a relatively small number of system analyses. The capabilities of the suggested simulation-based approach allow us to consider complex system, excitation and performance-evaluation models that incorporate, based on the available knowledge, all important characteristics of the true system and its environment into the design process. This was particularly important for the design application considered here, since most types of dissipative devices used


for earthquake applications have maximum capabilities that inherently make their behavior nonlinear under extreme loading conditions; their effect on the structural performance can be accurately described only in terms of nonlinear time-history responses. A comprehensive methodology was presented for estimating the repair cost due to damage from future excitations; this methodology uses the nonlinear response of the structure under a given seismic excitation to estimate the damage at a detailed, component level. A realistic probabilistic model was presented for describing the ground motion of future earthquake excitations. This model establishes a direct link between the probabilistic seismic hazard description of the structural site and the ground acceleration time history. An example was presented that considers the retrofitting of a four-story non-ductile reinforced-concrete office building with viscous dampers. The results illustrate the capabilities of the proposed framework for improving the structural behavior in a manner that is meaningful to its stakeholders (economic criteria); the life-cycle cost of the structure was significantly reduced by the suggested retrofitting scheme. These capabilities also extend to computational efficiency and to the treatment of complex analysis models, rather than resorting to simplified models for describing the system performance.

Appendix A. Spectrum and envelope function for ground motion time history

According to [23], the total amplitude spectrum A(f; M, r) for the acceleration time history is expressed as a product of the source, path and site contributions:

$$A(f;M,r) = (2\pi f)^2 S(f;M)\,\frac{1}{R}\,\exp[-\pi f R/(Q(f)\beta_s)]\,\frac{\exp(-\pi \kappa_0 f)}{\sqrt{1 + (f/f_{max})^8}}\,A_m \quad (25)$$

The equivalent two-corner point-source spectrum developed by [24] for California conditions is adopted for the source:

$$S(f;M) = C M_w \left[ \frac{1-e}{1+(f/f_a)^2} + \frac{e}{1+(f/f_b)^2} \right] \quad (26)$$

where Mw is the seismic moment (in dyn-cm), connected to the moment magnitude M by the relationship log10 Mw = 1.5(M + 10.7), and the constant C is given by C = RΘΦVF/(4πR0ρsβs³); RΘΦ = 0.55 is the average radiation pattern, V = 1/√2 represents the partition of the total shear-wave energy into horizontal components, F = 2 is the free-surface amplification, ρs = 2.8 g/cm³ and βs = 3.5 km/s are the density and shear-wave velocity in the vicinity of the source, and R0 = 1 km is a reference distance. The corner frequencies fa and fb in (26) are given by log10 fa = 2.181 − 0.496M and log10 fb = 2.41 − 0.408M, respectively, and e is a weighting parameter described by the expression log10 e = 0.605 − 0.255M. For the remaining factors in (25), the term 1/R is the geometric spreading factor, where R = [h² + r²]^(1/2) is the radial distance from the earthquake source to the site, with log10 h = 0.15M − 0.05 representing a moment-dependent, nominal "pseudo-depth". The term exp[−πfR/(Q(f)βs)] accounts for elastic attenuation through the earth's crust, with Q(f) = 180f^0.45 a regional quality factor. The quotient factor in (25) is related to near-surface attenuation, with parameters fmax = 10 Hz and κ0 = 0.03. Finally, Am is a near-surface amplification factor described through the empirical curves for generic rock sites given by [37]; an alternative approach suggested by [22], instead of using the empirical curves, is to set Am to an average constant value equal to 2.

The envelope function for the earthquake excitation is represented by [23]:

$$e(t;M,R) = a(t/t_n)^b \exp(-c(t/t_n)) \quad (27)$$

where b = −λ ln(η)/[1 + λ(ln(λ) − 1)], c = b/λ, a = [exp(1)/λ]^b and tn = 0.1R + 1/fa, with λ = 0.2 and η = 0.05.
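The following sketch collects the model in code (our own transcription of Eqs. (25)-(27); the unit-conversion constants of the source term are omitted, so only the spectral shape, not the absolute amplitude, is reproduced):

```python
import numpy as np

def corner_freqs(M):
    # Corner frequencies and weighting parameter of the two-corner source model
    fa = 10 ** (2.181 - 0.496 * M)
    fb = 10 ** (2.41 - 0.408 * M)
    eps = 10 ** (0.605 - 0.255 * M)
    return fa, fb, eps

def total_spectrum(f, M, r, Am=2.0):
    # Radiation spectrum of Eqs. (25)-(26); unit conversions omitted
    beta_s, rho_s, R0 = 3.5, 2.8, 1.0
    C = 0.55 * (1 / np.sqrt(2)) * 2.0 / (4 * np.pi * R0 * rho_s * beta_s**3)
    Mw = 10 ** (1.5 * (M + 10.7))              # seismic moment (dyn-cm)
    fa, fb, eps = corner_freqs(M)
    S = C * Mw * ((1 - eps) / (1 + (f / fa)**2) + eps / (1 + (f / fb)**2))
    h = 10 ** (0.15 * M - 0.05)                # pseudo-depth (km)
    R = np.sqrt(h**2 + r**2)
    path = np.exp(-np.pi * f * R / (180.0 * f**0.45 * beta_s)) / R
    site = np.exp(-np.pi * 0.03 * f) / np.sqrt(1 + (f / 10.0)**8)
    return (2 * np.pi * f)**2 * S * path * site * Am

def envelope(t, M, r, lam=0.2, eta=0.05):
    # Envelope of Eq. (27); normalized so its peak value is one
    fa, _, _ = corner_freqs(M)
    h = 10 ** (0.15 * M - 0.05)
    R = np.sqrt(h**2 + r**2)
    b = -lam * np.log(eta) / (1 + lam * (np.log(lam) - 1))
    c = b / lam
    a = (np.e / lam) ** b
    tn = 0.1 * R + 1 / fa
    return a * (t / tn)**b * np.exp(-c * (t / tn))
```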

Appendix B. Sampling techniques

Two algorithms that can be used for simulating samples from a target density proportional to h(u, θ)p(u, θ) [29] are discussed here. The first one is the accept-reject method, which belongs to the direct Monte Carlo methods. First, choose an appropriate proposal PDF f(u, θ) and then generate independent samples as follows:

(1) Randomly simulate a candidate sample [uc, θc] from f(u, θ) and u from uniform (0, 1).
(2) Accept [u, θ] = [uc, θc] if h(uc, θc)p(uc, θc)/[M f(uc, θc)] > u, where M > max over (u, θ) of h(u, θ)p(u, θ)/f(u, θ).
(3) Return to (1) otherwise.

The second one is the Metropolis–Hastings algorithm, which is a Markov Chain Monte Carlo method and is expressed through the iterative form:

(1) Randomly simulate a candidate sample [ũk+1, θ̃k+1] from a proposal PDF q(ũk+1, θ̃k+1 | uk, θk).
(2) Compute the acceptance ratio:

$$r_{k+1} = \frac{h(\tilde{u}_{k+1},\tilde{\theta}_{k+1})\, p(\tilde{u}_{k+1},\tilde{\theta}_{k+1})\, q(u_k,\theta_k \mid \tilde{u}_{k+1},\tilde{\theta}_{k+1})}{h(u_k,\theta_k)\, p(u_k,\theta_k)\, q(\tilde{u}_{k+1},\tilde{\theta}_{k+1} \mid u_k,\theta_k)}$$

(3) Simulate u from uniform (0, 1) and set [uk+1, θk+1] = [ũk+1, θ̃k+1] if rk+1 ≥ u, and [uk+1, θk+1] = [uk, θk] otherwise.

In this case the samples are correlated (each sample depends on the previous one) but follow the target distribution after a burn-in period, i.e. after the Markov chain reaches stationarity. The algorithm is particularly efficient when seed samples that follow the target distribution are already available: since the initial samples are then distributed according to the target, the Markov chain generated in this way is always in its stationary state and all simulated samples follow the target distribution. The efficiency of both sampling algorithms depends on the proposal PDFs f(u, θ) and q(ũ, θ̃ | u, θ). These PDFs should be chosen to closely resemble h(u, θ)p(u, θ) while still being easy to sample from. If the first feature is established, most samples are accepted and the efficiency of the algorithm is high (see [3,29] for more details). For problems with a high-dimensional vector θ, the uncertain parameter vector should be partitioned into two sets, Θ1 and Θ2: Θ1 comprises parameters that individually do not significantly influence the performance measure (they have significant influence only when viewed as a group), while Θ2 comprises parameters that individually have a strong influence on h(u, θ); the latter set typically corresponds to a low-dimensional vector. For the first set the proposal densities can be selected equal to the distribution p(θ|u); this circumvents the problem of having to select efficient proposal densities for all the model parameters (see [3,22] for more guidelines on selecting proposal densities).
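A toy one-dimensional demonstration of the accept-reject scheme above (entirely our own construction; the target, proposal and bound M are illustrative choices, with the target proportional to h(x)p(x) for h(x) = x² and p standard normal):

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(2)
h = lambda x: x**2                      # performance measure (toy)
p = lambda x: norm.pdf(x)               # prior PDF (standard normal)
f = lambda x: norm.pdf(x, scale=3.0)    # wider normal proposal PDF
M = 30.0   # conservative bound satisfying M > max h(x)p(x)/f(x) (~2.5 here)

def draw(n):
    samples = []
    while len(samples) < n:
        x = rng.normal(scale=3.0)       # candidate from the proposal
        if h(x) * p(x) / (M * f(x)) > rng.random():
            samples.append(x)           # accepted: follows target density
    return np.array(samples)
```

A larger M still yields correct samples but lowers the acceptance rate, which is why the proposal should resemble h(u, θ)p(u, θ) as closely as possible.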


Appendix C. Simultaneous perturbation stochastic approximation with common random numbers

The implementation of SPSA takes the iterative form:

$$u_{k+1} = u_k - \alpha_k g_k(u_k, X_N) \quad (28)$$

where u1 ∈ U is the point chosen to initiate the algorithm, and the jth component of the common random numbers (CRN) simultaneous-perturbation approximation to the gradient vector at the kth iteration, gk(uk, XN,k), is given by:

$$g_{k,j} = \frac{\hat{E}_{\theta,N}(u_k + c_k\Delta_k, X_{N,k}) - \hat{E}_{\theta,N}(u_k - c_k\Delta_k, X_{N,k})}{2 c_k \Delta_{k,j}} \quad (29)$$

where the same sample set XN,k (stream of "random numbers") is adopted in the simulations generating the two different estimates at the kth iteration, in order to create a consistent estimation error and thus reduce its relative importance. Δk ∈ R^nu in (29) is a vector of mutually independent random variables that defines the random direction of simultaneous perturbation for uk and that satisfies the statistical properties given in [28]; a symmetric Bernoulli ±1 distribution is typically chosen for the components of Δk. The selection of the sequences {ck} and {αk} is discussed in detail in [30]. A choice that guarantees asymptotic convergence to u* is αk = α/(k + w)^β and ck = c1/k^ζ, where 4ζ − β > 0 and 2β − 2ζ > 1, with w, ζ > 0 and 0 < β ≤ 1. Regarding the rest of the parameters of these sequences, w is typically set to 10% of the number of iterations selected for the algorithm, and the initial step c1 is chosen "close" to the standard deviation of the estimation error. Convergence of the iterative process is judged from the value of ||uk+1 − uk|| over the last few steps, for an appropriately selected vector norm. Blocking rules can also be applied in order to avoid potential divergence of the algorithm, especially in the first iterations (see [28] for more details). In [3], the following guidelines were suggested for tuning the SPSA parameters using information from SSO: u1 should be selected as the center of the set ISSO; parameter α can be chosen so that the initial step for each component of u is smaller than a certain fraction (chosen as 1/10) of the respective size of ISSO, based on the estimate of g1 from (29), which should be averaged over ng (chosen as 6) evaluations because of its importance for the efficiency of the algorithm; and no movement should be allowed in any direction that is greater than a quarter of the size of the respective dimension of ISSO (blocking rule).
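A schematic transcription of Eqs. (28)-(29) (a sketch under our own simplifications; obj_hat is a user-supplied placeholder for the CRN-based estimator Êθ,N(·, XN)):

```python
import numpy as np

def spsa(obj_hat, u1, n_iter=40, a=0.1, w=4.0, beta=0.9, c1=0.05, zeta=0.25,
         seed=3):
    # obj_hat(u, rng_seed) must return the estimate E_hat_{theta,N}(u, X_N)
    # computed with the sample stream fixed by rng_seed (common random
    # numbers); beta = 0.9, zeta = 0.25 are example values satisfying
    # 4*zeta - beta > 0 and 2*beta - 2*zeta > 1; w is ~10% of n_iter.
    rng = np.random.default_rng(seed)
    u = np.asarray(u1, dtype=float)
    for k in range(1, n_iter + 1):
        ak = a / (k + w) ** beta
        ck = c1 / k ** zeta
        delta = rng.choice([-1.0, 1.0], size=u.size)  # symmetric Bernoulli +-1
        s = int(rng.integers(1 << 31))                # same stream, both sides
        g = (obj_hat(u + ck * delta, s)
             - obj_hat(u - ck * delta, s)) / (2 * ck * delta)  # Eq. (29)
        u = u - ak * g                                # Eq. (28)
    return u
```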

References

[1] Jaynes ET. Probability theory: the logic of science. Cambridge, UK: Cambridge University Press; 2003.
[2] Taflanidis AA, Beck JL. An efficient framework for optimal robust stochastic system design using stochastic simulation. Comput Meth Appl Mech Eng 2008;198(1):88–101.
[3] Taflanidis AA. Stochastic system design and applications to stochastically robust structural control. Ph.D. Thesis in Civil Engineering, California Institute of Technology, Pasadena, CA; EERL Report 2007-5; 2007.
[4] Goulet CA, Haselton CB, Mitrani-Reiser J, Beck JL, Deierlein G, Porter KA, et al. Evaluation of the seismic performance of a code-conforming reinforced-concrete frame building – from seismic hazard to collapse safety and economic losses. Earthquake Eng Struct Dynamics 2007;36(13):1973–97.
[5] Enevoldsen I, Sorensen JD. Reliability-based optimization in structural engineering. Struct Safe 1994;15(3):169–96.
[6] Gasser M, Schueller GI. Reliability-based optimization of structural systems. Math Method Operat Res 1997;46:287–307.
[7] Park K-S, Koh HM, Hahm D. Integrated optimum design of viscoelastically damped structural systems. Eng Struct 2004;26:581–91.
[8] Kong JS, Frangopol DM. Life-cycle reliability-based maintenance cost optimization of deteriorating structures with emphasis on bridges. J Struct Eng 2003;129(6):818–28.
[9] Ang H-SA, Lee J-C. Cost optimal design of R/C buildings. Reliab Eng System Safe 2001;73:233–8.
[10] Liu M, Burns SA, Wen YK. Optimal seismic design of steel frame buildings based on life cycle cost considerations. Earthquake Eng Struct Dynamics 2003;32:1313–32.
[11] Fragiadakis M, Lagaros ND, Papadrakakis M. Performance-based multiobjective optimum design of steel structures considering life-cycle cost. Struct Multidiscip Optimiz 2006;32:1–11.
[12] Cox RT. Probability, frequency and reasonable expectation. Am J Phys 1946;14:1–13.
[13] Cox RT. Algebra of probable inference. Baltimore: The Johns Hopkins University Press; 1961.
[14] Beck JL, Katafygiotis LS. Updating models and their uncertainties. I: Bayesian statistical framework. J Eng Mech 1998;124(4):455–61.
[15] Porter KA, Beck JL, Shaikhutdinov RV. Sensitivity of building loss estimates to major uncertain variables. Earthquake Spectra 2002;18(4):719–43.
[16] Kircher CA, Whitman RV, Holmes WT. Hazus earthquake loss estimation methods. Nat Hazard 2006;7(2):45–59.
[17] Porter KA, Kiremidjian AS, LeGrue JS. Assembly-based vulnerability of buildings and its use in performance evaluation. Earthquake Spectra 2001;18(2):291–312.
[18] Mitrani J. An ounce of prevention: probabilistic loss estimation for performance-based earthquake engineering. Ph.D. Thesis in Applied Mechanics, California Institute of Technology, Pasadena, CA; EERL Report 2007-1; 2007.
[19] Jalayer F, Beck JL. Effects of two alternative representations of ground-motion uncertainty on probabilistic seismic demand assessment of structures. Earthquake Eng Struct Dynamics 2008;37(1):61–79.
[20] Baker JW, Cornell CA. A vector-valued ground motion intensity measure consisting of spectral acceleration and epsilon. Earthquake Eng Struct Dynamics 2005;34(10):1193–217.
[21] Bazzurro P, Cornell CA. Disaggregation of seismic hazard. Bullet Seismolog Soc Am 1999;89(2):501–20.
[22] Au SK, Beck JL. Subset simulation and its applications to seismic risk based on dynamic analysis. J Eng Mech – ASCE 2003;129(8):901–17.
[23] Boore DM. Simulation of ground motion using the stochastic method. Pure Appl Geophys 2003;160:635–76.
[24] Atkinson GM, Silva W. Stochastic modeling of California ground motions. Bullet Seismolog Soc Am 2000;90(2):255–74.
[25] Ruszczynski A, Shapiro A. Stochastic programming. New York: Elsevier; 2003.
[26] Polak E, Royset JO. Efficient sample size in stochastic nonlinear programming. J Comput Appl Math 2007;217(2):301–10.
[27] Calafiore GC, Dabbene F, editors. Probabilistic and randomized methods for design under uncertainty. Springer-Verlag; 2006.
[28] Spall JC. Introduction to stochastic search and optimization. New York: Wiley-Interscience; 2003.
[29] Robert CP, Casella G. Monte Carlo statistical methods. 2nd ed. New York, NY: Springer; 2004.
[30] Kleinmann NL, Spall JC, Naiman DC. Simulation-based optimization with stochastic approximation using common random numbers. Manage Sci 1999;45(11):1570–8.
[31] Au SK, Beck JL. A new adaptive importance sampling scheme. Struct Safe 1999;21:135–58.
[32] Pradlwarter HJ, Schueller GI, Koutsourelakis PS, Charmpis DC. Application of line sampling simulation method to reliability benchmark problems. Struct Safe 2007;29(3):208–21.
[33] Au SK, Beck JL. Importance sampling in high dimensions. Struct Safe 2001;25(2):139–63.
[34] Iwan WD, Cifuentes AO. A model for system identification of degrading structures. Int J Earthquake Eng Struct Dynamics 1986;14:877–90.
[35] Kramer SL. Geotechnical earthquake engineering. Prentice Hall; 2003.
[36] Porter KA, Beck JL, Shaikhutdinov RV, Au SK, Mizukoshi K, Miyamura M, et al. Effect of seismic risk on lifetime property values. Earthquake Spectra 2004;20:1211–37.
[37] Boore DM, Joyner WB. Site amplifications for generic rock sites. Bullet Seismolog Soc Am 1997;87:327–41.