Information about Bayes QASPR SAMSI

Published on March 4, 2008

Author: Gabrielle


Slide1: Bayesian Calibration of the QASPR Simulation
John McFarland, 3rd-year graduate student, Vanderbilt University
Advisor: Sankaran Mahadevan
Acknowledgements: Laura Swiler, Tony Giunta, Ramesh Rebba
(Reliability and Risk Engineering)

Slide2: Presentation Overview
- Bayesian calibration for one observation
- Calibration for observations from multiple scenarios
- Calibration based on multivariate response
- Preliminaries

Slide3: Terminology (mostly consistent with KOH)
- KOH: Kennedy and O'Hagan, and their work on Bayesian calibration
- Calibration: the use of experimental observations of response values to inform uncertain input values of a simulator
- Scenario descriptor inputs (x): e.g., boundary/initial conditions that go into the simulator and are usually observable for field experiments
- Calibration inputs (θ): all other simulator inputs; typically uncertain, constant across scenarios, and not observable in field experiments
- z: experimental observations of the response quantity
- y: available runs of the corresponding simulation
- η(x, θ): Gaussian process model of the simulator
- PCA: Principal Component Analysis

Slide4: Computational Approach
- Estimation of the Bayesian posterior: MCMC simulation (component-wise version of the Metropolis algorithm); 25,000 samples; Gaussian random-walk proposals, with variances adjusted for an optimal acceptance rate
- Gaussian process model: all governing parameters treated as fixed; parameters estimated using MLE, with gradients

Slide5: Gaussian Process Model Formulation
- Gaussian process and correlation function; for the QASPR data, a constant trend was used
- Predictions
- MLE to estimate the parameters: GLS (analytical) for the trend; gradient-based optimization for the correlation parameters
(Equations on slide.)

Slide6: Bayesian Calibration Model
- KOH: observations = GP of simulator + model inadequacy + observation error
- First, consider a simple formulation with one observation: the model-inadequacy term set to zero, and a single scenario x
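The constant-trend GP formulation of slide 5 can be sketched as follows. This is a minimal illustration under stated assumptions, not the QASPR code: the Gaussian (squared-exponential) form of the correlation function, the jitter term, and all variable names are assumptions, and the correlation parameters `xi` are taken as given rather than estimated by the gradient-based MLE described on the slides.

```python
import numpy as np

def corr_matrix(X1, X2, xi):
    # Gaussian correlation: R_ij = exp(-sum_k xi_k * (x1_ik - x2_jk)^2)
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2 * xi).sum(axis=2)
    return np.exp(-d2)

def gp_fit_predict(X, y, Xnew, xi):
    """Constant-trend GP predictor with fixed correlation parameters xi."""
    n = len(y)
    R = corr_matrix(X, X, xi) + 1e-10 * np.eye(n)     # jitter for stability
    Rinv = np.linalg.inv(R)
    ones = np.ones(n)
    beta = (ones @ Rinv @ y) / (ones @ Rinv @ ones)   # GLS estimate of the trend
    resid = y - beta
    sigma2 = (resid @ Rinv @ resid) / n               # MLE of the process variance
    r = corr_matrix(Xnew, X, xi)
    mean = beta + r @ Rinv @ resid                    # predictive mean
    var = sigma2 * (1.0 - np.sum(r * (r @ Rinv), axis=1))
    return mean, np.maximum(var, 0.0)
```

With fixed `xi`, the trend and process variance have the closed-form GLS/MLE expressions used above; at the training points the predictor interpolates the code runs, consistent with treating the GP as a surrogate for a deterministic simulator.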
Simple formulation (1 observation):

Slide7: Bayesian Calibration Model (1 observation)
- KOH define a joint likelihood for the observed code runs y and the experimental observations z
- However, if we think of η(x, θ) as a response-surface approximation, we can define just the likelihood for the experiments (equations on slide)

Slide8: QASPR Project Overview (courtesy of Tony Giunta)
- QASPR: Qualification Alternatives to the Sandia Pulsed Reactor
- Purpose: qualify future weapon components without fast-burst neutron test facilities (SNL will lose the unique SPR-III test reactor in Q4 FY06)
- Goal: predict, with quantified uncertainty, weapon subsystem response in threat radiation environments
- Approach: obtain relevant radiation-effects data from other test facilities (ion beams, electrons, gamma rays, slow neutrons, etc.); an "atoms-to-circuits" modeling and simulation activity across SNL organizations; quantify uncertainty in test data and simulation data, and validate the computer models
- (Diagram: threat radiation environment (neutrons, x-rays, γ-rays) → atomic-scale defects → device effects (transistor, diode, etc.) → circuit effects → QMU assessment)

Slide9: Application to the QASPR Simulation
- Response is gain (current ratio) vs. time
- Scenario variables (x): time and "Q1, Q2, Q3"
- Calibration inputs (θ): 12 variables
- Prior belief about θ: just bounds (uniform distributions)
- Data: 1 experiment each for Q1, Q2, Q3; 300 simulation runs each; response at 4 time instances
- Objective: what values of θ should be used in the simulator?
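Treating η as a response-surface approximation, the single-observation likelihood of slide 7 reduces to a Gaussian density for z centered at the surrogate prediction. A minimal sketch: the `surrogate` callable and the inclusion of the surrogate's predictive variance in the total variance are assumptions; z = 0.41 and σ_exp = 0.041 are the Q1 values quoted on slide 14.

```python
import numpy as np

def log_likelihood(theta, surrogate, z=0.41, sigma_exp=0.041):
    """Gaussian log-likelihood of one observation z given calibration
    inputs theta. 'surrogate' stands in for the GP eta(x, theta) and
    returns a predictive (mean, variance) pair."""
    mu, var = surrogate(theta)
    s2 = sigma_exp ** 2 + var          # observation error + surrogate uncertainty
    return -0.5 * np.log(2.0 * np.pi * s2) - 0.5 * (z - mu) ** 2 / s2
```

This log-likelihood, plus the uniform prior bounds on θ, is all the MCMC sampler needs to explore the posterior.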
("1D Code", device level)

Slide10: MLE values of the correlation parameters hint at their importance to the simulation (a.k.a. Automatic Relevance Determination)
- First standardize the original input data, then use MLE to estimate each correlation scale parameter
- Small ξ_i means the GP is insensitive to x_i
- Plots: Q1, response 1 (the "nominal case") and Q2, response 1
- Most important inputs: 2, 6, and 11
- Sensitivities depend on the scenario

Slide11: Response approximation for inputs 6 and 11 (approximated response and uncertainty; scenario: Q1)

Slide12: Response approximation for inputs 2 and 8 (approximated response and uncertainty; scenario: Q1)

Slide13: Response approximations and location of the experiment for Q1

Slide14: Calibration results for the Q1 scenario
- Posteriors shown for inputs 2, 6, 8, and 11
- z = 0.41, σ_exp = 0.041 (10%)
- Recall that the contour plots suggested high values for inputs 2, 6, and 11, and anything for input 8

Slide15: Calibrated parameters have complex structure
- Approximate 2D posterior probability densities (ρ = -0.26, ρ = -0.29)
- The full joint structure is captured by MCMC simulation
- Considering the marginals only gives very bad results
- Open question: how should we summarize the joint results? (joint confidence intervals? the mode?)

Slide16: What about other scenarios, Q2 and Q3?
- Based on the original runs, the simulator under-predicts for Q1, does well on average for Q2, and over-predicts for Q3
- Without a scenario-dependent bias term, joint calibration may not work well

Slide17: First, we calibrate separately for Q1, Q2, and Q3 to compare (posteriors for inputs 2, 6, 8, and 11 under each scenario)

Slide18: How can we use multiple observations for different scenarios?
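The automatic-relevance-determination idea on slide 10 can be illustrated by maximizing the concentrated GP likelihood over the per-input correlation parameters ξ: inputs that the response ignores receive small ξ. This sketch substitutes a crude grid search for the slides' gradient-based MLE; the toy data, the grid values, and the jitter term are assumptions.

```python
import numpy as np

def neg_log_lik(X, y, xi):
    # concentrated (profile) negative log-likelihood of a constant-trend GP
    n = len(y)
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2 * xi).sum(axis=2)
    R = np.exp(-d2) + 1e-6 * np.eye(n)
    Rinv = np.linalg.inv(R)
    ones = np.ones(n)
    beta = (ones @ Rinv @ y) / (ones @ Rinv @ ones)
    resid = y - beta
    sigma2 = (resid @ Rinv @ resid) / n
    _, logdet = np.linalg.slogdet(R)
    return 0.5 * (n * np.log(sigma2) + logdet)

def ard_grid_search(X, y, grid):
    # crude 2-input grid search, standing in for gradient-based MLE
    best, best_xi = np.inf, None
    for a in grid:
        for b in grid:
            xi = np.array([a, b])
            nll = neg_log_lik(X, y, xi)
            if nll < best:
                best, best_xi = nll, xi
    return best_xi
```

On data where only the first input matters (e.g. y = sin(2πx₀)), the search returns a larger ξ for input 0 than for input 1, matching the slide's reading that a small ξ_i flags an unimportant input.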
- First, I use the code runs y1 and y2 to make separate GP response approximations for each scenario
- This seems more appropriate than making one GP for the simulator of both scenarios, since we only have code runs for 2 discrete scenarios

Slide19: Can we calibrate simultaneously based on multiple scenarios, without a bias term?
- Possible for Q1 and Q2, but Q1 and Q3 are too different
- (Plots: Q1 and Q2 experiments and calibrated predictions)

Slide20: What about response measurements over time?
- The simulator bias does not change as much with time
- Measurements over time are not independent of each other
- No repeated experiments are available to estimate the correlations (scenario: Q1)

Slide21: How can we use multiple correlated observations?
- No repeated experiments, so how should we estimate Σ_exp?
- One possibility is to use the observed simulator runs to estimate the correlations
- Could construct a single multivariate GP, but my code currently cannot take advantage of Cartesian designs, so I instead consider a separate GP for each of the 4 time instances
- This under-estimates the uncertainty, compared to using a single GP whose covariance has nonzero off-diagonals

Slide22: What if there are many time instances?
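Calibrating simultaneously from multiple scenarios without a bias term, as attempted on slide 19, amounts to multiplying independent per-scenario likelihoods, each using its own GP surrogate. A minimal sketch under that independence assumption; the callables and argument names are placeholders, not the presentation's code.

```python
import numpy as np

def joint_log_lik(theta, surrogates, z_obs, sigma_exp):
    """Joint log-likelihood over scenarios with independent observation
    errors and one GP surrogate per scenario; no scenario-dependent
    bias term is included."""
    total = 0.0
    for surr, z, s in zip(surrogates, z_obs, sigma_exp):
        mu = surr(theta)   # surrogate predictive mean for this scenario
        total += -0.5 * np.log(2.0 * np.pi * s ** 2) - 0.5 * (z - mu) ** 2 / s ** 2
    return total
```

If the simulator's bias differs strongly between scenarios (as for Q1 vs. Q3), no single θ makes all terms large at once, which is why the joint calibration fails without a bias term.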
- It may be too cumbersome to build a separate GP for each time instance
- Further, the response may be correlated over time
- Could build one GP, taking advantage of the Cartesian design
- Could try a dimension reduction, such as PCA
- If Σ_exp has some small eigenvalues, PCA can be used to reduce the dimensionality of z
- Let A(k) contain the first k relevant eigenvectors of Σ_exp
- New likelihood: η′ contains only k surrogates, based on the similarly transformed code outputs; the original variables can then be recovered (equations on slide)

Slide23: Successfully applied the PCA reduction to the QASPR data
- Used one scenario (Q1), with the response measured at 4 times
- Used PCA to reduce the dimension to 2, so only 2 GPs were needed instead of 4
- (Plots: experiments and calibrated predictions)
- The savings could be substantial for highly multivariate output

Slide24: Summary
- Applied a Bayesian framework for calibration
- Treated the simulator GP as a surrogate model
- Did not include a "model inadequacy" function
- The application had high dimensionality (d = 12)
- Considered multiple scenarios and multivariate output
- Target graduation: ~May 2008
- Email:
- Future work: compare the performance of different calibration methodologies (with/without a bias term, etc.): ideas? Study the calibration of multi-level simulations (e.g., component / subsystem / system)

Slide25: Global sensitivities for the QASPR simulation

Slide26: Other plots

Slide27: Metropolis Algorithm Example
- Pick an initial point, θ0
- For i = 1 : NUM_SAMPLES: generate a candidate perturbation ξ; calculate the candidate θ*; calculate the acceptance ratio α; generate u ~ U[0, 1]; if u < α, set θ_{i+1} = θ*, otherwise θ_{i+1} = θ_i
- (Worked numerical example with stored samples, simulating the posterior from the coin example)
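The PCA reduction of slide 22 projects the observation vector onto the leading eigenvectors A(k) of Σ_exp, yielding k transformed observations that are uncorrelated with variances equal to the retained eigenvalues. A sketch, with variable names assumed:

```python
import numpy as np

def pca_reduce(z, Sigma_exp, k):
    """Project observations z onto the first k eigenvectors of Sigma_exp
    (the matrix A(k) of slide 22)."""
    vals, vecs = np.linalg.eigh(Sigma_exp)     # eigenvalues in ascending order
    order = np.argsort(vals)[::-1]             # re-sort descending
    A = vecs[:, order[:k]]                     # n x k leading eigenvectors
    return A.T @ z, vals[order[:k]], A

def pca_recover(z_red, A):
    # map the reduced variables back to the original observation space
    return A @ z_red
```

The code outputs are transformed with the same A(k) before fitting the k surrogates, so the likelihood can be written in the reduced space with a diagonal covariance; that is what cuts the 4 GPs down to 2 in the QASPR application.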
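The component-wise random-walk Metropolis sampler of slides 4 and 27 can be sketched as below. The step size, seed handling, and names are assumptions; the presentation tunes the proposal variances per component for an optimal acceptance rate rather than fixing a single step.

```python
import numpy as np

def metropolis(log_post, theta0, n_samples=25000, step=0.5, seed=0):
    """Component-wise random-walk Metropolis, following the pseudocode on
    slide 27: perturb one component at a time with a Gaussian proposal
    and accept with probability alpha = min(1, posterior ratio)."""
    rng = np.random.default_rng(seed)
    theta = np.array(theta0, dtype=float)
    lp = log_post(theta)
    samples = np.empty((n_samples, theta.size))
    for i in range(n_samples):
        for j in range(theta.size):
            cand = theta.copy()
            cand[j] += step * rng.standard_normal()   # Gaussian random walk
            lp_cand = log_post(cand)
            if np.log(rng.uniform()) < lp_cand - lp:  # accept/reject step
                theta, lp = cand, lp_cand
        samples[i] = theta                            # store after a full sweep
    return samples
```

Working with log-posterior differences avoids underflow, and updating one component per step is what makes the 12-dimensional QASPR calibration tractable with simple univariate proposals.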
