Nuclear Computational Science - A Century in Review

By: Yousry Azmy, Enrico Sartori

Springer-Verlag, 2010

ISBN: 9789048134113, 470 pages



"Chapter 6 Sensitivity and Uncertainty Analysis of Models and Data (p. 291-293)

Dan Gabriel Cacuci

6.1 Introduction

This chapter highlights the characteristic features of statistical and deterministic methods currently used for sensitivity and uncertainty analysis of measurements and computational models. The symbiotic linchpin between the objectives of uncertainty analysis and those of sensitivity analysis is provided by the “propagation of errors” equations, which combine parameter uncertainties with the sensitivities of responses (i.e., results of measurements and/or computations) to these parameters.
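As an illustrative sketch of the "propagation of errors" idea (not taken from the chapter itself), the first-order "sandwich rule" combines a vector of response sensitivities with a parameter covariance matrix to give an approximate response variance. All variable names and numbers below are assumed for illustration.

    import numpy as np

    # First-order propagation of errors ("sandwich rule"):
    #   var(R) ~= S @ C @ S
    # where S holds the sensitivities dR/d(alpha_i) evaluated at the nominal
    # parameter values and C is the parameter covariance matrix.
    # Numbers below are illustrative only.
    S = np.array([2.0, -0.5, 1.3])          # sensitivities of one response R
    C = np.array([[0.04, 0.01, 0.00],       # covariance of parameters alpha
                  [0.01, 0.09, 0.02],
                  [0.00, 0.02, 0.01]])

    var_R = S @ C @ S
    print("approximate response variance :", var_R)
    print("approximate response std. dev.:", np.sqrt(var_R))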

It is noted that all statistical uncertainty and sensitivity analysis methods first commence with the “uncertainty analysis” stage, and only subsequently proceed to the “sensitivity analysis” stage. This procedural path is the reverse of the procedural (and conceptual) path underlying the deterministic methods of sensitivity and uncertainty analysis, where the sensitivities are determined prior to using them for uncertainty analysis.
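A minimal sketch of the statistical path just described, using a toy model and parameter distributions assumed here for illustration (they are not from the chapter): the parameters are sampled and propagated first (the uncertainty analysis stage), and sensitivities are then estimated afterwards by regressing the sampled responses on the sampled parameters (the sensitivity analysis stage).

    import numpy as np

    rng = np.random.default_rng(0)

    # Toy model (illustrative assumption, not from the chapter).
    def model(alpha):
        return alpha[:, 0] ** 2 + 3.0 * alpha[:, 1] - 0.5 * alpha[:, 2]

    # Stage 1 - uncertainty analysis: sample the uncertain parameters and
    # propagate them through the model to obtain the response distribution.
    mean = np.array([1.0, 2.0, 0.5])
    std = np.array([0.1, 0.2, 0.05])
    alpha = rng.normal(mean, std, size=(10_000, 3))
    response = model(alpha)
    print("response mean / std. dev.:", response.mean(), response.std())

    # Stage 2 - sensitivity analysis: fit a linear surrogate to the sampled
    # (parameter, response) pairs; the regression coefficients approximate
    # the local sensitivities dR/d(alpha_i) near the nominal point.
    A = np.column_stack([np.ones(len(alpha)), alpha])
    coef, *_ = np.linalg.lstsq(A, response, rcond=None)
    print("regression-based sensitivities:", coef[1:])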

In particular, it is emphasized that the Adjoint Sensitivity Analysis Procedure (ASAP) is the most efficient method for computing exactly the local sensitivities for large-scale nonlinear problems comprising many parameters. This efficiency is underscored with illustrative examples. The computational resources required by the most popular statistical and deterministic methods are discussed comparatively. A brief discussion of unsolved fundamental problems, open for future research, concludes this chapter.
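To make the adjoint idea concrete, the sketch below uses a deliberately simple linear system rather than the full ASAP formalism of the chapter: for A x = b with scalar response R = c·x, a single adjoint solve of Aᵀλ = c yields the sensitivities of R to every entry of b at once, whereas the forward approach needs one additional solve per perturbed parameter. The system and numbers are assumed for illustration.

    import numpy as np

    # Toy linear model A x = b with scalar response R = c @ x.
    # (Illustrative numbers; not from the chapter.)
    A = np.array([[4.0, 1.0, 0.0],
                  [1.0, 3.0, 1.0],
                  [0.0, 1.0, 2.0]])
    b = np.array([1.0, 2.0, 0.5])
    c = np.array([1.0, 0.0, 1.0])

    x = np.linalg.solve(A, b)
    R = c @ x

    # Adjoint approach: one solve of the adjoint system A.T lam = c gives
    # dR/db_i = lam_i for all source parameters b_i simultaneously.
    lam = np.linalg.solve(A.T, c)
    print("adjoint sensitivities dR/db:", lam)

    # Forward approach for comparison: one additional solve per perturbed
    # parameter, which becomes expensive when there are many parameters.
    eps = 1.0e-6
    fd = []
    for i in range(len(b)):
        bp = b.copy()
        bp[i] += eps
        fd.append((c @ np.linalg.solve(A, bp) - R) / eps)
    print("finite-difference check    :", np.array(fd))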

6.2 Sensitivities and Uncertainties in Measurements and Computational Models: Basic Concepts

In practice, scientists and engineers often face questions such as: How well does the model under consideration represent the underlying physical phenomena? What confidence can one have that the numerical results produced by the model are correct? How far can the calculated results be extrapolated? How can the predictability and/or extrapolation limits be extended and/or improved?

Answers to such questions are provided by sensitivity and uncertainty analyses. As computer-assisted modeling and analyses of physical processes have continued to grow and diversify, sensitivity and uncertainty analyses have become indispensable investigative scientific tools in their own right. Since computers operate on mathematical models of physical reality, computed results must be compared to experimental measurements whenever possible.

Such comparisons, though, invariably reveal discrepancies between computed and measured results. The sources of such discrepancies are the inevitable errors and uncertainties in the experimental measurements and in the mathematical models. In practice, neither the exact forms of the mathematical models nor the exact values of the data are known, so both must be estimated. The use of observations to estimate the underlying features of models forms the objective of statistics.

This branch of mathematical science embodies both inductive and deductive reasoning, encompassing procedures for estimating parameters from incomplete knowledge and for refining prior knowledge by consistently incorporating additional information. Thus, assessing and, subsequently, reducing uncertainties in models and data require the combined use of statistics together with the axiomatic, frequency, and Bayesian interpretations of probability.
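As a hedged illustration of refining prior knowledge with new information, the sketch below uses a standard conjugate normal-normal Bayesian update of a single parameter; it is chosen for simplicity and is not drawn from the chapter, and all numbers are assumed.

    import numpy as np

    # Conjugate normal-normal Bayesian update of one model parameter alpha.
    # Prior knowledge: alpha ~ N(mu0, sigma0^2); a new measurement y of alpha
    # has known measurement std. dev. sigma_m.  (Numbers are illustrative.)
    mu0, sigma0 = 1.00, 0.20      # prior mean and std. dev.
    y, sigma_m = 1.15, 0.10       # measurement and its uncertainty

    w = sigma_m**2 / (sigma0**2 + sigma_m**2)
    mu_post = w * mu0 + (1.0 - w) * y
    sigma_post = np.sqrt(1.0 / (1.0 / sigma0**2 + 1.0 / sigma_m**2))

    print("posterior mean     :", mu_post)
    print("posterior std. dev.:", sigma_post)
    # The posterior std. dev. is smaller than both the prior and the
    # measurement uncertainty: the new information has refined the prior.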