The Reality of Computer Models: Statistics and Virtual Science
Topic
Computer models are imperfect representations of real phenomena. An austere view holds that a model cannot be validated at all: the "primary value of models is heuristic: models are representations, useful for guiding further study but not susceptible to proof." This view may have substantial basis when a model plays a purely scientific role, as distinct from its use in policy and engineering contexts. But the real validation issue, we contend, is not whether a model is absolutely correct or only a useful guide. Rather, it is to assess the degree to which the model is an effective surrogate for reality: does it provide predictions accurate enough for its intended use?
Incisive argument about the validity of models, viewed as an assessment of their utility, has previously been hampered by the lack of a structure in which quantitative evaluation of a model's performance can be carried out. This lack has given wide license to challenge computer model predictions (just what is the uncertainty in temperature predictions associated with increases in CO2?). A structure for validation should:
- Permit clear-cut statements of what performance is to be assessed and how;
- Account for uncertainties stemming from a multiplicity of sources, including field measurements and, especially, model inadequacies; and
- Recognize the confounding of calibration/tuning with model inadequacy: tuning can mask flaws in the model, and flaws in the model may lead to incorrect values for calibration parameters.
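One common way to make this confounding concrete (our illustrative notation, not necessarily the formulation used in the talk) is to write field observations as the computer-model output at the calibration parameters, plus a model-inadequacy (bias) term and measurement error:

```latex
% Illustrative notation (not from the abstract): y^F = field data,
% y^M = computer-model output, \theta = calibration parameters,
% b = model inadequacy (bias), \varepsilon = measurement error.
y^{F}(x) \;=\; y^{M}(x,\theta) + b(x) + \varepsilon,
\qquad \varepsilon \sim N(0,\sigma^{2}).
```

Because only the noisy sum of y^M(x, θ) and b(x) is observed, θ and b are not separately identifiable without prior information: a flexible bias term can absorb the effect of a wrong θ, and tuning θ can hide structural flaws that properly belong in b.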
We will describe such a structure (and applications). It is built on methods and concepts for the statistical design and analysis of virtual experiments, drawing on elements of Gaussian stochastic processes and Bayesian analysis.
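The abstract does not spell out the methodology, but as a rough illustration of the Gaussian-process ingredient, the minimal sketch below emulates a toy "expensive" simulator from a small designed set of runs and reports predictive means and uncertainties at untried inputs. The simulator, design points, and kernel settings are invented for illustration and are not taken from the talk.

```python
# Minimal sketch: a Gaussian-process emulator of an "expensive" simulator,
# the basic building block of design/analysis of virtual experiments.
# All specifics (toy simulator, design, kernel hyperparameters) are assumptions.
import numpy as np

def simulator(x):
    """Stand-in for an expensive computer model (hypothetical)."""
    return np.sin(3 * x) + 0.5 * x

def sq_exp_kernel(a, b, scale=1.0, length=0.4):
    """Squared-exponential covariance between 1-D input vectors a and b."""
    d = a[:, None] - b[None, :]
    return scale**2 * np.exp(-0.5 * (d / length) ** 2)

# Small space-filling design of simulator runs (the "virtual experiment").
x_design = np.linspace(0.0, 2.0, 8)
y_design = simulator(x_design)

# Condition the GP on the runs; a small nugget keeps the Cholesky stable.
K = sq_exp_kernel(x_design, x_design) + 1e-8 * np.eye(len(x_design))
L = np.linalg.cholesky(K)
alpha = np.linalg.solve(L.T, np.linalg.solve(L, y_design))

def emulate(x_new):
    """Predictive mean and standard deviation at untried inputs."""
    k_star = sq_exp_kernel(x_design, x_new)
    mean = k_star.T @ alpha
    v = np.linalg.solve(L, k_star)
    var = np.diag(sq_exp_kernel(x_new, x_new)) - np.sum(v**2, axis=0)
    return mean, np.sqrt(np.maximum(var, 0.0))

x_new = np.linspace(0.0, 2.0, 5)
mean, sd = emulate(x_new)
for xi, m, s in zip(x_new, mean, sd):
    print(f"x={xi:.2f}  emulator mean={m:+.3f}  sd={s:.3f}")
```

In practice the kernel hyperparameters would be estimated (for example by maximum likelihood or within a fully Bayesian analysis), and a second Gaussian process for the model-inadequacy term would link the emulator to field data, which is where the calibration/inadequacy confounding noted above arises.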
Event Type
Scientific, Seminar
Date
February 19, 2007