HS2.1.7 What is a «good» hydrological model for impact study?
Convener: Alexander Gelfan | Co-Conveners: Vazken Andréassian, Valentina Krysanova, Yury Motovilov
Attendance: Thu, 12 Apr, 17:30–19:00
George Box is credited with the quote that “All models are wrong, but some are useful”. Indeed, all hydrological models operating in a predictive mode are known to be problematic to some degree and to involve uncertainties. When hydrological models are used in impact studies, the problem is how to distinguish the useful models from the useless ones. It is unlikely that this distinction can be made a priori by comparing the models’ conceptualizations. Using a distributed model based on physical principles does not guarantee higher reliability of future projections compared to a less sophisticated model. Similarly, comparing model performance over the historical period is a questionable basis for a decisive judgement: good agreement between model output and past data is not per se a guarantee of reliable future projections; it merely indicates that the model is plausible.
We invite contributions from the hydrological and climatological communities. Specifically, contributions addressing the following issues are welcome:
• What are the grounds for the credibility of future hydrological projections?
• Is it possible to identify models better suited to operating under change, i.e. models with greater extrapolation capacity?
• Does good performance of a hydrological model in the historical period increase its credibility for future projections, and, if yes, what are the criteria of good performance?
• How should model calibration/validation be done if the model is intended for impact studies?
• When dealing with an ensemble of models, are some of them more plausible than others, and how can we judge this?
• Is there an optimal way of weighting model projections on the basis of their performance in the past?
• Are there essential model deficiencies that prevent a model’s application under changing conditions, and how can they be eliminated, if possible?
• How can we estimate the range of a model’s capabilities, and safeguard against the use of a model for tasks beyond that range?
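To make the question of performance-based weighting concrete, the sketch below shows one naive scheme among many possibilities: weights proportional to each ensemble member’s Nash–Sutcliffe efficiency over a historical record, with negative skill clipped to zero. The data, the clipping rule, and the weighting function are illustrative assumptions, not a method endorsed by the session.

```python
import numpy as np

def nse(obs, sim):
    """Nash-Sutcliffe efficiency: 1 = perfect fit, <= 0 = no better than the mean of obs."""
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def performance_weights(obs, sims):
    """Illustrative skill-based weights: clip negative NSE to zero, normalize to sum to 1."""
    skills = np.array([max(nse(obs, s), 0.0) for s in sims])
    if skills.sum() == 0.0:
        return np.full(len(sims), 1.0 / len(sims))  # fall back to equal weights
    return skills / skills.sum()

# Synthetic "observed" daily flows and three hypothetical model runs of varying quality
rng = np.random.default_rng(0)
obs = rng.gamma(2.0, 5.0, size=365)
sims = [obs + rng.normal(0.0, sigma, size=365) for sigma in (1.0, 3.0, 10.0)]

w = performance_weights(obs, sims)
# Combine the ensemble (here the same runs, purely for illustration) by the derived weights
combined = np.average(sims, axis=0, weights=w)
```

Under this scheme the poorest run receives little or no weight, which is exactly what the session questions probe: whether past skill is a defensible basis for such down-weighting of future projections.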