Problem 20. Reducing uncertainty in model prediction: The role of model invalidation
- Lancaster, United Kingdom (k.beven@lancaster.ac.uk)
We would like to use models that are fit for a particular purpose in making predictions. Traditionally, models have been calibrated against historical data, and a sample of those calibrated models is then used in prediction. There has been very little consideration of the aleatory and epistemic uncertainties of the pertinent hydrological processes, and of just how those uncertainties might affect the way we assess models as hypotheses about how catchment systems work. We suggest that a more Popperian approach is required to assess when models should be considered NOT fit for purpose. Model invalidation is, after all, a good thing, in that it means we need to do better: some improvements are required, either to the data, to the auxiliary relations, or to the model structures being used. The question is what constitutes an appropriate methodology for such hypothesis testing when we KNOW there are epistemic uncertainties associated with the observations. We consider this issue for the case of flood hydrograph simulation using Dynamic Topmodel, making use of a strategy of limits of acceptability for model simulations set prior to making model runs.
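As a minimal sketch of the limits-of-acceptability idea (an illustration only, not the authors' Dynamic Topmodel implementation), a model run can be rejected when its simulated hydrograph falls outside observation-error bounds that were fixed before any runs were made. The function name, the ±20% error model, and the example values below are all assumptions for illustration:

```python
import numpy as np

def within_limits_of_acceptability(simulated, lower, upper, tolerance=0.0):
    """Return True if the simulation lies inside the prior limits of
    acceptability at a fraction (1 - tolerance) of the evaluation time steps.

    The bounds (lower, upper) are set BEFORE any model runs are made,
    consistent with the strategy described in the abstract.
    """
    simulated = np.asarray(simulated, dtype=float)
    inside = (simulated >= lower) & (simulated <= upper)
    return bool(inside.mean() >= 1.0 - tolerance)

# Hypothetical observed flood discharges and an assumed +/-20% error model
observed = np.array([1.0, 3.5, 8.2, 5.1, 2.0])
lower = observed * 0.8
upper = observed * 1.2

good_run = observed * 1.1   # stays within the limits -> retained
bad_run = observed * 1.5    # exceeds the upper limit everywhere -> invalidated

print(within_limits_of_acceptability(good_run, lower, upper))  # True
print(within_limits_of_acceptability(bad_run, lower, upper))   # False
```

Because the limits encode the (epistemic) observation uncertainty rather than a statistical likelihood, a model that fails this test is invalidated as a hypothesis and the search moves to better data, auxiliary relations, or model structures.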
How to cite: Beven, K., Kretzschmar, A., Smith, P., and Chappell, N.: Problem 20. Reducing uncertainty in model prediction: The role of model invalidation, IAHS-AISH Scientific Assembly 2022, Montpellier, France, 29 May–3 Jun 2022, IAHS2022-452, https://doi.org/10.5194/iahs2022-452, 2022.