EGU26-15141, updated on 14 Mar 2026
https://doi.org/10.5194/egusphere-egu26-15141
EGU General Assembly 2026
© Author(s) 2026. This work is distributed under
the Creative Commons Attribution 4.0 License.
Oral | Friday, 08 May, 14:15–14:25 (CEST)
Room 2.31
Disentangling pitfalls and (bad) choices in the hydrological modelling process and their impact on model performance, uncertainty and model choice
Thomas Wöhling1,2 and Alexander Bartusch1
  • 1TUD Dresden University of Technology, Chair of Hydrology, Dresden, Germany (thomas.woehling@tu-dresden.de)
  • 2Lincoln Agritech, Christchurch/Hamilton, New Zealand

The setup of hydrological models in accordance with good modelling practice guidelines involves several model-development steps that have to be performed both sequentially and iteratively. The model's purpose is decisive for the choices that need to be made along the way. Regardless of whether the task is to develop a predictive or an explanatory model, the modelling process typically involves a stage where one or several model candidates are selected, a stage of model calibration and uncertainty quantification, and a phase of model evaluation and diagnostics. Ideally, the process ends with a model that serves its purpose.
Explanatory models can be used to learn about (dominant) hydrological processes and thus require a certain level of process realism in the governing equations that represent the system under study. In practice, modellers form one or several hypotheses about “how the system works” and test these hypotheses by setting up corresponding model structures whose parameters are trained on data. Model choice then becomes a matter of model-data (mis)fit.
However, errors in the data and model inputs, uncertainty in parameter values, misspecified or missing processes, and scaling issues, among others, can and often do lead to parameter compensation during model calibration. Model ensembles typically cover only a fraction of the model space, i.e. the “population” of plausible model structures. Particularly when the “true” model (or a “realistic” one) is not included, model choice boils down to model flexibility or fidelity rather than plausibility. A further complication is that misspecification, combined with over-stated confidence in the data, can distort uncertainty estimates of parameters and predictions, potentially leading to over-confident and biased distributions. The relative contributions of different error sources to the total uncertainty are then also affected. Unfortunately, most of this goes unnoticed, even when good modelling practice guidelines are followed.
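Parameter compensation and the resulting over-confidence can be made concrete with a minimal toy sketch (all numbers and model forms below are illustrative assumptions, not the setup used in this study): a one-parameter linear model is calibrated against data whose forcing carries a systematic 20 % error, while all misfit is attributed to a known, small observation-noise level.

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical "true" system: y = a * x. The measured forcing fed to the
# model is systematically 20 % too high -- a typical input error.
a_true, sigma, factor = 2.0, 0.05, 1.2
x_true = np.linspace(0.1, 1.0, 50)
x_obs = factor * x_true                      # biased forcing used in calibration
y = a_true * x_true + rng.normal(0.0, sigma, x_true.size)

# Least-squares calibration of a through the origin, trusting x_obs and
# attributing all residuals to observation noise of known sigma:
a_hat = np.sum(x_obs * y) / np.sum(x_obs ** 2)
a_se = sigma / np.sqrt(np.sum(x_obs ** 2))   # nominal (noise-only) std. error

print(f"a_hat = {a_hat:.3f} +/- {a_se:.3f}  (true a = {a_true})")
# The parameter compensates for the input bias (a_hat is close to
# a_true / 1.2), and the nominal uncertainty is far too narrow to
# cover the true value:
print(f"bias in units of the reported std. error: {(a_true - a_hat) / a_se:.0f}")
```

The calibrated parameter absorbs the forcing error, and the reported uncertainty, derived under the assumption that the data error model is correct, excludes the true value by a wide margin, exactly the kind of over-confident, biased distribution described above.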
In this contribution we illustrate some of these potential pitfalls and bad choices in hydrological modelling with both a synthetic test case, where the “true model” exists, and an ensemble of candidate hydrological models and field data from the Forellenbach catchment in the Bavarian Forest National Park. We highlight and demonstrate the impact of misspecified priors, biased data and uncertain model forcings on model choice, and briefly discuss model fidelity vs. plausibility. Bayesian analysis is applied for model diagnosis and to disentangle error sources and their relative contributions to the total uncertainty. Most of these issues, either separately or combined, have been described in earlier modelling studies. We would like to raise awareness and encourage further discussion in the hydrological community on suitable and practical solutions for identifying and treating major uncertainty sources in hydrological modelling.
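The effect of biased data on Bayesian model choice can likewise be sketched with a toy synthetic case (again, the structures, prior ranges and bias magnitude below are illustrative assumptions, not the study's actual ensemble): two one-parameter model structures are compared via their marginal likelihoods, once with clean data and once with a constant data bias, while the likelihood keeps full confidence in a small, known noise level.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical synthetic "truth": a linear response y = a * x.
x = np.linspace(0.1, 1.0, 40)
a_true, sigma = 2.0, 0.05
noise = rng.normal(0.0, sigma, x.size)

def log_evidence(response, y, sigma, lo=0.0, hi=5.0, n=2001):
    """Marginal likelihood of a one-parameter model y = theta * response,
    with a uniform prior on theta over [lo, hi] and iid Gaussian noise of
    known sigma, integrated by brute force on a grid (log-sum-exp)."""
    thetas = np.linspace(lo, hi, n)
    resid = y[None, :] - thetas[:, None] * response[None, :]
    ll = (-0.5 * np.sum((resid / sigma) ** 2, axis=1)
          - y.size * np.log(sigma * np.sqrt(2.0 * np.pi)))
    m = ll.max()
    dtheta = thetas[1] - thetas[0]
    return m + np.log(np.sum(np.exp(ll - m)) * dtheta) - np.log(hi - lo)

def log_bayes_factor(y):
    """log evidence of M1 (theta * x, the true structure) minus that of
    M2 (theta * sqrt(x), a structurally wrong rival)."""
    return log_evidence(x, y, sigma) - log_evidence(np.sqrt(x), y, sigma)

bf_clean = log_bayes_factor(a_true * x + noise)          # unbiased data
bf_biased = log_bayes_factor(a_true * x + 0.15 + noise)  # constant data bias

print(f"log Bayes factor, clean data:  {bf_clean:8.1f}")
print(f"log Bayes factor, biased data: {bf_biased:8.1f}")
# Here the true structure still wins, but the bias erodes its support;
# with a stronger bias or a more flexible rival, the choice can flip.
```

The sketch mirrors the abstract's point: with an over-confident error model, a data bias silently shifts the evidence between structures, so the preferred model reflects data quality and flexibility as much as process plausibility.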

How to cite: Wöhling, T. and Bartusch, A.: Disentangling pitfalls and (bad) choices in the hydrological modelling process and their impact on model performance, uncertainty and model choice, EGU General Assembly 2026, Vienna, Austria, 3–8 May 2026, EGU26-15141, https://doi.org/10.5194/egusphere-egu26-15141, 2026.