EGU24-17094, updated on 11 Mar 2024
https://doi.org/10.5194/egusphere-egu24-17094
EGU General Assembly 2024
© Author(s) 2024. This work is distributed under
the Creative Commons Attribution 4.0 License.

How can we improve the correctness and plausibility of our hydrological models?

Corina Hauffe, Diana Spieler, Clara Brandes, Sofie Pahner, and Niels Schütze
  • TU Dresden, Chair of Hydrology, Department of Environmental Science, Dresden, Germany (corina.hauffe@tu-dresden.de)

Using hydrological models is a common task for almost all hydrologists. Sometimes there is enough time to conduct a comparison study before selecting a model; often, we simply use the model we already know. But do we really know “our” model? Do we test all implemented processes and approaches prior to applying the model? Usually, we assume that models work correctly, and in doing so we rely heavily on the developers' willingness and ability to provide a mathematically and physically well-tested hydrological model.

We believe that more effort is needed to ensure the quality assurance of models, a topic that is still underdeveloped in hydrology. We argue that our models should pass a standardized quality test in which they prove their physical robustness and hydrological plausibility. The commonly used split-sample test (Klemeš, 1986), applied to an area of interest during model validation, may not be the best option for testing model quality. Attempts to increase standardization, transparency, and model quality have already been made, e.g. by introducing Good Modelling Practice (van Waveren et al., 1999) and the FAIR principles (Wilkinson et al., 2016).

Nevertheless, there is still much potential to improve the quality assurance of models. We suggest a framework consisting of (1) the usage of synthetic input data and catchment properties, (2) a standardized test scheme, and (3) a set of diagnostics to evaluate the model results. The current study focuses on the development of the test scheme, which includes global behaviour tests, robustness tests, and additional tests.
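To illustrate what such a test scheme could look like in practice, the sketch below runs a toy linear-reservoir bucket model against synthetic forcing and checks two global behaviour properties: non-negative, monotonically receding discharge under zero forcing, and water-balance closure under constant synthetic rainfall. The bucket model is a hypothetical stand-in for the model under test; its parameters and the specific checks are illustrative assumptions, not part of the study's framework.

```python
import numpy as np

def bucket_model(precip, pet, k=0.05, s0=100.0):
    """Toy linear-reservoir model: a hypothetical stand-in for the model under test."""
    storage, q = s0, []
    for p, e in zip(precip, pet):
        storage = max(storage + p - e, 0.0)  # add rain, subtract evaporation
        out = k * storage                    # linear outflow from storage
        storage -= out
        q.append(out)
    return np.array(q), storage

# Global behaviour test 1: zero forcing -> discharge must stay non-negative
# and recede monotonically (no unintended feedbacks generating water).
q_dry, _ = bucket_model(np.zeros(365), np.zeros(365))
assert np.all(q_dry >= 0.0), "negative discharge is physically implausible"
assert np.all(np.diff(q_dry) <= 0.0), "recession must be monotone without forcing"

# Global behaviour test 2: water-balance closure with synthetic constant rainfall.
# Input minus output must equal the change in storage.
precip = np.full(365, 2.0)
q_wet, s_end = bucket_model(precip, np.zeros(365))
residual = precip.sum() - q_wet.sum() - (s_end - 100.0)
assert abs(residual) < 1e-6, "water balance does not close"
```

A standardized scheme would collect many such synthetic-forcing checks (dry spells, rainfall pulses, extreme inputs) and apply them identically to every candidate model, so that failures point to implementation issues rather than catchment peculiarities.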

Applying these tests serves different purposes: (1) detecting model limitations, (2) finding unintended feedback processes, (3) identifying wrong or hydrologically implausible responses, and (4) revealing hidden or fixed parameters of a model. This kind of functional validation has already proved useful. A case study for the model ArcEGMO revealed several findings, e.g. fixed parameters, undocumented process implementations for lake evaporation, and an unintended model response in the calculation of groundwater recharge. Therefore, we believe that standardized tests would improve our model understanding, model usage, and trust in model results.


Klemeš: Operational testing of hydrological simulation models, Hydrological Sciences Journal, 31, 13–24, https://doi.org/10.1080/02626668609491024, 1986.

van Waveren et al.: Good Modelling Practice Handbook, Tech. report, Dutch Dept. of Public Works, Institute for Inland Water Management and Waste Water Treatment, https://www.researchgate.net/publication/233864541_Good_Modelling_Practice_Handbook, 1999.

Wilkinson et al.: The FAIR Guiding Principles for scientific data management and stewardship, Scientific Data, https://doi.org/10.1038/sdata.2016.18, 2016.

How to cite: Hauffe, C., Spieler, D., Brandes, C., Pahner, S., and Schütze, N.: How can we improve the correctness and plausibility of our hydrological models?, EGU General Assembly 2024, Vienna, Austria, 14–19 Apr 2024, EGU24-17094, https://doi.org/10.5194/egusphere-egu24-17094, 2024.