HS1.3.1 | Revisiting good modelling practices – where are we today and where to tomorrow?
Convener: Diana Spieler (ECS) | Co-conveners: Keirnan Fowler, Lieke Melsen (ECS), Wouter Knoben (ECS)
Orals | Wed, 26 Apr, 16:15–17:55 (CEST) | Room 2.15
Posters on site | Attendance Wed, 26 Apr, 14:00–15:45 (CEST) | Hall A
Posters virtual | Attendance Wed, 26 Apr, 14:00–15:45 (CEST) | vHall HS
Many papers have advised careful consideration of the approaches and methods we choose for our hydrological modelling studies, as they potentially affect our modelling results and conclusions. However, there is no common and consistently updated guidance on what good modelling practice is and how it has evolved since, e.g., Klemeš (1986), Refsgaard & Henriksen (2004) or Jakeman et al. (2006). In recent years several papers have proposed useful practices such as benchmarking (e.g. Seibert et al., 2018), controlled model comparison (e.g. Clark et al., 2011), careful selection of calibration periods (e.g. Motavita et al., 2019) and methods (e.g. Fowler et al., 2018), or testing the impact of subjective modelling decisions along the modelling chain (Melsen et al., 2019). However, despite their well-justified existence, none of the proposed methods has become quite as common and indispensable as the split-sample test (Klemeš, 1986) and its generalisation to cross-validation.
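The split-sample test mentioned above is simple enough to sketch in a few lines. The following is a minimal illustration, not any specific published implementation; `model_fit` and `model_run` are hypothetical callables standing in for a real calibration routine and hydrological model:

```python
import numpy as np

def nse(obs, sim):
    """Nash-Sutcliffe efficiency: 1 is perfect, 0 matches the mean benchmark."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)

def split_sample_test(flows, model_fit, model_run, split=0.5):
    """Klemes-style split-sample test: calibrate on one half of the record,
    evaluate on the other half, then swap the roles of the two halves."""
    k = int(len(flows) * split)
    first, second = flows[:k], flows[k:]
    scores = []
    for calib, valid in [(first, second), (second, first)]:
        params = model_fit(calib)        # calibrate on one period
        sim = model_run(params, valid)   # simulate the held-out period
        scores.append(nse(valid, sim))
    return scores                        # one NSE per validation period
```

Cross-validation generalises this by rotating the held-out period over more than two folds.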

This session intends to provide a platform for a visible and ongoing discussion on what ought to be the current standard(s) for an appropriate modelling protocol that considers uncertainty in all its facets and promotes transparency in the quest for robust and reliable results. We aim to bring together, highlight and foster work that develops, applies, or evaluates procedures for a trustworthy modelling workflow or that investigates good modelling practices for particular aspects of the workflow. We invite research that aims to improve the scientific basis of the entire modelling chain and puts good modelling practice in focus again. This might include (but is not limited to) contributions on:

(1) Benchmarking model results
(2) Developing robust calibration and evaluation frameworks
(3) Going beyond common metrics in assessing model performance and realism
(4) Conducting controlled model comparison studies
(5) Developing modelling protocols and/or reproducible workflows
(6) Examples of adopting the FAIR (Findable, Accessible, Interoperable and Reusable) principles in the modelling chain
(7) Investigating subjectivity along the modelling chain
(8) Uncertainty propagation along the modelling chain
(9) Communicating model results and their uncertainty to end users of model results
(10) Evaluating implications of model limitations and identifying priorities for future model development and data acquisition planning

Orals: Wed, 26 Apr | Room 2.15

Chairpersons: Diana Spieler, Wouter Knoben, Lieke Melsen
16:15–16:25 | EGU23-10230 | HS1.3.1 | solicited | On-site presentation
Anthony Jakeman, Sondoss Elsawah, and Serena Hamilton

Good modelling practice has many requirements. Above all, the process should be complete and transparent enough that the credibility of its conclusions can be comprehended, or even assessed, by its intended audience. And the more complex, uncertain and cross-sectoral the problem being modelled, or the more potentially devastating its consequences, the greater the need for good practice. Consequently, good modelling practice is essential in addressing not just climate change issues but also cross-sectoral issues such as those arising with water, energy, agriculture and the socio-economy. Yet despite widespread acknowledgment of the grand socio-environmental challenges facing the planet, practices as seen in the major literature largely remain meagre, and most often are pathetically inadequate.

The presentation begins with a list of specific technical complaints around poor practice, ones that could be easily remedied by modellers, to underline how unnecessary this state of affairs is. We argue for a suitable ontology of concepts for anchoring good modelling practice, including trustworthiness, assurance, robustness, reproducibility and credibility, along with fitness-for-purpose notions of usability, reliability and feasibility. We also emphasize the often-overlooked role of human factors in the modelling process, including assumptions and choices made by the modeller, and consider how the consequent biases or uncertainties can be reduced. We then synthesize the steps in the modelling process as recognized in the scientific and grey literature, and provide examples of checklists of questions that merit addressing at each step. Many of these questions prompt consideration of methodological choices, especially around uncertainty and scale. Good modelling practice warrants greater transparency in documenting, justifying and, wherever possible, comparing methodological choices and related assumptions. We argue that the level of robustness to these choices should be made clearer.

The modelling community must, however, address how to advance modelling so that good practice becomes not just well known but common practice. Instruments for achieving this are posited around: regulation by journals, in terms of the standards they require of relevant published papers; incentives for following good practice; an institutional/community culture built around it; and education and capacity building in modelling that treats good practice as fundamental from the start.

How to cite: Jakeman, A., Elsawah, S., and Hamilton, S.: Instrumenting good modelling practice as common practice, EGU General Assembly 2023, Vienna, Austria, 23–28 Apr 2023, EGU23-10230, https://doi.org/10.5194/egusphere-egu23-10230, 2023.

16:25–16:35 | EGU23-10527 | HS1.3.1 | On-site presentation
Martyn Clark, Wouter Knoben, Guoqiang Tang, Ashley van Beusekom, Louise Arnal, and Ray Spiteri

Many hydrological modelling groups face similar challenges, with untapped opportunities to share code and concepts across different model development groups. An active community of practice is emerging, where the focus is not so much on developing a community hydrological model and more on advancing the science and practice of community hydrological modelling. This presentation summarizes our recent efforts to develop open-source models, methods, and datasets that enable process-based hydrological prediction across large geographical domains, focusing on recent work to (1) develop multi-source probabilistic hydrometeorological forcing datasets on continental and global domains; (2) advance a flexible approach to represent a myriad of physical processes in a unified modelling framework; (3) improve the numerical robustness and efficiency of large-domain terrestrial system model simulations; and (4) develop extensible and reproducible modeling workflows. The presentation will highlight major scientific challenges, future research needs, and some key opportunities for community collaboration.

How to cite: Clark, M., Knoben, W., Tang, G., van Beusekom, A., Arnal, L., and Spiteri, R.: Improving the science and practice of hydrological modelling, EGU General Assembly 2023, Vienna, Austria, 23–28 Apr 2023, EGU23-10527, https://doi.org/10.5194/egusphere-egu23-10527, 2023.

16:35–16:45 | EGU23-8858 | HS1.3.1 | ECS | On-site presentation
Monica Morrison

For scientific products to be considered actionable for stakeholder purposes, they must meet certain epistemic and contextual conditions of adequacy. In evaluating the adequacy of a scientific product derived from Earth system models for actionable purposes, such as adaptation or resilience planning, there is a tendency to evaluate raw model output and subject it to post-processing to gain the desired reliability and fitness. However, this reductive approach and focus on data ignores the question of whether the simulations, model configurations, and representational features of the models are themselves adequate and reliable for the actionable applications. This talk will lay out the reasons why we need to shift our practices towards evaluating models and their products in a more holistic manner and will provide insight into a framework for doing so. Scientific models, in this case Earth system models, are constructed with certain purposes and research questions in mind. These purposes, and more detailed research questions, engender representational values, which are reflections of what we want to know and why we want to know it. When model development is informed by these representational values underlying our questions and purposes, they are determinants of the decisions made during model construction about what we choose to represent and how we choose to represent it. The consequence is that the models constructed reflect these representational values and occupy a representational perspective: one that is fit for answering the questions and purposes that governed its development, but not those questions and applications that lie outside that perspective.
To avoid increasing epistemic risk when using models for actionable purposes, which can result in downstream social harms, we need to assess the adequacy and reliability of our instruments and their products further upstream, in terms of consistency between the representational values that are embedded in the model in virtue of its development pathway and those that are implied in the actionable science questions the model could be applied to answer. More holistic, tailored assessments will allow us to avoid increases in epistemic risk due to how stakeholder representational values and conditions of adequacy can be inconsistent with those values being reflected in the representational content of the model being employed.

How to cite: Morrison, M.: Adequacy and Reliability of Earth System Models: Actionable Purposes, Model Inadequacy and Epistemic Risk, EGU General Assembly 2023, Vienna, Austria, 23–28 Apr 2023, EGU23-8858, https://doi.org/10.5194/egusphere-egu23-8858, 2023.

16:45–16:55 | EGU23-7800 | HS1.3.1 | On-site presentation
Cécile Ménard, Sirpa Rasmus, and Ioanna Merkouriadi

Historically, snow physics models were developed to forecast avalanches. Over the years, their application has broadened to hydrological, climatological, ecological and permafrost studies, to cite but a few. However, the structure of the mid-latitude mountain snowpacks upon which snow physics models are based (generally a deep snowpack, with snow denser at the bottom than at the top because of compaction) differs considerably from the structure of high-latitude snowpacks (generally a shallow snowpack, with dense wind-compacted snow at the top and large snow crystals at the bottom). This difference has been known for decades to be a potentially large source of uncertainty when simulating heat exchanges in the Arctic and Antarctic. Therefore, with Arctic warming having consequences for the global climate, why have snow physics modellers not yet developed a model with a high-latitude or "arctic" snowpack? Taking this question as a case study for understanding the role that subjective decisions play at every phase of model development, we interviewed more than twenty snow physics model users (e.g. ecologists, anthropologists, remote sensing and climate scientists) and developers to understand the following: what motivates model developments? What or who determines which parametrization, which process, is to be prioritised over others? What role does the research question play? What about funding or staff availability? We will show that positionality, anchoring bias and interpersonal relationships play far more prominent roles in the physical sciences than is commonly acknowledged, and we will draw lessons from the social sciences to increase transparency in our modelling practice.

How to cite: Ménard, C., Rasmus, S., and Merkouriadi, I.: What motivates model developments? A multi-perspective case study from snow physics models., EGU General Assembly 2023, Vienna, Austria, 23–28 Apr 2023, EGU23-7800, https://doi.org/10.5194/egusphere-egu23-7800, 2023.

16:55–17:05 | EGU23-556 | HS1.3.1 | ECS | On-site presentation
Janneke Remmers, Rozemarijn ter Horst, Ryan Teuling, and Lieke Melsen

The usage of hydrological models is diverse and omnipresent. For practical purposes, these models are applied to, for example, flood forecasting, water allocation, and climate change impact assessment. Numerous methods exist to execute any modelling study, and choosing a method creates a narrative behind each model result. This implies that models are not neutral. So, how do modellers make these decisions? We conducted fourteen semi-structured interviews between September and December 2021 with nine modellers from six different water authorities and five modellers from four different consultancy companies in the Netherlands. The interviews were all recorded and transcribed, and we performed an inductive content analysis on the transcriptions. We will discuss the motivations modellers have for the choices they make during the modelling process. With these insights, we aim to contribute to a discussion on how models, despite their unavoidable non-neutrality, can be robust and dependable in supporting decision making. Standardisation, e.g. automation, can be a way to achieve this. Understanding the social aspects behind the modelling process is necessary to move forward in modelling and modelling workflows, and to be able to share and reflect on model results, including the narratives behind them.

How to cite: Remmers, J., ter Horst, R., Teuling, R., and Melsen, L.: A Modeller’s Compass: How Modellers Navigate Dozens of Decisions, EGU General Assembly 2023, Vienna, Austria, 23–28 Apr 2023, EGU23-556, https://doi.org/10.5194/egusphere-egu23-556, 2023.

17:05–17:15 | EGU23-2008 | HS1.3.1 | On-site presentation
Ralf Merz, Arianna Miniussi, Stefano Basso, and Larisa Tarasova

Conceptual hydrological models are irreplaceable tools for large-scale (i.e., regional to global) hydrological predictions. Large-scale modeling studies typically strive to employ a single model structure regardless of the diversity of the catchments under study. However, little is known about the optimal model complexity for large-scale applications. In a modeling experiment across 700 catchments in the contiguous United States, we analyze the performance of a conceptual (bucket-style) distributed hydrological model with varying complexity (5 model versions with 11–45 parameters) but with exactly the same inputs, the same spatial and temporal resolution, and the same regional parameterization approach. The performance of all model versions compares well with that of contemporary large-scale models tested in the United States, suggesting that the applied model structures reasonably account for the dominant hydrological processes. Remarkably, our results favor a simpler model structure in which the main hydrological processes of runoff generation and routing through soil, groundwater, and the river network are conceptualized in distinct but parsimonious ways. As long as only observed runoff is used for model validation, including additional soil layers in the model structure to better represent vertical soil heterogeneity does not seem to improve model performance. More complex models tend to have lower performance and may result in rather large uncertainties in simulated states and fluxes (soil moisture and groundwater recharge) in model ensemble applications. Overall, our results indicate that simpler model structures tend to be the more reliable choice, given the limited validation data available at large scales.

How to cite: Merz, R., Miniussi, A., Basso, S., and Tarasova, L.: More Complex is Not Necessarily Better in Large-Scale Hydrological Modeling, EGU General Assembly 2023, Vienna, Austria, 23–28 Apr 2023, EGU23-2008, https://doi.org/10.5194/egusphere-egu23-2008, 2023.

17:15–17:25 | EGU23-15300 | HS1.3.1 | On-site presentation
Andrijana Todorović and Claudia Teutschbein

Various models are available to hydrologists, including models with different structures or spatial and temporal discretisations, as well as multiple parameter sets of a single model. But the "trustworthiness" of these models is called into question when they reproduce runoff equally well in the calibration period (equifinality) but diverge in their simulation outputs outside this period. A common way to account for modelling uncertainty is to use so-called ensembles that combine several model members. However, it has been argued that models that do not provide "the right answers for the right reasons" and, consequently, perform poorly in a prediction or forecasting mode should be omitted from such ensembles. Various evaluation protocols aimed at detecting such models have emerged over the years; however, this remains an open research question, and more research is needed, especially in the context of shifting hydrologic regimes in a changing climate.

Adopting consistency in model performance in reproducing runoff as an additional criterion for selecting among multiple models emerges as a plausible way to identify the most "trustworthy" ones. We propose an approach that relies on detailed analyses of model performance across subperiods of increasing length contained within the calibration period. Good performance in both short and longer subperiods is crucial, as the former can be quite extreme (e.g., extremely dry or wet), while the latter "expose" a model to various hydroclimatic conditions. To analyse the consistency in model performance, an efficiency measure (e.g., the Kling-Gupta efficiency, KGE) can be computed in each subperiod, and each model can be ranked in each subperiod according to this measure. Models yielding the most consistent and highest performance can then be selected either (1) as a certain percentage of models with the highest rank averaged across all subperiods, or (2) by imposing a rank threshold that has to be reached in every subperiod. We further propose to additionally evaluate the selected subset of consistent, high-performing models over an independent period using various other performance indicators (e.g., the Nash-Sutcliffe efficiency or volumetric efficiency) as well as the models' ability to reproduce hydrological signatures (e.g., mean, high and low flows, or runoff dynamics). The evaluation performance of the selected models can then be compared to that of the best (reference) model obtained from calibration over the full calibration period with the selected efficiency measure (here KGE) as the objective function.

To showcase the advantages of the proposed approach, it is applied to two different models (3DNet-Catch and GR4J), each with 20,000 randomly sampled parameter sets, in three unimpaired catchments. In addition to the promising results, the proposed approach is characterised by its ease of use and flexibility: it can be implemented with any ensemble of models (e.g., randomly selected parameter sets of a single model, or different models created, e.g., from a modular framework), or with any other aspect of model performance.
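The subperiod-ranking idea described in this abstract can be sketched in a few lines. This is our own illustrative reading, not the authors' code; the `keep_fraction` selection rule corresponds to selection option (1), and the KGE helper follows the standard correlation/variability/bias formulation:

```python
import numpy as np

def kge(obs, sim):
    """Kling-Gupta efficiency from correlation, variability and bias ratios."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    r = np.corrcoef(obs, sim)[0, 1]
    alpha = sim.std() / obs.std()   # variability ratio
    beta = sim.mean() / obs.mean()  # bias ratio
    return 1.0 - np.sqrt((r - 1) ** 2 + (alpha - 1) ** 2 + (beta - 1) ** 2)

def rank_by_consistency(obs, sims, n_subperiods=4, keep_fraction=0.25):
    """Score each candidate simulation with KGE in every subperiod, rank the
    candidates per subperiod, and keep those with the best mean rank."""
    obs_parts = np.array_split(np.asarray(obs, float), n_subperiods)
    sim_parts = [np.array_split(np.asarray(s, float), n_subperiods) for s in sims]
    # scores[i, j] = KGE of candidate i in subperiod j
    scores = np.array([[kge(o, sp[j]) for j, o in enumerate(obs_parts)]
                       for sp in sim_parts])
    # rank within each subperiod (0 = best KGE), then average across subperiods
    ranks = (-scores).argsort(axis=0).argsort(axis=0)
    mean_rank = ranks.mean(axis=1)
    n_keep = max(1, int(len(sims) * keep_fraction))
    return np.argsort(mean_rank)[:n_keep]  # indices of retained candidates
```

Selection option (2) would instead keep candidates whose rank stays below a threshold in every subperiod, i.e. `ranks.max(axis=1) <= threshold`.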

How to cite: Todorović, A. and Teutschbein, C.: Consistency in Model Performance as a Criterion for Trustworthy Hydrological Modelling, EGU General Assembly 2023, Vienna, Austria, 23–28 Apr 2023, EGU23-15300, https://doi.org/10.5194/egusphere-egu23-15300, 2023.

17:25–17:35 | EGU23-12261 | HS1.3.1 | ECS | On-site presentation
Martin Gauch, Frederik Kratzert, Oren Gilon, Hoshin Gupta, Juliane Mai, Grey Nearing, Bryan Tolson, Sepp Hochreiter, and Daniel Klotz

Everyone wants their hydrologic models to be as good as possible. But how do we know if a model is accurate or not? In the spirit of rigorous and reproducible science, the answer should be: we calculate metrics. Yet, as humans, we sometimes follow a scheme of "I know a good model when I see it" and manually inspect hydrographs to assess their quality. This is certainly a valid method for sanity checks, but it is unclear whether these subjective visual ratings agree with metric-based rankings. Moreover, the consistency of such inspections is unclear, as different observers might come to different conclusions about the same hydrographs.

In this presentation, we report a large-scale study in which we collected responses from 622 experts, who compared and judged more than 14,000 pairs of hydrographs from 13 different models. Our results show that, overall, human ratings broadly agree with quantitative metrics in a clear preference for a Machine Learning model. At the level of individuals, however, there is a large amount of inconsistency between ratings from different participants. Still, in cases where experts agree, we can predict their most likely rating purely from quantitative metrics. This indicates that we can encode intersubjective human preferences with a small set of objective, quantitative metrics. To us, these results make a compelling case for the community to put more trust in existing metrics, for example by conducting more rigorous benchmarking efforts.

How to cite: Gauch, M., Kratzert, F., Gilon, O., Gupta, H., Mai, J., Nearing, G., Tolson, B., Hochreiter, S., and Klotz, D.: Peeking Inside Hydrologists' Minds: Comparing Human Judgment and Quantitative Metrics of Hydrographs, EGU General Assembly 2023, Vienna, Austria, 23–28 Apr 2023, EGU23-12261, https://doi.org/10.5194/egusphere-egu23-12261, 2023.

17:35–17:45 | EGU23-15221 | HS1.3.1 | On-site presentation
Daniel Klotz, Martin Gauch, Grey Nearing, Sepp Hochreiter, and Frederik Kratzert

Skillful today, inept tomorrow. Today's hydrological models have pronounced and complex error dynamics (e.g., small, highly correlated errors for low flows and large, random errors for high flows). Modellers generally accept that simple, variance-based evaluation criteria, such as the Nash-Sutcliffe Efficiency (NSE), are not fully able to capture these intricacies. The (implied) consequences of this are, however, seldom discussed.

This contribution examines how evaluating the model over two data partitions (above and below a chosen threshold) relates to a global model evaluation of both partitions combined (i.e., the usual way of computing the NSE). For our experiments we manipulate dummy simulations with gradient descent to approximate specific NSE values for each partition individually. Specifically, we set the NSE for runoff values that fall below the threshold, and vary the NSE of the simulations above the threshold as well as the threshold itself. This enables us to study how the global NSE relates to the partition NSEs and the threshold. Intuitively, one would wish that the global NSE somehow reflects the performance on the partitions in a comprehensible manner. We do however show that this relation is not trivial.

Our results also show that subdividing the data and evaluating over the resulting partitions yields different information about model deficiencies than an overall evaluation does. The downside is that less data are available to estimate each NSE. In the future, this approach could be used for model selection and diagnostic purposes.
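The distinction between a global NSE and partition-wise NSEs can be made concrete with a short sketch (an illustrative reading of the setup, not the authors' experiment code):

```python
import numpy as np

def nse(obs, sim):
    """Nash-Sutcliffe efficiency of a simulation against observations."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)

def partitioned_nse(obs, sim, threshold):
    """NSE computed globally and separately on the low-flow and high-flow
    partitions defined by a runoff threshold on the observations."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    low, high = obs < threshold, obs >= threshold
    return {
        "global": nse(obs, sim),
        "below": nse(obs[low], sim[low]),
        "above": nse(obs[high], sim[high]),
    }
```

Because each partition's NSE uses that partition's own mean as the benchmark, the global NSE is not a simple average of the partition NSEs, which is one reason the relation between them is non-trivial.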

How to cite: Klotz, D., Gauch, M., Nearing, G., Hochreiter, S., and Kratzert, F.: The persistence of errors: How evaluating models over data partitions relates to a global evaluation, EGU General Assembly 2023, Vienna, Austria, 23–28 Apr 2023, EGU23-15221, https://doi.org/10.5194/egusphere-egu23-15221, 2023.

17:45–17:55 | EGU23-9710 | HS1.3.1 | ECS | On-site presentation
Ehsan Nabavi

The COVID-19 pandemic has shown the importance of modeling in guiding decision-making for governments and society, and the significant influence that modelers hold, especially during times of crisis. Water modelers may also encounter similar situations where their models are caught up in political debates, shaping people's everyday lives. 

This paper discusses the cultural and professional norms around water modeling practice that need to be established or revisited in order to make modeling work more responsible, through a review of models developed for COVID-19. It introduces six areas of study for "responsible water modeling" that can advance future theoretical and practical discussions on the topic: (1) building a common appreciation of the concept of responsibility, (2) interactions between science and policy, (3) the influence of boundary judgments on the model's outcome, (4) the politics of uncertainty, (5) stakeholder involvement, and (6) integration and coordination

The paper suggests that by focusing on these subjects, the fundamental principles and characteristics of responsible modeling can be established in order to address and respond to water challenges while also serving the public good.

How to cite: Nabavi, E.: Navigating Responsible Water Modeling in the Wake of COVID-19, EGU General Assembly 2023, Vienna, Austria, 23–28 Apr 2023, EGU23-9710, https://doi.org/10.5194/egusphere-egu23-9710, 2023.

Posters on site: Wed, 26 Apr, 14:00–15:45 | Hall A

Chairpersons: Diana Spieler, Wouter Knoben
A.11 | EGU23-968 | HS1.3.1
Juliane Mai, Hongren Shen, Bryan Tolson, Étienne Gaborit, Richard Arsenault, James Craig, Vincent Fortin, Lauren Fry, Martin Gauch, Daniel Klotz, Frederik Kratzert, Nicole O'Brien, Daniel Princz, Sinan Rasiya Koya, Tirthankar Roy, Frank Seglenieks, Narayan Shretha, Andre Guy Temgoua, Vincent Vionnet, and Jonathan Waddell

Model intercomparison studies are carried out to test and compare the simulated outputs of various model setups over the same study domain. The Great Lakes region is such a domain of high public interest, as it not only represents a challenging region to model, with its trans-boundary location, strong lake effects, and regions of strong human impact, but is also one of the most densely populated areas in the United States and Canada. This study brought together a wide range of researchers setting up their models of choice in a highly standardized experimental setup using the same geophysical datasets, forcings, common routing product, and locations of performance evaluation across the 1×10⁶ km² study domain. The study comprises 13 models covering a wide range of model types, from Machine Learning based to basin-wise, subbasin-based, and gridded models, that are either locally or globally calibrated, or calibrated for each of six predefined regions of the watershed. This study not only compares models regarding their capability to simulate streamflow (Q) but also evaluates the quality of simulated actual evapotranspiration (AET), surface soil moisture (SSM), and snow water equivalent (SWE).

The main results of this study are:

  • The comparison of models regarding streamflow reveals the superior quality of the Machine Learning based model in all experiments performed.
  • While the locally calibrated models lead to good performance in calibration and temporal validation, they lose performance when they are transferred to locations the model has not been calibrated on.
  • The regionally calibrated models exhibit low performances in highly regulated and urban areas as well as agricultural regions in the US.
  • Comparisons of additional model outputs against gridded reference datasets show that aggregating model outputs and the reference dataset to basin scale can lead to different conclusions than a comparison at the native grid scale.
  • A multi-objective-based analysis of the model performances across all variables reveals overall excellent performance for locally calibrated as well as regionally calibrated models.
  • Model outputs and observations produced and used in this study are available on an interactive website (www.hydrohub.org/mips_introduction.html#grip-gl) and on FRDR (http://www.frdr-dfdr.ca).

How to cite: Mai, J., Shen, H., Tolson, B., Gaborit, É., Arsenault, R., Craig, J., Fortin, V., Fry, L., Gauch, M., Klotz, D., Kratzert, F., O'Brien, N., Princz, D., Rasiya Koya, S., Roy, T., Seglenieks, F., Shretha, N., Temgoua, A. G., Vionnet, V., and Waddell, J.: The Great Lakes Runoff Intercomparison Project (GRIP-GL), EGU General Assembly 2023, Vienna, Austria, 23–28 Apr 2023, EGU23-968, https://doi.org/10.5194/egusphere-egu23-968, 2023.

A.12 | EGU23-11062 | HS1.3.1
Jens Kiesel, Nicola Fohrer, Paul D. Wagner, Marcelo Haas, and Björn Guse

Several hydrological studies have highlighted the need for consistency in hydrological modeling. Achieving model consistency requires that all relevant hydrological processes be evaluated for the accuracy of their spatio-temporal representation, taking available observations into consideration. In this study, we transfer the idea of hydrological consistency to water quality modeling. We focus on water quality modelling in rural mesoscale catchments and on the interaction with agricultural production systems and their management. Based on several studies, we have developed a guideline that covers the following six challenges:

  • Representation of rural landscape: Spatial and temporal patterns of land use and land management are critical to adequately represent water quality in models. Remote sensing and land use models are very useful resources to be exploited.
  • Accuracy in model structure and model parameters: The transfer of a model diagnostic analysis to water quality leads to a better understanding of how water quality variables are controlled by model structures and corresponding model parameters.
  • Check of multiple model output for consistency: Assessing multiple model outputs regarding their temporal, spatial and process performance using observed time series, remotely sensed spatial patterns, knowledge about transport pathways and even soft data can significantly enhance model consistency.
  • Joint multi-metric calibration of discharge and water quality for all magnitudes: Multi-metric calibration using performance metrics and signature measures both for discharge and water quality, such as flow and nitrate duration curve, leads to more balanced model simulations that represent all magnitudes of discharge and water quality accurately.
  • Scenarios and storylines for reliable land management: Scenarios and storylines should be co-developed with stakeholders in the river basin to increase realism and the acceptance of model results. They should be coherent in space and time, and provide a mix of available management options.
  • Consistent interpretation of impacts on water quality: The interpretation of scenarios can be supported by diagnostic tools to show the effectiveness of measures and their combinations while considering their costs and impacts on ecosystem services.

In our contribution, we give examples and further details regarding each challenge to provide insights into how to achieve consistency in water quality modelling.

How to cite: Kiesel, J., Fohrer, N., Wagner, P. D., Haas, M., and Guse, B.: A guideline for consistent water quality modeling in rural areas, EGU General Assembly 2023, Vienna, Austria, 23–28 Apr 2023, EGU23-11062, https://doi.org/10.5194/egusphere-egu23-11062, 2023.

A.13 | EGU23-8153 | HS1.3.1 | ECS
Mattia Neri and Elena Toth

The reliability of rainfall-runoff models in reproducing hydrological drought events is of primary importance for multiple applications (e.g. water resource management or agricultural risk assessment), especially in a context of expected future water scarcity. Typical model performance metrics are often not enough to assess the accuracy of drought simulations. In fact, it is necessary to consider drought-specific indices that take into account, e.g., low-flow characteristics, duration and deficit volumes, as well as their seasonality and timing. Understanding which hydrological processes are (or are not) adequately modeled, and why, with respect to such drought-specific performances makes it possible to assess the strengths and weaknesses of each model and may provide guidance on how to improve the model set-up and its reliability.

Through the application of a conceptual semi-distributed model on a set of Alpine basins, the aim of this preliminary work is to analyse the relationship between drought-specific performance metrics, basin characteristics and model parameters. In particular, the specific influence of the different model state variables (e.g. snow water equivalent, evapotranspiration and soil moisture) on the reproduction of drought events is investigated.

The model used is a semi-distributed modelling framework based on the airGR rainfall-runoff models (Coron et al. 2017), applied through the R package airGRiwrm (Dorchies 2022). The case study is a set of Alpine catchments characterised by a high degree of “nestedness”, which makes it possible to fully implement the semi-distributed model structure and to perform its diagnosis.

The major advantage of a semi-distributed model, if properly set up, is its ability to differentiate hydrological dynamics between the sub-catchments. In mountainous basins, for instance, simulating the upstream headwater sub-catchments separately may substantially improve the accuracy of the simulation of snow storage and melting, which strongly affect the occurrence and timing of drought events. For this reason, the work will also analyse the benefits of increasing the spatial resolution of the semi-distributed set-up, comparing the outcomes obtained when sequentially calibrating the model in a semi-distributed fashion on the upstream sub-catchments against the baseline of a lumped configuration.

 

References

Coron, L., Thirel, G., Delaigue, O., Perrin, C. and Andréassian, V. (2017). The Suite of Lumped GR Hydrological Models in an R package. Environmental Modelling and Software, 94, 166-171, doi: 10.1016/j.envsoft.2017.05.002.

Dorchies, D. (2022). airGRiwrm: 'airGR' Integrated Water Resource Management. R package version 0.6.1. https://CRAN.R-project.org/package=airGRiwrm

How to cite: Neri, M. and Toth, E.: On the accurate simulation of hydrological droughts in Alpine regions: investigating the multiple role of rainfall-runoff model dynamics and basin characteristics, EGU General Assembly 2023, Vienna, Austria, 23–28 Apr 2023, EGU23-8153, https://doi.org/10.5194/egusphere-egu23-8153, 2023.

A.14 | EGU23-579 | HS1.3.1 | ECS
Cilcia Kusumastuti, Rajeshwar Mehrotra, and Ashish Sharma

Most systematic bias correction approaches, which are developed based on the biases in the statistical properties of interest, perform well in correcting current-climate simulations with respect to observations. However, the value of applying systematic bias correction to the raw output of climate model simulations for future periods remains debated, because no future climate observations are available to validate the approach.

The output of a recent ultra-high-resolution climate model simulation, UHR-CESM, shows the best performance in simulating the variability of sea surface temperature (SST) in the tropical Pacific, with the exception of a small bias in the mean. This encouraged us to use the outputs of this model to represent the truth in both current and future climates. We use the model output in response to the current-climate CO2 concentration as representative of the current climate, while the outputs in response to doubled and quadrupled CO2 concentrations are used as representatives of the true future climates.

We bias correct monthly SST simulations for eight Coupled Model Intercomparison Project phase 6 (CMIP6) models over the Niño 3.4 region, run with the same CO2 concentrations as our reference model, using a novel time-frequency continuous wavelet-based bias correction (CWBC). The results show a nearly perfect correction of distributional, trend, and spectral biases in the eight climate model simulations in the current climate, and a consistent reduction of the biases in the simulations in response to doubled CO2 concentration. Although the overall quality of the statistical attributes still improves after bias correction under the more extreme quadrupled CO2 concentration, a degradation in the spectral attributes is observed. This shows that a systematic bias correction approach has an upper limit. Therefore, while the application of bias correction is recommended prior to further use of raw climate model simulations, the extent to which future climate simulations can be reliably bias corrected should be considered carefully.
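
For context, the sketch below shows classical empirical quantile mapping, a simpler, distribution-only correction than the wavelet-based CWBC used here (the code is illustrative and is not the authors' method): it corrects the distribution of a future simulation through the historical model-observation quantile relation, but, unlike CWBC, it cannot correct spectral attributes.

```python
import numpy as np

def quantile_map(model_hist, obs_hist, model_fut):
    """Empirical quantile mapping: map each future model value to the observed
    value at the same quantile of the historical distributions."""
    quantiles = np.linspace(0, 100, 101)
    m_q = np.percentile(model_hist, quantiles)  # model historical quantiles
    o_q = np.percentile(obs_hist, quantiles)    # observed historical quantiles
    # Interpolate future values through the historical quantile relation
    return np.interp(model_fut, m_q, o_q)
```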

How to cite: Kusumastuti, C., Mehrotra, R., and Sharma, A.: Is there an upper extent to systematic bias correction of climate model simulations? Application to low-frequency variability within the Niño3.4 region, EGU General Assembly 2023, Vienna, Austria, 23–28 Apr 2023, EGU23-579, https://doi.org/10.5194/egusphere-egu23-579, 2023.

A.15 | EGU23-5702 | HS1.3.1
Rolf Hut, Jerom Aerts, Pau Wiersma, Vincent Hoogelander, Nick van de Giesen, Niels Drost, Peter Kalverla, Ben van Werkhoven, Stefan Verhoeven, Fakhereh (Sarah) Alidoost, Barbara Vreede, and Yang Liu

The eWaterCycle platform, introduced in 2022 (https://doi.org/10.5194/gmd-15-5371-2022), provides hydrologists with an online platform for conducting numerical studies involving hydrological models. It allows hydrologists to work with each other's models and datasets directly from a web browser. The workflow of an experiment is clearly visible, reproducible, and easily adaptable because eWaterCycle separates the model (the algorithm) from the experiment performed with it. eWaterCycle is designed such that research conducted on the platform is ‘FAIR by design’. Using eWaterCycle, studies can be carried out in less time, more transparently, and by more junior members of the hydrological community than was possible a few years ago.
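
The separation of model and experiment can be illustrated with a Basic Model Interface (BMI)-style contract, which eWaterCycle uses to wrap models; the toy reservoir and the simplified method signatures below are assumptions for illustration, not eWaterCycle's actual API (real BMI, for instance, passes forcing via set_value rather than as an update argument).

```python
class LinearReservoir:
    """Toy model exposing a minimal BMI-like interface; any model with the
    same methods can be swapped into the experiment unchanged."""
    def initialize(self, storage=10.0, k=0.1):
        self.storage, self.k = storage, k
    def update(self, precipitation):
        discharge = self.k * self.storage
        self.storage += precipitation - discharge
        return discharge
    def get_value(self, name):
        return getattr(self, name)
    def finalize(self):
        pass

def run_experiment(model, forcing):
    """The experiment only sees the interface, never the model internals,
    so the same experiment script runs against any wrapped model."""
    model.initialize()
    hydrograph = [model.update(p) for p in forcing]
    model.finalize()
    return hydrograph
```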

In this presentation, we will explain the capabilities of the eWaterCycle platform and demonstrate them by describing recent (published) work of MSc and PhD members of our team, including a model coupling study, a large-sample hydrology study, and a climate impact assessment study.

How to cite: Hut, R., Aerts, J., Wiersma, P., Hoogelander, V., van de Giesen, N., Drost, N., Kalverla, P., van Werkhoven, B., Verhoeven, S., Alidoost, F. (., Vreede, B., and Liu, Y.: The eWaterCycle platform for open and FAIR hydrological collaboration, EGU General Assembly 2023, Vienna, Austria, 23–28 Apr 2023, EGU23-5702, https://doi.org/10.5194/egusphere-egu23-5702, 2023.

A.16 | EGU23-13770 | HS1.3.1
Hélène Boisgontier, Dirk Eilander, Laurène Bouaziz, Joost Buitink, Anaïs Couasnon, Roel de Goede, Mark Hegnauer, Tim Leijnse, and Willem van Verseveld

Hydrological models are crucial for understanding water systems and performing impact assessment studies. However, these models require large amounts of accurate data, especially if the model is spatially distributed. Sufficiently accurate datasets, while available (for example from Earth observations), need to be converted into model-specific, sometimes idiosyncratic, file formats. Hydrological models therefore require various steps to process raw input data into model data which, if done manually, make the process time consuming and hard to reproduce. Hence, there is a clear need for automated model instance setup to increase transparency and reproducibility in hydrological modeling.

 

HydroMT (Hydro Model Tools) is an open-source Python package (https://github.com/Deltares/hydromt) that aims to make the process of building hydrological model instances and analyzing their results automated and reproducible. Compared to many other packages for automated model instance setup, HydroMT is data- and model-agnostic: data sources can easily be interchanged without additional coding, and the generic model interface can be used for different model software. This makes it possible to reuse workflows to prepare input from different datasets, or for different model software that require the same parameter (e.g. Manning roughness derived from land use maps), thereby supporting controlled model intercomparison and sensitivity experiments.
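
The Manning roughness example can be sketched as a data-agnostic reclassification step: swapping the land-use data source only means swapping the lookup table, while the workflow itself is reused. The class codes and roughness values below are hypothetical, and this is not HydroMT's actual API.

```python
import numpy as np

# Hypothetical reclassification table: land-use class code -> Manning's n
MANNING_BY_LANDUSE = {10: 0.035, 20: 0.10, 30: 0.15, 40: 0.05}

def manning_from_landuse(landuse_grid, table):
    """Reclassify a categorical land-use grid into a Manning roughness grid;
    classes missing from the table stay NaN so gaps are easy to spot."""
    roughness = np.full(landuse_grid.shape, np.nan)
    for lu_class, n_value in table.items():
        roughness[landuse_grid == lu_class] = n_value
    return roughness
```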

 

In this contribution we show the application of HydroMT to flood hazard modeling using the distributed hydrological Wflow model and the reduced-physics hydrodynamic SFINCS model, both open source. We use HydroMT to set up a controlled and reproducible model experiment. We test the sensitivity of both models to the various data sources used and assumptions made in the model instance building process, and compare their skill in simulating peak discharge. Based on this application, we discuss the merits and limitations of HydroMT and the next steps toward FAIR hydrological modeling.

How to cite: Boisgontier, H., Eilander, D., Bouaziz, L., Buitink, J., Couasnon, A., de Goede, R., Hegnauer, M., Leijnse, T., and van Verseveld, W.: Towards FAIR hydrological modeling with HydroMT, EGU General Assembly 2023, Vienna, Austria, 23–28 Apr 2023, EGU23-13770, https://doi.org/10.5194/egusphere-egu23-13770, 2023.

A.17 | EGU23-8553 | HS1.3.1 | ECS
Alexander Dolich, Mirko Mälicke, Ashish Manoj J, Jan Wienhöfer, and Erwin Zehe

The virtual research environment V-FOR-WaTer provides functionalities to store and access hydrological and other environmental data from various sources and disciplines. We propose a framework for running containerized tools within the V-FOR-WaTer toolbox, intended to solve the problem of combining software or scripts developed in different programming languages.

The framework is used to manage Docker containers, which can contain software such as tools for data analysis or environmental modeling. Alongside the well-known advantages of containerization, such as development speed and efficiency, isolation from the local system, dependency management, and portability, the use of containers also ensures a high degree of reproducibility.

In a scientific context, containers are especially useful for combining scripts written in different languages and following different development paradigms. To this end, we developed a framework-agnostic container specification that standardizes inputs and outputs from and to containers to ease the development of new tools. We currently provide templates for tools developed in Python, R, Octave, and NodeJS.
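
A tool following such a specification could be invoked roughly as below; the image name and the /in and /out mount-point convention are hypothetical placeholders, not the actual V-FOR-WaTer specification.

```python
import subprocess
from pathlib import Path

def run_tool(image, in_dir, out_dir, execute=False):
    """Build (and optionally run) a docker command that mounts standardized
    input and output directories into a containerized tool."""
    cmd = [
        "docker", "run", "--rm",
        "-v", f"{Path(in_dir).resolve()}:/in:ro",  # read-only input data
        "-v", f"{Path(out_dir).resolve()}:/out",   # results written here
        image,
    ]
    if execute:
        subprocess.run(cmd, check=True)
    return cmd
```

Because every tool reads from /in and writes to /out, a workflow engine can chain tools written in any language without knowing their internals.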

We present an exemplary workflow for the CATFLOW hydrological model. Data from the V-FOR-WaTer environment are loaded using a Python tool and preprocessed with an existing R tool. After running the Fortran model, existing tools in Python, R, and MATLAB are used for the analysis and presentation of results. When executing the workflow, the user does not need to be familiar with the different programming languages of the individual tools, since the containerized tools are self-contained by definition.

How to cite: Dolich, A., Mälicke, M., Manoj J, A., Wienhöfer, J., and Zehe, E.: Using Docker in environmental research, EGU General Assembly 2023, Vienna, Austria, 23–28 Apr 2023, EGU23-8553, https://doi.org/10.5194/egusphere-egu23-8553, 2023.

Posters virtual: Wed, 26 Apr, 14:00–15:45 | vHall HS

Chairpersons: Lieke Melsen, Keirnan Fowler
vHS.3 | EGU23-904 | HS1.3.1 | ECS
Lieke Melsen

Hydrological models play a key role in contemporary hydrological research. For this study, more than 400 scientific hydrological vacancies were analyzed to evaluate whether the job description already prescribed which model must be used, and whether experience with a specific model was an asset. Of the analysed positions, 76% involved at least some modelling. Of the PhD positions that involved any modelling, the model was already prescribed in the vacancy text in 17% of the cases; for postdoc positions this was 30%. A small questionnaire revealed that, beyond the vacancies in which the model is prescribed, the model to be used is also pre-determined in many Early-Career Scientist (ECS) projects and, in fact, often used without further discussion. There are valid reasons to pre-determine the model in these projects, but at the same time this can have long-term consequences for the ECS: experience with the model will shape the research identity the ECS is developing and might influence their future opportunities - it might be strategic to gain experience with popular, broadly used models, or to become part of an efficient modelling team. This serves an instrumental view of modelling, whereas seeing models as hypotheses calls for a more critical evaluation. We can teach ECS the current rules of the game while, at the same time, actively stimulating them to question these rules critically.

How to cite: Melsen, L.: Recruitment of early career scientists for hydrological modelling positions: implications for model progress, EGU General Assembly 2023, Vienna, Austria, 23–28 Apr 2023, EGU23-904, https://doi.org/10.5194/egusphere-egu23-904, 2023.

vHS.4 | EGU23-16869 | HS1.3.1
Lele Shu, Yan Chang, Xianhong Meng, Paul Ullrich, Christopher Duffy, Hao Chen, Shihua Lyu, Yaonan Zhang, and Zhaoguo Li

Models and data are essential for current geoscientific research. A multitude of hydrological models is available to potential modelers, and a wealth of spatial terrestrial data related to modeling is accessible to users. More importantly, reproducibility is a key feature of science, yet it is rarely discussed for hydrological models. Two significant reasons are that (1) the various hydrological models are incompatible, since they require different variables even when some share the same terminology, and (2) the complexity of model structures makes it difficult to deploy a model swiftly in a new research area.
Our project establishes a Global Hydrological Data Cloud (GHDC, https://shuddata.com) that provides essential terrestrial variables for generic hydrological modeling once modelers provide the watershed boundary and a model request. The data retrieved from the GHDC cover terrain, topology, soil/geology, land use, hydraulic parameters, and meteorological time series. The demonstration on three example watersheds with the Simulator of Hydrologic Unstructured Domains (SHUD) can serve as a standard paradigm for physically based hydrological modeling and be instructive for other modeling exercises, as the procedures are transferable to other hydrological models and regions.

How to cite: Shu, L., Chang, Y., Meng, X., Ullrich, P., Duffy, C., Chen, H., Lyu, S., Zhang, Y., and Li, Z.: Open, Quick and Reproducible Hydrological Model Deployment Cloud Platform, EGU General Assembly 2023, Vienna, Austria, 23–28 Apr 2023, EGU23-16869, https://doi.org/10.5194/egusphere-egu23-16869, 2023.