HS3.9 | Advances in Diagnostics, Sensitivity Analysis, Bayesian Evaluation, and Hypothesis Testing of Earth and Environmental Systems Models
Co-organized by BG9/ESSI1/NP5
Convener: Juliane Mai | Co-conveners: Thomas Wöhling, Cristina Prieto, Anneli Guthke (ECS), Hoshin Gupta, Wolfgang Nowak, Uwe Ehret
Orals | Mon, 15 Apr, 14:00–15:45 (CEST), 16:15–18:00 (CEST) | Room 2.31
Posters on site | Attendance Tue, 16 Apr, 10:45–12:30 (CEST) | Display Tue, 16 Apr, 08:30–12:30 | Hall A
Proper characterization of uncertainty remains a major research and operational challenge in the Environmental Sciences and is inherent to many aspects of modelling, including model structure development; parameter estimation; adequate representation of the data (input data and data used to evaluate the models); initial and boundary conditions; and hypothesis testing. To address this challenge, two families of methods have proved very helpful: a) uncertainty analysis (UA), which seeks to identify, quantify and reduce the different sources of uncertainty and to propagate them through the model, and b) the closely related methods of sensitivity analysis (SA), which evaluate the role and significance of uncertain factors in the functioning of systems/models.
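
To make the session's two key terms concrete: the sketch below, with an invented two-parameter toy model and invented input ranges, propagates input uncertainty by Monte Carlo (UA) and estimates first-order Sobol' indices with a Saltelli-style estimator (SA). It is a minimal illustration, not a method from any particular contribution in this session.

```python
import numpy as np

rng = np.random.default_rng(42)

def model(x):
    # Toy "environmental model" with two uncertain inputs.
    return np.sin(x[:, 0]) + 0.3 * x[:, 1] ** 2

n, d = 10_000, 2
A = rng.uniform(-np.pi, np.pi, (n, d))   # base sample matrix
B = rng.uniform(-np.pi, np.pi, (n, d))   # independent resample matrix

yA, yB = model(A), model(B)
# UA: propagate input uncertainty into an output distribution.
print(f"mean = {yA.mean():.3f}, 95% interval = {np.percentile(yA, [2.5, 97.5]).round(2)}")

# SA: first-order Sobol' indices (Saltelli-style estimator).
V = np.var(np.r_[yA, yB])
for i in range(d):
    ABi = A.copy()
    ABi[:, i] = B[:, i]                  # replace column i of A with that of B
    Si = np.mean(yB * (model(ABi) - yA)) / V
    print(f"S_{i + 1} = {Si:.2f}")       # additive toy model: indices sum to ~1
```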

This session invites contributions that discuss advances, in theory and/or application, in (Bayesian) UA methods and in methods for SA applicable to all Earth and Environmental Systems Models (EESMs), which embrace all areas of hydrology, such as classical hydrology, subsurface hydrology, and soil science.

Topics of interest include (but are not limited to):
1) Novel methods for effective characterization of sensitivity and uncertainty
2) Novel methods for spatial and temporal evaluation/analysis of models
3) Novel approaches and benchmarking efforts for parameter estimation
4) Improving the computational efficiency of SA/UA (efficient sampling, surrogate modelling, parallel computing, model pre-emption, model ensembles, etc.)
5) The role of information and error in SA/UA (e.g., input/output data error, model structure error, parametric error, regionalization error in data-scarce environments, etc.)
6) Methods for evaluating model consistency and reliability as well as detecting and characterizing model inadequacy
7) Analyses of over-parameterised models enabled by AI/ML techniques
8) Robust quantification of predictive uncertainty for model surrogates and machine learning (ML) models
9) Approaches to define meaningful priors for ML techniques in hydro(geo)logy

The invited speaker of this session is Francesca Pianosi (University of Bristol).

Orals: Mon, 15 Apr | Room 2.31

Chairpersons: Juliane Mai, Thomas Wöhling, Cristina Prieto
14:00–14:05
14:05–14:25 | EGU24-10770 | solicited | Highlight | On-site presentation
Francesca Pianosi, Hannah Bloomfield, Gemma Coxon, Robert Reinecke, Saskia Salwey, Georgios Sarailidis, Thorsten Wagener, and Doris Wendt

Uncertainty and sensitivity analysis are becoming an integral part of mathematical modelling of earth and environmental systems. Uncertainty analysis aims at quantifying uncertainty in model outputs, which helps to avoid spurious precision and increase the trustworthiness of model-informed decisions. Sensitivity analysis aims at identifying the key sources of output uncertainty, which helps to set priorities for uncertainty reduction and model improvement.

In this presentation, we draw on a range of recent studies and projects to discuss the status of uncertainty and sensitivity analysis, focusing in particular on ‘global’ approaches, whereby uncertainties and sensitivities are quantified across the entire space of plausible variability of model inputs.

We highlight some of the challenges and untapped potential of these methodologies, including: (1) innovative ways to use global sensitivity analysis to test the ‘internal consistency’ of models and therefore support their diagnostic evaluation; (2) challenges and opportunities to promote the uptake of these methodologies to increasingly complex models, chains of models, and models used in industry; (3) the limits of uncertainty and sensitivity analysis when dealing with epistemic, poorly bounded or unquantifiable sources of uncertainties.

How to cite: Pianosi, F., Bloomfield, H., Coxon, G., Reinecke, R., Salwey, S., Sarailidis, G., Wagener, T., and Wendt, D.: Uncertainty and sensitivity analysis: new purposes, new users, new challenges, EGU General Assembly 2024, Vienna, Austria, 14–19 Apr 2024, EGU24-10770, https://doi.org/10.5194/egusphere-egu24-10770, 2024.

14:25–14:35 | EGU24-10517 | On-site presentation
Lieke Melsen, Arnald Puy, and Andrea Saltelli

Science, being conducted by humans, is inherently a social activity. This is evident in the development and acceptance of scientific methods. Science is not only socially shaped, but also driven (and in turn influenced) by technological development: technology can open up new research avenues. At the same time, it has been shown that technology can cause lock-ins and path dependency. A scientific activity driven both by social behavior and by technological development is modelling. As such, studying modelling as a socio-technical activity can provide insights into both enculturation processes and lock-ins and path dependencies. Even more, enculturation can lead to lock-ins. We will demonstrate this for the Nash-Sutcliffe Efficiency (NSE), a popular evaluation metric in hydrological research. Through a bibliometric analysis we show that the NSE is part of hydrological research culture and does not appear in adjacent research fields. Through a historical analysis we demonstrate the path dependency that has developed with the popularity of the NSE. Finally, by exploring the fate of alternative measures, we show the lock-in effect of the use of the NSE. As such, we confirm that the evaluation of models needs to take into account cultural embeddedness. This is relevant because peers' acceptance is a powerful legitimization argument to trust the model and/or model results, including for policy-relevant applications. Culturally determined bias needs to be assessed for its potential consequences in the discipline.

How to cite: Melsen, L., Puy, A., and Saltelli, A.: Lock-ins and path dependency in evaluation metrics used for hydrological models, EGU General Assembly 2024, Vienna, Austria, 14–19 Apr 2024, EGU24-10517, https://doi.org/10.5194/egusphere-egu24-10517, 2024.

14:35–14:45 | EGU24-8007 | ECS | On-site presentation
Daniel Klotz, Martin Gauch, Frederik Kratzert, Grey Nearing, and Jakob Zscheischler

This contribution presents a diagnostic approach to investigate unexpected side effects that can occur during the evaluation of rainfall–runoff models.

The diagnostic technique that we use is based on the idea that one can use gradient descent to modify the runoff observations/simulations to obtain warranted observations/simulations. Specifically, we show how to use this concept to manipulate any hydrograph (e.g., a copy of the observations) so that it approximates specific NSE values for individual parts of the data. In short, we use the following recipe to generate the synthetic simulations: (1) copy the observations, (2) add noise, (3) clip the modified discharge to zero, and (4) optimise the obtained simulation values using gradient descent until a desired NSE value is reached.

To show how this diagnostic technique can be used we demonstrate a behaviour of Nash–Sutcliffe Efficiency (NSE) that appears when evaluating a model over subsets of the data: If models perform poorly for certain situations, this lack of performance is not necessarily reflected in the NSE (of the overall data). This behaviour follows from the definition of NSE and is therefore 100% explainable. However, from our experience it can be unexpected for many modellers. Our results also show that subdividing the data and evaluating over the resulting partitions yields different information regarding model deficiencies than an overall evaluation. We call this phenomenon the Divide And Measure Nonconformity or DAMN.
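
The four-step recipe above can be sketched in a few lines of NumPy. Everything concrete here (the synthetic gamma-distributed "observations", the squared-error loss on the NSE gap, the noise level, and the learning rate) is an assumption of mine, not taken from the abstract:

```python
import numpy as np

rng = np.random.default_rng(0)

def nse(sim, obs):
    return 1.0 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)

# Steps (1)-(3): copy the observations, add noise, clip to zero.
obs = rng.gamma(2.0, 1.5, 365)                      # synthetic daily "observations"
sim = np.clip(obs + rng.normal(0.0, 0.5, obs.size), 0.0, None)

# Step (4): gradient descent on the loss (NSE - target)^2.
target, lr = 0.70, 25.0
denom = np.sum((obs - obs.mean()) ** 2)
for _ in range(20_000):
    err = nse(sim, obs) - target
    if abs(err) < 1e-4:
        break
    grad_nse = -2.0 * (sim - obs) / denom           # d(NSE)/d(sim)
    sim = np.clip(sim - lr * 2.0 * err * grad_nse, 0.0, None)
print(f"achieved NSE = {nse(sim, obs):.3f}")        # ~0.70 on the full series
```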



How to cite: Klotz, D., Gauch, M., Kratzert, F., Nearing, G., and Zscheischler, J.: Investigating the divide and measure nonconformity, EGU General Assembly 2024, Vienna, Austria, 14–19 Apr 2024, EGU24-8007, https://doi.org/10.5194/egusphere-egu24-8007, 2024.

14:45–14:55 | EGU24-8872 | On-site presentation
Monica Riva, Andrea Manzoni, Rafael Leonardo Sandoval, Giovanni Michele Porta, and Alberto Guadagnini

Large-scale groundwater flow models are key to enhance our understanding of the potential impacts of climate and anthropogenic factors on water systems. Through these, we can identify significant patterns and processes that most affect water security. In this context, we have developed a comprehensive and robust theoretical framework and operational workflow that can effectively manage complex heterogeneous large-scale groundwater systems. We rely on machine learning techniques to map the spatial distribution of geomaterials within three-dimensional subsurface systems. The groundwater modeling approach encompasses (a) estimation of groundwater recharge and abstractions, as well as (b) appraisal of interactions among subsurface and surface water bodies. We ground our analysis on a unique dataset that encompasses lithostratigraphic data as well as piezometric and water extraction data across the largest aquifer system in Italy (the Po River basin). The quality of our results is assessed against pointwise information and hydrogeological cross-sections which are available within the reconstructed domain. These can be considered as soft information based on expert assessment. As uncertainty quantification is critical for subsurface characterization and assessment of future states of the groundwater system, the proposed methodology is designed to provide a quantitative evaluation of prediction uncertainty at any location of the reconstructed domain. Furthermore, we quantify the relative importance of uncertain model parameters on target model outputs through the implementation of a rigorous Global Sensitivity Analysis. By evaluating the spatial distribution of global sensitivity metrics associated with model parameters, we gain valuable insights into areas where the acquisition of future information could enhance the quality of groundwater flow model parameterization and improve hydraulic head estimates. The comprehensive dataset provided in this study, combined with the reconstruction of the subsurface system properties and piezometric head distribution and with the quantification of the associated uncertainty, can be readily employed in the context of groundwater availability and quality studies associated with the region of interest. The approach and operational workflow are flexible and readily transferable to assist identification of the main dynamics and patterns of large-scale aquifer systems of the kind here analyzed.

How to cite: Riva, M., Manzoni, A., Sandoval, R. L., Porta, G. M., and Guadagnini, A.: Characterization and modeling of large-scale aquifer systems under uncertainty: methodology and application to the Po River aquifer system, EGU General Assembly 2024, Vienna, Austria, 14–19 Apr 2024, EGU24-8872, https://doi.org/10.5194/egusphere-egu24-8872, 2024.

14:55–15:05 | EGU24-539 | ECS | On-site presentation
Songjun Wu, Doerthe Tetzlaff, Keith Beven, and Chris Soulsby

Successful calibration of distributed hydrological models is often hindered by complex model structures, incommensurability between observed and modeled variables, and the complex nature of many hydrological processes. Many approaches have been proposed and compared for calibration, but the comparisons were generally based on parsimonious models with limited objectives. The conclusions could change when more parameters are to be calibrated against multiple objectives and with increasing data availability. In this study four different approaches (random sampling, DREAM, NSGA-II, GLUE Limits of Acceptability) were tested for a complex application: calibrating 58 parameters of a hydrological model against 24 objectives (soil moisture and isotopes at 3 depths under different vegetation covers). By comparing the simulation performance of parameter sets selected by the different approaches, we concluded that random sampling is still usable in high-dimensional parameter space, providing performance comparable to the other approaches despite poor parameter identifiability. DREAM provided better simulation performance and parameter convergence with informal likelihood functions; however, the difficulty of describing the model residual distribution could result in inappropriate formal likelihood functions and thus poor simulations. Multi-criteria calibration, taking NSGA-II as an example, gave ideal model performance/parameter identifiability and explicitly unravelled the trade-offs between objectives after aggregating them (into 2 or 4); but calibrating against all 24 objectives was hindered by the "curse of dimensionality", as the increasing dimension exponentially expanded the Pareto front and made it harder to differentiate parameter sets. Finally, Limits of Acceptability also provided comparable simulations; moreover, it can be regarded as a learning tool because detailed information about model failures is available for each objective at each timestep. However, its limitation is the insufficient exploration of high-dimensional parameter space due to the use of Latin-Hypercube sampling.

Overall, all approaches showed benefits and limitations, and a general approach that can be easily used for such complex calibration cases without trial-and-error is still lacking. By comparing these common approaches, we realised the difficulty of defining a proper objective function for many-objective optimisation, whether as an aggregated scalar function (due to the difficulty of assigning weights or assuming a form for the residual distribution) or as a vector function (due to the expansion of the Pareto front). In this context, the Limits of Acceptability approach provided a more flexible way to define the "objective function" for each timestep, though it introduces extra demands in understanding data uncertainties and deciding on what should be considered acceptable. Moreover, in such many-objective optimisation, it is possible that no single parameter set captures all the objectives satisfactorily (none did in the 8 million runs in this study). The non-existence of a global optimum in the sample suggests that the concept of equifinality should be embraced by using an ensemble of comparable parameter sets to represent such complex systems.
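
For readers unfamiliar with the Limits of Acceptability idea used above, here is a minimal sketch with invented numbers (synthetic observations, limits of ±1 unit, Gaussian-error "simulations"); the per-timestep bookkeeping in the last lines is what makes the method usable as a learning tool:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical setup: 'sims' holds model output for many sampled parameter sets,
# 'lower'/'upper' are limits of acceptability around each observation.
n_sets, n_t = 5000, 200
obs = 10 + np.cumsum(rng.normal(0, 0.3, n_t))
lower, upper = obs - 1.0, obs + 1.0                 # e.g. from observation uncertainty
sims = obs + rng.normal(0, rng.uniform(0.3, 2.0, (n_sets, 1)), (n_sets, n_t))

within = (sims >= lower) & (sims <= upper)          # pass/fail per set and timestep
score = within.mean(axis=1)                         # fraction of timesteps within limits
behavioural = score == 1.0                          # strict LoA: all timesteps must pass
print(f"{behavioural.sum()} of {n_sets} sets acceptable at every timestep")
# Diagnostic value: timesteps where *no* set passes point to model/data problems.
print("timesteps failed by all sets:", np.where(~within.any(axis=0))[0][:10])
```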

How to cite: Wu, S., Tetzlaff, D., Beven, K., and Soulsby, C.: Revisiting the common approaches for hydrological model calibration with high-dimensional parameters and objectives, EGU General Assembly 2024, Vienna, Austria, 14–19 Apr 2024, EGU24-539, https://doi.org/10.5194/egusphere-egu24-539, 2024.

15:05–15:15 | EGU24-4140 | ECS | Virtual presentation
Jonathan Romero-Cuellar, Rezgar Arabzadeh, James Craig, Bryan Tolson, and Juliane Mai

The utilization of probabilistic streamflow predictions holds considerable value in the domains of predictive uncertainty estimation, hydrologic risk management, and decision support in water resources. Typically, the quantification of predictive uncertainty is formulated and evaluated using a solitary hydrological model, posing challenges in extrapolating findings to diverse model configurations. To address this limitation, this study examines variations in the performance ranking of various streamflow models through the application of a residual error model post-processing approach across multiple basins and models. The assessment encompasses 141 basins within the Great Lakes watershed, spanning the USA and Canada, and involves the evaluation of 13 diverse streamflow models using deterministic and probabilistic performance metrics. This investigation scrutinizes the interdependence between the quality of probabilistic streamflow estimation and the underlying model quality. The results underscore that the selection of a streamflow model significantly influences the robustness of probabilistic predictions. Notably, transitioning from deterministic to probabilistic predictions, facilitated by a post-processing approach, maintains the performance ranking consistency for the best and worst deterministic models. However, models of intermediate rank in deterministic evaluation exhibit inconsistent rankings when evaluated in probabilistic mode. Furthermore, the study reveals that post-processing residual errors of long short-term memory (LSTM) network models consistently outperform other models in both deterministic and probabilistic metrics. This research emphasizes the importance of integrating deterministic streamflow model predictions with residual error models to enhance the quality and utility of hydrological predictions. It elucidates the extent to which the efficacy of probabilistic predictions is contingent upon the sound performance of the underlying model and its potential to compensate for deficiencies in model performance. Ultimately, these findings underscore the significance of combining deterministic and probabilistic approaches for improving hydrological predictions, quantifying uncertainty, and supporting decision-making in operational water management.
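
A cartoon of the residual-error post-processing step described above, assuming (my choice, not necessarily the authors') multiplicative lognormal residuals fitted on a calibration period and then applied to turn a new deterministic forecast into a predictive distribution:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical data: deterministic simulations and observations from a calibration period.
q_sim = rng.gamma(3.0, 10.0, 1000)
q_obs = q_sim * np.exp(rng.normal(0.0, 0.25, q_sim.size))   # heteroscedastic "truth"

# Post-processor: model residuals in log space (variance roughly stabilised).
eta = np.log(q_obs) - np.log(q_sim)
mu, sigma = eta.mean(), eta.std(ddof=1)

# Probabilistic prediction for a new deterministic forecast:
q_new = 42.0
samples = q_new * np.exp(rng.normal(mu, sigma, 10_000))
lo, med, hi = np.percentile(samples, [5, 50, 95])
print(f"90% predictive interval: [{lo:.1f}, {hi:.1f}] m3/s, median {med:.1f}")
```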

How to cite: Romero-Cuellar, J., Arabzadeh, R., Craig, J., Tolson, B., and Mai, J.: Integrating Deterministic and Probabilistic Approaches for Improved Hydrological Predictions: Insights from Multi-model Assessment in the Great Lakes Watersheds, EGU General Assembly 2024, Vienna, Austria, 14–19 Apr 2024, EGU24-4140, https://doi.org/10.5194/egusphere-egu24-4140, 2024.

15:15–15:25 | EGU24-6157 | ECS | On-site presentation
Lea Friedli and Niklas Linde

Analyzing groundwater hazards frequently involves utilizing Bayesian inversions and estimating probabilities associated with rare events. A concrete example concerns the potential contamination of an aquifer, a process influenced by the unknown hydraulic properties of the subsurface. In this context, the emphasis shifts from the posterior distribution of model parameters to the distribution of a particular quantity of interest dependent on these parameters. To tackle the methodological hurdles at hand, we propose a Sequential Monte Carlo approach in two stages. The initial phase involves generating particles to approximate the posterior distribution, while the subsequent phase utilizes subset sampling techniques to evaluate the probability of the specific rare event of interest. Exploring a two-dimensional flow and transport example, we demonstrate the efficiency and accuracy of the developed PostRisk-SMC method in estimating rare event probabilities associated with groundwater hazards.
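
The second stage (subset sampling toward the rare event) can be illustrated in isolation. The sketch below is plain subset simulation with a standard-normal stand-in for the posterior and an invented scalar quantity of interest; the PostRisk-SMC method itself is more involved:

```python
import numpy as np

rng = np.random.default_rng(3)

def g(x):
    # Hypothetical scalar quantity of interest, e.g. a peak contaminant concentration.
    return x.sum(axis=-1)

d, n, p0, t = 10, 2000, 0.1, 12.0        # dimension, samples/level, level fraction, threshold
x = rng.normal(size=(n, d))              # level 0: "posterior" (standard normal here)
y = g(x)
prob = 1.0
for _ in range(20):
    thresh = np.quantile(y, 1 - p0)
    if thresh >= t:                      # final level reached
        prob *= np.mean(y > t)
        break
    prob *= p0
    seeds = x[y > thresh]                # survivors seed the next conditional level
    x = seeds[rng.integers(0, len(seeds), n)]
    # Metropolis moves restricted to the intermediate region {g > thresh}
    for _ in range(5):
        cand = x + 0.5 * rng.normal(size=x.shape)
        accept = np.exp(-0.5 * (cand**2 - x**2).sum(axis=1)) > rng.random(n)
        trial = np.where(accept[:, None], cand, x)
        ok = g(trial) > thresh           # reject moves that leave the region
        x = np.where(ok[:, None], trial, x)
    y = g(x)
print(f"estimated P(g > {t}) = {prob:.2e}")
# Check: for g = sum of 10 iid N(0,1), P = 1 - Phi(12/sqrt(10)), about 7e-5.
```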

How to cite: Friedli, L. and Linde, N.: Analyzing Groundwater Hazards with Sequential Monte Carlo, EGU General Assembly 2024, Vienna, Austria, 14–19 Apr 2024, EGU24-6157, https://doi.org/10.5194/egusphere-egu24-6157, 2024.

15:25–15:35 | EGU24-7820 | On-site presentation
Ilja Kröker, Elisabeth Nißler, Sergey Oladyshkin, Wolfgang Nowak, and Claus Haslauer

Soil temperature and soil moisture in the unsaturated zone depend on each other and are influenced by non-stationary hydro-meteorological forcing factors that are subject to climate change. 

The transport of both heat and moisture is crucial for predicting temperatures in the shallow subsurface and, as a consequence, around and in drinking water supply pipes. Elevated temperatures in water supply pipes (even up to 25°C and above) pose a risk to human health due to the increased likelihood of microbial contamination.

To model variably saturated flow and heat transport, a partial differential equation (PDE)-based integrated hydrogeological model has been developed and implemented in the DuMuX simulation framework.  This model integrates the hydrometeorological forcing functions via a novel interface condition at the atmosphere-subsurface boundary. Relevant soil properties and their dependency on temperatures have been measured as time series at a pilot site at the University of Stuttgart in detail since 2020. 

Despite these efforts on measurements and model enhancement, some uncertainties remain. These include capillary-saturation relationships in materials where they are difficult to measure, especially in the gravel-type materials that are commonly used above drinking water pipes. 

To enhance our understanding of the underlying physical processes, we employ Bayesian inference, which is a well-established approach to estimate uncertain or unknown model parameters. Computationally cheap surrogate models make it possible to overcome the limitations of Bayesian methods for computationally intensive models, when such surrogate models are used in lieu of the physical (PDE-based) model. Here, we use the arbitrary polynomial chaos expansion equipped with Bayesian regularization (BaPC). The BaPC allows us to exploit the latest (Bayesian) active learning strategies to reduce the number of model runs that are necessary for constructing the surrogate model.

In the present work, we demonstrate the calibration of a PDE-based integrated hydrogeological model using Bayesian inference on a BaPC-based surrogate.  The accuracy of the calibrated and predicted temperatures in the shallow subsurface is then assessed against real-world measurement data. 
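
For orientation only: a one-parameter polynomial chaos surrogate fitted by ordinary least squares. The abstract's BaPC uses arbitrary polynomial chaos with Bayesian regularization and active learning, which this sketch does not attempt; the model, sample size, and degree are invented:

```python
import numpy as np
from numpy.polynomial.hermite_e import hermevander

rng = np.random.default_rng(4)

def expensive_model(x):
    # Stand-in for the PDE-based model (one uncertain parameter for brevity).
    return np.exp(0.3 * x) + 0.1 * x ** 2

# Training runs at a handful of parameter samples (the costly part).
x_train = rng.normal(size=40)
y_train = expensive_model(x_train)

# Degree-5 polynomial chaos surrogate: least-squares fit of Hermite coefficients.
V = hermevander(x_train, 5)                    # probabilists' Hermite basis
coef, *_ = np.linalg.lstsq(V, y_train, rcond=None)

# Surrogate predictions are practically free -> usable inside Bayesian inference.
x_test = np.linspace(-2, 2, 5)
y_surr = hermevander(x_test, 5) @ coef
print(np.c_[expensive_model(x_test), y_surr].round(4))
```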

How to cite: Kröker, I., Nißler, E., Oladyshkin, S., Nowak, W., and Haslauer, C.: Data-driven surrogate-based Bayesian model calibration for predicting vadose zone temperatures in drinking water supply pipes, EGU General Assembly 2024, Vienna, Austria, 14–19 Apr 2024, EGU24-7820, https://doi.org/10.5194/egusphere-egu24-7820, 2024.

15:35–15:45
Coffee break
Chairpersons: Juliane Mai, Thomas Wöhling, Cristina Prieto
16:15–16:20
16:20–16:40 | EGU24-20170 | solicited | On-site presentation
Carlo Albert

In recent years, Machine Learning (ML) models have led to a substantial improvement in hydrological predictions. It appears these models can distill information from catchment properties that is relevant for the relationship between meteorological drivers and streamflow, which has so far eluded hydrologists.
In the first part of this talk, I shall demonstrate some of our attempts towards understanding these improvements. Utilising Autoencoders and intrinsic dimension estimators, we have shown that the wealth of available catchment properties can effectively be summarised into merely three features, insofar as they are relevant for streamflow prediction. Hybrid models, which combine the flexibility of ML models with mechanistic mass-balance models, are equally adept at predicting as pure ML models but come with only a few interpretable interior states. Combining these findings will, hopefully, bring us closer to understanding what these ML models seem to have 'grasped'.
In the second part of the talk, I will address the issue of uncertainty quantification. I contend that error modelling should not be attempted on the residuals. Rather, we should model the errors where they originate, i.e., on the inputs, model states, and/or parameters. Such stochastic models are more adept at expressing the intricate distributions exhibited by real data. However, they come at the cost of a very large number of unobserved latent variables and thus pose a high-dimensional inference problem. This is particularly pertinent when our models include ML components. Fortunately, advances in inference algorithms and parallel computing infrastructure continue to extend the limits on the number of variables that can be inferred within a reasonable timeframe. I will present a straightforward example of a stochastic hydrological model with input uncertainty, where Hamiltonian Monte Carlo enables a comprehensive Bayesian inference of model parameters and the actual rain time-series simultaneously.

How to cite: Albert, C.: Advances and prospects in hydrological (error) modelling, EGU General Assembly 2024, Vienna, Austria, 14–19 Apr 2024, EGU24-20170, https://doi.org/10.5194/egusphere-egu24-20170, 2024.

16:40–16:50 | EGU24-18804 | ECS | On-site presentation
Max Rudolph, Thomas Wöhling, Thorsten Wagener, and Andreas Hartmann

Inverse problems play a pivotal role in hydrological modelling, particularly for parameter estimation and system understanding, which are essential for managing water resources. The application of statistical inversion methodologies such as Generalized Likelihood Uncertainty Estimation (GLUE) is often obstructed, however, by high model computational cost, given that Monte Carlo sampling strategies often return a very small fraction of behavioural model runs. Yet this aspect must be balanced against the demand for broadly sampling the parameter space. Especially relevant for spatially distributed or (partial) differential equation based models, this calls for computationally efficient methods of statistical inference that approximate the "true" posterior parameter distribution well. Our study introduces multilevel GLUE (MLGLUE), which effectively mitigates these computational challenges by exploiting a hierarchy of models with different computational grid resolutions (i.e., spatial or temporal discretisation), inspired by multilevel Monte Carlo strategies. Starting with low-resolution models, MLGLUE only passes parameter samples to higher-resolution models for evaluation if they are associated with a high likelihood, which offers the potential for substantial computational savings. We demonstrate the applicability of the approach using a groundwater flow model with a hierarchy of different spatial resolutions. With MLGLUE, the computation time of parameter inference could be reduced by more than 60% compared to GLUE, while the resulting posterior distributions are virtually identical. Correspondingly, the uncertainty estimates of MLGLUE and GLUE are also very similar. Considering the simplicity of the implementation as well as its efficiency, MLGLUE promises to be an alternative for statistical inversion of computationally costly hydrological models.
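
The core screening idea of MLGLUE can be caricatured in a few lines. The two toy "models", the likelihood, and the behavioural threshold below are all invented to show the pass-upward logic, not the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical two-level hierarchy: a coarse, cheap model and a fine, costly model.
def coarse_model(theta):   # e.g. coarse-grid groundwater model
    return theta[0] + 0.9 * theta[1] + rng.normal(0, 0.05)

def fine_model(theta):     # e.g. fine-grid model, many times the run time
    return theta[0] + theta[1]

obs, sigma = 1.5, 0.2
def likelihood(sim):
    return np.exp(-0.5 * ((sim - obs) / sigma) ** 2)

accepted = []
threshold = 0.1                         # behavioural threshold on the likelihood
for _ in range(20_000):
    theta = rng.uniform(0, 2, size=2)
    if likelihood(coarse_model(theta)) < threshold:
        continue                        # screened out cheaply on the coarse level
    if likelihood(fine_model(theta)) >= threshold:
        accepted.append(theta)          # behavioural on the fine level too
accepted = np.array(accepted)
print(f"{len(accepted)} behavioural sets; posterior mean = {accepted.mean(axis=0).round(2)}")
```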

How to cite: Rudolph, M., Wöhling, T., Wagener, T., and Hartmann, A.: Accelerating Hydrological Model Inversion: A Multilevel Approach to GLUE, EGU General Assembly 2024, Vienna, Austria, 14–19 Apr 2024, EGU24-18804, https://doi.org/10.5194/egusphere-egu24-18804, 2024.

16:50–17:00 | EGU24-14219 | On-site presentation
Scott K. Hansen, Daniel O'Malley, and James Hambleton

We consider the optimal inference of spatially heterogeneous hydraulic conductivity and head fields based on three kinds of point measurements that may be available at monitoring wells: head, permeability, and groundwater speed. We have developed a general, zonation-free technique for Monte Carlo (MC) study of field recovery problems, based on Karhunen-Loève (K-L) expansions of the unknown fields, whose coefficients are recovered by an analytical adjoint-state technique. This allows unbiased sampling from the space of all possible fields with a given correlation structure and efficient, automated gradient-descent calibration. The K-L basis functions have a straightforward notion of period, revealing the relationship between feature scale and reconstruction fidelity, and they have an a priori known spectrum, allowing a non-subjective regularization term to be defined. We have performed automated MC calibration on over 1100 conductivity-head field pairs, employing a variety of point measurement geometries, and quantified the mean-squared field reconstruction accuracy, both globally and as a function of feature scale.

We present heuristics for feature scale identification, examine global reconstruction error, and explore the value added by both groundwater speed measurements and by two different types of regularization. We show that significant feature identification becomes possible as feature scale exceeds four times measurement spacing and identification reliability subsequently improves in a power law fashion with increasing feature scale.
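
For readers unfamiliar with K-L expansions, here is a minimal 1-D sketch of generating unconditional field realisations with a prescribed correlation structure (exponential covariance; lengths and variances invented). The study's adjoint-based recovery of the K-L coefficients is not shown:

```python
import numpy as np

rng = np.random.default_rng(6)

# 1-D illustration: K-L expansion of a log-conductivity field, exponential covariance.
n, L, corr_len, var = 200, 100.0, 20.0, 1.0
x = np.linspace(0, L, n)
C = var * np.exp(-np.abs(x[:, None] - x[None, :]) / corr_len)   # covariance matrix

eigval, eigvec = np.linalg.eigh(C)                # K-L modes (discrete approximation)
order = np.argsort(eigval)[::-1]
eigval, eigvec = eigval[order], eigvec[:, order]

m = 20                                            # truncate to the m leading modes
xi = rng.normal(size=m)                           # i.i.d. N(0,1) K-L coefficients
logK = eigvec[:, :m] @ (np.sqrt(eigval[:m]) * xi) # one unconditional field realisation
print("captured variance fraction:", eigval[:m].sum() / eigval.sum())
```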

How to cite: Hansen, S. K., O'Malley, D., and Hambleton, J.: Feature scale and identifiability: quantifying the information that point hydraulic measurements provide about heterogeneous head and conductivity fields, EGU General Assembly 2024, Vienna, Austria, 14–19 Apr 2024, EGU24-14219, https://doi.org/10.5194/egusphere-egu24-14219, 2024.

17:00–17:10 | EGU24-20013 | ECS | On-site presentation
Marit Hendrickx, Jan Diels, Jan Vanderborght, and Pieter Janssens

With the rise of affordable, autonomous sensors and IoT (Internet-of-Things) technology, it is possible to monitor soil moisture in a field online and in real time. This offers opportunities for real-time model calibration for irrigation scheduling. A framework is presented where real-time sensor data are coupled with a soil water balance model to predict soil moisture content and irrigation requirements at field scale. SWIM², Sensor Wielded Inverse Modelling of a Soil Water Irrigation Model, is a framework based on the DREAM inverse modelling approach to estimate 12 model parameters (soil and crop growth parameters) and their uncertainty distribution. These parameter distributions result in soil moisture predictions with a prediction uncertainty estimate, which enables a farmer to anticipate droughts and estimate irrigation requirements.

The SWIM² framework was validated on three growing seasons (2021-2023) in about 30 fields of vegetable growers in Flanders. Kullback–Leibler divergence (KLD) was used as a metric to quantify the information gain of the model parameters starting from non-informative priors. Performance was validated in two steps, i.e., the calibration period and the prediction period, corresponding to the real-world implementation of the framework. The RMSE, correlation (R, NSE) and Kling-Gupta efficiency (KGE) of soil moisture were analyzed as a function of time, i.e., the amount of sensor data available for calibration.

Soil moisture can be predicted accurately once 10 to 20 days of sensor data are available for calibration. The RMSE during the calibration period is generally around 0.02 m³/m³, while the RMSE during the prediction period decreases from 0.04 to 0.02 m³/m³ as more calibration data become available. The information gain (KLD) of some parameters (e.g., field capacity and curve number) largely depends on the presence of dynamic events (e.g., precipitation events) during the calibration period. After 40 days of sensor data, the KGE and Pearson correlation of the calibration period become stable with median values of 0.8 and 0.9, respectively. For the validation period, the KGE and Pearson correlation increase in time, with median values from 0.3 to 0.7 (KGE) and from 0.7 to 0.95 (R). These good results show that, with this framework, we can simulate and predict soil moisture accurately. These predictions can in turn be used to estimate irrigation requirements.

Precipitation radar data were initially treated as an input without uncertainty. As an extension, precipitation forcing error can be treated in DREAM by applying rainfall multipliers as additional parameters that are estimated in the inverse modelling framework. The multiplicative error of the radar data was quantified by comparing radar data to rain gauge measurements. The prior uncertainty of the logarithmic multipliers was described by a Laplace distribution and implemented in DREAM. The extended framework with rainfall multipliers shows better convergence and a higher acceptance rate than the main framework. The calibration period shows better performance, with higher correlations and lower RMSE values, but a decrease in performance was found for the validation period. These results suggest that the implementation of rainfall multipliers leads to overfitting, resulting in lower predictive power.
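
As an aside on the KLD metric used above, one common way to estimate the information gain from prior to posterior samples is via histograms; the parameter name, range, and sample distributions below are placeholders of mine, not values from the study:

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical samples: non-informative prior vs. DREAM posterior of one parameter.
prior = rng.uniform(0.05, 0.45, 20_000)           # e.g. field capacity [m3/m3]
posterior = rng.normal(0.30, 0.02, 20_000)

bins = np.linspace(0.05, 0.45, 41)
p, _ = np.histogram(posterior, bins, density=True)
q, _ = np.histogram(prior, bins, density=True)
w = np.diff(bins)
mask = p > 0                                       # KLD defined where p > 0 (and q > 0 here)
kld = np.sum(p[mask] * np.log(p[mask] / q[mask]) * w[mask])
print(f"information gain = {kld:.2f} nats")        # ~0 means the data taught us nothing
```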

How to cite: Hendrickx, M., Diels, J., Vanderborght, J., and Janssens, P.: Field-scale soil moisture predictions using in situ sensor measurements in an inverse modelling framework: SWIM², EGU General Assembly 2024, Vienna, Austria, 14–19 Apr 2024, EGU24-20013, https://doi.org/10.5194/egusphere-egu24-20013, 2024.

17:10–17:20 | EGU24-16361 | ECS | On-site presentation
Anna Störiko, Albert J. Valocchi, Charles Werth, and Charles E. Schaefer

Stochastic modeling of contaminant reactions requires the definition of prior distributions for the respective rate constants. We use data from several experiments reported in the literature to better understand the distribution of pseudo-first-order rate constants of abiotic TCE reduction in different sediments. These distributions can be used to choose informed priors for these parameters in reactive-transport models.

Groundwater contamination with trichloroethylene (TCE) persists at many hazardous waste sites due to back diffusion from low-permeability zones such as clay lenses. In recent years, the abiotic reduction of TCE by reduced iron minerals has gained attention as a natural attenuation process, but there is uncertainty as to whether the process is fast enough to be effective. Pseudo-first-order rate constants have been determined in laboratory experiments and are reported in the literature for various sediments and rocks, as well as for individual reactive minerals. However, rate constants can vary between sites and aquifer materials. Reported values range over several orders of magnitude.

To assess the uncertainty and variability of pseudo-first-order rate constants, we compiled data reported in several studies. We built a statistical model based on a hierarchical Bayesian approach to predict probability distributions of rate constants at new sites based on this data set. We then investigated whether additional information about the sediment composition at a site could reduce the uncertainty. We tested two sets of predictors: reactive mineral content or the extractable Fe(II) content. Knowing the reactive mineral content reduced the uncertainty only slightly. In contrast, knowing the Fe(II) content greatly reduced the uncertainty because the relationship between Fe(II) content and rate constants is approximately log-log-linear. Using a simple example of diffusion-controlled transport in a contaminated aquitard, we show how the uncertainty in the predicted rate constants affects the predicted remediation times.
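
A stripped-down version of the log-log-linear prediction idea (ordinary least squares instead of the study's hierarchical Bayesian model; all numbers invented):

```python
import numpy as np

rng = np.random.default_rng(8)

# Hypothetical literature compilation: Fe(II) content [mg/g] vs. pseudo-first-order
# rate constant k [1/d]; the relationship is roughly log-log-linear.
fe2 = 10 ** rng.uniform(-1, 1.5, 30)
log_k = -3.0 + 1.2 * np.log10(fe2) + rng.normal(0, 0.4, fe2.size)

# Fit log10(k) = a + b * log10(Fe(II)) and predict at a new site with known Fe(II).
b, a = np.polyfit(np.log10(fe2), log_k, 1)
resid_sd = np.std(log_k - (a + b * np.log10(fe2)), ddof=2)

fe2_new = 5.0
mu = a + b * np.log10(fe2_new)
print(f"k at new site: ~10^({mu:.2f} ± {1.96 * resid_sd:.2f}) per day (95% band)")
```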

How to cite: Störiko, A., Valocchi, A. J., Werth, C., and Schaefer, C. E.: Estimating prior distributions of TCE transformation rate constants from literature data, EGU General Assembly 2024, Vienna, Austria, 14–19 Apr 2024, EGU24-16361, https://doi.org/10.5194/egusphere-egu24-16361, 2024.

17:20–17:30 | EGU24-18569 | On-site presentation
Lu Wang, Yueping Xu, Haiting Gu, and Xiao Liang

Deeper insight into internal model behavior is essential as hydrological models become more and more complex. Our study provides a framework which combines time-varying global sensitivity analysis (GSA) with data mining techniques to unravel the process-level behavior of high-complexity models and tease out the main information. The extracted information is further used to assist parameter identification. The physically-based Distributed Hydrology-Soil-Vegetation Model (DHSVM) set up in a mountainous watershed is used as a case study. Specifically, a two-step GSA, including time-aggregated and time-variant approaches, is conducted to address the problem of high parameter dimensionality and characterize the time-varying parameter importance. Because the long-term dynamics are difficult to interpret, a clustering operation is performed to partition the entire period into several clusters and extract the corresponding temporal patterns of parameter importance. Finally, the resulting time clusters are utilized in parameterization, where each parameter is identified in its dominance times. Results are summarized as follows: (1) the importance of selected soil and vegetation parameters varies greatly throughout the period; (2) typical patterns of parameter importance corresponding to flood, very short dry-to-wet, fast recession and continuous dry periods are successfully distinguished; we argue that somewhere between "total period" and "continuous discrete time" can be more useful for understanding and interpretation; (3) parameters dominant for short times are much more identifiable when they are identified in their dominance time cluster(s); (4) the enhanced parameter identifiability overall improves the model performance according to the metrics of NSE, LNSE, and RMSE, suggesting that the use of GSA information has the potential to provide a better search for optimal parameter sets.
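
The clustering step can be sketched as follows, assuming time-varying sensitivity indices are already available as a timesteps-by-parameters array; the Dirichlet stand-in data and the choice of k-means with four clusters are mine:

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(9)

# Hypothetical time-varying sensitivity indices: n_t timesteps x n_par parameters,
# e.g. daily indices from a time-variant GSA.
n_t, n_par = 730, 8
S = rng.dirichlet(np.ones(n_par), size=n_t)        # rows sum to 1, like normalised indices

# Partition the period into regimes with similar parameter-importance patterns.
labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(S)
for c in range(4):
    pattern = S[labels == c].mean(axis=0)
    print(f"cluster {c}: dominant parameter = p{pattern.argmax()}, "
          f"share = {pattern.max():.2f}, days = {(labels == c).sum()}")
```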

How to cite: Wang, L., Xu, Y., Gu, H., and Liang, X.: Investigating dynamic parameter importance of a high-complexity hydrological model and implications for parameterization, EGU General Assembly 2024, Vienna, Austria, 14–19 Apr 2024, EGU24-18569, https://doi.org/10.5194/egusphere-egu24-18569, 2024.

17:30–17:40 | EGU24-14805 | Virtual presentation
Ali Abousaeidi, Seyed Mohammad Mahdi Moezzi, Farkhondeh Khorashadi Zadeh, Seyed Razi Sheikholeslami, Albert Nkwasa, and Ann van Griensven

Sensitivity analysis of complex models, with a large number of input variables and parameters, is time-consuming and inefficient when using traditional approaches. Given its capability of computing importance indices, the machine learning technique of Random Forest (RF) is introduced as an alternative to conventional methods of sensitivity analysis. One of the advantages of using the RF model is the reduction of the computational cost of sensitivity analysis.

The objective of this research is to analyze the importance of the input variables of a semi-distributed and physically-based hydrological model, namely SWAT (Soil and Water Assessment Tool), using the RF model. To this end, an RF-based model is first trained on SWAT input variables (such as precipitation and temperature) and SWAT output variables (like streamflow and sediment load). Then, using the importance index of the RF model, the ranking of input variables, in terms of their impact on the accuracy of the model results, is determined. Additionally, the results of the sensitivity analysis are examined graphically. To validate the RF-based ranking, the parameter rankings of the Sobol G function obtained with the RF-based approach and with the Sobol' sensitivity analysis method are compared. The ranking of model input variables plays a significant role in the development of models and in prioritizing efforts to reduce model errors.
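
A toy version of the validation step described above, assuming scikit-learn's impurity-based feature importances stand in for the study's importance index; the coefficients a_i of the Sobol G function are chosen so the true importance ordering is known:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(10)

def sobol_g(X, a):
    # Sobol G function: smaller a_i -> more influential input.
    return np.prod((np.abs(4 * X - 2) + a) / (1 + a), axis=1)

a = np.array([0, 1, 4.5, 9, 99, 99])               # known importance ordering
X = rng.random((20_000, a.size))
y = sobol_g(X, a)

rf = RandomForestRegressor(n_estimators=200, random_state=0, n_jobs=-1).fit(X, y)
ranking = np.argsort(rf.feature_importances_)[::-1]
print("RF importance ranking:", ranking)            # expected to start with 0, 1, 2, 3
print("importances:", rf.feature_importances_.round(3))
```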

Keywords: sensitivity analysis, model input variables, machine learning, random forest, SWAT model.

How to cite: Abousaeidi, A., Moezzi, S. M. M., Khorashadi Zadeh, F., Sheikholeslami, S. R., Nkwasa, A., and van Griensven, A.: Sensitivity analysis of input variables of a SWAT hydrological model using the machine learning technique of random forest, EGU General Assembly 2024, Vienna, Austria, 14–19 Apr 2024, EGU24-14805, https://doi.org/10.5194/egusphere-egu24-14805, 2024.

17:40–17:50 | EGU24-19966 | On-site presentation
Aronne Dell'Oca, Monica Riva, Alberto Guadagnini, and Leonardo Sandoval

The runoff process in environmental systems is influenced by various variables that are typically affected by uncertainty. These include, for example, climate and hydrogeological quantities (hereafter denoted as environmental variables). Additionally, the runoff process is influenced by quantities that are amenable to intervention/design (hereafter denoted as operational variables) and can therefore be set to desired values on the basis of specific management choices. A key question in this context is: how do we discriminate the impact on system outputs of operational variables, whose values can be decided in the system design or management phase, while also considering the uncertainty associated with environmental variables? We tackle this issue by introducing a novel approach, which we term Operational Sensitivity Analysis (OSA), set within a Global Sensitivity Analysis (GSA) framework. OSA enables us to assess the sensitivity of a given model output specifically to operational factors, while recognizing uncertainty in the environmental variables. This approach is developed as a complement to traditional GSA, which does not methodologically differentiate between the variability associated with operational and environmental variables.

We showcase our OSA approach through an exemplary scenario of an urban catchment where flooding results from sewer system failure. In this context, we distinguish between operational variables, such as sewer system pipe properties and urban area infiltration capacity, and environmental variables, such as urban catchment drainage properties and rain event characteristics. Our results suggest that the diameter of a set of pipes in the sewer network is the most influential operational variable. As such, it provides a rigorous basis upon which one could plan appropriate actions to effectively manage the system response.

How to cite: Dell'Oca, A., Riva, M., Guadagnini, A., and Sandoval, L.: Operational Sensitivity Analysis for Flooding in Urban Systems under Uncertainty, EGU General Assembly 2024, Vienna, Austria, 14–19 Apr 2024, EGU24-19966, https://doi.org/10.5194/egusphere-egu24-19966, 2024.

17:50–18:00 | EGU24-13330 | ECS | On-site presentation
An improved Copula-Based Framework for Efficient Global Sensitivity Analysis
(withdrawn)
Hongli Liu, Martyn Clark, Shervan Gharari, Razi Sheikholeslami, Jim Freer, Wouter Knoben, Christopher Marsh, and Simon Michael Papalexiou

Posters on site: Tue, 16 Apr, 10:45–12:30 | Hall A

Display time: Tue, 16 Apr, 08:30–12:30
Chairpersons: Juliane Mai, Thomas Wöhling, Cristina Prieto
A.43 | EGU24-7610
Bo-Tsen Wang, Chia-Hao Chang, and Jui-Pin Tsai

Understanding the spatial distribution of aquifer parameters is crucial to evaluating groundwater resources at the basin scale. River stage tomography (RST) is one of the potential methods to estimate aquifer parameter fields. Utilizing the head variations caused by the river stage is essential for RST to successfully delineate the regional aquifer's spatial features. However, the two external stimuli of the aquifer system, rainfall and river stage, are usually highly correlated, resulting in mixed features in the head observations, which may cause unreasonable estimates of parameter fields. Thus, separating the head variations sourced from rainfall and river stage is essential to developing the reference heads for RST. To solve this issue, we propose a systematic approach to extract and reconstruct the head variations attributable to the river from the original head observations during flood periods and to conduct RST. We examined the developed method in a real case study using groundwater level data, rainfall data, and river stage data from the Zhuoshui River alluvial fan in 2006. The hydraulic diffusivity (D) values of five observation wells were used as the reference for parameter estimation. The results show that the RMSE of the D value is 0.027 m²/s. The other three observation wells were selected for validation purposes, and the derived RMSE is 0.85 m²/s. The low RMSE reveals that the estimated D field can capture the characteristics of the regional aquifer. The results also indicate that the estimated D values derived from the developed method are consistent with the sampled D values from the pumping tests in both the calibration and validation processes. The results demonstrate that the proposed method can successfully extract and reconstruct the river-related head variations from the original head observations and can delineate the features of the regional parameter field. The proposed method can benefit RST studies and provides an alternative method for mixed-feature signal decomposition and reconstruction.

How to cite: Wang, B.-T., Chang, C.-H., and Tsai, J.-P.: Parameter estimation of heterogeneous field in basin scale based on signal analysis and river stage tomography, EGU General Assembly 2024, Vienna, Austria, 14–19 Apr 2024, EGU24-7610, https://doi.org/10.5194/egusphere-egu24-7610, 2024.

A.44 | EGU24-1745
Husam Baalousha, Marwan Fahs, and Anis Younes

The inverse problem in hydrogeology poses a significant challenge for modelers due to its ill-posed nature and the non-uniqueness of solutions. This challenge is compounded by the substantial computational efforts required for calibrating highly parameterized aquifers, particularly those with significant heterogeneity, such as karst limestone aquifers. While stochastic methods like Monte Carlo simulations are commonly used to assess uncertainty, their extensive computational requirements often limit their practicality.

The Null Space Monte Carlo (NSMC) method provides a parameter-constrained approach to address these challenges in inverse problems, allowing for the quantification of uncertainty in calibrated parameters. This method was applied to the northern aquifer of Qatar, which is characterized by high heterogeneity. The calibration of the model utilized the pilot point approach, and the calibrated results were spatially interpolated across the aquifer area using kriging.

NSMC was then employed to generate 100 sets of parameter-constrained random variables representing hydraulic conductivities. The null space vectors of these random solutions were incorporated into the parameter space derived from the calibrated model. Statistical analysis of the resulting calibrated hydraulic conductivities revealed a wide range, varying from 0.1 to 350 m/d, illustrating the significant variability inherent in the karstic nature of the aquifer.

Areas with high hydraulic conductivity were identified in the middle and eastern parts of the aquifer. These regions of elevated hydraulic conductivity also exhibited high standard deviations, further emphasizing the heterogeneity and complex nature of the aquifer's hydraulic properties.
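
A linear-algebra caricature of the null-space step (dimensions, Jacobian, and tolerance all invented): perturbations confined to the null space of the observation Jacobian leave the simulated observations unchanged to first order, which is what makes the generated parameter sets "parameter-constrained":

```python
import numpy as np

rng = np.random.default_rng(11)

# Hypothetical linearisation: Jacobian of n_obs observations w.r.t. n_par parameters
# (e.g. pilot-point log-conductivities), evaluated at the calibrated solution.
n_obs, n_par = 30, 100
J = rng.normal(size=(n_obs, n_par))
theta_cal = rng.normal(size=n_par)                  # calibrated parameter set

U, s, Vt = np.linalg.svd(J)
null_dim = n_par - np.sum(s > 1e-8 * s[0])          # solution space vs. null space
V_null = Vt[n_par - null_dim:].T                    # basis of the null space

# 100 stochastic parameter sets that honour the calibration data to first order:
realisations = theta_cal[:, None] + V_null @ rng.normal(size=(null_dim, 100))
print("max first-order change in simulated obs:",
      np.abs(J @ (realisations - theta_cal[:, None])).max())   # ~0 by construction
```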

How to cite: Baalousha, H., Fahs, M., and Younes, A.: Predictive uncertainty analysis using null-space Monte Carlo, EGU General Assembly 2024, Vienna, Austria, 14–19 Apr 2024, EGU24-1745, https://doi.org/10.5194/egusphere-egu24-1745, 2024.

A.45 | EGU24-2300 | ECS
Tobias Karl David Weber, Alexander Schade, Robert Rauch, Sebastian Gayler, Joachim Ingwersen, Wolfgang Nowak, Efstathios Diamantopoulos, and Thilo Streck

The importance of evapotranspiration (ET) fluxes for the terrestrial water cycle is demonstrated by an overwhelming body of literature. Unfortunately, errors in their measurement contribute significantly to (model) uncertainties in quantifying and understanding ecohydrological systems. For measuring surface-atmosphere fluxes of water at the ecosystem scale, the eddy covariance method is a powerful technique and an important tool to validate ET models. Spatially averaged fluxes over several hundred square meters may be obtained. While the eddy covariance technique has become a routine method to estimate the turbulent energy fluxes at the soil-atmosphere boundary, it is not error free. Some of the inherent errors are quantifiable and may be partitioned into systematic and stochastic errors. For model-data comparison, the nature of the measurement error needs to be known to derive knowledge about model adequacy. To this end, we compare several assumptions found in the literature to describe the statistical properties of the error with newly derived descriptions. We show how sensitive the model selection process is to the assumptions about the error. We demonstrate this by comparing daily agro-ecosystem ET fluxes simulated with the detailed agro-hydrological model Expert-N to data gathered using the eddy covariance technique.

How to cite: Weber, T. K. D., Schade, A., Rauch, R., Gayler, S., Ingwersen, J., Nowak, W., Diamantopoulos, E., and Streck, T.: Representing systematic and random errors of eddy covariance measurements in suitable likelihood models for robust model selection, EGU General Assembly 2024, Vienna, Austria, 14–19 Apr 2024, EGU24-2300, https://doi.org/10.5194/egusphere-egu24-2300, 2024.

A.46 | EGU24-8818
Thomas Wöhling and Oliver Crespo Delgadillo

Groundwater heads are commonly used to monitor aquifer storage and as decision variables for groundwater management. Alluvial gravel aquifers are often characterized by high transmissivities and a correspondingly strong seasonal and inter-annual variability of storage. The sustainable management of such aquifers is challenging, particularly for already tightly allocated aquifers and in increasingly extreme and potentially drier climates, and might require the restriction of groundwater abstraction for periods of time. Stakeholders require lead time to prepare for potential restrictions of their consented takes.

Groundwater models have been used in the past to support groundwater decision making and to provide the corresponding predictions of groundwater levels for operational forecasting and management. In this study, we benchmark and compare different model classes to perform this task: (i) a spatially explicit 3D groundwater flow model (MODFLOW), (ii) a conceptual, bucket-type Eigenmodel, (iii) a transfer-function model (TFN), and (iv) three machine learning (ML) techniques, namely Multi-Layer Perceptron (MLP) models, Long Short-Term Memory (LSTM) models, and Random Forest (RF) models. The model classes differ widely in their complexity, input requirements, calibration effort, and run-times. The different model classes are tested on four groundwater head time series taken from the Wairau Aquifer in New Zealand (Wöhling et al., 2020). Posterior parameter ensembles of MODFLOW (Wöhling et al., 2018) and the Eigenmodel (Wöhling & Burbery, 2020) were combined with TFN and ML variants with different input features to form a (prior) multi-model ensemble. Model classes are ranked with posterior model weights derived from Bayesian model selection (BMS) and averaging (BMA) techniques.

Our results demonstrate that no "model that fits all" exists in our model set. The more physics-based MODFLOW model does not necessarily provide the most accurate predictions, but it can provide physical meaning and interpretation for the entire model region and outputs at locations where no data are available. ML techniques generally have much lower input requirements and short run-times. They prove to be competitive candidates for groundwater head predictions where observations are available, even for system states that lie outside the calibration data range.

Because the performance of model types is site-specific, we advocate the use of multi-model ensemble forecasting wherever feasible. The benefit is illustrated by our case study, with BMA uncertainty bounds providing a better coverage of the data and the BMA mean performing well for all tested sites. Redundant ensemble members (with BMA weights of zero) are easily filtered out to obtain efficient ensembles for operational forecasting.
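
A toy illustration of ranking ensemble members by posterior weights and filtering near-zero members, assuming Gaussian likelihoods on a calibration split; the ensemble, error magnitudes, and weighting scheme are invented and far simpler than the BMS/BMA machinery of the study:

```python
import numpy as np

rng = np.random.default_rng(12)

# Hypothetical ensemble: predictions of 6 model classes at n_t times, plus observations.
n_models, n_t = 6, 300
obs = 10 + np.sin(np.linspace(0, 12, n_t))
preds = obs + rng.normal(0, rng.uniform(0.1, 1.5, (n_models, 1)), (n_models, n_t))

# Crude posterior weights from Gaussian likelihoods on a calibration split (first half).
half = n_t // 2
resid = preds[:, :half] - obs[:half]
sig = resid.std(axis=1, keepdims=True)
loglik = (-0.5 * np.sum((resid / sig) ** 2, axis=1)
          - half * np.log(sig[:, 0]))
w = np.exp(loglik - loglik.max())
w /= w.sum()
print("weights:", w.round(3))                      # near-zero members can be dropped

bma_mean = w @ preds[:, half:]                     # weighted forecast on the hold-out
print("hold-out RMSE of weighted mean: %.3f" % np.sqrt(np.mean((bma_mean - obs[half:]) ** 2)))
```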

 

References

Wöhling T, Burbery L (2020). Eigenmodels to forecast groundwater levels in unconfined river-fed aquifers during flow recession. Science of the Total Environment, 747, 141220, doi: 10.1016/j.scitotenv.2020.141220.

Wöhling, T., Gosses, M., Wilson, S., Wadsworth, V., Davidson, P. (2018). Quantifying river-groundwater interactions of New Zealand's gravel-bed rivers: The Wairau Plain. Groundwater, doi:10.1111/gwat.12625.

Wöhling T, Wilson SR, Wadsworth V, Davidson P. (2020). Detecting the cause of change using uncertain data: Natural and anthropogenic factors contributing to declining groundwater levels and flows of the Wairau Plain Aquifer, New Zealand. Journal of Hydrology: Regional Studies, 31, 100715, doi: 10.1016/j.ejrh.2020.100715.

 

How to cite: Wöhling, T. and Crespo Delgadillo, O.: Predicting groundwater heads in alluvial aquifers: Benchmarking different model classes and machine-learning techniques with BMA/S, EGU General Assembly 2024, Vienna, Austria, 14–19 Apr 2024, EGU24-8818, https://doi.org/10.5194/egusphere-egu24-8818, 2024.

A.47 | EGU24-11414 | ECS
Muhammad Nabeel Usman, Jorge Leandro, Karl Broich, and Markus Disse

Flash floods have become one of the major natural hazards in central Europe, and climate change projections indicate that their frequency and severity will increase in many areas across the world, including central Europe. The complexity involved in flash flood generation makes it difficult to calibrate a hydrological model for the prediction of such peak hydrological events. This study investigates the best approach to calibrate an event-based conceptual HBV model, comparing different trials of single-objective, single-event multi-objective (SEMO), and multi-event multi-objective (MEMO) model calibrations. Initially, three trials of single-objective calibration are performed w.r.t. RMSE, NSE, and BIAS separately; then three different trials of multi-objective optimization, i.e., SEMO-3D (single event, three objectives), MEMO-3D (mean of three objectives from two events), and MEMO-6D (two events, six objectives), are formulated. Model performance was validated for several peak events via 90% confidence interval (CI)-based output uncertainty quantification. The uncertainties associated with the model predictions are estimated stochastically using the relative errors (REs) between the simulated (Qsim) and measured (Qobs) discharges as a likelihood measure. Single-objective model calibration demonstrated that significant trade-offs exist between different objective functions, and no unique parameter set can optimize all objectives simultaneously. Compared to the solutions of single-objective calibration, all the multi-objective calibration formulations produced relatively accurate and robust results during both model calibration and validation phases. The uncertainty intervals associated with all the trials of single-objective calibration and the SEMO-3D calibration failed to capture observed peaks of the validation events. The uncertainty bands associated with the ensembles of Pareto solutions from the MEMO-3D and MEMO-6D (six-dimensional) calibrations displayed better performance in reproducing and capturing more significant peak validation events. However, to bracket peaks of large flash flood events within the prediction uncertainty intervals, the MEMO-6D optimization outperformed all the single-objective, SEMO-3D, and MEMO-3D multi-objective calibration methods. This study suggests that MEMO-6D is the best approach for predicting large flood events with lower model output uncertainties when the calibration is performed with a suitable combination of peak events.

How to cite: Usman, M. N., Leandro, J., Broich, K., and Disse, M.: Single vs. multi-objective optimization approaches to calibrate an event-based conceptual hydrological model using model output uncertainty framework, EGU General Assembly 2024, Vienna, Austria, 14–19 Apr 2024, EGU24-11414, https://doi.org/10.5194/egusphere-egu24-11414, 2024.

A.48 | EGU24-12676 | ECS
Amirhossein Ershadi, Michael Finkel, Binlong Liu, Olaf Cirpka, and Peter Grathwohl

Column leaching tests are a common approach for evaluating the leaching behavior of contaminated soil and waste materials, which are often reused for various construction purposes. The observed breakthrough curves of the contaminants are affected by the intricate dynamics of solute transport, inter-phase mass transfer, and dispersion. Disentangling these interactions requires numerical models. However, inverse modeling and parameter sensitivity analysis are often time-consuming, especially when sorption/desorption kinetics are explicitly described by intra-particle diffusion, requiring the discretization along the column axis and inside the grains. To replace such computationally expensive models, we developed a machine-learning based surrogate model employing two disparate ensemble methods (stacking and weighted distance average) within the defined parameter range based on the German standard for column leaching tests. To optimize the surrogate model, adaptive sampling methods based on three distinct infill criteria are employed. These criteria include maximizing expected improvement, the Mahalanobis distance (exploitation), and maximizing standard deviation (exploration).
The stacking surrogate model uses extremely randomized trees as the base model and a random forest as the meta-model. It emulates the behavior of the original numerical model very well (relative root mean squared error = 0.09).
Our proposed surrogate model has been applied to estimate the complete posterior parameter distribution using Markov chain Monte Carlo simulation. The impact of individual input parameters on the predictions generated by the surrogate model was analyzed using the SHapley Additive exPlanations (SHAP) method.
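
A minimal sketch of such a stacking surrogate with a standard-deviation (exploration) infill criterion, using scikit-learn, might look as follows; the hyperparameters and the single-base-learner setup are illustrative assumptions, not the authors' configuration.

```python
# Illustrative sketch, not the authors' implementation: extremely
# randomized trees as base learner, a random forest as meta-model, and
# a "maximize standard deviation" (exploration) infill criterion.
import numpy as np
from sklearn.ensemble import ExtraTreesRegressor, RandomForestRegressor, StackingRegressor

surrogate = StackingRegressor(
    estimators=[("extra_trees", ExtraTreesRegressor(n_estimators=200, random_state=0))],
    final_estimator=RandomForestRegressor(n_estimators=200, random_state=0),
)

def next_sample_by_std(fitted_forest, X_candidates):
    """Pick the candidate point where the per-tree predictions of a
    fitted tree ensemble disagree most (largest standard deviation)."""
    per_tree = np.stack([t.predict(X_candidates) for t in fitted_forest.estimators_])
    return X_candidates[np.argmax(per_tree.std(axis=0))]

# Usage (X_train, y_train from expensive model runs; X_cand sampled from
# the parameter range of the standard column leaching test):
# surrogate.fit(X_train, y_train)
# x_new = next_sample_by_std(surrogate.named_estimators_["extra_trees"], X_cand)
```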

How to cite: Ershadi, A., Finkel, M., Liu, B., Cirpka, O., and Grathwohl, P.: Physics-Informed Ensemble Surrogate Modeling of Advective-Dispersive Transport Coupled with Film Intraparticle Pore Diffusion Model for Column Leaching Test, EGU General Assembly 2024, Vienna, Austria, 14–19 Apr 2024, EGU24-12676, https://doi.org/10.5194/egusphere-egu24-12676, 2024.

A.49
|
EGU24-13393
|
ECS
Guoqiang Tang, Andrew Wood, Andrew Newman, Martyn Clark, and Simon Papalexiou

Ensemble gridded meteorological datasets are critical for driving hydrology and land models, enabling uncertainty analysis, and supporting a variety of hydroclimate research and applications. The Gridded Meteorological Ensemble Tool (GMET) has been a significant contributor in this domain, offering an accessible platform for generating ensemble precipitation and temperature datasets. The GMET methodology has continually evolved since its initial development in 2006, primarily in the form of a Fortran code base, and has since been used to generate historical and real-time ensemble meteorological (model forcing) datasets in the U.S. and parts of Canada. A recent adaptation of GMET was used to produce multi-decadal forcing datasets for North America and the globe (EMDNA and EM-Earth, respectively). Those datasets have supported diverse hydrometeorological applications, such as streamflow forecasting and hydroclimate studies across various scales. GMET has now evolved into a Python package called the Geospatial Probabilistic Estimation Package (GPEP), which offers methodological and technical enhancements relative to GMET. These include greater flexibility in variable selection, intrinsic parallelization, and notably a broader suite of estimation methods, including techniques from the scikit-learn machine learning library. GPEP enables a wider variety of strategies for local and global estimation of geophysical variables beyond traditional hydrological forcings. This presentation summarizes GPEP and introduces major open-access ensemble datasets that have been generated with GMET and GPEP, including a new effort to create high-resolution (2 km) surface meteorological analyses for the U.S. These resources are useful in advancing hydrometeorological uncertainty analysis and geospatial estimation.
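
As a generic illustration of ensemble gridded estimation in the spirit of GMET/GPEP (hypothetical code, not the GPEP API): regress station observations on terrain predictors, then draw ensemble members from the fitted mean and residual spread.

```python
# Hypothetical sketch, not the GPEP API: ensemble gridded estimation by
# regression on predictors (e.g., elevation, slope) plus stochastic noise.
import numpy as np
from sklearn.linear_model import LinearRegression

def ensemble_grid(X_stations, y_stations, X_grid, n_members=30, seed=None):
    rng = np.random.default_rng(seed)
    model = LinearRegression().fit(X_stations, y_stations)
    resid_std = np.std(y_stations - model.predict(X_stations))
    mean = model.predict(X_grid)
    # Independent noise per grid cell for simplicity; operational tools
    # use spatially and temporally correlated random fields instead.
    return mean + rng.normal(0.0, resid_std, size=(n_members, mean.size))
```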

How to cite: Tang, G., Wood, A., Newman, A., Clark, M., and Papalexiou, S.: Datasets and tools for local and global meteorological ensemble estimation, EGU General Assembly 2024, Vienna, Austria, 14–19 Apr 2024, EGU24-13393, https://doi.org/10.5194/egusphere-egu24-13393, 2024.

A.50
|
EGU24-16086
|
ECS
Fernand Baguket Eloundou, Lukas Strebel, Bibi S. Naz, Christian Poppe Terán, Harry Vereecken, and Harrie-Jan Hendricks Franssen

The Community Land Model version 5 (CLM5) integrates processes encompassing the water, energy, carbon, and nitrogen cycles, as well as ecosystem dynamics, including managed ecosystems such as agriculture. However, the complexity of CLM5 introduces predictive uncertainty arising from factors such as input data, process parameterizations, and parameter values. This study compares CLM5 ensemble simulations with eddy-covariance and in-situ measurements, focusing on the effects of uncertain model parameters and atmospheric forcings on the water, carbon, and energy cycles.
Ensemble simulations for 14 European experimental sites were performed with the CLM5-BGC model, integrating the biogeochemistry component. In four perturbation experiments, we explore uncertainties arising from atmospheric forcing data, soil parameters, vegetation parameters, and the combined effects of these factors. The contribution of different uncertainty sources to total simulation uncertainty was analyzed by comparing the 99% confidence intervals from ensemble simulations with measured terrestrial states and fluxes, using a three-way analysis of variance.
The study identifies that soil parameters primarily influence the uncertainty in estimating surface soil moisture, while uncertain vegetation parameters control the uncertainty in estimating evapotranspiration and carbon fluxes. A combination of uncertainty in atmospheric forcings and vegetation parameters explains most of the uncertainty in sensible heat flux estimation. On average, the 99% confidence intervals envelop >40% of the observed fluxes, but this varies greatly between sites, exceeding 95% in some cases. For some sites, we could identify model structural errors related to model spin-up assumptions or erroneous plant phenology. The study provides guidance for identifying factors, such as crop parameterization or spin-up, that cause under- or overestimation of flux variability, and for detecting potential structural errors in point-scale CLM5 simulations.
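
A schematic of the variance decomposition for such a three-factor perturbation ensemble is sketched below (our simplification, not the study's code); it treats the ensemble as a fixed-effects three-way design and reports main-effect shares of the total output variance.

```python
# Schematic sketch, not the study's code: main-effect variance shares
# for an output indexed by (forcing, soil, vegetation) ensemble members.
import numpy as np

def main_effect_shares(y):
    """y: array of shape (n_forcing, n_soil, n_veg) for one flux/state."""
    total = y.var()
    shares = {}
    for name, other_axes in [("forcing", (1, 2)), ("soil", (0, 2)), ("vegetation", (0, 1))]:
        # Variance of the factor means, averaging over the other factors
        shares[name] = y.mean(axis=other_axes).var() / total
    shares["interactions"] = 1.0 - sum(shares.values())
    return shares
```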

How to cite: Eloundou, F. B., Strebel, L., Naz, B. S., Terán, C. P., Vereecken, H., and Hendricks Franssen, H.-J.: Disentangling the role of different sources of uncertainty and model structural error on predictions of water and carbon fluxes with CLM5 for European observation sites, EGU General Assembly 2024, Vienna, Austria, 14–19 Apr 2024, EGU24-16086, https://doi.org/10.5194/egusphere-egu24-16086, 2024.

A.51
|
EGU24-5219
|
ECS
Anneli Guthke, Philipp Reiser, and Paul-Christian Bürkner

Proper sensitivity and uncertainty analysis for complex Earth and environmental systems models may become computationally prohibitive. Surrogate models can be an alternative to enable such analyses: they are cheap-to-run statistical approximations to the simulation results of the original expensive model. Several approaches to surrogate modelling exist, all with their own challenges and uncertainties. It is crucial to correctly propagate the uncertainties related to surrogate modelling to predictions, inference and derived quantities in order to draw the right conclusions from using the surrogate model.

While the uncertainty in surrogate-model parameters due to limited training data (expensive simulation runs) is often accounted for, the approximation error due to the surrogate's structure (its bias in reproducing the original model's predictions) is typically ignored. The reasons are that such a full uncertainty analysis is computationally costly even for surrogates (or limited to oversimplified analytic cases), and that a comprehensive framework for uncertainty propagation with surrogate models has been missing.

With this contribution, we propose a fully Bayesian approach to surrogate modelling, uncertainty propagation, parameter inference, and uncertainty validation. We illustrate the utility of our approach with two synthetic case studies of parameter inference and validate the inferred posterior distributions by simulation-based calibration. For Bayesian inference, the correct propagation of surrogate uncertainty is especially relevant, because failing to account for it may lead to biased and/or overconfident parameter estimates and will undermine subsequent interpretation in the physical context or application of the expensive simulation model.

Consistent and comprehensive uncertainty propagation in surrogate models enables more reliable approximation of expensive simulations and will therefore be useful in various fields of application, such as surface or subsurface hydrology, fluid dynamics, or soil hydraulics.
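
A conceptual sketch of the key ingredient, in our notation rather than the authors' framework, is to add the surrogate's own predictive variance to the observation error inside the likelihood used for inference:

```python
# Conceptual sketch, not the authors' framework: a Gaussian-process
# surrogate whose predictive variance is propagated into the likelihood.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

def log_likelihood(theta, gp, y_obs, sigma_obs):
    """Log-likelihood of a scalar observation y_obs at parameters theta,
    with surrogate approximation uncertainty added to the data error."""
    mu, sd = gp.predict(np.atleast_2d(theta), return_std=True)
    var = sigma_obs ** 2 + sd[0] ** 2   # total variance: data + surrogate
    return -0.5 * ((y_obs - mu[0]) ** 2 / var + np.log(2.0 * np.pi * var))
```

This log-likelihood can be passed to any standard MCMC sampler; dropping the surrogate term recovers the common plug-in approach, which tends to produce overconfident posteriors.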

How to cite: Guthke, A., Reiser, P., and Bürkner, P.-C.: Quantifying Uncertainty in Surrogate-based Bayesian Inference, EGU General Assembly 2024, Vienna, Austria, 14–19 Apr 2024, EGU24-5219, https://doi.org/10.5194/egusphere-egu24-5219, 2024.

A.52
|
EGU24-2267
|
ECS
Roya Mourad, Gerrit Schoups, and Wim Bastiaanssen

Remote sensing observations hold useful prior information about the terrestrial water cycle. However, simply combining remote sensing products for each hydrological variable does not close the water balance, owing to the uncertainties associated with each product. There is therefore a need to quantify bias and random errors in the data. This study presents an extended version of data-driven probabilistic data fusion for closing the water balance at the basin scale. In this version, we implement a monthly, 250 m, grid-based Bayesian hierarchical model that leverages multiple open-source datasets of precipitation, evaporation, and storage in an ensemble approach that fully exploits the prior information content of the data. The model relates each variable in the water balance to its "true" value using bias and random-error parameters with physical nonnegativity constraints. The water balance variables and error parameters are treated as unknown random variables with specified prior distributions. Given an independent set of ground-truth data on water imports and river discharge, along with all monthly gridded water balance data, the model is solved using a combination of Markov chain Monte Carlo sampling and iterative smoothing to compute posterior distributions of all unknowns. The approach is applied to the Hindon Basin, a tributary of the Ganges River that suffers from groundwater overexploitation and depends on surface water imports. The results provide spatially distributed (i) hydrologically consistent water balance estimates and (ii) statistically consistent error estimates of the water balance data.
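
A toy illustration of the closure idea (ours, not the study's hierarchical model): infer multiplicative bias factors on gridded precipitation P and evaporation E such that the corrected balance P/aP - E/aE - dS matches observed discharge Q, using a random-walk Metropolis sampler.

```python
# Toy sketch, not the study's model: Metropolis inference of bias
# factors (aP, aE) from the water-balance residual P/aP - E/aE - dS - Q.
import numpy as np

def log_post(a, P, E, dS, Q, sigma=5.0):
    aP, aE = a
    if aP <= 0.0 or aE <= 0.0:          # nonnegativity constraint (prior)
        return -np.inf
    resid = P / aP - E / aE - dS - Q
    return -0.5 * np.sum(resid ** 2) / sigma ** 2

def metropolis(P, E, dS, Q, n_iter=20000, step=0.02, seed=None):
    rng = np.random.default_rng(seed)
    a = np.array([1.0, 1.0])            # start from "no bias"
    lp = log_post(a, P, E, dS, Q)
    chain = np.empty((n_iter, 2))
    for i in range(n_iter):
        prop = a + rng.normal(0.0, step, size=2)
        lp_prop = log_post(prop, P, E, dS, Q)
        if np.log(rng.uniform()) < lp_prop - lp:
            a, lp = prop, lp_prop
        chain[i] = a
    return chain                         # posterior samples of (aP, aE)
```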

How to cite: Mourad, R., Schoups, G., and Bastiaanssen, W.: A grid-based data-driven ensemble probabilistic data fusion: a water balance closure approach applied to the irrigated Hindon River Basin, India, EGU General Assembly 2024, Vienna, Austria, 14–19 Apr 2024, EGU24-2267, https://doi.org/10.5194/egusphere-egu24-2267, 2024.