Displays

HS8.1.6

NOTE: We are delighted to have Prof. Peter Reichert from the Department of Systems Analysis, Integrated Assessment and Modelling at the Eawag as our invited speaker.

Proper characterization of uncertainty remains a major challenge and is inherent to many aspects of modelling, such as structural development, hypothesis testing, parameter estimation, and the adequate characterization of parameters, forcing data, and initial and boundary conditions. Useful methods for addressing this challenge include uncertainty analysis (UA), sensitivity analysis (SA) and inversion (calibration), whether performed in Bayesian, geostatistical or conventional frameworks.
This session invites contributions that discuss advances, both in theory and/or application, in methods for SA/UA and inversion applicable to all Earth and Environmental Systems Models (EESMs). This includes all areas of hydrology, such as classical hydrology, subsurface hydrology and soil science. Topics of interest include (but are not limited to):

1) Novel methods for effective characterization of sensitivity and uncertainty,
2) Novel approaches for parameter estimation, data inversion and data assimilation,
3) Novel methods for spatial and temporal evaluation/analysis of models,
4) Single- versus multi-criteria SA/UA/inversion,
5) The role of data information and error on SA/UA (e.g., input/output error, model structure error, worth of data etc.), and
6) Improving the computational efficiency of SA/UA/inversion (efficient sampling, surrogate modelling, parallel computing, model pre-emption, etc.).

Contributions addressing any or all aspects of sensitivity/uncertainty, including those related to structural development, hypothesis testing, parameter estimation, data assimilation, forcing data, and initial and boundary conditions are invited.

Public information:
Presenters:

08:30 to 08:39 :: Welcome and introduction
08:39 to 08:45 :: Peter Reichert (invited)
08:45 to 08:51 :: Gabriele Baroni
08:51 to 08:57 :: Valentina Svitelman
08:57 to 09:03 :: Monica Riva
09:03 to 09:09 :: Trine Enemark
09:09 to 09:15 :: Charles Luce
09:15 to 09:21 :: Raphael Schneider
09:21 to 09:27 :: Lisa Watson
09:27 to 09:33 :: Mara Meggiorin
09:33 to 09:39 :: Gabrielle Rudi
09:39 to 09:45 :: Anna E. Sikorska-Senoner
09:45 to 09:51 :: Mariaines Di Dato
09:51 to 09:57 :: Imane Farouk
09:57 to 10:03 :: Sabine M. Spiessl
10:03 to 10:09 :: Robin Schwemmle
10:09 to 10:15 :: Falk Heße

Convener: Wolfgang Nowak | Co-conveners: Hoshin Gupta, Amin Haghnegahdar, Juliane Mai, Cristina Prieto, Saman Razavi, Thomas Wöhling
Displays | Attendance Mon, 04 May, 08:30–10:15 (CEST)

Files for download

Session materials Download all presentations (55MB)

Chat time: Monday, 4 May 2020, 08:30–10:15

Chairpersons: Cristina Prieto and Thomas Wöhling
D455 | EGU2020-6765 | solicited
Peter Reichert, Lorenz Ammann, and Fabrizio Fenicia

The same observed precipitation falling onto seemingly the same initial state of a catchment will not lead to the same streamflow. The following causes contribute to this non-deterministic behavior: (i) Unobserved spatial heterogeneity and limited time resolution of rainfall and other climatic observations limit the accuracy of observing the true input and other influencing factors. (ii) The knowledge about the initial state of the hydrological system is even more incomplete than that about the input. (iii) Temporal changes in catchment properties that are not, or not accurately, described by the model also affect its response. As the same observed input can lead to different, unobserved internal states that affect streamflow for quite some time after a precipitation event, a description of such a system exclusively through input and output errors does not capture all relevant mechanisms. The description of such non-deterministic behavior (at the resolution of input and output observations) requires a stochastic model. To account for this apparent stochasticity of the system while still exactly maintaining mass balances, mass transfer processes should be made stochastic rather than the mass balance equations. This can easily be done by turning the parameters of a deterministic hydrological model into stochastic processes in time. As an additional advantage of this approach, the inferred time series of the parameters can be used to find relationships to input and model states that can (and have to) be used to improve the underlying hydrological model. On the other hand, the additional degrees of freedom for parameter estimation can lead to overparameterization, non-identifiability, and even “misuse” of “stochasticity” by “shifting” mechanistic relationships to the time-dependent parameters. These potential drawbacks require a very careful analysis.

In this talk, we will briefly review the methodology of stochastic, time-dependent parameters and investigate the potential and challenges of the suggested approach with a case study. In particular, we will demonstrate how we can learn about model deficits and how to reduce them, how incautious application of the methodology can lead to (very) poor predictions, and how predictive cross-validation can help identify whether the time-dependence of the parameters was “misused” to represent relationships that were not considered in the model or whether it can be assumed to represent true randomness.
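
The core idea can be illustrated with a minimal sketch (not the authors' implementation or inference scheme): the outflow parameter of a hypothetical single linear reservoir is turned into an Ornstein-Uhlenbeck process in time, so that the mass-transfer process becomes stochastic while the mass balance itself stays exact.

```python
import numpy as np

rng = np.random.default_rng(42)

def ou_parameter(n_steps, dt, mean, sd, tau):
    """Ornstein-Uhlenbeck process: a time-varying parameter fluctuating around `mean`."""
    k = np.empty(n_steps)
    k[0] = mean
    for t in range(1, n_steps):
        k[t] = k[t-1] + (mean - k[t-1]) * dt / tau + sd * np.sqrt(2 * dt / tau) * rng.standard_normal()
    return k

def linear_reservoir(precip, k, dt=1.0, s0=10.0):
    """Hypothetical single reservoir; only the outflow-rate parameter k(t) is stochastic."""
    storage, flow = s0, np.empty(len(precip))
    for t, p in enumerate(precip):
        q = k[t] * storage          # stochastic transfer process, not stochastic mass balance
        storage += (p - q) * dt     # the mass balance stays exact
        flow[t] = q
    return flow

n = 365
precip = rng.exponential(2.0, n) * (rng.random(n) < 0.3)          # synthetic rainfall
k_t = np.clip(ou_parameter(n, 1.0, mean=0.05, sd=0.01, tau=20.0), 1e-4, None)
q_sim = linear_reservoir(precip, k_t)
print(q_sim[:5])
```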

How to cite: Reichert, P., Ammann, L., and Fenicia, F.: Potential and Challenges of Investigating Intrinsic Uncertainty of Hydrological Models with Stochastic, Time-Dependent Parameters, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-6765, https://doi.org/10.5194/egusphere-egu2020-6765, 2020.

D456 | EGU2020-200
Robin Schwemmle, Dominic Demand, and Markus Weiler

A better understanding of what causes hydrological model performance to be “poor” or “good” is crucial for a diagnostically meaningful evaluation approach. However, current evaluation efforts are mostly based on aggregated efficiency measures such as the Kling-Gupta Efficiency (KGE) and the Nash-Sutcliffe Efficiency (NSE). These aggregated measures only allow distinguishing between “poor” and “good” model performance. Especially in the case of “poor” model performance, it is important to identify the errors which may have caused such unsatisfactory simulations. These errors may have their origin in the model parameters, the model structure, and/or the input data. In order to provide insight into the origin of the error, we define three error types which may be related to its source: constant error (e.g. caused by consistent precipitation overestimation), dynamic error (e.g. caused by deficient vertical redistribution) and timing error (e.g. caused by the precipitation or infiltration routine). Based on these error types, we propose the novel Diagnostic Efficiency (DE) measure, which accounts for the three error types by representing them in three individual metric components. The disaggregation of DE into its three metric components can be used for visualization in a 2-D space using a diagnostic polar plot. A major advantage of this visualization technique is that regions of error terms can be clearly distinguished from each other. In order to prove our concept, we first systematically generated errors by mimicking the three error types (i.e. simulations are calculated by manipulating observations). Secondly, by computing DE and the related diagnostic polar plots for the mimicked errors, we could supply evidence for the concept. Moreover, we tested our approach on a real case example using the CAMELS dataset. In particular, we compared streamflow simulations of a single catchment realized with different parameter sets to the observed streamflow. For this real case example the diagnostic polar plot suggests that dynamic errors explain the model performance to a large extent. With the proposed evaluation approach, we aim to provide a diagnostic tool for model developers and model users. In particular, the diagnostic polar plot enables hydrological interpretation of the proposed performance measure.
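
For context, the sketch below computes the standard KGE and NSE (not the authors' DE measure, whose formulation is given in the presentation) for three synthetic simulations obtained by manipulating an observation-like series to mimic the constant, dynamic and timing error types; the aggregated scores alone do not reveal which error type is present.

```python
import numpy as np

def nse(obs, sim):
    """Nash-Sutcliffe efficiency."""
    return 1 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)

def kge(obs, sim):
    """Kling-Gupta efficiency (2009 formulation)."""
    r = np.corrcoef(obs, sim)[0, 1]
    alpha = sim.std() / obs.std()
    beta = sim.mean() / obs.mean()
    return 1 - np.sqrt((r - 1) ** 2 + (alpha - 1) ** 2 + (beta - 1) ** 2)

rng = np.random.default_rng(0)
obs = np.convolve(rng.exponential(1.0, 400), np.exp(-np.arange(30) / 5), mode="same") + 0.1

# mimic the three error types by manipulating the observations
constant_err = obs + 0.5                               # e.g. consistent overestimation
dynamic_err = obs.mean() + 1.5 * (obs - obs.mean())    # flows too flashy
timing_err = np.roll(obs, 3)                           # everything three steps too late

for name, sim in [("constant", constant_err), ("dynamic", dynamic_err), ("timing", timing_err)]:
    print(f"{name:9s}  NSE={nse(obs, sim):5.2f}  KGE={kge(obs, sim):5.2f}")
```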

How to cite: Schwemmle, R., Demand, D., and Weiler, M.: Diagnostic efficiency - a diagnostic approach for model evaluation , EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-200, https://doi.org/10.5194/egusphere-egu2020-200, 2020.

D457 | EGU2020-5540
Gabriele Baroni and Till Francke

Global sensitivity analysis has been recognized as a fundamental tool to assess the input-output model response and evaluate the role of different sources of uncertainty. Among the different methods, variance-based and distribution-based (also called moment-independent) methods have mostly been applied. The first relies on variance decomposition, while the second compares entire distributions. Combining both methods has also been recognized as potentially providing a better assessment. However, the methods rely on different assumptions and the comparison of their indices is not straightforward. For these reasons, the methods are commonly not integrated, or are even regarded as alternative solutions.

In the present contribution, we show a new strategy to combine the two methods in an effective way to perform a comprehensive global sensitivity analysis based on a generic sampling design. The strategy is tested on three commonly-used analytic functions and one hydrological model. The strategy is compared to the state-of-the-art Jansen/Saltelli approach.

The results show that the new strategy quantifies main effects and interactions consistently. It also outperforms current best practices by converging with a lower number of model runs. For these reasons, the new strategy can be considered a new and simple approach to perform global sensitivity analysis that can easily be integrated into any environmental model.
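
For reference, the benchmark mentioned above can be sketched as follows: the Saltelli (2010) first-order and Jansen total-effect estimators, here applied to the Ishigami function as an assumed stand-in for the analytic test cases (the abstract does not name them).

```python
import numpy as np

def ishigami(x, a=7.0, b=0.1):
    """Common analytic test function for sensitivity analysis (not necessarily the authors' choice)."""
    return np.sin(x[:, 0]) + a * np.sin(x[:, 1]) ** 2 + b * x[:, 2] ** 4 * np.sin(x[:, 0])

rng = np.random.default_rng(1)
n, d = 50_000, 3
A = rng.uniform(-np.pi, np.pi, (n, d))
B = rng.uniform(-np.pi, np.pi, (n, d))
yA, yB = ishigami(A), ishigami(B)
var = np.var(np.concatenate([yA, yB]))

for i in range(d):
    ABi = A.copy()
    ABi[:, i] = B[:, i]                           # radial sampling design
    yABi = ishigami(ABi)
    Si = np.mean(yB * (yABi - yA)) / var          # Saltelli (2010) first-order estimator
    STi = 0.5 * np.mean((yA - yABi) ** 2) / var   # Jansen total-effect estimator
    print(f"x{i+1}: S={Si:.2f}  ST={STi:.2f}")
```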

How to cite: Baroni, G. and Francke, T.: A comprehensive global sensitivity analysis using generic sampling designs by means of a combination of variance- and distribution-based approaches., EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-5540, https://doi.org/10.5194/egusphere-egu2020-5540, 2020.

D458 | EGU2020-5085
Marco Dal Molin, Dmitri Kavetski, Mario Schirmer, and Fabrizio Fenicia

One of the open challenges in catchment hydrology is prediction in ungauged basins (PUB), i.e. being able to predict catchment responses (typically streamflow) when measurements are not available. One possible approach to this problem consists of calibrating a model using catchment response statistics (called signatures) that can be estimated at the ungauged site.
An important challenge of any approach to PUB is to produce reliable and precise predictions of catchment response, with an accurate estimation of the uncertainty. In the context of PUB through calibration on regionalized streamflow signatures, there are multiple sources of uncertainty that affect streamflow predictions, which relate to:

  • The use of streamflow signatures, which, by synthesizing the underlying time series, reduces the information available for model calibration;
  • The regionalization of streamflow signatures, which are not observed, but estimated through some signature regionalization model;
  • The use of a rainfall-runoff model, which carries uncertainties related to input data, parameter values, and model structure.

This study proposes an approach that accounts for the uncertainty related to the regionalization of the signatures separately from the other types; the implementation uses Approximate Bayesian Computation (ABC) to infer the parameters of the rainfall-runoff model using stochastic streamflow signatures.
The methodology is tested in six sub-catchments of the Thur catchment in Switzerland; results show that the regionalized model produces streamflow time series that are similar to the ones obtained by the classical time-domain calibration, with slightly higher uncertainty but similar fit to the observed data. These results support the proposed approach as a viable method for PUB, with a focus on the correct estimation of the uncertainty.
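
The general ABC idea can be sketched with a hypothetical one-parameter reservoir and two illustrative signatures (the authors' rainfall-runoff model, signature set and regionalization model are, of course, different):

```python
import numpy as np

rng = np.random.default_rng(2)

def toy_model(k, precip):
    """Hypothetical one-parameter linear reservoir standing in for the rainfall-runoff model."""
    s, q = 5.0, np.empty(precip.size)
    for t, p in enumerate(precip):
        q[t] = k * s
        s += p - q[t]
    return q

def signatures(q):
    """Two illustrative streamflow signatures: mean flow and mean absolute day-to-day change."""
    return np.array([q.mean(), np.abs(np.diff(q)).mean()])

precip = rng.exponential(2.0, 365) * (rng.random(365) < 0.3)

# stand-ins for regionalized signature estimates and their regionalization uncertainty
sig_obs = signatures(toy_model(0.15, precip))
sig_sd = 0.1 * np.abs(sig_obs)

prior = rng.uniform(0.01, 0.5, 2000)     # prior draws of the single model parameter k
accepted = [k for k in prior
            if np.all(np.abs(signatures(toy_model(k, precip)) - sig_obs) < 2 * sig_sd)]

print(f"accepted {len(accepted)} of {prior.size} draws; posterior mean k = {np.mean(accepted):.3f}")
```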

How to cite: Dal Molin, M., Kavetski, D., Schirmer, M., and Fenicia, F.: Exploring signatures-based calibration of hydrological models for prediction in ungauged basins., EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-5085, https://doi.org/10.5194/egusphere-egu2020-5085, 2020.

D459 | EGU2020-10434
Martine van der Ploeg and Attila Nemes

Soil hydro-physical properties — such as soil water retention, (un)saturated hydraulic conductivity, shrinkage and swelling, organic matter content, texture (particle distribution), structure (soil aggregation/pore structure) and bulk density — are used in many (sub)surface modeling applications. Reliable soil hydro-physical properties are key to proper predictions with such models, yet the harmonization and standardization of these properties has not received much attention. Lack of harmonization and standardization may lead to heterogeneity in data as a result of differences in methodologies, rather than real landscape heterogeneity. A need and scope have been identified to better harmonize, innovate, and standardize methodologies for measuring soil hydraulic properties, which form the information base of many derived products in support of EU policy. With this identified need in mind, the Soil Program on Hydro-Physics via International Engagement (SOPHIE) was initiated in 2017. Besides developing new activities that may advise future measurements, we also explore historic data and metadata and mine their relevant contents. The European Hydro-pedological Data Inventory (EU-HYDI), the largest European database on measured soil hydro-physical properties, is – to date – rather under-explored in this sense, which served as motivation for this work.

From EU-HYDI we selected those records that were complete for soil texture, bulk density and organic matter, and fitted pedo-transfer functions separately for particular water retention points (at heads of 0, 2.5, 10, 100, 300, 1000, 3000, 15000 cm) and saturated hydraulic conductivity by multi-linear regression. We then subtracted the observed retention and hydraulic conductivity values from their estimated counterparts, and grouped the residuals by measurement methodology. The results show that there can be significant differences between the methodologies and sample sizes used to obtain water retention and hydraulic conductivity in the laboratory. They thus show that the EU data that may underlie large-scale modelling can introduce errors in the forcing data that are attributable to a lack of harmonization and standardization in currently used measurement protocols.
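
The regression-and-residual analysis can be sketched on synthetic data as follows; the predictors, retention values and laboratory method labels below are invented for illustration and are not EU-HYDI data.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 500

# synthetic stand-ins for the predictors: sand %, clay %, bulk density, organic matter %
X = np.column_stack([rng.uniform(5, 90, n), rng.uniform(2, 50, n),
                     rng.uniform(1.1, 1.7, n), rng.uniform(0.5, 6, n)])
method = rng.choice(["sand box", "pressure plate"], n)       # hypothetical lab methods
# synthetic water retention at one pressure head, with a method-dependent offset
theta = (0.45 - 0.002 * X[:, 0] + 0.003 * X[:, 1] - 0.10 * X[:, 2] + 0.01 * X[:, 3]
         + np.where(method == "pressure plate", 0.02, 0.0) + rng.normal(0, 0.01, n))

# multi-linear pedo-transfer function fitted by least squares (one retention point)
A = np.column_stack([np.ones(n), X])
coef, *_ = np.linalg.lstsq(A, theta, rcond=None)
residuals = theta - A @ coef

# group the residuals by measurement methodology, as described in the abstract
for m in np.unique(method):
    r = residuals[method == m]
    print(f"{m:15s} mean residual = {r.mean():+.4f} (n={len(r)})")
```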

How to cite: van der Ploeg, M. and Nemes, A.: Data forcing errors resulting from lack of harmonization and standardization in measurement methodologies: A comparison of soil hydrophysical data from a large EU database., EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-10434, https://doi.org/10.5194/egusphere-egu2020-10434, 2020.

D460 | EGU2020-11113
Wendy Sharples, Andrew Frost, Ulrike Bende-Michl, Ashkan Shokri, Louise Wilson, Elisabeth Vogel, and Chantal Donnelly

Australia has scarce freshwater resources and is already becoming drier under the impacts of climate change. Climate change impacts and other important hydrological processes occur on multiple temporal and spatial scales, prompting the need for large-scale, high-resolution, multidecadal hydrological models. Large-scale hydrological models rely on accurate process descriptions and inputs to simulate realistic multi-scale processes; however, parameterization is required to account for limitations in observational inputs and sub-grid-scale processes. For example, defining the soil hydraulic boundary conditions at multiple depths using soil input maps at high resolution across an entire continent is subject to uncertainty. A common way to reduce uncertainty associated with static inputs and parameterization, thereby improving model accuracy and reliability, is to optimize the model parameters against a long record of historical data, i.e. calibration. The Australian Bureau of Meteorology’s operational hydrological model (the Australian Water Resources Assessment model: AWRA-L, www.bom.gov.au/water/landscape), which provides real-time monitoring of the continental water balance, is calibrated to a combined performance metric. This metric optimizes model performance against catchment-based streamflow and satellite-based evaporation and soil moisture observations for 295 sites across the country, where 21 separate parameters are calibrated continentally. Using this approach, AWRA-L has been shown to reproduce independent, historical in-situ data accurately across the water balance.

Additionally, the AWRA-L model is being used to project future hydrological fluxes and states using bias corrected meteorological inputs from multiple global climate models. Towards improving AWRA-L’s performance and stability for use in hydrological projections, we aim to generate a set of model parameters that perform well under conditions of climate variability as well as under historical conditions, with a two-stage approach. Firstly, a variance based sensitivity analysis for water balance components (e.g. low/mean/high flow, soil moisture and evapotranspiration) is performed, to rank the most influential parameters affecting the water balance components and to subsequently decrease the number of calibratable parameters, thus decreasing dimensionality and uncertainty in the calibration process. Secondly, the reduced parameter set is put through a multi-objective evolutionary algorithm (Borg MOEA, www.borgmoea.org), to capture the tradeoffs between the water balance component performance objectives. The tradeoffs between the water balance component objective functions and in-situ validation data are examined, including evaluation of performance in: a) Climate zones, b) Seasons, c) Wet and dry periods, and d) Trend reproduction. This comprehensive evaluation was undertaken to choose a model parameterization (or set thereof) which produces reasonable hydrological responses under future climate variability across the water balance. The outcome is a suite of parameter sets with improved performance across varying and non-stationary climate conditions. We propose this approach to improve confidence in hydrological models used to simulate future impacts of climate change.

How to cite: Sharples, W., Frost, A., Bende-Michl, U., Shokri, A., Wilson, L., Vogel, E., and Donnelly, C.: Simulating continental scale hydrology under projected climate change conditions: The search for the optimal parameterization, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-11113, https://doi.org/10.5194/egusphere-egu2020-11113, 2020.

D461 | EGU2020-4485
Valentina Svitelman, Elena Saveleva, Peter Blinov, and Dmitrii Valetov

Safety assessment for a radioactive waste disposal facility is built on a systematic analysis of the long-term performance of natural and engineered barriers, the potential migration of radionuclides from the disposal facility, their movement in the environment and resulting radiation hazards.

Quantitative implementation of such an analysis requires an elaborate set of numerical models (thermo-mechanical, geochemical, groundwater flow and transport, etc.) that are realized in a variety of software tools.

It goes without saying that numerical models for such a complex system are associated with significant uncertainties of diverse origins: lack of site-specific or material-specific data, natural variability of the host geological media, imperfect understanding of the underlying processes, and so on.

The focus of our study is to provide uncertainty assessment, sensitivity analysis and calibration tools for the whole framework of numerical models involved in the safety assessment.

It became apparent on the way toward this goal that we need to balance model-independent and model-tailored solutions. In addition to the expected diversity of input-output formats or objective functions for model calibration, we face limitations in the universality of the methods themselves.

For instance, the choice of a global sensitivity analysis method is conditioned by model linearity, monotonicity, multimodality and asymmetry and, of course, its computational cost.

The selection process of the suitable optimization algorithm for calibration purposes is even more complicated because a universal optimization method is even theoretically impossible, and one algorithm can outperform another only if it is adjusted to the specific problem.

As a result, a sufficient list of sensitivity analysis methods includes correlation and regression analysis, multiple-start perturbation, and variance-based and density-based methods. The set of calibration methods is composed of methods with different search abilities, including swarm intelligence, evolutionary and memetic algorithms, and their hybrids. Hybridization allows one to simultaneously benefit from the exploration (global search) ability of one algorithm and the exploitation (local search) power of another.

It is also worth mentioning that “unfortunate” results of sensitivity analysis or calibration may indicate the necessity of model revision. Examples of such indicators are low sensitivities to empirically significant parameters or optimal parameter values close to the boundaries of the reasonable predefined range.

In light of the above, uncertainty and sensitivity analysis and parameter calibration become not a model-independent final stage of numerical assessment, but an inseparable part of the model development routine.

How to cite: Svitelman, V., Saveleva, E., Blinov, P., and Valetov, D.: Uncertainty analysis tool as part of safety assessment framework: model-independent or model-tailored?, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-4485, https://doi.org/10.5194/egusphere-egu2020-4485, 2020.

D462 | EGU2020-6626
Monica Riva, Aronne Dell'Oca, and Alberto Guadagnini

Modern models of environmental and industrial systems have reached a relatively high level of complexity. The latter aspect could hamper an unambiguous understanding of the functioning of a model, i.e., how it drives relationships and dependencies among inputs and outputs of interest. Sensitivity Analysis tools can be employed to examine this issue.

Global sensitivity analysis (GSA) approaches rest on the evaluation of sensitivity across the entire support within which system model parameters are supposed to vary. In this broad context, it is important to note that the definition of a sensitivity metric must be linked to the nature of the question(s) the GSA is meant to address. These include, for example: (i) which are the most important model parameters with respect to given model output(s)?; (ii) could we set some parameter(s) (thus assisting model calibration) at prescribed value(s) without significantly affecting model results?; (iii) at which space/time locations can one expect the highest sensitivity of model output(s) to model parameters and/or knowledge of which parameter(s) could be most beneficial for model calibration?

The variance-based Sobol’ Indices (e.g., Sobol, 2001) represent one of the most widespread GSA metrics, quantifying the average reduction in the variance of a model output stemming from knowledge of the input. Amongst other techniques, Dell’Oca et al. [2017] proposed a moment-based GSA approach which enables one to quantify the influence of uncertain model parameters on the (statistical) moments of a target model output.

Here, we embed in these sensitivity indices the effect of uncertainties both in the system model conceptualization and in the ensuing model parameters. The study is grounded on the observation that physical processes and the natural systems within which they take place are complex, rendering target state variables amenable to multiple interpretations and mathematical descriptions. As such, predictions and uncertainty analyses based on a single model formulation can result in statistical bias and possible misrepresentation of the total uncertainty, thus justifying the assessment of multiple model system conceptualizations. We then introduce copula-based sensitivity metrics which allow characterizing the global (with respect to the input) value of the sensitivity and the degree of variability (across the whole range of the input values) of the sensitivity for each value that the prescribed model output can possibly take, as driven by a governing model. In this sense, such an approach to sensitivity is global with respect to model input(s) and local with respect to model output, thus enabling one to discriminate the relevance of an input across the entire range of values of the modeling goal of interest. The methodology is demonstrated in the context of flow and reactive transport scenarios.

 

References

Sobol, I. M., 2001. Global sensitivity indices for nonlinear mathematical models and their Monte Carlo estimates. Math. Comput. Sim., 55, 271-280.

Dell’Oca, A., Riva, M., Guadagnini, A., 2017. Moment-based metrics for global sensitivity analysis of hydrological systems. Hydr. Earth Syst. Sci., 21, 6219-6234.

How to cite: Riva, M., Dell'Oca, A., and Guadagnini, A.: Sensitivity analysis and the challenges posed by multiple approaches: a multifaceted mess, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-6626, https://doi.org/10.5194/egusphere-egu2020-6626, 2020.

D463 | EGU2020-11688
Trine Enemark, Luk Peeters, Dirk Mallants, and Okke Batelaan

Conceptual uncertainty is considered one of the major sources of uncertainty in groundwater flow modelling. In this regard, hypothesis testing is essential to increase system understanding by analysing and refuting alternative conceptual models. We present a systematic approach to conceptual model development and testing, which involves defining alternative models and then attempting to refute the alternative understandings using independent data. The method aims at finding an ensemble of conceptual understandings that are consistent with prior knowledge and observational data, rather than tuning the parameters of a single conceptual model to conform with the data through inversion.

The alternative understandings we test relate to the hydrological functioning of enclosed depressions in the landscape of the Wildman River Area, Northern Territory, Australia. These depressions provide potential for time-dependent surface water-groundwater interactions. Alternative models are developed representing the process structure and physical structure of the conceptual model of the depressions. Remote sensing data is used to test the process structure, while geophysical data is used to test the physical structure of the conceptual models.

The remote sensing and geophysical data are used twice in the applied workflow. First in a model rejection step, where models whose priors are inconsistent with the observations are rejected and removed from the ensemble. Then the data are used to update the probability of the accepted alternative conceptual models.
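
This two-step workflow can be sketched with purely illustrative numbers (hypothetical conceptual models, consistency flags and likelihoods):

```python
import numpy as np

# prior probabilities of three alternative conceptual models (hypothetical values)
models = ["recharge feature", "no interaction", "discharge feature"]
prior = np.array([1/3, 1/3, 1/3])

# step 1: rejection - models whose prior predictions are inconsistent with the data are removed
consistent = np.array([True, True, False])       # assume the third model fails the test
prior = np.where(consistent, prior, 0.0)
prior /= prior.sum()

# step 2: Bayesian update of the remaining models with (assumed) likelihoods of the observations
likelihood = np.array([0.8, 0.2, 0.0])           # p(data | model), illustrative numbers only
posterior = prior * likelihood
posterior /= posterior.sum()

for m, p in zip(models, posterior):
    print(f"{m:18s} posterior probability = {p:.2f}")
```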

The updated conceptual model probabilities of the combined physical and process structures indicated that the depressions act as preferential groundwater recharge features for three out of the five depressions used as test cases. For the fourth depression, the data are indecisive, and more testing would be needed to discriminate between model structures. For the fifth depression, all physical structures were rejected, indicating that the model structure is still an unknown unknown.

This insight into system functioning gained from testing alternative conceptual models can be used in future modelling exercises. With more confidence in the conceptual model, confidence in the predictions of future modelling exercises increases, which can then underpin environmental management decisions.

How to cite: Enemark, T., Peeters, L., Mallants, D., and Batelaan, O.: Systematic hydrogeological conceptual model testing using remote sensing and geophysical data, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-11688, https://doi.org/10.5194/egusphere-egu2020-11688, 2020.

D464 | EGU2020-12176
Charles Luce and Abigail Lute

A central question in model structural uncertainty is how complex a model should be in order to have greatest generality or transferability.  One school of thought is that models become more general by adding process subroutines.  On the other hand, model parameters and structures have been shown to change significantly when calibrated to different basins or time periods, suggesting that model complexity and model transferability may be antithetical.  An important facet to this discussion is noting that validation methods and data applied to model evaluation and selection may tend to bias answers to this question.  Here we apply non-random block cross-validation as a direct assessment of model transferability to a series of algorithmic space-time models of April 1 snow water equivalent (SWE) across 497 SNOTEL stations for 20 years.  In general, we show that low to moderate complexity models transfer most successfully to new conditions in space and time.  In other words, there is an optimum between overly complex and overly simple models.  Because structures in data resulting from temporal dynamics and spatial dependency in atmospheric and hydrological processes exist, naïvely applied cross-validation practices can lead to overfitting, overconfidence in model precision or reliability, and poor ability to infer causal mechanisms.  For example, random k-fold cross-validation methods, which are in common use for evaluating models, essentially assume independence of the data and would promote selection of more complex models.  We further demonstrate that blocks sampled with pseudoreplicated data can produce similar outcomes.  Some sampling strategies favored for hydrologic model validation may tend to promote pseudoreplication, requiring heightened attentiveness for model selection and evaluation.  While the illustrative examples are drawn from snow modeling, the concepts can be readily applied to common hydrologic modeling issues.
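
The contrast between random k-fold and block cross-validation can be sketched on synthetic data with a shared year-to-year signal; the k-nearest-neighbour regressor below is only a stand-in for the algorithmic space-time models used in the study.

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic April-1 SWE-like data: 20 years x 100 stations. Within a year, predictor and
# response cluster together, but the year effect on the response is not transferable.
n_years, n_stations = 20, 100
years = np.repeat(np.arange(n_years), n_stations)
x_center = rng.normal(0, 1, n_years)          # where each year's predictor values cluster
year_effect = rng.normal(0, 1, n_years)       # year effect on the response, independent of x
x = x_center[years] + rng.normal(0, 0.1, years.size)
y = 0.5 * x + year_effect[years] + rng.normal(0, 0.3, years.size)

def knn_predict(x_tr, y_tr, x_te, k):
    """k-nearest-neighbour regression: small k = complex model, large k = simple model."""
    pred = np.empty(x_te.size)
    for i, xi in enumerate(x_te):
        pred[i] = y_tr[np.argsort(np.abs(x_tr - xi))[:k]].mean()
    return pred

def cv_rmse(fold_of, k):
    errs = []
    for f in np.unique(fold_of):
        te = fold_of == f
        pred = knn_predict(x[~te], y[~te], x[te], k)
        errs.append(np.sqrt(np.mean((pred - y[te]) ** 2)))
    return np.mean(errs)

random_folds = rng.integers(0, 5, years.size)   # random k-fold: mixes years across folds
block_folds = years % 5                         # block CV: whole years held out together

for k in (1, 25, 400):
    print(f"k={k:3d}   random-CV RMSE = {cv_rmse(random_folds, k):.2f}"
          f"   block-CV RMSE = {cv_rmse(block_folds, k):.2f}")
```

Random k-fold rewards the most complex (small-k) model because held-out points share their year with training points, whereas block cross-validation reveals that this apparent skill does not transfer to new years.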

How to cite: Luce, C. and Lute, A.: Applying Non-Random Block Cross-Validation to Improve Reliability of Model Selection and Evaluation in Hydrology: An illustration using an algorithmic model of seasonal snowpack , EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-12176, https://doi.org/10.5194/egusphere-egu2020-12176, 2020.

D465 | EGU2020-13391
Saket Pande and Mehdi Moayeri

It is intuitive that the instability of a hydrological system representation, in the sense of how perturbations in input forcings translate into perturbations in the hydrologic response, may depend on its hydrological characteristics. Responses of unstable systems are thus complex to model. We interpret complexity in this context and define complexity as a measure of instability in hydrological system representation. We quantify model complexity in this context using algorithms from Pande et al. (2014). We use the Sacramento soil moisture accounting model (SAC-SMA) parameterized for the CAMELS data set (Addor et al., 2017) and quantify the complexities of the corresponding models. Relationships between hydrologic characteristics of CAMELS basins, such as location, precipitation seasonality index, slope, hydrologic ratios, saturated hydraulic conductivity and NDVI, and the respective model complexities are then investigated.

Recently, Pande and Moayeri (2018) introduced an index of basin complexity based on another, non-parametric, model of the least statistical complexity needed to reliably model the daily streamflow of a basin. This method essentially interprets complexity in terms of the difficulty of predicting historically similar streamflow events. Daily streamflow is modeled using a k-nearest neighbor model of lagged streamflow. Such models are parameterised by the number of lags and the radius of the neighborhood used to identify similar streamflow events from the past. These parameters need to be selected for each prediction time step (“query”). We use 1) the Tukey half-space data depth function to identify time steps corresponding to “difficult” queries and 2) Vapnik-Chervonenkis (VC) generalization theory, which trades off model performance with VC dimension (i.e. a measure of model complexity), to select parameters corresponding to a k-nearest neighbor model that is of appropriate complexity for modelling difficult queries. The average of the selected model complexities corresponding to difficult queries is then related to the same hydrologic characteristics as above for the CAMELS basins.

We find that the complexities estimated on the SAC-SMA model using the algorithm of Pande et al. (2014) are correlated with those estimated on the knn model using VC generalization theory. Further, the relationships between the two complexities and the hydrologic characteristics are also similar. This indicates that the interpretation of complexity as a measure of instability in hydrological system representation is similar to the interpretation provided by VC generalization theory of difficulty in predicting historically similar streamflow events.

Reference:

Addor, N., Newman, A. J., Mizukami, N., and Clark, M. P. (2017) The CAMELS data set: catchment attributes and meteorology for large-sample studies, Hydrol. Earth Syst. Sci., 21, 5293–5313, https://doi.org/10.5194/hess-21-5293-2017.

Pande, S., Arkesteijn, L., Savenije, H. H. G., and Bastidas, L. A. (2014) Hydrological model parameter dimensionality is a weak measure of prediction uncertainty, Hydrol. Earth Syst. Sci. Discuss., 11, 2555–2582, https://doi.org/10.5194/hessd-11-2555-2014.

Pande, S., and Moayeri, M. (2018). Hydrological interpretation of a statistical measure of basin complexity. Water Resources Research, 54. https://doi.org/10.1029/2018WR022675

How to cite: Pande, S. and Moayeri, M.: Physical interpretation of hydrologic model complexity revisited, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-13391, https://doi.org/10.5194/egusphere-egu2020-13391, 2020.

D466 | EGU2020-10321
Kyle Mosley, David Applegate, James Mather, John Shevelan, and Hannah Woollard

The issue of safely dealing with radioactive waste has been addressed in several countries by opting for a geological disposal solution, in which the waste material is isolated in a subsurface repository. Safety assessments of such facilities require an in-depth understanding of the environment they are constructed in. Assessments are commonly underpinned by simulations of groundwater flow and transport, using numerical models of the subsurface. Accordingly, it is imperative that the level of uncertainty associated with key model outputs is accurately characterised and communicated. Only in this way can decisions on the long-term safety and operation of these facilities be effectively supported by modelling.

In view of this, a new approach for quantifying uncertainty in the modelling process has been applied to hydrogeological models for the UK Low Level Waste Repository, which is constructed in a complex system of Quaternary sediments of glacial origins. Model calibration was undertaken against a dataset of observed groundwater heads, acquired from a borehole monitoring network of over 200 locations. The new methodology comprises an evolution of the calibration process, in which greater emphasis is placed on understanding the propagation of uncertainty. This is supported by the development of methods for evaluating uncertainty in the observed heads data, as well as the application of mathematical regularisation tools (Doherty, 2018) to constrain the solution and ensure stability of the inversion. Additional information sources, such as data on the migration of key solutes, are used to further constrain specific model parameters. The sensitivity of model predictions to the representation of heterogeneity and other geological uncertainties is determined by smaller studies. Then, with the knowledge of posterior parameter uncertainty provided by the calibration process, the resulting implications for model predictive capacity can be explored. This is achieved using the calibration-constrained Monte Carlo methodology developed by Tonkin and Doherty (2009).

The new approach affords greater insight into the model calibration process, providing valuable information on the constraining influence of the observed data as it pertains to individual model parameters. Similarly, characterisation of the uncertainty associated with different model outputs provides a deeper understanding of the model’s predictive power. Such information can also be used to determine the appropriate level of model complexity; the guiding principle being that additional complexity is justified only where it contributes either to the characterisation of expert knowledge of the system, or to the model’s capacity to represent details of the system’s behaviour that are relevant for the predictions of interest (Doherty, 2015). Finally, the new approach enables more effective communication of modelling results – and limitations – to stakeholders, which should allow management decisions to be better supported by modelling work.

References:

  • Doherty, J., 2015. Calibration and Uncertainty Analysis for Complex Environmental Models. Watermark Numerical Computing, Brisbane, Australia. ISBN: 978-0-9943786-0-6.
  • Doherty, J., 2018. PEST Model-Independent Parameter Estimation. User Manual Part I. 7th Edition. Watermark Numerical Computing, Brisbane, Australia.
  • Tonkin, M. and Doherty. J., 2009. Calibration-constrained Monte Carlo analysis of highly parameterized models using subspace techniques. Water Resources Research, 45, W00B10.

How to cite: Mosley, K., Applegate, D., Mather, J., Shevelan, J., and Woollard, H.: Approaches to uncertainty quantification in groundwater modelling for geological disposal of radioactive waste, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-10321, https://doi.org/10.5194/egusphere-egu2020-10321, 2020.

D467 | EGU2020-6966
Raphael Schneider, Hans Jørgen Henriksen, and Simon Stisen

The Continuous Ranked Probability Score (CRPS) is a popular evaluation tool for probabilistic forecasts. We suggest using it, outside its original scope, as an objective function in the calibration of large-scale groundwater models, due to its robustness to large residuals in the calibration data.

Groundwater models commonly require their parameters to be estimated in an optimization where some objective function measuring the model’s performance is to be minimized. Many performance metrics are squared-error-based, which are known to be sensitive to large values or outliers. Consequently, an optimization algorithm using squared-error-based metrics will focus on reducing the very largest residuals of the model. In many cases, for example when working with large-scale groundwater models in combination with calibration data from large datasets of groundwater heads with varying and unknown quality, there are two issues with that focus on the largest residuals: such outliers are often related to i) observational uncertainty or ii) model structural uncertainty and model scale. Hence, fitting groundwater models to such deficiencies can be undesirable, and calibration often results in parameter compensation for such deficiencies.

Therefore, we suggest the use of a CRPS-based objective function that is less sensitive to (the few) large residuals, and instead is more sensitive to fitting the majority of observations with least bias. We apply the novel CRPS-based objective function to the calibration of large-scale coupled surface-groundwater models and compare to conventional squared error-based objective functions. These calibration tests show that the CRPS-based objective function successfully limits the influence of the largest residuals and reduces overall bias. Moreover, it allows for better identification of areas where the model fails to simulate groundwater heads appropriately (e.g. due to model structural errors), that is, where model structure should be investigated.
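
A common sample-based estimator of the CRPS for one observation and an ensemble of simulated values is sketched below (how the score is embedded in the calibration objective is described in the presentation, not here); note how a suspect outlier inflates the squared error far more than the CRPS.

```python
import numpy as np

def crps_sample(ensemble, obs):
    """Sample-based CRPS estimator: E|X - y| - 0.5 E|X - X'| for ensemble X and observation y."""
    ensemble = np.asarray(ensemble, dtype=float)
    term1 = np.mean(np.abs(ensemble - obs))
    term2 = 0.5 * np.mean(np.abs(ensemble[:, None] - ensemble[None, :]))
    return term1 - term2

rng = np.random.default_rng(5)
heads_obs = np.array([12.3, 15.1, 9.8, 30.0])      # last value: a suspect outlier
heads_sim = rng.normal(loc=[12.0, 15.5, 10.2, 13.0], scale=0.4, size=(100, 4))  # ensemble runs

crps_per_obs = [crps_sample(heads_sim[:, j], heads_obs[j]) for j in range(4)]
sq_per_obs = [(heads_sim[:, j].mean() - heads_obs[j]) ** 2 for j in range(4)]
print("CRPS per observation:         ", np.round(crps_per_obs, 2))
print("squared error per observation:", np.round(sq_per_obs, 2))
# the outlier dominates the squared-error sum far more strongly than the CRPS sum
```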

Many real-world large-scale hydrological models face similar optimization problems related to uncertain model structures and large, uncertain calibration datasets where observation uncertainty is hard to quantify. The CRPS-based objective function is an attempt to practically address the shortcomings of squared-error minimization in model optimization, and is expected to also be of relevance outside our context of groundwater models.

How to cite: Schneider, R., Henriksen, H. J., and Stisen, S.: The CRPS – used as a robust objective function for groundwater model calibration in light of observation and model structural uncertainty, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-6966, https://doi.org/10.5194/egusphere-egu2020-6966, 2020.

D468 | EGU2020-9962
Alexandre Pryet

(Sub)surface hydrological models are more and more integrated, coupling multiple physical, biological and chemical processes. Such models are highly parameterized and, most often, prior knowledge on these parameters is loose. Fortunately, such complex models may assimilate rich observation datasets, which constrain model parameters and reduce forecast uncertainties. The inclusion of diverse data types (aka “calibration targets”) within the so-called “objective function” deserves particular attention to avoid bias in estimated parameters and in the forecasts of interest. In the most common approach, the fit between model outputs and data is described with a single objective function composed of the sum of weighted squared residuals between simulated values and their observed counterparts. When the residuals are statistically independent, homoscedastic and can be described with a Gaussian probability distribution, the least-squares estimates obtained through the minimization of the objective function present numerous advantages. However, when assimilating diverse data types with a model presenting structural error, the above-mentioned hypotheses on model residuals are at best very unlikely and, in practice, never satisfied. Numerous studies have investigated the value of error modeling and data transformation. Less attention has been paid to the integration of various data types (flows, heads, concentrations, soft data, ...) potentially spanning several orders of magnitude and originating from spatially distributed locations (wells, gaging stations, ...), each with contrasting sampling frequency (years, days, hours, ...). A purely formal statistical approach is challenging to put into practice, but the integration of such datasets into a single objective function deserves a relevant weighting strategy. Based on a synthetic model, different weighting strategies are compared in terms of their ability to reduce predictive bias and uncertainty. We propose an informal but practical formulation of the objective function that may be used for operational groundwater modeling case studies. The approach is finally illustrated on a real-world case study.
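
One simple weighting strategy, sketched below with invented data, gives each observation an inverse-variance weight divided by its group size so that no data type dominates purely through its number of observations; it is only one of the strategies that could be compared, not the formulation proposed in the abstract.

```python
import numpy as np

rng = np.random.default_rng(6)

# heterogeneous calibration targets: heads (m), daily flows (m3/s), concentrations (mg/l)
groups = {
    "heads":          (rng.normal(50, 5, 200), 0.2),      # (observations, assumed measurement sd)
    "flows":          (rng.lognormal(1, 1, 3000), None),  # sd unknown -> crude fallback below
    "concentrations": (rng.lognormal(-2, 0.5, 40), 0.05),
}

def group_weight(obs, sd):
    """Inverse-variance weights divided by the group size, so that a data type does not
    dominate the objective purely because it has many observations."""
    sd = sd if sd is not None else 0.1 * np.std(obs)      # crude fallback when sd is unknown
    return np.full(obs.shape, 1.0 / (sd ** 2 * obs.size))

def objective(residuals_by_group, weights_by_group):
    """Single weighted least-squares objective over all data types."""
    return sum(np.sum(w * r ** 2) for r, w in zip(residuals_by_group, weights_by_group))

weights = {name: group_weight(obs, sd) for name, (obs, sd) in groups.items()}
# illustrative residuals: pretend the model misfits each group by a fraction of its spread
residuals = {name: 0.3 * np.std(obs) * rng.standard_normal(obs.size)
             for name, (obs, _) in groups.items()}

phi = objective(residuals.values(), weights.values())
for name in groups:
    print(f"{name:15s} contribution = {np.sum(weights[name] * residuals[name] ** 2):.3f}")
print(f"total objective = {phi:.3f}")
```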

How to cite: Pryet, A.: On the art of weighting an objective function with heterogeneous datasets, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-9962, https://doi.org/10.5194/egusphere-egu2020-9962, 2020.

D469 | EGU2020-1245
Lisa Watson, Judith Verstegen, Menno Straatsma, and Derek Karssenberg

Ecosystem service valuation may be a relevant method for assisting policy makers in environment-related decisions. However, a number of problematic aspects of the calculations, including the consistency of the economic basis (e.g., purchasing price, production price, perceived value) and the choice of which ecosystem subservices to include (e.g. disservices or only beneficial services), contribute to uncertainty in the final valuations. Moreover, ecosystem service valuations currently lack 1) a quantification of the total uncertainty in ecosystem service values as a result of the uncertainties in the subservices, and 2) an analysis of the relative sensitivity of total ecosystem service values to uncertainties in the various subservices.

In a previous study, we computed a spatial distribution of global ecosystem services by disaggregating production values over the spatial extent of each subservice by country. Nineteen subservices arranged under nine services from four categories were calculated, totalling approximately 1.3 trillion international dollars for 2005. Our current study aims to perform an error propagation analysis and a sensitivity analysis of the Food Service. The Food Service, which is comprised of nine subservices, accounts for 99.8% of the total global ecosystem service value. It is extremely important to understand the reliability of the valuation of this service because it dominates and overshadows the other services.

To this end, the cattle and sheep indicators in the Livestock Subservice and the apple orchard indicator in the Fruit Subservice are analyzed. The Livestock Subservice accounts for the majority of the Food Service and is comprised of cattle, sheep, buffalo, poultry, pigs, and goats. The cattle and sheep indicators have three main sources of uncertainty: the animal weight, the production value, and the number of animals per hectare for meat versus the number of animals for dairy use. The uncertainty in animal weight varies considerably by species and is important because the production value is expressed in international dollars per live-weight ton. The production values are published with designations as either direct calculations or estimated figures. In the case of the animal population data, RMSEs were provided as part of the data release.

The Fruit Subservice is the fourth largest contributor to the total Food Service value. It was chosen because its input data sets differ from those of the top three contributors to the Food Service (i.e. Livestock, Dairy, and Crops). The apple orchard indicator has two main sources of uncertainty: the production value and the production area. The production values are qualified as unofficial figures by the data producer, while the production area follows agricultural land use rather than mapped apple orchards.

Both an error propagation analysis of the defined uncertainties and a sensitivity analysis provide insight into the robustness of the computed ecosystem service assessment. Presenting and understanding the uncertainty and sensitivity of ecosystem service assessments is consequential for incorporating them into climate change mitigation strategies.
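
A Monte Carlo error propagation for a single livestock indicator might look as follows; all figures are hypothetical and only illustrate the principle of propagating uncertainty in animal numbers, weight and price into the subservice value.

```python
import numpy as np

rng = np.random.default_rng(7)
n_draws = 100_000

# hypothetical inputs for one country's cattle-meat indicator (not real figures)
n_animals = rng.normal(1.0e6, 5.0e4, n_draws)     # head count, with RMSE-style uncertainty
live_weight = rng.normal(0.45, 0.05, n_draws)     # live-weight tonnes per animal
price = rng.normal(2000.0, 300.0, n_draws)        # international dollars per live-weight tonne

value = n_animals * live_weight * price           # Monte Carlo error propagation

print(f"mean value: {value.mean():.3e} Int$")
print(f"95% interval: {np.percentile(value, 2.5):.3e} - {np.percentile(value, 97.5):.3e}")

# crude one-at-a-time sensitivity: share of output variance when only one input varies
for name, samples in [("animals", n_animals), ("weight", live_weight), ("price", price)]:
    fixed = {"animals": n_animals.mean(), "weight": live_weight.mean(), "price": price.mean()}
    fixed[name] = samples
    v = fixed["animals"] * fixed["weight"] * fixed["price"]
    print(f"{name:8s} alone explains {v.var() / value.var():.0%} of the output variance")
```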

How to cite: Watson, L., Verstegen, J., Straatsma, M., and Karssenberg, D.: Quantifying Uncertainty and Assessing Sensitivity in Global Mapping of Ecosystem Services, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-1245, https://doi.org/10.5194/egusphere-egu2020-1245, 2020.

D470 | EGU2020-2991
HanFang Hsueh, Anneli Guthke, Eddy Thomas Woehling, and Wolfgang Nowak

When a deterministic hydrological model is calibrated, the parameters applied in the model are commonly assigned time-constant values. This assignment ignores that errors in the model structure lead to time-dependent model errors. Such time-dependent errors occur, among other reasons, if a hydrological process is active in certain periods or situations in nature, yet is not captured by the model. Examples include soil freezing, complex vegetation dynamics, or the effect of extreme floods on river morphology. For a given model approximation, such a missing process could become visible as apparent time-dependent best-fit values of model parameters. This research aims to develop a framework based on time-windowed Bayesian inference to assist modelers in diagnosing this type of model error.


We suggest using time-windowed Bayesian model evidence (tBME) as a model evaluation metric, indicating how strongly the data in each time window support the claim that the model is correct. We will explain how to make tBME values a meaningful and comparable indicator within likelihood-ratio hypothesis tests. By using a sliding time window, the hypothesis test will indicate where such errors happen. The sliding time window can also be used to obtain a time sequence of posterior parameter distributions (or of best-fit calibration parameters). The dynamic parameter posterior will further be used to investigate the potential error source. Based on Bayes’ rule, we can also observe how influential a parameter may be for model improvement.
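
The sliding-window evidence idea can be sketched with a toy model that deliberately misses a (seasonal) process active only in part of the record; the prior Monte Carlo estimate of the window-wise evidence drops sharply where the missing process matters. The model, prior and error level below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(8)
t = np.arange(365)
# synthetic "truth": includes a seasonal term that the candidate model does not describe
obs = 0.02 * t + 1.5 * np.sin(2 * np.pi * t / 365) * (t > 180) + rng.normal(0, 0.3, t.size)

def model(a, t):
    """Candidate model: a simple trend only (deliberately missing the seasonal process)."""
    return a * t

a_prior = rng.normal(0.02, 0.01, 5000)       # prior samples of the single parameter
sigma = 0.3                                  # assumed observation error
window, step = 60, 30

for start in range(0, t.size - window, step):
    sl = slice(start, start + window)
    res = obs[sl][None, :] - model(a_prior[:, None], t[sl][None, :])
    log_lik = -0.5 * np.sum((res / sigma) ** 2, axis=1) - window * np.log(sigma * np.sqrt(2 * np.pi))
    # prior Monte Carlo estimate of the window-wise evidence (log-sum-exp for stability)
    log_tbme = np.log(np.mean(np.exp(log_lik - log_lik.max()))) + log_lik.max()
    print(f"days {start:3d}-{start + window:3d}: log tBME = {log_tbme:9.1f}")
# the sharp drop after day ~180 flags the period where the model is missing a process
```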

We will show a corresponding visualization tool to indicate time periods where the model is potentially missing a process. We provide guidance showing how to use the dynamic parameter posterior to obtain insights into the error source and potentially improve model performance. The soil moisture model HYDRUS-1D was applied in a pilot test to demonstrate the feasibility of this framework.

How to cite: Hsueh, H., Guthke, A., Woehling, E. T., and Nowak, W.: Diagnosing model-structural errors with a sliding time window Bayesian analysis, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-2991, https://doi.org/10.5194/egusphere-egu2020-2991, 2020.

D471 | EGU2020-3393
Vanessa A. Godoy, Gian Franco Napa-García, and Jaime Gómez-Hernández

In this study, we compare the capability of the normal-score ensemble smoother with multiple data assimilation (NS-MDA) to identify hydraulic conductivity when it assimilates either hydraulic heads or concentrations. The study is performed on a two-dimensional numerical single-point contamination experiment in a vertical aquifer cross section. Reference hydraulic conductivity maps are generated using geostatistics, and the groundwater flow and transport equations are solved to produce reference state variable data (hydraulic head and concentration). The data assimilated in the inverse problems are sampled in time at a limited number of points from the reference aquifer response. A prior variogram function of hydraulic conductivity is assumed and equally likely realizations are generated. Stochastic inverse modelling is run using the NS-MDA for the identification of hydraulic conductivity considering two scenarios: 1) assimilating hydraulic heads only and 2) assimilating concentrations only. Besides the qualitative analysis of the identified hydraulic conductivity maps, the results are quantified using the average absolute bias (AAB), a measure of the accuracy of the inversely identified values with respect to the reference values for each scenario. The updated parameters reproduce the reference aquifer quite well for the two scenarios investigated, with better results for scenario 1, indicating that NS-MDA is an effective approach to identifying hydraulic conductivities.
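
An ensemble-smoother update with multiple data assimilation can be sketched as follows (without the normal-score transform, and with a toy forward model standing in for the flow and transport solver); inflation factors, ensemble size and parameter dimensions are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(9)

def forward(logK):
    """Toy forward model in place of the flow/transport solver: maps log-conductivity
    values to a few 'observed' state variables (heads or concentrations)."""
    return np.array([logK[:3].mean(), logK[3:7].mean(), logK[7:].mean()])

# synthetic truth and noisy observations
logK_true = rng.normal(-5, 1, 10)
obs_err = 0.05
d_obs = forward(logK_true) + rng.normal(0, obs_err, 3)

# prior ensemble of equally likely log-conductivity realizations
n_ens = 200
M = rng.normal(-5, 1, (n_ens, 10))

alphas = [4.0, 4.0, 4.0, 4.0]                     # inflation factors, sum of 1/alpha = 1
for alpha in alphas:
    D = np.array([forward(m) for m in M])         # predicted data for each realization
    C_MD = np.cov(M, D, rowvar=False)[:10, 10:]   # cross-covariance of parameters and predictions
    C_DD = np.cov(D, rowvar=False)
    K = C_MD @ np.linalg.inv(C_DD + alpha * obs_err ** 2 * np.eye(3))
    # perturb the observations with inflated error and update every realization
    d_pert = d_obs + np.sqrt(alpha) * rng.normal(0, obs_err, (n_ens, 3))
    M = M + (d_pert - D) @ K.T

print("true block means:   ", np.round([logK_true[:3].mean(), logK_true[3:7].mean(), logK_true[7:].mean()], 2))
print("updated block means:", np.round([M[:, :3].mean(), M[:, 3:7].mean(), M[:, 7:].mean()], 2))
```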

How to cite: A. Godoy, V., Napa-García, G. F., and Gómez-Hernández, J.: Identification of hydraulic conductivity via normal-score ensemble smoother with multiple data assimilation (NS-MDA) by assimilating hydraulic head or concentration, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-3393, https://doi.org/10.5194/egusphere-egu2020-3393, 2020.

D472 | EGU2020-4568 | Highlight
Thomas Wöhling and Peter Davidson

The Upper Wairau Plain Aquifer serves as the major resource for drinking water and irrigation in the region of Blenheim in Marlborough, New Zealand. Natural recession of groundwater levels and storage occurs annually during the summer months in the upper part of the highly conductive gravel aquifer. Due to a number of particularly dry summers, aquifer storage has reached critical levels several times in the past, which could lead to future restrictions on groundwater abstraction imposed by the Marlborough District Council (MDC), which manages the resource. The MDC requires, at any given time, an early warning of whether these critical levels are likely to be reached. Correspondingly, an operational framework was developed to forecast Wairau Plain Aquifer groundwater levels and storage. The tool creates more lead time for the MDC in decision making and adaptive management of the Wairau Plain groundwater resources.

A numerical groundwater flow model of the Wairau Plain was previously set up to understand the main drivers of aquifer storage (Wöhling et al. 2018). Since that model posed practical restrictions to be of use for operational management purposes, we tested several data-driven surrogate models with easily attainable inputs that could be derived directly by an automated database query of the MDC monitoring network. Here, a tailor-made version of the Eigenmodel approach (Sahuquillo, 1983) is used to predict Wairau Aquifer groundwater heads and coupled with Markov chain Monte Carlo (MCMC) sampling for model calibration and parameter uncertainty analysis. Several Eigenmodels are embedded in a modular prediction framework that allows for a flexible description of critical model inputs depending on different states of knowledge and on different purposes of the analysis. A Wairau River flow master-recession curve has been derived from historic (observed) time series data to provide the boundary condition for the major recharge source of the aquifer.

The Eigenmodels perform very well in hind-casting the recessions of historic groundwater levels at selected locations of the Wairau Plain Aquifer. Periods with critical groundwater levels were successfully detected and accurately reproduced. The models are efficient and fast, which is a prerequisite for the operational management support tool. The accuracy of the results proved to be very sensitive to the way aquifer recharge is described as a function of river discharge. Future plans include improving knowledge of this relationship and implementing the propagation of input uncertainty through the framework, in addition to the treatment of parametric and predictive uncertainty that is already implemented.

 

References

Sahuquillo, A. (1983). An eigenvalue numerical technique for solving unsteady linear groundwater models continuously in time. Water Resources Research 19(1): 87-93.
Wöhling T., Gosses, M. Wilson, S., Davidson, P. (2018). Quantifying river-groundwater interactions of New Zealand's gravel-bed rivers: The Wairau Plain. Groundwater, 56(4), 647-666.

How to cite: Wöhling, T. and Davidson, P.: AQUIFERWATCH: Operational prediction of groundwater heads and storage during river flow recession in the Wairau Aquifer, New Zealand, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-4568, https://doi.org/10.5194/egusphere-egu2020-4568, 2020.

D473 | EGU2020-5938
Mara Meggiorin, Giulia Passadore, Andrea Sottani, and Andrea Rinaldo

Hydrogeological timeseries of hydraulic head contain important information for modelling the groundwater resource. Calibrating under transient conditions allows one to estimate both the conductivity and specific storage fields and, where applicable, other flow boundary conditions (BCs) that best fit the observations. Moreover, with at least one year of records, different hydrological conditions are considered and fitted.

A major problem encountered by hydrogeologists is that hydrological records often have missing values. Different choices of observation sampling interval are then possible: for example, using daily data with missing values or monthly data, which also speeds up the model. These choices can alter the calibration process and affect parameter estimation.

This study aims at understanding if and how the optimal estimated parameter sets differ and, therefore, whether the choice of time interval can preclude a proper calibration of the groundwater model. The analysis was performed by calibrating: (i) with all daily data, (ii) with different percentages of missing values in the daily data, (iii) with weekly data, (iv) with monthly data and (v) under stationary conditions.

The estimated parameter sets of the different models obtained by using part of the available data (to simulate the loss of information) are compared to a base model, i.e. the best fit achieved using all available daily observations. The flow model and calibration setup are the same for all models; only the observation time series vary.

The analysis is carried out on a real case study: a flow model is built using the software FEFLOW for an area of the Bacchiglione Basin (Veneto, Italy). This area was selected to facilitate the calibration process. It is located on the plain close to the Leogra river, where the aquifer is unconfined. The domain has upstream and downstream borders roughly perpendicular to the regional groundwater flow direction and passing through sensors continuously recording hydraulic head. In this way, the following BCs can be assigned: Dirichlet BCs with transient values from the corresponding recording sensor for the upstream and downstream borders, and no-flow conditions for the lateral borders. Furthermore, inside the study area there are sensors monitoring hydraulic head, i.e. transient observations. Two border and four central sensors record daily values of hydraulic head. The year 2016 was chosen as the calibration period, since no data are missing.

The comparison of the resulting conductivity and specific storage fields is carried out by visual inspection of the field heterogeneity and of the statistical distributions. Moreover, model uncertainty is quantified with a calibration-constrained Monte Carlo analysis.

The main finding of this analysis is the anomalous result estimated by the monthly-data model with respect to the other models: both the conductivity and specific storage fields differ in their heterogeneity and magnitude, reaching unlikely values.

This is important because monthly data are usually chosen owing to data scarcity or to speed up the model, but the effects on the estimated fields are evident and important to consider. The analysis shows how different observation sets, from daily to monthly data, affect the calibration process.

How to cite: Meggiorin, M., Passadore, G., Sottani, A., and Rinaldo, A.: Understanding the importance of hydraulic head timeseries for calibrating a flow model: application to the real case of the Bacchiglione Basin, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-5938, https://doi.org/10.5194/egusphere-egu2020-5938, 2020.

D474 |
EGU2020-6907
Jonas Allgeier, Ana Gonzalez-Nicolas, Daniel Erdal, Wolfgang Nowak, and Olaf A. Cirpka

The boundaries of surface-water catchments can be delineated by analyzing digital elevation models using geographic information systems. Surface-water divides and groundwater divides, however, might significantly differ from each other because the groundwater surface does not necessarily follow the surface topography. Hydraulic-head measurements are needed to properly delineate a groundwater divide and thereby the subsurface boundary of a catchment, but piezometers are expensive. It is therefore vital to optimize the placement of the necessary piezometers. In this work, we introduce an optimal design analysis, which can identify the best configuration of potential piezometer placements within a given set. The method is based on the formal minimization of the expected posterior uncertainty within a sampling-based Bayesian framework. It makes use of a random ensemble of behavioral steady-state groundwater flow models. For each behavioral realization we compute virtual hydraulic-head measurements at all potential well points and delineate the groundwater divide by particle tracking. We minimize the uncertainty of the groundwater-divide location by marginalizing over the virtual measurements. We test the method on a setup mimicking a real aquifer in south-western Germany. Previous work in this aquifer indicated a groundwater divide that is shifted relative to the surface-water divide. The analysis shows that the uncertainty in the localization of the groundwater divide can be reduced with each new well. A comparison of the maximum uncertainty reduction at different numbers of wells quantifies the added value of information for each new well. In our case study, the uncertainty reduction obtained with three monitoring points is maximized when the first well is placed close to the topographic surface-water divide, the second one in the valley, and the third one in between.
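
A minimal sketch of the core preposterior step described above, i.e. scoring one candidate set of piezometer locations by marginalizing the remaining divide uncertainty over virtual head measurements from a behavioral ensemble, could look as follows (function names, the Gaussian measurement error and all numbers are illustrative assumptions, not taken from the study):

    import numpy as np

    def expected_divide_uncertainty(virtual_heads, divide_x, obs_idx, sigma_eps=0.05):
        """Expected posterior variance of the divide location for one candidate design.

        virtual_heads : (n_real, n_wells) simulated heads at all potential well points
        divide_x      : (n_real,) divide location from particle tracking per realization
        obs_idx       : indices of the wells included in the candidate design
        """
        n_real = virtual_heads.shape[0]
        H = virtual_heads[:, obs_idx]              # heads at the designed wells
        post_vars = np.empty(n_real)
        for k in range(n_real):                    # treat realization k as the "truth"
            y_obs = H[k] + np.random.normal(0.0, sigma_eps, H.shape[1])
            # Gaussian likelihood of each realization given the virtual measurements
            ll = -0.5 * np.sum((H - y_obs) ** 2, axis=1) / sigma_eps ** 2
            w = np.exp(ll - ll.max())
            w /= w.sum()
            mean = np.sum(w * divide_x)
            post_vars[k] = np.sum(w * (divide_x - mean) ** 2)
        return post_vars.mean()                    # marginalize over the virtual data

    # Usage: compare candidate designs and keep the one with the smallest expected variance.
    # best = min(candidate_designs, key=lambda idx:
    #            expected_divide_uncertainty(heads_ens, divide_ens, idx))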

How to cite: Allgeier, J., Gonzalez-Nicolas, A., Erdal, D., Nowak, W., and Cirpka, O. A.: A Stochastic Framework to Optimize the Monitoring Strategy for the Delineation of a Groundwater Divide, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-6907, https://doi.org/10.5194/egusphere-egu2020-6907, 2020.

D475 |
EGU2020-8510
Falk Heße, Lars Isachsen, Sebastian Müller, and Sabine Attinger

Characterizing the subsurface of our planet is an important task. Yet compared to many other fields, the characterization of the subsurface is always burdened by large uncertainties. These uncertainties are caused by the general lack of data and the large spatial variability of many subsurface properties. Due to their comparatively low costs, pumping tests are regularly applied for the characterization of groundwater aquifers. The classic approach is to identify the parameters of some conceptual subsurface model by curve-fitting an analytical expression to the measured drawdown. One drawback of classical pumping-test analysis techniques is the assumption that a single representative parameter value exists for the whole aquifer. Consequently, they cannot account for spatial heterogeneities. To address this limitation, a number of studies have proposed extensions of both Thiem’s and Theis’ formula. Using these extensions, it is possible to estimate geostatistical parameters like the mean, variance and correlation length of a heterogeneous conductivity field from pumping tests.

While these methods have demonstrated their ability to estimate such geostatistical parameters, their data worth has rarely been investigated within a Bayesian framework. This is particularly relevant since recent developments in the field of Bayesian inference facilitate the derivation of informative prior distributions for these parameters. Here, informative means that the prior is based on currently available background data and therefore may be able to substantially influence the posterior distribution. If this is the case, the actual data worth of pumping tests, as well as other subsurface characterization methods, may be lower than assumed.
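
One way to make the notion of data worth concrete in this Bayesian setting is to compare the spread of an informative prior with that of the posterior obtained after assimilating the pumping-test drawdown. A minimal grid-based sketch for a single geostatistical parameter is given below; the forward drawdown function, the lognormal prior and all numbers are placeholders, not the study's setup:

    import numpy as np

    rng = np.random.default_rng(1)

    # Grid over one geostatistical parameter, e.g. the variance of log-conductivity,
    # with an informative (here lognormal) prior derived from background data.
    theta = np.linspace(0.05, 3.0, 400)
    prior = np.exp(-0.5 * ((np.log(theta) - np.log(0.8)) / 0.4) ** 2) / theta
    prior /= prior.sum()

    def forward_drawdown(v, t):
        """Hypothetical forward model mapping the parameter to a drawdown curve."""
        return 1.0 + 0.3 * v * np.log1p(t)

    t_obs = np.linspace(1.0, 100.0, 20)
    s_obs = forward_drawdown(0.9, t_obs) + rng.normal(0.0, 0.05, t_obs.size)

    # Grid-based posterior with a Gaussian likelihood for the pumping-test data.
    loglike = np.array([-0.5 * np.sum((forward_drawdown(v, t_obs) - s_obs) ** 2) / 0.05 ** 2
                        for v in theta])
    post = prior * np.exp(loglike - loglike.max())
    post /= post.sum()

    def variance(p):
        m = np.sum(p * theta)
        return np.sum(p * (theta - m) ** 2)

    # Data worth of the test = relative reduction of uncertainty from prior to posterior;
    # the more informative the prior, the smaller this number becomes.
    data_worth = 1.0 - variance(post) / variance(prior)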

To investigate this possibility, we implemented a series of numerical pumping tests in a synthetic model based on the Herten aquifer. Using informative prior distributions, we derived the posterior distributions over the mean, variance and correlation length of the synthetic heterogeneous conductivity field. Our results show that for mean and variance, we already get a substantially lowered data worth for pumping tests when using informative prior distributions, whereas the estimation of the correlation length remains mostly unaffected. These results suggest that with an increasing amount of background data, the data worth of pumping tests may fall even lower, meaning that more informative techniques for subsurface characterization will be needed in the future.

 

 

How to cite: Heße, F., Isachsen, L., Müller, S., and Sabine, A.: Bayesian Analysis of the Data Worth of Pumping Tests Using Informative Prior Distributions, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-8510, https://doi.org/10.5194/egusphere-egu2020-8510, 2020.

D476 |
EGU2020-9489
Gabrielle Rudi, Nathalie Lalande, Xavier Louchart, and Jean-Stéphane Bailly

In response to the Water Framework Directive, public authorities require measures to be taken to protect catchments that provide drinking water when contamination levels exceed (or are likely to exceed) drinking-water standards. Companies and public research institutes involved in the transformation of conventional agricultural practices are therefore engaged in developing methods to assess the vulnerability of territories to diffuse pollution. This poster presents an uncertainty and sensitivity analysis of a model that assesses vulnerability to hydrological transfers of pesticides. This work helps to enhance the reliability of the information given to public authorities regarding priority areas for reducing pesticide use.

The research is being conducted in an agricultural study area located in the center of France (30 km2). The model performs calculations based on cartographic data (DTM, soil properties, hydrographic network, climate, land cover) to identify vulnerable plots and subcatchments, on the basis of institutional guidelines for pesticide transfer risk assessment. The uncertainties considered in the analysis are the accuracy and resolution of the input cartographic data, as well as the parameterization of the model. The results highlight that these uncertainties can influence, in some cases significantly, the outputs of the model and therefore the information given to public authorities. This research collaboration between two French public research institutes (INRAE, AgroParisTech) and a private company (Envilys) on an operational case study makes it possible to identify levers for enhancing the reliability of outputs from vulnerability models and, in turn, the efficiency of measures for catchment protection. It also allows the resolution of the vulnerability mapping outputs to be determined according to the obtained uncertainties.

How to cite: Rudi, G., Lalande, N., Louchart, X., and Bailly, J.-S.: Operational uncertainty and sensitivity analyses of a model assessing water catchment vulnerability to pesticides, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-9489, https://doi.org/10.5194/egusphere-egu2020-9489, 2020.

D477 |
EGU2020-10348
Anna E. Sikorska-Senoner, Bettina Schaefli, and Jan Seibert

The quantification of extreme floods and associated return periods remains a challenge for flood hazard management and is particularly important for applications where the full hydrograph shape is required (e.g., for reservoir management). One way of deriving such estimates is by employing a comprehensive hydrological simulation framework, including a weather generator, to simulate a large set of flood hydrographs. In such a setting, the estimation uncertainties originate from the hydrological model, but also from climate variability. While the uncertainty from the hydrological model can be described with common methods of uncertainty estimation in hydrology (in particular related to model parameters), the uncertainties from climate variability can only be represented with repeated realizations of meteorological scenarios. These scenarios can be generated with the selected weather generator(s), which are capable of providing numerous, continuous long time series. The generated meteorological scenarios are then used as input for a hydrological model to simulate a large sample of extreme floods, from which return periods can be computed based on ranking.
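
As a worked illustration of the last step, empirical return periods can be assigned to a large sample of simulated annual maximum floods by ranking; the Weibull plotting position used below is one common choice, and the synthetic flood values are placeholders:

    import numpy as np

    def empirical_return_periods(annual_maxima):
        """Assign empirical return periods (years) to simulated annual maxima by ranking."""
        q = np.sort(np.asarray(annual_maxima))[::-1]          # largest flood first
        n = q.size
        rank = np.arange(1, n + 1)                            # rank 1 = largest
        T = (n + 1.0) / rank                                  # Weibull plotting position
        return q, T

    # Example: 100 realizations x 100 years of simulations -> 10,000 annual maxima,
    # so return periods up to roughly 10,000 years can be assigned empirically.
    sims = np.random.gumbel(loc=300.0, scale=80.0, size=100 * 100)    # placeholder floods
    floods, return_periods = empirical_return_periods(sims)
    q1000 = floods[np.argmin(np.abs(return_periods - 1000.0))]        # ~1000-year flood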

In such a simulation framework, many thousands of possible combinations of meteorological scenarios and hydrological model parameter sets may be generated. However, these simulations are required at a high temporal resolution (hourly), needed for the simulation of extreme floods and for determining infrequent floods with return periods of up to 1000 years. Accordingly, due to computational constraints related to any hydrological model, one often needs to preselect meteorological scenarios and representative model parameter sets to be used within the simulation framework. Thus, some kind of intelligent parameter selection for deriving the uncertainty ranges of extreme model simulations for such rare events would be very beneficial but is currently missing.

Here we present results from an experimental study where we tested three different methods of selecting a small number of representative parameter sets for a Swiss catchment. We used 100 realizations of 100 years of synthetic precipitation-streamflow data. We particularly explored the reliability of the extreme flood uncertainty intervals derived from the reduced parameter set ensemble (consisting of only three representative parameter sets) compared to the full range of 100 parameter sets available. Our results demonstrated that the proposed methods are efficient for deriving uncertainty intervals for extreme floods. These findings are promising for the simulation of extreme floods in comparable simulation frameworks for hydrological risk assessment.

How to cite: Sikorska-Senoner, A. E., Schaefli, B., and Seibert, J.: Navigating through extreme flood simulations with intelligently chosen parameter sets, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-10348, https://doi.org/10.5194/egusphere-egu2020-10348, 2020.

D478 |
EGU2020-10502
Mariaines Di Dato, Rohini Kumar, Estanislao Pujades, Timo Houben, and Sabine Attinger

River streamflow is the result of several complex processes operating at the basin scale. Therefore, the river catchment can be conceptualized as a series of interlinked compartments, each characterized by its own response time to a rainfall event. Each compartment generates a flow component, such as direct runoff, interflow and baseflow. The latter, typically generated from groundwater, is the slowest portion of streamflow and plays a key role in studying hydrological droughts.

In many catchment- or large-scale hydrologic models, the groundwater dynamics are typically described by a linear reservoir model, which depends on the state of the reservoir and on a parameter known as the recession coefficient or characteristic time. The characteristic time can be regarded as the time an aquifer needs to react to a given perturbation. So far, the characteristic time has been estimated by analyzing the slope of the recession (discharge) curve. However, as this method assumes that recharge is zero within the basin, it may lead to inaccurate estimates when this hypothesis is not fulfilled in reality.

The present work proposes to infer the characteristic time using a stochastic approach based on spectral analysis. The catchment aquifer can be viewed as a filter that modifies an input signal (e.g., rainfall or recharge) into an output signal (e.g., the baseflow or the hydraulic head). Since the transfer function, namely the ratio between the spectrum of the baseflow and the spectrum of the recharge, depends on the aquifer characteristics, it can be used to infer the aquifer parameters. In particular, the characteristic time is evaluated by fitting the spectrum and the variance of the measured baseflow with the analytical stochastic solutions for the linear reservoir. We compare six different methods for hydrograph separation, thereby highlighting a systematic uncertainty in determining the characteristic time due to the choice of filter. To reduce the uncertainty in the fitting, we will use the mesoscale Hydrological Model (mHM) (Samaniego et al., 2010; Kumar et al., 2013) to generate realistic time series of recharge. We apply the spectral analysis method to several river basins in Germany, with the goal of defining a regionalization rule for the characteristic time.
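
A minimal sketch of the fitting step, assuming the classical linear-reservoir transfer function |H(f)|^2 = 1 / (1 + (2*pi*f*t_c)^2) between recharge and baseflow spectra (the time-series names, the periodogram estimator and the least-squares fit below are illustrative choices, not necessarily those of the study):

    import numpy as np
    from scipy.optimize import curve_fit

    def spectrum(x, dt=1.0):
        """One-sided periodogram of a (demeaned) equally spaced time series."""
        x = np.asarray(x) - np.mean(x)
        f = np.fft.rfftfreq(x.size, d=dt)
        S = (np.abs(np.fft.rfft(x)) ** 2) * dt / x.size
        return f[1:], S[1:]                       # drop the zero frequency

    def fit_characteristic_time(recharge, baseflow, dt=1.0):
        """Fit t_c so that S_Q(f) = S_R(f) / (1 + (2*pi*f*t_c)**2)."""
        f, S_r = spectrum(recharge, dt)
        _, S_q = spectrum(baseflow, dt)
        model = lambda freq, tc: np.log(S_r / (1.0 + (2.0 * np.pi * freq * tc) ** 2))
        popt, _ = curve_fit(model, f, np.log(S_q), p0=[50.0])
        return popt[0]                            # characteristic time in units of dt

    # Usage with, e.g., an mHM-generated recharge series and a separated baseflow series
    # of equal length:  t_c = fit_characteristic_time(recharge_series, baseflow_series)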

 

References:

  • Samaniego L., R. Kumar, S. Attinger (2010): Multiscale parameter regionalization of a grid-based hydrologic model at the mesoscale. Water Resour. Res., 46, W05523, doi:10.1029/2008WR007327.
  • Kumar, R., L. Samaniego, and S. Attinger (2013): Implications of distributed hydrologic model parameterization on water fluxes at multiple scales and locations, Water Resour. Res., 49, doi:10.1029/2012WR012195

How to cite: Di Dato, M., Kumar, R., Pujades, E., Houben, T., and Attinger, S.: Evaluation of aquifer parameters at regional scale by spectral analysis of discharge time series, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-10502, https://doi.org/10.5194/egusphere-egu2020-10502, 2020.

D479 |
EGU2020-10774
Moritz Gosses and Thomas Wöhling

Physically-based groundwater models allow highly detailed spatial resolution, parameterization and process representation, among other advantages. Unfortunately, their size and complexity make many model applications computationally demanding. This is especially problematic for uncertainty and data worth analysis methods, which often require many model runs.

To alleviate the problem of high computational demand for the application of groundwater models for data worth analysis, we combine two different solutions:

  a) the use of surrogate models as faster alternatives to a complex model, and
  b) a robust data worth analysis method that is based on linear predictive uncertainty estimation, coupled with highly efficient null-space Monte Carlo techniques.

We compare the performance of a complex benchmark model of a real-world aquifer in New Zealand to two different surrogate models: a spatially and parametrically simplified version of the complex model, and a projection-based surrogate model created with proper orthogonal decomposition (POD). We generate predictive uncertainty estimates with all three models using linearization techniques implemented in the PEST Toolbox (Doherty 2016) and calculate the worth of existing, “future” and “parametric” data in relation to predictive uncertainty. To somewhat account for non-uniqueness of the model parameters, we use null-space Monte Carlo methods (Doherty 2016) to efficiently generate a multitude of calibrated model parameter sets. These are used to compute the variability of the data worth estimates generated by the three models.
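
A minimal first-order (FOSM) sketch of the underlying predictive-uncertainty calculation, i.e. the reduction of prior predictive variance by a set of observations from which data worth can be expressed as a relative variance reduction, is given below. This is a generic linear Bayesian formulation, not the PEST implementation itself, and all variable names are illustrative:

    import numpy as np

    def predictive_variance(y, C_p, X=None, C_eps=None):
        """First-order (FOSM) variance of a scalar prediction s = y^T p.

        y     : sensitivity of the prediction to the parameters, shape (n_par,)
        C_p   : prior parameter covariance, shape (n_par, n_par)
        X     : observation sensitivity (Jacobian) matrix, shape (n_obs, n_par), or None
        C_eps : observation-error covariance, shape (n_obs, n_obs)
        """
        prior = y @ C_p @ y
        if X is None:
            return prior                            # no data: prior predictive variance
        G = X @ C_p @ X.T + C_eps
        reduction = y @ C_p @ X.T @ np.linalg.solve(G, X @ C_p @ y)
        return prior - reduction

    # Data worth of a data set = relative reduction in predictive variance it provides:
    # var_without = predictive_variance(y, C_p, X_base, C_base)
    # var_with    = predictive_variance(y, C_p, np.vstack([X_base, X_new]),
    #                                   scipy.linalg.block_diag(C_base, C_new))
    # worth = 1.0 - var_with / var_without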

Comparison between the results of the complex benchmark model and the two surrogates shows good agreement for both surrogates in estimating the worth of the existing data sets for various model predictions. The simplified surrogate model has difficulties in estimating the worth of “future” data and is unable to reproduce “parametric” data worth, due to its simplified parameter representation. The POD model was able to successfully reproduce both “future” and “parametric” data worth for different predictions. Many of its data worth estimates exhibit a high variance, though, demonstrating the need for robust data worth methods as presented here, which (to some degree) can account for parameter non-uniqueness.

 

Literature:

Doherty, J., 2016. PEST: Model-Independent Parameter Estimation - User Manual. Watermark Numerical Computing, 6th Edition.

How to cite: Gosses, M. and Wöhling, T.: Robust data worth analysis with surrogate models in groundwater, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-10774, https://doi.org/10.5194/egusphere-egu2020-10774, 2020.

D480 |
EGU2020-11854
Tuong Vi Tran, Johannes Buckel, Philipp Maurischat, Handuo Tang, Zhengliang Yu, Thomas Graf, Andreas Hördt, Fan Zhang, Georg Guggenberger, and Antje Schwalb

The aquifers on the Tibetan Plateau (TP) constitute the origin of major river systems that supply millions of people all over Asia. Increasing population and tourism activities lead to larger water consumption; hence, water supply is becoming increasingly important. The TP is a sensitive system that reacts noticeably to climate change. The past decades have been marked by increasing precipitation, melting of glaciers and degradation of permafrost, which have generally led to rising lake levels on the TP. To ensure future water supply, aquifer characterisation and prognoses of future groundwater behavior are therefore necessary. However, owing to the remote character of the TP, knowledge of hydrogeological parameters is scarce. The aim of this study is therefore to estimate ranges for aquifer parameters based on geophysical methods. The Zhagu basin, situated in the Nam Co Lake basin (the second largest lake on the TP), is used as a case study. This project is part of the International Research Training Group “Geoecosystems in transition on the Tibetan Plateau” (TransTiP), funded by the DFG.

During several field campaigns in July 2018, May 2019 and September 2019, disturbed sediment samples were taken and analyzed for grain-size distribution. Selected sediment layers were tested in the laboratory; the outcome of this analysis is the porosity of each selected sediment layer. In addition, electrical resistivity tomography (ERT) measurements were conducted during field work. To obtain a better approximation of porosity and sediment characteristics, Archie’s Law is used as a model to estimate these properties, which are then compared to the field and laboratory results. Two approaches are implemented: (i) calculating the bulk resistivity based on the porosity known from the laboratory and the pore-water conductivity measured during field work, and (ii) calculating the porosity from the known pore-water conductivity and the bulk conductivity. Saturated sediment layers were chosen for the analysis.
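
Both directions of the calculation follow from Archie’s law for clay-free, fully saturated sediments, rho_bulk = a * rho_water * porosity^(-m); a minimal sketch is given below, where the cementation exponent m and tortuosity factor a are generic textbook values and all numbers are illustrative, not those determined for the Zhagu sediments:

    import numpy as np

    def bulk_resistivity(porosity, rho_water, a=1.0, m=1.5):
        """Approach (i): bulk resistivity from laboratory porosity and pore-water resistivity."""
        return a * rho_water * porosity ** (-m)

    def porosity_from_resistivity(rho_bulk, rho_water, a=1.0, m=1.5):
        """Approach (ii): porosity from measured bulk and pore-water resistivity."""
        return (a * rho_water / rho_bulk) ** (1.0 / m)

    # Example for one saturated layer (illustrative numbers):
    phi_lab = 0.30                   # porosity from the laboratory analysis
    rho_w = 20.0                     # pore-water resistivity in ohm-m (= 1 / conductivity)
    rho_model = bulk_resistivity(phi_lab, rho_w)             # compare with ERT bulk resistivity
    phi_model = porosity_from_resistivity(rho_model, rho_w)  # recovers phi_lab by construction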

The investigation shows that both approaches are largely applicable and lead to almost the same results and trends for each sediment layer. The smallest percentage deviation of the modelled bulk resistivity from the field measurement, around 7%, was achieved at position D11, which is situated near Nam Co Lake. Inside the catchment the percentage deviation increases considerably. Nevertheless, the application of Archie’s Law in combination with field and laboratory measurements allows porosity ranges to be constructed for future groundwater flow calibration. In addition, the results emphasize the zonation of the subsurface into saturated and unsaturated zones based on the low resistivity values.

Sediment profiles, ERT measurements, observations, interpretations and conclusions, including the comparison of simulated resistivity and simulated porosity with field resistivity and laboratory-based porosity, will be shown and discussed in the contribution.

How to cite: Tran, T. V., Buckel, J., Maurischat, P., Tang, H., Yu, Z., Graf, T., Hördt, A., Zhang, F., Guggenberger, G., and Schwalb, A.: Aquifer parameter estimation for the Zhagu subcatchment (Tibetan Plateau) based on geophysical methods, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-11854, https://doi.org/10.5194/egusphere-egu2020-11854, 2020.

D481 |
EGU2020-15234
Claire Lauvernet, Céline Helbert, and Bruno Sudret

Significant amounts of pollutants are measured in surface water, their presence being due in part to the use of pesticides in agriculture. One solution to limit pesticide transfer by surface runoff is to implement vegetative filter strips (VFS) along rivers. The sizing of these strips is a major issue, with influencing factors that include local conditions (climate, soil, etc.). The BUVARD modeling toolkit was developed to design VFSs throughout France according to these properties. This toolkit includes the numerical model VFSMOD, which quantifies the dynamic, site-specific pesticide mitigation efficiency of a VFS. In this study, a metamodeling (or model dimension reduction) approach is proposed to ease the use of BUVARD and to help users design VFSs adapted to specific contexts. Different reduced models, or surrogates, are compared: GAM, Polynomial Chaos Expansions (PCE), kriging, and mixed kriging. Mixed kriging is a kriging method implemented with a covariance kernel for a mixture of qualitative and quantitative inputs. Kriging and PCE are built per combination of qualitative modalities, whereas mixed kriging and GAM are built considering mixed quantitative and qualitative variables. The metamodel is a simple way to provide a relevant first guess to help design the pollution reduction device. In addition, the surrogate model is a relevant tool to visualize the impact that lack of knowledge of some parameters of filter efficiency can have when performing risk analysis and management.
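
A minimal sketch of the mixed-kriging idea, i.e. a Gaussian-process surrogate whose covariance is the product of a standard RBF kernel on the quantitative inputs and a simple exchangeable correlation on the qualitative modality, is given below; the kernel form, hyperparameters and variable names are illustrative assumptions, not the kernel used in the study:

    import numpy as np

    def mixed_kernel(Xq1, c1, Xq2, c2, length=1.0, rho=0.5, var=1.0):
        """Product kernel: RBF on quantitative inputs x exchangeable correlation on categories."""
        d2 = np.sum((Xq1[:, None, :] - Xq2[None, :, :]) ** 2, axis=-1)
        k_cont = np.exp(-0.5 * d2 / length ** 2)
        k_cat = np.where(c1[:, None] == c2[None, :], 1.0, rho)   # rho in (0, 1)
        return var * k_cont * k_cat

    def kriging_predict(Xq, c, y, Xq_new, c_new, noise=1e-6, **kw):
        """Simple (zero-mean) kriging predictor for mixed quantitative/qualitative inputs."""
        K = mixed_kernel(Xq, c, Xq, c, **kw) + noise * np.eye(len(y))
        k_star = mixed_kernel(Xq_new, c_new, Xq, c, **kw)
        alpha = np.linalg.solve(K, y)
        return k_star @ alpha

    # Usage (placeholder data): quantitative inputs Xq, categorical modality c (e.g. a soil
    # class), observed VFSMOD efficiencies y; predict efficiencies for new configurations:
    # y_hat = kriging_predict(Xq, c, y, Xq_new, c_new, length=0.8, rho=0.4)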

How to cite: Lauvernet, C., Helbert, C., and Sudret, B.: Metamodeling methods that incorporate qualitative variables for improved design of vegetative filter strips., EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-15234, https://doi.org/10.5194/egusphere-egu2020-15234, 2020.

D482 |
EGU2020-18010
Imane Farouk, Emmanuel Cosme, Sammy Metref, Joel Gailhard, and Matthieu Le-Lay

A large number of hydrological forecasts are carried out daily by the hydro-meteorologists of the French electricity producer EDF. These forecasts are based on the MORDOR hydrological model [Boy, 1996]. Since its development, this model has been noted for its performance [Mathevet, 2005], and a new, more advanced version with a semi-distributed (SD) structure improves the quality of the simulations [Garavaglia et al., 2017].

However, many uncertainties, such as calibration errors, unavailable observations, and the uncertainties linked to the data used as forcing for the model, can have a very significant impact on the quality of the results. Data assimilation is a relevant method for reducing the uncertainties of the forcings and thus obtaining better-quality simulations. Previous studies have shown the benefit of variational assimilation for initializing a semi-distributed hydrological model [Lee et al., 2011], but variational methods are less effective for non-linear behaviors. Ensemble methods are therefore more widely adopted, such as the ensemble Kalman filter (EnKF) assimilation method, which can be found in various studies ([Han et al., 2012], [Clark et al., 2008], [Xie and Zhang, 2010], [Slater and Clark, 2006], [Chen et al., 2011], [Alvarez-Garreton et al., 2015]).

As part of our study, a particle filter has been implemented as the assimilation scheme in the semi-distributed hydrological model MORDOR-SD. Several types of observations, such as the flow at the outlet of the watershed or the snow stock, are used in this assimilation system. Sensitivity experiments on the various parameters specific to the system, as well as on the choice of the observations to be taken into account, were carried out. This study will show the benefits obtained from the assimilation of in situ data for the quality of the simulations as well as for the forecasts. Performed over many different areas (the study covers several watersheds), the analysis of observation errors and the construction of a specific observation-error model bring an additional benefit to the quality of the results.
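
For readers unfamiliar with the scheme, a minimal bootstrap (SIR) particle-filter step of the kind described, i.e. propagate an ensemble through the hydrological model, weight the particles by the likelihood of the observed discharge, and resample, might look as follows. The model call, state layout and Gaussian observation error are placeholders, not the MORDOR-SD implementation:

    import numpy as np

    def particle_filter_step(states, q_obs, propagate, observe, sigma_obs, rng):
        """One assimilation cycle of a bootstrap particle filter.

        states    : (n_particles, n_state) ensemble of model states (and/or parameters)
        q_obs     : observed discharge at the outlet for this time step
        propagate : function advancing one state over one time step (the hydrological model)
        observe   : function mapping one state to the simulated discharge
        """
        states = np.array([propagate(s) for s in states])           # forecast step
        q_sim = np.array([observe(s) for s in states])
        logw = -0.5 * ((q_sim - q_obs) / sigma_obs) ** 2             # Gaussian likelihood
        w = np.exp(logw - logw.max())
        w /= w.sum()
        idx = rng.choice(len(states), size=len(states), p=w)         # multinomial resampling
        return states[idx]

    # Usage sketch: loop over observation times, calling particle_filter_step with the
    # one-step model integration as `propagate` and the simulated outlet flow as `observe`.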

 

How to cite: Farouk, I., Cosme, E., Metref, S., Gailhard, J., and Le-Lay, M.: Hydrological data assimilation using the particle filter in a semi-distributed model MORDOR-SD, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-18010, https://doi.org/10.5194/egusphere-egu2020-18010, 2020.

D483 |
EGU2020-19396
Maximilian Ramgraber, Robin Weatherl, and Mario Schirmer

Increasingly intensive drought periods during the summer months put stress even on traditionally water-rich regions such as Switzerland. In the particularly dry year of 2018, several Swiss municipalities were forced to ban agricultural irrigation, while others had to import water from neighbouring catchments to sustain their water supply. Preparing for and managing such droughts demands sustainable management plans, which are often informed by numerical models providing decision support.

Unfortunately, sustainable water resources management of alpine regions often demands a greater degree of system complexity than usual. This complexity must be reflected in the models used for decision-support: fixed head boundaries must be used cautiously, the aquifer’s depth and properties are often uncertain and highly heterogeneous, and inflow and recharge are similarly difficult to quantify. Considering these diverse sources of uncertainty renders the Bayesian parameter inference problem highly challenging.

Towards this end, we explore a technique known as Stein Variational Gradient Descent (SVGD). This variational method implements a series of smooth transformations resulting in a particle flow, incrementally transforming an ensemble of particles into samples of the posterior. The method has been shown to be able to reproduce non-Gaussian and even multi-modal distributions, provided the underlying posterior is sufficiently smooth.
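
The SVGD update that drives this particle flow is x_i <- x_i + eps * phi(x_i) with phi(x) = (1/n) * sum_j [ k(x_j, x) * grad log p(x_j) + grad_{x_j} k(x_j, x) ]; a minimal sketch with an RBF kernel on a generic log-posterior follows, where the gradient function, bandwidth and step size are placeholders rather than the study's configuration:

    import numpy as np

    def svgd_step(particles, grad_log_p, bandwidth=1.0, step=0.05):
        """One Stein Variational Gradient Descent update for an (n, d) particle ensemble."""
        n = particles.shape[0]
        diff = particles[:, None, :] - particles[None, :, :]         # (n, n, d), x_i - x_j
        d2 = np.sum(diff ** 2, axis=-1)
        K = np.exp(-0.5 * d2 / bandwidth ** 2)                        # RBF kernel matrix
        grads = np.array([grad_log_p(x) for x in particles])          # (n, d)
        # Attraction towards high posterior density plus repulsion that keeps particles spread.
        phi = (K @ grads + np.sum(K[:, :, None] * diff, axis=1) / bandwidth ** 2) / n
        return particles + step * phi

    # Toy usage with a standard-normal posterior (grad log p(x) = -x):
    # x = np.random.randn(200, 2)
    # for _ in range(500):
    #     x = svgd_step(x, lambda v: -v)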

In this study, we test this algorithm with a groundwater model of the catchment of Fehraltorf implemented in MODFLOW 6. We consider parameter uncertainty for the aquifer depth and topology, its hydraulic parameters, and control variables for recharge and inflow. We report the resulting water table and budget and discuss the optimization performance.

How to cite: Ramgraber, M., Weatherl, R., and Schirmer, M.: Water budget estimation under parameter uncertainty using Stein Variational Gradient Descent, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-19396, https://doi.org/10.5194/egusphere-egu2020-19396, 2020.

D484 |
EGU2020-21508
Aline Schäfer Rodrigues Silva, Marvin Höge, Anneli Guthke, and Wolfgang Nowak

For reactive transport of solutes at the aquifer scale, measurements are usually costly and time-consuming, and observation data are therefore scarce. Consequently, the system is often not fully understood and modelers cannot be sure which processes are relevant at the considered spatial and temporal scale. This lack of system understanding leads to so-called conceptual uncertainty, i.e. the uncertainty in choosing between competing hypotheses for a model formulation.

To account for conceptual uncertainty, modelers should work with several model alternatives that differ in their system representation. In the case of aerobic respiration and denitrification in a heterogeneous aquifer, several modeling concepts have been proposed. The approaches used in this study range from 2D spatially explicit to streamline-based models and vary considerably in their underlying assumptions and their computational costs. Typically, models that are more complex require more measurement data to constrain their parameters. Therefore, model complexity and the effort for acquiring field data have to be balanced.

In this study, we apply a concept called Bayesian model legitimacy analysis to assess which level of model complexity is justifiable given a certain amount of realistically available measurement data. This analysis reveals how many measurements in a specific experimental setup are needed to justify a certain level of model complexity. Our results indicate that the complexity of the reference model (spatially explicit, including dispersion and growth/decay of biomass) is justified even by the smallest amount of synthetic measurement data.
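
A minimal sketch of the kind of analysis described, i.e. generating synthetic data of increasing size with each candidate model, computing Monte Carlo estimates of the Bayesian model evidence, and checking whether the data-generating model can be identified, is given below. This generic formulation with a Gaussian error model is an assumption for illustration, not the study's exact implementation:

    import numpy as np

    def log_bme(y_obs, prior_runs, sigma=0.1):
        """Monte Carlo estimate of the log Bayesian model evidence.

        prior_runs : (n_samples, n_obs) model outputs for parameter sets drawn from the prior
        y_obs      : (n_obs,) observed (or synthetic) data
        """
        ll = -0.5 * np.sum((prior_runs - y_obs) ** 2, axis=1) / sigma ** 2 \
             - 0.5 * y_obs.size * np.log(2 * np.pi * sigma ** 2)
        return np.logaddexp.reduce(ll) - np.log(len(ll))

    def model_weights(y_obs, prior_runs_per_model, sigma=0.1):
        """Posterior model weights (equal prior model probabilities assumed)."""
        lbme = np.array([log_bme(y_obs, runs, sigma) for runs in prior_runs_per_model])
        w = np.exp(lbme - lbme.max())
        return w / w.sum()

    # Sketch of the data-size analysis: for n_obs = 1, 2, ..., draw synthetic data from the
    # complex reference model, restrict all prior_runs to those n_obs measurement locations,
    # and record the weight assigned back to the reference model.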

How to cite: Schäfer Rodrigues Silva, A., Höge, M., Guthke, A., and Nowak, W.: Comparing different approaches for modeling reactive transport on aquifer-scale – which level of complexity is legitimate?, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-21508, https://doi.org/10.5194/egusphere-egu2020-21508, 2020.

D485 |
EGU2020-21890
Jonas Rothermel and Maike Schumacher

Physically-based Land Surface Models (LSMs) have deepened the understanding of the hydrological cycle and serve as the lower boundary layer in atmospheric models for numerical weather prediction. Like any numerical model, they are subject to various sources of uncertainty, including simplified model physics, unknown empirical parameter values and forcing errors, particularly in precipitation. Quantifying these uncertainties is important for assessing the predictive power of the model, especially in applications for environmental hazard warning. Data assimilation systems also benefit from realistic model error estimates.

In this study, the LSM NOAH-MP is evaluated over the Mississippi basin by running a large ensemble of model configurations with suitably perturbed forcing data and parameter values. For this, sensible parameter distributions are obtained by performing a thorough sensitivity analysis, with the most informative parameters identified beforehand by a screening approach. The ensemble of model outputs is compared against various hydrologic and atmospheric feedback observations, including SCAN soil moisture data, GRACE TWS anomaly data and AmeriFlux evapotranspiration measurements. The long-term aim of this study is to improve land-surface states via data assimilation and to investigate their influence on short- to medium-term numerical weather prediction. Thus, the uncertainty of the simulated model states, such as snow, soil moisture in various layers, and groundwater, is thoroughly studied to estimate the relative impact of possible hydrologic data sets in the assimilation.
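
The abstract does not name the screening method; purely as an illustration of how such a pre-screening can be set up, a simplified radial one-at-a-time variant of the elementary-effects (Morris-type) approach is sketched below, with all names and numbers being placeholders:

    import numpy as np

    def elementary_effects_screening(model, lower, upper, n_base=20, delta=0.2, seed=0):
        """Radial one-at-a-time screening for a scalar model output.

        lower, upper : (n_par,) parameter bounds; model maps a (n_par,) vector to a scalar.
        Returns mu_star, the mean absolute elementary effect per parameter.
        """
        rng = np.random.default_rng(seed)
        n_par = len(lower)
        ee = np.zeros((n_base, n_par))
        for t in range(n_base):
            x = rng.uniform(0.0, 1.0 - delta, size=n_par)        # base point in unit cube
            y0 = model(lower + x * (upper - lower))
            for i in range(n_par):                               # perturb one parameter at a time
                x_i = x.copy()
                x_i[i] += delta
                y1 = model(lower + x_i * (upper - lower))
                ee[t, i] = (y1 - y0) / delta
        return np.abs(ee).mean(axis=0)                           # large mu_star -> informative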

How to cite: Rothermel, J. and Schumacher, M.: Evaluation of a NOAH-MP Land Surface Model Ensemble over the Mississippi Basin, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-21890, https://doi.org/10.5194/egusphere-egu2020-21890, 2020.

D486 |
EGU2020-14597
Sabine M. Spiessl and Sergei Kucherenko

Probabilistic methods of higher-order sensitivity analysis provide a possibility for identifying parameter interactions by means of sensitivity indices. A better understanding of parameter interactions may help to better quantify uncertainties of repository models, which can behave in a highly nonlinear, non-monotonic or even discontinuous manner. Sensitivity indices can be estimated efficiently by the Random-Sampling High Dimensional Model Representation (RS-HDMR) metamodeling approach. This approach is based on truncating the ANOVA-HDMR expansion at second order, with the terms of the truncated expansion then approximated by orthonormal polynomials. By design, the sensitivity index of total order (SIT) in this method is approximated as the sum of the index of first order (SI1) plus all corresponding indices of second order (SI2’s) for a considered parameter. RS-HDMR belongs to a wider class of methods known as polynomial chaos expansion (PCE). PCE methods are based on Wiener’s homogeneous chaos theory published in 1938 and are widely used in metamodeling. Usually only a few terms are relevant in the PCE structure, and the Bayesian Sparse PCE method (BSPCE) makes use of this sparsity. Using BSPCE, SI1 and SIT can be estimated. In this work we used the SobolGSA software [1], which contains both the RS-HDMR and BSPCE methods.
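
The first-order part of the RS-HDMR construction can be illustrated compactly: each first-order component function is approximated by orthogonal (here Legendre) polynomials fitted to random samples, and SI1 is the share of the output variance it explains. The following is a stripped-down sketch of that principle, not the SobolGSA implementation:

    import numpy as np
    from numpy.polynomial import legendre

    def rs_hdmr_first_order(X, y, degree=3):
        """First-order sensitivity indices via an RS-HDMR-type polynomial approximation.

        X : (n_samples, n_par) inputs scaled to [0, 1]; y : (n_samples,) model output.
        """
        y_c = y - y.mean()
        var_y = y_c.var()
        si1 = np.zeros(X.shape[1])
        for i in range(X.shape[1]):
            xi = 2.0 * X[:, i] - 1.0                      # map to [-1, 1] for the Legendre basis
            coef = legendre.legfit(xi, y_c, degree)        # least-squares polynomial fit
            fi = legendre.legval(xi, coef)                 # first-order component function
            si1[i] = fi.var() / var_y
        return si1

    # In RS-HDMR, SIT is then approximated as SI1 plus the sum of the corresponding
    # second-order indices, obtained analogously from bivariate polynomial fits.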

We have analysed the sensitivities of a model for a generic LILW repository in a salt mine using both the RS-HDMR and the BSPCE approach. The model includes a barrier in the near field which is chemically dissolved (corroded) over time by magnesium-containing brine, resulting in a sudden significant change of the model behaviour and usually a rise of the radiation exposure. We investigated the model with two sets of input parameters: one with 6 parameters and one with 5 additional ones (LILW6 and LILW11 models, respectively). For the time-dependent analysis, 31 time points were used.

The SI1 indices calculated with both approaches agree well with those obtained from the well-established and reliable first-order algorithm EASI [2] in most investigations. The SIT indices obtained from the BSPCE method seem to increase with the number of simulations used to build the metamodel. The SIT time curves obtained from the RS-HDMR approach with an optimal choice of the polynomial coefficients agree well with those from the BSPCE approach only for relatively low numbers of simulations. As the BSPCE approach, in contrast to RS-HDMR, accounts for all orders of interaction, this may hint at the existence of third- or higher-order effects.

Acknowledgements

The work was financed by the German Federal Ministry for Economic Affairs and Energy (BMWi). We would also like to thank Dirk-A. Becker for his constructive feedback.

References

[1]         S. M. Spiessl, S. Kucherenko, D.-A. Becker, O. Zaccheus, Higher-order sensitivity analysis of a final repository model with discontinuous behaviour. Reliability Engineering and System Safety, doi: https://doi.org/10.1016/j.ress.2018.12.004, (2018).

[2]          E. Plischke, An effective algorithm for computing global sensitivity indices (EASI). Reliability Engineering and System Safety, 95: 354–360, (2010).

How to cite: Spiessl, S. M. and Kucherenko, S.: Comparison of two metamodeling approaches for sensitivity analysis of a geological disposal model, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-14597, https://doi.org/10.5194/egusphere-egu2020-14597, 2020.

D487 |
EGU2020-13897
Daniel Erdal, Sinan Xiao, Wolfgang Nowak, and Olaf A. Cirpka

Global sensitivity analysis and uncertainty quantification of nonlinear models may be performed using ensembles of model runs. However, already in moderately complex models, many combinations of parameters that appear reasonable by prior knowledge can lead to unrealistic model outcomes, like perennial rivers that fall dry in the model or simulated severe floods that have not been observed in the real system. We denote these parameter combinations with implausible outcome as “non-behavioral”. Creating a sufficiently large ensemble of behavioral model realizations can be computationally prohibitive if the individual model runs are expensive and only a small fraction of the parameter space is behavioral. In this work, we design a stochastic, sequential sampling engine that utilizes fast and simple surrogate models trained on past realizations of the original, complex model. Our engine uses the surrogate model to estimate whether a candidate realization will turn out to be behavioral or not. Only parameter sets with a reasonable certainty of being behavioral (as predicted by the surrogate model) are simulated using the original, complex model. For a subsurface flow model of a small south-western German catchment, we can show high accuracy in the surrogate model predictions regarding the behavioral status of the parameter sets. This increases the fraction of behavioral model runs (actually computed with the original, complex model) over the total number of complex-model runs to 20-90%, compared to 0.1% without our method (e.g., using brute-force Monte Carlo sampling). This notable performance increase depends on the choice of surrogate modeling technique. Towards this end, we consider both Gaussian Process Emulation (GPE) and models based on polynomials of active variables determined by active subspace decomposition as surrogate models. For the GPE-based surrogate model, we also compare random search and active learning strategies for the training of the surrogate model.
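
A minimal sketch of such a sequential sampling engine, in which a fast surrogate trained on past complex-model runs screens candidate parameter sets and only candidates predicted to be behavioral with sufficient certainty are passed to the complex model, is shown below. A Gaussian-process classifier stands in for the surrogate, and all function names, thresholds and the retraining schedule are illustrative assumptions rather than the study's implementation:

    import numpy as np
    from sklearn.gaussian_process import GaussianProcessClassifier

    def sequential_behavioral_sampling(complex_model_is_behavioral, sample_prior,
                                       n_init=50, n_target=200, p_accept=0.6, rng=None):
        """Collect behavioral parameter sets with surrogate-based pre-screening.

        Assumes the initial set contains both behavioral and non-behavioral runs.
        """
        rng = np.random.default_rng() if rng is None else rng
        X = np.array([sample_prior(rng) for _ in range(n_init)])
        y = np.array([complex_model_is_behavioral(x) for x in X])   # expensive model runs
        behavioral = list(X[y])
        while len(behavioral) < n_target:
            clf = GaussianProcessClassifier().fit(X, y)             # fast surrogate
            x_new = sample_prior(rng)
            # Run the complex model only if the surrogate is sufficiently confident.
            if clf.predict_proba(x_new.reshape(1, -1))[0, 1] >= p_accept:
                ok = complex_model_is_behavioral(x_new)
                X = np.vstack([X, x_new])
                y = np.append(y, ok)
                if ok:
                    behavioral.append(x_new)
        return np.array(behavioral)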

How to cite: Erdal, D., Xiao, S., Nowak, W., and Cirpka, O. A.: Effective sampling of behavioral subsurface parameter realizations assisted by surrogate models, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-13897, https://doi.org/10.5194/egusphere-egu2020-13897, 2020.

D488 |
EGU2020-17384
Lennart Schüler and Sabine Attinger

Streamflow observations are integrated signals of a catchment. These data are only weakly correlated with local observations (e.g. soil moisture and groundwater heads) or local parameters (e.g. hydraulic conductivity) of the catchment. On the one hand, this makes it next to impossible to estimate model parameters from streamflow observations alone. On the other hand, local observations only make parameter estimation possible in their immediate proximity. With data scarcity in mind, multi-variate data assimilation alone therefore has limited potential for solving the problem of estimating model parameters.
Therefore, we propose not to apply data assimilation to the model parameters directly, but to the global parameters of the multiscale parameter regionalization (MPR, Samaniego et al. 2010) approach. This approach relates a very limited number of global parameters to the model parameters through transfer functions. By doing so, the number of parameters to be estimated can be drastically reduced, saving computing time, and with robust transfer functions the local parameters can be estimated not only in the proximity of observations, but also throughout the catchment.
Using the DA-MPR approach, we investigate different experimental setups for estimating model parameters, e.g. a stationary cosmic-ray sensor vs. a mobile one, or how many local observations are actually needed to uniquely identify the model parameters.
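
The core of the MPR idea that the assimilation targets can be written down compactly: a handful of global coefficients enter transfer functions that map high-resolution catchment attributes to local model parameters, which are then upscaled to the model grid. A minimal sketch follows, where the linear transfer-function form, the attribute names and the block-averaging upscaling operator are illustrative, not mHM's actual functions:

    import numpy as np

    def mpr_parameter_field(sand, clay, global_params, block=10):
        """Map global parameters to a local parameter field via a transfer function + upscaling.

        sand, clay    : high-resolution catchment attribute grids (same shape)
        global_params : (g0, g1, g2) global coefficients -- the quantities updated by DA
        """
        g0, g1, g2 = global_params
        local = g0 + g1 * sand + g2 * clay            # transfer function at attribute resolution
        # Upscale to the (coarser) hydrological model grid, here by simple block averaging.
        ny, nx = local.shape
        coarse = local[: ny - ny % block, : nx - nx % block]
        coarse = coarse.reshape(ny // block, block, nx // block, block).mean(axis=(1, 3))
        return coarse

    # Data assimilation then perturbs and updates only (g0, g1, g2) instead of every grid cell,
    # and the transfer function regionalizes the update to the whole catchment.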

Samaniego L., R. Kumar, S. Attinger (2010): Multiscale parameter regionalization of a grid-based hydrologic model at the mesoscale. Water Resour. Res., 46, W05523, doi:10.1029/2008WR007327.

How to cite: Schüler, L. and Attinger, S.: A Direct Application of Data Assimilation to Multi-Scale Regionalized Parameters, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-17384, https://doi.org/10.5194/egusphere-egu2020-17384, 2020.