Development and application of decision support systems to aquifers and underground reservoirs require reliable and physically based methods to infer the key parameters controlling multiphase flow and contaminant fluxes of conservative or reactive substances in the subsurface. Underground environments are complex and extremely heterogeneous, exhibiting variations on a multiplicity of scales. Addressing heterogeneity in all its manifestations is the focus of exciting and intense forefront research and industrial activities.
This session (i) invites presentations on recent developments in understanding, measuring, and modelling subsurface flow and solute transport processes in both the saturated and unsaturated zones, as well as across boundaries; (ii) is aimed at providing an opportunity for specialists to exchange information and to introduce various existing and novel alternative deterministic and stochastic models of subsurface flow and transport to the general hydrological community, with critical and timely applications to environmental and industrially relevant settings.
Focus is placed on recent key developments in novel theoretical aspects and associated computational tools, the fate of emerging contaminants, and field/laboratory applications dealing with accurate and efficient prediction and quantification of uncertainty for flow and for conservative and reactive transport processes in the subsurface, in the presence of multiple sources of information at different scales, ranging from the pore level to the intermediate and basin scales.

This session is also organized to honor Ghislain de Marsily. Prof. Ghislain de Marsily will provide a solicited presentation on "Historical perspectives on the development of stochastic methods in groundwater modelling".

Convener: Monica Riva | Co-conveners: Jesús Carrera, Daniel Fernandez-Garcia, Xavier Sanchez-Vila, Craig T. Simmons
| Attendance Mon, 04 May, 14:00–15:45 (CEST)


Chat time: Monday, 4 May 2020, 14:00–15:45

Chairperson: Monica Riva, Jesus Carrera, Xavier Sanchez Vila, Daniel Fernandez Garcia, Craig Simmons
D433 |
Ulrich Maier, Alexandru Tatomir, and Martin Sauter

Reduction of atmospheric greenhouse gas emissions has become a main focus of research and policy debates and is likely to remain among the primary environmental concerns of the coming decades. One of several options is carbon capture and storage (CCS) after electricity production. Storage of carbon dioxide in geological reservoirs relies on three different processes: (i) filling of pore space within the reservoir by gaseous or supercritical CO2 (pore trapping), (ii) dissolution of CO2 into the formation water (solubility trapping), and (iii) precipitation of carbonate as a mineral phase (mineral trapping). The potential of the latter is considerably uncertain, but it probably offers the greatest long-term potential for carbon sequestration in the subsurface. Underlying concepts of geochemical equilibrium computation are described for the pressure and temperature conditions of deep reservoirs, up to 300°C and 1000 atmospheres. The geochemical codes Phreeqc and MIN3P have recently been upgraded for that purpose and were applied in the study. Models using field data from the sandstone formation of the Heletz oil field (Israel) are presented, focusing on the shift in the saturation index (SI) of carbonates due to injection of CO2. Alterations of the mineral phase over time become visible, and precipitation potentials were observed for the minerals Ankerite > Dolomite > Siderite ~ Calcite > Magnesite, and for the mineral Dawsonite during early stages when only Na+ is present at high ionic concentrations. The observed variability of water chemistry and of the database records introduces uncertainty, which was used as input to delineate the range of the mineralization potential. Simple approaches based on principal component analysis, leading to sensitivity coefficients, are shown.
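The saturation-index shift discussed above is computed internally by codes such as Phreeqc and MIN3P. As a minimal illustrative sketch only (hypothetical ion activities and an approximate log Ksp for calcite at 25°C, not values from the study), SI = log10(IAP/Ksp) can be evaluated as:

```python
import math

def saturation_index(ion_activities, stoichiometry, log_ksp):
    """SI = log10(IAP / Ksp): SI > 0 suggests a potential to precipitate,
    SI < 0 indicates undersaturation (dissolution is favoured)."""
    log_iap = sum(nu * math.log10(a) for a, nu in zip(ion_activities, stoichiometry))
    return log_iap - log_ksp

# Calcite, CaCO3 <=> Ca(2+) + CO3(2-): hypothetical activities of 1e-3 and 1e-5,
# with log Ksp of roughly -8.48 at 25 degrees C
si_calcite = saturation_index([1e-3, 1e-5], [1, 1], log_ksp=-8.48)  # 0.48, supersaturated
```

A full speciation code additionally corrects activities for ionic strength and recomputes equilibria at reservoir pressure and temperature, which is where the upgrades mentioned above come in.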

How to cite: Maier, U., Tatomir, A., and Sauter, M.: Hydrogeochemical modelling of mineral precipitation potentials in carbon capture and storage (CCS), EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-11680, https://doi.org/10.5194/egusphere-egu2020-11680, 2020

D434 |
Paula Rodriguez-Escales, Carme Barba, Xavier Sanchez-Vila, and Albert Folch

Redox potential measurements integrate multiple processes and factors related to the hydrochemistry of a water. Normally, by themselves, they do not provide enough information to describe all the processes occurring in a system, and they are considered only as an indicator that, combined with more detailed hydrochemistry, can provide information on the driving processes. There are different reasons why these measurements are not quantitatively valid. First of all, sampling plays an important role. The most common method to determine Eh in groundwater uses an Eh probe in a flow cell, which implies, by itself, mixing of waters. On the other hand, Eh reproducibility is also conditioned by the number of processes considered in a numerical model. Eh depends on several geochemical processes, which in turn depend on flow and heat transport. Recent advances in sensor technology have allowed the development of probes that measure Eh non-invasively and continuously.

With this in mind, in this work we intensively monitored an infiltration pond (in the context of Managed Aquifer Recharge) in order to develop a proper model to reproduce the Eh. The monitoring was based on the use of non-invasive Eh probes, which registered the Eh every 15 min for a year. During that year, four hydrochemical campaigns were also carried out in order to quantify the hydrochemistry of the site. The model, in turn, considered the flow of the system, heat transport, and a set of geochemical processes that also depend on temperature. The main processes were the generation of organic matter within the system itself, the oxidation of organic carbon through different TEAPs, nitrification, and different secondary geochemical processes related especially to iron and manganese geochemistry.

How to cite: Rodriguez-Escales, P., Barba, C., Sanchez-Vila, X., and Folch, A.: Modeling the redox potential during the infiltration in a recharge pond located in the Llobregat river basin, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-19505, https://doi.org/10.5194/egusphere-egu2020-19505, 2020

D435 |
| Highlight
Wolfgang Nowak, Teng Xu, Sebastian Reuschen, Harrie-Jan Hendricks Franssen, and Alberto Guadagnini

Geostatistical inversion modeling methods aim at characterizing spatial distributions of (typically hydraulic) heterogeneous properties from indirect information (e.g., piezometric heads, concentrations), while quantifying their uncertainties. Many methods have been developed, but only a few large intercomparison studies have been performed in the past decades. We present a benchmarking initiative for geostatistical inversion with the goal of enabling a truly objective and accurate intercomparison and testing of new and existing methods.

This initiative defines an agreed-upon set of benchmarking scenarios. The benchmarking set focuses on fully-saturated groundwater flow in multi-Gaussian log-hydraulic conductivity fields. Our study provides reference solutions and illustrates the high-end algorithms we advance and develop to compute these solutions on massive high-performance computing equipment. We rely on Markov chain Monte Carlo (MCMC) algorithms with a modified Metropolis-Hastings sampler, following the idea of preconditioned Crank-Nicolson MCMC (pCN-MCMC). In this technique, the acceptance probability of MCMC depends only on likelihood ratios and is independent of the geostatistical prior. This largely improves acceptance rates and thus reduces computational costs.

To further improve the accuracy and efficiency for Bayesian inversion of multi-Gaussian log-hydraulic conductivity fields, we combine pCN-MCMC with parallel tempering. Parallel tempering can handle the challenges associated with the need to explore large parameter spaces with possibly multi-modal distributions: it improves the efficiency of exploring the target posterior by exchange swaps between cold chains and hot chains that run in parallel, where the hot chains mainly explore the parameter space and colder chains exploit the identified high-likelihood regions.
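The pCN proposal and its prior-independent acceptance rule can be written compactly. The sketch below is illustrative only, not the authors' pCN-PT implementation: it assumes a standard Gaussian prior on the discretized parameter vector and a user-supplied log-likelihood, and the toy likelihood at the end is invented for the example.

```python
import numpy as np

def pcn_mcmc(log_like, n_dim, n_steps, beta=0.2, rng=None):
    """Preconditioned Crank-Nicolson MCMC under a standard Gaussian prior.

    Proposal: x' = sqrt(1 - beta^2) * x + beta * xi, with xi ~ N(0, I).
    The prior is preserved by the proposal, so acceptance depends only on
    the likelihood ratio, not on the (geostatistical) prior."""
    rng = np.random.default_rng(rng)
    x = rng.standard_normal(n_dim)
    ll = log_like(x)
    chain = np.empty((n_steps, n_dim))
    for i in range(n_steps):
        prop = np.sqrt(1.0 - beta**2) * x + beta * rng.standard_normal(n_dim)
        ll_prop = log_like(prop)
        if np.log(rng.uniform()) < ll_prop - ll:  # likelihood ratio only
            x, ll = prop, ll_prop
        chain[i] = x
    return chain

# toy Gaussian likelihood centred at 1: with the N(0, I) prior,
# the posterior is N(0.5, 0.5) in each dimension
chain = pcn_mcmc(lambda x: -0.5 * np.sum((x - 1.0) ** 2), n_dim=4, n_steps=5000, rng=0)
```

Parallel tempering then runs several such chains at flattened ("hot") versions of the likelihood and swaps states between them, as described above.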

Our new algorithm, hereafter termed pCN-PT, is tested against (a) accurate analytical solutions (kriging) in a high-dimensional, linear setting; (b) rejection sampling in a high-dimensional, non-linear problem with only a few measurements; and (c) multiple independent runs of pCN-MCMC in a high-dimensional, non-linear scenario with sufficient measurements. These tests are also performed on the established benchmarking scenarios. We invite all interested researchers to test and compare different inverse modeling methods in these benchmarking scenarios.

How to cite: Nowak, W., Xu, T., Reuschen, S., Hendricks Franssen, H.-J., and Guadagnini, A.: High-end solution techniques and accurate reference solutions: towards a community-wide benchmarking effort for stochastic inverse modeling of groundwater flow, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-19114, https://doi.org/10.5194/egusphere-egu2020-19114, 2020

D436 |
| Highlight
Alberto Guadagnini, Monica Riva, Shlomo P. Neuman, and Martina Siena

Characterization of spatial heterogeneity of attributes of porous media is critical in several environmental and industrial settings. Quantities such as permeability, porosity, or geochemical parameters of natural systems are typically characterized by remarkable spatial variability, their degree of heterogeneity being linked to the size of the observation/measurement/support scale as well as to length scales associated with the domain of investigation. Here, we address the way stochastic representations of multiscale heterogeneity can be employed to assess documented manifestations of scaling of statistics of hydrological and soil science variables. As such, we focus on perspectives associated with interpretive approaches to scaling of the main statistical descriptors of heterogeneity observed at diverse scales. We start from the geostatistical framework proposed by Riva et al. (2015), who rely on the representation of the heterogeneous structure of hydrological variables by way of a Generalized Sub-Gaussian (GSG) model. The latter describes the random field of interest as the product of a zero-mean, generally (but not necessarily) multi-scale Gaussian random field (G) and a subordinator (U), which is independent of G and consists of statistically independent, identically distributed non-negative random variables. The underlying Gaussian random field generally displays a multi-scale (statistical) nature which can be captured, for example, through a geostatistical description based on a Truncated Power Variogram (TPV) model. In this study we (i) generalize the original GSG model formulation to include alternative distributional forms of the subordinator and (ii) apply such a theoretical framework to analyze datasets associated with differing processes and observation scales.
These include (i) measurements of surface topography of a (millimeter-scale) calcite sample resulting from induced mineral dissolution and (ii) neutron porosity data sampled from a (kilometer-scale) borehole. We finally merge all of the above-mentioned elements within a geostatistical interpretation of the system based on the GSG approach, where a TPV model is employed to represent the underlying correlation structure. By doing so, we propose to rely on these models to condition the spatial statistics of such fields on multiscale measurements via a co-kriging approach.
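The defining construction Y = U * G can be sketched in a few lines. This is a minimal sketch under simplifying assumptions, not the full model: it uses a lognormal subordinator and, for brevity, an uncorrelated (white) Gaussian component rather than a spatially correlated, TPV-structured G; the parameter names are invented for the example.

```python
import numpy as np

def gsg_sample(n, sigma_g=1.0, alpha=0.3, rng=None):
    """Generalized Sub-Gaussian sample Y = U * G: G is a zero-mean Gaussian
    component (white noise here for brevity) and U is an iid non-negative
    (here lognormal) subordinator, independent of G."""
    rng = np.random.default_rng(rng)
    G = rng.normal(0.0, sigma_g, n)
    U = rng.lognormal(mean=0.0, sigma=alpha, size=n)
    return U * G

# the product has heavier-than-Gaussian tails: kurtosis exceeds the Gaussian
# value of 3 and grows with the subordinator parameter alpha
y = gsg_sample(200_000, rng=0)
```

For a lognormal subordinator the kurtosis of Y is 3*exp(4*alpha^2), which reduces to the Gaussian value 3 as alpha tends to 0, illustrating how the subordinator controls the heavy-tailed behaviour of increments.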


Riva, M., S.P. Neuman, and A. Guadagnini (2015), New scaling model for variables and increments with heavy-tailed distributions, Water Resour. Res., 51, 4623-4634, doi:10.1002/2015WR016998.

How to cite: Guadagnini, A., Riva, M., Neuman, S. P., and Siena, M.: Geostatistical representation of multiscale heterogeneity of porous media through a Generalized Sub-Gaussian model, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-11650, https://doi.org/10.5194/egusphere-egu2020-11650, 2020

D437 |
Jordi Bruno

In August 1977, Ghislain de Marsily, together with E. Ledoux, A. Barbreau, and J. Margat, published in Science an article with the provocative title "Nuclear Waste Disposal: Can the Geologist Guarantee Isolation?". It was a joint publication between the École des Mines de Paris, IPSN, CEA, and BRGM, and it can be regarded as the foundation of the French scientific programme on High Level Nuclear Waste (HLNW) management. The paper explored the various alternatives for HLNW management and concluded that deep geological disposal was the most feasible. The authors also discussed the key processes controlling radionuclide migration from a geological repository and concluded that retardation by rock sorption (ion exchange) was the critical parameter, provided the rest of the waste and groundwater parameters were kept within reasonable values.

Since then, and particularly in the 1980s and 1990s, Ghislain de Marsily has played a fundamental role in devising a strategy towards safe geological nuclear waste disposal in France, Europe, and the rest of the world. He has done this through a combination of key scientific contributions and his participation in many scientific committees concerned with HLNW management.

In my presentation I will discuss how the scientific, but also the personal contributions of Ghislain de Marsily helped to pave the way for the development of HLNW concepts and programmes all around the world.

How to cite: Bruno, J.: The impact of Ghislain de Marsily in Nuclear Waste Management, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-6576, https://doi.org/10.5194/egusphere-egu2020-6576, 2020

D438 |
Andrew Frampton and Liangchao Zou

There is a need for improved understanding of the mechanisms controlling solute transport in fractured crystalline rocks in order to address the long-term safety analysis of repositories for spent nuclear fuel. In this contribution, flow and transport in three-dimensional discrete fracture networks with internal heterogeneity in aperture and permeability are investigated using a numerical DFN model. The fracture networks are obtained using field data on sparsely fractured crystalline rock from the Swedish candidate repository site for spent nuclear fuel. Then, heterogeneity textures with different correlation lengths and variances are created and mapped onto each individual fracture of the network to represent internal fracture roughness. We demonstrate how the structure and variability of textures at the scale of individual fractures lead to different transport and dispersion behaviour at the scale of the network. Key thresholds for cases where flow dispersion is controlled by single-fracture heterogeneity versus network-scale heterogeneity are identified. Furthermore, we highlight enhanced flow channelling for cases where small-scale structure continues across intersections in a network, and we highlight challenges for the extension to large-scale and site-specific modelling.

How to cite: Frampton, A. and Zou, L.: Dispersion in small-scale discrete fracture networks with internal fracture roughness: Challenges for site-scale modelling, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-21854, https://doi.org/10.5194/egusphere-egu2020-21854, 2020

D439 |
Guillem Sole-Mari, Daniel Fernàndez-Garcia, Xavier Sanchez-Vila, and Diogo Bolster

Hydrological models are unable to fully resolve subsurface flow and transport down to the microscale. Instead, modelers usually work with upscaled flow and transport properties that represent the behavior of the system at a given coarse scale. While this approach is justified from a practical standpoint, it disregards the local heterogeneity of porous media flows, which tends to produce mixing-limited reactive transport behaviors that cannot be captured by classical modeling approaches. While some innovative methods have been suggested in the past to address this problem, none of them has proposed a mathematical formulation which can potentially reproduce the generation, transport, and decay of local concentration fluctuations and their impact on chemical reactions for general initial and boundary conditions. Here, we propose a Lagrangian approach based on the random motion of fluid particles that locally mix following a Multi-Rate Interaction by Exchange with the Mean (MRIEM) formulation. Concentration fluctuations in the proposed model display the typical behavior associated with transport in porous media under mixing-limited conditions. Experimental results of reactive transport are successfully reproduced by the model.
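The core of interaction-by-exchange-with-the-mean mixing is relaxation of each particle's concentration toward the local mean. The following is a sketch of that idea with a hypothetical multi-rate discretization, not the authors' MRIEM formulation; rates and weights are invented for the example.

```python
import numpy as np

def mriem_mix(conc, rates, weights, dt):
    """One mixing step: particle concentrations relax toward the ensemble
    mean as a weighted combination of exponential rates (weights sum to 1).
    The mean concentration is conserved while the variance decays."""
    cbar = conc.mean()
    relax = sum(w * np.exp(-k * dt) for k, w in zip(rates, weights))
    return cbar + (conc - cbar) * relax

# four particles, half at concentration 0 and half at 1: mixing pulls
# them toward the mean of 0.5 at a fast and a slow rate simultaneously
c = np.array([0.0, 0.0, 1.0, 1.0])
c_new = mriem_mix(c, rates=[0.1, 10.0], weights=[0.5, 0.5], dt=1.0)
```

In a full particle-tracking scheme such a mixing step would alternate with random-walk displacements and reaction steps, with the "mean" taken over nearby particles rather than the whole ensemble.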

How to cite: Sole-Mari, G., Fernàndez-Garcia, D., Sanchez-Vila, X., and Bolster, D.: Reactive transport in porous media with local mixing limitation: A Lagrangian modeling approach, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-22079, https://doi.org/10.5194/egusphere-egu2020-22079, 2020

D440 |
Emanuela Bianchi Janetti, Monica Riva, and Alberto Guadagnini

We introduce, develop, and test a novel Groundwater Probabilistic Risk Model (GPRM) aimed at assessing (and preventing) adverse outcomes of water resources management and exploitation. We apply the GPRM to a highly heterogeneous regional field case located in Northern Italy. Different risk pathways are formally represented in a fault tree model, which enables identification of all basic events contributing to an (undesired) system failure. The latter is quantified in terms of depletion of a natural spring system that represents a key feature of the considered groundwater system. The proposed GPRM makes it possible to include the effect of multiple sources of uncertainty in our knowledge and description of the system on the evaluation of the overall probability of system failure under different pumping schemes. In this context, we consider two probabilistic models based on different reconstructions of the aquifer geological structure. In each conceptual model, the hydraulic conductivity of the geomaterials composing the aquifer and the boundary conditions are affected by uncertainty. Our results demonstrate that the application of the GPRM to the field case allows us (i) to quantify the risk of spring depletion due to increasing exploitation of the aquifer; (ii) to quantify how different sources of uncertainty (conceptual model uncertainty and model parameter uncertainty) affect this risk; (iii) to determine the optimal pumping scheme; and (iv) to identify the most vulnerable springs, where depletion first occurs.
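In a fault tree, the top-event (system failure) probability is assembled from basic-event probabilities through logic gates. For independent basic events the two standard gates reduce to simple products; the sketch below is illustrative only, and the example events and probabilities are invented, not taken from the GPRM's actual tree.

```python
def and_gate(probs):
    """All independent basic events must occur: product of probabilities."""
    p = 1.0
    for pi in probs:
        p *= pi
    return p

def or_gate(probs):
    """At least one independent basic event occurs: complement of the
    product of complements."""
    p = 1.0
    for pi in probs:
        p *= (1.0 - pi)
    return 1.0 - p

# hypothetical tree: failure if (intensive pumping AND dry year) OR
# (conductivity of a key geomaterial below a critical threshold)
p_fail = or_gate([and_gate([0.3, 0.2]), 0.05])
```

Parametric and conceptual-model uncertainty enter by treating the basic-event probabilities themselves as uncertain and propagating them (e.g., by Monte Carlo) through the gate structure.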

How to cite: Bianchi Janetti, E., Riva, M., and Guadagnini, A.: Natural springs’ protection and probabilistic risk assessment under uncertain conditions, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-8618, https://doi.org/10.5194/egusphere-egu2020-8618, 2020

D441 |
Ana Gonzalez-Nicolas Alvarez, Wolfgang Nowak, Michael Sinsbeck, and Marc Schwientek

Commonly, chemical catchment regimes are described by a simple regression slope of log-concentrations versus log-discharges measured in the catchment outlet river. The slope of these plots defines the chemical regime of a catchment. A slope of -1 corresponds to a constant contaminant release diluted by rainfall (an unrealistic extreme, but needed as a baseline), whereas a slope of 0 indicates chemostatic behaviour in the catchment, i.e. washout of contaminants at a constant concentration. However, actual time-series measurements of discharge and concentration conflict with this naive representation, since the measurements show temporal hysteresis that defies regression assumptions (i.e., that regression residuals must be uncorrelated). To represent this temporal interaction beyond regression, we design a simple stochastic time-series model that accounts for fluctuating concentration release and transport with memory. In this work, we also establish how to obtain the observation data required for a robust estimation of the slope with the least effort. To show the capability of our proposed model and method, we apply a retrospective optimal design of experiments to a high-frequency series of nitrate concentrations (collected by online probes) and discharge from a real catchment in Germany. We thin out the data by applying frequency- and event-based monitoring strategies to identify the key components of the strategies that best predict the catchment behavior. Results indicate that the catchment under study (the Ammer catchment in southwestern Germany) is relatively close to a chemostatic catchment and that our stochastic model indeed provides more accurate results for small data sets. Optimal data collection schemes for this purpose should also be event-based, considering both high and low extremes of discharge spread out over time.
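The regime classification above reduces to the slope of an ordinary least-squares fit in log-log space. A minimal sketch with synthetic data (illustrative only, not the Ammer dataset):

```python
import numpy as np

def cq_slope(discharge, concentration):
    """Slope b of log10(C) = a + b * log10(Q):
    b near -1 suggests dilution of a constant release, b near 0 chemostatic."""
    b, a = np.polyfit(np.log10(discharge), np.log10(concentration), 1)
    return b

rng = np.random.default_rng(1)
q = 10 ** rng.uniform(-1, 1, 200)        # synthetic discharges over two decades

slope_chemostatic = cq_slope(q, np.full(200, 5.0))  # constant C: slope ~ 0
slope_dilution = cq_slope(q, 5.0 / q)               # C inversely prop. to Q: slope ~ -1
```

The hysteresis argument in the abstract is precisely that real C-Q pairs violate the uncorrelated-residual assumption behind this fit, which is what the proposed stochastic time-series model addresses.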

How to cite: Gonzalez-Nicolas Alvarez, A., Nowak, W., Sinsbeck, M., and Schwientek, M.: Characterize the catchment regime by applying optimal monitoring strategies, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-4843, https://doi.org/10.5194/egusphere-egu2020-4843, 2020

D442 |
Valérie Plagnes, David Quirt, Antonio Benedicto, and Patrick Ledru

A multidisciplinary approach combining a groundwater hydrogeochemical survey and a 3D groundwater flow model was applied to unconformity-type U mineralization in the Athabasca Basin (Canada) as a new supplementary guide for uranium exploration. This approach was developed at the McClean Lake Operation site (eastern part of the basin), where several uranium deposits have already been mined and others have not yet been mined. The goal of ongoing exploration in this area is to find new deposits in the vicinity of known deposits to facilitate possible future mining.

Groundwater levels were measured in 60 wells, and groundwater samples were collected from 31 of these wells; some of them are screened in the bedrock below the unconformity and others in the sandstones above it. Among these wells, we included 4 located near a known ore body (SABRE sector) to better evaluate the potential of our approach to identify the presence of U mineralization.

The results show that, in this study area, the U concentration and saturation index maps are not good indicators of U mineralization, as U concentrations are very low for all samples due to the strongly reducing conditions. However, 5 of the wells show a remarkable geochemical composition: the highest total dissolved solids, high Cl concentrations, and strong relationships between Cl and the concentrations of Na, K, Mg, Ca, and Fe, as well as Sr and Ba, suggesting that these ions may have come from a common source. Four of these five samples belong to the deposit of the SABRE sector, but the fifth well is located up-gradient of this region, far from any known ore body. A 3D groundwater model was developed for the entire basin, and the flow path ending at this well screen was traced back to its source by reverse particle tracking. In the structure of the groundwater model, graphite-rich fault zones are considered the main geological structures controlling groundwater flow. The up-gradient geochemical plume deciphered from the backward flow paths allows the identification of new exploration targets. This approach appears to be an appropriate method for prioritizing locations for future exploration drilling.

How to cite: Plagnes, V., Quirt, D., Benedicto, A., and Ledru, P.: Hydrogeological modelling applied to mineral exploration, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-22496, https://doi.org/10.5194/egusphere-egu2020-22496, 2020

D443 |
Nicolae Suciu, Cristian Daniel Alecsa, Imre Boros, Florian Frank, Peter Knabner, Mihai Nechita, Alexander Prechtel, and Andreas Rupp

Solving the flow problem is the first step in modeling contaminant transport in natural porous media formations. Since typical parameters for aquifers often lead to advection-dominated transport problems, accurate flow solutions are essential for reliable simulations of the effective dispersion of the solute plumes. The numerical feasibility of the flow problem for realistic parameters accounting for the heterogeneity of the aquifer and the spatial scale of the transport problem is addressed in a benchmark study.

The study aims to investigate the accuracy and convergence properties of several numerical approaches for simulating steady-state flows in heterogeneous aquifers. Finite difference, finite element, discontinuous Galerkin, spectral, and random walk methods are tested on two-dimensional benchmark flow problems. The heterogeneity of the aquifer system is described by log-normal hydraulic conductivity fields with Gaussian and exponential correlation structures. For a given integral scale, both correlation models predict the same effective coefficients, but they pose very different numerical challenges: while the Gaussian correlation ensures the sample smoothness of the fields, the exponential correlation does not fulfil the theoretical requirements and the numerical representations of the samples are rather noisy.

Realizations of log-normal hydraulic conductivity fields are generated with a Kraichnan algorithm in closed form as finite sums of random periodic modes, which allow direct code verification by comparisons with manufactured reference solutions. The quality of the methods is assessed for increasing variance of the log-hydraulic conductivity fields, which quantifies the heterogeneity, and for different numbers of random modes, which account for the spatial scale of the simulation. Experimental orders of convergence are calculated from successive refinements of the grid. The numerical methods are further validated by comparisons between statistical inferences obtained from Monte Carlo ensembles of numerical solutions and theoretical first-order perturbation results.
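A Kraichnan-type randomization generator of the kind described above can be sketched as follows. This is a generic sketch, not the benchmark's generator: it assumes a Gaussian covariance exp(-|r|^2/lambda^2), for which the wave-vector components are normal with standard deviation sqrt(2)/lambda (the exponential correlation requires a different spectral sampling), and all parameter values are illustrative.

```python
import numpy as np

def kraichnan_lnK(x, y, var=1.0, lam=1.0, n_modes=100, rng=None):
    """Gaussian random field as a finite sum of random cosine modes:
    f(x) = sqrt(2*var/N) * sum_j cos(k_j . x + phi_j),
    with wave vectors k_j drawn from the spectral density of the target
    covariance (here Gaussian) and phases phi_j uniform on [0, 2*pi)."""
    rng = np.random.default_rng(rng)
    k = rng.normal(0.0, np.sqrt(2.0) / lam, size=(n_modes, 2))
    phi = rng.uniform(0.0, 2.0 * np.pi, n_modes)
    x = np.asarray(x)[..., None]
    y = np.asarray(y)[..., None]
    modes = np.cos(k[:, 0] * x + k[:, 1] * y + phi)
    return np.sqrt(2.0 * var / n_modes) * modes.sum(axis=-1)

# evaluate ln K on a 20-integral-scale square; exp(lnk) is the conductivity
xx, yy = np.meshgrid(np.linspace(0, 20, 80), np.linspace(0, 20, 80))
lnk = kraichnan_lnK(xx, yy, var=1.0, lam=1.0, n_modes=256, rng=42)
```

Because the realization is available in closed form at any point, manufactured reference solutions can be evaluated exactly on arbitrary grids, which is what enables the direct code verification mentioned above.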

It is found that, while for Gaussian correlation of the log-conductivity field all the methods perform well, in the exponential case their accuracy deteriorates and, for large variance and numbers of modes, the benchmark problems are practically not solvable with reasonably large computing resources for any of the methods considered in this study.

How to cite: Suciu, N., Alecsa, C. D., Boros, I., Frank, F., Knabner, P., Nechita, M., Prechtel, A., and Rupp, A.: Numerical benchmark study for flow in highly heterogeneous aquifers, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-10504, https://doi.org/10.5194/egusphere-egu2020-10504, 2020

D444 |
Buse Yetişti, Nadim K Copty, Paolo Trinchero, and Xavier Sanchez-Vila

Pumping tests are often used for the estimation of subsurface flow parameters. Research has indicated that traditional geostatistical techniques expressed in terms of two-point correlations (i.e., the covariance of flow parameters at two points is only a function of separation distance) may not be adequate to fully represent complex patterns of flow and transport in heterogeneous subsurface systems. To address this issue, the concept of flow connectivity has been introduced to describe how different regions of the aquifer relate to each other. In this study, the impact of point-to-point flow connectivity on radially convergent flow tests towards a well is investigated numerically. A Monte Carlo approach is adopted whereby a large number of heterogeneous aquifer systems with different levels of connectivity (Gaussian, connected high-transmissivity fields, and connected low-transmissivity fields) are synthetically generated and then used to simulate pumping tests. Various test interpretation methods are then used to estimate apparent flow parameters from the time-drawdown curves and to examine how the estimated parameters relate to the underlying heterogeneous aquifer systems. Results indicate that the transmissivity estimated using only early-time drawdown data is dominated by the point transmissivity distribution in the vicinity of the well. The estimated transmissivity gradually approaches the geometric mean of the full transmissivity field as a longer time-drawdown dataset is included in the interpretation. On the other hand, the storage coefficient estimated from late drawdown data is strongly sensitive to aquifer point-to-point flow connectivity and to the relative locations of the observation and pumping wells. The relations between the estimated storage coefficient and different aquifer connectivity functions are also examined.
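One classical interpretation method of the kind applied to such time-drawdown curves is the Cooper-Jacob straight-line fit of late-time drawdown against log time. The sketch below is self-checking on synthetic data generated from the Cooper-Jacob approximation itself; the parameter values are hypothetical, not from the study's fields.

```python
import numpy as np

def cooper_jacob(t, s, Q, r):
    """Estimate apparent transmissivity T and storativity S from late-time
    drawdown s(t) at distance r from a well pumping at rate Q, using
    s = (2.303*Q / (4*pi*T)) * log10(2.25*T*t / (r**2 * S))."""
    slope, intercept = np.polyfit(np.log10(t), s, 1)
    T = 2.303 * Q / (4.0 * np.pi * slope)
    t0 = 10.0 ** (-intercept / slope)   # time at which the line crosses s = 0
    S = 2.25 * T * t0 / r**2
    return T, S

# synthetic drawdowns from known T = 1e-3 m2/s, S = 1e-4, Q = 1e-2 m3/s, r = 10 m
T_true, S_true, Q, r = 1e-3, 1e-4, 1e-2, 10.0
t = np.logspace(2, 5, 50)
s = 2.303 * Q / (4 * np.pi * T_true) * np.log10(2.25 * T_true * t / (r**2 * S_true))
T_est, S_est = cooper_jacob(t, s, Q, r)   # recovers T_true and S_true
```

In a heterogeneous field the same fit returns apparent values: truncating t to early times weights the near-well transmissivity, which is exactly the early-time behaviour reported above.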

How to cite: Yetişti, B., Copty, N. K., Trinchero, P., and Sanchez-Vila, X.: Impact of Flow Connectivity on the Interpretation of Pumping Test Data, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-20299, https://doi.org/10.5194/egusphere-egu2020-20299, 2020

D445 |
Marco Bianchi, Andrew Hughes, Majdi Mansour, Johanna Michaela Scheidegger, and Christopher Jackson

The Chalk is the most important regional aquifer in England, supplying the majority of the groundwater used in the country. Traditionally, the Chalk has been interpreted as a dual-porosity aquifer consisting of a low-permeability, high-porosity matrix and a fracture component with relatively high secondary permeability allowing groundwater flow. However, these two components alone cannot fully explain the groundwater flow regime and aquifer productivity, indicating that the distribution of the hydrogeological properties results from a more complex interplay of several regional and local factors. For instance, transmissivity generally exhibits a non-linear decline with depth controlled by variations in the spacing and aperture of the primary and secondary (solution) fractures. Topography is another important regional factor, with generally higher transmissivity (T) and storage coefficient (S) values within valleys and lower values in the interfluves. The topographic factor is widely recognised and has been applied in several previous numerical modelling studies. However, these studies do not consider the local variability exhibited within an extensive dataset of more than 1000 pumping tests; instead, local adjustments of the initial topography-based T and S distributions are made during the calibration step of the model. In this work, a hybrid geostatistical approach has been developed and applied to model the distribution of the hydrogeological properties of the Chalk. The approach combines, for the first time for the Chalk, local hard data from pumping tests with soft data accounting for the regional topographic trend.
In particular, similarly to the classic regression kriging approach, stochastic realisations of the T distribution in the unconfined region of the Chalk are generated from the combination of two components: 1) a non-linear deterministic model of the relationship between measured T values and the distance to valleys; 2) a sequential Gaussian simulation (SGS) component generating equally probable realisations of the residuals conditioned to the local data. Traditional conditional sequential Gaussian simulation was instead used to generate T and S spatial distributions in the confined region. To test the representativeness of the generated distributions, realisations of the hydrogeological parameters were used in groundwater flow simulations based on a transient 2-D finite-difference model coupled to a regional recharge model. Comparison between observed and simulated groundwater levels and river flows at reference locations showed a generally good agreement. The model was then used to quantify the importance of local hydrogeological data for improving model predictions versus alternative conceptualisations based solely on regional trends and model calibration.
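The two-component construction can be sketched as follows. This is a sketch under stated assumptions, not the study's implementation: the exponential trend form, the parameter values, and the unconditional Gaussian residuals (standing in for the conditioned SGS component) are all hypothetical.

```python
import numpy as np

def hybrid_logT_realisation(dist_to_valley, a=-2.0, decay=0.5, resid_std=0.4, rng=None):
    """One realisation of log10(T) as trend + residual: a deterministic
    decline of transmissivity with distance to the nearest valley, plus a
    stochastic residual component (unconditional here for brevity)."""
    rng = np.random.default_rng(rng)
    trend = a * (1.0 - np.exp(-decay * dist_to_valley))  # 0 in valleys, tends to a far away
    resid = rng.normal(0.0, resid_std, np.shape(dist_to_valley))
    return trend + resid

# one realisation along a transect from a valley floor into the interfluve
d = np.linspace(0.0, 10.0, 500)
logT = hybrid_logT_realisation(d, rng=7)
```

In the actual approach the residual field is simulated by SGS conditioned to the pumping-test data, so realisations honour the local measurements while the trend carries the regional topographic signal.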

How to cite: Bianchi, M., Hughes, A., Mansour, M., Scheidegger, J. M., and Jackson, C.: Stochastic hydrogeological parameterisation and modelling of the Chalk of England, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-18685, https://doi.org/10.5194/egusphere-egu2020-18685, 2020

D446 |
Bhavya Ravinder and Elango Lakshmanan

A well-designed environmental monitoring plan is essential for the safety of uranium mining and processing operations. Evaluating the possible uncertainties in a numerical model enhances the model output and increases the reliability of the model results. For a radionuclide transport model, the distribution coefficient is a sensitive parameter and a major source of uncertainty in the results. In this study, an approach is presented to quantify the uncertainty contributed by the distribution coefficient for an engineered tailings pond in Northern Karnataka, India. Probabilistic analyses, namely the Response Surface Method and Monte Carlo simulation, are used to propagate the uncertainty. The study considers the uncertainty associated with the intrinsic heterogeneity of natural systems and estimates the probability that the dose rate through the drinking water pathway around the tailings pond exceeds the WHO guidelines for drinking water. The radionuclides considered are 238U, 234U, 230Th and 226Ra. The approach can be used to assess the impact of the distribution coefficient on radionuclide transport models.

Key words: Numerical modelling, Tailings pond area, Uranium mining, Uncertainty, Distribution coefficient
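A minimal Monte Carlo sketch of the propagation step, assuming a lognormal distribution coefficient, 1-D advective transport of 226Ra with decay and retardation, and placeholder site parameters (none of these values are from the study):

```python
import numpy as np

rng = np.random.default_rng(0)
N = 100_000

# All parameter values below are illustrative placeholders, not site data.
Kd = rng.lognormal(mean=np.log(5.0), sigma=0.8, size=N)  # distribution coefficient (L/kg)
rho_b, theta = 1.6, 0.35             # bulk density (kg/L), porosity (-)
R = 1.0 + rho_b * Kd / theta         # retardation factor

v, L = 10.0, 200.0                   # groundwater velocity (m/y), travel distance (m)
lam = np.log(2) / 1600.0             # decay constant of 226Ra (1/y), T1/2 ~ 1600 y
C0 = 1.0                             # source activity concentration (Bq/L)
C_well = C0 * np.exp(-lam * L * R / v)   # 1-D advection with decay and retardation

# Annual dose (mSv/y): 730 L/y intake x 2.8e-7 Sv/Bq ingestion DCF x 1e3 mSv/Sv
dose = C_well * 730.0 * 2.8e-7 * 1e3
p_exceed = np.mean(dose > 0.1)           # WHO reference level 0.1 mSv/y
```

The single exceedance probability summarises how uncertainty in Kd alone translates into uncertainty in the dose rate at the receptor.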


How to cite: Ravinder, B. and Lakshmanan, E.: Influence of distribution co-efficient on radionuclide transport modelling of uranium from a tailings pond in northern Karnataka, India, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-1194, https://doi.org/10.5194/egusphere-egu2020-1194, 2019

D447 |
Judith Eeckman, Hélène Roux, Bertrand Bonan, Clément Albergel, and Audrey Douniot

The representation of soil moisture is a key factor for the simulation of flash floods in the Mediterranean region. The MARINE hydrological model is a distributed model dedicated to flash flood simulation. Recent developments of the MARINE model led to an improved representation of subsurface flow: on the one hand, transfers through the subsurface take place in a homogeneous soil column based on the volumetric soil water content instead of the water height; on the other hand, the soil column is divided into two layers, representing respectively the upper soil layer and the deep weathered rocks. The aim of this work is to assess the performance of these new representations of subsurface flow with respect to the soil saturation dynamics during flash flood events. The performance of the model is evaluated against three soil moisture products: i) the gridded soil moisture product provided by the LDAS-Monde assimilation chain; LDAS-Monde is based on the ISBA-A-gs land surface model and integrates high-resolution remote sensing data for vegetation from the Copernicus Global Land Service through data assimilation; ii) the upper soil moisture measurements from the SMOSMANIA observation network; iii) the satellite-derived surface soil moisture data from Sentinel-1. The case study covers two French Mediterranean catchments impacted by flash flood events over the 2017-2019 period, where one SMOSMANIA station is available. Additional tests for the initialisation of the MARINE water content of the two soil layers are assessed. Results show, first, that the soil moisture dynamics both provided by LDAS-Monde and simulated for the upper soil layer in MARINE are locally consistent with the SMOSMANIA observations. Secondly, the use of soil water content instead of water height to describe lateral flows in MARINE is clearly more relevant with respect to both the LDAS-Monde simulations and the SMOSMANIA stations.
The dynamics of the deep-layer moisture content also appear to be consistent with the LDAS-Monde product for deeper layers. However, the bias in these values strongly depends on the calibration of the new two-layer model. The opportunity of improving the two-layer model calibration is then discussed. Finally, the impact of the soil water content initialisation is shown to be significant mainly during the flood rising limb, and also to depend on the model calibration. In conclusion, the new developments presented for the representation of subsurface flow in the MARINE model appear to enhance the soil moisture simulation during flash floods, with respect to both the LDAS-Monde product and the SMOSMANIA observation network.
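The two-layer idea can be sketched as a toy column model in which percolation feeds the deep weathered-rock layer and lateral flow is driven by the water content of each layer. This is a stand-in for MARINE's actual formulation; all parameters and process equations below are invented for illustration.

```python
import numpy as np

def two_layer_column(precip, dt=1.0,
                     theta_s=(0.45, 0.40),   # saturated water content (-)
                     depth=(0.3, 1.5),       # layer thicknesses (m)
                     k_perc=0.05, k_lat=0.2):
    """Toy two-layer soil column: the upper layer feeds the deep weathered-rock
    layer by percolation; lateral (subsurface) flow is driven by the storage of
    each layer. Parameters are illustrative, not MARINE's."""
    S = np.array([0.1, 0.2]) * np.array(depth)   # initial storage (m)
    cap = np.array(theta_s) * np.array(depth)    # storage capacity (m)
    theta_hist, q_lat = [], []
    for p in precip:
        S[0] = min(S[0] + p * dt, cap[0])        # infiltration (excess ignored here)
        perc = min(k_perc * S[0] * dt, cap[1] - S[1])   # percolation upper -> lower
        S[0] -= perc
        S[1] += perc
        q = k_lat * S * dt                       # lateral flow from each layer
        S = S - q
        theta_hist.append(S / np.array(depth))   # volumetric water content
        q_lat.append(q.sum())
    return np.array(theta_hist), np.array(q_lat)

theta, q = two_layer_column(np.array([0.01, 0.0, 0.02, 0.0, 0.0]))
```

Tracking volumetric water content rather than water height is what allows the simulated state to be compared directly with soil moisture products such as LDAS-Monde or SMOSMANIA.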

How to cite: Eeckman, J., Roux, H., Bonan, B., Albergel, C., and Douniot, A.: An assessment of soil moisture in the MARINE flash flood model using in situ measurements, reanalysis and satellite derived estimates, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-1611, https://doi.org/10.5194/egusphere-egu2020-1611, 2019

D448 |
| Highlight
Craig T. Simmons

Professor Ghislain de Marsily is an internationally renowned scientist famed for his contributions to groundwater hydrology and water management. He is a pioneer in the development of stochastic hydrogeology. This presentation will outline and explore the significant contributions made by de Marsily to hydrogeology as a whole and to stochastic hydrogeology in particular. It will examine the effect of his work on defining the discipline of hydrogeology as we know it today, and will go on to show the significant impact his students and colleagues continue to have, inspired by his passion, ideas and enthusiasm for a more sustainable, equitable future for all.

How to cite: Simmons, C. T.: In Honour of Distinguished Scientist and Seminal Hydrogeologist Professor Ghislain de Marsily, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-6514, https://doi.org/10.5194/egusphere-egu2020-6514, 2020

D449 |
| Highlight
Jesús Carrera

I review early developments of the stochastic modeling approach. It is generally believed to be an American contribution. Indeed, North Americans (notably Lynn Gelhar and Allan Freeze, but also Eduardo Alonso) pointed to the importance of spatial variability of hydraulic conductivity in controlling large-scale water flow and solute transport in the mid-1970s (Matheron's much earlier 1967 solution did not become broadly known until much later). However, the formulation of an approach to solve the problem was the result of work by French mining engineers at Fontainebleau. They had developed the field of geostatistics, initially for the assessment of mineral reserves, and it was natural to apply these concepts to groundwater. It was Ghislain de Marsily who framed the basic concepts of the geostatistical approach to address spatial variability, which remains essentially unchanged to this day.

How to cite: Carrera, J.: Development of the stochastic approach to groundwater hydrology: a personal account of G. de Marsily contributions., EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-6951, https://doi.org/10.5194/egusphere-egu2020-6951, 2020

D450 |
Ryma Aissat, Alexandre Pryet, Marc Saltel, and Alain Dupuy

Large-scale, physically-based groundwater models have been used for many years for water resources management and decision support. Improving the accuracy and reliability of these models is a constant objective. The characterisation of model parameters, in particular spatially heterogeneous hydraulic properties, is a challenge. Parameter estimation algorithms can now manage numerous model runs in parallel, but the operation remains, in practice, largely constrained by the computational burden. A large-scale model of the sedimentary, multilayered aquifer system of North Aquitania (MONA), in South-West France, developed by the French Geological Survey (BRGM), is used here to illustrate the case. We focus on the estimation of distributed parameters and investigate the optimum parameterisation given the level of spatial heterogeneity we aim to characterise, the available observations, the model run time, and the computational resources. Hydraulic properties are estimated with pilot points. Interpolation is conducted by kriging; the variogram range and pilot point density are set according to the modelling purposes and a series of constraints. Popular gradient-based parameter estimation methods such as the Gauss–Marquardt–Levenberg algorithm (GLMA) depend on the integrity of the Jacobian matrix. We investigate the trade-off between strict convergence criteria, which ensure a better integrity of the derivatives, and loose convergence criteria, which reduce computation time. The results obtained with the classical method (GLMA) are compared with those of an emerging method, the Iterative Ensemble Smoother (IES). Some guidelines are eventually provided for parameter estimation in large-scale, multilayered groundwater models.
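The pilot-point interpolation step can be sketched in 1-D with ordinary kriging. The covariance model, range, and log-K values below are illustrative, not those of the MONA model.

```python
import numpy as np

def ordinary_kriging(xp, zp, xg, sill=1.0, rng_len=5.0):
    """Interpolate log-K pilot-point values zp at locations xp onto a grid xg
    by ordinary kriging with an exponential covariance (1-D sketch of the
    pilot-point parameterisation)."""
    cov = lambda h: sill * np.exp(-np.abs(h) / rng_len)
    n = len(xp)
    A = np.ones((n + 1, n + 1))
    A[-1, -1] = 0.0
    A[:n, :n] = cov(xp[:, None] - xp[None, :])
    z_est = np.empty(len(xg))
    for i, x in enumerate(xg):
        b = np.append(cov(xp - x), 1.0)   # RHS: covariances + unbiasedness constraint
        w = np.linalg.solve(A, b)
        z_est[i] = w[:n] @ zp             # kriged estimate
    return z_est

# The pilot points carry the adjustable log-K values during history matching;
# the estimation algorithm perturbs zp, and kriging spreads the change spatially.
xp = np.array([0.0, 4.0, 9.0, 15.0])
zp = np.array([-3.0, -2.5, -4.0, -3.2])
field = ordinary_kriging(xp, zp, np.linspace(0, 15, 31))
```

The trade-off discussed in the abstract then appears naturally: more pilot points capture finer heterogeneity but enlarge the Jacobian that GLMA must fill, while IES avoids the Jacobian altogether at the cost of an ensemble of runs.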

How to cite: Aissat, R., Pryet, A., Saltel, M., and Dupuy, A.: Estimation of distributed parameters at regional scale by history-matching of a multi-layered sedimentary aquifer , EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-10181, https://doi.org/10.5194/egusphere-egu2020-10181, 2020

D451 |
Philippe Ackerer and Frédérick Delay

Since the pioneering work of Emsellem and de Marsily (WRR, 1971), many inverse parameter estimation approaches in hydrogeology have been based on the minimization of an objective function using descent methods, which requires the computation of the gradient of the objective function. In many cases, the number of parameters to be estimated remains large despite parameterization, and the standard computation of the gradient components through sensitivity coefficients may require a lot of computer time. An alternative is the computation of adjoint variables, which requires a calculation similar to the forward problem, irrespective of the number of sought parameters.

The computation of the adjoint variable is usually embedded in the code used to compute the state variable. We discuss here an alternative that consists in (i) writing the partial differential equation for the adjoint variable, (ii) writing an independent code for the adjoint variable, and (iii) solving the adjoint problem on an independent mesh, different from the mesh used to compute the state variable, with coarser time and space discretization to speed up the computation of the adjoint variable. We present the methodology and discuss the use of coarser discretizations, since they can impact the accuracy of the computed gradients and lead to additional iterations to reach the objective function's minimum.
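The adjoint idea can be illustrated on a discrete linear model A(p)u = b with a single scalar parameter, where one adjoint solve (same cost as the forward solve) yields the exact gradient; the result is checked against a finite difference. The PDE setting and the mesh-coarsening strategy of the abstract are not reproduced in this sketch.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 20
A0 = np.eye(n) * 5.0                   # fixed part of the system matrix
A1 = rng.standard_normal((n, n)) * 0.1 # sensitivity of the matrix to parameter p
b = rng.standard_normal(n)
d = rng.standard_normal(n)             # "observations"

def J(p):
    """Objective function: misfit between state u(p) and observations d."""
    u = np.linalg.solve(A0 + p * A1, b)
    return 0.5 * np.sum((u - d) ** 2)

def grad_adjoint(p):
    """One adjoint solve gives dJ/dp, regardless of the number of parameters."""
    A = A0 + p * A1
    u = np.linalg.solve(A, b)           # forward problem
    lam = np.linalg.solve(A.T, u - d)   # adjoint problem (same cost as forward)
    return -lam @ (A1 @ u)              # dJ/dp = -lambda^T (dA/dp) u

p = 0.7
g_adj = grad_adjoint(p)
g_fd = (J(p + 1e-6) - J(p - 1e-6)) / 2e-6   # central finite difference check
```

With m parameters, sensitivity coefficients would need m extra forward solves, whereas the adjoint route still needs only the one transposed solve, which is what makes coarsening its mesh so attractive.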

How to cite: Ackerer, P. and Delay, F.: Using the adjoint state variable for parameter estimation by inverse methods with parsimony., EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-14129, https://doi.org/10.5194/egusphere-egu2020-14129, 2020

D452 |
Raoul Collenteur, Steffen Birk, Gernot Klammler, and Mark Bakker

Groundwater recharge remains a notoriously difficult flux to estimate, despite ongoing scientific efforts. In recent years, time series modeling using impulse response functions has gained popularity for simulating groundwater levels and is quickly becoming a common tool for hydrogeologists. Several approaches have been developed to estimate recharge from time series models for both linear and non-linear systems (e.g., [1], [2], and [3]). In this study, we introduce a novel approach to estimate groundwater recharge from observed groundwater levels in non-linear systems (i.e., where twice the precipitation does not necessarily lead to twice the recharge). We extend a time series model using impulse response functions with a non-linear unsaturated zone module that simulates recharge. The model parameters are estimated by fitting the simulated to the observed groundwater levels, with the groundwater recharge as an intermediate model result.
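The structure of such a model can be sketched as a threshold-type recharge module feeding an exponential impulse response. The actual non-linear unsaturated-zone module and the calibrated parameters differ; everything below is illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)
ndays = 365
precip = rng.exponential(2.0, ndays) * (rng.random(ndays) < 0.3)  # mm/d, wet ~30% of days
pet = 2.0 + 1.5 * np.sin(2 * np.pi * np.arange(ndays) / 365)      # mm/d, seasonal PET

# Toy non-linear recharge module: recharge only when precipitation exceeds
# PET scaled by a crop factor (a stand-in for the unsaturated-zone model).
recharge = np.maximum(precip - 1.0 * pet, 0.0)

# Exponential impulse response; head = base level + convolution with recharge.
t = np.arange(200)
theta = 0.01 * np.exp(-t / 30.0)                # gain and memory are illustrative
head = 250.0 + np.convolve(recharge, theta)[:ndays]
```

In calibration, the response-function and recharge-module parameters are adjusted until `head` matches the observed levels, and the fitted `recharge` series is read off as the estimate.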

The method is tested on a time series of groundwater levels observed in Southeastern Austria (Wagna), where lysimeter data of seepage to the groundwater is available for model validation. The simulated groundwater recharge suggests an event-based recharge behavior, with most recharge occurring shortly after larger precipitation events. This finding agrees with the behavior observed in the lysimeter data. The estimated recharge fluxes show a high correlation with the observed seepage on time scales from years to months or weeks, while daily recharge rates show larger errors. Advantages of the method include limited data requirements (only precipitation, potential evapotranspiration, and groundwater time series are required) and the possibility to correct for other factors causing groundwater level fluctuations (e.g., pumping, river levels). This makes it possible to apply the method in locations where little system knowledge (e.g., soil profiles) is available.

[1] Besbes, M. and De Marsily, G. (1984) From infiltration to recharge: use of a parametric transfer function, Journal of Hydrology.
[2] Peterson, T.J. and Fulton, S. (2019) Joint estimation of gross recharge, groundwater usage, and hydraulic properties within HydroSight, Groundwater.
[3] Obergfell, C., Bakker, M. and Maas, K. (2019) Estimation of average diffuse aquifer recharge using time series modeling of groundwater heads, Groundwater.

How to cite: Collenteur, R., Birk, S., Klammler, G., and Bakker, M.: Estimation of groundwater recharge from time series modeling of groundwater levels in non-linear systems, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-15104, https://doi.org/10.5194/egusphere-egu2020-15104, 2020

D453 |
Philippe Renard, Christoph Jäggli, Yasin Dagasan, Przemyslaw Juda, and Julien Straubhaar

One challenge in stochastic hydrogeological modeling is to solve the inverse problem when the parameter fields take a discrete set of values. This typically occurs when considering different rock types with a large contrast in parameter values. Situations of this kind are particularly hard because the usual techniques based on derivatives (sensitivity coefficients) or covariances are inefficient. In this presentation, we introduce the Posterior Population Expansion (PoPEx) method, an ensemble-based technique designed to identify categorical parameter fields in a Bayesian perspective. The method iteratively generates an ensemble of categorical fields using any geostatistical technique and evaluates their likelihood values. To illustrate the method, we employ a multiple-point statistics technique, but the approach is general. During the inversion process, the relation between observed state variables and parameter values is derived from the ensemble and used to constrain the generation of the next categorical fields. The method is shown to be more efficient than more classical Markov chain Monte Carlo approaches and to provide accurate uncertainty estimates on a set of examples. As the algorithm still requires computing the likelihood for a significant number of fields, we also explore how generative adversarial networks could be used to accelerate PoPEx by rapidly predicting the misfit.
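The likelihood-weighting idea at the core of ensemble importance sampling can be sketched as follows. This non-adaptive toy (independent facies per cell, a scalar forward response) omits PoPEx's adaptive proposal and its multiple-point prior.

```python
import numpy as np

rng = np.random.default_rng(7)
ncell, nens = 50, 2000

# Prior: each cell independently one of two facies (0: matrix, 1: channel).
fields = (rng.random((nens, ncell)) < 0.3).astype(int)

def forward(f):
    """Toy forward model: a scalar 'head drop' that grows with the number
    of low-permeability cells."""
    return np.sum(f == 0) * 0.1

obs, sigma = 3.6, 0.2
sim = np.array([forward(f) for f in fields])
logw = -0.5 * ((sim - obs) / sigma) ** 2        # Gaussian log-likelihood
w = np.exp(logw - logw.max())
w /= w.sum()                                    # normalised importance weights

p_channel = w @ fields                          # posterior facies probability map
```

The posterior probability map is a weighted average of categorical realisations, which is exactly the kind of quantity derivative-based inversion cannot deliver for discrete fields.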

How to cite: Renard, P., Jäggli, C., Dagasan, Y., Juda, P., and Straubhaar, J.: PoPEx - An adaptative importance sampler for the categorical inverse problem, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-10263, https://doi.org/10.5194/egusphere-egu2020-10263, 2020

D454 |
Mickaele Le Ravalec, Véronique Gervais, and Frédéric Roggero

Production forecasting is central to the oil and gas industry: it contributes to generating improvements in operations.

A key tool to tackle this problem is the building of reservoir models that describe the properties of underground hydrocarbon reservoirs. Clearly, the value of such models strongly depends on their ability to accurately predict the displacement of fluids within reservoirs. This is why it is essential that reservoir models reproduce at least the data already collected: data-consistent models are more reliable.

The data considered are split into two groups: static and dynamic data. Static data do not vary with time. They include, for instance, measurements on core samples extracted from wells, or logs used to describe electrofacies and petrophysical variations along wells. However, such direct measurements of geological and petrophysical properties are very sparse and sample only a small reservoir volume. They have to be supplemented by indirect measurements, mainly 3-D seismic. The second group comprises dynamic data, i.e., data that vary with time because they depend on fluid flows. They mainly comprise production data measured at wells, such as bottomhole pressures, oil production rates, gas-oil ratios, tracer concentrations, etc. Even so, we end up with only little information about the spatial distributions of facies, porosity or permeability within the targeted hydrocarbon reservoirs. These facies/petrophysical properties can be considered as realizations of random functions. They are very specific because of two essential features: they include a huge number of unknown values, and they have a spatial structure.

The purpose of reservoir modeling is to identify facies and petrophysical realizations that make it possible to numerically reproduce the dynamic data while still respecting the static ones. Different approaches can be envisioned.

A first possibility consists in randomly generating realizations, then simulating fluid flow for each of them to check whether they reproduce the required data. The process is repeated until a suitable set of facies/petrophysical realizations is identified. The second approach is quite close: the idea is still to screen the realization space, but without performing any fluid flow simulation to check the suitability of the realizations. This strongly depends on the definition of a meaningful criterion to characterize the dynamic behavior of the considered set of realizations without running flow simulations. We may also randomly generate a starting set of facies/petrophysical realizations and run an optimization process aiming to minimize an objective function by adjusting the realizations. A key issue is then how to simultaneously adjust so many parameters while preserving consistency with respect to the static data. This has motivated many research works over the last 20 years, resulting in the development of several parameterization techniques. One of the very first was the pilot point method introduced by de Marsily (1984). Since then, variants and other parameterization techniques have been proposed. We review some of them and focus on how useful they are depending on the problem to be faced.

How to cite: Le Ravalec, M., Gervais, V., and Roggero, F.: An overview of parameterization techniques for history-matching, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-21225, https://doi.org/10.5194/egusphere-egu2020-21225, 2020