HS1.3.1

EDI
Revisiting good modelling practices – where are we today?

Many papers have advised carefully considering the methods we choose for our modelling studies, as they can affect our modelling results and conclusions. However, there is no common and consistently updated rulebook on what good modelling practice is and how it has evolved since e.g. Klemeš (1986), Refsgaard & Henriksen (2004) or Jakeman et al. (2006). In recent years several papers have proposed useful practices such as benchmarking (e.g. Seibert et al., 2018), controlled model comparison (e.g. Clark et al., 2011), careful selection of calibration periods (e.g. Motavita et al., 2019) and methods (e.g. Fowler et al., 2018), or testing the impact of subjective modelling decisions along the modelling chain (Melsen et al., 2019). However, despite their well-justified existence, none of the proposed methods have become quite as common and indispensable as the split sample test (Klemeš, 1986).
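For readers unfamiliar with it, the split sample test is quickly illustrated. The following is a minimal sketch, assuming a toy linear-reservoir model and synthetic data; the model, the data, and all names are illustrative stand-ins rather than anything from the cited papers. The record is split in half, the model is calibrated on the first period, and its performance is then evaluated on the held-out second period.

```python
# Minimal split-sample test (in the spirit of Klemeš, 1986) on a toy
# linear-reservoir model. Everything here is an illustrative assumption.
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(42)
n = 2000
rain = rng.gamma(shape=0.3, scale=10.0, size=n)   # synthetic rainfall forcing

def simulate(k, rain):
    """Linear reservoir: outflow is a fraction k of current storage."""
    s, q = 0.0, np.empty_like(rain)
    for t, p in enumerate(rain):
        s += p
        q[t] = k * s
        s -= q[t]
    return q

# Synthetic "observations": true k = 0.2 plus multiplicative noise.
q_obs = simulate(0.2, rain) * rng.lognormal(0.0, 0.1, size=n)

def nse(sim, obs):
    """Nash-Sutcliffe efficiency."""
    return 1.0 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)

# Split sample: calibrate on the first half, evaluate on the second half.
half = n // 2
res = minimize_scalar(lambda k: -nse(simulate(k, rain[:half]), q_obs[:half]),
                      bounds=(0.01, 0.99), method="bounded")
k_cal = res.x
print(f"calibration NSE: {-res.fun:.3f}")
# Simulate the full record so storage carries over, then score the held-out half.
print(f"evaluation  NSE: {nse(simulate(k_cal, rain)[half:], q_obs[half:]):.3f}")
```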

This session hopes to provide a platform for a visible and ongoing discussion on what ought to be the current standard for an appropriate modelling protocol to acquire robust and reliable results considering uncertainty in all its facets. We aim to bring together, highlight and foster work that applies, develops, or evaluates procedures for a robust modelling workflow or that investigates good modelling practices. We invite research that aims to improve the scientific basis of the entire modelling chain and puts good modelling practice in focus again. This might include (but is not limited to) contributions on:

(1) Benchmarking model results
(2) Developing robust calibration and evaluation frameworks
(3) Going beyond common metrics in assessing model performance and realism
(4) Conducting controlled model comparison studies
(5) Developing modelling protocols
(6) Investigating subjectivity along the modelling chain
(7) Propagating uncertainty along the modelling chain
(8) Communicating model results and their uncertainty to end users
(9) Evaluating implications of model limitations and identifying priorities for future model development and data acquisition planning

Convener: Diana Spieler (ECS) | Co-conveners: Janneke Remmers (ECS), Keirnan Fowler (ECS), Joseph Guillaume, Lieke Melsen (ECS)
Presentations: Tue, 24 May, 08:30–10:00 (CEST) | Room 2.31


Chairpersons: Diana Spieler, Janneke Remmers, Lieke Melsen
08:30–08:35 | EGU22-5215 | Virtual presentation
An interactive geological basin model: supporting the fast-track assessment of large-scale subsurface potential in the context of the ecological transition
Claude Gout and Marie-Christine Cacas-Stentz

Deep subsurface dynamic models allow simulating the interaction of multiple physical processes at regional and geological scales. Over the past three decades, the oil and gas industry developed so-called Basin and Petroleum Systems Models to improve the prediction of hydrocarbon accumulations and reduce the risk of exploration well failure. By simulating the geological history of a sedimentary basin from its origin, these thermo-hydro-mechanical and chemical (THMC) models provide a balanced present-day distribution of static and dynamic properties across a huge volume of rock.


In recent years, one of these THMC simulators has been extended to more generic applications, such as assessing the geothermal potential of sedimentary basins, appraising large-scale aquifer systems for massive CO2 sequestration, or quantifying present-day methane seepage from shallow biogenic gas production.


At the basin scale, the data available to describe the subsurface are very diverse and scattered, and the representativeness of basin geological models is highly uncertain, especially if one expects quantitative results for connected pore volumes, temperatures, pressures, stresses or fluid compositions.

This scarcity of data requires geoscientists to describe alternative scenarios that are compatible with the observational data. Describing a 4D model (3D structure through geological time) of a sedimentary basin is a long and complex task, so creating and analysing multiple digital scenarios is almost impossible within a reasonable timeframe.


We have developed and proven the concept of an interactive basin model that allows simulating while interpreting, and hence comparing scenarios while interpreting. In the concept implementation, surface and subsurface data analysis, 3D scenario model building, simulation parameter setup, THMC simulation, results visualisation and analysis, and scenario comparison are all performed in a single "real-time" loop.

The concept also allows the incremental building of a geological basin model. One can start by building a coarse model of the full sedimentary basin that remains watertight and consistent throughout. Then, by visualising the simulated present-day temperature, pressure, stress and fluid chemistry fields and comparing them instantaneously with the available data, the model can be improved into a more complete and consistent representation. This interactive loop avoids the need for costly and complex inversion and allows geologists to quickly explore the consistency of their assumptions.
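As a conceptual sketch of this interpret-simulate-compare loop, the following assumes hypothetical stand-ins (run_thmc_simulation, misfit, a one-parameter scenario) in place of the actual simulator and data: the geologist, rather than an inversion algorithm, proposes each successive scenario and judges the fit against observations.

```python
# Conceptual sketch of the "simulate while interpreting" loop described
# above. run_thmc_simulation, misfit and the one-parameter scenario are
# hypothetical stand-ins for the THMC simulator and well data.
import numpy as np

DEPTH = np.linspace(0.0, 5000.0, 50)          # m, a single 1-D pseudo-well

def run_thmc_simulation(scenario):
    """Stand-in forward run: present-day temperature along the pseudo-well."""
    # Toy physics: surface temperature plus a heat-flow-dependent gradient.
    return 15.0 + scenario["heat_flow"] * DEPTH / 2000.0   # degC

def misfit(simulated, observed):
    """Root-mean-square mismatch against the well observations."""
    return float(np.sqrt(np.mean((simulated - observed) ** 2)))

observed = run_thmc_simulation({"heat_flow": 60.0})  # synthetic "well data"

# The geologist, not an inversion algorithm, proposes each new scenario;
# every iteration is one interpret-simulate-compare step of the loop.
scenario = {"heat_flow": 40.0}
for step in range(10):
    rmse = misfit(run_thmc_simulation(scenario), observed)
    print(f"step {step}: heat_flow={scenario['heat_flow']:.0f} mW/m2, "
          f"RMSE={rmse:.1f} degC")
    if rmse < 1.0:
        break                                  # scenario now fits the data
    scenario["heat_flow"] += 5.0               # interactive refinement
```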


Ultimately, this interactive modelling protocol, based on advanced multi-physics simulation tools, should become an essential tool for rapidly defining the basis for assessing the potential, the risks, and the balance between human activity and the nature of an often poorly documented deep underground.

It complements specific tools for data analysis or for uncertainty and risk assessment, such as specialised reservoir or aquifer simulators.

How to cite: Gout, C. and Cacas-Stentz, M.-C.: An interactive geological basin model: supporting the fast-track assessment of large-scale subsurface potential in the context of the ecological transition, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-5215, https://doi.org/10.5194/egusphere-egu22-5215, 2022.

08:35–08:37
08:37–08:42 | EGU22-13083 | Virtual presentation
High-quality probabilistic predictions for existing hydrological models with common objective functions
Mark Thyer, Jason Hunter, David McInerney, and Dmitri Kavetski

Probabilistic predictions describe the uncertainty in modelled streamflow, which is a critical input for many environmental modelling applications. The probabilistic predictions are typically produced by a residual error model working in tandem with a hydrological model that predicts the deterministic streamflow. However, many objective functions commonly used to calibrate the parameters of the hydrological model make (implicit) assumptions that do not match the properties (e.g. heteroscedasticity and skewness) of the errors. The consequence is often low-quality probabilistic predictions, which reduces the practical utility of probabilistic modelling. Our study has two aims:

1. Evaluate the impact of objective function inconsistency on the quality of probabilistic predictions;

2. Demonstrate how a simple enhancement to a residual error model can rectify the issues identified in Aim 1, and thereby improve probabilistic predictions in a wide range of scenarios.

Our findings show that the enhanced error model enables high-quality probabilistic predictions for a range of catchments and objective functions, without requiring any changes to the hydrological modelling or calibration process. This advance has practical benefits aimed at increasing the uptake of probabilistic predictions in real-world applications: the methods are applicable to existing, already-calibrated hydrological models, and are simple to implement, easy to use and fast. Finally, these methods are available as an open-source R-Shiny application and an R-package function.
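A minimal sketch of the kind of post-calibration residual error model the abstract describes may help: residuals are modelled in Box-Cox-transformed space, where the heteroscedasticity and skewness of raw streamflow errors are reduced, and then sampled to generate probabilistic replicates. The fixed transformation, the parameter values and all function names are illustrative assumptions, not the authors' exact method or their R implementation.

```python
# Generic residual-error-model sketch: Gaussian residuals in Box-Cox space
# turn a deterministic prediction into probabilistic replicates. All
# choices here (lambda, offset, names) are illustrative assumptions.
import numpy as np

def boxcox(q, lam=0.2):
    return (q ** lam - 1.0) / lam

def inv_boxcox(z, lam=0.2):
    # Clip at zero so large negative noise cannot produce negative flows.
    return np.maximum(lam * z + 1.0, 0.0) ** (1.0 / lam)

def probabilistic_replicates(q_sim, q_obs, n_reps=1000, lam=0.2, seed=0):
    """Fit Gaussian residuals in transformed space, then sample replicates."""
    rng = np.random.default_rng(seed)
    eta = boxcox(q_obs, lam) - boxcox(q_sim, lam)   # transformed residuals
    mu, sigma = eta.mean(), eta.std(ddof=1)
    noise = rng.normal(mu, sigma, size=(n_reps, q_sim.size))
    return inv_boxcox(boxcox(q_sim, lam) + noise, lam)

# Usage with any already-calibrated deterministic model output:
q_sim = np.array([1.2, 3.4, 10.0, 6.5, 2.1])    # deterministic prediction
q_obs = np.array([1.0, 4.0, 12.0, 5.8, 2.5])    # observations
reps = probabilistic_replicates(q_sim, q_obs)
lo, hi = np.percentile(reps, [5, 95], axis=0)   # 90% prediction limits
print(np.round(lo, 2), np.round(hi, 2))
```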

How to cite: Thyer, M., Hunter, J., McInerney, D., and Kavetski, D.: High-quality probabilistic predictions for existing hydrological models with common objective functions, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-13083, https://doi.org/10.5194/egusphere-egu22-13083, 2022.