In complex systems such as terrestrial ecosystems, uncertain information (whether in observation, measurement, interpretation or models) is the norm, and this impinges on most of the knowledge that earth scientists generate. It is important to quantify and account for uncertainty in our models and predictions, otherwise results can be misleading. This is particularly important when predictions are to be used in a decision-making process where the end user needs to be able to evaluate the risk involved properly.

Quantitative estimation of uncertainty is a difficult challenge that continually calls for the development of more refined tools. Many diverse methods have been developed, such as non-linear kriging in spatial prediction, stochastic simulation modelling and other error propagation approaches, and even methods based on expert elicitation, but many challenges remain. A second and often overlooked challenge is how to communicate uncertainty effectively to end users such as scientists, engineers, policy makers, regulators and the general public.
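As a minimal illustration of the error-propagation approaches mentioned above (a hedged sketch with an invented toy model, not any specific method from this session), Monte Carlo simulation draws inputs from their probability distributions, pushes them through the model, and reads uncertainty off the output ensemble:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical toy model (illustration only): a predicted quantity that
# depends non-linearly on two uncertain inputs.
def model(k, x):
    return x * np.exp(-k)

# Represent input uncertainty as probability distributions and propagate
# it through the model by Monte Carlo simulation.
k = rng.normal(loc=0.5, scale=0.05, size=10_000)    # uncertain rate
x = rng.normal(loc=100.0, scale=10.0, size=10_000)  # uncertain driver
y = model(k, x)

# The spread of the output ensemble quantifies the propagated uncertainty.
mean = y.mean()
lo, hi = np.percentile(y, [2.5, 97.5])
print(f"prediction: {mean:.1f}, 95% interval: [{lo:.1f}, {hi:.1f}]")
```

The interval, not the single mean value, is what a decision maker needs in order to evaluate risk.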

In this session, we will examine the state of the art of both uncertainty quantification and communication in earth systems sciences. We shall give attention to three components of the problem: 1) new methods and applications of uncertainty quantification, 2) how to use such information for risk assessment, and 3) how to communicate it to the end user. Dealing with uncertainty across all three layers is a truly multidisciplinary task, requiring input from diverse disciplines (such as earth science, statistics, economics and psychology) to ensure that it is successful. The main aim of this session is to connect the three components of the problem, offering multiple perspectives on related methodologies, connecting scientists from different fields dealing with uncertainty and fostering the development of multidisciplinary approaches.

Co-organized by EOS4
Convener: Alice Milne | Co-conveners: Kirsty Hassall, Gerard Heuvelink, Lorenzo Menichetti, Nadezda Vasilyeva
Attendance Wed, 06 May, 14:00–15:45 (CEST)

Chat time: Wednesday, 6 May 2020, 14:00–15:45

Chairperson: Alice Milne
D2138 | Highlight
Gabriella Zsebeházi and Beatrix Bán

There is a growing need to develop climate services at both national and international level, to bridge the gap between providers and end users of climate information. Several national climate services aim to serve local users' needs by creating web portals. Thanks to this trend, the amount of available climate data (both measured and modelled) is growing rapidly, yet the web portals often provide no personal contact between users and climate scientists. It is therefore important to make this service usable and informative and to train potential users in the nature, strengths and limits of climate data.

Within the framework of a nationally funded project (KlimAdat), the regional climate model projections of the Hungarian Meteorological Service are being extended and a representative climate database is being developed. Regular workshops are organised, where we obtain hands-on information about user requirements and in exchange give training on climate modelling. One of the most discussed issues during the workshops is how to handle the uncertainty of climate projections in climate change adaptation studies. Future changes are quantified in probabilistic form using the ensemble technique, i.e. several climate model simulations prepared with different global and regional climate models and anthropogenic scenarios are evaluated simultaneously.
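The probabilistic, ensemble-based evaluation described above can be sketched as follows (the member values are invented for illustration; only the summary statistics mirror the approach):

```python
import numpy as np

# Illustrative ensemble of projected temperature changes (degC) from
# hypothetical climate model simulations -- the values are made up,
# only the probabilistic treatment of the ensemble is the point.
projections = np.array([1.8, 2.1, 2.4, 1.9, 2.7, 2.2, 3.0, 2.5])

# Probabilistic summary: median and a central 80% range across members,
# rather than a single "best" model value.
median = np.median(projections)
p10, p90 = np.percentile(projections, [10, 90])
print(f"median change: {median:.1f} degC, 10-90% range: {p10:.2f}-{p90:.2f}")
```

Communicating the range alongside the median is what lets adaptation planners see the signal together with its uncertainty.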

To help users orient themselves among the proliferating climate projections, a user guide has been prepared. Topics include how to select model simulations, how to take model validation results into account, and the difference between signal and noise. The guideline is based on 24 simulations of the 12-km-resolution EURO-CORDEX regional climate models, driven by the RCP4.5 and RCP8.5 scenarios. Two target groups are distinguished by the required level of post-processing of climate data: 1) climate impact modellers, who need large amounts of raw or bias-corrected data to drive their own impact models; and 2) decision makers and planners, who need heavily processed but lightweight data. The purpose of our guideline is to provide insight into the customised methodologies used at the Hungarian Meteorological Service for fulfilling users' needs.

How to cite: Zsebeházi, G. and Bán, B.: Supporting users to implement uncertainty of climate change information in adaptation studies, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-18769, https://doi.org/10.5194/egusphere-egu2020-18769, 2020

D2139 |
Maria Moreno de Castro

The presence of automated decision making continuously increases in today's society. Algorithms based on machine and deep learning decide how much we pay for insurance, translate our thoughts to speech, and shape our consumption of goods (via e-marketing) and knowledge (via search engines). Machine and deep learning models are ubiquitous in science too; in particular, many promising examples are being developed to prove their feasibility for earth science applications, like finding temporal trends or spatial patterns in data or improving parameterization schemes for climate simulations.

However, most machine and deep learning applications aim to optimise performance metrics (for instance, accuracy, the fraction of times the model prediction was right), which are rarely good indicators of trust (i.e., why were these predictions right?). In fact, with the increase of data volume and model complexity, machine learning and deep learning predictions can be very accurate but also prone to rely on spurious correlations, encode and magnify bias, and draw conclusions that do not incorporate the underlying dynamics governing the system. Because of that, the uncertainty of the predictions and our confidence in the model are difficult to estimate, and the relation between inputs and outputs becomes hard to interpret.

Since it is challenging to shift a community from "black" to "glass" boxes, it is more useful to implement Explainable Artificial Intelligence (XAI) techniques right at the beginning of machine learning and deep learning adoption than to try to fix fundamental problems later. The good news is that most popular XAI techniques are essentially sensitivity analyses: they consist of a systematic perturbation of some model component in order to observe how this affects the model predictions. The techniques comprise random sampling, Monte Carlo simulations, and ensemble runs, which are common methods in geosciences. Moreover, many XAI techniques are reusable because they are model-agnostic and are applied after the model has been fitted. In addition, interpretability provides robust arguments when communicating machine and deep learning predictions to scientists and decision-makers.

In order to assist not only practitioners but also end users in the evaluation of machine and deep learning results, we will explain the intuition behind some popular techniques of XAI and of aleatory and epistemic Uncertainty Quantification: (1) Permutation Importance and Gaussian processes on the inputs (i.e., perturbation of the model inputs); (2) Monte-Carlo Dropout, Deep Ensembles, Quantile Regression, and Gaussian processes on the weights (i.e., perturbation of the model architecture); (3) Conformal Predictors (useful to estimate the confidence interval on the outputs); and (4) Layerwise Relevance Propagation (LRP), Shapley values, and Local Interpretable Model-Agnostic Explanations (LIME) (designed to visualize how each feature in the data affected a particular prediction). We will also introduce some best practices, like the detection of anomalies in the training data before training, the implementation of fallbacks when the prediction is not reliable, and physics-guided learning that includes constraints in the loss function to avoid physical inconsistencies, like the violation of conservation laws.
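As an illustration of technique (1), permutation importance can be implemented from scratch: shuffle one input at a time and record how much the model's error grows. This is a hedged sketch on synthetic data, with a plain least-squares fit standing in for any learned model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: the target depends on feature 0 only; feature 1 is noise.
X = rng.normal(size=(300, 2))
y = 3.0 * X[:, 0] + rng.normal(scale=0.1, size=300)

# A simple fitted model (ordinary least squares) stands in here for any
# machine or deep learning model -- the technique is model-agnostic.
coef, *_ = np.linalg.lstsq(X, y, rcond=None)

def predict(X_):
    return X_ @ coef

base_mse = np.mean((y - predict(X)) ** 2)

# Permutation importance: shuffle one input at a time and measure how
# much the error grows -- a systematic perturbation of the model inputs.
importance = []
for j in range(X.shape[1]):
    Xp = X.copy()
    Xp[:, j] = rng.permutation(Xp[:, j])
    importance.append(np.mean((y - predict(Xp)) ** 2) - base_mse)

print(importance)  # feature 0 should dominate
```

A feature whose permutation barely changes the error contributed little to the prediction, which is exactly the kind of interpretability argument the abstract describes.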

How to cite: Moreno de Castro, M.: Uncertainty Quantification and Explainable Artificial Intelligence, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-21281, https://doi.org/10.5194/egusphere-egu2020-21281, 2020

D2140 |
Kirsty Hassall, Joanna Zawadzka, Alice Milne, Gordon Dailey, Jim Harris, Ron Corstanje, and Andrew Whitmore

Soil quality and health (SQH) are terms used extensively to characterise soils. However, the exact definitions of quality and health are often qualitative, with differing meanings to different stakeholders. Collecting and combining these differing viewpoints is a non-trivial task. In this work, we discuss how we have used the Bayes Net framework to define a hierarchical structure that enables a subjective concept such as soil quality and health to be quantified from multiple sources of information, including diverse sources of expert knowledge, and how we link this through to national databases.

Information within a Bayes Net is encapsulated in a set of conditional probability tables that describe the conditional dependencies of all variables of interest. Humans are notoriously poor at estimating such probabilities, which is often a major limitation when a Bayes Net relies on experts from differing disciplines and stakeholders from disparate application areas to quantify their beliefs through these conditional probability tables. Here, we demonstrate an elicitation web app that mitigates some of the difficulties associated with quantifying subjective opinion. Moreover, we show how an inference network of known associations aids the extraction of information from increasingly subjective sources within the hierarchical framework.

How to cite: Hassall, K., Zawadzka, J., Milne, A., Dailey, G., Harris, J., Corstanje, R., and Whitmore, A.: Soil Quality and Health – can it be quantified?, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-6902, https://doi.org/10.5194/egusphere-egu2020-6902, 2020

D2141 |
Gábor Szatmári and László Pásztor

Digital soil mapping (DSM) aims to provide spatial soil information for a wide range of studies (e.g. agro-environmental management, nature conservation, rural development, water and food security). For this purpose, advanced statistical methods are used to infer the spatial variation of soil. There is now ample evidence that researchers and stakeholders are interested not just in maps of soil properties, functions and/or services but in their uncertainties as well; this is indispensable to support the decision-making process. Various uncertainty quantification methods are in use in DSM, but only a few studies have addressed the issue of comparing them. In this study, we compared the suitability of several commonly applied DSM methods to quantify uncertainty, with regard to a survey of soil organic carbon (SOC) stock in Hungary. To fairly represent the wide range of DSM methods, the following were selected: universal kriging (UK), sequential Gaussian simulation (SGS), random forest plus kriging (RFK) and quantile regression forest (QRF). For RFK, two uncertainty quantification methods were adopted, based on kriging variance (RFK-1) and on bootstrapping (RFK-2). We used a control dataset of 200 independent SOC stock observations to validate not just the spatial predictions but also their uncertainty quantifications. To validate the uncertainty quantifications we applied accuracy plots (a.k.a. prediction interval coverage probability plots) and a modified version of the G statistic. According to our results, QRF and SGS provided the best quantifications of uncertainty; UK and RFK-2 overestimated, whereas RFK-1 underestimated, the uncertainty. We conclude that uncertainty quantifications need to be validated before they are used for decision making, and that special attention should be paid to the assumptions made in uncertainty quantification.
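The accuracy-plot validation mentioned above can be sketched as follows (synthetic data, assuming Gaussian predictive distributions; the study's actual models and data are not reproduced): for each nominal probability p, count how often the symmetric p-interval around the prediction contains the observed value.

```python
import numpy as np
from statistics import NormalDist

rng = np.random.default_rng(1)

# Synthetic validation set: observed values plus a predictive mean and
# standard deviation from some hypothetical mapping method.
truth = rng.normal(size=200)
pred_mean = truth + rng.normal(scale=0.5, size=200)  # imperfect predictions
pred_sd = np.full(200, 0.5)                          # stated uncertainty

# Accuracy plot: for nominal probabilities p, count how often the
# symmetric p-interval around the prediction contains the truth.
nominal = np.linspace(0.05, 0.95, 19)
coverage = []
for p in nominal:
    z = NormalDist().inv_cdf(0.5 + p / 2)  # half-width in sd units
    inside = np.abs(truth - pred_mean) <= z * pred_sd
    coverage.append(inside.mean())

# Well-calibrated uncertainty tracks the 1:1 line; coverage below the
# nominal level signals underestimated uncertainty, above it overestimated.
print(np.round(coverage, 2))
```

Plotting coverage against the nominal probabilities gives the accuracy plot itself; deviations from the 1:1 line are how over- and underestimation of uncertainty are diagnosed.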


Acknowledgment: Our research was supported by the Hungarian National Research, Development and Innovation Office (NRDI; Grant No: KH126725) and the Premium Postdoctoral Scholarship of the Hungarian Academy of Sciences (PREMIUM-2019-390) (Gábor Szatmári).

How to cite: Szatmári, G. and Pásztor, L.: Comparison of uncertainty quantification methods on the example of soil organic carbon stock mapping in Hungary, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-7095, https://doi.org/10.5194/egusphere-egu2020-7095, 2020

D2142 |
Timo Breure, Alice Milne, Richard Webster, Stephan M. Haefele, Jacqueline A. Hannam, and Ronald Corstanje

Spectral measurements are increasingly used to predict soil properties. Libraries of soil spectra are built, and statistical models are used to relate the spectra to wet chemistry measurements. These relationships can then be used to predict the properties of new samples. An important consideration is the uncertainty associated with the prediction. Often, calibration is done at field level to reduce this error. This is time- and resource-intensive, however, and there is scope to use existing spectral libraries. Our aim was to quantify the uncertainty in the prediction of soil properties from spectral measurements using a local library and compare this to predictions made using a regional library.

To investigate this, we considered two case-study fields in the Cambridgeshire fens (UK) that were planted with lettuce. These fields contain complex soils, a combination of peat with underlying alluvial and marine silts that became elevated features in the landscape through peat oxidation and shrinkage. These elevated features are captured by a 2 m x 2 m LiDAR raster used in our study (UK Environment Agency). We took a total of 467 soil samples across the fields and made spectral measurements (near- and mid-infrared). A subset of the soil samples underwent wet chemistry analysis for pH, available P and K, total N and soil particle-size fraction. For the regional library we used the National Soil Inventory spectral database and its respective wet chemistry reference values.

We used partial least squares to regress the soil spectra of the local and regional spectral libraries against the wet chemistry reference values. These two models were then used to predict the soil properties for both fields. We then mapped the variation in each soil property and the associated uncertainty by kriging. The variation in some of the soil variables was clearly affected by elevation, and there were signs of spatial trend, so we used universal kriging to map the soil properties. To reduce bias, we used residual maximum likelihood (REML) estimation of the variogram, fitting a linear mixed model with the trend accounted for as fixed effects. We compared the resulting maps to assess how the calibration regression from local and regional spectral libraries translates into uncertainty of the kriged maps for five soil properties within each field.


How to cite: Breure, T., Milne, A., Webster, R., Haefele, S. M., Hannam, J. A., and Corstanje, R.: Quantifying the uncertainty in the prediction of soil properties from soil-spectra using local and regional spectral libraries, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-18713, https://doi.org/10.5194/egusphere-egu2020-18713, 2020

D2143 |
Philipp Baumann, Anatol Helfenstein, Andreas Gubler, Reto Meuli, Armin Keller, Juhwan Lee, Raphael A. Viscarra Rossel, and Johan Six

Soil data at different scales are needed for assessment and monitoring of soil functions. Soil diffuse reflectance spectroscopy using visible–near-infrared and mid-infrared energies can be used to estimate a range of soil properties rapidly and inexpensively. However, spectroscopic modelling is challenging because of the large diversity of soils and their complex composition. We developed a national Soil Spectral Library (SSL) (n = 4339) using samples from (i) the Swiss Soil Monitoring Network (NABO; 7 sampling campaigns at 71 agricultural locations since 1985, n = 592) and (ii) the National Biodiversity Monitoring (BDM) Program (n = 4295, 1094 locations across a 5x5 km grid). The SSL will provide spectroscopic models for the estimation of functional soil properties at different scales (e.g. total carbon (C) and nitrogen, organic C, texture, pH and cation exchange capacity). We used a rule-based algorithm, Cubist, for the modelling. The models were tuned across full combinations of {5, 10, 20, 50, 100} committees and {2, 5, 7, 9} neighbors, using 5 times repeated 10-fold cross-validation grouped by location. Further, transfer learning with RS-LOCAL tuning was performed for each of the 71 monitoring sites separately by a hold-out approach, in order to select optimal instances from the remaining SSL. Total soil C in the reference data ranged from 0.1% to 58.3% C, and the best Cubist model had a cross-validated RMSE of 0.82% C. The RS-LOCAL approach (RMSEmean = 0.14%) was on average 2.5 times more accurate for the estimation of C over time at each of the 71 NABO sites than the general Cubist approach. Our results suggest that data-driven selection of SSL instances targeted to closely related soils produces less biased estimation of soil properties over time at smaller geographic extents. The general Cubist calibration models are useful when reference analyses in a new study area are scarce.
In conclusion, the Swiss SSL models can be used to cost-efficiently estimate a range of soil properties for diverse applications and purposes in Switzerland.

How to cite: Baumann, P., Helfenstein, A., Gubler, A., Meuli, R., Keller, A., Lee, J., A. Viscarra Rossel, R., and Six, J.: Development of a Swiss National Soil Spectral Model Library using data-driven modeling, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-22569, https://doi.org/10.5194/egusphere-egu2020-22569, 2020

D2144 |
Nicolas P.A. Saby, Thomas Opitz, Bifeng Hu, Blandine Lemercier, and Hocine Bourennane

The assumption of spatial and temporal stationarity does not hold for many ecological and environmental processes. This is particularly true of many soil processes, such as carbon sequestration, which are often driven by factors such as biological dynamics, climate change and anthropogenic influences. To better understand and predict such phenomena, we develop a Bayesian inference framework that combines the integrated nested Laplace approximation (INLA) with the stochastic partial differential equation (SPDE) approach. We focus on modelling complex temporal trends varying through space with an accurate assessment of uncertainties, and on spatio-temporal mapping of processes that are only partially observed.

We model observed data through a latent (i.e., unobserved) smooth process whose additive components are endowed with Gaussian process priors. We use the SPDE approach to implement flexible sparse-matrix approximations of the Matérn covariance for spatial fields. The separate specification of the spatially varying linear trend allows us to conduct component-specific statistical inferences (range and variance estimates, standard errors, confidence bounds), and to provide maps to stakeholders for time-invariant spatial patterns, spatial patterns in slopes of time trends, and the associated uncertainties. For observed data following a Gaussian distribution, we add independent measurement errors, but more general response distributions of the data can be implemented. We also include in our model covariate information on parent material, climate and seasonality.

The INLA method and its implementation in the R-INLA library provide a rich toolbox for statistical space-time modelling while sidestepping the convergence problems that typically arise with simulation-based Markov chain Monte Carlo techniques for large and complex hierarchical models such as ours. Uncertainties in model parameters and in pointwise spatio-temporal predictions are naturally captured in the posterior distributions computed through INLA using appropriate approximation techniques, and we can communicate them through maps of various properties. Moreover, INLA also allows direct simulation from the estimated posterior model, so that we can conduct statistical inferences on more complex functionals of the multivariate predictive distributions, by analogy with MCMC frameworks.

Soil organic carbon (SOC) is a major compartment of the global carbon cycle, and small variations in its level can strongly impact atmospheric CO2 concentrations. In the context of global climate change, it is important to be able to quantify and explain the spatial and temporal variability of SOC in order to forecast future changes. In this work, we used this approach to study possible trends in space and time of the soil carbon stock of three agricultural fields in France. Fitted models reveal significant temporal trends with strong spatial heterogeneity. The Matérn model and SPDE approach provide a flexible framework with respect to field design.

How to cite: P.A. Saby, N., Opitz, T., Hu, B., Lemercier, B., and Bourennane, H.: Bayesian uncertainty quantification of spatio-temporal trends in soil organic carbon using INLA and SPDE, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-9154, https://doi.org/10.5194/egusphere-egu2020-9154, 2020

D2145 |
Lorenzo Menichetti, Göran Ågren, Pierre Barré, Fernando Moyano, and Thomas Kätterer

The conventional soil organic matter (SOM) decay paradigm considers SOM quality the dominant limitation on decay, and it is modelled with simple first-order decay kinetics. This view and modelling approach is criticized for being too simplistic and unreliable for predictive purposes. It is still under debate whether first-order models can correctly capture the variability in temporal SOM decay observed between different environments. The hypothesis needs to be tested statistically, but this implies the use of a dynamic model with multiple degrees of freedom to describe the observations. Since we want to test the general validity of the SOC decay theory, the test must also include multiple sites, which raises the problem of how to describe the unavoidable local variability. This defines a multivariate space in which the hypothesis must be tested, which, together with the known problem of equifinality "by design" in biogeochemical models, generates difficulties.
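For readers unfamiliar with the paradigm under test, first-order kinetics means each carbon pool decays in proportion to its own size, C' = -kC. A hedged two-pool sketch follows (the rates and stocks are invented for illustration; this is not the Q model itself):

```python
import numpy as np

# Two SOC pools of different quality, each decaying with first-order
# kinetics C' = -k*C, solved analytically as C(t) = C0 * exp(-k*t).
k = np.array([0.8, 0.02])        # decay rates, 1/yr (labile, stable)
C0 = np.array([1.0, 2.0])        # initial stocks, kg C / m^2
t = np.linspace(0, 50, 101)      # years of bare fallow

# Each pool decays exponentially; total SOC is their sum.
C = C0 * np.exp(-np.outer(t, k))  # shape (time, pool)
total = C.sum(axis=1)

print(f"SOC after 50 yr: {total[-1]:.2f} kg C/m^2")
```

The labile pool vanishes within a few years while the stable pool dominates the long tail, which is why multi-decade bare-fallow data are so informative for this hypothesis.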

To address this issue, we calibrated a first-order model (Q) on six long-term bare-fallow field experiments across Europe within a Bayesian framework, assuming some general and some local parameters. Following conventional SOM decay theory, we assumed that parameters directly describing SOC decay (the rate of SOM quality change and decomposer metabolism) are thermodynamically constrained and therefore valid for all sites. Initial litter input quality and edaphic interactions (both local by definition) and microbial efficiency (possibly affected by nutrient stoichiometry) were instead assumed to be site-specific. Initial litter input quality explained most of the observed variability in kinetics, and the model predicted convergence toward a common kinetics over time, while site-specific variables played no detectable role. All these characteristics could be represented with posterior probability distributions, and their comparison provided the hypothesis testing.

According to our analysis, the decay of decades-old SOM seemed mostly influenced by OM chemistry and was well described by first-order kinetics with a single set of general kinetic parameters.

How to cite: Menichetti, L., Ågren, G., Barré, P., Moyano, F., and Kätterer, T.: Testing the first-order SOC decay hypothesis over multiple sites through Bayesian uncertainty representation, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-14373, https://doi.org/10.5194/egusphere-egu2020-14373, 2020

D2146 |
Cuijuan Liao, Yizhao Chen, Yuanyuan Huang, Xingjie Lu, Xiaomeng Huang, Yishuang Liang, and Yiqi Luo

As the largest carbon reservoir in the biosphere, soil organic carbon (SOC) has been extensively studied. However, the large uncertainty in modelling SOC impedes accurate prediction of global carbon dynamics in response to climate change. Evaluating and tracing the sources of this large uncertainty in Earth system model predictions of SOC dynamics is thus urgently needed to improve our understanding and predictive capability. Although great efforts have been made to predict land C storage using multiple models, disentangling the sources of uncertainty among models is still extremely difficult. To take on this challenge, we developed a Matrix-based ensemble Model Inter-comparison Platform (MeMIP), an integrated platform to quantify the various sources of uncertainty under a unified framework. MeMIP is embedded in a new community-based ESM, the Community Integrated Earth System Model (CIESM), and implemented on the supercomputing cluster in Wuxi, China. Within MeMIP, multiple SOC decomposition schemes from different land models (e.g. CLM-CENTURY, CLM-BGC, LPJ-GUESS, JULES and CABLE) have been reconstructed in a unified matrix model format. With this unified format, inter-model differences can be quantitatively attributed to their sources using traceability analysis. In this study, we analyzed how SOC decomposition schemes and the vertically resolved SOC exchange structure (VR structure) influence SOC prediction within the three-dimensional parameter output space (NPP, residence time and carbon storage potential). The results indicate that models with the VR structure yield significantly higher SOC predictions and introduce more uncertainty than single-layer models, mainly due to increased residence time, which is also very sensitive to future warming. The identified major uncertain components are targets for improvement via data assimilation.
Overall, MeMIP provides a modeling platform that not only unifies all land carbon cycle models in the matrix form but also offers traceability analysis to identify sources of uncertainty and data assimilation to constrain multiple model ensemble prediction.
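The unified matrix form that such platforms build on is commonly written dX/dt = Bu - AKX, with carbon input u, allocation vector B, diagonal decay-rate matrix K and transfer matrix A; residence times then follow directly from the matrices, which is what makes traceability analysis possible. A hedged three-pool sketch with invented coefficients (not any specific model in MeMIP):

```python
import numpy as np

# Matrix carbon-cycle model dX/dt = B*u - A*K*X (illustrative numbers).
u = 0.5                                  # carbon input (NPP), kg C/m^2/yr
B = np.array([1.0, 0.0, 0.0])            # all input enters pool 1
K = np.diag([0.5, 0.1, 0.01])            # pool decay rates, 1/yr
A = np.array([[ 1.0,  0.0, 0.0],
              [-0.3,  1.0, 0.0],         # 30% of pool-1 loss -> pool 2
              [ 0.0, -0.2, 1.0]])        # 20% of pool-2 loss -> pool 3

# Traceability: at steady state A*K*X = B*u, so pool residence times are
# tau = (A*K)^-1 * B and storage is simply residence time times input.
tau = np.linalg.solve(A @ K, B)          # pool residence times, yr
X_ss = tau * u                           # steady-state pool sizes
print(f"ecosystem residence time: {tau.sum():.1f} yr, SOC: {X_ss.sum():.2f}")
```

Because every model is reduced to the same (B, A, K) building blocks, inter-model differences in predicted storage can be attributed to differences in input, residence time or both.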

How to cite: Liao, C., Chen, Y., Huang, Y., Lu, X., Huang, X., Liang, Y., and Luo, Y.: A unified diagnostic platform to quantify the source of uncertainty in modelling global SOC dynamics, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-2833, https://doi.org/10.5194/egusphere-egu2020-2833, 2020

D2147 |
Nadezda Vasilyeva, Artem Vladimirov, and Taras Vasiliev

Our study addresses the source of uncertainty in soil organic carbon (SOC) models that comes from the model structure. To this end, we developed a family of mathematical models of SOC dynamics with switchable biological and physical mechanisms. The studied mechanisms include microbial activity with constant or dynamic carbon use efficiency (CUE) and constant or dynamic microbial turnover rate; the priming effect (decay of the stable SOC pool in the presence of the labile SOC pool); temperature and moisture dependencies of SOC decomposition rates; and dynamic adsorption strength and occlusion. The model SOC cycle includes measurable C pools in soil size and density fractions, each comprising two estimated theoretical C pools (labile and stable, the biochemical C cycle). Reaction rates of the biochemical cycle are modified according to its physical state: decay accelerates with size, accelerates with the amount of adsorbed C (density: heavy to light), and decelerates with soil microaggregation (occluded state). The model family was tested on detailed C and 13C dynamics data from a long-term bare-fallow chronosequence.

Analysis of the SOC model family with different combinations of mechanisms showed that the best description of SOC dynamics in physical fractions (as judged by BIC) was obtained with microbially explicit models only when they included a feedback via the dynamics of microbial turnover and CUE. First, we estimated the uncertainty of all mechanism-specific parameters for every model in the family. We then calculated density distributions for the parameters characterizing the functional properties and stability of soil components (such as activation energy, adsorption capacity, CUE and the 13C distillation coefficient) for the model family, weighted by model likelihoods. These parameter values were then compared with common experimental values.

We discuss the use of these results to estimate the relevance of the observed parameter and structural uncertainties for global SOC projections obtained with different model structures.

How to cite: Vasilyeva, N., Vladimirov, A., and Vasiliev, T.: Model structure uncertainty of SOC dynamics studied in a single modeling framework, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-12514, https://doi.org/10.5194/egusphere-egu2020-12514, 2020

D2148 |
Artem Vladimirov, Taras Vasilyev, and Nadezda Vasilyeva

In this study we apply a family of mathematical models of soil organic carbon (SOC) dynamics to estimate the effect of SOC model structural uncertainty on global-scale C projections. The model family features switchable biological and physical mechanisms (such as explicit microbes, dynamic CUE and turnover, the priming effect, dynamic adsorption strength and physical occlusion) in a single modelling framework, where mechanisms can be turned on and off without affecting model parameters that are not involved in a given mechanism. Fitting the model family to experimental chronosequence data provided uncertainty ranges for mechanism-specific parameters and likelihoods for the individual models.

Selected models were run with litter fall, soil surface temperature and moisture from an Earth System Model (ESM) simulation as input, while model parameters were randomly sampled according to their uncertainties. The variance of the resulting model trajectories over a given time frame was taken as a lower estimate of model prediction uncertainty. Different models in the family were compared by their prediction uncertainty in addition to their likelihoods, to obtain a final estimate of the suitability of a given model for ESM use.

How to cite: Vladimirov, A., Vasilyev, T., and Vasilyeva, N.: Effect of soil C model structural uncertainty on global projections, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-12481, https://doi.org/10.5194/egusphere-egu2020-12481, 2020

D2149 |
Hans Jørgen Henriksen, Ernesto Pasten-Zapata, Peter Berg, Rafael Pimentel, Guillaume Thirel, Andrea Lira-Loarca, and Christiana Photiadou

As part of the ERA-NET Cofund for Climate Services from JPI-Climate, Expert Elicitation (EE) has been investigated as a tool for uncertainty reduction in European case studies of the research project AQUACLEW. Results from the elicitation can be compared with quantitative approaches to determine whether we have the knowledge and skills to differentiate well-performing models within an ensemble of models. EE could thus be a method to refine the climate-impact production chain in cases where a quantitative validation of the ensemble is not feasible.
To implement the EE in selected AQUACLEW case studies, we have developed a framework for the procedure. This protocol is used as training material by the experts invited to a one-day workshop. The training material opens with an introduction providing background information about the project, including a short description of the five case studies involved in the elicitation. A subset of the EURO-CORDEX EUR-11 ensemble of climate models, based on three General Circulation Models and four Regional Climate Models, is then described. Finally, the hydrological models used in three of the five case studies are described, along with results on their skill in simulating observations at the selected study sites.
As an example, the Danish case study focuses on agricultural production in central Denmark. Climate change there is expected to affect soil moisture and wetness conditions during winter and spring, when more precipitation is foreseen, and dryness during summer and early autumn, when less precipitation is expected. Greater wetness and higher groundwater levels during winter and spring will adversely affect field work such as sowing, as well as crop growth on waterlogged fields, leading to a need for increased field drainage. Drier summers will adversely affect crop yield and lead to a need for increased irrigation. Hence both flooding and drought have been examined, together with the resulting effects on root-zone moisture content, groundwater level and river discharge. Focus is given to the uncertainty of projections of future conditions, which is a function of the emission scenario, the choice of climate model and the choice of agro-hydrological model.
The presentation will focus on the training material, which consists of structured, condensed text and illustrations that are comparable across case studies and selected modelling approaches. Results of the EE workshop held in March 2020 will be discussed, including lessons learned and the viability of the EE tool.

How to cite: Henriksen, H. J., Pasten-Zapata, E., Berg, P., Pimentel, R., Thirel, G., Lira-Loarca, A., and Photiadou, C.: Expert elicitation as tool for climate and hydrological model uncertainty reduction, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-13152, https://doi.org/10.5194/egusphere-egu2020-13152, 2020

D2150 |
Billy Andrews, Jennifer Roberts, Zoe Shipton, Gareth Johnson, Sabina Bigi, and M.Chiara Tartarello

The characterisation of natural fracture networks using outcrop analogues is important in understanding subsurface fluid flow and rock mass characteristics in fractured lithologies. It is well known from decision sciences that subjective bias can significantly impact the way data is gathered and interpreted, introducing scientific uncertainty.

This study investigates the scale and nature of subjective bias in fracture data collected by geoscientists using four commonly used approaches (linear scanlines, circular scanlines, topology sampling and window sampling), both in the field and in workshops using field photographs.

We observe considerable variability between participants' interpretations of the same scanline, regardless of the participants' level of geological experience. Geologists appear either to focus on detail or to prioritise gathering larger volumes of data, personal traits that affect the recorded fracture network attributes. As a result, the fracture statistics derived from field data can vary considerably for the same scanline, depending on which geologist collected the data. Additionally, the personal bias of the geologist collecting the data affects the scanline size (minimum length of a linear scanline, radius of a circular scanline or area of a window sample) needed to collect a statistically representative amount of data.
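To illustrate how interpreter variability feeds through to fracture statistics, the sketch below computes a simple P10 intensity (fractures per metre of scanline) for several participants recording the same linear scanline; the counts, participant labels and scanline length are invented for illustration and are not data from the study:

```python
import statistics

# Hypothetical fracture counts recorded by five participants
# along the SAME 20 m linear scanline (illustrative numbers only)
scanline_length_m = 20.0
counts = {"A": 34, "B": 52, "C": 41, "D": 29, "E": 60}

# P10 fracture intensity (fractures per metre) per participant
p10 = {who: n / scanline_length_m for who, n in counts.items()}

mean_p10 = statistics.mean(p10.values())
# Coefficient of variation: spread between interpreters, relative to the mean
cv = statistics.stdev(p10.values()) / mean_p10

print(f"P10 range: {min(p10.values()):.2f}-{max(p10.values()):.2f} per metre")
print(f"coefficient of variation across interpreters: {cv:.0%}")
```

Even with identical outcrop exposure, a between-interpreter coefficient of variation of this size would dominate many downstream estimates, which is the kind of effect the study quantifies.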

Based on our findings and on the understanding of bias reduction in the decision sciences, we suggest protocols to recognise, understand and limit the effect of subjective bias on fracture data during collection.

Our work shows the capacity of cognitive biases to introduce uncertainty into observation-based data. Fracture statistics derived from field data often feed into geological models used for a range of applications, from understanding fluid flow to characterising rock strength, so these uncertainties can propagate into a range of outcomes. Importantly, our finding that personal bias can affect data collection has implications well beyond the geosciences.

How to cite: Andrews, B., Roberts, J., Shipton, Z., Johnson, G., Bigi, S., and Tartarello, M. C.: Do different geologists see the same fractures? Quantifying subjective bias in fracture data collection, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-21848, https://doi.org/10.5194/egusphere-egu2020-21848, 2020

D2151 |
Zoe Shipton, Jen Roberts, Emma Comrie, Yannick Kremer, Rebecca Lunn, and Jonathan Caine

Mental models are a person's internal representation of the real world and play an important role in the way they understand and reason about uncertainties, explore potential options, and make decisions. However, mental models are susceptible to biases. These issues have not yet received much attention in the geosciences, yet systematic biases can affect the scientific process of any geological investigation: from the inception of how the problem is viewed, through the selection of appropriate hypotheses and data collection and processing methods, to the conceptualisation and communication of results. This presentation draws on findings from cognitive science and system dynamics, together with knowledge and experience of field geology, to consider the limitations and biases introduced by mental models in geoscience, and their effect on predictions of the physical properties of faults in particular. We highlight a number of biases specific to geological investigations and propose strategies for debiasing. Doing so will enhance how multiple data sources can be brought together, and minimise controllable geological uncertainty to develop more robust geological models. Critically, we argue that there is a need for standardised procedures that guard against biases, permitting data from multiple studies to be combined and assumptions to be communicated. While we use faults to illustrate potential biases in mental models and their implications, our findings apply across the geosciences.

How to cite: Shipton, Z., Roberts, J., Comrie, E., Kremer, Y., Lunn, R., and Caine, J.: Fault Fictions: Systematic biases in the conceptualization of fault zone architecture, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-21294, https://doi.org/10.5194/egusphere-egu2020-21294, 2020

D2152 |
Rafael Pimentel, María José Polo, María José Pérez-Palazón, Stefan Achleitner, Manuel Díez-Minguito, Andreas Huber, Philip Kruse, Andrea Lira, Johannes Lückenkötter, and Maria-Helena Ramos

By definition, a climate service (CS) is the provision of climate information to assist decision-making. CS users are therefore the crucial agents in the CS production chain. The user's role needs to go beyond merely making use of the CS; users must also be taken into account during CS design and implementation. This can be accomplished by creating a feedback loop in which users interact with CS developers. Nevertheless, users' a priori knowledge (i.e. their background, expectations of CS, and previous experience with CS) can condition their role in this co-development process. Identifying this prior knowledge, and how it conditions user perception of CS, is not easy. Online surveys and personal interviews, the most widespread techniques for gathering information about users, are on the one hand not usually designed to probe a priori user knowledge and, on the other hand, can be influenced by many subjective factors.

This work assesses the role of users' previous knowledge and the perception users have of CS. An experiment was designed and carried out with about 100 final-year bachelor and master engineering students (agronomic, civil, forestry, geotechnical, hydraulic) across Europe (Germany, Austria, France and Spain), representing potential CS users with similar initial knowledge. In the experiment the student population was split into two samples: one received specific CS training, the other none, thereby simulating users with and without a priori knowledge of CS. The students then played a role game in which they acted as consultants hired by a water management authority to make a decision regarding the management of a lake. Different levels of information (i.e. ensemble mean, ensemble spread, robustness of climate model) were provided to the students during the game to evaluate basic climate concepts.
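The successive levels of information handed to the students can be mimicked with a small sketch; the ensemble member names and projected changes below are invented for illustration and are not AQUACLEW results:

```python
import statistics

# Hypothetical end-of-century change (%) in lake inflow projected by a
# small GCM-RCM ensemble (illustrative values only)
ensemble = {"GCM1-RCM1": -12.0, "GCM1-RCM2": -5.0, "GCM2-RCM1": -9.0,
            "GCM2-RCM2": -15.0, "GCM3-RCM1": -2.0}

# First level of information given to participants: the ensemble mean
mean_change = statistics.mean(ensemble.values())

# Second level: the ensemble spread (range between extreme members)
spread = max(ensemble.values()) - min(ensemble.values())

print(f"ensemble mean change: {mean_change:+.1f}%")
print(f"ensemble spread: {spread:.1f} percentage points")
```

In the role game, the question is whether participants are willing to decide on lake management given the mean alone, or whether they demand the spread and model-robustness information before committing, and whether CS training changes that threshold.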

The experiment results show that previous knowledge plays a role in the decisions users take. Trained users required more complex information before being willing to make a decision, while untrained users trusted less complex information. No significant differences were found between countries or between the two educational levels.

This work was funded by the project AQUACLEW, which is part of ERA4CS, an ERA-NET initiated by JPI Climate, and funded by FORMAS (SE), DLR (DE), BMWFW (AT), IFD (DK), MINECO (ES), ANR (FR) with co-funding by the European Commission [Grant 690462].

How to cite: Pimentel, R., Polo, M. J., Pérez-Palazón, M. J., Achleitner, S., Díez-Minguito, M., Huber, A., Kruse, P., Lira, A., Lückenkötter, J., and Ramos, M.-H.: Assessing the role of a priori user knowledge in climate services perception: An experiment with university students across Europe, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-11442, https://doi.org/10.5194/egusphere-egu2020-11442, 2020