SM1.2 | EDI | Is uncertainty useful?
Co-organized by EMRP2/GI6
Convener: Andrew Curtis | Co-conveners: Alison Malcolm, Klaus Mosegaard, Andreas Fichtner, Xin Zhang
Orals | Mon, 24 Apr, 10:45–12:30 (CEST) | Room -2.47/48
Posters on site | Attendance Mon, 24 Apr, 14:00–15:45 (CEST) | Hall X2
Assessing the uncertainty in observations and in scientific results is a fundamental part of the scientific process. In principle uncertainty estimates allow data of different types to be weighted appropriately in joint interpretations, allow existing results to be tested against new data, allow potential implications of the results to be tested for relative significance, allow differences between best-fit model estimates to be explained, and allow quantitative risk assessments to be performed. In practice, uncertainty estimation can be theoretically challenging, computationally expensive, model-dependent and subject to expert biases. This session will explore the value or otherwise of the significant effort that is required to assess uncertainty in practice.

We welcome contributions from the solid Earth sciences for and against the calculation and use of uncertainties. We welcome contributions that extend the use of subsurface model uncertainties for important purposes and that demonstrate the value of uncertainties. We also welcome contributions which argue against the value of uncertainties, perhaps particularly given the cost of their assessment. Uses of uncertainties may include value of information (VOI) calculations, the use of models for forecasting new quantities that can be tested, the reconciliation of historically diverse models of the same structures or phenomena, or any other result that fits the overall brief of demonstrating value. Arguments against the value of uncertainty may include anything from pragmatic uses of uncertainty estimates that have demonstrably failed to be useful, to philosophical issues of how it is possible even to define uncertainty in model-based contexts. All pertinent contributions are welcome, as is a lively discussion!

Orals: Mon, 24 Apr | Room -2.47/48

Chairpersons: Andrew Curtis, Alison Malcolm, Klaus Mosegaard
10:45–10:50
10:50–11:10 | EGU23-1524 | SM1.2 | solicited | On-site presentation
Anya Reading, Tobias Stål, Ross Turner, Felicity McCormack, Ian Kelly, Jacqueline Halpin, and Niam Askey-Doran

Uncertainty, as applied to geophysical and multivariate initiatives to constrain aspects of Earth-ice interactions for East Antarctica, provides a number of approaches to appraise and interrogate research results.  We discuss a number of use cases: 1) making use of multiple uncertainty metrics; 2) making comparisons between spatially variable maps of inferred properties such as geothermal heat flow; 3) extrapolating crustal structure given the likelihood of tectonic boundaries; and 4) providing research results for interdisciplinary studies in forms that facilitate ensemble approaches.

 

It proves extremely useful to assess a research finding, such as a mapped geophysical property, through multiple uncertainty metrics (e.g., standard deviation, information entropy, data count). However, even a thoughtful appraisal of multiple metrics can be misleading, and hence not useful in isolation, in cases where significant unquantified uncertainties remain. Uncertainties supplied with the mapped geophysical properties can potentially be extended to capture this broader range, but that range in turn could become less helpful as the fine detail in the quantified uncertainty will be lost.
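
As a minimal illustration of the kind of multi-metric appraisal described above, the sketch below computes standard deviation, information entropy and data count for a hypothetical ensemble of gridded maps; the array names, sizes and values are placeholders, not the authors' data or workflow.

```python
import numpy as np

rng = np.random.default_rng(0)
ensemble = rng.normal(55.0, 8.0, size=(200, 60, 80))   # hypothetical heat-flow maps, mW/m^2
data_count = rng.integers(0, 12, size=(60, 80))        # observations per grid cell

# Metric 1: ensemble standard deviation per cell
std_map = ensemble.std(axis=0)

# Metric 2: Shannon information entropy of the per-cell sample histogram
def cell_entropy(samples, bins=16):
    counts, _ = np.histogram(samples, bins=bins)
    p = counts / counts.sum()
    p = p[p > 0]
    return -(p * np.log2(p)).sum()

entropy_map = np.apply_along_axis(cell_entropy, 0, ensemble)

# Metric 3: raw data count, a proxy for otherwise unquantified sampling uncertainty
# The three maps highlight different things: spread, multi-modality, and coverage.
print(std_map.mean(), entropy_map.mean(), data_count.mean())
```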

 

In the case of a property such as geothermal heat flow, indirectly determined for East Antarctica, insights can be drawn by subtracting a forward model map from an empirically determined result (e.g. Aq1) to yield the non-steady state components excluded in the forward model.  In such investigations, including the maximum and minimum possible difference between maps is essential to understand which non-steady state anomalies are real, and which could be artifacts attributable to (quantified) uncertainty.
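
A similarly hedged sketch of the map-differencing idea, using synthetic stand-ins for the empirical and forward-model maps and their quantified uncertainties (all values are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
empirical = rng.normal(60.0, 5.0, size=(60, 80))   # stand-in for an empirical heat-flow map
sigma_emp = np.full((60, 80), 8.0)                 # its quoted 1-sigma uncertainty
forward = rng.normal(58.0, 4.0, size=(60, 80))     # steady-state forward-model map
sigma_fwd = np.full((60, 80), 6.0)

diff = empirical - forward                          # candidate non-steady-state signal
max_diff = (empirical + sigma_emp) - (forward - sigma_fwd)
min_diff = (empirical - sigma_emp) - (forward + sigma_fwd)

# An anomaly is only treated as real if its sign survives the worst-case bounds
robust = (min_diff > 0) | (max_diff < 0)
print("mean difference:", diff.mean(), " fraction of robust anomalies:", robust.mean())
```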

 

In further use cases, we show how the few available seismic measurements that constrain the crust and upper mantle structure of East Antarctica can be placed in context, given the likelihood of major tectonic boundaries beneath the ice, and link this to published constraints on the seismic structure (and hence, rheology) of the deeper lithosphere. In terms of how the solid Earth interacts with the ice sheet above, the impact of fine scale-length variations in spatial uncertainty may be investigated in relation to, for example, ice sheet modelling. For a large and relatively unexplored region such as East Antarctica, uncertainty yields many and varied insights.

How to cite: Reading, A., Stål, T., Turner, R., McCormack, F., Kelly, I., Halpin, J., and Askey-Doran, N.: Insights from the spatial variability of (multiple) uncertainties: Earth-ice interactions for East Antarctica, EGU General Assembly 2023, Vienna, Austria, 23–28 Apr 2023, EGU23-1524, https://doi.org/10.5194/egusphere-egu23-1524, 2023.

11:10–11:20 | EGU23-17483 | SM1.2 | On-site presentation
Lucy Bailey, Mike Poole, Oliver Hall, and Lucia Gray

The ability to quantify uncertainty effectively in complex systems is not only useful but essential for making good decisions or predictions based on incomplete knowledge. Conversely, failure to quantify uncertainty, and a reliance on making assumptions instead, prevents a proper understanding of the uncertain system and leads to poor decision-making.

In our work to implement a geological disposal facility (GDF) for higher-activity radioactive waste, we need to be very confident in our demonstration of the safety of the facility over geological timescales (hundreds of thousands of years). There are inevitably large uncertainties about the evolution of such a system over such timescales. We have developed a strategy for managing and quantifying uncertainty which we believe is more generally applicable to complex systems with large uncertainties. At the centre of the strategy are three concepts: a top-down, iterative approach to building a model of the ‘total system’; a probabilistic Bayesian mathematical treatment of uncertainty; and a carefully designed methodology for quantifying uncertainty in model parameters by expert judgement that mitigates the cognitive biases which usually lead to over-confidence.

Our total system model is a probabilistic model built using a top-down approach. It is run many times as a Monte-Carlo simulation in which, for each realisation, parameter values are sampled from probability density functions representing the uncertainty. It is built with the performance measures of interest in mind, starting as simply as possible, then iteratively adding detail for those parts of the model where previous iterations have shown the performance measures to be most sensitive. It sits well with a similar iterative approach to data gathering, the aim being to develop an understanding of which parts of the total system really matter.
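
A minimal sketch of such a Monte-Carlo run over a 'total system' model is given below; the two uncertain parameters, their distributions and the toy response are illustrative assumptions, not the actual GDF model.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000

# Each realisation samples parameters from PDFs encoding (expert-judged) uncertainty
permeability = rng.lognormal(mean=-30.0, sigma=1.0, size=n)            # hypothetical, m^2
solubility = rng.triangular(left=1e-7, mode=1e-6, right=1e-5, size=n)  # hypothetical, mol/l

# Toy 'total system' response; a real model would chain many such sub-models
performance = 1.0e19 * permeability * solubility

# Probabilistic statements for decision makers, rather than one best estimate
print("median performance measure:", np.median(performance))
print("95th percentile:", np.percentile(performance, 95))
```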

In our experience, for such a strategy to be both tractable and effective, it is essential that the level of detail and complexity in any quantitative analysis is commensurate with the amount of uncertainty. There needs to be a recognition that initial consideration of the system in too great a level of detail is futile when the uncertainty is large. It is through iterative learning, understanding the sensitivities of the total system, and refining our analysis and data gathering in areas of significance, that we can handle even complex uncertainty and develop a sound basis for confidence in decision making.

We are now looking to explore new approaches to iterative learning that involve maximising the information that can be gained, even from initially sparse datasets, to aid confident decision-making.

 

How to cite: Bailey, L., Poole, M., Hall, O., and Gray, L.: Quantifying Uncertainty for Complex Systems, EGU General Assembly 2023, Vienna, Austria, 23–28 Apr 2023, EGU23-17483, https://doi.org/10.5194/egusphere-egu23-17483, 2023.

11:20–11:30 | EGU23-17147 | SM1.2 | On-site presentation
Daniel Straub, Wolfgang Betz, Mara Ruf, Amelie Hoffmann, Daniel Koutas, and Iason Papaioannou

In science and engineering, models are used for making predictions. These predictions are associated with uncertainties, mainly due to limitations in the models and data availability. While these uncertainties might be reduced with further analysis and data collection, that is often not an option because of constrained resources. Whenever the resulting predictions serve as a basis for decision making, it is important to appraise the uncertainty, so that decision makers can understand how much weight to give to the predictions. In addition, performing uncertainty and sensitivity analysis at intermediate stages of a study can help to better focus the model building process on those elements that contribute most to the uncertainty. Decision sensitivity metrics, which are based on the concept of value of information, make it possible to identify which uncertainties most affect the conclusions drawn from the model outcomes. We have found that such decision sensitivity metrics can be a powerful tool to understand and communicate an acceptable level of uncertainty associated with model predictions.
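
As a hedged illustration of a value-of-information style decision-sensitivity calculation (not the authors' implementation), the sketch below computes the expected value of perfect information for a toy flood-protection decision; the actions, costs and hazard model are placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50_000
flood_depth = rng.gamma(shape=2.0, scale=0.5, size=n)   # uncertain hazard, metres

def cost(action, depth):
    # action 0: no protection; action 1: build a 1 m wall for a fixed cost of 2.0
    protected_to = 1.0 if action == 1 else 0.0
    damage = np.where(depth > protected_to, 10.0 * depth, 0.0)
    return damage + (2.0 if action == 1 else 0.0)

# Decision under current uncertainty: pick the action with lowest expected cost
expected_costs = [cost(a, flood_depth).mean() for a in (0, 1)]
best_now = min(expected_costs)

# Decision with perfect information: choose the best action per realisation, then average
cost_perfect = np.minimum(cost(0, flood_depth), cost(1, flood_depth)).mean()

evpi = best_now - cost_perfect   # expected value of perfect information
print("expected costs:", expected_costs, " EVPI:", evpi)
```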

In this contribution, we will discuss the general principles of decision-oriented sensitivity measures for dealing with uncertainty and will demonstrate them on two real-life cases: (1) the use of geological models for the choice of the nuclear waste deposit site in Switzerland, and (2) the use of flood risk models for decisions on flood protection along the Danube river.

 

How to cite: Straub, D., Betz, W., Ruf, M., Hoffmann, A., Koutas, D., and Papaioannou, I.: Addressing uncertainty in models for improved decision making, EGU General Assembly 2023, Vienna, Austria, 23–28 Apr 2023, EGU23-17147, https://doi.org/10.5194/egusphere-egu23-17147, 2023.

11:30–11:40 | EGU23-11807 | SM1.2 | On-site presentation
Thomas Mejer Hansen and Rasmus Bødker Madsen

“All models are wrong but some are useful” (most often credited to George Box) is a commonly used aphorism, probably because it resonates with some truth for many. We argue, though, that it would be more correct to say “All deterministic models are wrong but some are useful”. Here, a deterministic model refers to any single, and in some quantitative way ‘optimal’, model, typically the result of minimizing some objective function. A deterministic model may be useful as a basis for making decisions, but it may also lead to disastrous results. The really disturbing issue with deterministic models is that we do not know whether a given one is useful for a specific application, because of the lack of uncertainties.

On the other hand, a probabilistic model, described by a probability density or perhaps by many realizations of a probability density, can in principle represent arbitrarily complex uncertainty. In the simplest case, where the probabilistic model is represented by a maximum-entropy uncorrelated uniform distribution, one can say that “The simplest probabilistic model is true but not very useful.” It is true in the sense that the real Earth model is represented by the probabilistic model, i.e. it is a possible realization from the probabilistic model, but not very useful, as little to no information about the Earth can be inferred.

In an ideal case, a probabilistic model can be set up from a variety of different sources such that it is both informative (low entropy) and consistent with the actual subsurface, in which case we can say “An informative probabilistic model can be true and also very useful.” Any uncertainty in the probabilistic model can then be propagated to any other related uncertainty assessment using simple Monte Carlo methods. In such a case, clearly, uncertainty is useful.
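
The propagation step mentioned above can be sketched as follows, using a hypothetical probabilistic model of layer velocities and a toy travel-time forward relation (all values are illustrative):

```python
import numpy as np

rng = np.random.default_rng(7)
n_real, n_layers = 5_000, 10
thickness = np.full(n_layers, 100.0)                 # m, fixed layer geometry

# Probabilistic model: correlated velocity realisations (informative, low entropy)
idx = np.arange(n_layers)
mean_v = np.linspace(1500.0, 3000.0, n_layers)
cov = 100.0**2 * np.exp(-np.abs(idx[:, None] - idx[None, :]) / 3.0)
velocities = rng.multivariate_normal(mean_v, cov, size=n_real)

# Push every realisation through the forward relation (vertical travel time)
travel_time = (thickness / velocities).sum(axis=1)

# The spread of the derived quantity is the propagated uncertainty
print("mean travel time:", travel_time.mean(), " std:", travel_time.std())
```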

In practice, though, when a probabilistic Earth model has been constructed from different sources (such as structural geology, well logs, and geophysical data), one will often find that the uncertainty of each source of information is underestimated, such that the combined model describes too little uncertainty. This can lead to potentially worse decision-making than when using a deterministic model (that one knows is not correct), as one may take a decision based on an apparently low probability of a risky scenario that simply reflects the underestimation and/or bias of the uncertainty.

We will show examples of constructing both deterministic and probabilistic Earth models, based on a variety of geo-based information. We hope to convince the audience that a probabilistic model can be designed such that it is consistent with the actual subsurface and, at the same time, provides an optimal basis for decision-makers and risk analysis.

In the end, we argue that uncertainty is not only useful but essential to any decision-making, but also that it is of utmost importance that the underlying information is quantified in an unbiased way. If not, a probabilistic model may simply provide a complex basis on which to take wrong decisions.

How to cite: Hansen, T. M. and Madsen, R. B.: Why probabilistic models are often true, but can be either useful or useless., EGU General Assembly 2023, Vienna, Austria, 23–28 Apr 2023, EGU23-11807, https://doi.org/10.5194/egusphere-egu23-11807, 2023.

11:40–11:50 | EGU23-15776 | SM1.2 | On-site presentation
Guillaume Caumon, Julien Herrero, Thomas Bodin, and Paul Baville

Sedimentary strata are essential archives of the past conditions of the Earth, and host significant natural resources in the subsurface. However, inferring the features of strata at depth (e.g., geometry, connectivity, physical or geological properties) remains a challenge prone to many uncertainties. Classically, the layers and their geometry are first interpreted from boreholes, geological outcrops and geophysical images, then layer properties can be addressed with geostatistical techniques and inverse methods. Theoretical models considering horizon depth uncertainty were proposed decades ago, and geostatistical simulation can sample petrophysical uncertainties, but these approaches leave the number of layers fixed and rely on conformable layering assumptions which are seldom met. We review some recent developments in well correlation in the frame of relative chronostratigraphy, which addresses the problem of locating potential gaps in the stratigraphic record. We also present some first results of the integration of the number of layers in inverse problems using a reversible jump Monte Carlo method. These two elements open interesting perspectives to jointly address topological, geometrical and petrophysical uncertainties at multiple scales in sedimentary basins. Although such uncertainties can have a significant impact on quantitative geological and geophysical model forecasts, many computational challenges still lie ahead to appropriately sample uncertainties. Overcoming these challenges should open the way to finding, on a case-by-case basis, the suitable level of detail between detailed stratigraphic architectures and effective medium representations.
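
To make the reversible-jump idea concrete, the sketch below samples a toy 1-D piecewise-constant model whose number of nuclei (layers) is itself unknown. It is not the authors' implementation: birth positions and values are drawn from uniform and Gaussian priors, the prior on the number of nuclei is uniform, and birth/death moves are proposed with equal probability, in which case the trans-dimensional acceptance term reduces to the likelihood ratio.

```python
import numpy as np

rng = np.random.default_rng(3)
x = np.linspace(0.0, 1.0, 80)
y_true = np.where(x < 0.4, 2.0, np.where(x < 0.7, 3.0, 1.5))
y_obs = y_true + rng.normal(0.0, 0.2, x.size)
sigma, sigma_prior, k_max = 0.2, 2.0, 20

def predict(pos, val):
    # 1-D Voronoi interpolation: each x takes the value of its nearest nucleus
    return val[np.argmin(np.abs(x[:, None] - pos[None, :]), axis=1)]

def loglike(pos, val):
    r = y_obs - predict(pos, val)
    return -0.5 * np.sum(r**2) / sigma**2

pos, val = np.array([0.5]), np.array([2.0])
ll = loglike(pos, val)
n_nuclei = []

for it in range(20_000):
    u = rng.random()
    if u < 1 / 3 and pos.size < k_max:            # birth: position and value drawn from the prior
        p2 = np.append(pos, rng.random())
        v2 = np.append(val, rng.normal(0.0, sigma_prior))
    elif 1 / 3 <= u < 2 / 3 and pos.size > 1:     # death: remove a randomly chosen nucleus
        j = rng.integers(pos.size)
        p2, v2 = np.delete(pos, j), np.delete(val, j)
    else:                                         # fixed-dimension move: perturb one value
        j = rng.integers(pos.size)
        v2 = val.copy()
        v2[j] += rng.normal(0.0, 0.1)
        p2 = pos
    ll2 = loglike(p2, v2)
    # the Gaussian value prior only enters the acceptance for the fixed-dimension move
    dlogprior = 0.0 if p2.size != pos.size else -0.5 * (v2**2 - val**2).sum() / sigma_prior**2
    if np.log(rng.random()) < (ll2 - ll) + dlogprior:
        pos, val, ll = p2, v2, ll2
    n_nuclei.append(pos.size)

print("posterior mean number of nuclei:", np.mean(n_nuclei[5000:]))
```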

How to cite: Caumon, G., Herrero, J., Bodin, T., and Baville, P.: Assessing and reducing stratigraphic uncertainty in the subsurface: where are we standing?, EGU General Assembly 2023, Vienna, Austria, 23–28 Apr 2023, EGU23-15776, https://doi.org/10.5194/egusphere-egu23-15776, 2023.

11:50–12:00 | EGU23-4591 | SM1.2 | ECS | solicited | On-site presentation
Wan-Lin Hu

Uncertainties in geological structural geometries constructed from seismic reflection data can stem from data acquisition, processing, analysis, or interpretation. Uncertainties arising from structural interpretations and subsequent estimates of geological slip have been particularly little quantified and discussed. To illustrate the implications of interpretation uncertainties for seismic potential and structural evolution, I use an example of a shear fault-bend fold in the central Himalaya. I apply a simple solution from the kinematic model of shear fault-bend folding to resolve the geological input slip of a given structure, and then compare the result with a previous study to show how differences in structural interpretations can impact dependent conclusions. The findings show that only a small variation in interpretation, owing to subjectivity or an unclear seismic image, could yield geological slip rates differing by up to about 10 mm/yr, resulting in significantly different scenarios of seismic potential. To reduce unavoidable subjectivity, this study also suggests that the epistemic uncertainty in raw data should be included in interpretations and conclusions.

How to cite: Hu, W.-L.: How do differences in interpreting seismic images affect estimates of geological slip rates?, EGU General Assembly 2023, Vienna, Austria, 23–28 Apr 2023, EGU23-4591, https://doi.org/10.5194/egusphere-egu23-4591, 2023.

12:00–12:10 | EGU23-3610 | SM1.2 | ECS | Virtual presentation
Advancing Stein Variational Gradient Descent for geophysical uncertainty estimation
(withdrawn)
Muhammad Izzatullah, Matteo Ravasi, and Tariq Alkhalifah
12:10–12:20 | EGU23-2930 | SM1.2 | Virtual presentation
Ciaran Beggan and William Brown

Models of the Earth’s main magnetic field, such as the International Geomagnetic Reference Field (IGRF), are described by spherical harmonic (Gauss) coefficients to degree and order 13, which allows the continuous evaluation of the field at any location and time on or above the surface. They are created from satellite and ground-based magnetometer data and describe the large-scale variation (spatial scale of 3000 km) of the magnetic field in space and time under quiet conditions.

In its technical form, the model is a spectral representation and thus its formal uncertainty (as a function of wavelength) is of limited use for the spatial values expected by the average user. To address this, we estimated the large-scale, time-invariant spatial uncertainty of the IGRF based on the globally averaged misfit of the model to ground-based measurements at repeat stations and observatories between 1980 and 2021. As an example, we find the 68.3% confidence interval is 87 nT in the North (X) component, 73 nT in the East (Y) component and 114 nT in the vertical (Z) component. These values represent an uncertainty of around 1 part in 500 for the total field which, for the (average) compass user, is well below instrumental detectability.

For advanced users, in applications such as directional drilling, higher resolution models (<30 km) are required and the associated uncertainties are thus further divided into random and global, as well as correlated and uncorrelated, parts. However, the distribution of errors is Laplacian rather than Gaussian, and communicating the subtleties of long-tailed distributions to end-users is often a difficult task. We describe the different types of uncertainties for magnetic field models and how these are used (or not) in industrial applications.
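
The practical difference between quoting Gaussian and Laplacian uncertainties can be sketched as below, using synthetic residuals in place of the real repeat-station misfits (the scale values are illustrative only):

```python
import numpy as np
from scipy.stats import laplace, norm

rng = np.random.default_rng(5)
residuals = rng.laplace(loc=0.0, scale=60.0, size=20_000)   # synthetic misfits, nT

sigma = residuals.std()                                     # Gaussian description
b = np.mean(np.abs(residuals - np.median(residuals)))       # Laplace scale (MLE)

# Two-sided intervals under each error model: similar near the centre,
# very different in the tails, which is exactly the communication problem.
for p in (0.683, 0.997):
    g = norm.ppf(0.5 + p / 2, scale=sigma)
    lp = laplace.ppf(0.5 + p / 2, scale=b)
    print(f"{p:.1%} interval:  Gaussian ±{g:.0f} nT   Laplace ±{lp:.0f} nT")
```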

How to cite: Beggan, C. and Brown, W.: Defining spatial uncertainty in main field magnetic models, EGU General Assembly 2023, Vienna, Austria, 23–28 Apr 2023, EGU23-2930, https://doi.org/10.5194/egusphere-egu23-2930, 2023.

12:20–12:30 | EGU23-10003 | SM1.2 | ECS | On-site presentation
Dominik Strutz and Andrew Curtis

The design of geophysical surveys or experiments (henceforth, the experimental design) significantly influences the uncertainty in scientific results that can be inferred from recorded data. Typical aspects of experimental designs that can be varied are locations of sensors, sensor types, and the modelling or data processing methods to be applied to recorded data. To tighten constraints on the solution to any inverse or inference problem, and thus to rule out as many false possibilities as possible, the design should be optimised such that it is practically achievable within cost and logistical constraints, and such that it maximises expected post-experimental information about the solution. 

Bayesian experimental design refers to a class of methods that use uncertainty estimation methods to quantify the expected gain in information about target parameters provided by an experiment, and to optimise the design of the experiment to maximise that gain. Information gain quantifies the decrease in uncertainty caused by observing data. Expected information gain is an estimate of the gain in information that will be offered by any particular design post-experiment. Bayesian experimental design methods vary the design so as to maximise the expected information gain, subject to practical constraints. 
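
For concreteness, the sketch below estimates expected information gain by nested Monte Carlo for a toy 1-D design problem (choosing a receiver position to locate a source from one noisy travel time). It only illustrates the quantity being maximised; the variational estimators discussed here replace this brute-force average with optimised functional approximations, and all numbers are assumptions.

```python
import numpy as np

rng = np.random.default_rng(11)
v, sigma = 3.0, 0.05                       # wave speed (km/s) and pick noise (s)
N, M = 2_000, 2_000                        # outer / inner Monte-Carlo sample sizes

def log_lik(y, theta, d):
    mu = np.abs(theta - d) / v             # travel time from source theta to receiver d
    return -0.5 * ((y - mu) / sigma) ** 2 - np.log(sigma * np.sqrt(2 * np.pi))

def expected_information_gain(d):
    theta = rng.uniform(0.0, 10.0, size=N)                 # prior samples of source position
    y = np.abs(theta - d) / v + rng.normal(0.0, sigma, N)  # simulated data
    theta_inner = rng.uniform(0.0, 10.0, size=M)
    # log p(y_n | d), marginalised over the prior, for every outer sample n
    ll = log_lik(y[:, None], theta_inner[None, :], d)
    log_evidence = np.logaddexp.reduce(ll, axis=1) - np.log(M)
    return np.mean(log_lik(y, theta, d) - log_evidence)

for d in (0.0, 2.5, 5.0):
    print(f"receiver at {d} km: EIG ≈ {expected_information_gain(d):.2f} nats")
```

A receiver at the centre of the prior range leaves a two-sided ambiguity and therefore yields a lower expected information gain, a small-scale analogue of the point that optimal designs depend on the question asked.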

We introduce variational experimental design methods that are novel to geophysics, and discuss their benefits and limitations in the context of geophysical applications. The family of variational methods relies on functional approximations of probability distributions, and in some cases, of the model-data relationships. They can be used to design experiments that best resolve either all model parameters, or the answer to a specific question about the system studied. Their potential advantage over some other design methods is that finding the functional approximations used by variational methods tends to rely more on optimisation theory than the more common stochastic uncertainty analysis used to approximate Bayesian uncertainties. This allows the wealth of understanding of optimisation methods to be applied to the full Bayesian design problem. 

Variational design methods are demonstrated by optimising the design of an experiment consisting of seismometer locations on the Earth’s surface, so as to best estimate seismic source parameters given arrival time data obtained at seismometers. By designing separate experiments to constrain the hypocentres and epicentres of events, we show that optimal designs may change substantially depending on which questions about the subsurface we wish the experiment to help us to answer. 

By accounting for differing expected uncertainties in travel time picks depending on the picking method used, we demonstrate that the data processing method can be optimised as part of the design process, provided that expected uncertainties are available from each method.

How to cite: Strutz, D. and Curtis, A.: Variational Experimental Design Methods for Geophysical Applications, EGU General Assembly 2023, Vienna, Austria, 23–28 Apr 2023, EGU23-10003, https://doi.org/10.5194/egusphere-egu23-10003, 2023.

Posters on site: Mon, 24 Apr, 14:00–15:45 | Hall X2

Chairpersons: Andreas Fichtner, Xin Zhang
X2.41 | EGU23-17474 | SM1.2
Klaus Mosegaard

Probabilistic formulations of inverse problems are most often based on Bayes Rule, which is considered a powerful tool for integration of data information and prior information about potential solutions. However, since its introduction it has become apparent that the Bayesian inference paradigm presents a number of difficulties, especially in the phase where the problem is mathematically formulated.

 

Perhaps the most notable difficulty arises because Bayes Theorem is usually formulated as a relation between probability densities on continuous manifolds. This creates an acute crisis because of a problem described by the French mathematician Joseph Bertrand (1889), and later investigated by Kolmogorov and Borel. According to Kolmogorov's (1933/1956) investigations, conditioning of a probability density is underdetermined: In different parameterizations (reference frames), conditional probability densities express different probability distributions. Surprisingly, this problem is persistently neglected in the scientific literature, not least in applications of Bayesian inversion. We will explore this problem and show that it is a serious threat to the objectivity and quality of Bayesian computations including Bayesian inversion, computation of Bayes Factors, and trans-dimensional inversion.
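
A small numerical illustration of the conditioning ambiguity (a toy constructed for this summary, not taken from the abstract): the same zero-measure set y = 0 is approached through two different coordinates, and the resulting 'conditional' distributions of x differ systematically.

```python
import numpy as np

rng = np.random.default_rng(13)
n = 1_000_000
x = rng.uniform(-3.0, 3.0, n)
scale = 1.0 + 0.8 * np.abs(x)              # y-spread depends on x
y = rng.normal(0.0, scale)

eps = 0.01
sel_a = np.abs(y) < eps                    # shrink a slab of constant thickness in y
sel_b = np.abs(y) / scale < eps            # shrink the same set in a rescaled coordinate

# Both selections converge to the set {y = 0}, yet the implied conditionals of x differ:
print("E[|x|] given |y| < eps       :", np.abs(x[sel_a]).mean())
print("E[|x|] given |y|/s(x) < eps  :", np.abs(x[sel_b]).mean())
```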

 

Another difficulty in Bayesian Inference methods derives from the fact that data uncertainties, and prior information on the unknown parameters, are often unknown or poorly known. Because they are required in the calculations, statisticians have invented hierarchical methods to compute parameters (known as hyper-parameters) controlling these uncertainties. However, since both the data uncertainties and the prior information on the unknowns are supposed to be known 'a priori', but are calculated 'a posteriori', this creates another crisis, namely a violation of causality. We will take a close look at the consequences of this mixing of 'prior' and 'posterior', and show how it potentially jeopardizes the validity of Bayesian computations.

 

How to cite: Mosegaard, K.: Inconsistency and violation of causality in Bayesian inversion paradigms, EGU General Assembly 2023, Vienna, Austria, 23–28 Apr 2023, EGU23-17474, https://doi.org/10.5194/egusphere-egu23-17474, 2023.

X2.42 | EGU23-14914 | SM1.2
Andrew Curtis and Xin Zhang

This work discusses the use of full waveform inversion (FWI) with fully nonlinear estimates of uncertainty, to monitor changes in the Earth’s subsurface due to dynamic processes. Typically, FWI is used to produce high resolution 2D and 3D static subsurface images by exploiting information in full acoustic, seismic or electromagnetic waveforms, and has been applied at global, regional and industrial spatial scales. To avoid the over-interpretation of poorly constrained parts of resulting subsurface images or models, it is necessary to know their uncertainty – the range of possible subsurface models that are consistent with recorded data and other pertinent constraints. Almost all estimates of uncertainty on the results of FWI approximate the model-data relationships by linearisation to make the calculation computationally efficient; unfortunately this throws those uncertainty estimates into question, since their raison d’etre is to account for possible model and data variations which are themselves related nonlinearly.

In a related abstract and associated manuscript we use variational inference to achieve the first Bayesian uncertainty analysis for 3D FWI that is fully nonlinear (i.e., involves no linearisation of model-data relationships: https://arxiv.org/abs/2210.03613 ). Variational inference refers to a class of methods that optimize an approximation to the probability distribution that describes post-inversion parameter uncertainties.

Here we extend those methods to perform nonlinear uncertainty analysis for 4D (time-varying 3D) FWI monitoring of the subsurface. Specifically, we apply stochastic Stein variational gradient descent (sSVGD) to seismic data generated synthetically for two 3D seismic surveys acquired over a changing 3D subsurface structure based on the 3D overthrust model (Aminzadeh et al., 1997: SEG/EAGE 3-D Modeling Series No. 1). Iterated linearised inversion of each data set fails to image changes (~1%) in the wave speed of the medium, both when each inversion begins independently from the same (good) reference model, and when the best-fit model from inversion of the first survey’s data is used as the reference model for the second inversion. Nonlinear inversion of each data set from the same prior distribution also fails to detect these ~1% changes. However, the changes can be imaged and their uncertainty estimated if variational methods applied to invert data from the second survey are initiated from their final state in the inversion of the first survey data. In addition, the methods then converge far more rapidly, compared to running each inversion independently.
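
For readers unfamiliar with the method, a minimal (deterministic) SVGD particle update on a toy 2-D posterior is sketched below; the stochastic-gradient extension and everything FWI-specific are omitted, and all settings are illustrative.

```python
import numpy as np

rng = np.random.default_rng(21)
cov_inv = np.linalg.inv(np.array([[1.0, 0.8], [0.8, 1.0]]))   # toy correlated Gaussian target

def grad_log_p(x):
    return -(x @ cov_inv)                  # gradient of the log-density at each particle

def svgd_step(x, step=0.05):
    n = x.shape[0]
    diff = x[:, None, :] - x[None, :, :]
    d2 = np.sum(diff**2, axis=-1)
    h = np.median(d2) / np.log(n + 1.0) + 1e-12           # median-heuristic kernel width
    k = np.exp(-d2 / h)                                   # RBF kernel matrix
    repulsion = (2.0 / h) * (diff * k[:, :, None]).sum(axis=1)
    phi = (k @ grad_log_p(x) + repulsion) / n             # Stein variational update direction
    return x + step * phi

particles = rng.normal(0.0, 3.0, size=(300, 2))
for _ in range(500):
    particles = svgd_step(particles)

print("sample covariance of particles:\n", np.cov(particles.T))
```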

We conclude that the probability distributions describing 3D seismic velocity uncertainty are sufficiently complex that the computations of 3D parameter uncertainty for each survey independently have not converged sufficiently to detect small 4D changes. However, the change in these probability distributions between surveys must be sufficiently small that the final solution found from the first survey could evolve robustly into the second survey solution, such that changes are resolved above the uncertainty using variational methods. Nevertheless, this change must be sufficiently complex that linearised methods can not evolve smoothly from one solution to the next, explaining why linearised methods fail, and highlighting why the estimation of nonlinear uncertainties is so important for imaging and monitoring applications.

 

How to cite: Curtis, A. and Zhang, X.: On Monitoring Changes in the Earth’s Subsurface using 4D Bayesian Variational Full Waveform Inversion, EGU General Assembly 2023, Vienna, Austria, 23–28 Apr 2023, EGU23-14914, https://doi.org/10.5194/egusphere-egu23-14914, 2023.

X2.43 | EGU23-8679 | SM1.2
Alison Malcolm, Maria Kotsi, Gregory Ely, and Jean Virieux

Determining whether uncertainty quantification is worth it or not is closely related to how that uncertainty is computed and the associated computational cost. For seismic imaging, it is typically done using Markov chain Monte Carlo (McMC) algorithms. Solving an inverse problem using McMC means exploring and characterizing the ensemble of all plausible models through a more or less point-wise random walk in the data misfit landscape. This is typically done using Bayes’ theorem via the computation of a posterior probability density function. Even though this can sound naively simple, it can come with a significant computational burden given the dimension of the problem to be solved and the expense of the forward solver. This is because as the number of dimensions grows, there are exponentially more possible guesses the algorithm can make, while only a few of these models will be accepted as plausible. More advanced uncertainty quantification methods such as Hamiltonian Monte Carlo (HMC) could be beneficial because they are expected to handle higher dimensions through efficient sampling of the model space along pseudo-mechanical trajectories in the data misfit landscape. In order for an HMC algorithm to efficiently sample the model space of interest and provide meaningful uncertainty estimates, three hyper-parameters need to be tuned for trajectory design: the number of leapfrog steps L, the leapfrog stepsize ε, and the mass matrix M. There has already been work showing how one can choose L and ε; however, designing the appropriate M is far more challenging. We consider a time-lapse seismic scenario and use a local acoustic solver for fast forward solutions. We then use singular value decomposition, in the vicinity of the true model, to transform our time-lapse optimal model to a system of normal coordinates, and use only a few of the eigenvalues and eigenvectors of the Hessian as oscillators. By doing so, we can efficiently understand the impact of the initial conditions and the choice of M, and gain insight into how to design M in the standard system. This gives us an intuitive way to understand the mass matrix, allowing us to determine whether gains from the HMC algorithm are worth the cost of determining the parameters.
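
The role of the three hyper-parameters can be seen in a minimal HMC sketch on a deliberately ill-scaled toy target, which stands in for a seismic misfit landscape; nothing here is the authors' code, and all settings are illustrative.

```python
import numpy as np

rng = np.random.default_rng(8)
C_inv = np.diag([1.0, 100.0])                 # toy target: zero-mean Gaussian, badly scaled

def potential(m):  return 0.5 * m @ C_inv @ m           # negative log posterior
def grad(m):       return C_inv @ m

def hmc_step(m, L=20, eps=0.1, M=np.eye(2)):
    M_inv = np.linalg.inv(M)
    p = rng.multivariate_normal(np.zeros(2), M)          # momentum ~ N(0, M)
    m_new, p_new = m.copy(), p.copy()
    for _ in range(L):                                   # leapfrog trajectory of L steps, size eps
        p_new -= 0.5 * eps * grad(m_new)
        m_new += eps * (M_inv @ p_new)
        p_new -= 0.5 * eps * grad(m_new)
    h_old = potential(m) + 0.5 * p @ M_inv @ p
    h_new = potential(m_new) + 0.5 * p_new @ M_inv @ p_new
    return (m_new, True) if np.log(rng.random()) < h_old - h_new else (m, False)

# A mass matrix matched to the local curvature makes the trajectories far more efficient
for M in (np.eye(2), np.diag([1.0, 100.0])):
    m, acc, samples = np.zeros(2), 0, []
    for _ in range(2_000):
        m, ok = hmc_step(m, M=M)
        acc += ok
        samples.append(m)
    print("accept rate:", acc / 2_000, " sample variances:", np.var(samples, axis=0))
```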

How to cite: Malcolm, A., Kotsi, M., Ely, G., and Virieux, J.: Is Hamiltonian Monte Carlo (HMC) really worth it? An alternative exploration of hyper-parameter tuning in a time-lapse seismic scenario, EGU General Assembly 2023, Vienna, Austria, 23–28 Apr 2023, EGU23-8679, https://doi.org/10.5194/egusphere-egu23-8679, 2023.

X2.44 | EGU23-8767 | SM1.2 | ECS
Changxiao Sun, Alison Malcolm, and Rajiv Kumar

Due to the nonlinearity of inversion as well as the noise in the data, seismic inversion results inevitably have uncertainties. Whether quantifying these uncertainties is useful depends at least in part on the computational cost of computing them. Bayesian techniques dominate uncertainty quantification for seismic inversion. The goal of these methods is to estimate the probability distribution of the model parameters given the observed data. The Markov chain Monte Carlo algorithm is widely employed for approximating the posterior distribution. However, generating the posterior samples by combining the prior and the likelihood is intractable for large problems and challenging for smaller problems. We apply a machine learning method called normalizing flows, which consists of a series of invertible and differentiable transformations, as an alternative to the sampling-based methods. In our work, the normalizing flows method is combined with full waveform inversion (FWI) using a numerically exact local solver to quantify the uncertainty of time-lapse changes. We integrate uncertainty quantification (UQ) and FWI by estimating UQ on the images generated by FWI, making it computationally practical. In this way, a reasonable posterior probability distribution is produced directly by transforming samples from a normal distribution, and the amount and spread of variation in FWI images is measured by the sample mean and standard deviation. Our numerical results verify that the method for calculating the posterior distribution of the model is practical and effective.
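
The change-of-variables mechanics that normalizing flows rely on can be sketched with a fixed invertible map, as below; the training of the map's parameters, which is the substantive part of the work described above, is deliberately omitted, and the map itself is an arbitrary illustrative choice.

```python
import numpy as np

rng = np.random.default_rng(17)

def forward(z, a=1.0, b=0.6):
    # invertible element-wise map x = a*z + b*z^3 (strictly monotone for a, b > 0)
    x = a * z + b * z**3
    log_det = np.sum(np.log(a + 3.0 * b * z**2), axis=-1)   # log |dx/dz|
    return x, log_det

def log_base(z):
    return -0.5 * np.sum(z**2, axis=-1) - 0.5 * z.shape[-1] * np.log(2 * np.pi)

z = rng.normal(size=(100_000, 2))          # latent samples from the base normal
x, log_det = forward(z)
log_q = log_base(z) - log_det              # density of the transformed (non-Gaussian) samples

# Posterior-style summaries come directly from the transformed samples
print("mean:", x.mean(axis=0), " std:", x.std(axis=0))
print("log-density of first sample:", log_q[0])
```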

How to cite: Sun, C., Malcolm, A., and Kumar, R.: Can Normalizing Flows make Uncertainty Quantification Practical for Time-Lapse Seismic Monitoring, EGU General Assembly 2023, Vienna, Austria, 23–28 Apr 2023, EGU23-8767, https://doi.org/10.5194/egusphere-egu23-8767, 2023.

X2.45 | EGU23-4074 | SM1.2
Tobias Stål, Anya M. Reading, Matthew J. Cracknell, Jörg Ebbing, Jacqueline A. Halpin, Ian D. Kelly, Emma J. MacKie, Mohamed Sobh, Ross J. Turner, and Joanne M. Whittaker

Antarctic subglacial properties impact geothermal heat, subglacial sedimentation, and glacial isostatic adjustment, all critical parameters for predicting the ice sheet's response to warming oceans. However, the tectonic architecture of the Antarctic interior is unresolved, with results dependent on the datasets or extrapolations used. Most existing deterministic suggestions are derived from qualitative observations and often presented as robust results; however, they hide possible alternative interpretations.

 

Using information entropy as a measure of certainty, we present a robust tectonic segmentation model generated from similarity analysis of multiple geophysical and geological datasets. The use of information entropy provides us with an unbiased and transparent metric to communicate the ambiguities from the uncertainties of qualitative classifications. Information theory also allows us to test and optimise the methods and data to evaluate how choices impact the distribution of alternative output maps. We further discuss how this metric can quantify the predictive power of parameters as a function of regions with different tectonic settings.
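
A minimal sketch of information entropy as a per-cell certainty measure, assuming a hypothetical stack of class-membership probabilities from some similarity analysis (shapes and values are placeholders, not the authors' data):

```python
import numpy as np

rng = np.random.default_rng(2)
p = rng.dirichlet(alpha=[0.5] * 6, size=(60, 80))   # per-cell class probabilities, (ny, nx, 6)

def entropy_bits(prob, axis=-1):
    prob = np.clip(prob, 1e-12, 1.0)
    return -(prob * np.log2(prob)).sum(axis=axis)

h = entropy_bits(p)              # 0 bits = unambiguous class; log2(6) bits = maximal ambiguity
best_class = p.argmax(axis=-1)   # the deterministic segmentation that the entropy map qualifies
print("mean entropy:", h.mean(), "bits;  share of cells below 1 bit:", (h < 1.0).mean())
```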

How to cite: Stål, T., Reading, A. M., Cracknell, M. J., Ebbing, J., Halpin, J. A., Kelly, I. D., MacKie, E. J., Sobh, M., Turner, R. J., and Whittaker, J. M.: Using information entropy to optimise and communicate certainty of continental scale tectonic models, EGU General Assembly 2023, Vienna, Austria, 23–28 Apr 2023, EGU23-4074, https://doi.org/10.5194/egusphere-egu23-4074, 2023.

X2.46 | EGU23-5116 | SM1.2
Nicola Piana Agostinetti and Raffaele Bonadio

Surface wave (SW) dispersion curves are widely used to retrieve 1D S-wave profiles of the Earth at different depth scales, from local to global models. However, such models are generally constructed with a number of assumptions which could bias the final results. One of the most critical issues is the assumption of a diagonal error covariance matrix as representative of the data uncertainties. Such a first-order approximation is obviously wrong to any SW practitioner, given the smoothness of dispersion curves, and could lead to overestimating the information content of the dispersion curves themselves.

In this study, we compute realistic errors (i.e. represented by a non-diagonal error covariance matrix) for Surface Wave dispersion curves computed from earthquake data. Given the huge amount of data available worldwide, realistic errors can be easily estimated using empirical formulations (i.e. repeated measurements of the same quantity). Such an approach leads to the computation of a full-rank empirical covariance matrix which can be used as input in standard likelihood computation (e.g. to drive a Markov chain Monte Carlo, McMC, sampling of a Posterior Probability Distribution, PPD, in the case of a Bayesian workflow).

We apply our approach to field measurements recorded over one decade in the British Isles. We first compute the empirical error covariance matrices for 12 two-station dispersion curves, under different assumptions, and then invert the curves using a standard trans-dimensional McMC algorithm to find relevant 1D S-wave profiles for each curve. We perform one inversion using the full-rank error covariance matrix and one using a diagonal version of the same matrix. We compare the retrieved profiles with published results. Our main finding is that 1D profiles obtained using a full-rank error covariance matrix are often similar to profiles obtained with a diagonal matrix and to published profiles obtained with different approaches. However, relevant differences occur in a number of cases, which potentially calls into question some details of the 1D models. Given how easily the full-rank error covariance matrix can be computed, we strongly suggest including realistic error computation in SW studies.
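
The sketch below illustrates the construction described above on synthetic data: an empirical (full-rank) error covariance from repeated, correlated dispersion-curve measurements, and the Gaussian log-likelihood evaluated with the full matrix versus its diagonal approximation. All numbers and the toy dispersion curves are placeholders.

```python
import numpy as np

rng = np.random.default_rng(19)
n_rep, n_per = 200, 30                            # repeated measurements, periods per curve
periods = np.linspace(5.0, 60.0, n_per)

# smooth, correlated errors mimic real dispersion-curve noise
corr = np.exp(-np.abs(periods[:, None] - periods[None, :]) / 10.0)
errors = rng.multivariate_normal(np.zeros(n_per), 0.02**2 * corr, size=n_rep)
curves = 3.5 + 0.01 * periods + errors            # km/s, toy dispersion curves

d_obs = curves[0]
C_full = np.cov(curves, rowvar=False)             # empirical (full-rank) error covariance
C_diag = np.diag(np.diag(C_full))                 # the usual first-order approximation

def gauss_loglike(residual, C):
    sign, logdet = np.linalg.slogdet(C)
    return -0.5 * (residual @ np.linalg.solve(C, residual)
                   + logdet + residual.size * np.log(2 * np.pi))

d_pred = 3.5 + 0.011 * periods                    # some candidate model prediction
r = d_obs - d_pred
print("full-covariance logL:", gauss_loglike(r, C_full))
print("diagonal-covariance logL:", gauss_loglike(r, C_diag))
```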

How to cite: Piana Agostinetti, N. and Bonadio, R.: Realistic uncertainties for Surface Wave dispersion curves and their influences on 1D S-wave profiles, EGU General Assembly 2023, Vienna, Austria, 23–28 Apr 2023, EGU23-5116, https://doi.org/10.5194/egusphere-egu23-5116, 2023.

X2.47 | EGU23-5661 | SM1.2 | ECS
Kuan-Yu Ke, Frederik Tilmann, Trond Ryberg, and Stefan Mroczek

Geophysical inverse problems (e.g. seismic tomography) are often significantly underdetermined, meaning that a large range of parameter values can explain the observed data well within data uncertainties. Markov chain Monte Carlo (McMC) algorithms based on Voronoi cell parameterizations have been used for quantifying uncertainty in seismic tomography for a number of years. Since surface waves constrain absolute shear velocities and receiver functions (RFs) image discontinuities beneath receiver locations, joint inversion of both data types based on McMC has become a popular method for revealing the structure near the Earth's surface with uncertainty estimates.

 

Joint inversion is usually performed in two steps: first invert for 2-D surface wave phase (or group) velocity maps, and then invert 1-D surface wave dispersion and RFs jointly to construct a 3-D spatial velocity structure. However, in doing so, the valuable information on lateral spatial variations in the velocity maps and on dipping discontinuities in the RFs may not be preserved, leading to biased 3-D velocity structure estimation. Hence, the lateral neighbors in the final 3-D model typically preserve little of the 2-D lateral spatial correlation information in the phase and group velocity maps.

 

A one-step 3-D direct inversion based on reversible jump McMC and 3-D Voronoi tessellation is proposed to address the above issues by inverting for 3-D spatial structure directly from frequency-dependent traveltime measurements and RFs. We account for dipping interfaces arising from the Voronoi parameterisation, meaning that the back azimuth and incidence angle of individual RFs must be taken into account. We present synthetic tests demonstrating the method. Individual inversions of surface wave measurements and RFs show the limitations of inverting the two data sets separately, as expected: surface waves are poor at constraining discontinuities while RFs are poor at constraining absolute velocities. The joint solution gives a better estimate of subsurface properties and associated uncertainties. Compared to the two-step inversion, which may propagate bias between the two steps and lose valuable lateral structure variations, the direct 3-D inversion not only produces more intuitively reasonable results but also provides more interpretable uncertainties.

How to cite: Ke, K.-Y., Tilmann, F., Ryberg, T., and Mroczek, S.: 3-D joint inversion of surface wave and receiver functions based on the Markov chain Monte Carlo, EGU General Assembly 2023, Vienna, Austria, 23–28 Apr 2023, EGU23-5661, https://doi.org/10.5194/egusphere-egu23-5661, 2023.

X2.48 | EGU23-15486 | SM1.2 | ECS
Wolfgang Szwillus

Determining the thermochemical structure of the mantle is crucial for understanding its evolution and dynamics. Temperature variations have long been known as important driving forces of mantle convection; however compositional differences can also influence dynamics. Additionally, compositional differences can act as indicators left behind by processes operating in the past. Both aspects have played a role in the ongoing discussions on the Large Low Shear Wave Velocity Provinces (LLSVP), the proposed Bridgmanite Enriched Ancient Mantle Structures (BEAMS) and the fate of subducted oceanic crust.

A prerequisite for determining compositional differences in terms of major oxides with geophysical techniques is a joint determination of several geophysical properties. A single geophysical property (density, velocity) could almost always be explained by temperature or composition variations alone – except in pathological edge cases. The geophysical signature of composition lies in the pointwise relation between properties. This pointwise relation can be distorted by spectral filtering or inversion smoothing and damping.

In this contribution, I parametrize the mantle as a collection of discrete spatial anomalies in terms of seismic velocity and density. Surface wave phase speed and satellite gravity data are used to constrain the anomalies. A transdimensional Monte Carlo Markov Chain method is used to generate ensembles of solutions that try to balance model complexity and data fit. An important aspect of this setup is that the two data sets used are complementary: While satellite gravity data are available (nearly) globally with homogeneous quality, coverage of phase speed data depends on the spatial distribution of seismic stations and large earthquakes. Conversely, the gravity field lacks true depth sensitivity, which surface wave data can provide by combining several frequencies.

I will present synthetic investigations that aim at determining how accuracy and coverage affect the simultaneous recoverability of seismic velocity and density.

How to cite: Szwillus, W.: Sensitivity of surface wave and gravity data to velocity and density structure in the mantle – insights from transdimensional inversion, EGU General Assembly 2023, Vienna, Austria, 23–28 Apr 2023, EGU23-15486, https://doi.org/10.5194/egusphere-egu23-15486, 2023.

X2.49 | EGU23-2190 | SM1.2 | ECS
Tianjue Li, Jing Chen, and Ping Tong

Precise determination of the earthquake hypocenter (longitude, latitude and depth) and its origin time is of fundamental importance not only for understanding the seismogenic process but also for revealing the Earth’s interior structure. Instrumental coverage plays the first-order role in determining earthquake locations. For earthquakes that occur in the continental interior, it is favorable to have seismic stations with full azimuthal coverage; nonetheless, precise determination of earthquake depth is often challenging due to its tradeoff with earthquake origin time. The situation is even worse for earthquakes that occur in offshore regions, e.g., the Pacific ring of fire, because regional seismic stations are mostly installed on the continent. To deal with the aforementioned challenges, we propose to constrain the earthquake hypocenter by jointly using first-arrival (P and S wave) and depth-phase traveltimes. The theoretical travel times of these phases are precisely and efficiently calculated in a 3D velocity model by solving the Eikonal equation. Once the earthquake hypocenter is well constrained, we further improve the accuracy of the origin time. We tested and verified the proposed earthquake location strategy in the Ridgecrest area (southern California), which serves as an end member of the continental setting, and in central Chile, which serves as another end member of the offshore setting. The station coverage is complete in the Ridgecrest area, where we have identified and picked first arrivals and sPL phases at local distances. In contrast, seismic stations in central Chile are only installed on the continent, and there we have identified and picked first arrivals and sPn phases at regional distances. The determined earthquakes have location accuracy comparable to the regional catalog in the horizontal plane, while the depth uncertainty is greatly reduced. Our study shows that incorporating depth phases into the earthquake location algorithm together with first arrivals can greatly increase earthquake location accuracy, especially in depth, which lays a solid foundation for a wide range of topics in Earth science studies.
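
As a much-simplified illustration of the depth and origin-time trade-off discussed above, the toy grid search below uses a homogeneous half-space with straight rays, a known epicentre, and P plus S first arrivals; it is a stand-in for, not a reproduction of, the authors' 3D Eikonal-based method with true depth phases.

```python
import numpy as np

rng = np.random.default_rng(4)
vp, vs = 6.0, 3.5                                    # km/s
station_x = np.array([-40.0, 30.0, 80.0])            # epicentral offsets, km (epicentre at x = 0)
true_depth, true_t0 = 12.0, 0.0

def tt(depth, phase_v):
    return np.sqrt(station_x**2 + depth**2) / phase_v

obs_p = true_t0 + tt(true_depth, vp) + rng.normal(0, 0.05, 3)
obs_s = true_t0 + tt(true_depth, vs) + rng.normal(0, 0.10, 3)

depths = np.linspace(0.0, 30.0, 121)
t0s = np.linspace(-3.0, 3.0, 121)

def misfit(use_s):
    m = np.empty((depths.size, t0s.size))
    for i, z in enumerate(depths):
        for j, t0 in enumerate(t0s):
            r = obs_p - (t0 + tt(z, vp))
            if use_s:
                r = np.concatenate([r, obs_s - (t0 + tt(z, vs))])
            m[i, j] = np.sum(r**2)
    return m

for use_s in (False, True):
    m = misfit(use_s)
    i, j = np.unravel_index(m.argmin(), m.shape)
    n_good = np.sum(m < m.min() + 0.1)    # size of the low-misfit valley, a crude uncertainty proxy
    print(f"P{'+S' if use_s else ' only'}: best depth {depths[i]:.1f} km, "
          f"t0 {t0s[j]:.2f} s, grid cells within tolerance: {n_good}")
```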

How to cite: Li, T., Chen, J., and Tong, P.: Locating earthquake hypocenter using first arrivals and depth phase in 3D model at local and regional distances, EGU General Assembly 2023, Vienna, Austria, 23–28 Apr 2023, EGU23-2190, https://doi.org/10.5194/egusphere-egu23-2190, 2023.