EGU21-16089
https://doi.org/10.5194/egusphere-egu21-16089
EGU General Assembly 2021
© Author(s) 2022. This work is distributed under
the Creative Commons Attribution 4.0 License.

Accelerating inverse problems in seismology using adjoint-based machine learning

Lars Gebraad, Sölvi Thrastarson, Andrea Zunino, and Andreas Fichtner
  • ETH Zürich, Institute of Geophysics, Zürich, Switzerland (larsgebraad@gmail.com)

Uncertainty quantification is an essential part of many studies in Earth science. It allows us, for example, to assess the quality of tomographic reconstructions, test hypotheses, and make physics-based risk assessments. In recent years there has been a surge in applications of uncertainty quantification to seismological inverse problems. This is mainly due to increasing computational power and the 'discovery' of optimal use cases for many algorithms (e.g., gradient-based Markov chain Monte Carlo, MCMC). Performing Bayesian inference with these methods allows seismologists to carry out advanced uncertainty quantification. Oftentimes, however, Bayesian inference remains prohibitively expensive due to large parameter spaces and computationally expensive physics.
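To make the role of gradients in such samplers concrete, the following is a minimal, self-contained sketch of one gradient-based MCMC method, the Metropolis-adjusted Langevin algorithm (MALA), applied to a toy Gaussian posterior. It is an illustration of the class of algorithms mentioned above, not the authors' implementation; all names and the target density are invented for the example.

```python
import numpy as np

def mala_sample(log_post, grad_log_post, x0, step=0.1, n_samples=2000, seed=0):
    """Metropolis-adjusted Langevin algorithm: a gradient-based MCMC sampler.

    Proposals drift along the gradient of the log-posterior, which is why
    cheap gradient information (e.g. from adjoints) is valuable for sampling.
    """
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    samples = []
    for _ in range(n_samples):
        # Langevin proposal: gradient drift plus Gaussian noise.
        mean_fwd = x + 0.5 * step * grad_log_post(x)
        prop = mean_fwd + np.sqrt(step) * rng.standard_normal(x.shape)
        # Metropolis correction for the asymmetric proposal density.
        mean_bwd = prop + 0.5 * step * grad_log_post(prop)
        log_q_fwd = -np.sum((prop - mean_fwd) ** 2) / (2 * step)
        log_q_bwd = -np.sum((x - mean_bwd) ** 2) / (2 * step)
        log_alpha = log_post(prop) - log_post(x) + log_q_bwd - log_q_fwd
        if np.log(rng.uniform()) < log_alpha:
            x = prop
        samples.append(x.copy())
    return np.array(samples)

# Toy posterior: standard 2-D Gaussian with analytic gradient.
log_post = lambda x: -0.5 * np.sum(x ** 2)
grad_log_post = lambda x: -x

samples = mala_sample(log_post, grad_log_post, np.zeros(2))
print(samples.mean(axis=0))  # close to [0, 0]
```

In realistic seismological settings, every call to `log_post` and `grad_log_post` hides one or more expensive wavefield simulations, which is what motivates the surrogate and adjoint strategies discussed below.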

Simultaneously, machine learning has found its way into parameter estimation in the geosciences. Recent works show that machine learning allows one both to accelerate repetitive inferences [e.g. Shahraeeni & Curtis 2011, Cao et al. 2020] and to speed up single-instance Monte Carlo algorithms using surrogate networks [Aleardi 2020]. These advances allow seismologists to use machine learning as a tool to bring accurate inference on the subsurface to scale.

In this work, we propose the novel inclusion of adjoint modelling in machine-learning-accelerated inverse problems. The aforementioned references train machine learning models on observations of the misfit function, with the aim of creating accelerated surrogate models of the misfit computation, which in turn allows this function and its gradients to be evaluated much faster. This approach, however, ignores that many physical models have an adjoint state, which allows the gradient to be computed with only one additional simulation.
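The cost argument behind adjoint gradients can be illustrated on a linear forward problem, where applying the transposed operator to the residual plays the role of the adjoint simulation. This is a hedged toy sketch (operator, data, and names are invented); for a least-squares misfit 0.5·||Gm − d||², one forward run plus one adjoint run yields the full gradient, whereas finite differences need one extra forward run per model parameter.

```python
import numpy as np

rng = np.random.default_rng(1)
G = rng.standard_normal((20, 5))   # forward operator (one "simulation" = G @ m)
d = rng.standard_normal(20)        # observed data

def misfit(m):
    return 0.5 * np.sum((G @ m - d) ** 2)

def adjoint_gradient(m):
    # One forward run plus one adjoint run (G.T applied to the residual)
    # gives the full gradient, independent of the number of parameters.
    residual = G @ m - d
    return G.T @ residual

m = rng.standard_normal(5)

# Finite differences instead need one extra forward simulation per parameter.
eps = 1e-6
fd_grad = np.array([(misfit(m + eps * np.eye(5)[i]) - misfit(m)) / eps
                    for i in range(5)])

print(np.allclose(adjoint_gradient(m), fd_grad, atol=1e-4))  # True
```

For a wave-equation forward model the same accounting holds: the adjoint simulation costs roughly one forward simulation, while the parameter count can reach millions.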

The inclusion of this information in gradient-based sampling yields performance gains both in training the surrogate and in sampling the true posterior. We show how machine learning models that approximate misfits and gradients, trained specifically using adjoint methods, accelerate various types of inversions and bring Bayesian inference to scale. Practically, the proposed method simply allows us to utilize information from previous MCMC samples in the algorithm's proposal step.
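Why adjoint gradients help train the surrogate can be sketched with a small least-squares example: each sampled model supplies not just one misfit value but also n gradient components, so every forward/adjoint pair contributes n + 1 training equations. The misfit, feature basis, and all names below are invented for illustration; a quadratic surrogate stands in for a neural network.

```python
import numpy as np

# Toy misfit (stand-in for an expensive forward model) and its
# adjoint-style gradient, available for one extra simulation's cost.
def misfit(m):
    return (m[0] - 1.0) ** 2 + 2.0 * (m[1] + 0.5) ** 2 + m[0] * m[1]

def misfit_grad(m):
    return np.array([2 * (m[0] - 1.0) + m[1],
                     4 * (m[1] + 0.5) + m[0]])

def features(m):
    # Quadratic surrogate basis: f(m) ~ w @ features(m).
    return np.array([m[0] ** 2, m[0] * m[1], m[1] ** 2, m[0], m[1], 1.0])

def feature_grads(m):
    # Gradient of each feature w.r.t. m[0] and m[1].
    return np.array([[2 * m[0], m[1], 0.0, 1.0, 0.0, 0.0],
                     [0.0, m[0], 2 * m[1], 0.0, 1.0, 0.0]])

# Each training point contributes one misfit row and two gradient rows:
# n_params + 1 equations per forward/adjoint pair instead of just one.
rng = np.random.default_rng(0)
points = rng.standard_normal((3, 2))
rows, targets = [], []
for m in points:
    rows.append(features(m)); targets.append(misfit(m))
    for gr, gt in zip(feature_grads(m), misfit_grad(m)):
        rows.append(gr); targets.append(gt)

w, *_ = np.linalg.lstsq(np.array(rows), np.array(targets), rcond=None)

m_test = np.array([0.3, -0.7])
print(np.isclose(features(m_test) @ w, misfit(m_test)))  # True
```

Three forward/adjoint pairs here pin down all six surrogate coefficients; with misfit values alone, six forward runs would be the minimum. The same value-plus-gradient loss carries over to neural-network surrogates.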

The proposed machinery applies in settings where models are run extensively and repetitively. Markov chain Monte Carlo algorithms, which may require millions of evaluations of the forward modelling equations, can be accelerated by off-loading these simulations to neural networks. This approach is also promising for tomographic monitoring, where experiments are performed repeatedly. Lastly, the efficiently trained neural networks can be used to learn a likelihood for a given dataset, to which different priors can subsequently be applied at low cost. We show examples of all these use cases.
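The last use case, reusing a learned likelihood under different priors, can be sketched as follows. The surrogate log-likelihood here is an invented analytic stand-in for a trained network, and the sampler is a plain Metropolis chain; the point is that the expensive forward model never appears once the surrogate is in place.

```python
import numpy as np

# A trained surrogate for the log-likelihood (here a cheap analytic
# stand-in, a Gaussian centred on m = 1.0; in practice a neural network).
surrogate_loglike = lambda m: -0.5 * ((m - 1.0) / 0.5) ** 2

def metropolis(log_prior, n=5000, step=0.5, seed=0):
    # Only the surrogate is called: no forward simulations in the loop.
    rng = np.random.default_rng(seed)
    m, chain = 0.0, []
    lp = surrogate_loglike(m) + log_prior(m)
    for _ in range(n):
        prop = m + step * rng.standard_normal()
        lp_prop = surrogate_loglike(prop) + log_prior(prop)
        if np.log(rng.uniform()) < lp_prop - lp:
            m, lp = prop, lp_prop
        chain.append(m)
    return np.array(chain)

# The same learned likelihood combined with two different priors.
wide = metropolis(lambda m: -0.5 * (m / 10.0) ** 2)
tight = metropolis(lambda m: -0.5 * ((m + 1.0) / 0.5) ** 2)
print(wide.mean(), tight.mean())  # the tight prior at -1 shifts the posterior
```

Because the likelihood is learned once, exploring prior sensitivity costs only cheap surrogate evaluations rather than repeated full inversions.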


Lars Gebraad, Christian Boehm and Andreas Fichtner, 2020: Bayesian Elastic Full‐Waveform Inversion Using Hamiltonian Monte Carlo.

Ruikun Cao, Stephanie Earp, Sjoerd A. L. de Ridder, Andrew Curtis, and Erica Galetti, 2020: Near-real-time near-surface 3D seismic velocity and uncertainty models by wavefield gradiometry and neural network inversion of ambient seismic noise.

Mohammad S. Shahraeeni and Andrew Curtis, 2011: Fast probabilistic nonlinear petrophysical inversion.

Mattia Aleardi, 2020: Combining discrete cosine transform and convolutional neural networks to speed up the Hamiltonian Monte Carlo inversion of pre‐stack seismic data.

How to cite: Gebraad, L., Thrastarson, S., Zunino, A., and Fichtner, A.: Accelerating inverse problems in seismology using adjoint-based machine learning, EGU General Assembly 2021, online, 19–30 Apr 2021, EGU21-16089, https://doi.org/10.5194/egusphere-egu21-16089, 2021.
