SM2.3 | Challenges and Opportunities for Machine Learning in Solid Earth Geophysics
Convener: Jannes Münchmeyer | Co-conveners: Sophie Giffard-Roisin, Fabio Corbi, Chris Marone
Orals | Tue, 16 Apr, 16:15–17:55 (CEST) | Room D3
Posters on site | Attendance Wed, 17 Apr, 10:45–12:30 (CEST) | Display Wed, 17 Apr, 08:30–12:30 | Hall X1
Within the last decade, machine learning has established itself as an indispensable tool across many disciplines of solid earth geophysics. Remote sensing, seismic data exploration, and laboratory data analysis are only a few fields in which novel machine learning tools have already enabled substantially improved workflows and new discoveries. At the same time, it has become apparent that machine learning is not a silver bullet. While machine learning has illuminated new directions and deeper understanding in several areas, some studies applying machine learning have not achieved improvements over classical workflows, suggesting some problems might not be accessible with current machine learning methods.

In this session, we want to trace out the frontiers of machine learning in geophysics. What is the state of the art, and what are the obstacles preventing the application of machine learning or further improvements of existing methods? At the same time, we want to discuss how novel machine learning methods impact scientific progress. Which discoveries have already been enabled by the current state of the art, and what would be required to further advance science? To answer these questions, we aim to bring together machine learning experts and practitioners with an interest in machine learning from all disciplines of solid earth geophysics.

Orals: Tue, 16 Apr | Room D3

Chairpersons: Jannes Münchmeyer, Sophie Giffard-Roisin, Fabio Corbi
16:15–16:35
|
EGU24-8924
|
solicited
|
On-site presentation
Léonard Seydoux, René Steinmann, Sarah Mouaoued, Reza Esfahani, and Michel Campillo

Exploring large datasets of continuous seismic data is a challenging task. When targeting signals of interest with good a priori knowledge of the signal's properties, it is possible to design a dedicated processing pipeline (earthquake detection, noise reduction, etc.). Many other sources can show up in the data, with characteristics that differ from the targeted ones (changes in noise frequency, modulating signals, etc.). In this case, it is difficult to design a processing pipeline that is robust to all possible sources. In this work, we propose to use unsupervised learning to explore the data and reveal patterns in an interpretable way. We extract relevant features of continuous seismic data with a deep scattering network (a deep convolutional neural network with interpretable feature maps) and experiment with various classical machine learning tools (clustering, dimensionality reduction, etc.) to reveal and interpret patterns in the data. We apply this method to various cases, including a decade of continuous data from the region of Guerrero, Mexico, and interpret the results in terms of seismicity and external datasets.
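The workflow the authors describe (interpretable feature extraction followed by classical tools such as dimensionality reduction and clustering) can be sketched with off-the-shelf components. The sketch below is illustrative, not the authors' code: it substitutes a time-averaged log-spectrogram for the scattering-network features and applies PCA plus k-means to synthetic segments.

```python
import numpy as np
from scipy.signal import spectrogram
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Synthetic "continuous data": noise segments, half with a 20 Hz burst.
fs = 100.0
segments, labels_true = [], []
for i in range(60):
    x = rng.normal(size=1000)
    if i % 2 == 0:  # inject a transient signal into half the segments
        t = np.arange(200) / fs
        x[400:600] += 5 * np.sin(2 * np.pi * 20 * t)
    segments.append(x)
    labels_true.append(i % 2)

# Feature extraction: time-averaged log-spectrogram per segment
# (a simple stand-in for scattering-network coefficients).
feats = []
for x in segments:
    f, t, S = spectrogram(x, fs=fs, nperseg=128)
    feats.append(np.log10(S.mean(axis=1) + 1e-12))
feats = np.array(feats)

# Dimensionality reduction and clustering, as in the abstract's workflow.
emb = PCA(n_components=2).fit_transform(feats)
clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(emb)

# The two clusters should separate burst-bearing from pure-noise segments.
agreement = max(np.mean(clusters == labels_true),
                np.mean(clusters != labels_true))
print(f"cluster/label agreement: {agreement:.2f}")
```

In practice the interpretability comes from inspecting which feature dimensions (frequency bands, scattering paths) drive the cluster separation, rather than from the cluster labels alone.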

How to cite: Seydoux, L., Steinmann, R., Mouaoued, S., Esfahani, R., and Campillo, M.: Revealing and interpreting patterns from continuous seismic data with unsupervised learning, EGU General Assembly 2024, Vienna, Austria, 14–19 Apr 2024, EGU24-8924, https://doi.org/10.5194/egusphere-egu24-8924, 2024.

16:35–16:45
|
EGU24-15571
|
ECS
|
Virtual presentation
Joachim Rimpot, Clément Hibert, Jean-Philippe Malet, Germain Forestier, and Jonathan Weber

Continuous seismological datasets offer insights into the dynamics of many geological structures (such as landslides, ice glaciers, and volcanoes) in relation to various forcing factors (meteorological, climatic, tectonic, anthropogenic). Recently, the emergence of dense seismic station networks has provided opportunities to document these phenomena, but it has also introduced challenges for seismologists due to the vast amount of data generated, requiring more sophisticated and automated data analysis techniques. To tackle this challenge, supervised machine learning demonstrates promising performance; however, it necessitates the creation of training catalogs, a process that is both time-consuming and subject to biases, including the pre-detection of events and subjectivity in labeling. To address these biases, manage large data volumes, and discover hidden signals in the datasets, we introduce a Self-Supervised Learning (SSL) approach for the unsupervised clustering of continuous seismic data. The method uses Siamese deep neural networks to learn from the initial data. The SSL model works by increasing the similarity between pairs of images corresponding to several representations (seismic traces, spectrograms) of the seismic data. The images are positioned in a 512-dimensional space where similar events are grouped together. We then identify groups of events using clustering algorithms, either centroid-based or density-based.

The processing technique is applied to two dense arrays of continuous seismological data acquired at the Marie-sur-Tinée landslide and the Pas-de-Chauvet rock glacier, both located in the southern French Alps. Both datasets include over a month of continuous data from more than 50 stations. The processing technique is applied to the continuous data streams from either a single station or the whole station network. The clustering products show a high number of distinct clusters that can potentially be attributed to different types of sources. This includes the anticipated main types of seismicity observed in these contexts: earthquakes, rockfalls, and natural and anthropogenic noise, as well as potentially yet unknown sources. Our SSL-based clustering approach streamlines the exploration of large datasets, allowing more time for detailed analysis of the mechanisms and processes active in these geological structures.
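The core SSL mechanic described above (increasing the similarity between paired representations of the same event) can be illustrated with a minimal numpy sketch: an InfoNCE-style contrastive loss over two noisy 512-dimensional "views" standing in for the Siamese-network embeddings. All names, dimensions, and noise levels are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def cosine_sim(a, b):
    """Cosine similarity between rows of a and rows of b."""
    a = a / np.linalg.norm(a, axis=1, keepdims=True)
    b = b / np.linalg.norm(b, axis=1, keepdims=True)
    return a @ b.T

def contrastive_loss(z1, z2, temperature=0.1):
    """InfoNCE-style loss: pull matched pairs (z1[i], z2[i]) together,
    push mismatched pairs apart."""
    logits = cosine_sim(z1, z2) / temperature
    # softmax cross-entropy with the diagonal as the positive class
    logits -= logits.max(axis=1, keepdims=True)
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))

rng = np.random.default_rng(1)
# Two "views" of the same 8 events in a 512-dimensional embedding space,
# mimicking the pairs of representations described in the abstract.
base = rng.normal(size=(8, 512))
view1 = base + 0.05 * rng.normal(size=base.shape)
view2 = base + 0.05 * rng.normal(size=base.shape)

aligned = contrastive_loss(view1, view2)
shuffled = contrastive_loss(view1, rng.permutation(view2))
print(aligned, shuffled)  # aligned pairs yield a much lower loss
```

Minimizing such a loss over trace/spectrogram pairs is what drives similar events to neighboring positions in the embedding space, where centroid- or density-based clustering can then pick out groups.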

How to cite: Rimpot, J., Hibert, C., Malet, J.-P., Forestier, G., and Weber, J.: Self-Supervised Learning Strategies for Clustering Continuous Seismic Data, EGU General Assembly 2024, Vienna, Austria, 14–19 Apr 2024, EGU24-15571, https://doi.org/10.5194/egusphere-egu24-15571, 2024.

16:45–16:55
|
EGU24-16930
|
ECS
|
On-site presentation
Alexandra Renouard, Peter Stafford, Saskia Goes, Alexander Whittaker, and Stephen Hicks

The intelligible understanding of natural phenomena such as earthquakes is one of the main epistemic aims of science. These aims are shaped by technological discoveries that can change the cognitive fabric of a research field. Artificial intelligence, of which machine learning (ML) is one of the fundamental pillars, is the cutting-edge technology that promises the greatest scientific breakthroughs. In particular, great hopes are placed in ML models as a source of inspiration for the formulation of new concepts or ideas, thanks to their ability to represent data at levels of abstraction inaccessible to humans alone.
However, the opacity of ML models is a major obstacle to their explanatory potential. Although efforts have recently been made to develop ML interpretability methods that condense the complexity of ML models into human-understandable descriptions of how they work and make decisions, their epistemic success remains highly controversial. Because they are based on approximations of ML models, these methods can generate misleading explanations that are overfitted to human intuition and give an illusory sense of scientific understanding.
In this study, we address the question of how to limit the epistemic failure of ML models. To answer it, we use the example of an ML model trained to provide insights into how to better forecast newly emerging earthquakes associated with the expansion of hydrocarbon production in the Delaware Basin, West Texas. Through this example, we show that by changing our conception of the explanation models derived from interpretability methods, i.e. treating them as idealised scientific models rather than simple rationalisations, we open up the possibility of revealing promising hypotheses that would otherwise have been ignored. Analysis of our interpreted ML model unveiled a meaningful linear relationship between stress-perturbation distribution values derived from ML decision rules and earthquake probability, which could be further explored to forecast induced seismicity in the basin and beyond. This observation also helped to validate the ML model for a subsequent causal approach to the factors underlying earthquakes.

How to cite: Renouard, A., Stafford, P., Goes, S., Whittaker, A., and Hicks, S.: How to Limit the Epistemic Failure of Machine Learning Models?, EGU General Assembly 2024, Vienna, Austria, 14–19 Apr 2024, EGU24-16930, https://doi.org/10.5194/egusphere-egu24-16930, 2024.

16:55–17:05
|
EGU24-12197
|
On-site presentation
Fabrice Cotton, Reza Esfahani, and Henning Lilienkamp

The exponential growth of seismological data and machine learning methods offers new perspectives for analysing the factors controlling seismic ground motions and for predicting earthquake shaking for earthquake engineering. However, the first models (e.g. Derras et al., 2012) using "simple" neural networks to predict seismic motions did not convince the earthquake engineering community, which continued to use more conventional models. We analyse the weaknesses (from the perspective of engineering seismology) of this first generation of ML-based ground motion models and explain why it did not provide sufficient added value compared to conventional models. Based on this experience, we propose two evolutions and new methods that have advantages over conventional methods and therefore greater potential. A first class of models (e.g. Lilienkamp et al., 2022), based on a U-Net neural network, predicts spatial variations in seismic motions (e.g. site effects in three-dimensional basins) by considering seismic motions in map form. A second class of approaches combines AI methods (conditional generative adversarial networks, Esfahani et al., 2023) and hybrid databases (observations and simulations selected for their complementarity) to train simulation models capable of generating not only a few parameters describing ground motions (e.g. PGA) but full acceleration time histories. We will discuss the potential advantages of this new generation of ML-based methods compared to conventional methods, but also the challenges (and proposed solutions) in best combining simulations and observations and in calibrating both the best estimate and the variability of future ground motions.

How to cite: Cotton, F., Esfahani, R., and Lilienkamp, H.: Failures, successes and challenges of machine-learning-based engineering ground-motion models, EGU General Assembly 2024, Vienna, Austria, 14–19 Apr 2024, EGU24-12197, https://doi.org/10.5194/egusphere-egu24-12197, 2024.

17:05–17:15
|
EGU24-12438
|
ECS
|
On-site presentation
Exploring the Potential of Deep Learning models in Focal Mechanism Computation: A Case Study from Switzerland
(withdrawn)
Maria Mesimeri, Dario Jozinovic, Men-Andrin Meier, Tobias Diehl, John Clinton, and Stefan Wiemer
17:15–17:25
|
EGU24-14239
|
ECS
|
On-site presentation
Laura Laurenti, Christopher Johnson, Elisa Tinti, Fabio Galasso, Paul Johnson, and Chris Marone

Earthquake forecasting and prediction are seeing advances in short-term early warning systems, in the hazard assessment of natural and human-induced seismicity, and in the prediction of laboratory earthquakes.

In laboratory settings, frictional stick-slip events serve as an analog for the complete seismic cycle. These experiments have been pivotal in understanding the initiation of failure and the dynamics of earthquake rupture. Additionally, lab earthquakes present optimal opportunities for the application of machine learning (ML) techniques, as they can be generated in long sequences and with variable seismic cycles under controlled conditions. Indeed, recent ML studies demonstrate the predictability of labquakes from acoustic emissions (AE). In particular, the time to failure (TTF), defined as the time remaining before the next main labquake and retrieved from the recorded shear stress, has been predicted for the main lab event using simple AE features such as the variance.

A step forward from the state of the art is the prediction of the TTF using raw AE waveforms. Here we use deep learning (DL) to predict from raw AE time series not only the TTF of the mainshock but also the TTF of all labquakes, foreshocks or aftershocks, above a certain amplitude. This is an important finding for several reasons, mainly: 1) we can predict the TTF using traces that contain no earthquakes (only noise); 2) we can improve our knowledge of the seismic cycle by also predicting the TTF of foreshocks and aftershocks.

This work is promising and opens new opportunities for the study of natural earthquakes by analyzing the continuous raw seismogram alone. In general, laboratory studies underscore the significance of subtle deformation signals and intricate patterns emanating from slipping and/or locked faults before major earthquakes. Insights gained from laboratory experiments, coupled with the exponential growth in seismic data recordings worldwide, are ushering in a new era of earthquake comprehension.

How to cite: Laurenti, L., Johnson, C., Tinti, E., Galasso, F., Johnson, P., and Marone, C.: Deep learning to predict time to failure of lab foreshocks and earthquakes from fault zone raw acoustic emissions, EGU General Assembly 2024, Vienna, Austria, 14–19 Apr 2024, EGU24-14239, https://doi.org/10.5194/egusphere-egu24-14239, 2024.

17:25–17:35
|
EGU24-12357
|
ECS
|
On-site presentation
Alexander Bauer, Jan Walda, and Conny Hammer

Seismic waveforms of teleseismic earthquakes are highly complex, since they are a superposition of numerous phases that correspond to different wave types and propagation paths. In addition, measured waveforms contain noise contributions from the surroundings of the measuring station. The regional distribution of seismological stations is often relatively sparse, in particular in regions with low seismic hazard such as Northern Germany. However, detailed knowledge of the seismic wavefield generated by large earthquakes can be crucial for highly precise measurements or experiments carried out, for instance, in the field of particle physics, where seismic wavefields are considered noise. While synthetic waveforms for cataloged earthquakes can be computed for any point on the Earth's surface, they are based on a highly simplified Earth model. As a first step towards the prediction of a dense seismic wavefield in a region with sparsely distributed stations, we propose to train a convolutional neural network (CNN) to predict measured waveforms of large earthquakes from their synthetic counterparts. For that purpose, we compute synthetic waveforms for numerous large earthquakes of the past years with the IRIS synthetics engine (Syngine) and use the corresponding actual measurements from stations in Northern Germany as labels. Subsequently, we test the performance of the trained neural network on events not part of the training data. The promising results suggest that the neural network is able to largely translate the synthetic waveforms into the more complex measured ones. This indicates a means to overcome the limited complexity of the Earth model underlying the synthetic waveform computation and paves the way for a large-scale prediction of the seismic wavefield generated by earthquakes.

How to cite: Bauer, A., Walda, J., and Hammer, C.: Deep learning prediction of measured earthquake waveforms from synthetic data, EGU General Assembly 2024, Vienna, Austria, 14–19 Apr 2024, EGU24-12357, https://doi.org/10.5194/egusphere-egu24-12357, 2024.

17:35–17:45
|
EGU24-10219
|
ECS
|
Virtual presentation
Tong Zhou

Objectives and Scope:

Deep learning's efficacy in seismic interpretation, including denoising, horizon or fault detection, and lithology prediction, hinges on the quality of the training dataset. Acquiring high-quality seismic data is challenging due to confidentiality constraints, and alternative approaches such as using synthetic or augmented data often fail to adequately capture realistic wavefield variations, ambient noise, and complex multipathing effects such as multiples. We introduce an innovative seismic data augmentation method that incorporates realistic geostatistical features and synthetic multiples, enhancing the training and transferability of deep neural networks across multiple seismic applications.

Methods and Procedures:

Our method comprises two primary steps: (1) creating augmented impedance models from existing seismic images and well logs, and (2) simulating seismic data from these models. The first step merges Image-Guided Interpolation (IGI, Hale et al., 2010) and Sequential Gaussian Simulation (SGS) to generate models that retain the original structural features of the input seismic image while introducing random small-scale features aligned with the geostatistical properties of the input seismic data. The second step employs the reflectivity forward-modeling method (Kennett, 1984) to simulate both primary and multiple seismic data trace-by-trace. This approach, summing up internal multiples to infinite order, effectively reproduces the full properties of reflection wavefields, which is a good approximation in areas without rapidly changing structures.
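A much-simplified stand-in for the stochastic-perturbation step (not the actual IGI/SGS implementation) is to add band-limited Gaussian noise with a chosen correlation length to an impedance model. The sketch below, with invented model dimensions and amplitudes, illustrates the idea of generating realizations that keep large-scale structure while varying small-scale detail.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(0)

# A simple 2D "impedance model" (depth x trace) with one dipping interface.
nz, nx = 128, 128
z = np.arange(nz)[:, None]
interface = 60 + 0.2 * np.arange(nx)[None, :]
model = np.where(z < interface, 4000.0, 6000.0)

def augment(model, corr_len=5.0, amplitude=0.03, rng=rng):
    """Add band-limited random perturbations with a chosen correlation
    length -- a crude stand-in for sequential Gaussian simulation."""
    noise = rng.normal(size=model.shape)
    noise = gaussian_filter(noise, sigma=corr_len)
    noise /= noise.std()                  # unit-variance correlated field
    return model * (1.0 + amplitude * noise)

# Each realization keeps the large-scale structure but differs in detail.
realizations = [augment(model) for _ in range(4)]
for r in realizations:
    print(np.abs(r - model).mean() / model.mean())
```

A full SGS implementation would instead condition each simulated value on well-log data and a variogram estimated from the input seismic cube, which is what ties the perturbations to the data's actual geostatistics.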

Results and Observations:

Our numerical tests validate the method's effectiveness. The IGI technique interpolates well-log data into gridded velocity models, maintaining seismic horizons and smoothing fault features. The SGS method then generates stochastic velocity model realizations that preserve the geostatistical distribution of the input seismic data. Reflectivity forward modeling then successfully distinguishes between multiples and primaries, facilitating the creation of nuanced training datasets and labels.

Further tests involve training two Transformer-based seismic fault detection neural networks: one with conventional data lacking multiples and another with our augmented data incorporating multiples. While both networks exhibit similar validation performance, their generalization capabilities differ markedly. The network trained with conventional data shows reduced accuracy and fault detection reliability on synthetic field data. In contrast, the network trained with our augmented data demonstrates better precision, accuracy, and recall on the same dataset.

Significance and Novelty:

Our approach generates augmented seismic data that retains the original seismic cubes' and well logs' geostatistical features and multiples, crucial for training deep learning models with high transferability for various seismological tasks. This method's novelty lies in its consideration of geostatistical characteristics, wavelet fluctuations, and multiples. The resulting data is more complex, varied, and realistic compared to conventional augmentation methods. Neural networks trained on this data exhibit enhanced transferability over those trained with traditional synthetic data incorporating only random noise. This advancement represents a significant leap in seismic data processing and interpretation, particularly for deep learning applications in geophysics.

How to cite: Zhou, T.: Seismic data augmentation with realistic geostatistical features and synthetic multiples for multi deep learning tasks, EGU General Assembly 2024, Vienna, Austria, 14–19 Apr 2024, EGU24-10219, https://doi.org/10.5194/egusphere-egu24-10219, 2024.

17:45–17:55
|
EGU24-9621
|
ECS
|
Virtual presentation
Jing Hu

The use of P-wave receiver function and surface wave dispersion data is crucial in exploring the structure of the Earth's crust and upper mantle. Typically, to address the ambiguity resulting from using a single type of dataset for inversion, these two types of seismic data, which have different sensitivities to shear wave velocity structure, are jointly inverted to achieve a detailed velocity structure. However, methods that rely on a linearized iterative joint inversion approach depend on the initial model selection, while non-linear joint inversion frameworks based on model parameter space search are computationally intensive. To address these challenges, this study suggests employing a deep learning strategy for the joint inversion of P-wave receiver function and surface wave dispersion data. Two distinct neural networks are developed to extract features from the P-wave receiver function and surface wave dispersion data, and different loss functions are tested to train the proposed neural network. The proposed method has been applied to actual seismic data from South China, and the results are comparable to those obtained by jointly inverting body wave first travel-time, P-wave receiver function, and surface wave dispersion data.

How to cite: Hu, J.: Joint inversion of P-wave receiver function and surface wave dispersion data based on deep learning, EGU General Assembly 2024, Vienna, Austria, 14–19 Apr 2024, EGU24-9621, https://doi.org/10.5194/egusphere-egu24-9621, 2024.

Posters on site: Wed, 17 Apr, 10:45–12:30 | Hall X1

Display time: Wed, 17 Apr 08:30–Wed, 17 Apr 12:30
Chairpersons: Jannes Münchmeyer, Chris Marone, Sophie Giffard-Roisin
X1.79
|
EGU24-4154
|
ECS
Bolin Li, Sjoerd de Ridder, and Andy Nowacki

Distributed acoustic sensing (DAS), a technology with great potential for subsurface monitoring and imaging, has come to be regarded as a preeminent instrument for vibration measurements. In light of the tremendous amount of seismic data, the numerous channels, and elevated noise levels, it becomes imperative to develop an appropriate denoising procedure that is compatible with DAS data. In this regard, unsupervised deep learning with data clustering generally exhibits superior performance in facilitating the efficient analysis of sizable unlabeled data sets, free of human bias. In addition, the clustering method is capable of detecting seismic waves, microseismic disturbances, and even unidentified new types of small seismic events, in contrast to a number of conventional denoising techniques. While current approaches reliant on f-k analysis remain valuable, they fail to fully exploit the information present in the wavefield due to their inability to identify the characteristic moveout observed in seismic data. In order to denoise DAS data more effectively, we investigate the capacity of the curvelet transform to extend existing deep scattering network methodologies. In this paper, we propose a novel clustering approach for the denoising of DAS data that utilises the Gaussian mixture model (GMM), the curvelet transform, and unsupervised deep learning.

The DAS data are initially subjected to the curvelet transform in order to derive the curvelet coefficients at various scales and orientations, which can be regarded as the first layer of extracted features. Following this, a deeper layer of features is obtained by applying the curvelet transform to the coefficients in the first layer. This process continues until the depth of the layer satisfies the algorithm-determined expectation. By concatenating the curvelet coefficients from each layer, the features of the original DAS data are generated. Afterwards, the features are reduced to two dimensions using principal component analysis (PCA), which simplifies their interpretation by projecting the high-dimensional features onto two principal components and facilitates their clustering by the GMM to obtain the final clustered results.

This methodology operates without the need for labels on the DAS data and is highly appropriate for managing the substantial data volume and numerous channels of DAS. We used a variety of approaches, such as the Bayesian information criterion and silhouette analysis, to determine the optimal number of clusters in the GMM and to evaluate the algorithm's clustering performance. We demonstrate the method on downhole data acquired during stimulation of the Utah FORGE enhanced geothermal system, and the results appear quite satisfactory, indicating that the method can be used effectively to denoise DAS signals.
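The final model-selection step described here (choosing the number of GMM clusters by the Bayesian information criterion after PCA) can be sketched as follows; the features, dimensions, and cluster counts are illustrative stand-ins for the multi-scale curvelet coefficients.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

# Stand-in for concatenated multi-scale transform coefficients:
# three well-separated groups of high-dimensional feature vectors.
centers = rng.normal(scale=10.0, size=(3, 64))
feats = np.vstack([c + rng.normal(size=(100, 64)) for c in centers])

# Project the high-dimensional features onto two principal components.
emb = PCA(n_components=2).fit_transform(feats)

# Model selection: fit GMMs with 1..6 components and keep the BIC minimum.
bics = []
for k in range(1, 7):
    gmm = GaussianMixture(n_components=k, random_state=0).fit(emb)
    bics.append(gmm.bic(emb))
best_k = int(np.argmin(bics)) + 1
print(f"BIC selects {best_k} clusters")
```

Silhouette analysis can be run alongside BIC (e.g. `sklearn.metrics.silhouette_score` on the GMM labels) as an independent check on the chosen cluster count.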

How to cite: Li, B., de Ridder, S., and Nowacki, A.: Clustering distributed acoustic sensing signals via curvelet transform and unsupervised deep learning, EGU General Assembly 2024, Vienna, Austria, 14–19 Apr 2024, EGU24-4154, https://doi.org/10.5194/egusphere-egu24-4154, 2024.

X1.80
|
EGU24-5031
|
ECS
I-Hsin Chang, Chun-Ming Huang, and Hao Kuo-Chen

Responding to the challenges posed by ever-increasing seismic data, our study leverages deep learning to enhance the automation and efficiency of seismic data processing. Recognizing Taiwan's unique geological structure, we have developed deep learning models using data from dense seismic arrays deployed since 2018. We have integrated the Transformer model with GAN training techniques for phase picking. Our latest system, SeisBlue, has evolved from phase picking and earthquake location to include magnitude and focal mechanism estimation, primarily using SeisPolar, a CNN model for P-wave polarity classification that is crucial for focal mechanism analysis. Additionally, our redesign of the seismic monitoring process emphasizes data pipelines and integrates software engineering technologies, including hardware, system environment, database, data pipelines, model version control, task monitoring, data visualization, and Web UI interaction. The model shows high proficiency in identifying P-wave polarity and deciphering focal mechanisms, with an accuracy of 85%, and precision and recall rates for the three categories [positive, negative, undecidable] of [87%, 77%, 53%] and [84%, 80%, 54%], respectively. Notably, about 70% of solutions achieve a Kagan angle under 40 degrees in the focal mechanism analysis. This semi-automated workflow, from data processing to phase picking, earthquake location, magnitude determination, focal mechanism estimation, and Web UI, significantly boosts the efficiency and accuracy of seismic monitoring. It enables quicker and more meaningful engagement for researchers in subsequent studies, marking a notable advancement in seismic monitoring and deep learning applications.
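Per-class precision and recall figures like those quoted for [positive, negative, undecidable] are derived from a confusion matrix over the three polarity classes. A minimal sketch with toy labels (not the SeisPolar data) shows the computation:

```python
import numpy as np
from sklearn.metrics import confusion_matrix, precision_recall_fscore_support

classes = ["positive", "negative", "undecidable"]
rng = np.random.default_rng(0)

# Toy ground-truth polarity labels and noisy "model" predictions:
# 20% of picks are replaced by a random class.
y_true = rng.choice(3, size=1000, p=[0.45, 0.40, 0.15])
flip = rng.random(1000) < 0.2
y_pred = np.where(flip, rng.choice(3, size=1000), y_true)

prec, rec, _, _ = precision_recall_fscore_support(
    y_true, y_pred, labels=[0, 1, 2])
for name, p, r in zip(classes, prec, rec):
    print(f"{name}: precision={p:.2f} recall={r:.2f}")
print(confusion_matrix(y_true, y_pred))
```

The pattern in the abstract, where the minority "undecidable" class scores markedly lower, is typical: with fewer true members, the same per-sample error rate translates into lower precision and recall for that class.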

How to cite: Chang, I.-H., Huang, C.-M., and Kuo-Chen, H.: SeisPolar: Seismic Wave Polarity Module for the SeisBlue Deep Learning Seismology Platform, EGU General Assembly 2024, Vienna, Austria, 14–19 Apr 2024, EGU24-5031, https://doi.org/10.5194/egusphere-egu24-5031, 2024.

X1.81
|
EGU24-5148
|
ECS
Flavia Tavani, Pietro Artale Harris, Laura Scognamiglio, and Men-Andrin Meier

One of the main tasks in seismology is source characterization after an earthquake, in particular estimating the orientation of the fault on which the earthquake occurred and the direction of slip. Currently, most seismological observatories compute moment tensor solutions for earthquakes above a certain magnitude threshold; but for small to moderate earthquakes (e.g. aftershock sequences), or for large events close in time, focal mechanisms from first-arrival polarities are often the only source information available (Sarao et al., 2021).

Focal mechanisms are important to better define the activated faults, to help understand the seismotectonic process, and to improve predicted ground shaking for early warning, tsunami alerts, and seismic hazard assessment. For these purposes, it becomes essential to produce and disseminate estimates of earthquake source parameters even for small events. Recently, machine learning techniques have gained significant attention and usage in various fields, including seismology, where these algorithms have emerged as powerful tools providing new insight into earthquake data analysis, such as the prediction of the first-arrival polarities of seismic waves, which can be used to compute focal mechanisms.

We present here a workflow developed to obtain earthquake focal mechanisms starting from the first P-wave polarities estimated through the method proposed by Ross et al. (2018).

Our procedure consists of two stages. In the first stage, we use a combination of the available INGV web services (Bono et al., 2021) and ObsPy functions to download the earthquake hypocentral location. We recover the waveforms recorded by stations in the 0–120 km distance range and create an input file with the information required for predicting the polarity of each waveform. We then use the convolutional neural network (CNN) proposed by Ross et al. (2018) to obtain the polarity of each waveform, which can be UP, DOWN, or UNKNOWN. The second stage uses the predicted polarities to determine the focal mechanisms of the selected earthquakes. To do this, we use a modern Python implementation of the HASH code (originally written in Fortran by Hardebeck et al. 2002, 2003) called SKHASH (Skoumal et al., submitted). Finally, we present an application of this procedure to the September 2023 Marradi (Central Italy) seismic sequence, characterized by a magnitude Mw 4.9 mainshock followed by over 70 aftershocks in the magnitude range 2–3.4. Here, we focus on estimating focal mechanisms for events down to M 2.0. The application of the presented workflow yields useful information about the kinematics of the earthquakes in the sequence, and thus a more precise characterization of the activated structures.
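HASH-style codes search a grid of candidate mechanisms and score the agreement between observed and predicted first-motion polarities. The toy sketch below captures only that scoring step: the predicted sign vectors are random placeholders, whereas in a real search they would come from the double-couple radiation pattern evaluated at each station's take-off angle and azimuth, and the winning candidate (index 42 here) is planted by construction.

```python
import numpy as np

rng = np.random.default_rng(0)

n_stations, n_candidates = 30, 200
# Predicted first-motion signs (+1/-1) at each station for each candidate
# mechanism; placeholders for radiation-pattern predictions.
predicted = rng.choice([-1, 1], size=(n_candidates, n_stations))

# Observed polarities: candidate 42's pattern with two mispicks, and some
# stations left UNKNOWN (0), as in the CNN's output classes.
observed = predicted[42].copy()
observed[:2] *= -1            # two wrong picks
observed[25:] = 0             # five undecidable stations

# Score each candidate by its polarity agreement over usable stations.
usable = observed != 0
scores = (predicted[:, usable] == observed[usable]).mean(axis=1)
best = int(np.argmax(scores))
print(f"best candidate: {best}, polarity agreement: {scores[best]:.2f}")
```

Real implementations additionally perturb hypocenters and velocity models to propagate take-off-angle uncertainty, and report the spread of acceptable mechanisms rather than a single best fit.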

How to cite: Tavani, F., Artale Harris, P., Scognamiglio, L., and Meier, M.-A.: Toward a Polarity Focal Mechanism Estimation via Deep Learning for small to moderate Italian earthquakes, EGU General Assembly 2024, Vienna, Austria, 14–19 Apr 2024, EGU24-5148, https://doi.org/10.5194/egusphere-egu24-5148, 2024.

X1.82
|
EGU24-6072
|
ECS
Jan Premus and Jean-Paul Ampuero

Dynamic source inversion of earthquakes consists of inferring frictional parameters and initial stress on a fault consistent with co-seismic seismological and geodetic data and dynamic earthquake rupture models. In a Bayesian inversion approach, the nonlinear relationship between model parameters and data (e.g. seismograms) requires a computationally demanding Monte Carlo (MC) approach. As the computational cost of the MC method grows exponentially with the number of parameters, dynamic inversion of a large earthquake, involving hundreds to thousands parameters, shows problems with convergence and sampling. We introduce a novel multi-stage approach to dynamic inversions. We divide the earthquake rupture into several successive temporal (e.g. 0-10 s, 10-20 s, …) and spatial stages (e.g., 100 km, 200 km, …). As each stage requires only a limited number of independent model parameters, their inversion converges relatively fast. Stages are interdependent: earlier stage inversion results are a prior for a later stage inversion. Our main advancement is the use of Generative Adversarial Networks (GAN) to transfer the prior information between inversion stages, inspired by Patel and Oberai (2019). GAN are a class of machine learning algorithms originally used for generating images similar to the training dataset. Their unsupervised training is based on a contest between a generator that generates new samples and a critic that discriminates between training and generator’s images. The resulting generator should generate synthetic images/samples with noise in a low-dimensional latent space as an input. We train GANs on samples of dynamic parameters from an earlier stage of the inversion and use the GAN to suggest the dynamic parameters in a later stage of inversion. We show a proof of concept dynamic inversion of a synthetic benchmark, comparing performance of direct MC dynamic inversion with parallel tempering with our GAN approach. 
We efficiently handle large ruptures by adopting a 2.5D approximation that solves for source properties averaged across the rupture depth. The 2.5D modeling approach accounts for the 3D effect of the finite rupture depth while keeping the computational cost the same as in 2D dynamic rupture simulations. Additionally, we present current results on the dynamic inversion of the 2023 Mw 7.8 Kahramanmaraş, Turkey, earthquake.
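The stage-to-stage prior transfer can be illustrated with a minimal sketch. The abstract trains a GAN for this step; here a Gaussian mixture (scikit-learn) stands in for the GAN generator, and all names and parameter values (`stage1_samples`, `prior_model`, the mock distribution) are hypothetical:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

# Stage-1 posterior samples of two hypothetical dynamic parameters
# (e.g. initial stress and frictional strength), mocked as a correlated cloud.
stage1_samples = rng.multivariate_normal(
    mean=[10.0, 0.6], cov=[[1.0, 0.3], [0.3, 0.05]], size=2000
)

# Fit a generative model to the stage-1 posterior.  The paper uses a GAN here;
# a Gaussian mixture is a simple stand-in with the same role: a generator
# mapping low-dimensional noise to plausible parameter sets.
prior_model = GaussianMixture(n_components=3, random_state=0).fit(stage1_samples)

# Stage 2: draw proposal parameters from the learned prior instead of a broad
# uninformed prior, which is what accelerates the later-stage MC sampling.
proposals, _ = prior_model.sample(500)

# Score a candidate parameter vector under the learned prior (log-density),
# usable as the prior term in a Bayesian acceptance rule.
log_prior = prior_model.score_samples(np.array([[10.0, 0.6]]))
print(proposals.mean(axis=0), float(log_prior[0]))
```

In the actual method, the trained GAN generator plays the role of `prior_model.sample`, mapping latent-space noise to dynamic parameter sets consistent with the earlier stage's posterior.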

How to cite: Premus, J. and Ampuero, J.-P.: Dynamic earthquake source inversion with GAN priors, with application to the 2023 Mw 7.8 Kahramanmaraş, Turkey earthquake, EGU General Assembly 2024, Vienna, Austria, 14–19 Apr 2024, EGU24-6072, https://doi.org/10.5194/egusphere-egu24-6072, 2024.

X1.83
|
EGU24-7309
Sheng-Yan Pan, Wei-Fang Sun, Chun-Ming Huang, and Hao Kuo-Chen

SeisBlue, a deep-learning-based earthquake monitoring system, is one solution for handling massive continuous waveform data and creating earthquake catalogs. The SeisBlue workflow comprises waveform data preprocessing, phase arrival detection by AI modules, phase association, earthquake location, earthquake catalog generation, and data visualization. The whole process runs automatically, substantially reducing labor and time costs. In this study, SeisBlue is applied to three different regional seismic networks: the Formosa Array for observing the magma chamber beneath the Tatun volcanic area, Taiwan (aperture ~80 km, 148 broadband stations, station spacing 5 km); the Chihshang seismic network (CSN) for monitoring micro-seismicity in Chihshang, Taiwan (aperture ~150 km, 14 broadband stations, station spacing 20 km); and the temporary dense nodal array for capturing the aftershock sequence of the 18 September 2022 Mw 6.9 Chihshang earthquake, Taiwan (aperture ~70 km, 46 nodal stations, station spacing 3 km). The 2020 annual SeisBlue catalog of the Formosa Array contains 2,201 earthquakes as background seismicity, compared to the 1,467 earthquakes listed in the standard catalog of the Central Weather Administration (CWA), Taiwan. The two-month SeisBlue catalog of the 2022 Mw 6.9 Chihshang earthquake sequence (September to October) contains 14,276 earthquakes using the CSN dataset, whereas the CWA standard catalog lists only 1,247 earthquakes during the same period. Using waveform data from 18 September to 25 October 2022, SeisBlue detects 34,630 and 12,458 earthquakes from the dense nodal array and CSN datasets, respectively. SeisBlue thus effectively detects both background and aftershock seismicity and extracts small earthquakes via dense arrays.

Keywords: AI earthquake monitoring system, deep learning, AI earthquake catalog, SeisBlue, automatic waveform picking

How to cite: Pan, S.-Y., Sun, W.-F., Huang, C.-M., and Kuo-Chen, H.: Deep learning-based earthquake catalogs extracted from three broadband/nodal seismic arrays with different apertures in Taiwan by SeisBlue, EGU General Assembly 2024, Vienna, Austria, 14–19 Apr 2024, EGU24-7309, https://doi.org/10.5194/egusphere-egu24-7309, 2024.

X1.84
|
EGU24-8233
Petr Kolar, Matěj Petružálek, Jan Šílený, and Tomáš Lokajíček

In the past decade, Deep Neural Networks have emerged as a promising approach for addressing contemporary tasks in seismology, particularly the effective and potentially automated processing of extensive datasets such as seismograms. In this study, we introduce a 4D Neural Network (NN) based on the U-Net architecture, capable of simultaneously processing data from an entire seismic network. Our dataset comprises records/seismograms of Acoustic Emission (AE) events obtained during a laboratory loading experiment on a rock specimen. While AE event records share similarities with real seismograms, they are simpler in certain features.
To assess the capability of the proposed NN in handling complex data, including occurrences of multiple events observed during experiments, we generated double-event seismograms through the augmentation of unambiguous single-event seismograms. These augmented datasets were employed for training, validation, and testing of the NN. Despite the individual station detection rate being approximately 30%, the simultaneous processing of multiple stations significantly increased efficiency, achieving an overall detection rate of 97%.
In this work, we treat seismograms as "images," an approach that proves fruitful. The simultaneous processing of seismograms, coupled with this image-based treatment, shows high potential for reliable automatic interpretation of seismic data. This approach, possibly combined with other methodologies, holds promise for seismogram processing.
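The double-event augmentation described above (superposing two single-event records at a time offset) can be sketched as follows; the waveform model and function names (`synth_event`, `make_double_event`) are illustrative stand-ins, not the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(42)

def synth_event(n=1024, onset=300, f=0.05):
    """Mock single-event AE seismogram: noise plus a decaying wavelet."""
    trace = 0.05 * rng.standard_normal(n)
    t = np.arange(n - onset)
    trace[onset:] += np.exp(-t / 80.0) * np.sin(2 * np.pi * f * t)
    return trace

def make_double_event(trace_a, trace_b, shift):
    """Augment two unambiguous single-event records into one double-event
    record by superposing the second trace at a time offset."""
    out = trace_a.copy()
    out[shift:] += trace_b[: len(trace_b) - shift]
    return out

a, b = synth_event(), synth_event(onset=250)
double = make_double_event(a, b, shift=400)
print(double.shape)
```

Varying `shift` produces overlapping-event seismograms of controlled difficulty for training, validation, and testing.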

How to cite: Kolar, P., Petružálek, M., Šílený, J., and Lokajíček, T.: Double Acoustic Emission events detection using U-net Neural Network, EGU General Assembly 2024, Vienna, Austria, 14–19 Apr 2024, EGU24-8233, https://doi.org/10.5194/egusphere-egu24-8233, 2024.

X1.85
|
EGU24-8913
|
ECS
Jorge Antonio Puente Huerta, Jannes Münchmeyer, Ian McBrearty, and Christian Sippl

In seismology, accurately associating seismic phases to their respective events is crucial for constructing reliable seismicity catalogs. This study presents a comprehensive benchmark analysis of five seismic phase associators, including machine learning based solutions, employing synthetic datasets tailored to replicate the seismicity characteristics of real seismic data in a crustal and a subduction zone scenario.

The synthetic datasets were generated using the NonLinLoc raytracer, using real station distributions and velocity models and simulating a large range of seismic events across different depths. In order to generate sets of picks with quality and diversity similar to a real-world dataset, some modifications such as adjustments to arrival times simulating picking errors, selective station exclusion, incorporation of false picks, were included. Such a controlled environment allowed for the assessment of associator performance under a range of different conditions.

As part of project MILESTONE, we compared the performance of five state-of-the-art seismic phase associators (PhaseLink, GaMMA, REAL, GENIE, and PyOcto) across multiple scenarios, including low-noise environments, high-noise background activity, out-of-network events, and complex aftershock sequences. Each associator's accuracy in identifying and associating true events amidst noise picks and its ability to handle overlapping sets of arrival times from different events were rigorously evaluated.

Additionally, we conducted a systematic comparison of the advantages and disadvantages of each associator, attempting a fair and unbiased evaluation. This included assessing their processing times, a critical factor in operational seismology. Our findings reveal significant differences in the precision and robustness of these associators.

This benchmark study not only underscores the importance of robust phase association in seismological research but also paves the way for future enhancements in seismic data processing techniques. The insights gained from this analysis are expected to significantly contribute to the ongoing efforts in seismic monitoring and hazard assessment, particularly in the realm of machine learning applications.
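The pick-set modifications described above (picking errors, selective station exclusion, false picks) can be sketched in a few lines; the `synthesize_picks` helper and all parameter values are hypothetical, not the benchmark's actual settings:

```python
import numpy as np

rng = np.random.default_rng(1)

def synthesize_picks(true_arrivals, pick_sigma=0.1, dropout=0.2,
                     n_false=50, t_max=3600.0):
    """Build a benchmark pick set from exact raytraced arrival times:
    Gaussian picking errors, random station dropout, uniform false picks."""
    kept = true_arrivals[rng.random(len(true_arrivals)) > dropout]
    noisy = kept + rng.normal(0.0, pick_sigma, len(kept))
    false = rng.uniform(0.0, t_max, n_false)
    picks = np.sort(np.concatenate([noisy, false]))
    return picks, len(kept)

true_arrivals = rng.uniform(0.0, 3600.0, 400)   # mock exact arrival times (s)
picks, n_true_kept = synthesize_picks(true_arrivals)
print(len(picks), n_true_kept)
```

Because the ground-truth event-to-pick assignment is known by construction, each associator's output can be scored exactly against it.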

How to cite: Puente Huerta, J. A., Münchmeyer, J., McBrearty, I., and Sippl, C.: Benchmarking seismic phase associators: Insights from synthetic scenarios, EGU General Assembly 2024, Vienna, Austria, 14–19 Apr 2024, EGU24-8913, https://doi.org/10.5194/egusphere-egu24-8913, 2024.

X1.86
|
EGU24-9120
|
ECS
Dinko Sindija, Marija Mustac Brcic, Gyorgy Hetenyi, and Josip Stipcevic

Identifying earthquakes and picking their phase arrivals are essential tasks in earthquake analysis. As more seismic instruments become available, they produce vast amounts of seismic data, necessitating automated algorithms for efficiently processing earthquake sequences and for recognising numerous events that might go unnoticed with manual methods.

In this study, we employed the EQTransformer, trained on the INSTANCE dataset, and utilised PyOcto for phase association, focusing specifically on the Petrinja earthquake series. This series is particularly interesting for its initial phase, which was marked by a limited number of seismometers in the epicentral area during the onset of the sequence in late December 2020. This limitation was subsequently addressed by the swift deployment of five additional stations near the fault zone in early January 2021, followed by a gradual expansion of the seismic network to over 50 instruments.

Our analysis covers the Petrinja earthquake series from its onset on December 28, 2020, up to the present, offering a complete and up-to-date view of the seismic activity, which remains higher than in the interseismic period. We compare our findings from the machine-learning-generated catalogue with a detailed manual catalogue. Focusing on the first week of the series, when the seismic network was sparse and overlapping earthquakes were frequent, we achieved a recall of 80% and a precision of 81% for events with local magnitude greater than 1.0. In contrast, for the subsequent six months of processed data, a period still characterised by a high frequency of earthquakes but with the fully expanded network, our recall improved dramatically to 95%, with over 20,000 detected events. This comparison allows us to demonstrate the challenges, evolution, and effectiveness of automatic seismic monitoring throughout the earthquake sequence.
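Recall and precision figures like those quoted above come from matching the automatic catalogue against the manual one; a minimal origin-time matching sketch (the `match_catalogs` helper and the 2 s tolerance are hypothetical, not the authors' code):

```python
import numpy as np

def match_catalogs(auto_times, manual_times, tol=2.0):
    """Greedy one-to-one match of automatic vs. manual origin times (s).
    Returns (true positives, precision, recall)."""
    manual = sorted(manual_times)
    used = [False] * len(manual)
    tp = 0
    for t in sorted(auto_times):
        # nearest not-yet-matched manual event
        j = int(np.argmin([abs(t - m) if not u else np.inf
                           for m, u in zip(manual, used)]))
        if not used[j] and abs(t - manual[j]) <= tol:
            used[j] = True
            tp += 1
    precision = tp / len(auto_times) if auto_times else 0.0
    recall = tp / len(manual) if manual else 0.0
    return tp, precision, recall

# Toy example: 4 of 5 manual events recovered, one extra false detection.
manual = [10.0, 100.0, 250.0, 400.0, 900.0]
auto = [10.5, 99.2, 250.3, 401.1, 600.0]
tp, p, r = match_catalogs(auto, manual)
print(tp, p, r)  # 4 matches, precision 0.8, recall 0.8
```

A magnitude threshold (here, ML > 1.0) would simply filter both lists before matching.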

How to cite: Sindija, D., Mustac Brcic, M., Hetenyi, G., and Stipcevic, J.: An up-to-date seismic catalogue of the 2020 Mw6.4 Petrinja (Croatia) earthquake sequence using machine learning, EGU General Assembly 2024, Vienna, Austria, 14–19 Apr 2024, EGU24-9120, https://doi.org/10.5194/egusphere-egu24-9120, 2024.

X1.87
|
EGU24-14134
|
ECS
Wu-Yu Liao, En-Jui Lee, Elena Manea, Florent Aden, Bill Fry, Anna Kaiser, and Ruey-Juin Rau

Machine learning-based algorithms are emerging for mining earthquake occurrences from continuous recordings, replacing some routine processes performed by human experts, e.g., phase picking and phase association. In this study, we explore the combination of phase picker and phase associator in a challenging application scenario: a complex seismogenic structure, a wide study area (15 degrees in both longitude and latitude, down to 600 km depth), hundreds of stations, and intensive seismicity during the 2016 Mw 7.8 Kaikōura earthquake, which involved at least seven faults. Deep learning-based phase pickers usually follow the prototype of PhaseNet, mapping phase arrivals onto truncated Gaussian functions with a customized model. Recent studies have shown poor generalizability of such models on data outside the training distribution. Here, we argue that appropriate data augmentation enables the RED-PAN model, trained on Taiwanese data, to generalize well to New Zealand data even under intense seismicity. We applied RED-PAN to year-long continuous recordings from 439 GeoNet stations during 2016 and 2017. RED-PAN produces approximately three million P-S pairs over the New Zealand-wide network, enabling exploration of the robustness of advanced phase associators (e.g., back-projection, GaMMA, and PyOcto) at local and regional scales and under intense seismicity. Finally, we developed a six-stage automatic pipeline producing a high-quality earthquake catalog: phase picking, phase association, 3-D absolute location by NonLinLoc, magnitude estimation, weighted template matching, and 3-D relative location by GrowClust.
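The truncated-Gaussian labelling used by PhaseNet-style pickers can be sketched as follows (the `phase_label` helper, window width, and sigma are illustrative choices, not RED-PAN's exact settings):

```python
import numpy as np

def phase_label(n_samples, pick_idx, sigma=10.0, half_width=30):
    """Truncated-Gaussian training target for a PhaseNet-style picker:
    probability peaks at the labelled arrival, zero outside a window."""
    t = np.arange(n_samples)
    label = np.exp(-0.5 * ((t - pick_idx) / sigma) ** 2)
    label[np.abs(t - pick_idx) > half_width] = 0.0
    return label

# Target for a P arrival picked at sample 1200 of a 3000-sample window.
y = phase_label(3000, pick_idx=1200)
print(y.max(), int(np.argmax(y)))
```

The picker is trained to regress such targets from raw waveforms; the predicted peak location then becomes the automatic pick time.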

How to cite: Liao, W.-Y., Lee, E.-J., Manea, E., Aden, F., Fry, B., Kaiser, A., and Rau, R.-J.: Recipe For Regular Machine Learning-based Earthquake Cataloging: A Systematic Examination in New Zealand, from Local to Regional Scale, EGU General Assembly 2024, Vienna, Austria, 14–19 Apr 2024, EGU24-14134, https://doi.org/10.5194/egusphere-egu24-14134, 2024.

X1.88
|
EGU24-14438
|
ECS
Fanny Lehmann, Filippo Gatti, Michaël Bertin, and Didier Clouteau

Recent advances in scientific machine learning have led to major breakthroughs in predicting Partial Differential Equations’ solutions with deep learning. Neural operators, for instance, have been successfully applied to the elastic wave equation, which governs the propagation of seismic waves. They give rise to fast surrogate models of earthquake simulators that considerably reduce the computational costs of traditional numerical solvers.

We designed a Multiple-Input Fourier Neural Operator (MIFNO) and trained it on a database of 30,000 3D earthquake simulations. The inputs comprise a 3D heterogeneous geology and a point-wise source given by its position and its moment tensor components. The outputs are velocity wavefields recorded at the surface of the propagation domain by a grid of virtual sensors. Once trained, the MIFNO predicts 6.4 s of ground motion velocity on a domain of size 10 km × 10 km × 10 km within a few milliseconds.

Our results show that the MIFNO can accurately predict surface wavefields for all earthquake sources and positions. Predictions are assessed in several frequency ranges to quantify the accuracy with respect to the well-known spectral bias (i.e. degradation of neural networks’ accuracy on small-scale features). Thanks to its efficiency, the MIFNO is also applied to a database of real geologies, allowing unprecedented uncertainty quantification analyses. This paves the way towards new seismic hazard assessment methods knowledgeable of geological and seismological uncertainties.
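The core operation of a Fourier-neural-operator layer, filtering in the frequency domain with learned complex weights applied to the lowest modes, can be sketched in 1D with NumPy. This is a conceptual illustration only; the MIFNO operates on 3D fields, and the weights here are random rather than trained:

```python
import numpy as np

rng = np.random.default_rng(0)

def spectral_conv_1d(x, weights, n_modes):
    """Fourier-layer core: FFT the signal, multiply the lowest n_modes
    frequencies by learned complex weights, zero the rest, inverse FFT."""
    x_hat = np.fft.rfft(x)
    out_hat = np.zeros_like(x_hat)
    out_hat[:n_modes] = weights * x_hat[:n_modes]
    return np.fft.irfft(out_hat, n=len(x))

n, n_modes = 256, 16
weights = rng.standard_normal(n_modes) + 1j * rng.standard_normal(n_modes)
x = np.sin(2 * np.pi * 3 * np.arange(n) / n)   # mock 1D wavefield slice
y = spectral_conv_1d(x, weights, n_modes)
print(y.shape)
```

Truncating to the lowest modes is what makes such layers resolution-independent, and is also directly related to the spectral bias discussed above: small-scale (high-frequency) features beyond the retained modes are the hardest to reproduce.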

How to cite: Lehmann, F., Gatti, F., Bertin, M., and Clouteau, D.: A deep learning-based earthquake simulator: from source and geology to surface wavefields, EGU General Assembly 2024, Vienna, Austria, 14–19 Apr 2024, EGU24-14438, https://doi.org/10.5194/egusphere-egu24-14438, 2024.

X1.89
|
EGU24-17061
|
ECS
Gabriele Paoletti, Laura Laurenti, Elisa Tinti, Fabio Galasso, Cristiano Collettini, and Chris Marone

Fault zone properties can evolve significantly during the seismic cycle in response to stress changes, microcracking, and wall rock damage. Distinguishing subtle changes in seismic behavior prior to earthquakes is challenging, even in locations with dense seismic networks. In our previous work, we applied Deep Learning (DL) techniques to assess alterations in elastic properties before and after large earthquakes. To do so, we used 10,000 seismic events that occurred in a volume around the October 30, 2016, Mw 6.5 Norcia earthquake (Italy) and trained a DL model to classify foreshocks, aftershocks, and time-to-failure (TTF), defined as the elapsed time from the mainshock. Our model exhibited outstanding accuracy, correctly identifying foreshocks and aftershocks with over 90% precision and also achieving good results in time-to-failure multi-class classification.

To build upon our initial findings, this follow-up investigation examines the model's performance across various parameters. First, we investigate the influence of earthquake magnitude, specifically assessing whether and to what extent the model's accuracy and reliability are maintained across varying minimum magnitude thresholds in the catalog. This aspect is crucial for understanding whether the model's predictive power remains consistent at different magnitudes of completeness. Regarding source location, we evaluate the model's reliability by selectively excluding events from specific locations within the study area and, alternatively, by expanding the selection criteria. This allows us to discern the model's sensitivity to spatial variations and its ability to adapt to diverse distributions of seismic activity. Furthermore, we pay particular attention to the analysis of null results, meticulously analyzing cases where the model does not perform effectively, producing low-precision or inconclusive results. By carefully examining these scenarios, we aim to further assess and confirm the high-performance results obtained in previous work.

Our results highlight the promising potential of DL techniques for capturing the details of earthquake preparatory processes; while these models are complex, they have the potential to open hidden avenues for future research.

How to cite: Paoletti, G., Laurenti, L., Tinti, E., Galasso, F., Collettini, C., and Marone, C.: Further investigations in Deep Learning for earthquake physics: Analyzing the role of magnitude and location in model performance, EGU General Assembly 2024, Vienna, Austria, 14–19 Apr 2024, EGU24-17061, https://doi.org/10.5194/egusphere-egu24-17061, 2024.

X1.90
|
EGU24-4807
|
ECS
Madona, Mohammad Syamsu Rosid, Djati Handoko, Nelly Florida Riama, and Deni Saiful

Earthquake prediction is a compelling research area: earthquakes are natural disasters with the potential to inflict significant damage, and reliable magnitude estimates provide substantial benefits, both immediately after an event and in the future, for risk assessment and mitigation. This study employs the Random Forest algorithm to predict the magnitude of earthquakes occurring on the Matano Fault in Sulawesi, Indonesia. The prediction is derived from historical seismic data collected between 1923 and 2023, obtained from the BMKG and USGS catalogs. The area of interest is situated along the Matano Fault, with coordinates ranging from 2.99°S to 1.66°S and from 120.50°E to 122.47°E. The dataset comprises six attributes and is split into training and testing sets at a ratio of 70% to 30%. The variables employed encompass origin time, latitude, longitude, magnitude, and magnitude type. The model achieves a Root Mean Square Error (RMSE) of 0.1929, indicating a low level of prediction error, with predicted values closely matching the actual values. This work thus proposes an alternative, straightforward, and practical approach to problems in geophysics such as earthquake magnitude prediction.
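A minimal version of the described workflow, a Random Forest regressor with a 70/30 split and RMSE scoring, can be sketched with scikit-learn. The catalog below is mocked random data (feature ranges borrowed from the abstract), so the resulting RMSE is not comparable to the reported 0.1929:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(7)

# Mock catalog: origin time (decimal year), latitude, longitude; the real
# study uses six attributes from the BMKG/USGS Matano Fault catalogs.
n = 500
X = np.column_stack([
    rng.uniform(1923, 2023, n),       # origin time
    rng.uniform(-2.99, -1.66, n),     # latitude
    rng.uniform(120.50, 122.47, n),   # longitude
])
y = 4.0 + 0.3 * rng.standard_normal(n)  # mock magnitudes

# 70/30 train/test split, as in the abstract.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)
rmse = mean_squared_error(y_te, model.predict(X_te)) ** 0.5
print(f"RMSE: {rmse:.4f}")
```

Replacing the mock arrays with the real catalog attributes reproduces the structure of the reported experiment.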

How to cite: Madona, Rosid, M. S., Handoko, D., Riama, N. F., and Saiful, D.: Implementation of the Random Forest Algorithm for Magnitude Prediction in the Matano Fault Indonesia, EGU General Assembly 2024, Vienna, Austria, 14–19 Apr 2024, EGU24-4807, https://doi.org/10.5194/egusphere-egu24-4807, 2024.

X1.91
|
EGU24-6371
Reza Esfahani, Michel Campillo, Leonard Seydoux, Sarah Mouaoued, and Qing-Yu Wang

Clustering techniques facilitate the exploration of extensive seismogram datasets, uncovering a variety of distinct seismic signatures. This study employs deep scattering networks (Seydoux et al. 2020), a novel approach in deep convolutional neural networks using fixed wavelet filters, to analyze continuous multichannel seismic time-series data spanning four months before the 2019 Ridgecrest earthquake sequence in California. By extracting robust physical features known as scattering coefficients and disentangling them via independent component analysis, we cluster different seismic signals, including those from foreshock events and anthropogenic noise. We investigate the intracluster variability (dispersion within each cluster) and examine how it correlates with waveform properties and the feature space. The methodology allows us to measure this variability, either through distance to cluster centroids or through 2D manifold mapping. Our findings reveal distinct patterns in the occurrence rate, daily frequency, and waveform characteristics of these clusters, providing new insights into the behavior of seismic events versus anthropogenic noise.
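The disentangle-then-cluster workflow (ICA on scattering coefficients, clustering, distance to centroids as an intracluster-variability measure) can be sketched with scikit-learn; the mock features and all parameter choices below are illustrative, not the study's settings:

```python
import numpy as np
from sklearn.decomposition import FastICA
from sklearn.cluster import KMeans

rng = np.random.default_rng(3)

# Mock scattering coefficients: two signal families with distinct means.
features = np.vstack([
    rng.normal(0.0, 1.0, (200, 20)),
    rng.normal(4.0, 1.0, (200, 20)),
])

# Disentangle features with ICA, then cluster in the reduced space.
latent = FastICA(n_components=5, random_state=0).fit_transform(features)
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(latent)

# Intracluster variability: distance of each sample to its cluster centroid.
dist = np.linalg.norm(latent - km.cluster_centers_[km.labels_], axis=1)
print(dist.mean())
```

On real data, `features` would be the scattering coefficients computed from continuous waveforms, and the per-cluster distribution of `dist` gives the dispersion measure discussed above.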

How to cite: Esfahani, R., Campillo, M., Seydoux, L., Mouaoued, S., and Wang, Q.-Y.: Detailed clustering of continuous seismic waveforms with deep scattering networks: a case study on the Ridgecrest earthquake sequence, EGU General Assembly 2024, Vienna, Austria, 14–19 Apr 2024, EGU24-6371, https://doi.org/10.5194/egusphere-egu24-6371, 2024.

X1.92
|
EGU24-18303
Quentin Bletery, Kévin Juhel, Andrea Licciardi, and Martin Vallée

A signal, coined PEGS for Prompt Elasto-Gravity Signal, was recently identified on seismograms preceding the seismic waves generated by very large earthquakes, opening promising applications for earthquake and tsunami early warning. Nevertheless, this signal is about 1,000,000 times smaller than seismic waves, making its use in operational warning systems very challenging. A Deep Learning algorithm, called PEGSNet, was later designed to estimate, as fast as possible, the magnitude of an ongoing large earthquake from PEGS recorded in real time. PEGSNet was applied to Japan and Chile and proved capable of tracking the magnitude of the Mw 9.1 Tohoku-oki and Mw 8.8 Maule earthquakes within a few minutes of the events' origin times. Here, we apply this algorithm to a very well instrumented region: Alaska. We find that, applied to such a dense seismic network, the performance of PEGSNet is drastically improved, with robust performance obtained for earthquakes with magnitudes down to 7.8. The gain in resolution also allows us to estimate the focal mechanism of the events in real time, providing all the information required for tsunami warning in less than 3 minutes.

How to cite: Bletery, Q., Juhel, K., Licciardi, A., and Vallée, M.: Machine learning based rapid earthquake characterization using PEGS in Alaska, EGU General Assembly 2024, Vienna, Austria, 14–19 Apr 2024, EGU24-18303, https://doi.org/10.5194/egusphere-egu24-18303, 2024.

X1.93
|
EGU24-7119
|
Ming Zhao, Zhuowei Xiao, Bei Zhang, Bo Zhang, and Shi Chen

As the amount of seismic data worldwide increases drastically, there are ever-growing needs for high-performance automatic seismic data processing methods and high-quality, standardized professional datasets. To address this, we recently updated the 'DiTing' dataset, one of the world's largest seismological AI datasets with ~2.7 million traces and corresponding labels. The update adds 1,089,920 three-component waveforms from 264,298 natural earthquakes in mainland China and adjacent areas; 958,076 Pg, 780,603 Sg, 152,752 Pn, and 25,956 Sn phase arrival tags; and 249,477 Pg and 41,610 Pn first-motion polarity tags from 2020 to 2023. We also collected 15,375 non-natural earthquake waveforms in mainland China from 2009 to 2023 and a manually labeled noise dataset containing various typical noise signals from the China Seismological Network. With the support of the 'DiTing' dataset, we developed and trained several deep learning models, referred to as 'DiTingTools', for automatic seismic data processing. In continuous waveform detection and evaluation at more than 1,000 stations across China over a year, 'DiTingTools' achieved an average recall of 80% for event detection; picking errors of ±0.2 s for P phases and ±0.4 s for S phases; average first-motion polarity identification accuracies of 86.7% (U) and 87.9% (D) for Pg, and 75.1% (U) and 73.1% (D) for Pn; and single-station magnitude prediction errors mainly concentrated within ±0.5. The remarkable generalization capability of 'DiTingTools' was demonstrated through its application to the China Seismic Network. Specifically, 'DiTingPicker', the model within 'DiTingTools' designed for earthquake detection and phase picking, was employed to analyze the M 6.8 earthquake that struck Luding County, Sichuan Province, in 2022.
This tool was instrumental in automatically processing data to examine the mainshock and the intricate fault structures of the aftershocks. These practical applications further validated the effectiveness of 'DiTingTools' for earthquake prevention and disaster reduction.

How to cite: Zhao, M., Xiao, Z., Zhang, B., Zhang, B., and Chen, S.: 'DiTing' and 'DiTingTools': a large multi-label dataset and algorithm set for intelligent seismic data processing established based on the China Seismological Network, EGU General Assembly 2024, Vienna, Austria, 14–19 Apr 2024, EGU24-7119, https://doi.org/10.5194/egusphere-egu24-7119, 2024.

X1.94
|
EGU24-5044
Sadegh Karimpouli, Grzegorz Kwiatek, Patricia Martínez-Garzón, Georg Dresen, and Marco Bohnhoff

Earthquake forecasting is a highly complex and challenging task in seismology, ultimately aiming to save human lives and infrastructure. In recent years, Machine Learning (ML) methods have achieved progressive advances in earthquake processing and even labquake forecasting. A more general and accurate ML model for complex and/or limited datasets can be obtained by refining the ML models and/or enriching the input data. In this study, we present an event-based approach to enrich the input data by extracting spatio-temporal seismo-mechanical features that depend on the origin time and location of each event. Accordingly, we define and analyze a variety of features: (a) immediate features, which draw on very short-term characteristics of the considered event in time and space; (b) time-space features, based on subsets of the acoustic emission (AE) catalog constrained by time and space distance from the considered event; and (c) family features, extracted from the topological characteristics of clustered (family) events identified by clustering analysis in different time windows. We compute event-based features from AE catalogs recorded during tri-axial stick-slip experiments on rough fault samples. A random forest classifier is then applied to forecast the occurrence of a large-magnitude event (MAE > 3.5) in the next time window. Results show that a more accurate forecasting model requires separating background and clustered activities: classification accuracy on the entire catalog reaches 73.2%, but improves markedly to 82.1% and 89.0% for the separated background and clustered populations, respectively. Feature importance analysis reveals that not only AE rate, seismic energy, and b-value are important; family features developed from a topological tree decomposition also play a crucial role in labquake forecasting.
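The classification and feature-importance steps can be sketched with scikit-learn on mock event-based features; the feature names and the signal structure below are invented for illustration and carry no relation to the study's actual importances:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(11)

# Mock event-based features per AE window: AE rate, seismic energy, b-value,
# and a "family" (topological) feature; only the last two carry signal here.
n = 1000
X = rng.standard_normal((n, 4))
# Label: large event (MAE > 3.5) in the next window, driven by features 2-3.
y = (X[:, 2] + X[:, 3] + 0.5 * rng.standard_normal(n) > 1.0).astype(int)

clf = RandomForestClassifier(n_estimators=300, random_state=0).fit(X, y)

# Feature-importance ranking, as used in the abstract's analysis.
names = ["AE_rate", "energy", "b_value", "family_topology"]
ranked = sorted(zip(names, clf.feature_importances_), key=lambda t: -t[1])
for name, imp in ranked:
    print(f"{name:16s} {imp:.3f}")
```

On the real catalogs, the same `feature_importances_` attribute yields the ranking in which the topological family features emerge alongside AE rate, energy, and b-value.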

How to cite: Karimpouli, S., Kwiatek, G., Martínez-Garzón, P., Dresen, G., and Bohnhoff, M.: Event-based features: An improved feature extraction approach to enrich machine learning based labquake forecasting, EGU General Assembly 2024, Vienna, Austria, 14–19 Apr 2024, EGU24-5044, https://doi.org/10.5194/egusphere-egu24-5044, 2024.