NH4.3 | Machine learning and statistical models applied to earthquake occurrence
Co-organized by SM8
Convener: Stefania Gentili | Co-conveners: Álvaro González, Filippos Vallianatos, Piero Brondi
Orals
| Fri, 19 Apr, 14:00–15:45 (CEST), 16:15–17:25 (CEST)
 
Room 1.31/32
Posters on site
| Attendance Thu, 18 Apr, 16:15–18:00 (CEST) | Display Thu, 18 Apr, 14:00–18:00
 
Hall X4
Posters virtual
| Attendance Thu, 18 Apr, 14:00–15:45 (CEST) | Display Thu, 18 Apr, 08:30–18:00
 
vHall X4
New physical and statistical models based on observed seismicity patterns shed light on the preparation process of large earthquakes and on the temporal and spatial evolution of seismicity clusters.

Thanks to technological improvements in seismic monitoring, seismic data are nowadays gathered with ever-increasing quality and quantity, so models can benefit from large and accurate seismic catalogues. Indeed, accurate hypocenter locations and coherent magnitude determination are fundamental for reliable analyses. Moreover, physics-based earthquake simulators can produce large synthetic catalogues that can be used to improve the models.

Multidisciplinary data recorded by both ground and satellite instruments, such as geodetic deformation, geological and geochemical data, fluid content analyses and laboratory experiments, can better constrain the models, in addition to available seismological results such as source parameters and tomographic information.

Statistical approaches and machine learning techniques for big data analysis are required to benefit from this wealth of information and to unveil complex and nonlinear relationships in the data, allowing a deeper understanding of earthquake occurrence and its statistical forecasting.

In this session, we invite researchers to present their latest results and findings in physical and statistical models and machine learning approaches for space, time, and magnitude evolution of earthquake sequences. Emphasis will be given to the following topics:

• Physical and statistical models of earthquake occurrence.
• Analysis of earthquake clustering.
• Spatial, temporal and magnitude properties of earthquake statistics.
• Quantitative testing of earthquake occurrence models.
• Reliability of earthquake catalogues.
• Time-dependent hazard assessment.
• Methods and software for earthquake forecasting.
• Data analyses and requirements for model testing.
• Machine learning applied to seismic data.
• Methods for quantifying uncertainty in pattern recognition and machine learning.

Orals: Fri, 19 Apr | Room 1.31/32

Chairpersons: Stefania Gentili, Álvaro González, Piero Brondi
14:00–14:05
Analysis of Seismic Catalogs
14:05–14:35
|
EGU24-6037
|
solicited
|
On-site presentation
Ian Main and Gina-Maria Geffers

The exponent b of the log-linear frequency-magnitude relation for natural seismicity commonly takes values that are statistically indistinguishable from b = 1. There are some exceptions, notably with respect to focal mechanism and for volcanic and induced seismicity, but it is possible these could be explained at least in part by variability in the dynamic range of measurements between the minimum magnitude of complete reporting and the maximum magnitude, especially where the dynamic range of the statistical sample is small. However, in laboratory experiments and in discrete element simulations, a wide range of b-values for acoustic emissions are consistent (after accounting for systematic differences in the transducer response) with systematic variations in b as the stress intensity factor increases from its minimum to its maximum, critical value. The question remains: why is b = 1 an attractor stationary state for large-scale seismicity? Previous attempts to answer this question have relied on a simple geometric ‘tiling’ argument that is inconsistent with the spatial distribution of earthquake locations, or a hierarchical ‘triple-junction’ model that has not been validated by observation. Here, we derive a closed analytical solution for the maximum entropy b-value, conditional on the assumption that earthquake magnitude scales linearly with the logarithm of rupture area. In the limit of infinite dynamic range, the solution is b = 1. The maximum entropy b-value converges to this value asymptotically from above as dynamic range increases for large systems at steady state. This is in contrast to a previous maximum entropy solution based on analysing the spectrum in ‘natural time’ of earthquake catalogues, where larger samples with greater dynamic range lead to a divergence from b = 1.
The new theory is consistent with the trend in b-value convergence from above towards this asymptotic limit, with b = 1.027 ± 0.015 at 95% confidence from the global CMT earthquake frequency-moment catalogue for data since 1990.
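In practice, the b-values discussed above are usually obtained by maximum likelihood. A minimal numpy sketch of the standard Aki (1965) estimator with Utsu's bin correction — an illustration of the quantity under discussion, not the authors' maximum-entropy derivation; the synthetic sample and parameter values are invented:

```python
import numpy as np

def b_value_ml(mags, m_c, dm=0.1):
    """Maximum-likelihood b-value (Aki 1965), with Utsu's dm/2
    correction for magnitudes binned at width dm."""
    m = np.asarray(mags)
    m = m[m >= m_c]
    return np.log10(np.e) / (m.mean() - (m_c - dm / 2.0))

# Synthetic Gutenberg-Richter sample with true b = 1 (continuous
# magnitudes, so no bin correction: dm = 0)
rng = np.random.default_rng(42)
mags = 2.0 + rng.exponential(scale=np.log10(np.e), size=50_000)
b_est = b_value_ml(mags, m_c=2.0, dm=0.0)
```

For small dynamic ranges between the completeness magnitude and the maximum magnitude, this estimate is biased — one of the sampling effects the abstract appeals to.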

How to cite: Main, I. and Geffers, G.-M.: Why is b=1?, EGU General Assembly 2024, Vienna, Austria, 14–19 Apr 2024, EGU24-6037, https://doi.org/10.5194/egusphere-egu24-6037, 2024.

14:35–14:45
|
EGU24-401
|
ECS
|
On-site presentation
Farnaz Kamranzad, Mark Naylor, and Finn Lindgren

Earthquake catalogues, vital for understanding earthquake dynamics, often grapple with incompleteness across varying time scales. Our research pioneers an innovative strategy to seamlessly integrate time-varying incompleteness into the Epidemic-Type Aftershock Sequence (ETAS) model. Leveraging the Bayesian prowess of the inlabru package in the R programming language, which is based on the Integrated Nested Laplace Approximation (INLA) method, we not only capture uncertainties but also forge a robust bridge between short-term and long-term gaps in records of earthquakes.

Our methodology, a fusion of the ETAS model and inlabru, provides a comprehensive framework that adapts to diverse scales of incompleteness. We address the complex nature of seismic patterns by considering both short-term gaps in early aftershocks (minutes to a few days) and long-term irregularities (years to centuries) in historical earthquake data records. Technically, the short-term incompleteness arises from seismic network saturation during periods of high activity, resulting in the underrecording of small events, while the long-term incompleteness originates from sparse network coverage and the inability to detect events over extended time periods. The Bayesian foundation of inlabru enriches the model with posterior distributions, empowering us to navigate uncertainties and refine seismic hazard assessments. By utilising a combination of simulated synthetic data and real earthquake catalogues, our results showcase the impact of this approach on the ETAS model, markedly improving its predictive accuracy across various temporal scales of incompleteness.

In this study, we present an initiative in seismicity modelling that bridges temporal gaps, allowing the ETAS model to evolve with the ever-changing landscape of earthquake data incompleteness. This research not only enriches our understanding of spatiotemporal seismicity patterns but also lays the groundwork for more resilient and adaptive aftershock forecasting, ultimately equipping decision-makers with more reliable information about seismic hazards, and enhancing community resilience in the face of earthquakes.
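For readers unfamiliar with ETAS, the temporal conditional intensity at the heart of the model can be sketched in plain Python (an illustration, not the authors' inlabru/INLA implementation; all parameter values are invented):

```python
import numpy as np

def etas_intensity(t, events, mu=0.2, k=0.02, alpha=1.8, c=0.01, p=1.1, m_c=2.0):
    """Temporal ETAS conditional intensity lambda(t): a constant
    background rate mu plus an Omori-Utsu aftershock contribution from
    every past event (t_i, m_i), scaled exponentially by magnitude."""
    rate = mu
    for t_i, m_i in events:
        if t_i < t:
            rate += k * np.exp(alpha * (m_i - m_c)) * (t - t_i + c) ** (-p)
    return rate

past = [(0.0, 5.5), (1.0, 3.2)]  # (time in days, magnitude)
```

Short-term incompleteness, the focus of the abstract, means small events are missing exactly where this intensity is highest — right after a large shock — which biases a naive parameter inversion.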

How to cite: Kamranzad, F., Naylor, M., and Lindgren, F.: Bridging time scales for comprehensive ETAS modelling to accommodate short-term to long-term incompleteness of seismicity catalogues, EGU General Assembly 2024, Vienna, Austria, 14–19 Apr 2024, EGU24-401, https://doi.org/10.5194/egusphere-egu24-401, 2024.

14:45–14:55
|
EGU24-4379
|
ECS
|
On-site presentation
Paola Corrado, Marcus Herrmann, and Warner Marzocchi

Current models used for earthquake forecasting assume that the magnitude of an earthquake is independent of past earthquakes, i.e., the earthquake magnitudes are uncorrelated. Nevertheless, several studies have challenged this assumption by revealing correlations between the magnitude of subsequent earthquakes in a sequence. These findings could significantly improve earthquake forecasting and help in understanding the physics of the nucleation process.

We investigate this phenomenon for the foreshock sequence of the first 2019 Ridgecrest event (Mw6.4) using a high-resolution catalog; choosing this foreshock sequence has been guided by a low b-value (~0.68 ± 0.06 after converting local magnitudes to moment magnitudes) and a significant magnitude correlation, even when considering only earthquakes above the completeness level estimated with different methods. To disregard incomplete events in the b-value estimation, we apply the b-positive approach (van der Elst 2021), i.e., using only positive magnitude differences; those magnitude differences are uncorrelated and we obtain a markedly higher b-value (~0.9 ± 0.1). Apparently, the foreshock sequence contained substantial short-term aftershock incompleteness due to a Mw4.0 event.

We observe a similar behaviour for whole Southern California after stacking earthquake sequences. Finally, we generate synthetic catalogs and apply short-term incompleteness to demonstrate that common methods for estimating the completeness level still result in magnitude correlation, indicating hidden incompleteness.

Our findings highlight that (i) existing methods for estimating the completeness level have limited statistical power and the remaining incompleteness can significantly bias the b-value estimation; (ii) the magnitude correlation is the most powerful property to detect incompleteness, so it should supplement statistical analyses of earthquake catalogs.
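The b-positive estimator applied above can be sketched in a few lines — a simplified illustration of van der Elst's (2021) idea; the threshold value and the synthetic catalog are invented:

```python
import numpy as np

def b_positive(mags, dmc=0.2):
    """b-positive (van der Elst 2021): apply the Aki maximum-likelihood
    formula to positive differences between successive magnitudes.
    Positive differences remain exponentially distributed with the same
    b-value, but are far less sensitive to short-term incompleteness."""
    diffs = np.diff(np.asarray(mags))
    pos = diffs[diffs >= dmc]
    return np.log10(np.e) / (pos.mean() - dmc)

# On a complete synthetic catalog (true b = 1) the estimator recovers b
rng = np.random.default_rng(0)
cat = 2.0 + rng.exponential(scale=np.log10(np.e), size=50_000)
b_plus = b_positive(cat)
```

On catalogs with simulated short-term incompleteness, a conventional estimate drops (as with the ~0.68 Ridgecrest value) while b-positive stays near the true value — the contrast the abstract exploits.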

How to cite: Corrado, P., Herrmann, M., and Marzocchi, W.: Magnitude correlation exposes hidden short-term earthquake catalog incompleteness, EGU General Assembly 2024, Vienna, Austria, 14–19 Apr 2024, EGU24-4379, https://doi.org/10.5194/egusphere-egu24-4379, 2024.

14:55–15:05
|
EGU24-7459
|
ECS
|
On-site presentation
Gopala Krishna Rodda and Yuval Tal

Exploring the potential relationship between an earthquake’s onset and its final moment magnitude (Mw) is a fundamental question in earthquake physics. This has practical implications, as rapid and accurate magnitude estimation is essential for effective early warning systems.

This study employs a novel approach using hybrid Convolutional Neural Network (CNN) – Recurrent Neural Network (RNN) models to estimate moment magnitude from just the first two seconds of source time functions (STFs), which is significantly shorter than the entire source duration. We use STFs of large earthquakes from the SCARDEC database, which applies a deconvolution method to teleseismic body waves, considering only events with Mw > 7 and an initial STF value smaller than 10¹⁷ N m/s to avoid potential bias. Additionally, we incorporate STFs from physics-based numerical simulations of earthquake cycles on nonplanar faults, varying in roughness levels and fault lengths. These simulations exhibit substantial variability in earthquake magnitude and slip behavior between events. The reported methodology uses the information contained in the initial characteristics of the STF, its temporal derivative, and the associated seismic moment, capturing the valuable insights present in the initial energy release about the final moment magnitude.

For the simulated data, the CNN-RNN model demonstrates a good correlation between the initial 2 seconds of the STF and the final event magnitude. Correlation coefficients close to 0.8 and root mean squared errors (RMSE) around 0.25 for magnitudes between 5 and 7.5 showcase the model’s ability to learn and generalize effectively from diverse earthquake scenarios. While results for natural earthquakes from the SCARDEC database remain promising (RMSE of 0.27), the correlation coefficient is lower (0.31), suggesting a weaker relationship than for simulated data. This discrepancy might be attributed to the narrower band of magnitudes (7 to 7.5) within the SCARDEC data used here, potentially limiting the model’s ability to discern subtle variations and establish a stronger correlation. Further, as an earthquake's fractional duration (2 s divided by the source duration) increases, the model's error consistently decreases, as expected. Finally, most predictions fall within a narrow range of 1% error, and nearly 90% of samples across diverse durations satisfy a set 5% error threshold. This consistent performance of the hybrid CNN-RNN model across varying source durations, magnitude ranges, and fault characteristics underscores the model's adaptability and robustness in handling diverse earthquake scenarios. While we mostly use STFs from simulated earthquakes here, continuous learning and refinement against reliable and diverse STFs obtained from teleseismic data, when available, are key to enhancing the potential of these CNN-RNN models for a better understanding of the onset-magnitude correlation in natural earthquakes.
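The link between an STF and the final Mw that the model must learn is simply the time integral of moment rate. A minimal sketch (the triangular STF shape and its numbers are invented for illustration; the relation itself is the standard Hanks-Kanamori one):

```python
import numpy as np

def moment_magnitude(m0):
    """Moment magnitude from scalar seismic moment M0 in N m
    (standard Hanks-Kanamori relation)."""
    return (2.0 / 3.0) * (np.log10(m0) - 9.1)

# Triangular moment-rate function (N m/s) of a 10 s rupture
t = np.linspace(0.0, 10.0, 1001)
stf = np.interp(t, [0.0, 5.0, 10.0], [0.0, 1.26e19, 0.0])
# Trapezoid-rule integral of the full STF gives the final moment
m0_total = np.sum(0.5 * (stf[1:] + stf[:-1]) * np.diff(t))
mw = moment_magnitude(m0_total)
```

The CNN-RNN sees only the first 2 s of such a curve — a small fraction of the full integral — and must infer how the remainder will evolve.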

How to cite: Rodda, G. K. and Tal, Y.: Analyzing Earthquake's Onset-Magnitude Correlation Using Machine Learning and Simulated and Seismic Data, EGU General Assembly 2024, Vienna, Austria, 14–19 Apr 2024, EGU24-7459, https://doi.org/10.5194/egusphere-egu24-7459, 2024.

Spatio-temporal Analysis of Seismicity
15:05–15:15
|
EGU24-9184
|
On-site presentation
Georgios Michas and Filippos Vallianatos

Seismic swarms are characterized by intense seismic activity strongly clustered in time and space, without the occurrence of a major event that can be considered the mainshock. Such intense seismic activity is most commonly associated with external aseismic factors, such as pore-fluid pressure diffusion, aseismic creep, or magmatic intrusion, that can perturb the regional stresses, locally triggering the observed seismicity. These factors can control the spatiotemporal evolution of seismic swarms, which frequently exhibit spatial expansion and migration of event hypocenters with time. This phenomenon, termed earthquake diffusion, can be highly anisotropic and complex, with earthquakes occurring preferentially along fractures and zones of weakness within the heterogeneous crust, presenting anisotropic diffusivities that may locally vary over several orders of magnitude. The efficient modelling of the complex spatiotemporal evolution of seismic swarms thus represents a major challenge. Herein, we develop a stochastic framework based on the well-established Continuous Time Random Walk (CTRW) model to map the spatiotemporal evolution of seismic swarms. Within this context, earthquake occurrence is considered a point process in space and time, with jump lengths and waiting times between successive earthquakes drawn from a joint probability density function. The spatiotemporal evolution of seismicity is then described with an appropriate master equation and the time-fractional diffusion equation (TFDE). The applicability of the model is demonstrated on the 2014 Long Valley Caldera (California) seismic swarm, which has been associated with a pore-fluid pressure triggering mechanism. Statistical analysis of the seismic swarm in the light of the CTRW model shows that the mean squared distance of event hypocenters grows slowly with time, with a diffusion exponent much lower than unity, as well as a broad waiting-time distribution with asymptotic power-law behavior.
Such properties are intrinsic characteristics of anomalous earthquake diffusion and particularly subdiffusion. Furthermore, the asymptotic solution of the TFDE can successfully capture the main features of earthquake progression in time and space, showing a peak of event concentration close to the initial source of the stress perturbation and a stretched relaxation of seismicity with distance. Overall, the results demonstrate that the CTRW model and the TFDE can efficiently be used to decipher the complex spatiotemporal evolution of seismic swarms.
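The subdiffusive scaling described above can be illustrated with a toy CTRW: heavy-tailed waiting times make the mean squared displacement grow as t**beta with beta < 1. This is a generic random walk under invented parameters, not the authors' seismicity fit:

```python
import numpy as np

rng = np.random.default_rng(1)

def msd_at(t_max, n_walkers=4000, beta=0.6):
    """Mean squared displacement of a 1-D CTRW at time t_max:
    unit-variance Gaussian jumps separated by Pareto waiting times
    with tail exponent beta < 1 (infinite mean -> subdiffusion)."""
    pos = np.zeros(n_walkers)
    t_next = rng.pareto(beta, n_walkers) + 1.0  # time of first jump
    while True:
        jumping = t_next <= t_max
        if not jumping.any():
            break
        pos[jumping] += rng.normal(0.0, 1.0, jumping.sum())
        t_next[jumping] += rng.pareto(beta, jumping.sum()) + 1.0
    return np.mean(pos ** 2)

# Effective diffusion exponent from two time points: MSD ~ t**beta
exponent = np.log(msd_at(10_000.0) / msd_at(100.0)) / np.log(100.0)
```

An exponent well below unity is the "anomalous" signature; for the swarm analyzed in the abstract, the analogous exponent comes from hypocenter distances rather than walker positions.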

Acknowledgements

The research project was supported by the Hellenic Foundation for Research and Innovation (H.F.R.I.) under the “2nd Call for H.F.R.I. Research Projects to support Post-Doctoral Researchers” (Project Number: 00256). 

How to cite: Michas, G. and Vallianatos, F.: Spatiotemporal Evolution of Seismic Swarms in the light of the Continuous Time Random Walk Model, EGU General Assembly 2024, Vienna, Austria, 14–19 Apr 2024, EGU24-9184, https://doi.org/10.5194/egusphere-egu24-9184, 2024.

15:15–15:25
|
EGU24-2615
|
ECS
|
On-site presentation
Davide Zaccagnino, Filippos Vallianatos, Giorgios Michas, Luciano Telesca, and Carlo Doglioni

Seismic activity clusters in space and time due to stress accumulation and static and dynamic triggering. Therefore, both moderate and large magnitude events can be preceded by smaller events, and seismic swarms can also occur without being followed by major shocks – which represents the vast majority of cases.

Unveiling whether seismic activity can forewarn mainshocks, being thus distinguished from swarms, is an issue of crucial importance for the development of short-term seismic hazard assessment. The analysis of thousands of clusters of seismicity before mainshocks in Southern California and Italy highlights that the surface over which selected seismic activity spreads is positively correlated with the magnitude of the impending mainshock, as are the cumulative seismic moment, the number of earthquakes, the variance of magnitude and its entropy, while no significant difference is observed in the duration, seismic rate, and trends of magnitudes and interevent times between foreshocks and swarms. Our interpretation is that crustal volumes and fault interfaces host increasingly correlated seismicity as they become unstable, and some properties of seismic clusters may mark their state of stability. For this reason, large mainshocks tend to occur in more extended correlated regions, also because of the scaling of maximum magnitudes with the size of unstable faults. Considering this, the recording of more numerous and energetic cluster activity before mainshocks than during swarms is also reasonable.

In recent years, our ability to track seismic clusters has improved outstandingly, so that their structural and statistical characterization can be performed almost in real time. Therefore, it may be possible to compare the current features of the active seismic cluster with the cumulative distribution functions of past seismicity. However, we would like to stress that foreshocks should not be considered as precursors in the sense that neither they forewarn mainshocks, nor they are physically different from swarms: the precursor is not in seismic activity itself, but in the development of mechanical instability within crustal volumes.

How to cite: Zaccagnino, D., Vallianatos, F., Michas, G., Telesca, L., and Doglioni, C.: What do seismic clusters tell us about fault stability?, EGU General Assembly 2024, Vienna, Austria, 14–19 Apr 2024, EGU24-2615, https://doi.org/10.5194/egusphere-egu24-2615, 2024.

15:25–15:35
|
EGU24-15724
|
On-site presentation
Luigi Passarelli, Simone Cesca, Leila Mizrahi, and Gesa Petersen

Tectonic earthquake swarms exhibit a distinct temporal and spatial pattern compared to mainshock-aftershock sequences. Unlike the latter, where the sequence typically starts with the largest earthquake, which triggers an Omori-Utsu temporal decay of aftershocks, earthquake swarms show a distinctive increase in seismic activity without a clear mainshock. The largest earthquake(s) in a swarm sequence often occur(s) later, and the sequence consists of multiple earthquake bursts showing spatial migration. This erratic clustering behavior of earthquake swarms arises from the interplay between the long-term accumulation of tectonic elastic strain and short-term transient forces. Detecting and investigating earthquake swarms challenges the community and ideally requires an unsupervised approach, which has led in recent decades to the emergence of numerous algorithms for earthquake swarm identification.

In a comprehensive review of commonly used techniques for detecting earthquake clusters, we applied a blend of declustering algorithms and machine learning clustering techniques to synthetic earthquake catalogs produced with a state-of-the-art ETAS model, with a time-dependent background rate mimicking realistic swarm-like sequences. This approach enabled the identification of boundaries in the statistical parameters commonly used to distinguish earthquake cluster types, i.e., mainshock-aftershock clusters versus earthquake swarms. The results obtained from synthetic data enabled a more accurate classification of seismicity clusters in real earthquake catalogs, as is the case for the 2010-2014 Pollino Range (Italy) seismic sequence, the Húsavík-Flatey transform fault seismicity (Iceland), and the regional catalog of Utah (USA). However, the classification obtained through automated application of these findings to real cases depends on the clustering algorithm utilized, the statistical completeness of the catalogs, and the spatial and temporal distribution of earthquakes, and benefits from a posteriori manual inspection. Nevertheless, the systematic assessment and comparison of commonly used methods – benchmarked in this work against synthetic catalogs and real seismicity – provides the community with clear and thorough guidelines for identifying swarm-like seismicity.
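One common ingredient of the cluster-detection techniques such a review covers is the nearest-neighbor space-time-magnitude distance in the style of Zaliapin & Ben-Zion. A brute-force sketch with invented constants and a toy catalog (not necessarily the variant used by the authors):

```python
import numpy as np

def nn_distance(times, xs, ys, mags, b=1.0, d_f=1.6):
    """For each event, the minimum rescaled distance
    eta = dt * r**d_f * 10**(-b * m_parent) over all earlier events.
    Small eta marks a likely cluster member; aftershock sequences and
    swarms separate differently in the eta distribution."""
    n = len(times)
    eta = np.full(n, np.inf)
    for j in range(1, n):
        for i in range(j):
            dt = times[j] - times[i]
            if dt <= 0:
                continue
            r = max(np.hypot(xs[j] - xs[i], ys[j] - ys[i]), 1e-3)
            eta[j] = min(eta[j], dt * r ** d_f * 10.0 ** (-b * mags[i]))
    return eta

# Toy catalog: event 1 is a close-in-time aftershock of a M5 at the
# origin, event 2 is a distant, much later background event
eta = nn_distance([0.0, 0.05, 200.0], [0.0, 1.0, 80.0],
                  [0.0, 0.5, 60.0], [5.0, 2.5, 2.4])
```

Production implementations vectorize this O(n²) loop and threshold eta (or a Gaussian-mixture fit of its distribution) to split clustered from background events.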

How to cite: Passarelli, L., Cesca, S., Mizrahi, L., and Petersen, G.: Detect and characterize swarm-like seismicity, EGU General Assembly 2024, Vienna, Austria, 14–19 Apr 2024, EGU24-15724, https://doi.org/10.5194/egusphere-egu24-15724, 2024.

15:35–15:45
|
EGU24-3482
|
ECS
|
On-site presentation
Paulo Sawi, Saeko Kita, and Roland Bürgmann

From October to December 2019, the provinces of Cotabato and Davao del Sur in the Philippines experienced an earthquake sequence that involved five M~6 (Mw 6.4, 6.6, 5.9, 6.5, and 6.7) inland earthquakes. A deep-neural-network-based phase picker, PhaseNet, was used to obtain the seismic phases of earthquake waveforms from stations within 200 km of the area of the events for 80 days from October 16 to December 31, 2019. The acquired seismic phases were initially associated and located using the Rapid Earthquake Association and Location (REAL). Subsequently, the initial hypocenter locations were adjusted through relocation utilizing VELEST, with further refinement achieved through the relative relocation technique hypoDD. By employing these methodologies, we successfully created an earthquake catalog that contains ~5,000 earthquakes for the corresponding period. The number of earthquakes determined through this method surpassed the ~3,000 event count reported in the original catalog by DOST-PHIVOLCS, which depended solely on manually selected seismic phases. The spatial distribution of the relocated hypocenters reveals two seismic alignments: one trending in the SW-NE direction, parallel to the existing mapped active faults, and the other in the NW-SE direction. These lineaments intersect near the location of the Mw6.4 event, suggesting the presence of a conjugate fault or cross fault. The created earthquake catalog illuminates the spatial and temporal evolution of seismicity following each significant event, offering insights into the detailed patterns that characterize the clustering of aftershocks.

How to cite: Sawi, P., Kita, S., and Bürgmann, R.: Machine-Learning-based Relocation Analysis: Revealing the Spatiotemporal Changes in the 2019 Cotabato and Davao del Sur Earthquakes, EGU General Assembly 2024, Vienna, Austria, 14–19 Apr 2024, EGU24-3482, https://doi.org/10.5194/egusphere-egu24-3482, 2024.

Coffee break
Chairpersons: Stefania Gentili, Álvaro González, Piero Brondi
16:15–16:25
|
EGU24-8786
|
On-site presentation
Ester Piegari, Giovanni Camanni, Martina Mercurio, and Warner Marzocchi

We present a method for automatically identifying segmented fault surfaces through the clustering of earthquake hypocenters without prior information. Our approach integrates density-based clustering algorithms (DBSCAN and OPTICS) with principal component analysis (PCA). Using the spatial distribution of earthquake hypocenters, DBSCAN detects primary clusters, which represent areas with the highest density of connected seismic events. Within each primary cluster, OPTICS identifies nested higher-order clusters, providing information on their quantity and size. PCA analysis is then applied to the primary and higher-order clusters to assess eigenvalues, enabling the differentiation of seismicity associated with planar features and distributed seismicity that remains uncategorized. The identified planes are subsequently characterized in terms of their location and orientation in space, as well as their length and height. By applying PCA analysis before and after OPTICS, a planar feature derived from a primary cluster can be interpreted as a fault surface, while planes derived from high-order clusters can be interpreted as fault segments within the fault surface. The consistency between the orientation of illuminated fault surfaces and fault segments, and that of the nodal planes of earthquake focal mechanisms calculated along the same faults, supports this interpretation. We show applications of the method to earthquake hypocenter distributions from various seismically active areas (Italy, Taiwan, California) associated with faults exhibiting diverse kinematics.
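The PCA planarity criterion at the core of the method can be sketched with plain numpy (the DBSCAN/OPTICS clustering stage is omitted, and the synthetic "fault" geometry below is invented for illustration):

```python
import numpy as np

def off_plane_fraction(points):
    """PCA on a hypocenter cloud: the smallest covariance eigenvalue
    as a fraction of total variance. Near zero => points lie on a
    plane whose normal is the corresponding eigenvector."""
    x = points - points.mean(axis=0)
    eig = np.sort(np.linalg.eigvalsh(np.cov(x.T)))
    return eig[0] / eig.sum()

rng = np.random.default_rng(3)
# Synthetic fault dipping 60 degrees, 0.1 km off-plane scatter
strike = np.array([1.0, 0.0, 0.0])
dipv = np.array([0.0, np.cos(np.radians(60.0)), -np.sin(np.radians(60.0))])
normal = np.cross(strike, dipv)
u, v = rng.uniform(-5.0, 5.0, (2, 1000))
hypos = (np.outer(u, strike) + np.outer(v, dipv)
         + np.outer(rng.normal(0.0, 0.1, 1000), normal))
score = off_plane_fraction(hypos)
```

A threshold on this score separates planar features (candidate fault surfaces or segments) from distributed, uncategorized seismicity; the in-plane eigenvectors then give the plane's orientation, length, and height.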

How to cite: Piegari, E., Camanni, G., Mercurio, M., and Marzocchi, W.: A Machine Learning-based Method for Identifying Segmented Fault Surfaces Through Hypocenter Clustering, EGU General Assembly 2024, Vienna, Austria, 14–19 Apr 2024, EGU24-8786, https://doi.org/10.5194/egusphere-egu24-8786, 2024.

16:25–16:35
|
EGU24-20167
|
ECS
|
On-site presentation
Hoby N.T. Razafindrakoto and A. Tahina Rakotoarisoa

Earthquake catalogs are a key element in seismic hazard assessment. However, they may be contaminated by non-natural earthquake sources. Hence, this study aims to discriminate between natural and non-natural earthquakes through machine learning techniques and the spatio-temporal distribution of the events.
First, we propose a Convolutional Neural Network based on spectrograms to perform the waveform classification. It is targeted at applications in Madagascar. The approach consists of three main steps: (1) generation of the time–frequency representation of ground-motion recordings (spectrogram); (2) training and validation of the model using spectrograms of ground shaking; (3) testing and prediction. To measure the compatibility between output predictions and given ground-truth labels, we adopt the commonly used loss function and accuracy measure. Given that the spatial distribution of the seismic data in Madagascar is non-uniform, we perform a two-step analysis. First, we adopt a supervised approach for 6051 known events in the central part of Madagascar. Then, we use the outcome for the second step of training and perform the prediction for non-categorized events throughout the country. The results show that our model has the potential to separate earthquakes from mining-related events. For the supervised approach, among the 20% of events used for testing, 97.48% are labeled correctly and 2.52% incorrectly. This pre-trained model is subsequently used to perform predictions for unlabeled events throughout Madagascar. Our results show that the model could learn the features of the classes even for data coming from different parts of Madagascar.
From the analyses of the spatio-temporal patterns of seismicity, we also found evidence of induced earthquakes associated with the heavy-oil exploration in Tsimiroro, Madagascar with an increase in the rate of earthquake occurrence in 2022.
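Step (1) of the pipeline, turning a waveform into a spectrogram, can be sketched with a hand-rolled short-time FFT (window sizes and the test signal below are illustrative; the authors' CNN then trains on such time-frequency images):

```python
import numpy as np

def spectrogram(signal, win=256, hop=128):
    """Magnitude spectrogram: Hann-windowed short-time FFT.
    Returns an array of shape (freq_bins, time_frames) - the
    time-frequency image a CNN classifier consumes."""
    window = np.hanning(win)
    frames = [np.abs(np.fft.rfft(signal[s:s + win] * window))
              for s in range(0, len(signal) - win + 1, hop)]
    return np.array(frames).T

# A 50 Hz test tone sampled at 1000 Hz
fs = 1000.0
t = np.arange(0.0, 2.0, 1.0 / fs)
spec = spectrogram(np.sin(2.0 * np.pi * 50.0 * t))
peak_hz = np.fft.rfftfreq(256, 1.0 / fs)[spec.mean(axis=1).argmax()]
```

Explosions and earthquakes differ in how energy is distributed across such an image (frequency content and its decay with time), which is what the convolutional filters pick up.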

How to cite: Razafindrakoto, H. N. T. and Rakotoarisoa, A. T.: Identification of Earthquakes and Anthropogenic Events in Madagascar, EGU General Assembly 2024, Vienna, Austria, 14–19 Apr 2024, EGU24-20167, https://doi.org/10.5194/egusphere-egu24-20167, 2024.

Earthquake Forecasting
16:35–16:45
|
EGU24-6097
|
On-site presentation
Laura Gulia, Stefan Wiemer, Emanuele Biondini, Bogdan Enescu, and Gianfranco Vannucci

Strong earthquakes are followed by countless smaller events, whose number decays with time: a posteriori, we call them aftershocks. Sometimes this sequence is interrupted by a larger event, and the “aftershocks” turn out to be foreshocks. In 2019, Gulia and Wiemer proposed a traffic-light tool, the Foreshock Traffic Light System (FTLS), that can discriminate between foreshocks and aftershocks by closely monitoring the size distribution of events. The model successfully passed a first near-real-time test (Gulia et al., 2020). A new version of the code, which can run in real time, has recently been developed. Since testing is the essence of the scientific method and is fundamentally important in seismicity forecast evaluation, we here show the performance of the new version of the FTLS through pseudo-prospective and, when possible, real-time tests on the available seismic sequences between 2016 and 2024.
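The decision logic of the FTLS can be sketched as follows. The ±10% thresholds follow the published 2019 description; this is an illustration of the rule, not the authors' new real-time code:

```python
def ftls_state(b_sequence, b_background, threshold=0.10):
    """Foreshock Traffic Light System rule of thumb: compare the
    b-value of the ongoing sequence with the regional background.
    A clear increase suggests an aftershock sequence (green);
    a clear drop raises foreshock concern (red); otherwise yellow."""
    ratio = b_sequence / b_background
    if ratio >= 1.0 + threshold:
        return "green"
    if ratio <= 1.0 - threshold:
        return "red"
    return "yellow"
```

In operation the difficulty lies upstream of this rule: estimating both b-values robustly, in near real time, from a sequence that is still incomplete.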

How to cite: Gulia, L., Wiemer, S., Biondini, E., Enescu, B., and Vannucci, G.: The performance of the Foreshock Traffic Light System for the period 2016-2024, EGU General Assembly 2024, Vienna, Austria, 14–19 Apr 2024, EGU24-6097, https://doi.org/10.5194/egusphere-egu24-6097, 2024.

16:45–16:55
|
EGU24-9366
|
ECS
|
On-site presentation
Leila Mizrahi and Dario Jozinović

The use of machine learning (ML) methods for earthquake forecasting has recently emerged as a promising avenue, with several recent publications exploring the application of neural point processes. Such models, in contrast to those currently applied in practice, offer the flexibility to incorporate additional datasets alongside earthquake catalogs, indicating potential for enhanced earthquake forecasting capabilities in the future. However, with a forecasting performance that currently remains similar to that of the agreed-upon benchmark, the Epidemic-Type Aftershock Sequence (ETAS) model, the black-box nature of ML models poses a challenge in communicating forecasts to lay audiences. The ETAS model has stood the test of time and is relatively simple and comprehensively understood, with few empirically derived laws describing aftershock triggering behavior. A main drawback of ETAS is its reliance on large numbers of simulations of possible evolutions of ongoing earthquake sequences, which is typically associated with long computation times or resources required for parallelization.

In this study, we propose a deep learning approach to emulate the output of the well-established ETAS model, bridging the gap between traditional methodologies and the potential advantages offered by machine learning. By focusing on modeling the temporal behavior of higher-order aftershocks, our approach aims to combine the interpretability of the ETAS model with the computational efficiency intrinsic to deep learning.

Evaluated using commonly applied metrics of both the ML and earthquake forecasting communities, our approach and the traditional, simulation-based approach are shown to perform very similarly in describing synthetic datasets generated with the simulation-based approach. Our method has two major benefits over the traditional approach. It is faster by several orders of magnitude, and it is not susceptible to being influenced by the presence (or absence) of individual 'extreme' realizations of the process, and thus enables accurate earthquake forecasting in near-real-time.
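The simulation step that makes traditional ETAS forecasting expensive boils down to drawing many aftershock times from an Omori-Utsu law, as in this inverse-transform sketch (parameter values are illustrative):

```python
import numpy as np

rng = np.random.default_rng(7)

def sample_omori_times(n, c=0.01, p=1.2, t_max=100.0):
    """Inverse-transform sampling of n aftershock times from a
    truncated Omori-Utsu density f(t) proportional to (t + c)**(-p)
    on [0, t_max] (days after the mainshock)."""
    u = rng.uniform(size=n)
    a = c ** (1.0 - p)
    b = (t_max + c) ** (1.0 - p)
    return (a + u * (b - a)) ** (1.0 / (1.0 - p)) - c

times = sample_omori_times(20_000)
```

A full ETAS simulation repeats this recursively for every triggered generation over thousands of catalog realizations — the cost, and the sensitivity to rare extreme realizations, that the proposed neural emulator is designed to avoid.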

How to cite: Mizrahi, L. and Jozinović, D.: Deep Learning for Higher-Order Aftershock Forecasting in Near-Real-Time, EGU General Assembly 2024, Vienna, Austria, 14–19 Apr 2024, EGU24-9366, https://doi.org/10.5194/egusphere-egu24-9366, 2024.

16:55–17:05
|
EGU24-3745
|
ECS
|
On-site presentation
Ke Gao

With earthquake disasters inflicting immense devastation worldwide, advancing reliable prediction models that utilize diverse data paradigms offers new perspectives for practicable prediction solutions. As reliable earthquake forecasting remains a grand challenge amidst complex fault dynamics, we employ combined finite-discrete element method (FDEM) simulations to generate abundant laboratory earthquake data. We propose a multimodal feature-fusion model that integrates temporal sensor data and wavelet-transformed visual kinetic energy to predict laboratory earthquakes. Comprehensive experiments under varied stress conditions confirm superior prediction capability over single-modal approaches by accurately capturing stick-slip events and patterns. Furthermore, efficient adaptation to new experiments is achieved through fine-tuning of a lightweight adapter module, enabling generalization. We present a novel framework leveraging multimodal features and transfer learning for advancing physics-based, data-driven laboratory earthquake prediction. As increasing multi-source monitoring data becomes available, the modeling strategies introduced here will facilitate the development of reliable real-world earthquake analysis systems.

How to cite: Gao, K.: Laboratory earthquake prediction via multimodal features, EGU General Assembly 2024, Vienna, Austria, 14–19 Apr 2024, EGU24-3745, https://doi.org/10.5194/egusphere-egu24-3745, 2024.

17:05–17:15
|
EGU24-17967
|
On-site presentation
Marta Han, Leila Mizrahi, and Stefan Wiemer

In our recent study, we developed an ETAS-based (Epidemic-Type Aftershock Sequence; Ogata, 1988) time-dependent earthquake forecasting model for Europe. Besides inverting a basic set of parameters describing aftershock behaviour on a highly heterogeneous dataset, we proposed several model variants. These incorporate the spatial variations in background rate inferred by ESHM20 already during the inversion of ETAS parameters; fix the term dictating the productivity law to specific values, to balance the more productive triggering by high-magnitude events (productivity law) against their much rarer occurrence (GR law); and use the b-positive method for the estimation of the b-value.

When testing the model variants, we apply the commonly used approach of performing retrospective tests on each model to check for self-consistency over long time periods and pseudo-prospective tests for comparison of models on one-day forecasting periods during seven years. While such pseudo-prospective tests reveal that some models indeed outperform others, for other model pairs, no significant performance difference was detected.

Here, we investigate in more detail the conditions under which performance differences of two competing models can be detected with statistical significance. Using synthetic tests, we investigate the effects of a catalog’s size and the magnitude range it covers on the significance of model performance difference. This will provide insight into whether recording many small events can, in this sense, replace having a large enough dataset of higher-magnitude events. Furthermore, due to the underrepresentation (or absence) of high-magnitude earthquakes in both training and testing data, both the models and tests are prone to overfitting to small events, potentially resulting in forecasts that underestimate both productivity of sequences with a high-magnitude main event and probabilities that a larger earthquake will follow such an event. We focus on defining metrics that highlight these properties as they are often of interest when applying time-dependent forecasting models to issuing operational earthquake forecasts.
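Pseudo-prospective comparisons of the kind described above often reduce to comparing per-period log-likelihoods of two forecasts. A hedged sketch of such a paired comparison, in the spirit of Rhoades-style t-tests (the function name and the omission of the significance lookup are ours):

```python
import math

def mean_information_gain(ll_a, ll_b):
    """Paired comparison of two forecasting models from their per-period
    log-likelihoods: returns the mean log-likelihood difference (information
    gain of model A over model B) and the paired t statistic. Comparing t
    against a Student-t quantile (omitted here) gives the significance."""
    diffs = [a - b for a, b in zip(ll_a, ll_b)]
    n = len(diffs)
    mean = sum(diffs) / n
    var = sum((d - mean) ** 2 for d in diffs) / (n - 1)
    t = mean / math.sqrt(var / n) if var > 0.0 else float("inf")
    return mean, t
```

As the abstract notes, with few high-magnitude periods the variance term is large, so even a positive mean gain may not reach significance.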

How to cite: Han, M., Mizrahi, L., and Wiemer, S.: The Effect of Data Limitations on Earthquake Forecasting Model Selection, EGU General Assembly 2024, Vienna, Austria, 14–19 Apr 2024, EGU24-17967, https://doi.org/10.5194/egusphere-egu24-17967, 2024.

17:15–17:25
|
EGU24-19288
|
ECS
|
On-site presentation
An Investigation into the Effectiveness of Machine Learning Algorithms for Earthquake Magnitude Prediction using Seismic Data
(withdrawn)
Basil Alabowsh and Wei Li

Posters on site: Thu, 18 Apr, 16:15–18:00 | Hall X4

Display time: Thu, 18 Apr, 14:00–Thu, 18 Apr, 18:00
Chairpersons: Stefania Gentili, Álvaro González, Piero Brondi
Analysis of Seismic Catalogs
X4.42
|
EGU24-13573
Marine Laporte, Stéphanie Durand, Blandine Gardonio, Thomas Bodin, and David Marsan

The frequency-magnitude distribution of earthquakes can be approximated by an exponential law whose exponent is the so-called b-value. The b-value is routinely used for probabilistic seismic hazard assessment. In this context, we propose to estimate the temporal variations of the b-value together with its uncertainties. The b-value is commonly estimated using the frequentist approach of Aki (1965), but biases may arise from the choice of completeness magnitude (Mc), the magnitude below which the exponential law is no longer valid. Here we propose to describe the full frequency-magnitude distribution of earthquakes by the product of an exponential law with a detection law. The latter is characterized by two parameters, μ and σ, that we jointly estimate with the b-value within a Bayesian framework. In this way, we use all the available data to recover the joint probability distribution of the b-value, μ and σ. We then extend this approach to recover temporal variations of the three parameters. To that aim, we randomly explore a large number of configurations of temporal variations of the three parameters with a Markov chain Monte Carlo (MCMC) method in a transdimensional framework. This provides posterior probability distributions of the temporal variations in the b-value, μ and σ. For an application to a seismic catalog of far-western Nepal, we show that the probability distribution of the b-value remains stable, with larger uncertainties during the monsoon period when the detectability decreases significantly. This confirms that we can see variations in the b-value that are independent of variations in detectability. Our results can be compared with the results and interpretations obtained using the b-positive approach. We hope that further applications to real and experimental data can provide statistical constraints on b-value variations and help to better understand the physical meaning behind them.
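The product model described above (an exponential Gutenberg-Richter law times a two-parameter detection law) can be sketched as follows; we assume a cumulative-normal detection law as in Ogata & Katsura (1993), which matches the μ, σ parameterization, although the authors' exact implementation may differ:

```python
import math

def detection_prob(m, mu, sigma):
    """Probability that an event of magnitude m is detected: cumulative
    normal with location mu and width sigma (assumed, after Ogata & Katsura, 1993)."""
    return 0.5 * (1.0 + math.erf((m - mu) / (sigma * math.sqrt(2.0))))

def observed_magnitude_density(m, b, mu, sigma):
    """Unnormalized density of *observed* magnitudes: Gutenberg-Richter
    exponential law (beta = b ln 10) times the detection law, so that no
    truncation at a completeness magnitude is needed."""
    beta = b * math.log(10.0)
    return beta * math.exp(-beta * m) * detection_prob(m, mu, sigma)
```

In the Bayesian scheme of the abstract, b, μ and σ would be sampled jointly, with this density entering the likelihood of each observed magnitude.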

How to cite: Laporte, M., Durand, S., Gardonio, B., Bodin, T., and Marsan, D.: A Bayesian transdimensional approach to estimate temporal changes in the b-value distribution without truncating catalogs, EGU General Assembly 2024, Vienna, Austria, 14–19 Apr 2024, EGU24-13573, https://doi.org/10.5194/egusphere-egu24-13573, 2024.

X4.43
|
EGU24-16739
|
ECS
Tsz Lam Lau and Hongfeng Yang

Determining mainshocks within an ongoing seismic sequence poses a challenge for real-time hazard assessment. This study aims to address this issue by analyzing temporal variations in the b-value derived from the Gutenberg-Richter law, with a focus on moderate-to-large events in Yunnan province, southwest China. Yunnan is well known to experience frequent earthquakes due to the convergence of the Indian and Eurasian tectonic plates, along with its complex subsurface geological structure and active fault zones. Earthquake data were analyzed from the Unified National Catalog over the period from January 2000 through December 2022; the magnitude of completeness is 1.4. We selected seismic sequences whose mainshock magnitudes were above 5. We employed a temporal b-value calculation approach, utilizing a minimum of 10 years of seismic data and including earthquakes within 20 km of the mainshock hypocenter. We used the long-term average b-value preceding the mainshock as the reference. Specifically, we compared the temporal b-value variation calculated for the one-month period following each mainshock to the reference b-value. In total, we investigated 23 sequences in the region. The b-value increased by 10% or more for 4 sequences and by less than 10% for 3 sequences. Three sequences showed a b-value reduction. Insufficient data prevented analysis of the other 13 sequences. In conclusion, assessing temporal b-value variations is an active research topic for evaluating ongoing earthquake sequences. By testing the application in Yunnan, our b-value analysis helps identify a few mainshock-aftershock sequences. However, we also observe contradictory cases. This limitation poses challenges for rapidly determining mainshocks in operational decision-making applications.

How to cite: Lau, T. L. and Yang, H.: Evaluating the Utility of b-Value for Discriminating Foreshocks and Mainshocks in Yunnan, southwest China, EGU General Assembly 2024, Vienna, Austria, 14–19 Apr 2024, EGU24-16739, https://doi.org/10.5194/egusphere-egu24-16739, 2024.

Spatio-temporal Analysis of Seismicity
X4.44
|
EGU24-10426
Filippos Vallianatos, Vasilis Kapetanidis, Andreas Karakonstantis, and Georgios Michas

On 27 September 2021, a significant Mw=6.0 earthquake struck near Arkalochori village in central Crete, Greece, about 25 km south-southeast of Heraklion city. Remarkably, an extensive seismic swarm lasting nearly four months preceded the mainshock, activating structures near its hypocenter. In this work, we investigate the foreshock swarm by leveraging waveform data from seismological stations of the Hellenic Unified Seismic Network (HUSN) that were operational on Crete Island during its occurrence. Our approach involves the utilization of the EQ-Transformer machine-learning model, pre-trained with a diverse dataset comprising ~50,000 earthquakes sourced from the INGV bulletin (INSTANCE dataset). We employ a sophisticated methodology that incorporates a Bayesian Gaussian Mixture Model (GaMMA) to associate automatically picked P- and S-wave arrival times with event origins. Subsequently, the events are located using a local velocity model. Our findings reveal the detection and precise location (ERH < 1 km, RMS < 0.2 s) of over 3,400 events in the activated area between late May and 26 September 2021, showcasing a substantial increase compared to existing catalogs derived from routine analysis using conventional methods. The spatiotemporal distribution of the foreshock seismicity is examined to unveil migration patterns, potentially linked to fluid dynamics and pore-pressure diffusion. Furthermore, we explore the evolution of seismicity concerning different structures activated during the seismic swarm, with a particular focus on the final days leading up to the mainshock. Finally, our results are subjected to analysis through non-extensive statistical physics methods, providing a comprehensive understanding of the complex dynamics culminating in the Arkalochori earthquake sequence.

Acknowledgements

We would like to thank the personnel of the institutions participating in the Hellenic Unified Seismological Network (http://eida.gein.noa.gr/) for the installation, operation and management of the seismological stations used in this work. The present study is co-funded by the Special Account for Research Grants (S.A.R.G.) of the National and Kapodistrian University of Athens.

How to cite: Vallianatos, F., Kapetanidis, V., Karakonstantis, A., and Michas, G.: Unraveling the dynamics of the 2021 Arkalochori foreshock swarm: a fusion of machine-learning models and non-extensive statistical physics, EGU General Assembly 2024, Vienna, Austria, 14–19 Apr 2024, EGU24-10426, https://doi.org/10.5194/egusphere-egu24-10426, 2024.

X4.45
|
EGU24-11734
Kyriaki Pavlou, Filippos Vallianatos, and Georgios Michas

Our research investigates the three recent and powerful earthquakes in the Ionian Sea region, which occurred on January 26, 2014, November 17, 2015, and October 25, 2018, with magnitudes Mw 6.1, Mw 6.0, and Mw 6.6, respectively, using complexity theory and non-extensive statistical physics (NESP).

We present the scaling properties observed in the aftershock sequences of these three strong earthquakes in the Ionian Islands region. To analyze the evolution of the three aftershock sequences, we plotted the cumulative number of aftershocks N(t) over time. Additionally, we used a modified version of Omori's law to study the temporal decay of aftershock activity.

Based on non-extensive statistical physics, the analysis of interevent times distribution suggests that the system is in an anomalous equilibrium, with a crossover from anomalous (q>1) to normal (q=1) statistical mechanics for large interevent times. The obtained values of q indicate that the system has either one or two degrees of freedom. Furthermore, the migration of aftershock zones can be scaled as a function of the logarithm of time. This scaling is discussed in terms of rate-strengthening rheology, which governs the evolution of the afterslip process.
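The crossover from anomalous (q>1) to normal (q=1) statistics mentioned above is usually expressed through the Tsallis q-exponential; a minimal sketch, where β is an inverse characteristic time with an illustrative default value:

```python
import math

def q_exponential(x, q, beta=1.0):
    """Tsallis q-exponential, the building block of NESP interevent-time
    distributions; for q -> 1 it reduces to the ordinary exponential."""
    if abs(q - 1.0) < 1e-9:
        return math.exp(-beta * x)
    base = 1.0 - (1.0 - q) * beta * x
    # for q > 1 the tail decays as a power law (heavier than exponential)
    return base ** (1.0 / (1.0 - q)) if base > 0.0 else 0.0
```

Fitting q to an observed interevent-time distribution and checking where it relaxes toward 1 is one way to locate the crossover the abstract describes.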

Acknowledgements

The present study is co-funded by the Special Account for Research Grants (S.A.R.G.) of the National and Kapodistrian University of Athens.

How to cite: Pavlou, K., Vallianatos, F., and Michas, G.: Spatio-temporal evolution and scaling laws analysis of the recent three strongest earthquakes in the Ionian Sea region during the period 2014-2019., EGU General Assembly 2024, Vienna, Austria, 14–19 Apr 2024, EGU24-11734, https://doi.org/10.5194/egusphere-egu24-11734, 2024.

X4.46
|
EGU24-4958
Fengling Yin and Changsheng Jiang

Within a span of 9 hours on February 6, 2023, two significant earthquakes, with magnitudes of Mw 7.8 and Mw 7.6, struck southeastern Türkiye and northern Syria, resulting in significant casualties and widespread economic losses. The occurrence of such intense earthquakes in rapid succession on adjacent faults, especially within a highly complex intraplate region with a multi-fault network, is a rare phenomenon, presenting new challenges for seismic hazard analysis in such areas. To investigate whether the preparatory processes for the Mw 7.8-7.6 earthquake doublet could be identified on a large spatial scale prior to the seismic events, we employed a data-driven approach for b-value calculation. The difference of b-values from the background values (Δb) in a reference period was used as input, and the Cumulative Migration Pattern (CMP) method, which quantitatively describes the migration of seismic activity, was utilized to calculate the corresponding probability distributions. The results indicate a widespread decrease in b-values in the study area over the decade before the occurrence of the earthquake doublet, revealing a significant enhancement of differential crustal stress over a large region. Additionally, despite not being the region with the most pronounced decrease in b-values, there is a distinct high probability distribution of CMP near the nucleation points of the earthquake doublet, indicating a spatial and temporal "focus" of increased crustal differential stress in the study area and unveiling the preparatory process of the earthquake doublet. This study reveals quantifiable migration patterns over a long time scale and a large spatial extent, providing new insights into the evolution and occurrence processes of the 2023 Kahramanmaraş Mw 7.8-7.6 earthquake doublet. Moreover, it offers potential clues for seismic hazard analysis in intraplate regions with multiple fault systems.

How to cite: Yin, F. and Jiang, C.: Unraveling the Preparatory Processes of the 2023 Kahramanmaraş MW7.8-7.6 Earthquake Doublet, EGU General Assembly 2024, Vienna, Austria, 14–19 Apr 2024, EGU24-4958, https://doi.org/10.5194/egusphere-egu24-4958, 2024.

X4.47
|
EGU24-14304
|
Giuseppe Falcone, Ilaria Spassiani, Stefania Gentili, Rodolfo Console, Maura Murru, and Matteo Taroni

Short-term seismic clustering, a crucial aspect of seismicity, has been extensively studied in literature. Existing techniques for cluster identification are predominantly deterministic, relying on specific constitutive equations to define spatiotemporal extents. Conversely, probabilistic models, such as the Epidemic Type Aftershock Sequence (ETAS) model, dominate short-term earthquake forecasting. The ETAS model, known for its stochastic nature, has been employed to decluster earthquake catalogs probabilistically. However, the challenge arises when selecting a probability threshold for cluster identification, potentially distorting the model's underlying hypothesis.
This study aims to assess the consistency between seismic clusters identified by deterministic window-based techniques (specifically, Gardner-Knopoff and Uhrhammer-Lolli-Gasperini) and the probabilities predicted by the ETAS model for events within these clusters. Both deterministic techniques are implemented in the NESTOREv1.0 package and applied to the Italian earthquake catalog spanning from 2005 to 2021.
The comparison involves evaluating, for each event within an identified cluster, both the probability of independence and the expected number of triggered events according to the ETAS model. Results demonstrate overall agreement between the two cluster identification methods, with identified clusters exhibiting consistency with corresponding ETAS probabilities. Any minor discrepancies observed can be attributed to the fundamentally different nature of the deterministic and probabilistic approaches.
This research is supported by a grant from the Italian Ministry of Foreign Affairs and International Cooperation.
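For orientation, the window-based identification compared above attaches a magnitude-dependent space-time window to each mainshock. A sketch using the coefficients usually quoted for Gardner & Knopoff (1974); NESTOREv1.0 and the Uhrhammer-Lolli-Gasperini variant use different parameterizations:

```python
def gardner_knopoff_window(mag):
    """Magnitude-dependent aftershock window: radius in km and duration in
    days, with the coefficients usually quoted for Gardner & Knopoff (1974).
    Events inside the window of a larger shock are treated as dependent."""
    distance_km = 10.0 ** (0.1238 * mag + 0.983)
    if mag >= 6.5:
        time_days = 10.0 ** (0.032 * mag + 2.7389)
    else:
        time_days = 10.0 ** (0.5409 * mag - 0.547)
    return distance_km, time_days
```

The hard in/out decision this window makes is exactly what the abstract contrasts with the ETAS model's probabilistic event-by-event independence probabilities.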

How to cite: Falcone, G., Spassiani, I., Gentili, S., Console, R., Murru, M., and Taroni, M.: Comparative Analysis of Seismic Clustering: Deterministic Techniques vs. Probabilistic ETAS Model, EGU General Assembly 2024, Vienna, Austria, 14–19 Apr 2024, EGU24-14304, https://doi.org/10.5194/egusphere-egu24-14304, 2024.

X4.48
|
EGU24-3684
|
Jeanne Hardebeck and Ruth Harris

Why some aftershocks appear to occur in stress shadows, regions of Coulomb stress decrease due to a mainshock, is an open question with implications for physical and statistical aftershock models. New machine-learning focal mechanism catalogs make it possible to study the fault orientations of aftershocks occurring in the stress shadows, and test competing hypotheses about their origins. There are three main hypotheses: (1) Aftershocks appear in shadows because of inaccuracy in the computed stress change. (2) Aftershocks in the shadows occur on faults with different orientations than the model receiver faults, and these unexpected fault orientations experience increased Coulomb stress. (3) Aftershocks in the shadows are triggered by dynamic stress changes. We test these three hypotheses on the 2016 Kumamoto and 2019 Ridgecrest sequences. We test Hypothesis 1 through many realizations of the stress calculations with multiple mainshock models, multiple receiver fault orientations based on background events, and a range of coefficients of friction. We find that numerous aftershocks are consistently in the stress shadows. To test Hypothesis 2, we consider whether the individual event focal mechanisms receive an increase of Coulomb stress. Again, we perform many realizations of the stress calculation, this time with receiver fault orientations based on the focal mechanism and its uncertainty. Many of the aftershocks in the shadows consistently show a Coulomb stress decrease on the planes of their focal mechanisms. These results imply that aftershocks do occur in stress shadows, many on fault planes receiving a decrease in static Coulomb stress, contrary to Hypotheses 1 and 2. We test Hypothesis 3 by investigating the modeled dynamic stress changes on the individual event focal mechanisms. 
Preliminary results show that while the amplitude of the dynamic Coulomb stress change is generally lower on the aftershock nodal planes than on the planes of background events, the amplitude of the dynamic normal stress change is often 20%-100% higher. This suggests a dynamic triggering mechanism related to changing fault strength.
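The static Coulomb stress changes tested above follow the standard definition; a minimal sketch (sign convention: positive normal stress change means unclamping, and the effective friction value is an illustrative assumption, not taken from the abstract):

```python
def coulomb_stress_change(d_shear, d_normal, mu_eff=0.4):
    """Static Coulomb failure stress change on a receiver fault (MPa):
    dCFS = d_shear + mu_eff * d_normal, with d_shear resolved in the slip
    direction and d_normal positive for unclamping. mu_eff = 0.4 is a
    typical effective friction value, assumed here for illustration."""
    return d_shear + mu_eff * d_normal

# a receiver in a "stress shadow": shear stress drops and the fault is clamped
shadow = coulomb_stress_change(-0.05, -0.2)
```

Hypotheses 1 and 2 in the abstract amount to repeating this calculation over many mainshock models, friction values, and receiver fault orientations.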

How to cite: Hardebeck, J. and Harris, R.: Stress Shadows: Insights into Physical Models of Aftershock Triggering, EGU General Assembly 2024, Vienna, Austria, 14–19 Apr 2024, EGU24-3684, https://doi.org/10.5194/egusphere-egu24-3684, 2024.

X4.49
|
EGU24-5516
|
ECS
Venkata Gangadhara Rao Kambala and Piotr Senatorski

Due to the long recurrence time of the largest earthquakes and the short time covered by seismic catalogues, the potential for the strongest earthquakes in a given region should be estimated based both on combined seismological and geodetic observations and on developed seismicity models. At the same time, the asperity model, which is a general view of earthquake occurrence in seismic zones, still requires refinement and more solid empirical support.

In this study, we use new data science methods to analyze and interpret various data from selected subduction and collision zones, including the Japan, Chile, and Himalaya-Nepal regions. First, we estimate the expected recurrence times of large earthquakes within a given magnitude range as functions of the Gutenberg-Richter b-value, for an assumed maximum magnitude and seismic moment deficit accumulation rate due to tectonic plate movement. Second, we show seismicity patterns and underlying asperity structures using graphs representing the forerunning and afterrunning earthquakes, which are strictly defined based on the location of earthquakes in time and space, as well as their sizes.

In particular, we propose a method to estimate the rupture areas and magnitudes of possible megathrust earthquakes based on seismicity from the last few decades. We use the graph characteristics to distinguish among different seismicity patterns and scenarios. We also argue that changes in these features over time and space can be used for seismicity forecasting.

Keywords: earthquake forecasting, Gutenberg-Richter law, recurrence time, asperities, forerunning earthquakes.
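The first step described above, recurrence times as a function of the b-value for an assumed maximum magnitude and moment deficit rate, can be sketched by balancing the tectonic moment rate against a truncated Gutenberg-Richter population. This is our illustrative reading, not the authors' exact formulation; the Hanks-Kanamori moment-magnitude relation is assumed:

```python
import math

def recurrence_time(target_mag, b, m_max, moment_rate, m_min=4.0, dm=0.01):
    """Mean recurrence time (years) of earthquakes with magnitude >= target_mag,
    assuming the whole tectonic moment deficit rate `moment_rate` (N*m/yr) is
    released by a Gutenberg-Richter population truncated at m_max.
    Hanks-Kanamori moment assumed: M0 = 10**(1.5*m + 9.1) N*m."""
    beta = b * math.log(10.0)
    n = int(round((m_max - m_min) / dm))
    grid = [m_min + i * dm for i in range(n + 1)]
    weights = [math.exp(-beta * (m - m_min)) for m in grid]  # GR density
    total_w = sum(weights)
    # mean moment per event under the truncated GR distribution
    mean_moment = sum(w * 10.0 ** (1.5 * m + 9.1)
                      for m, w in zip(grid, weights)) / total_w
    event_rate = moment_rate / mean_moment  # events/yr with m >= m_min
    frac = sum(w for m, w in zip(grid, weights) if m >= target_mag) / total_w
    return 1.0 / (event_rate * frac)
```

As expected, for a fixed moment rate the recurrence time grows steeply with the target magnitude and is sensitive to both b and the assumed maximum magnitude.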

How to cite: Kambala, V. G. R. and Senatorski, P.: Asperity distribution and earthquake recurrence time based on patterns of forerunning earthquakes., EGU General Assembly 2024, Vienna, Austria, 14–19 Apr 2024, EGU24-5516, https://doi.org/10.5194/egusphere-egu24-5516, 2024.

X4.50
|
EGU24-8744
|
ECS
Sarah Mouaoued, Michel Campillo, and Léonard Seydoux

We analyze seismic data continuously recorded in the vicinity of the Mw 7.1 2019 Ridgecrest earthquake with the unsupervised deep learning method proposed by Seydoux et al. (2020), in search of seismic signatures of the earthquake preparation phase. We use data from three stations: B918, with a 100 Hz sampling frequency, and SRT and CLC, with a 40 Hz sampling frequency. Using a scattering network combined with an independent component analysis, we define stable waveform features and cluster the continuous signals extracted from a sliding window, before proposing cluster-based interpretations of the seismic signals. We further discuss our results against external datasets, such as independently obtained seismicity catalogs of the area, and we also investigate a manifold-learning-based 2D representation (UMAP) of the scattering-network features. According to our first results, merged with a catalog analysis, we are able to separate various events from the noise and identify several types of seismicity and noise.

How to cite: Mouaoued, S., Campillo, M., and Seydoux, L.: Exploring the 2019 Ridgecrest seismic data with unsupervised deep learning , EGU General Assembly 2024, Vienna, Austria, 14–19 Apr 2024, EGU24-8744, https://doi.org/10.5194/egusphere-egu24-8744, 2024.

X4.51
|
EGU24-634
|
ECS
Wade van Zyl, Diego Quiros, and Alastair Sloan

Ground motion caused by near-source seismic waves from shallow earthquakes can be dangerous to vital infrastructure such as nuclear power plants. South Africa is a stable continental region (SCR); however, significant seismicity is known to occur. Near Cape Town and the Koeberg Nuclear Power Station, historical sources record an earthquake with a potential magnitude of 6.5 in 1809. On September 29th, 1969, the magnitude 6.3 Ceres-Tulbagh earthquake affected an area less than 100 kilometers from the Koeberg Nuclear Power Station. These events emphasize the need to take the potential seismic hazard in this area seriously. Previous research has shown that the source zones of historic and even prehistoric SCR earthquakes are frequently associated with enhanced microseismicity over hundreds or even thousands of years. This study seeks to investigate possible source zones for the 1809 event, and possible sources of future damaging earthquakes, by establishing whether earthquakes can be detected on regional structures. To accomplish these goals, we deployed 18 three-component seismographs over a 40-by-35-kilometer area near the Koeberg Nuclear Power Station. The network, which covered the Colenso fault zone, was also near the postulated Milnerton fault, the Ceres-Tulbagh region, and the Cape Town area. The network recorded for three months, between August and October 2021. We searched for seismicity around known structures, like the Colenso fault, using supervised machine learning algorithms like PhaseNet, traditional STA/LTA algorithms, and manual inspection, in addition to unsupervised machine learning algorithms such as density-based spatial clustering of applications with noise (DBSCAN) and Bayesian Gaussian Mixture Models (BGMMs). We detected 35 events dispersed throughout our research area.
These events appear to be organized into three broad groups: the first is an offshore cluster outside of the study region, and the second is a scattered cluster between the Colenso fault system and the postulated Milnerton fault. The third concentrates on the Colenso fault system, implying that it may be active. Additional results show that traditional methods like STA/LTA are far less accurate at detecting micro-seismic events than manual inspection of waveform data and machine learning (i.e., combining the unsupervised and supervised machine learning algorithms into an earthquake identification tool).
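The traditional STA/LTA detection used as a baseline above compares short- and long-term averages of signal energy. A minimal sketch; window lengths and the synthetic trace are illustrative, and production codes such as ObsPy's recursive STA/LTA differ in detail:

```python
def sta_lta(signal, nsta, nlta):
    """Classic STA/LTA detector: ratio of the short-term to the long-term
    average of squared amplitudes; zeros until a full LTA window exists."""
    energy = [x * x for x in signal]
    ratio = [0.0] * len(signal)
    for i in range(nlta, len(signal)):
        sta = sum(energy[i - nsta:i]) / nsta
        lta = sum(energy[i - nlta:i]) / nlta
        ratio[i] = sta / lta if lta > 0.0 else 0.0
    return ratio

# quiet noise followed by a sudden-onset "event"
trace = [0.01] * 200 + [1.0] * 20
ratios = sta_lta(trace, nsta=5, nlta=100)
```

A detection is declared when the ratio crosses a tuned threshold; the abstract's finding is that such fixed-threshold triggers miss micro-events that ML pickers and manual review recover.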

How to cite: van Zyl, W., Quiros, D., and Sloan, A.: Mapping micro-seismicity around a nuclear power station in stable South Africa through machine learning, EGU General Assembly 2024, Vienna, Austria, 14–19 Apr 2024, EGU24-634, https://doi.org/10.5194/egusphere-egu24-634, 2024.

Earthquake Forecasting
X4.52
|
EGU24-5878
|
Stefania Gentili, Giuseppe Davide Chiappetta, Giuseppe Petrillo, Piero Brondi, Jiancang Zhuang, and Rita Di Giovambattista

NESTORE (Next STrOng Related Earthquake) is a machine learning algorithm for forecasting strong aftershocks during ongoing earthquake clusters. It has already been successfully applied to Italian, Greek and Californian seismicity. A free MATLAB version of the software (NESTOREv1.0) is available on GitHub. The method is trained on the region under investigation using seismicity characteristics. The obtained region-specific parameters are used to provide the probability, for ongoing clusters, that the strongest aftershock has a magnitude greater than or equal to the mainshock magnitude minus 1. If this probability is greater than or equal to 0.5, the cluster is labeled as type A, otherwise as type B. The current version of the code is modular, and the cluster identification method is based on a window approach, where the size of the spatio-temporal window can be adjusted according to the characteristics of the analyzed region.

In this study, we applied NESTORE to the seismicity of Japan using the Japan Meteorological Agency (JMA) catalogue from 1973 to 2022. To account for the highly complex seismicity of the region, we replaced the cluster identification module with software that uses a stochastic declustering approach based on the ETAS model.

The analysis is performed in increasing time intervals after the mainshock, starting a few hours later, to simulate the evolution of knowledge over time. The analysis showed a high prevalence of clusters where there are no strong earthquakes later than 3 hours after the mainshock, leading to an imbalance between type A and type B classes.

NESTORE was trained with data from 1973 to 2004 and tested from 2005 onwards. The large imbalance in the data was mitigated by carefully analyzing the training set and developing techniques to remove outliers. The cluster type forecasting was correct in 84% of cases.
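The labeling rule described above reduces to a simple probability threshold; as a sketch (the function name is ours):

```python
def classify_cluster(p_strong_aftershock, threshold=0.5):
    """Label an ongoing cluster: type 'A' if the estimated probability that
    the strongest aftershock reaches the mainshock magnitude minus 1 meets
    the threshold, type 'B' otherwise."""
    return "A" if p_strong_aftershock >= threshold else "B"
```

The machine learning effort in NESTORE goes into estimating that probability from seismicity features; given the class imbalance reported above, the 0.5 threshold is the part most affected by outlier removal in training.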

 

Funded by a grant from the Italian Ministry of Foreign Affairs and International Cooperation; co-funded within the RETURN Extended Partnership, which received funding from the European Union Next-GenerationEU (National Recovery and Resilience Plan - NRRP, Mission 4, Component 2, Investment 1.3 – D.D. 1243 2/8/2022, PE0000005), and by the NEar real-tiME results of Physical and StatIstical Seismology for earthquakes observations, modeling and forecasting (NEMESIS) Project (INGV).

How to cite: Gentili, S., Chiappetta, G. D., Petrillo, G., Brondi, P., Zhuang, J., and Di Giovambattista, R.: Forecasting Strong Subsequent Earthquakes in Japan Using NESTORE Machine Learning Algorithm: preliminary results , EGU General Assembly 2024, Vienna, Austria, 14–19 Apr 2024, EGU24-5878, https://doi.org/10.5194/egusphere-egu24-5878, 2024.

X4.53
|
EGU24-6346
Piero Brondi, Stefania Gentili, Matteo Picozzi, Daniele Spallarossa, and Rita Di Giovambattista

Italy is a country affected by strong seismic activity due to the collision between the African and Eurasian plates. In such an area, it often happens that a first strong earthquake (FSE) is followed by a subsequent strong event (SSE) of similar magnitude. In recent years, several studies have attempted to analyze the correlation between the occurrence of a possible SSE in an area and the spatio-temporal distribution of the stress drop in the same area. In this work, we investigated this relationship in Central Italy using the Rapid Assessment of MOmeNt and Energy Service (RAMONES), which provides source parameters for events that have occurred in the area since 2007. Using the 12,900 ML≥2 events available in the RAMONES catalog and a window-based clustering method, we obtained 25 clusters between 2009 and 2017 with an FSE magnitude greater than or equal to 4. Among them are the clusters corresponding to the L'Aquila earthquake (2009) and the Amatrice earthquake (2016). The magnitude difference between the FSE and the strongest SSE (DM) is less than or equal to 1 in 64% of the cases and greater than 1 in 36%. In the first case we labelled the cluster as type A, in the second as type B. By analyzing the ratio between seismic energy and seismic moment provided by RAMONES over the entire duration of the cluster, we found that almost all A clusters show a maximum change in apparent stress over time larger than that of B clusters. To a first approximation, this observation also holds when analyzing the seismicity before the strongest SSE, or at the first SSE. These preliminary results are therefore encouraging for the future forecasting of SSEs in Central Italy.
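Apparent stress, the quantity tracked above, is the rigidity-scaled ratio of radiated seismic energy to seismic moment. A minimal sketch; the crustal rigidity value is a typical assumption, not taken from the abstract:

```python
def apparent_stress(radiated_energy, seismic_moment, rigidity=3.0e10):
    """Apparent stress in Pa: rigidity times the ratio of radiated seismic
    energy to seismic moment (both in SI units). A rigidity of 3e10 Pa is a
    typical crustal value, assumed here for illustration."""
    return rigidity * radiated_energy / seismic_moment
```

Tracking this quantity event by event over a cluster's lifetime, as the abstract does via RAMONES, turns the energy-to-moment ratio into a time series whose maximum change separates type A from type B clusters.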

Funded by the NEar real-tiME results of Physical and StatIstical Seismology for earthquakes observations, modeling and forecasting (NEMESIS) Project (INGV).

How to cite: Brondi, P., Gentili, S., Picozzi, M., Spallarossa, D., and Di Giovambattista, R.: Characterizing clusters with strong subsequent events in Central Italy using RAMONES, EGU General Assembly 2024, Vienna, Austria, 14–19 Apr 2024, EGU24-6346, https://doi.org/10.5194/egusphere-egu24-6346, 2024.

X4.54
|
EGU24-13405
|
ECS
Emanuele Biondini, Flavia D'Orazio, Barbara Lolli, and Paolo Gasperini

The analysis of space-time variations of the b-value of the frequency-magnitude distribution of earthquakes can be an important indicator for understanding the processes that precede strong earthquakes. Variations in the b-value can provide valuable information on the stress state and the probability of earthquake occurrence in a specific geographical region. By analyzing spatial variations in the b-value, changes in local tectonic conditions can be identified, highlighting areas where seismic risk may increase. Similarly, the analysis of temporal variations in the b-value can reveal patterns preceding seismic events, providing a possible precursor signal. Such variations could be the result of complex geological processes, such as the progressive accumulation of stress along active faults or the presence of underground fluids that influence fault dynamics. Indeed, as has been observed in many cases, the b-value tends to decrease in the preparatory phase of a strong earthquake and to increase suddenly after the mainshock occurrence.

To evaluate this hypothesis, in this work we implement an alarm-based forecasting method that uses b-value space-time variations as a precursor signal. The method has been retrospectively calibrated and optimized over the period 1990-2011 to forecast Italian shallow earthquakes (Z < 50 km) with magnitude larger than 5.0.

The method has then been applied pseudo-prospectively over the period 2011-2022, and its forecasting skill has been assessed using tests and statistics specific to alarm-based models. This skill has also been compared with that of another alarm-based earthquake forecasting model that uses the occurrence of potential foreshocks as a precursor signal.
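Alarm-based forecasts are typically evaluated with Molchan-diagram statistics. As a minimal sketch (not the authors' actual test suite), assuming binary alarm and target grids over space-time cells:

```python
import numpy as np

def molchan_point(alarms, targets):
    """One point of a Molchan diagram for an alarm-based forecast:
    tau = fraction of space-time cells covered by alarms,
    nu  = fraction of target earthquakes missed (no alarm in their cell).
    Skill better than random plots below the diagonal nu = 1 - tau."""
    alarms = np.asarray(alarms, dtype=bool)
    targets = np.asarray(targets, dtype=bool)
    tau = float(alarms.mean())
    nu = 1.0 - float((alarms & targets).sum()) / float(targets.sum())
    return tau, nu
```

Varying the alarm threshold traces out the full Molchan trajectory used to compare competing models.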

How to cite: Biondini, E., D'Orazio, F., Lolli, B., and Gasperini, P.: Pseudo-prospective earthquakes forecasting experiment in Italy based on temporal variation of the b-value of the Gutenberg-Richter law., EGU General Assembly 2024, Vienna, Austria, 14–19 Apr 2024, EGU24-13405, https://doi.org/10.5194/egusphere-egu24-13405, 2024.

X4.55
|
EGU24-3569
|
ECS
Jonas Köhler, Wei Li, Johannes Faber, Georg Rümpker, and Nishtha Srivastava

Reliable earthquake forecasting methods have long been sought after, and the rise of modern data science techniques raises a new question: can deep learning learn the patterns that precede large earthquakes?
In this study, we leverage the large number of earthquakes reported thanks to the good seismic station coverage in the subduction zone of Japan. We pose earthquake forecasting as a classification problem and train a deep learning network to decide whether a time series of length ≥ 2 years will end in an earthquake of magnitude ≥ 5 on the following day.
 
Our method is based on spatiotemporal b-value data, on which we train an autoencoder to learn normal seismic behaviour. We then take the pixel-by-pixel reconstruction error as input for a Convolutional Dilated Network classifier, whose output could serve for earthquake forecasting. We develop a special progressive training method for this model to mimic real-life use. The trained network is then evaluated over the actual data series of Japan from 2002 to 2020 to simulate a real-life application scenario. The overall accuracy of the model is 72.3%, significantly above the baseline, and can likely be improved with more data in the future.
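The classification framing described above amounts to a windowing step over the map time series. A sketch under assumed inputs (hypothetical arrays of b-value maps and daily labels, not the authors' actual pipeline):

```python
import numpy as np

def make_windows(bmaps, labels, window_len):
    """Frame forecasting as classification: each sample is a window of
    consecutive b-value maps, and its label says whether a magnitude >= 5
    event occurs on the day after the window."""
    X, y = [], []
    for t in range(window_len, len(bmaps)):
        X.append(bmaps[t - window_len:t])
        y.append(labels[t])
    return np.stack(X), np.asarray(y)
```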

How to cite: Köhler, J., Li, W., Faber, J., Rümpker, G., and Srivastava, N.: Testing the Potential of Deep Learning in Earthquake Forecasting, EGU General Assembly 2024, Vienna, Austria, 14–19 Apr 2024, EGU24-3569, https://doi.org/10.5194/egusphere-egu24-3569, 2024.

X4.56
|
EGU24-17178
|
ECS
Foteini Dervisi, Margarita Segou, Brian Baptie, Ian Main, and Andrew Curtis

The development of novel deep learning-based earthquake monitoring workflows has led to a rapid increase in the availability of earthquake catalogue data. Earthquake catalogues are now being created by deep learning algorithms in significantly reduced processing times compared to catalogues built by human analysts, and contain at least ten times more earthquakes. The use of these rich catalogues has been shown to improve the predictive power of statistical and physics-based forecasts. Combined with the increasing availability of computational power, which has greatly contributed to the recent breakthroughs in artificial intelligence, pairing rich datasets with machine learning workflows is a promising approach to uncovering novel insights about earthquake sequences and discovering previously undetected relationships within earthquake catalogues.

Our focus is on employing deep learning architectures to produce high-quality earthquake forecasts. Our hypothesis is that deep neural networks are able to uncover underlying patterns within rich earthquake catalogue datasets and produce accurate forecasts of earthquakes, provided that a representative dataset that accurately reflects the properties of earthquake sequences is used for training. We use earthquake catalogue data from different geographical regions to build a time series of spatiotemporal maps of past seismicity. We then split this time series into training, validation, and test datasets in order to explore the ability of deep neural networks to capture patterns within sequences of seismicity maps and produce short-term spatiotemporal earthquake forecasts.

We assess the performance of the trained deep learning-based forecasting models using metrics from the machine learning and time-series forecasting domains. We compare the trained models against a null hypothesis, the persistence model, which assumes no change between consecutive time steps and is commonly used as a baseline in time series forecasting. The persistence null hypothesis proves very effective because, when only background seismicity is observed, there is very little change between consecutive time steps. We also evaluate the relative performance of different deep learning architectures and assess their suitability for our specific problem. We conclude that deep learning techniques are a promising alternative to established statistical and physics-based earthquake forecasting methods, as, once trained, deep learning models have the potential to produce high-quality short-term earthquake forecasts within seconds. This realisation can influence the future of operational earthquake forecasting and earthquake predictability.
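The persistence baseline is simple to state in code. A minimal sketch, assuming the seismicity maps form an array indexed by time (the metric choice here is illustrative):

```python
import numpy as np

def persistence_forecast(maps):
    """Persistence baseline: the forecast for time step t is simply the
    observed map at t-1. Returns (forecasts, observations) aligned in time."""
    maps = np.asarray(maps, dtype=float)
    return maps[:-1], maps[1:]

def mse(forecasts, observations):
    """Mean squared error between forecast and observed seismicity maps."""
    return float(np.mean((forecasts - observations) ** 2))
```

Any trained model must beat this score to demonstrate skill beyond "tomorrow looks like today".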

How to cite: Dervisi, F., Segou, M., Baptie, B., Main, I., and Curtis, A.: Towards a Deep Learning Approach for Data-Driven Short-Term Spatiotemporal Earthquake Forecasting, EGU General Assembly 2024, Vienna, Austria, 14–19 Apr 2024, EGU24-17178, https://doi.org/10.5194/egusphere-egu24-17178, 2024.

Posters virtual: Thu, 18 Apr, 14:00–15:45 | vHall X4

Display time: Thu, 18 Apr, 08:30–Thu, 18 Apr, 18:00
Chairpersons: Stefania Gentili, Álvaro González, Piero Brondi
vX4.32
|
EGU24-14398
Ryuku Hashimoto and Yasuhisa Kuzuha

Challenges in Aftershock Forecasting

The probabilistic evaluation of aftershock activity relies on two empirical laws: the Gutenberg–Richter law (GR law) and the Modified Omori law (MO law). An important issue arises in aftershock observation, particularly regarding the technical aspects of seismic monitoring: smaller earthquakes are more challenging to detect than larger ones. When records of smaller seismic events are missing, the b value of the GR law and the K value of the MO law tend to be underestimated. This study develops a model that corrects the underestimation of these parameters based on main shock information.
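For reference, the two laws can be written down directly. A sketch of the GR frequency-magnitude relation and of the expected aftershock count implied by the MO law (standard textbook forms, with illustrative parameter values):

```python
import numpy as np

def gr_count(m, a, b):
    """Gutenberg-Richter law: log10 N(>=m) = a - b*m, so N = 10**(a - b*m)."""
    return 10.0 ** (a - b * m)

def omori_count(k, c, p, t1, t2):
    """Expected number of aftershocks in [t1, t2] from the Modified Omori
    law n(t) = K / (t + c)**p (t in days since the main shock)."""
    if np.isclose(p, 1.0):
        return k * np.log((t2 + c) / (t1 + c))
    return k * ((t2 + c) ** (1.0 - p) - (t1 + c) ** (1.0 - p)) / (1.0 - p)
```

Undetected small events bias both fits: missing low magnitudes flatten the GR line (lower b), and missing early aftershocks lower the fitted K.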

 

Data and Methodology

Seismic source data from the Japan Meteorological Agency were used, including the magnitudes, epicentral latitudes and longitudes, and occurrence times of main shock–aftershock sequences. Using data from the periods extending from immediately after the main shock to 3 hours, 1 day, 30 days, and 90 days, the K value of the MO law and the b value of the GR law were calculated. The objective was to investigate the relation between these parameters and the main shock magnitude (hereinafter, M0). Based on these relations, methodologies for correcting the parameters were explored.

 

Results and Discussion

  • Relation between M0 and parameters

A significant negative correlation was found between M0 and the b value, with larger M0 values associated with smaller b values. Furthermore, the correlation was stronger for b values estimated closer to the immediate aftermath of the main shock. This strong correlation suggests that larger M0 values are more likely to result in the omission of weaker seismic events from the data. The omission of earthquakes is particularly noticeable immediately following the occurrence of the main shock. Similarly, the K value tends to be underestimated immediately after the main shock when M0 is larger.

  • Parameter corrections

We introduce new parameters, b' and K', defined as shown below.

Larger values of b' and K' indicate underestimation of the parameters at 3 hours after the main shock compared to 1 day after. Using these parameters, we perform linear regression analysis with M0 as the independent variable and b' and K' as dependent variables, to estimate the 1-day post-main-shock parameters from the 3-hour post-main-shock values.
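The correction step can be sketched as an ordinary least-squares fit. The abstract's exact definitions of b' and K' are not reproduced here; purely for illustration, the sketch assumes b' is the difference between the 1-day and 3-hour b estimates:

```python
import numpy as np

def fit_correction(m0, b_prime):
    """Least-squares fit b' = alpha + beta * M0 (sketch; b' assumed here
    to be the 1-day minus 3-hour estimate, which is a hypothetical choice)."""
    beta, alpha = np.polyfit(m0, b_prime, 1)  # polyfit returns slope first
    return alpha, beta

def correct_b(b_3h, m0, alpha, beta):
    """Hypothetical correction: predicted 1-day b from the 3-hour estimate."""
    return b_3h + (alpha + beta * m0)
```

The same regression would be applied independently to K'.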

The precision of the estimated values is compared in Figure 1.

Figure 1: The precision of the estimated values

The estimation of b shows superior accuracy compared to earlier methodologies and the conventional approaches used by the Japan Meteorological Agency. The estimated values of K show that systematic errors are reduced with the methodology used in this study. Using these corrected parameters, Figure 2 compares the predicted numbers of aftershocks from 3 hours to 1 day after the main shock with the actual values. As the figure shows, on average, the methodology used in this study provides favorable prediction accuracy.

Figure 2: The accuracy of aftershock predictions

How to cite: Hashimoto, R. and Kuzuha, Y.: Advancement of Aftershock Distribution Prediction Model Following a Main Shock – Examining Parameter Correction Methods for Predictive Formula Parameters Based on Main Shock Information –, EGU General Assembly 2024, Vienna, Austria, 14–19 Apr 2024, EGU24-14398, https://doi.org/10.5194/egusphere-egu24-14398, 2024.