New models based on seismicity patterns, considering their physical meaning and their statistical significance, shed light on the preparation process of large earthquakes and on the evolution in time and space of clustered seismicity.
Opportunities for improved model testing are being opened by the increasing amount of earthquake data available on local to global scales, together with accurate assessments of the catalogues’ reliability in terms of location precision, magnitude of completeness and coherence in magnitude determination.
Moreover, it is possible to reliably integrate the models with additional information, such as geodetic deformation, active fault data, source parameters of previously recorded seismicity, fluid content, tomographic information, or laboratory and numerical experiments of rock fracture and friction. Such integration allows a detailed description of the system and hopefully an improved forecasting of the future distribution of seismicity in space, time and magnitude.
In this session, we invite researchers to submit their latest results and insights on the physical and statistical models and machine learning approaches for the space, time and magnitude evolution of earthquake sequences. Particular emphasis will be placed on:
• physical and statistical models of earthquake occurrence;
• analysis of earthquake clustering;
• spatial, temporal and magnitude properties of earthquake statistics;
• quantitative testing of earthquake occurrence models;
• reliability of earthquake catalogues;
• time-dependent hazard assessment;
• methods for earthquake forecasting;
• data analyses and requirements for model testing;
• pattern recognition in seismology;
• machine learning applied to seismic data; and
• methods for quantifying uncertainty in pattern recognition and machine learning.
Confirmed solicited speaker: Robert Shcherbakov (University of Western Ontario, London, Ontario, Canada)
vPICO presentations: Fri, 30 Apr
Earthquakes trigger subsequent earthquakes and form clusters and swarms in space and time. This is a direct manifestation of the non-Poisson behavior of earthquake occurrence, where earthquake magnitudes and the time intervals between successive events are not independent and are influenced by past seismicity. As a result, the distribution of the number of earthquakes is no longer strictly Poisson, and the statistics of the largest events deviate from the generalized extreme value (GEV) distribution. In statistical seismology, the occurrence of earthquakes is typically approximated by a stochastic marked point process. Among the different models, the epidemic-type aftershock sequence (ETAS) model is the most successful in reproducing several key aspects of seismicity. Recent analysis suggests that the ETAS model generates sequences of events that are not Poisson, which becomes important when ETAS-based models are used for earthquake forecasting (Shcherbakov et al., Nature Comms., 2019). In this work, I combine a Bayesian framework with the ETAS model to constrain the magnitudes of the largest expected aftershocks during a future forecasting time interval. This includes MCMC sampling of the posterior distribution of the ETAS parameters and computation of the Bayesian predictive distribution for the magnitudes of the largest expected events. To validate the forecasts, the statistical tests developed by the Collaboratory for the Study of Earthquake Predictability (CSEP) are reformulated in the Bayesian framework. In addition, I define and compute a Bayesian p-value to evaluate the consistency of the forecasted extreme earthquakes during each forecasting time interval: it gives the probability that the largest forecasted earthquake is more extreme than the observed one. The suggested approach is applied to the 2019 Ridgecrest earthquake sequence to retrospectively forecast the occurrence of the largest aftershocks (Shcherbakov, JGR, 2021).
The results indicate that the Bayesian approach combined with the ETAS model outperformed the approach based on the Poisson assumption, which uses the extreme value distribution and the Omori law.
How to cite: Shcherbakov, R.: A Bayesian Framework for Aftershock Forecasting and Testing, EGU General Assembly 2021, online, 19–30 Apr 2021, EGU21-8120, https://doi.org/10.5194/egusphere-egu21-8120, 2021.
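As a simplified sketch of the predictive calculation described in the abstract above: if, for each posterior draw, events above a completeness magnitude are treated as Poisson-distributed with Gutenberg-Richter magnitudes (a simplification; the abstract itself stresses that full ETAS counts are not Poisson), the Bayesian predictive probability for the largest expected magnitude reduces to a posterior average of an exponential tail. All parameter values and posterior samples below are hypothetical.

```python
import math
import random

random.seed(0)

def prob_max_below(m, lam_samples, b=1.0, m_c=2.0):
    """Bayesian predictive P(largest aftershock < m) over a forecast window.

    For each posterior draw `lam` of the expected number of events above the
    completeness magnitude m_c, counts are treated as Poisson and magnitudes
    as Gutenberg-Richter with b-value `b`, so that
    P(no event >= m) = exp(-lam * 10**(-b * (m - m_c))).
    Averaging over draws gives the predictive probability."""
    tail = 10 ** (-b * (m - m_c))
    return sum(math.exp(-lam * tail) for lam in lam_samples) / len(lam_samples)

# Hypothetical posterior samples of the expected number of M >= 2 aftershocks
lam_samples = [random.gammavariate(50.0, 2.0) for _ in range(2000)]

# Bayesian p-value analogue: probability that the largest forecasted event
# is more extreme than the observed maximum magnitude m_obs
m_obs = 5.4
p_exceed = 1.0 - prob_max_below(m_obs, lam_samples)
print(round(p_exceed, 3))
```

Averaging the tail probability over posterior draws, rather than plugging in a point estimate, is what distinguishes the Bayesian predictive distribution from a classical extreme-value forecast.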
During seismic clusters, strong earthquakes (e.g. mainshocks) are sometimes followed by another strong earthquake, which is particularly dangerous because it strikes already-damaged structures. To forecast the occurrence of such subsequent large earthquakes (SLEs), we proposed a pattern recognition approach based on seismological features. The method, called NESTORE, has been successfully applied to northeastern Italy and western Slovenia (Gentili and Di Giovambattista, 2020) and to all of Italy (Gentili and Di Giovambattista, 2017). In this study, we present the results of applying NESTORE to California seismicity. The NESTORE method is adaptive and depends on the region analyzed: during the supervised training phase, the best-performing features in the analyzed area are selected and then used for classification. Tests of the method demonstrate good performance for California seismicity.
How to cite: Gentili, S. and Di Giovambattista, R.: Strong following earthquake forecasting by a pattern recognition approach in California, EGU General Assembly 2021, online, 19–30 Apr 2021, EGU21-4381, https://doi.org/10.5194/egusphere-egu21-4381, 2021.
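The adaptive training idea in the abstract above (selecting the best-performing features for the analyzed region) can be illustrated with a toy one-threshold classifier. The feature names, values, and classifier are hypothetical stand-ins, not the NESTORE implementation:

```python
def best_threshold(values, labels):
    """Return (threshold, training errors) of the one-threshold rule
    'predict type A if value >= threshold' that misclassifies fewest events."""
    best = (None, len(labels) + 1)
    for t in sorted(set(values)):
        errors = sum((v >= t) != y for v, y in zip(values, labels))
        best = min(best, (t, errors), key=lambda pair: pair[1])
    return best

# Toy training set: one row of values per candidate feature; label True
# marks clusters followed by a strong subsequent earthquake
features = {"aftershock_rate": [5, 9, 2, 8, 1, 7],
            "radiated_energy": [3, 2, 1, 9, 4, 8]}
labels = [False, True, False, True, False, True]

# Rank features by training error; the best-performing ones would then be
# retained for classification in the target region
scores = {name: best_threshold(vals, labels) for name, vals in features.items()}
for name, (t, errs) in sorted(scores.items(), key=lambda kv: kv[1][1]):
    print(name, t, errs)
```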
Earthquake sequences add significant hazard beyond the purely declustered perspective of common probabilistic seismic hazard analysis (PSHA). Particularly strong drivers of both social and economic losses are so-called earthquake doublets (more generally, multiplets), i.e. sequences of two (or more) comparatively large events in spatial and temporal proximity. Not differentiating between foreshocks and aftershocks, we hypothesize three main drivers of doublet occurrence: (1) the number of direct aftershocks triggered by an earthquake; (2) the underlying, independent background seismicity in the same time-space window; and (3) the magnitude size distribution of triggered events (in contrast to independent events). We tested synthetic catalogs simulated by a common, isotropic epidemic-type aftershock sequence (ETAS) model for both Japan and Southern California. Our findings show that the standard ETAS approach dramatically underestimates doublet frequencies compared to observations in historical catalogs; among other shortcomings, it partially smooths out pronounced peaks of temporal and spatial event clustering. Focusing on the impact on direct aftershock productivity, we propose two modifications of the ETAS spatial kernel in order to improve doublet rate predictions: (a) a restriction of the spatial function to a maximum distance of 2.5 estimated rupture lengths; (b) an anisotropic function with contour lines constructed by a box with two semicircular ends around the estimated rupture line. The restriction of the spatial extent shifts triggering potential from weaker to stronger events and consequently improves doublet rate predictions for larger events. However, this improvement comes at the cost of a weaker overall model fit according to AIC. The anisotropic models improve the overall model fit but have minor impact on doublet occurrence rate predictions.
How to cite: Grimm, C., Käser, M., Hainzl, S., Pagani, M., and Küchenhoff, H.: Improving earthquake doublet rate predictions in ETAS by using modified spatial trigger distributions, EGU General Assembly 2021, online, 19–30 Apr 2021, EGU21-3452, https://doi.org/10.5194/egusphere-egu21-3452, 2021.
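Modification (a) in the abstract above, truncating the spatial trigger kernel at 2.5 estimated rupture lengths, can be sketched as follows. The power-law kernel shape, its parameters, and the Wells and Coppersmith-type rupture-length scaling are illustrative assumptions, not the authors' fitted model:

```python
import math
import random

random.seed(1)

def rupture_length_km(m):
    # Wells & Coppersmith (1994)-type subsurface rupture length scaling,
    # log10(L) = -2.44 + 0.59 * M (an assumption of this sketch)
    return 10 ** (-2.44 + 0.59 * m)

def sample_trigger_distance(m, d=1.0, q=1.8, trunc=2.5):
    """Sample an aftershock distance (km) from a power-law spatial kernel
    f(r) ~ (r + d)**(-q), truncated at `trunc` estimated rupture lengths,
    via inverse-CDF sampling."""
    r_max = trunc * rupture_length_km(m)
    a = d ** (1.0 - q)
    b = (r_max + d) ** (1.0 - q)
    u = random.random()
    return (a - u * (a - b)) ** (1.0 / (1.0 - q)) - d

dists = [sample_trigger_distance(7.0) for _ in range(5000)]
print(round(max(dists), 1), round(2.5 * rupture_length_km(7.0), 1))
```

Because the truncation radius grows with magnitude, probability mass that an untruncated kernel would assign to far-field triggering by small events is reassigned, which is the mechanism shifting triggering potential toward stronger events.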
Recent advances in machine learning and pattern recognition methods have propagated into various applications in seismology. Phase picking, earthquake location, and anomaly detection and classification applications have also benefited from the increased availability of cloud computing and open-source software libraries. However, applications of these new techniques to the problems of earthquake forecasting and prediction have remained relatively stagnant.
The main challenges in this regard have been the testing and validation of the proposed methods. While there are established metrics to quantify the performance of algorithms in common pattern recognition and classification problems, the earthquake prediction problem requires a properly defined reference (null) model to establish the information gain of a proposed algorithm. This complicates the development of new methods, as researchers are required to develop not only a novel algorithm but also a sufficiently robust null model to test it against.
We propose a solution to this problem. We have recently introduced a global real-time earthquake forecasting model that can provide occurrence probabilities for a user-defined time-space-magnitude window anywhere on the globe (Nandan et al. 2020). In addition, we have proposed the Information Ratio (IR) metric, which can rank algorithms producing alarm-based deterministic predictions as well as those producing probabilistic forecasts (Kamer et al. 2020). To provide the community with a retrospective benchmark, we have run our model in a pseudoprospective fashion for the last 30 years (1990-2020). We have calculated and stored the earthquake occurrence probabilities for each day, for the whole globe (at ~40 km resolution), for various time-space windows (7 to 30 days, 75 to 300 km). These can be queried programmatically via an Application Programming Interface (API), allowing model developers to train and test their algorithms retrospectively. Here we shall present how the Rx TimeMachine API is used for the training of a simple pattern recognition algorithm and show the algorithm's prospective predictive performance.
Nandan, S., Kamer, Y., Ouillon, G., Hiemer, S., Sornette, D. (2020). Global models for short-term earthquake forecasting and predictive skill assessment. European Physical Journal ST. doi: 10.1140/epjst/e2020-000259-3
Kamer, Y., Nandan, S., Ouillon, G., Hiemer, S., Sornette, D. (2020). Democratizing earthquake predictability research: introducing the RichterX platform. European Physical Journal ST. doi: 10.1140/epjst/e2020-000260-2
How to cite: Kamer, Y., Nandan, S., Hiemer, S., Ouillon, G., and Sornette, D.: Rx TimeMachine: A global pseudoprospective earthquake forecast database for training and ranking predictive algorithms, EGU General Assembly 2021, online, 19–30 Apr 2021, EGU21-13905, https://doi.org/10.5194/egusphere-egu21-13905, 2021.
Earthquake prediction is considered the holy grail of seismology. After almost a century of efforts without convincing results, the recent rise of machine learning (ML) methods, in conjunction with the deployment of dense seismic networks, has raised new hope in this field. Even if large earthquakes still occur unanticipated, recent laboratory, field and theoretical studies support the existence of a preparatory phase preceding earthquakes, in which small and stable ruptures progressively develop into an unstable and confined zone around the future hypocenter. Recognizing the preparatory phase of earthquakes is of critical importance for mitigating seismic risk for both natural and induced events. Here, we focus on the induced seismicity at The Geysers geothermal field in California. We address the problem of identifying the preparatory phase of M~4 earthquakes by developing an ML approach based on features computed from catalogues, which are used to train a recurrent neural network (RNN). We show that the RNN successfully reveals the preparation of M~4 earthquakes. These results confirm the potential of monitoring induced microseismicity and should also encourage new research into the predictability of natural earthquakes.
How to cite: Iaccarino, A. G. and Picozzi, M.: Forecasting the Preparatory Phase of Induced Earthquakes by Recurrent Neural Network, EGU General Assembly 2021, online, 19–30 Apr 2021, EGU21-4321, https://doi.org/10.5194/egusphere-egu21-4321, 2021.
Earthquake catalogs exhibit strong spatio-temporal correlations. As such, earthquakes are often classified into clusters of correlated activity. Clusters themselves are traditionally classified into two kinds: (i) bursts, with a clear hierarchical structure between a single strong mainshock, preceded by a few foreshocks and followed by a power-law decaying aftershock sequence, and (ii) swarms, exhibiting a non-trivial activity rate that cannot be reduced to such a simple hierarchy between events.
The Epidemic-Type Aftershock Sequence (ETAS) model is a linear Hawkes point process able to reproduce earthquake clusters from empirical statistical laws [Ogata, 1998]. Although not always made explicit, the ETAS model is often interpreted as the outcome of a background activity driven by external forces and a Galton-Watson branching process with one-to-one causal links between events [Saichev et al., 2005]. Declustering techniques based on field observations [Baiesi & Paczuski, 2004] can be used to infer the most likely causal links between events in a cluster. Following this method, Zaliapin and Ben‐Zion (2013) determined the statistical properties of earthquake clusters characterizing bursts and swarms, finding a relationship between the predominant cluster class and the heat flow in seismic regions.
Here, I show how the statistical properties of clusters are related to the fundamental statistics of the underlying seismogenic process, modeled in two point-process paradigms [Baró, 2020].
The classification of clusters into bursts and swarms appears naturally in the standard ETAS model with homogeneous rates and is determined by the average branching ratio (nb) and the ratio between the exponents α and b characterizing the production of aftershocks and the distribution of magnitudes, respectively. The scale-free ETAS model, equivalent to the BASS model [Turcotte et al., 2007] and usual in cold active tectonic regions, is obtained by imposing α=b and reproduces bursts. In contrast, by imposing α<0.5b, we recover the properties of swarms, characteristic of regions with high heat flow.
Alternatively, the same declustering methodology applied to a non-homogeneous Poisson process with a non-factorizable intensity, i.e. in the absence of causal links, recovers swarms with α=0, i.e. a Poisson Galton-Watson process, with statistical properties similar to those of the ETAS model in the regime α<0.5b.
Therefore, while bursts are likely to represent actual causal links between events, swarms can either denote causal links with low α/b ratio or variations of the background rate caused by exogenous processes introducing local and transient stress changes. Furthermore, the redundancy in the statistical laws can be used to test the hypotheses posed by the ETAS model as a memory‐less branching process.
Baiesi, M., & Paczuski, M. (2004). Physical Review E, 69, 66,106. doi:10.1103/PhysRevE.69.066106.
Baró, J. (2020). Journal of Geophysical Research: Solid Earth, 125, e2019JB018530. doi:10.1029/2019JB018530.
Ogata, Y. (1998) Annals of the Institute of Statistical Mathematics, 50(2), 379–402. doi:10.1023/A:1003403601725.
Saichev, A., Helmstetter, A. & Sornette, D. (2005) Pure appl. geophys. 162, 1113–1134. doi:10.1007/s00024-004-2663-6.
Turcotte, D. L., Holliday, J. R., and Rundle, J. B. (2007), Geophys. Res. Lett., 34, L12303, doi:10.1029/2007GL029696.
Zaliapin, I., and Ben‐Zion, Y. (2013), J. Geophys. Res. Solid Earth, 118, 2865– 2877, doi:10.1002/jgrb.50178.
How to cite: Baro, J.: Earthquake clusters expected from bare statistics: How bursts and swarms emerge from exogenous and epidemic aftershock processes., EGU General Assembly 2021, online, 19–30 Apr 2021, EGU21-8277, https://doi.org/10.5194/egusphere-egu21-8277, 2021.
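A minimal sketch of the branching picture discussed in the abstract above: a Galton-Watson cascade in which Gutenberg-Richter magnitudes control aftershock productivity via 10**(ALPHA * (m - M0)). For ALPHA < B the branching ratio n_b = kappa * B / (B - ALPHA) is finite and the mean cluster size is 1 / (1 - n_b); the parameter values are illustrative only.

```python
import math
import random

random.seed(2)
B, ALPHA, M0 = 1.0, 0.4, 2.0   # GR b-value, productivity exponent, cutoff

def poisson(mu):
    # inverse-transform Poisson sampler (stdlib only)
    u, k, p = random.random(), 0, math.exp(-mu)
    c = p
    while u > c:
        k += 1
        p *= mu / k
        c += p
    return k

def gr_magnitude():
    # Gutenberg-Richter magnitude above M0 (exponential in m - M0)
    return M0 + random.expovariate(B * math.log(10))

def cluster_size(kappa, max_events=100000):
    """Total size of one Galton-Watson aftershock cluster: each event of
    magnitude m triggers Poisson(kappa * 10**(ALPHA * (m - M0))) offspring."""
    queue, n = [gr_magnitude()], 0
    while queue and n < max_events:
        m = queue.pop()
        n += 1
        kids = poisson(kappa * 10 ** (ALPHA * (m - M0)))
        queue.extend(gr_magnitude() for _ in range(kids))
    return n

kappa = 0.45
n_b = kappa * B / (B - ALPHA)   # analytic branching ratio (finite for ALPHA < B)
mean_size = sum(cluster_size(kappa) for _ in range(3000)) / 3000
print(round(n_b, 2), round(1.0 / (1.0 - n_b), 1), round(mean_size, 1))
```

Raising ALPHA toward B concentrates productivity in the largest events and yields burst-like, mainshock-dominated clusters; the swarm regime ALPHA < 0.5 * B used here spreads productivity more evenly across magnitudes.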
Chilean seismic activity is among the strongest in the world. As shown in previous papers, seismic activity can be usefully described by a space-time branching process such as the ETAS (Epidemic-Type Aftershock Sequences) model, a semiparametric model with a large-time-scale component for the background seismicity and a small-time-scale component for the induced seismicity. The large-scale component intensity function is usually estimated by nonparametric techniques; specifically, in our paper we used the Forward Likelihood Predictive approach (FLP). The induced seismicity is modelled with a parametric space-time function. In classical ETAS models, the expected number of induced events depends only on the magnitude of the main event. From a statistical point of view, forecasts of induced seismicity can be made in the days following a big event; the estimation of this component is therefore very important for forecasting the evolution, in space and time, of a seismic sequence. Together with magnitude, we also used other covariates to explain the expected number of induced events. According to this formulation, the expected number of events induced by event Ei is a function of a linear predictor ηi=xiβ, where xi is the vector of covariates observed for the i-th event (the first is usually the magnitude mi) and β is a vector of parameters to be estimated together with the other parametric and nonparametric components of the ETAS model. We obtained some interesting results using covariates related to the depth of the events and to GPS measurements corresponding to ground movement observed before main events. We find that some of these models can improve the description and forecasting of induced seismicity in Chile, after a subdivision of the country into different spatial regions. We used open-source software (the R package etasFLP) to perform the semiparametric estimation of the ETAS model with covariates.
How to cite: Chiodi, M., Nicolis, O., Adelfio, G., D'angelo, N., and Gonzàlez, A.: ETAS Space time modelling of Chile induced seismicity using covariates., EGU General Assembly 2021, online, 19–30 Apr 2021, EGU21-3523, https://doi.org/10.5194/egusphere-egu21-3523, 2021.
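The covariate-based productivity described in the abstract above can be sketched with a log link, one natural choice for "a function of a linear predictor". The covariate values and coefficients below are hypothetical; in practice β is estimated jointly with the other parametric and nonparametric ETAS components, e.g. via the etasFLP package.

```python
import math

# Hypothetical covariate vectors x_i = [magnitude - 5, depth (km), pre-event
# GPS displacement (mm)] and coefficients beta; all values are illustrative
events = [[6.1 - 5.0, 30.0, 12.0],
          [5.4 - 5.0, 80.0,  3.0],
          [7.0 - 5.0, 15.0, 25.0]]
beta = [1.0, -0.005, 0.01]

def expected_triggered(x, beta):
    """Expected number of induced events as exp(eta_i) with eta_i = x_i . beta,
    a log-link choice generalizing the magnitude-only ETAS productivity."""
    return math.exp(sum(xj * bj for xj, bj in zip(x, beta)))

kappas = [expected_triggered(x, beta) for x in events]
for k in kappas:
    print(round(k, 2))
```

With these coefficients a shallow M7 event with large pre-event GPS displacement triggers far more expected aftershocks than a deep M5.4 event, which is the kind of covariate effect the abstract exploits.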
The linear Coulomb failure model (LCM) and the rate-and-state model (RSM) are two widely used physics-based seismicity models, both assuming Coulomb stress changes acting on pre-existing populations of faults. While both predict background earthquake rates and time-dependent stress effects, only the RSM can additionally explain the time-dependent triggering of aftershocks.
We develop a modified effective-media Coulomb model which accounts for the possibility of earthquake nucleation and retarded triggering of rupture. The new model has only two independent parameters and explains all statistical features of seismicity as well as the RSM, but is simpler in concept and provides insights into the possible nature of time-dependent frequency-magnitude distributions. Some of its statistical predictions differ from those of the RSM or LCM. For instance, the model domain is not limited to positive earthquake background or stressing rates; it can also simulate seismicity under zero-stressing assumptions. The increase of background seismicity with tectonic stressing is nonlinear, unlike in the other models, and may even saturate if the tectonic stress loading is very strong. The new model predicts Omori aftershock decay with an exponent of p=1 also for time periods much larger than the aftershock decay time; however, the productivity factor K is time dependent, with a very slow exponential attenuation. This attenuation may explain the apparent variation of p in observed aftershock sequences. Interestingly, the new model also predicts a co-seismic peak of triggered aftershocks, which depends on the magnitude of the stress step and does not influence the attenuation of aftershocks following the stress step. It could provide a physical explanation for the c-value in Omori's law, the origin of which is still under discussion.
We compare the new model to RSM and LCM and discuss the possible implications for earthquake clustering and frequency magnitude distributions.
How to cite: Dahm, T.: A modified Coulomb failure seismicity model to study earthquake occurrence and frequency-magnitude distributions, EGU General Assembly 2021, online, 19–30 Apr 2021, EGU21-3176, https://doi.org/10.5194/egusphere-egu21-3176, 2021.
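The predicted decay above, an Omori exponent p = 1 combined with a slowly, exponentially attenuating productivity factor K(t), can be sketched to show how an observer fitting a plain Omori law would infer an apparently larger p at late times. The constants are illustrative, not values from the model:

```python
import math

K0, C, TAU = 100.0, 0.01, 300.0   # illustrative productivity, c-value, decay time (days)

def rate(t):
    """Aftershock rate with Omori exponent p = 1 but a productivity factor
    K(t) = K0 * exp(-t / TAU) that attenuates very slowly and exponentially."""
    return K0 * math.exp(-t / TAU) / (C + t)

def apparent_p(t1, t2):
    # local log-log slope of the decay, i.e. the p-value an observer would fit
    return -(math.log(rate(t2)) - math.log(rate(t1))) / (math.log(t2) - math.log(t1))

print(round(apparent_p(1.0, 2.0), 3))       # early times: close to p = 1
print(round(apparent_p(500.0, 1000.0), 3))  # late times: apparently larger p
```

At times short compared with TAU the exponential factor is nearly constant and the fitted slope is ~1; once t becomes comparable to TAU the attenuation steepens the log-log decay, mimicking the apparent p variation mentioned in the abstract.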
The central Ionian Islands area accommodates remarkable seismic activity, with frequent strong (M>6.0) earthquakes and continuous microseismicity. The dominant seismotectonic feature is the Kefalonia Transform Fault Zone (KTFZ), a dextral transform active boundary between oceanic subduction and continental collision, running along the western coastlines of the Kefalonia and Lefkada Islands. The KTFZ comprises two main fault branches (Kefalonia and Lefkada) connected by a step-over zone. In the past 20 years, four strong earthquakes ruptured the Lefkada (06/08/2003–M6.5 and 17/11/2015–M6.5) and Kefalonia (20/01/2014–M6.0 and 03/02/2014–M6.1) branches. Their aftershock activity, along with the continuous microseismicity and some bursts of seismicity comprising moderate earthquakes, provided a data set well suited to detailing the seismicity characteristics of the area.
We investigate the identification of repeating earthquakes (repeaters), which are earthquakes with highly similar waveforms caused by rupture of the same fault area, through different clustering approaches, aiming to explore strategies for the discrimination of repeaters in an accurately located dataset. We compiled a catalog of ~15600 manually picked earthquakes in the period 09/2016 – 01/2020. Relocation with the Double Difference method, using cross–correlation differential times, resulted in highly accurate locations with spatial errors ranging from a few tens to a few hundreds of meters.
The establishment of groups of repeaters (multiplets) is discussed based on several approaches. We compare the identification of multiplets by grouping event pairs that contain a common event, a widely used method, against the application of a density-based clustering algorithm known as DBSCAN. In DBSCAN, events are grouped into multiplets based on their similarity (cross-correlation coefficient), provided through a distance matrix over all event pairs whose elements equal zero when the cross-correlation coefficient of a pair equals one and increase as waveform similarity decreases. A multiplet is created when an event is directly connected with at least a minimum number of events, i.e. events whose distance is within the similarity upper cutoff, ε. We discuss differences between the two approaches and the proper parameter setting of the DBSCAN algorithm for multiplet grouping, and we explore geodynamic implications of the classified clusters.
This research is co-financed by Greece and the European Union (European Social Fund- ESF) through the Operational Program «Human Resources Development, Education and Lifelong Learning 2014-2020» in the context of the project “Kinematic properties, active deformation and stochastic modelling of seismogenesis at the Kefalonia - Lefkada transform zone” (MIS-5047845).
How to cite: Kostoglou, A., Bountzis, P., Karakostas, V., and Papadimitriou, E.: Classification of earthquake repeaters in the central Ionian Islands area, EGU General Assembly 2021, online, 19–30 Apr 2021, EGU21-11966, https://doi.org/10.5194/egusphere-egu21-11966, 2021.
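A minimal, self-contained version of DBSCAN on a precomputed waveform-similarity distance matrix (distance = 1 - cross-correlation coefficient) illustrates the grouping rule described in the abstract above; the toy similarity values and parameter choices are hypothetical, not the authors' implementation:

```python
def dbscan(dist, eps, min_pts):
    """Group events into multiplets: a core event needs at least `min_pts`
    neighbors within distance `eps`; a label of -1 marks unclustered events."""
    n = len(dist)
    labels = [None] * n
    cluster = -1
    for i in range(n):
        if labels[i] is not None:
            continue
        neigh = [j for j in range(n) if dist[i][j] <= eps]
        if len(neigh) < min_pts:
            labels[i] = -1          # provisionally noise
            continue
        cluster += 1
        labels[i] = cluster
        queue = [j for j in neigh if j != i]
        while queue:
            j = queue.pop()
            if labels[j] == -1:
                labels[j] = cluster  # border event reached from a core event
            if labels[j] is not None:
                continue
            labels[j] = cluster
            jn = [k for k in range(n) if dist[j][k] <= eps]
            if len(jn) >= min_pts:   # j is itself a core event: expand
                queue.extend(k for k in jn if labels[k] is None)
    return labels

# Toy cross-correlation matrix for 5 events: events 0-2 are highly similar
cc = [
    [1.00, 0.95, 0.92, 0.10, 0.05],
    [0.95, 1.00, 0.93, 0.12, 0.07],
    [0.92, 0.93, 1.00, 0.08, 0.06],
    [0.10, 0.12, 0.08, 1.00, 0.20],
    [0.05, 0.07, 0.06, 0.20, 1.00],
]
dist = [[1.0 - c for c in row] for row in cc]
labels = dbscan(dist, eps=0.2, min_pts=2)
print(labels)
```

With eps = 0.2 (i.e. a cross-correlation cutoff of 0.8) the three similar events form one multiplet and the two dissimilar events remain unclustered, which is the role the ε cutoff and minimum-neighbor count play in the abstract.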
On October 30, 2020 a strong, shallow earthquake of magnitude Mw=7.0 occurred at the eastern edge of the Aegean Sea, with its epicenter located offshore, north of the Greek island of Samos. The aim of our work is to present a first analysis of the scaling properties observed in the aftershock sequence as reported until December 31, 2020, during which numerous seismic clusters were activated. Our analysis focuses on the main cluster, observed in the eastern area of the activated fault zone and strongly related to the main shock's fault. The aftershock sequence follows the Omori law with a value of p≈1.01 for the main cluster, which is remarkably close to a logarithmic evolution. The analysis of the interevent time distribution, based on non-extensive statistical physics, indicates a system in an anomalous equilibrium with a crossover from anomalous (q>1) to normal (q=1) statistical mechanics for great interevent times. A discussion of the observed crossover in terms of superstatistics is given. In addition, the obtained value q≈1.67 suggests a system with one degree of freedom. Furthermore, a scaling of the migration of the aftershock zone as a function of the logarithm of time is discussed in terms of the rate-strengthening rheology that governs the evolution of the afterslip process.
Tsallis, C. Introduction to Nonextensive Statistical Mechanics-Approaching a Complex World; Springer: New York, USA, 2009; pp. 1–382.
Perfettini, H., Frank, W. B., Marsan, D., and Bouchon, M. (2018). A model of aftershock migration driven by afterslip. Geophys. Res. Lett., 45, 2283–2293.
Vallianatos, F., Papadakis, G., and Michas, G. (2016). Generalized statistical mechanics approaches to earthquakes and tectonics. Proc. R. Soc. A, 472, 20160497.
We acknowledge support of this work by the project “HELPOS – Hellenic System for Lithosphere Monitoring” (MIS 5002697) which is implemented under the Action “Reinforcement of the Research and Innovation Infrastructure”, funded by the Operational Programme "Competitiveness, Entrepreneurship and Innovation" (NSRF 2014-2020) and co-financed by Greece and the European Union (European Regional Development Fund).
How to cite: Vallianatos, F. and Pavlou, K.: Scaling properties of the Mw7.0 Samos (Greece), 2020 aftershock sequence., EGU General Assembly 2021, online, 19–30 Apr 2021, EGU21-10967, https://doi.org/10.5194/egusphere-egu21-10967, 2021.
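The crossover behaviour described in the abstract above can be illustrated with the Tsallis q-exponential, which replaces the ordinary exponential in the interevent-time survival function for q > 1 and recovers it as q approaches 1. The time units and the exact survival form below are illustrative:

```python
import math

def q_exp(x, q):
    """Tsallis q-exponential exp_q(x); reduces to exp(x) as q -> 1."""
    if abs(q - 1.0) < 1e-9:
        return math.exp(x)
    base = 1.0 + (1.0 - q) * x
    return base ** (1.0 / (1.0 - q)) if base > 0.0 else 0.0

# Survival function of interevent times tau (in units of a scale tau0),
# with q as reported for the main Samos cluster
q = 1.67
for tau in (0.1, 1.0, 10.0, 100.0):
    print(tau, round(q_exp(-tau, q), 4))
```

For q > 1 this tail decays as a power law, tau**(-1/(q-1)), rather than exponentially; the crossover back to q = 1 at great interevent times, which the abstract interprets via superstatistics, would appear as a steepening of this tail toward the ordinary exponential.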
Long records often show large earthquakes occurring in supercycles: temporal clusters of seismicity, cumulative displacement, and cumulative strain release separated by less active intervals. Commonly used earthquake recurrence models do not account for this time dependence and clustering. Poisson models assume that earthquake recurrence is time-independent, yet seismicity studies have shown that time is needed to accumulate strain along a fault before another large earthquake. Seismic cycle/renewal models account for this time dependence but assume that all strain is released after large earthquakes, and so fail to replicate clustered earthquake behavior. The resulting probability estimates for the recurrence of the next earthquake thus depend crucially on whether a cluster is treated as ongoing or over.
In this study, we have reformulated our previously developed Long-Term Fault Memory (LTFM) earthquake model as a Markov process to better quantify long-term earthquake behavior and the probability of future earthquakes. In the LTFM model, the probability of a large earthquake reflects accumulated strain rather than elapsed time. The probability increases with accumulated strain (and time) until an earthquake happens, after which the probability decreases, but not necessarily to zero. This simple, strain-driven recurrence model yields realistic sequences of large earthquakes with periods of elevated activity followed by longer quiescence. Using the Markov formulation, we explore long-term earthquake behavior and how to use paleoseismic records to better estimate the recurrence and probability of future large earthquakes.
How to cite: Neely, J., Salditch, L., Stein, S., and Spencer, B.: Modeling earthquake occurrence and recurrence for supercycles and clusters, EGU General Assembly 2021, online, 19–30 Apr 2021, EGU21-13551, https://doi.org/10.5194/egusphere-egu21-13551, 2021.
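A toy strain-driven simulation in the spirit of the LTFM description above (not the authors' Markov formulation): event probability grows with accumulated strain, and an earthquake releases only part of the strain, so the probability does not reset to zero and clustered sequences can follow. All parameters are illustrative:

```python
import random

random.seed(3)

def ltfm_simulate(steps=20000, load=1.0, drop_frac=0.7, k=1e-4):
    """Strain accumulates at a constant rate; each step an earthquake occurs
    with probability proportional to accumulated strain; an event releases
    only a fraction `drop_frac` of the strain."""
    strain, quakes = 0.0, []
    for t in range(steps):
        strain += load
        if random.random() < min(1.0, k * strain):
            quakes.append(t)
            strain *= (1.0 - drop_frac)
    return quakes

quakes = ltfm_simulate()
intervals = [b - a for a, b in zip(quakes, quakes[1:])]
print(len(quakes), min(intervals), max(intervals))
```

Because strain (and hence probability) remains elevated after an event, short recurrence intervals cluster together, followed by longer quiescent stretches while strain rebuilds, qualitatively reproducing the supercycle behavior the abstract describes.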
How to cite: Opris, A., Kundu, S., and Hatano, T.: Imaging the spatio-temporal variations in deep tremor activity using cluster analysis techniques, EGU General Assembly 2021, online, 19–30 Apr 2021, EGU21-14542, https://doi.org/10.5194/egusphere-egu21-14542, 2021.
Using long-term mining-induced earthquake statistics of the Khibiny Mountains (Kola Peninsula, Russia), we studied the spatial peculiarities of clustered seismicity. To decluster the earthquake catalog, we used the nearest-neighbor method of Zaliapin and Ben-Zion, 2016, DOI: 10.1093/gji/ggw300. We show that the distribution of distances from a triggering event to its triggered earthquakes obeys a power law with a parameter independent of the trigger magnitude. This result is consistent with the distribution of mainshock-aftershock distances obtained for tectonic seismicity by many researchers (e.g., Huc and Main, DOI: 10.1029/2001JB001645; Felzer and Brodsky, DOI: 10.1785/0120030069; Richards-Dinger et al., DOI: 10.1038/nature09402). Combining this spatial power-law distribution with the law of earthquake productivity of Shebalin et al. 2020 (DOI: 10.1093/gji/ggaa252), confirmed for the seismicity of the Khibiny Mountains (Baranov et al., 2020, DOI: 10.1134/S1069351320030015), we derived a distribution of the maximal distance from a trigger to its triggered earthquakes.
Using this distribution, we suggest a probabilistic model of the zone where triggered earthquakes are expected. The zone is a cylinder centered on the trigger hypocenter, whose size (radius and height) depends on the desired probability of containing the triggered earthquakes. The model was validated using Molchan's error diagram. Applying the method of three strategies (Baranov and Shebalin, 2017, DOI: 10.1134/S1069351317020021) to the error diagram, we identified three limiting points on the error trajectory, corresponding to "soft," "neutral," and "hard" strategies, which reflect different weightings of the prediction errors.
The research was supported by Russian Foundation of Basic Research, Project No 19-05-00812.
How to cite: Baranov, S., Motorin, A., and Shebalin, P.: Spatial distribution of clustered seismicity in Khibiny Mountains, EGU General Assembly 2021, online, 19–30 Apr 2021, EGU21-15354, https://doi.org/10.5194/egusphere-egu21-15354, 2021.
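The combination described in the abstract above can be sketched as follows, assuming a power-law survival function for trigger-to-triggered distances and taking the productivity law as a geometric (discrete-exponential) number of triggered events; all parameter values are illustrative, not the authors' fits:

```python
def zone_radius(p, r0=0.1, gamma=1.7):
    """Radius (km) of the trigger zone expected to contain a single triggered
    earthquake with probability p, assuming a power-law survival function
    S(r) = (r / r0) ** (-gamma) for r >= r0."""
    assert 0.0 < p < 1.0
    return r0 * (1.0 - p) ** (-1.0 / gamma)

def max_distance_cdf(r, mean_n=2.0, r0=0.1, gamma=1.7):
    """P(all triggered events lie within r): the spatial power law combined
    with a productivity law taken here as geometric with mean `mean_n`."""
    F = 1.0 - (max(r, r0) / r0) ** (-gamma)   # distance CDF of one event
    a = mean_n / (1.0 + mean_n)               # geometric parameter from mean
    return (1.0 - a) / (1.0 - a * F)          # E[F**N] for geometric N

for p in (0.5, 0.9, 0.99):
    print(p, round(zone_radius(p), 2))
print(round(max_distance_cdf(1.0), 3))
```

The second function is the maximal-distance distribution the abstract derives in spirit: averaging the single-event distance CDF over the random number of triggered events yields the probability that the cylinder of radius r contains them all.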
On April 6th 2009 (01:32 UTC), a strong earthquake of magnitude MW6.1 occurred near the city of L'Aquila in the Abruzzo region of the Central Apennines of Italy. Due to extensional processes, the Abruzzo region is characterized by prominent historical seismicity. However, before the 2009 event the background seismic activity was sparse and mostly clustered in space and time. The general lack of events, especially small-magnitude events, before the 2009 event motivated us to study the long-term near-fault seismicity preceding the large earthquake. To achieve this, we first have to extend the existing catalog. We consider data recorded at the AQU station (42.354, 13.405) in the city of L'Aquila, near the Paganica fault responsible for the 2009 event, over an extensive 29-year period that includes the 19 years before the event itself. The catalog extension is performed by applying a two-stage convolutional neural network pipeline for earthquake detection and characterisation (epicentral distance and magnitude) using three-component station waveforms. The algorithm allows us to successfully detect ~800 local events (less than 10 km from the AQU station) in the period 1990-2009. We present a detailed analysis of this catalog, including waveform characterization, to derive new insights about the long-term preparation process(es) occurring before the 2009 Mw6.1 earthquake.
How to cite: Majstorović, J. and Poli, P.: Property of long term (1990-2009) seismicity in the region of L'Aquila Mw6.1 earthquake revealed from machine learning, EGU General Assembly 2021, online, 19–30 Apr 2021, EGU21-3463, https://doi.org/10.5194/egusphere-egu21-3463, 2021.
We present results aimed at understanding preparation processes of large earthquakes by tracking progressive localization of earthquake deformation with three complementary analyses: (i) estimated production of rock damage by background events, (ii) spatial localization of background seismicity within damaged areas, and (iii) progressive coalescence of individual earthquakes into clusters. Techniques (i) and (ii) employ declustered catalogs to avoid the occasional strong fluctuations associated with aftershock sequences, while technique (iii) examines developing clusters in entire catalog data. The different techniques provide information on different time scales and on the spatial extent of weakened damaged regions. The analyses reveal generation of earthquake-induced rock damage on a decadal timescale around eventual rupture zones, and progressive localization of background seismicity on a 2-3 yr timescale before several M > 7 earthquakes in southern and Baja California and M7.9 events in Alaska. This is followed by coalescence of earthquakes into growing clusters that precede the mainshocks. Corresponding analysis around the 2004 M6 Parkfield earthquake in the creeping section of the San Andreas fault shows contrasting tendencies to those associated with the large seismogenic faults. The results are consistent with observations from laboratory experiments and physics-based models with heterogeneous materials not dominated by a pre-existing failure zone. Continuing studies with these techniques, combined with analysis of geodetic data and insights from laboratory experiments and model simulations, may allow developing an integrated multi-signal procedure to estimate the approaching time and size of large earthquakes.
How to cite: Zaliapin, I. and Ben-Zion, Y.: Localization of seismicity prior to large earthquakes, EGU General Assembly 2021, online, 19–30 Apr 2021, EGU21-14086, https://doi.org/10.5194/egusphere-egu21-14086, 2021.
Decadal forerunning seismic activity is used to map great asperities that subsequently ruptured in very large, shallow earthquakes at subduction zones and transform faults. The distribution of forerunning shocks of magnitude Mw>5.0 is examined for 50 mainshocks of Mw 7.5 to 9.1 from 1993 to 2020. The zones of large slip in many great earthquakes were nearly quiescent beforehand and are identified as the sites of great asperities. Much forerunning activity occurred at smaller asperities along the peripheries of the rupture zones of great and giant mainshocks. Asperities are strong, well-coupled portions of plate interfaces. Sizes of great asperities as ascertained from forerunning activity generally agree with the areas of high seismic slip as determined by others using geodetic and tide-gauge data and finite-source seismic modeling. Different patterns of forerunning activity on time scales of about 5 to 45 years are attributed to the sizes and spacing of asperities. This permits many great asperities to be mapped decades before they rupture in great and giant shocks. Rupture zones of many large earthquakes are bordered either along strike, updip, or downdip by zones of low plate coupling. Several bordering regions were sites of forerunning activity, aftershocks and slow-slip events. Several poorly coupled subduction zones, however, are characterized by few great earthquakes and little forerunning activity. The detection of forerunning and precursory activities of various kinds should be sought on the peripheries of great asperities. The manuscript can be found at http://www.ldeo.columbia.edu/~sykes
How to cite: Sykes, L.: Decadal Seismicity Prior to Great Earthquakes at Subduction and Transform-Fault Plate Boundaries, EGU General Assembly 2021, online, 19–30 Apr 2021, EGU21-1600, https://doi.org/10.5194/egusphere-egu21-1600, 2021.
Mass redistribution during large earthquakes produces a prompt elasto-gravity signal (PEGS) that travels at the speed of light and can be observed on seismograms before the arrival of P-waves. PEGS carries information about earthquake magnitude and the temporal evolution of seismic moment; therefore, it could be used both to improve the accuracy of current early source estimation systems and to speed up early warning. However, PEGS has been detected for only a handful of very large earthquakes so far, and its potential use for operational early warning remains to be established. In this work, we study the timeliness of magnitude estimation for subduction earthquakes in Japan using PEGS waveforms by means of Deep Learning and Bayesian uncertainty analysis. Given the paucity of PEGS observations, we train the model on a database of synthetic seismograms augmented with empirical noise in order to simulate more realistic waveforms. We use about 80 stations from the Japanese F-Net network and from networks with data available through IRIS.
Under this experimental setup, we find that our model is able to track the moment release for earthquakes with a final Mw above 8.0, with a system latency that depends on the signal-to-noise ratio of PEGS. The application of our model to the Mw=9.1 Tohoku-Oki earthquake shows a latency of about 50 s, after which the model tracks the evolving Mw of the earthquake well. About 2 minutes after the earthquake origin time, a reliable estimate of its final Mw is obtained. Similar performance in terms of timeliness of final Mw estimation is observed for the relatively smaller Hokkaido earthquake (Mw=8.1), although with higher uncertainty.
Our results highlight the potential of PEGS to enhance the performance of existing tsunami early warning systems, where estimating the magnitude of very large earthquakes within a few minutes is vital.
How to cite: Licciardi, A., Bletery, Q., Rouet-Leduc, B., Ampuero, J.-P., and Juhel, K.: Timeliness of earthquake magnitude estimation from the prompt elasto-gravity signal using Deep Learning, EGU General Assembly 2021, online, 19–30 Apr 2021, EGU21-14790, https://doi.org/10.5194/egusphere-egu21-14790, 2021.
We propose a new Bayesian approach to estimate continuous crustal strain-rate fields from spatially discrete displacement-rate data, based on Global Navigation Satellite System (GNSS) observations, under a prior constraint on the spatial flatness of the strain-rate fields. The optimal values of the hyperparameters in the model of the strain-rate fields are determined by using Akaike's Bayesian Information Criterion. A methodological merit of this approach is that, by introducing a two-layer Delaunay tessellation technique, the time-consuming computation of strain rates can be omitted during the model estimation process. We applied this Bayesian approach to GNSS displacement-rate data in Mainland China and examined the correlation between the estimated strain-rate fields and seismic activity by using Molchan's error diagram. The results show that the increase rate of maximum shear strain is positively correlated with the occurrence of earthquakes, indicating that the strain rate can be used to augment probabilistic earthquake models for background seismicity forecasting.
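The Molchan error diagram used above can be sketched as follows: for each alarm threshold, it plots the fraction of space occupied by alarms (tau) against the fraction of target earthquakes missed (nu), and a trajectory below the diagonal indicates forecasting skill. The toy implementation below uses synthetic cells, strain-rate scores and earthquake locations invented for illustration, not the authors' data:

```python
import numpy as np

def molchan_trajectory(alarm_score, eq_cells):
    """Molchan error diagram trajectory.  alarm_score: per-cell precursor
    value (here a stand-in for maximum shear strain rate); eq_cells:
    indices of cells that hosted target earthquakes."""
    order = np.argsort(alarm_score)[::-1]          # cells sorted by decreasing score
    rank = np.empty_like(order)
    rank[order] = np.arange(len(order))            # rank 0 = highest score
    n_cells, n_eq = len(alarm_score), len(eq_cells)
    taus, nus = [], []
    for k in range(n_cells + 1):                   # alarm = k highest-score cells
        taus.append(k / n_cells)                   # fraction of space alarmed
        nus.append(np.sum(rank[eq_cells] >= k) / n_eq)  # fraction of quakes missed
    return np.array(taus), np.array(nus)

# toy example: earthquakes preferentially occur in high strain-rate cells
rng = np.random.default_rng(0)
strain = rng.gamma(2.0, 1.0, size=1000)
eq = rng.choice(1000, size=50, replace=False, p=strain / strain.sum())
tau, nu = molchan_trajectory(strain, eq)
```

Because the synthetic earthquakes are drawn preferentially from high-score cells, the trajectory runs below the diagonal, the signature of a positive correlation between the precursor field and seismicity.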
How to cite: Xiong, Z. and Zhuang, J.: Crustal strain-rate fields estimated from GNSS data with a Bayesian approach and its correlation to seismic activity, EGU General Assembly 2021, online, 19–30 Apr 2021, EGU21-2873, https://doi.org/10.5194/egusphere-egu21-2873, 2021.
Earthquakes are natural hazards that occur suddenly and without much notice. The most established method of detecting earthquakes is to use a network of seismometers. Nowadays, station positions of the global navigation satellite system (GNSS) can be determined with a high accuracy of a few centimetres or even millimetres. This high accuracy, together with the dense global coverage, makes it possible to also use GNSS station networks to investigate geophysical phenomena such as earthquakes. Absolute ground movements caused by earthquakes are reflected in the GNSS station coordinate time series and can be characterised using statistical methods or machine learning techniques.
In this work, we have used thousands of time series of GNSS station positions distributed all over the world to detect and classify earthquakes. We apply a variety of machine learning algorithms that enable large-scale processing of the time series in order to identify spatio-temporal patterns. Several machine learning algorithms, including Random Forest, Nearest Neighbours, and Multi-Layer Perceptron, are compared against each other, as well as against classical statistical methods, based on their performance in detecting earthquakes from the station coordinate time series.
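A comparison of this kind can be sketched with scikit-learn on synthetic data, where a coseismic event appears as a step offset in a noisy position time series. The window length, offset sizes and noise model below are illustrative assumptions, not the configuration used by the authors:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(42)

def make_series(has_quake, n=120):
    """Synthetic daily station position (mm): slow drift + noise,
    plus a coseismic step offset if has_quake is True."""
    x = rng.normal(0, 2.0, n).cumsum() * 0.05 + rng.normal(0, 1.0, n)
    if has_quake:
        t0 = rng.integers(30, 90)          # event epoch within the window
        x[t0:] += rng.uniform(5, 15)       # step offset (mm)
    return x

X = np.array([make_series(i % 2 == 1) for i in range(600)])
y = np.arange(600) % 2                     # 1 = earthquake in window
Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.25, random_state=0)

models = {
    "RandomForest": RandomForestClassifier(n_estimators=200, random_state=0),
    "kNN": KNeighborsClassifier(5),
    "MLP": MLPClassifier(hidden_layer_sizes=(50,), max_iter=1000, random_state=0),
}
scores = {name: accuracy_score(yte, m.fit(Xtr, ytr).predict(Xte))
          for name, m in models.items()}
```

On real GNSS coordinate time series, feature engineering (detrending, offsets relative to a running median) and class imbalance handling would matter far more than in this clean toy setting.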
How to cite: Crocetti, L., Schartner, M., and Soja, B.: Detecting earthquakes in GNSS station coordinate time series using machine learning algorithms, EGU General Assembly 2021, online, 19–30 Apr 2021, EGU21-1975, https://doi.org/10.5194/egusphere-egu21-1975, 2021.
The ultimate goal of paleoseismology is to estimate the strength, location and timing of future large earthquakes. Paleoseismic records can deliver an accurate description of past hazards, which is the first step towards that goal. To understand how reliably past events forecast the timing of future events, paleoseismologists characterise the periodicity of earthquake sequences. The periodicity is often expressed by the coefficient of variation (CoV, the mean-normalised standard deviation) of recurrence intervals. Depending on the CoV, fault rupture behaviour is called periodic, random or clustered. However, sedimentary records rely on age models, which assign age and age uncertainty to events in various ways, so it is unclear how well the CoV can be estimated.
Most onshore paleoseismic studies rely on sedimentary sequences, where sedimentation rate is highly variable and cannot be used as a constraint (Bayesian prior) in age-depth modelling. In these onshore studies, event ages are often determined by dates constraining the event age to a minimum and maximum age and making use of the stratigraphic order of event deposits. In contrast, marine and lacustrine paleoseismic records benefit from a more stable sedimentation rate, which is a suitable prior for Bayesian age-depth models, effectively decreasing age uncertainty. The sediment thickness between event deposits in subaqueous records can thus form a reasonable estimator of recurrence intervals (RI), i.e., the relative age of the events.
Different approaches are used to calculate RIs, affecting the reported CoV. For example, in the "best" age approach, a single "best" age (often the median of an age distribution) is assigned to each event, and the difference between consecutive best ages is the RI. In another approach, the RIs are calculated from the age differences within the Markov chain Monte Carlo iterations that make up the age model. The latter method draws on more information and gives a mathematically more correct estimate of the RIs by retaining the probabilistic nature of the event ages. This method can be applied through, e.g., the Difference() function in OxCal or through the subtraction of iteration ages of consecutive events (the Bacon.Age.d() function) in BACON age-depth models.
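The two approaches can be sketched as follows, with synthetic event-age draws standing in for real OxCal or BACON output (the number of events, age noise level and iteration count are invented for illustration):

```python
import numpy as np

# age_draws[i, j]: age of event i in MCMC iteration j of an age-depth model
rng = np.random.default_rng(1)
true_ages = np.cumsum(rng.gamma(4.0, 250.0, size=12))   # quasi-periodic sequence
age_draws = true_ages[:, None] + rng.normal(0, 100.0, size=(12, 5000))

# (a) "best" age approach: one median age per event, RIs from the differences
best_ages = np.median(age_draws, axis=1)
ri_best = np.diff(best_ages)
cov_best = np.std(ri_best) / np.mean(ri_best)

# (b) per-iteration approach (cf. OxCal Difference(), BACON Bacon.Age.d()):
# RIs computed inside every MCMC iteration, yielding a distribution of CoVs
ri_draws = np.diff(age_draws, axis=0)                   # shape (n_events-1, n_iter)
cov_draws = np.std(ri_draws, axis=0) / np.mean(ri_draws, axis=0)
cov_median = np.median(cov_draws)
ci_68 = np.percentile(cov_draws, [16, 84])              # credible interval on CoV
```

Approach (b) returns not a single number but a posterior distribution of the CoV, which makes the contribution of age uncertainty to the recurrence statistics explicit.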
To quantify the effect of age uncertainty on the CoVs of earthquake sequences, we first describe the uncertainty in CoVs from various synthetic earthquake recurrence patterns without age uncertainty (control). Then we simulate the effects that age uncertainty in paleoseismic records can have on earthquake sequence statistics. We evaluate when ignoring the age uncertainty while calculating the CoV is a convenient and appropriate shortcut, and when it can cause considerably different results, by discussing various natural cases from the literature.
How to cite: Kempf, P. and Moernaut, J.: Understanding the Effect of Age Uncertainty in Recurrence Analysis of Paleoseismic Records, EGU General Assembly 2021, online, 19–30 Apr 2021, EGU21-7929, https://doi.org/10.5194/egusphere-egu21-7929, 2021.
Statistical seismology relies on earthquake catalogs that are as homogeneous and complete as possible. However, heterogeneities in earthquake data compilation and reporting are common and frequently go unnoticed.
The Global Centroid Moment Tensor Catalog (www.globalcmt.org) is considered the most homogeneous global database for large and moderate earthquakes that have occurred since 1976, and it has been used for developing and testing global and regional forecast models.
Changes in the method used for calculating the moment tensors (along with improvements in global seismological monitoring) define four eras in the catalog (1976, 1977-1985, 1986-2003 and 2004-present). Improvements are particularly stark since 2004, when intermediate-period surface waves started to be used for calculating the centroid solutions.
Fixed centroid depths, used when the solution for a free depth did not converge, have followed diverse criteria depending on the era. Depth had to be fixed mainly for shallow earthquakes, so this issue is more common in, e.g., the shallow parts of subduction zones than in the deep ones. Until 2003, 53% of the centroids had depths calculated as a free parameter, compared to 78% since 2004.
Rake values have not been calculated homogeneously either. Until 2003, the vertical-dip-slip components of the moment tensor were assumed to be null when they could not be constrained by the inversion (for 3.3% of the earthquakes). This caused an excess of pure focal mechanisms: rakes of -90° (normal), 0° or ±180° (strike-slip) or +90° (thrust). Even disregarding such events, the rake histograms until 2003 and since 2004 are not equivalent to each other.
The magnitude of completeness (Mc) of the catalog is analyzed here separately for each era. It has clearly improved over time (average Mc values being ~6.4 in 1976, ~5.7 in 1977-1985, ~5.4 in 1986-2003, and ~5.0 since 2004). Maps of Mc for different eras show significant spatial variations.
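As a minimal illustration of estimating Mc from a catalog, a maximum-curvature estimator can be sketched as below. This is a standard choice (the mode of the binned frequency-magnitude distribution plus a conventional +0.2 correction); the abstract does not state which method the author used, and the catalog here is synthetic:

```python
import numpy as np

def mc_maxc(mags, bin_width=0.1, correction=0.2):
    """Maximum-curvature Mc: left edge of the modal magnitude bin,
    plus a conventional correction term."""
    edges = np.arange(mags.min(), mags.max() + bin_width, bin_width)
    counts, _ = np.histogram(mags, bins=edges)
    return edges[np.argmax(counts)] + correction

# synthetic Gutenberg-Richter catalog (b = 1), complete above mc_true
rng = np.random.default_rng(3)
mc_true = 5.0
full = mc_true + rng.exponential(1.0 / np.log(10), size=20000)   # b = 1
# mimic incompleteness: small events are recorded with decaying probability
small = mc_true - rng.uniform(0, 1.0, size=20000)
keep = rng.random(20000) < np.exp(-3.0 * (mc_true - small))
catalog = np.concatenate([full, small[keep]])
mc_est = mc_maxc(catalog)
```

Applied per era and per spatial cell, such an estimator yields the kind of temporal averages and Mc maps discussed in the abstract, although era-dependent network changes make the choice of estimator and bin width consequential.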
How to cite: González, Á.: The Global Centroid Moment Tensor Catalog: Heterogeneities and improvements, EGU General Assembly 2021, online, 19–30 Apr 2021, EGU21-3038, https://doi.org/10.5194/egusphere-egu21-3038, 2021.