The analysis of the spatiotemporal evolution of seismicity and the development of physical
and statistical models of seismicity have substantially improved our understanding of
earthquake occurrence. This endeavor has benefited considerably from the availability of
new techniques and high-resolution, high-quality datasets. However, our skill in forecasting
large earthquakes is still confined to the "low-probability" environment. Additional
challenges are posed by issues such as missing data, catalog quality, and biases affecting the
estimation of model parameters.
This session focuses on the most recent developments of models and techniques for
seismicity analysis, together with the main issues we need to be aware of. Specifically, it
will address the following topics:
• Advances in earthquake forecasting at different time scales;
• Advances in the analysis of spatiotemporal properties of seismicity;
• Earthquake statistics;
• Challenges affecting the analysis and modeling of spatiotemporal earthquake occurrence;
• Future perspectives in seismicity modeling;
• Is there life beyond ETAS?

Co-organized by NH4
Convener: Angela Stallone | Co-conveners: Ilaria Spassiani, Sebastian Hainzl, Jiancang Zhuang
Attendance Mon, 04 May, 10:45–12:30 (CEST)


Chat time: Monday, 4 May 2020, 10:45–12:30

D1713 | solicited
Arnaud Mignan and Marco Broccardo

In the last few years, deep learning has solved seemingly intractable problems, boosting the hope to find approximate solutions to problems that now are considered unsolvable. Earthquake prediction, the Grail of Seismology, is, in this context of continuous exciting discoveries, an obvious choice for deep learning exploration. We reviewed the literature of artificial neural network (ANN) applications for earthquake prediction (77 articles, 1994-2019 period) and found two emerging trends: an increasing interest in this domain over time, and a complexification of ANN models towards deep learning. Despite the relatively positive results claimed in those studies, we verified that far simpler (and traditional) models seem to offer similar predictive powers, if not better ones. Those include an exponential law for magnitude prediction, and a power law (approximated by a logistic regression or one artificial neuron) for aftershock prediction in space. Due to the structured, tabulated nature of earthquake catalogues, and the limited number of features so far considered, simpler and more transparent machine learning models than ANNs seem preferable at the present stage of research. Those baseline models follow first physical principles and are consistent with the known empirical laws of Statistical Seismology (e.g. the Gutenberg-Richter law), which are already known to have minimal abilities to predict large earthquakes.
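The "one artificial neuron" baseline the abstract refers to is simply a logistic regression on a single feature. A minimal sketch on synthetic data (all numbers hypothetical, not the study's catalogues) might look like:

```python
import math
import random

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train_one_neuron(X, y, lr=0.1, epochs=2000):
    """Logistic regression on a single feature: one artificial neuron."""
    w, b = 0.0, 0.0
    n = len(X)
    for _ in range(epochs):
        gw = gb = 0.0
        for xi, yi in zip(X, y):
            p = sigmoid(w * xi + b)
            gw += (p - yi) * xi
            gb += (p - yi)
        w -= lr * gw / n   # full-batch gradient descent step
        b -= lr * gb / n
    return w, b

# Synthetic "aftershock in this cell: yes/no" labels whose probability
# decays with log-distance from the rupture (a power law in distance).
random.seed(0)
X = [random.uniform(0.0, 5.0) for _ in range(500)]   # log10 distance
y = [1 if random.random() < sigmoid(2.0 - 1.5 * x) else 0 for x in X]
w, b = train_one_neuron(X, y)
```

The fitted weight should come out negative, reproducing the decay of aftershock probability with distance; nothing here requires a deep network.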

How to cite: Mignan, A. and Broccardo, M.: Neural Network Applications in Earthquake Prediction (1994-2019): Meta-Analytic & Statistical Insights on their Limitations, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-6851, https://doi.org/10.5194/egusphere-egu2020-6851, 2020.

D1714 | solicited | Highlight
Laura Gulia and Stefan Wiemer

Immediately after a large earthquake, the main question asked by the public and decision-makers is whether it was the mainshock or a foreshock to an even stronger event yet to come. So far, scientists can only offer empirical evidence from statistical compilations of past sequences, arguing that normally the aftershock sequence will decay gradually whereas the occurrence of a forthcoming larger event has a probability of a few per cent.

We analyse the average size distribution of aftershocks of the 2016 Amatrice–Norcia (Italy) and Kumamoto (Japan) earthquake sequences and we suggest that in many cases it may be possible to discriminate whether an ongoing sequence represents a decaying aftershock sequence or foreshocks to an upcoming large event.

We propose a simple traffic light classification (FTLS, Foreshock Traffic Light System) to assess in real time the level of concern about a subsequent larger event and test it against 58 sequences, achieving a classification accuracy of 95 per cent.

We finally test, in near-real-time, the performance of the FTLS on the 2019 Ridgecrest sequence, California: an Mw 6.4 followed, about 2 days later, by an Mw 7.1. We find that in the hours after the first Ridgecrest event (Mw 6.4), the b-value drops by 23% on average compared to the background value, resulting in a ‘red’ foreshock traffic light.

Mapping in space the changes in b, we identify an area to the north of the rupture plane as the most likely location of a subsequent event. The second mainshock of magnitude 7.1 then indeed occurred in this location and after this event, the b-value increased by 26 percent over the background value, resulting in a green traffic light state.
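The quantity driving this classification is the maximum-likelihood b-value and its change relative to the pre-mainshock background. A minimal sketch (the ±10% thresholds are assumed here for illustration, not taken from the published FTLS calibration):

```python
import math

def b_value(mags, mc, dm=0.1):
    """Aki-Utsu maximum-likelihood b-value for magnitudes >= mc,
    with the usual half-bin correction for magnitudes binned at dm."""
    m = [x for x in mags if x >= mc]
    mean_m = sum(m) / len(m)
    return math.log10(math.e) / (mean_m - (mc - dm / 2.0))

def traffic_light(b_sequence, b_background, tol=0.10):
    """FTLS-style classification on the relative b-value change."""
    change = (b_sequence - b_background) / b_background
    if change <= -tol:
        return "red"     # b-value drop: possible foreshock sequence
    if change >= tol:
        return "green"   # b-value rise: likely a decaying aftershock sequence
    return "orange"

# The Ridgecrest numbers quoted in the abstract: a 23% drop after the
# Mw 6.4, then a 26% rise after the Mw 7.1 (background b = 1.0 assumed).
state_after_64 = traffic_light(0.77, 1.0)   # "red"
state_after_71 = traffic_light(1.26, 1.0)   # "green"
```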

How to cite: Gulia, L. and Wiemer, S.: Real-time discrimination of earthquake foreshocks and aftershocks, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-7590, https://doi.org/10.5194/egusphere-egu2020-7590, 2020.

D1715 |
Eugenio Lippiello, Giuseppe Petrillo, Cataldo Godano, Lucilla de Arcangelis, Anna Tramelli, Eleftheria Papadimitriou, and Vassilis Karakostas

We show that short-term post-seismic incompleteness can be interpreted in terms of the overlap of aftershock coda waves. We use this information to develop a novel procedure that gives accurate occurrence probabilities of post-seismic strong ground shaking within 30 minutes after the mainshock. This approach uses, as its only input, the ground velocity recorded at a single station, without requiring that signals be transmitted to and processed by operational units. We will also discuss how this information can be implemented in the Epidemic-Type Aftershock Sequence model in order to reproduce statistical features, in time and magnitude, of recorded aftershocks.

Main references

de Arcangelis L., Godano C. & Lippiello E. (2018) The Overlap of Aftershock Coda Waves and Short-Term Postseismic Forecasting. Journal of Geophysical Research: Solid Earth, 123: 5661-5674, doi:10.1029/2018JB015518

Lippiello E., Petrillo G., Godano C., Tramelli A., Papadimitriou E. & Karakostas V. (2019) Forecasting of the first hour aftershocks by means of the perceived magnitude. Nature Communications, 10, 2953, doi:10.1038/s41467-019-10763-3

How to cite: Lippiello, E., Petrillo, G., Godano, C., de Arcangelis, L., Tramelli, A., Papadimitrou, E., and Karakostas, V.: The overlap of aftershock coda waves and forecasting the first hour aftershocks, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-2444, https://doi.org/10.5194/egusphere-egu2020-2444, 2020.

D1716 |
Max Wyss

The hypothesis that extrapolation of the Gutenberg-Richter (GR) relationship allows estimates of the probability of large earthquakes is incorrect. For nearly 200 faults for which the recurrence time, Tr (1/probability of occurrence), is known from trenching and geodetically measured deformation rates, it has been shown that Tr based on seismicity is overestimated typically by one order of magnitude or more. The reason for this is that there are not enough earthquakes along major faults. In some cases there are too few earthquakes for the fault to be mapped based on seismicity. Some examples are the following rupture segments of great faults: the 1717 Alpine Fault, the 1856 San Andreas, the 1906 San Andreas, the 2001 Denali earthquakes, for which geological Tr are 100 years to 300 years and seismicity Tr are 10,000 to 100,000 years. In addition, the hypothesis leads to impossible results when one considers the dependence of the b-value on stress. It has been shown that thrusts, strike-slip and normal faults have low, intermediate and high b-values, respectively. This implies that, regardless of local slip rates, the probability of large earthquakes predicted by the hypothesis is high, intermediate and low in thrust, strike-slip, and normal faulting, respectively. Measurements of recurrence probability show a different dependence: earthquake probability depends on slip rate. Finally, the hypothesis predicts different probabilities for large earthquakes, depending on the magnitude scale used. For the 1906 rupture segment, the difference in probability of an M8 earthquake is approximately a factor of 50, using the two available catalogs. Various countries measure earthquake magnitude on their own scale that is intended to agree with the ML scale of California or the MS scale of the USGS. However, it is not trivial to match a scale that is valid for a different region with different attenuation of seismic waves. 
As a result, some regional M-scales differ from the global MS scale, which yields different Tr for the same Mmax in the same region, depending on whether the global or local magnitude scale is used. Based on the aforementioned facts, the hypothesis that probabilities of large earthquakes can be estimated by extrapolating the GR relationship has to be abandoned.
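The extrapolation criticized here fits in one line: given a yearly Gutenberg-Richter fit log10 N(≥M) = a − bM, the implied recurrence time of events of magnitude ≥ m is Tr = 10^(bm − a) years. A sketch with hypothetical fault-level numbers illustrating the order-of-magnitude mismatch the abstract describes:

```python
def gr_recurrence_time(a, b, m):
    """Recurrence time (years) of events >= m implied by extrapolating a
    Gutenberg-Richter fit log10 N(>=M) = a - b*M, with N in events/year."""
    return 10.0 ** (b * m - a)

# Hypothetical fault segment: one M>=3 event per year (a = 3 for b = 1)
# extrapolates to an M8 recurrence time of 100,000 years, even where
# trenching might indicate a few hundred years.
tr_m8 = gr_recurrence_time(3.0, 1.0, 8.0)
```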

How to cite: Wyss, M.: The probability of large earthquakes cannot be calculated from seismicity rates, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-3637, https://doi.org/10.5194/egusphere-egu2020-3637, 2020.

D1717 |
Ilya Zaliapin, Karla Henricksen, and Konstantin Zuev

We examine the space-time-magnitude distribution of earthquakes using the Gromov hyperbolic property of metric spaces. The Gromov δ-hyperbolicity quantifies the curvature of a metric space via the so-called four-point condition, a computationally convenient analog of the famous thin-triangle property. We estimate the standard and scaled values of the δ-parameter for the observed earthquakes of Southern California during 1981–2017 according to the catalog of Hauksson et al. [2012], for global seismicity according to the NCEDC catalog during 2000–2015, and for synthetic seismicity produced by the ETAS model with parameters fit for Southern California. In this analysis, a set of earthquakes is represented by a point field in the space-time-energy domain D. The Baiesi-Paczuski asymmetric proximity η, which has been shown to be efficient in applied cluster analysis of natural and human-induced seismicity and acoustic emission experiments, is used to quantify the distances between earthquakes. The analyses performed in the earthquake space (D,η) and in the corresponding proximity networks show that the earthquake field is strongly hyperbolic, i.e. characterized by small values of δ. We show that the Baiesi-Paczuski proximity is a natural approximation to a proper hyperbolic metric in the space-time-magnitude domain of earthquakes, with the b-value related to the space curvature. We discuss the hyperbolic properties in terms of the examined earthquake field. The results provide novel insight into the geometry and dynamics of seismicity and expand the list of natural processes characterized by underlying hyperbolicity.
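The four-point condition can be computed directly: for every quadruple of points, sort the three pairwise distance sums and take half the gap between the two largest. A brute-force sketch on generic metric inputs (not the η proximity of the abstract):

```python
from itertools import combinations

def gromov_delta(points, d):
    """delta-hyperbolicity via the four-point condition: the largest
    half-gap between the two biggest pairwise-sum combinations over
    all quadruples (brute force, fine for small point sets)."""
    delta = 0.0
    for w, x, y, z in combinations(points, 4):
        s = sorted([d(w, x) + d(y, z), d(w, y) + d(x, z), d(w, z) + d(x, y)])
        delta = max(delta, (s[2] - s[1]) / 2.0)
    return delta

# Tree-like metrics are 0-hyperbolic, e.g. points on a line:
line_delta = gromov_delta([0.0, 1.0, 2.0, 3.0], lambda u, v: abs(u - v))
```

Small δ relative to the diameter of the point set is what "strongly hyperbolic" means here; a flat configuration such as the corners of a Euclidean square yields a strictly positive δ.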

How to cite: Zaliapin, I., Henricksen, K., and Zuev, K.: Hyperbolic geometry of earthquake networks, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-12091, https://doi.org/10.5194/egusphere-egu2020-12091, 2020.

D1718 |
Pierre Dublanchet

The magnitudes of earthquakes are known to follow a power-law distribution, where the frequency of earthquake occurrence decreases with magnitude. This decay is usually characterized by the power exponent, the so-called b-value. Typical observations report b-values in the range 0.5-2. The origin of b-value variations is, however, still debated. Seismological observations of natural seismicity indicate a dependence of the b-value on depth and on faulting style, which could be interpreted as a signature of stress dependence. Within creeping regions of major tectonic faults, the b-value of microseismicity increases with creep rate. A stress-dependent b-value of acoustic emissions is also commonly reported during rock-failure experiments in the laboratory. Natural and laboratory observations all support a decrease of b-value with increasing differential stress. I report here on the origin of b-value variations obtained in a fault model consisting of a planar 2D rate-and-state frictional fault embedded between 3D elastic slabs. This model assumes heterogeneous frictional properties in the form of overlapping asperities with size-dependent critical slip distance distributed on a creeping segment. This allows us to obtain complex sequences of earthquakes characterized by realistic b-values. The roles of frictional heterogeneity, normal stress, shear stress, and creep rate in the b-value variations are systematically explored. It is shown that the size distribution of asperities is not the only feature controlling the b-value, which indicates an important contribution from partial ruptures and cascading events. In this model, cascades of events (and thus the b-value) are strongly influenced by frictional heterogeneity and normal stress through the fracture-energy distribution.
While the decrease of b-value with differential stress is reproduced in these simulations, it is also shown that part of the b-value fluctuations can be attributed to changes of nucleation length and stress drop with normal stress. A slight increase of b-value with slip rate exists but remains an order of magnitude smaller than the observations.

How to cite: Dublanchet, P.: What controls b-value variations: insights from a physics based numerical model, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-11079, https://doi.org/10.5194/egusphere-egu2020-11079, 2020.

D1719 |
Hossein Ebrahimian, Fatemeh Jalayer, and Hamid Zafarani


The implementation of short-term forecasts for emergency response management in the immediate aftermath of a seismic event, and in the presence of an ongoing seismic sequence, requires two basic components: scientific advisories expressed in terms of risk assessment, and protocols that establish how the scientific results can be translated into decisions/actions for risk mitigation. The operational earthquake forecasting framework is geared towards providing scientific advisories, in the form of time-dependent probabilities for seismicity, hazard and risk, that can be practically translated into decisions. Considering the triggered sequence of aftershocks in post-event decision-making and in the prioritization of emergency operations still seems to deserve much more attention. To this end, we adopt a novel, fully probabilistic procedure that provides spatio-temporal predictions of aftershock occurrence in a prescribed forecasting time interval (on the order of hours or days). The procedure aims to exploit the information provided by the ongoing seismic sequence in quasi-real time, considering the time needed for registering and transmitting the data. The versatility of Bayesian inference is exploited to adaptively update the forecasts as incoming information becomes available. The aftershock clustering in space and time is modelled based on an Epidemic-Type Aftershock Sequence (ETAS) model. One of the main novelties of the proposed procedure is that it considers the uncertainties in the aftershock occurrence model and its parameters. This is done within a framework of robust reliability assessment, which enables the treatment of uncertainties in an integrated manner.
Pairing the Bayesian robust reliability framework with suitable simulation schemes (Markov Chain Monte Carlo) makes it possible to perform the whole forecasting procedure with minimal (or no) human intervention.


This procedure is demonstrated through a retrospective application to early forecasting of seismicity associated with the 2017 Sarpol-e Zahab seismic sequence. On Sunday, November 12, 2017, at 18:18:16 UTC (21:48:16 local time), a strong earthquake with Mw 7.3 occurred in western Iran, in the border region between Iran and Iraq, in the vicinity of the town of Sarpol-e Zahab. This catastrophic seismic event caused 572 casualties, thousands of injuries and vast amounts of damage to buildings, houses and infrastructure in the epicentral area. The mainshock of this sequence was felt throughout the western and central provinces of Iran and surrounding areas. The main event was preceded by a foreshock of magnitude 4.5 about 43 minutes before the mainshock, which warned local residents to leave their homes and possibly reduced the number of human casualties. More than 2500 aftershocks with magnitude greater than 2.5 were reported up to January 2019, with the largest registered aftershock of Mw 6.4. The fully simulation-based procedure is examined for both Bayesian model updating of the ETAS spatio-temporal model and robust operational forecasting of the number of events of interest expected to happen in various time intervals after the main events within the sequence. The seismicity is predicted within a confidence interval around the mean estimate.
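The clustering model at the core of such a procedure is the temporal ETAS conditional intensity, λ(t) = μ + Σᵢ K e^{α(mᵢ−m₀)} (t−tᵢ+c)^{−p}. A minimal sketch with placeholder parameter values (not those inferred for the Sarpol-e Zahab sequence):

```python
import math

def etas_intensity(t, history, mu=0.2, K=0.02, alpha=1.5, c=0.01, p=1.2, m0=3.0):
    """Temporal ETAS conditional intensity (events/day) at time t (days).
    history: list of (t_i, m_i) for past events; parameter values are
    placeholders for illustration only."""
    lam = mu   # background rate
    for ti, mi in history:
        if ti < t:
            # Omori-Utsu decay scaled by exponential productivity in magnitude
            lam += K * math.exp(alpha * (mi - m0)) * (t - ti + c) ** (-p)
    return lam

# One Mw 7.3 mainshock at t = 0: the rate an hour later dwarfs the
# background and decays over the following day.
rate_1h = etas_intensity(1.0 / 24.0, [(0.0, 7.3)])
rate_1d = etas_intensity(1.0, [(0.0, 7.3)])
```

The Bayesian scheme described in the abstract treats (μ, K, α, c, p) as uncertain and updates their joint posterior as the sequence unfolds, rather than fixing them as done here.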

How to cite: Ebrahimian, H., Jalayer, F., and Zafarani, H.: Operational Aftershock Forecasting for Mw7.3 Sarpol-e Zahab (2017) Earthquake in Western Iran, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-20443, https://doi.org/10.5194/egusphere-egu2020-20443, 2020.

D1720 |
Stephen Bourne and Steve Oates

Geological faults may fail and produce earthquakes due to external stresses induced by hydrocarbon recovery, geothermal extraction, CO2 storage or subsurface energy storage. The associated hazard and risk critically depend on the spatiotemporal and size distribution of any induced seismicity. The observed statistics of induced seismicity within the Groningen gas field evolve as non-linear functions of the poroelastic stresses generated by pore pressure depletion since 1965. The rate of earthquake initiation per unit stress has systematically increased as an exponential-like function of cumulative incremental stress over at least the last 25 years of gas production. The expected size of these earthquakes also increased in a manner consistent with a stress-dependent tapering of the seismic moment power-law distribution. Aftershocks of these induced earthquakes are also observed, although evidence for any stress-dependent aftershock productivity or spatiotemporal clustering is inconclusive.

These observations are consistent with the reactivation of a mechanically disordered fault system characterized by a large, stochastic prestress distribution. If this prestress variability significantly exceeds the induced stress loads, as well as the earthquake stress drops, then the space-time-size distribution of induced earthquakes may be described by mean field theories within statistical fracture mechanics. A probabilistic seismological model based on these theories matches the history of induced seismicity within the Groningen region and correctly forecasts the seismicity response to reduced gas production rates designed to lower the associated seismic hazard and risk.
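The "stress-dependent tapering of the seismic moment power-law distribution" refers to the tapered Pareto family, whose survival function combines a power law with an exponential corner taper. A sketch with generic symbols (not the fitted Groningen values):

```python
import math

def tapered_gr_survival(M, M_t, beta, M_c):
    """P(moment >= M) for the tapered Pareto (Kagan-type) distribution:
    power-law exponent beta above a threshold moment M_t, exponential
    taper at corner moment M_c. All moments in N*m, M >= M_t."""
    return (M_t / M) ** beta * math.exp((M_t - M) / M_c)

# A smaller corner moment suppresses the largest events; in the model
# described above the corner would evolve with the induced stress
# (hypothetical numbers below, for shape only).
big_tail = tapered_gr_survival(1e17, 1e12, 0.67, 1e18)
small_tail = tapered_gr_survival(1e17, 1e12, 0.67, 1e16)
```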

How to cite: Bourne, S. and Oates, S.: Statistical mechanics-based forecasting of induced seismicity within the Groningen gas field, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-22047, https://doi.org/10.5194/egusphere-egu2020-22047, 2020.

D1721 |
Eyup Sopaci and Atilla Arda Özacar

The clock of an earthquake can be advanced by dynamic and static changes when a triggering signal is applied to a stress-loading fault. While static effects decrease rapidly with distance, dynamic effects can reach thousands of kilometers away. Therefore, earthquake triggering is traditionally associated with static stress changes at local distances and with dynamic effects at greater scales. However, static and dynamic effects near the triggering signal are often nested, so identifying which effect dominates becomes unclear. So far, earthquake triggering has been tested using different rate-and-state friction (RSF) laws utilizing alternative views of friction, without much comparison. In this study, the analogy of an earthquake is simulated using single-degree-of-freedom spring-block systems governed by three different RSF laws, namely “Dieterich”, “Ruina” and “Perrin”. First, the fault systems are evolved until they reach a stable limit cycle, and then static, dynamic and combined perturbations are applied as triggering signals. During synthetic simulations, the effects of the triggering-signal parameters (onset time, size, duration and frequency) and the fault-system parameters (fault stiffness, characteristic slip distance, direct velocity effect and time-dependent state effect) are tested separately. Our results indicate that earthquake triggering is controlled mainly by the onset time, size and duration of the triggering signal, but is not very sensitive to the signal frequency. In terms of fault-system parameters, the fault stiffness and the direct velocity effect are the critical parameters in triggering processes. Among the tested RSF laws, the “Ruina” law is more sensitive than the “Dieterich” law to both static and dynamic changes, and “Perrin” is apparently the most sensitive law to dynamic changes.
In particular, when the triggering onset time is close to the unperturbed failure time (future earthquake), dynamic changes result in the largest clock advance; otherwise, static stress changes are substantially more effective. In the next step, realistic models will be established to simulate the effect of the recent (26 September 2019) Marmara earthquake (Mw 5.7) on the locked Kumburgaz segment of the North Anatolian Fault Zone. The triggering earthquake will be simulated by combining the static stress change computed via the Coulomb law and the dynamic effects using ground motions recorded at broadband seismic stations within similar distances. The outcomes will help us better understand the effects of static and dynamic changes on the seismic cycle of the Kumburgaz fault segment, which is expected to break soon with a possibly large earthquake, causing damage in the metropolitan area of Istanbul, Turkey.
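For reference, the two most common state-evolution laws compared here differ only in dθ/dt; both share the steady state θss = Dc/v. A minimal sketch (the "Perrin" law is omitted; the numerical values are placeholders, not the study's simulation parameters):

```python
import math

def dtheta_dt_aging(theta, v, dc):
    """Dieterich (aging) law: the state variable heals even at rest."""
    return 1.0 - v * theta / dc

def dtheta_dt_slip(theta, v, dc):
    """Ruina (slip) law: no state evolution without slip."""
    x = v * theta / dc
    return -x * math.log(x)

# Both laws vanish at the common steady state theta_ss = dc / v:
dc, v = 0.01, 1e-6   # placeholder slip distance (m) and slip rate (m/s)
theta_ss = dc / v
```

The different healing behavior at near-zero slip rate is one reason the laws respond differently to static versus dynamic perturbations.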

How to cite: Sopaci, E. and Özacar, A. A.: Investigation of Dynamic and Static Effects on Earthquake Triggering Using Different Rate and State Friction Laws and Marmara Simulation, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-533, https://doi.org/10.5194/egusphere-egu2020-533, 2020.

D1722 |
Katerina Orfanogiannaki and Dimitris Karlis

Modeling seismicity data is challenging and remains a subject of ongoing research. Assumptions about the distribution of earthquake numbers play an important role in seismic hazard and risk analysis. The most common distribution that has been widely used in modeling earthquake numbers is the Poisson distribution, because of its simplicity and ease of use. However, the heterogeneity in earthquake data and the temporal dependencies that are often present in many real earthquake sequences make the Poisson distribution inadequate. We therefore propose the use of a Hidden Markov model (HMM) with state-specific Negative Binomial distributions, in which some states are allowed to approach the Poisson distribution. An HMM is a generalization of a mixture model in which the different unobservable (hidden) states are related through a Markov process rather than being independent of each other. We parameterize the Negative Binomial distribution in terms of the mean and the dispersion (clustering) parameter. Maximum likelihood estimates of the model parameters are obtained through an Expectation-Maximization (EM) algorithm.

We apply the model to real earthquake data. We have selected the area of Killini, Western Greece, to test the proposed hypothesis, based on the fact that, within a time window of 17 years, three clusters of seismicity associated with strong mainshocks are included in the catalog. Application of the model to the data resulted in three states, representing different levels of seismicity (low, medium, high). The state that corresponds to the low seismicity level approaches the Poisson distribution, while the other two states (medium and high) follow the Negative Binomial distribution. This result is consistent with the nature of the data: the within-state variation that the Negative Binomial distribution introduces to the model is greater in the states of medium and high seismicity.
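The mean-dispersion parameterization used here, and its Poisson limit, can be written compactly. A sketch with hypothetical parameter values:

```python
import math

def nb_pmf(k, mu, r):
    """Negative Binomial pmf with mean mu and dispersion r:
    variance = mu + mu**2 / r, so r -> infinity recovers the Poisson."""
    log_p = (math.lgamma(k + r) - math.lgamma(r) - math.lgamma(k + 1)
             + r * math.log(r / (r + mu)) + k * math.log(mu / (r + mu)))
    return math.exp(log_p)

def poisson_pmf(k, mu):
    return math.exp(-mu) * mu ** k / math.factorial(k)

# A "low seismicity" state with huge dispersion behaves like a Poisson,
# while a small r produces the overdispersed counts of the cluster states.
```

In the HMM, each hidden state carries its own (mu, r) pair, and the EM algorithm fits them together with the Markov transition probabilities.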

How to cite: Orfanogiannaki, K. and Karlis, D.: Modeling earthquake numbers by Negative Binomial Hidden Markov models , EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-1182, https://doi.org/10.5194/egusphere-egu2020-1182, 2020.

D1723 |
Behnam Maleki Asayesh, Hamid Zafarani, and Mohammad Tatar

Immediately after a large earthquake, accurate prediction of the spatial and temporal distribution of aftershocks is of great importance for planning search and rescue activities. Currently, the most sophisticated approach to this goal is probabilistic aftershock hazard assessment (PASHA). The spatial distribution of aftershocks following moderate to large earthquakes correlates well with the stress imparted by the mainshock. Furthermore, the secondary static stress changes caused by smaller events (aftershocks) can affect the triggering of aftershocks and should be considered in the calculations. The 26 December 2003 (Mw 6.6) Bam earthquake, with more than 26,000 casualties, is one of the most destructive events in the recorded history of Iran. This earthquake was an interesting event and has been investigated from many aspects. A good variable-slip fault model and precise aftershock data enabled us to compute the Coulomb stress changes, due to the mainshock and to secondary static stress triggering, on the nodal planes of the aftershocks, to learn whether they were brought closer to failure.

We used recently published high-quality focal mechanisms and hypocenters to reassess the role of small to moderate earthquakes in the static stress triggering of aftershocks of the Bam earthquake. By imparting the Coulomb stress changes due to the mainshock on the nodal planes of the 158 aftershocks, we showed that 77.8% (123 of 158) of the aftershocks received positive stress changes on at least one nodal plane. We also calculated the Coulomb stress changes imparted by the mainshock and aftershocks (1≤M≤4.1) onto subsequent aftershock nodal planes and found that 81.6% (129 of 158) of aftershocks received positive stress changes on at least one nodal plane. In summary, 77.8% of aftershocks are encouraged by the mainshock alone, while adding secondary stress encourages 81.6%. Therefore, adding secondary stress increases the Coulomb Index (CI), the fraction of events that received net positive Coulomb stress changes relative to the total number of events, from 0.778 to 0.816.
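The bookkeeping behind the Coulomb Index is straightforward. A sketch with hypothetical stress values that reproduce only the quoted totals (the real calculation resolves full stress tensors on each nodal plane):

```python
def coulomb_stress_change(d_tau, d_sigma_n, mu_eff=0.4):
    """Delta CFS = change in shear stress + effective friction times the
    change in normal stress (unclamping positive); mu_eff = 0.4 is an
    assumed typical value, not taken from the study."""
    return d_tau + mu_eff * d_sigma_n

def coulomb_index(nodal_plane_pairs):
    """Fraction of aftershocks with positive Delta CFS on at least one
    of the two nodal planes."""
    pos = sum(1 for a, b in nodal_plane_pairs if a > 0.0 or b > 0.0)
    return pos / len(nodal_plane_pairs)

# 123 of 158 positively stressed events reproduce CI = 0.778:
pairs = [(0.1, -0.2)] * 123 + [(-0.1, -0.2)] * 35
ci = coulomb_index(pairs)
```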

How to cite: Maleki Asayesh, B., Zafarani, H., and Tatar, M.: Effects of secondary static stress triggering on the spatial distribution of aftershocks, a case study, 2003 Bam earthquake (SE Iran) , EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-1443, https://doi.org/10.5194/egusphere-egu2020-1443, 2020.

D1724 |
Myunghyun Noh

In most seismic studies, we prefer an earthquake catalog that covers a larger region and/or a longer period, and we usually combine two or more catalogs to achieve this goal. When combining catalogs, however, care must be taken because their completeness is not identical, and unexpected flaws may result.

We tested the effect of combining inhomogeneous catalogs using the catalog of the Korea Meteorological Administration (KMA). KMA provides a single catalog containing the earthquakes that occurred in and around the whole Korean Peninsula. Like that of other seismic networks, however, the configuration of the KMA seismic network is not uniform over its target monitoring region, and neither is its earthquake detection capability. The network is denser on land than offshore, and no seismic information is available from North Korea. Based on this, we divided the KMA catalog into three sub-catalogs: SL, NL, and AO. The SL catalog contains the earthquakes that occurred in the land area of South Korea, the NL catalog those in the land area of North Korea, and the AO catalog all earthquakes that occurred offshore around the peninsula.

The completeness of a catalog is expressed in terms of mc, the minimum magnitude above which no earthquakes are missing. We used the Chi-square algorithm of Noh (2017) to estimate mc. It turned out, as expected, that the mc of the SL catalog is the smallest of the three, while those of NL and AO are comparable. The mc of the catalog combining SL and AO is larger than those of the individual catalogs before combining, and the mc is largest when all three are combined. If one needs a more complete catalog, it is better to divide the catalog into smaller ones based on the spatiotemporal detectability of the seismic network; alternatively, one may combine several catalogs to cover a larger region or a longer period at the expense of catalog completeness.
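The effect can be illustrated with a toy completeness estimator. The sketch below uses the simple maximum-curvature proxy (the most populated magnitude bin) rather than the Chi-square method of Noh (2017), on invented sub-catalogs:

```python
from collections import Counter

def mc_maxc(mags, dm=0.1):
    """Maximum-curvature proxy for the completeness magnitude mc:
    the most populated magnitude bin (illustration only)."""
    binned = [round(m / dm) * dm for m in mags]
    return Counter(binned).most_common(1)[0][0]

# A dense on-land sub-catalog complete to ~1.5 and a sparse offshore one
# complete to ~2.5 (event counts are invented):
land = [1.5] * 100 + [1.6] * 80 + [2.0] * 30 + [2.5] * 10
sea = [2.5] * 40 + [2.6] * 30 + [3.0] * 10
combined = land + sea
```

A naive estimate on the combined catalog still reports mc ≈ 1.5 even though offshore events below 2.5 are systematically missing; the merged catalog is only truly complete above the larger sub-catalog mc, which is the flaw the abstract warns about.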

How to cite: Noh, M.: Effect of Combining Catalogs with Different Completeness, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-1749, https://doi.org/10.5194/egusphere-egu2020-1749, 2020.

D1725 |
Shubham Sharma, Sebastian Hainzl, Gert Zöller, and Matthias Holschneider

The Coulomb failure stress (CFS) criterion is the most commonly used method for predicting the spatial distribution of aftershocks following large earthquakes. However, large uncertainties are always associated with the calculation of Coulomb stress change. The uncertainties arise from non-unique slip inversions and unknown receiver fault mechanisms; for the latter in particular, uncertainties depend strongly on the choice of the assumed receiver mechanism. There are two ways of defining the receiver faults: either by predefining fault kinematics from geological constraints, or by calculating optimally oriented planes. Both are rather unrealistic, as real aftershocks show variable rupture mechanisms. Recent studies have proposed an alternative method based on deep learning to forecast aftershocks. Using a binary test (aftershocks yes/no), it has been shown that this method, as well as alternative stress values such as the maximum shear or von Mises criteria, is more accurate and reliable than the classical CFS criterion with a predefined receiver mechanism.

Here we use 351 slip inversions from the SRCMOD database to calculate the Coulomb failure stress in a layered half-space, using variable receiver mechanisms as well as the proposed alternative stress metrics. We also perform tests for different magnitude cut-offs, grid-size variations, and aftershock durations to verify the use of ROC analysis for ranking the stress metrics. The observations suggest that introducing a layered half-space does not improve the stress maps and ROC curves. However, the magnitude cut-off and aftershock duration do affect the efficiency of a stress metric, in that larger magnitudes and shorter aftershock durations are forecast more efficiently. Two alternative statistical tests, i.e. log-likelihood and information-gain tests using rate-based (non-binary) forecasts, are also performed to compare the ability of the metrics to discriminate between regions with and without aftershocks. The results suggest that simple methods of stress calculation perform better than the classic Coulomb failure stress calculation.
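The binary ROC ranking used above reduces to a pairwise comparison: the AUC is the probability that a cell containing aftershocks scores higher than a cell without. A sketch with invented stress values:

```python
def roc_auc(scores_pos, scores_neg):
    """AUC by exhaustive pairwise comparison (ties count one half);
    fine for small illustrative samples, O(n*m) in general."""
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            wins += 1.0 if p > n else (0.5 if p == n else 0.0)
    return wins / (len(scores_pos) * len(scores_neg))

# Hypothetical stress-metric values (MPa) in cells with/without aftershocks:
auc = roc_auc([0.5, 0.8, 0.3, 1.2], [-0.2, 0.1, -0.5, 0.4])
```

An AUC of 0.5 means no discrimination; ranking the candidate stress metrics by AUC is the binary test referred to in the abstract, which the log-likelihood and information-gain tests complement with rate-based forecasts.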

How to cite: Sharma, S., Hainzl, S., Zöller, G., and Holschneider, M.: Is Coulomb stress the best choice for aftershock forecasting?, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-4907, https://doi.org/10.5194/egusphere-egu2020-4907, 2020.

D1726 |
Corentin Gouache, Pierre Tinard, François Bonneau, and Jean-Marc Montel

Both French mainland and Lesser Antilles are characterized by sparse earthquake catalogues respectively due to the low-to-moderate seismic activity and the low recording historical depth. However, it is known that major earthquakes could strike French mainland (e.g. Ligure in 1887 or Basel in 1356) and even more French Lesser Antilles (e.g. Guadeloupe 1943 or Martinique 1839). Assessing seismic hazard in these territories is necessary to support building codes and prevention actions to population. One approach to estimate seismic hazard despite lack of data is to generate a set of plausible seismic scenarios over a large time span. A generator of earthquakes is thus presented in this paper. Its first step is to generate only main shocks. The second step consists of trigger aftershocks related to main shocks.
To draw the occurrence times of main shocks, frequencies are drawn and summed year by year. The frequencies are drawn, for each magnitude step, from probability density functions computed with the inter-event time method (Hainzl et al. 2006). By propagating the magnitude uncertainties contained in the initial catalogue through a Markov chain Monte Carlo procedure, each magnitude step has not a single main shock frequency but a distribution of frequencies. Once a main shock is temporally drawn, its 2D location is drawn based on the cumulative seismic moment recorded in each 5x5 km cell of the French territories. A seismotectonic zoning is used to limit both the spatial distribution and the magnitude of large earthquakes. Finally, the other parameters (strike, dip, rake and depth) are drawn from ranges of values depending on the seismotectonic zone where the main shock is located.
To trigger aftershocks from the main shocks, an approximation of the Båth law (Richter 1958; Båth 1965) is applied during the computation of the frequency-magnitude distributions. Thus, for each magnitude step, an α-value distribution is obtained, from which an α-value is drawn for each main shock. In this way, the maximum magnitude of triggered aftershocks is known.
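
The Båth-law step can be sketched as below, with the drawn gap playing the role of the abstract's α-value. The spread of 0.3 magnitude units is an illustrative assumption, not a value from the abstract.

```python
import numpy as np

rng = np.random.default_rng(42)

def max_aftershock_magnitude(m_main, gap_mean=1.2, gap_std=0.3):
    """Draw the largest aftershock magnitude for one main shock.

    Bath's law gives an average magnitude gap of ~1.2 between a main
    shock and its largest aftershock; here the gap is drawn from a
    normal distribution truncated at zero, so the largest aftershock
    never exceeds the main shock.
    """
    gap = max(0.0, rng.normal(gap_mean, gap_std))
    return m_main - gap

# Caps on aftershock magnitude for 1000 realisations of an M 6.5 main shock
caps = [max_aftershock_magnitude(6.5) for _ in range(1000)]
```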

How to cite: Gouache, C., Tinard, P., Bonneau, F., and Montel, J.-M.: Stochastic generator of earthquakes in French territories, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-5150, https://doi.org/10.5194/egusphere-egu2020-5150, 2020.

D1727 |
Christian Grimm, Martin Käser, and Helmut Küchenhoff

While Probabilistic Seismic Hazard Assessment is commonly based on earthquake catalogues in declustered form, ongoing seismicity in aftershock sequences is known to add significant hazard and can increase the damage potential to already affected structures in risk assessment. In particular, so-called earthquake doublets (multiplets), i.e. a cluster mainshock followed or preceded by one (or more) events of similarly strong magnitude occurring within pre-defined temporal and spatial limits, can cause loss-multiplication effects for the insurance industry, which therefore has a pronounced interest in how frequently earthquake doublets occur worldwide. A widely used method to analyse and simulate the triggering process of earthquake sequences is the Epidemic Type Aftershock Sequence (ETAS) model. We estimate the ETAS model parameters for several regions and produce synthetic catalogues, which are then analysed particularly with respect to the occurrence of earthquake doublets and compared to the observed history. Different subduction-type regions of the world have also been observed to show differing relative frequencies of earthquake doublets. Regression models are used to study whether certain mainshock and local geophysical properties, such as magnitude, dip and rake angle, depth, distance to the subduction plate interface and velocity of nearby converging subduction plates, have explanatory power for the probability of a cluster containing an earthquake doublet.
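
Identifying doublets under pre-defined temporal and spatial limits can be sketched as a pairwise scan of a catalogue. All three thresholds below are illustrative defaults, not the limits used in the study, and the distance is a rough equirectangular approximation.

```python
import numpy as np

def find_doublets(times, mags, lats, lons, dt_days=365.0, dist_km=100.0, dmag=0.4):
    """Flag event pairs qualifying as doublets under pre-defined windows.

    A pair counts as a doublet if the two events occur within dt_days of
    each other, within dist_km, and with magnitudes differing by at most
    dmag.  All thresholds here are illustrative, not from the abstract.
    """
    doublets = []
    for i in range(len(times)):
        for j in range(i + 1, len(times)):
            if abs(times[j] - times[i]) > dt_days:
                continue
            # Equirectangular approximation to epicentral distance (km)
            dy = (lats[j] - lats[i]) * 111.2
            dx = (lons[j] - lons[i]) * 111.2 * np.cos(np.radians(lats[i]))
            if np.hypot(dx, dy) <= dist_km and abs(mags[j] - mags[i]) <= dmag:
                doublets.append((i, j))
    return doublets

# Three events: the first two form a doublet, the third is too late and too far
times = np.array([0.0, 10.0, 400.0])
mags = np.array([7.0, 6.8, 7.1])
lats = np.array([0.0, 0.1, 5.0])
lons = np.array([0.0, 0.1, 5.0])
pairs = find_doublets(times, mags, lats, lons)
```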

How to cite: Grimm, C., Käser, M., and Küchenhoff, H.: Occurrence of earthquake doublets in the light of the ETAS model, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-5242, https://doi.org/10.5194/egusphere-egu2020-5242, 2020.

D1728 |
Leila Mizrahi, Shyam Nandan, and Stefan Wiemer

The Epidemic-Type Aftershock Sequence (ETAS) model is often used to describe the spatio-temporal distribution of earthquakes. A fundamental requirement for parameter estimation of the ETAS model is the completeness of the catalog above a magnitude threshold mc. mc is known to vary with time for reasons such as gradual improvement of the seismic network and short-term aftershock incompleteness. For simplicity, nearly all applications of the ETAS model assume a global magnitude of completeness for the entire training period. However, to be complete for the entire training period, the modeller is often forced to use very conservative estimates of mc, thereby completely ignoring the abundant, high-quality data from recent periods that fall below the assumed mc. Alternatively, to benefit from the abundance of smaller-magnitude earthquakes from the recent period in model training, the duration of the training period is often restricted. However, parameters estimated in this way may be dominated by one or two sequences and may not represent long-term behavior.

We developed an alternative formulation of ETAS parameter inversion using expectation maximization that accounts for a temporally variable magnitude of completeness. To test the adequacy of this technique, we evaluate its forecasting power on an ETAS-simulated synthetic catalog against an ETAS base model with a constant completeness magnitude. The synthetic dataset is designed to mimic the conditions in California, where mc since 1970 is estimated to be around 3.5, and where a general decreasing trend in the temporal evolution of mc can be observed. Both models are trained on the primary catalog with an identical time horizon. While the reference model is solely based on information about earthquakes of magnitude 3.5 and above, our alternative represents the completeness magnitude as a monotonically decreasing step function, starting at 3.5 and assuming values down to 2.1 in more recent times.
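
The step-function completeness described above can be sketched as a simple event filter. The variable names and the two-step example are illustrative, not taken from the authors' implementation.

```python
import numpy as np

def completeness_filter(times, mags, steps):
    """Keep events above a time-varying completeness magnitude.

    steps is a list of (start_time, mc) pairs sorted by time, defining a
    step function as in the abstract (e.g. mc = 3.5 early on, stepping
    down in recent times).  Event times must be at or after the first
    step's start time.
    """
    starts = np.array([s for s, _ in steps])
    mcs = np.array([m for _, m in steps])
    # For each event, find the step whose interval contains its time
    idx = np.searchsorted(starts, times, side="right") - 1
    return mags >= mcs[idx]

# An M 3.0 event is below mc in 1980 but above mc in 2005
times = np.array([1980.0, 2005.0, 2005.0])
mags = np.array([3.0, 3.0, 2.0])
mask = completeness_filter(times, mags, steps=[(1970.0, 3.5), (2000.0, 2.5)])
```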

To compare the two models, we issue forecasts by repeated probabilistic simulation of earthquake interaction scenarios, and evaluate those forecasts by assessing the likelihood of the actual occurrences under each of the alternatives. As a measure to quantify the difference in performance between the two models, we calculate the mean information gain due to model extension for different spatial resolutions, different temporal forecasting horizons, and different target magnitude ranges.
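
A minimal version of this likelihood-based comparison, assuming independent Poisson counts per bin, can be sketched as follows. The toy rates are invented for illustration only.

```python
import numpy as np

def information_gain_per_event(rates_a, rates_b, counts):
    """Mean information gain (nats per event) of forecast A over forecast B.

    rates_a, rates_b : positive forecast rates per space-time-magnitude bin
    counts           : observed event counts per bin
    Computed as the Poisson log-likelihood difference, normalised by the
    total number of observed events.
    """
    ll_a = np.sum(counts * np.log(rates_a) - rates_a)
    ll_b = np.sum(counts * np.log(rates_b) - rates_b)
    return (ll_a - ll_b) / counts.sum()

counts = np.array([3.0, 0.0, 1.0])
rates_a = np.array([3.0, 0.1, 1.0])    # concentrates rate where events occurred
rates_b = np.full(3, counts.sum() / 3) # uniform reference with the same total
gain = information_gain_per_event(rates_a, rates_b, counts)
# gain > 0 means forecast A assigns higher probability to what was observed
```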

Preliminary results suggest that the parameter bias introduced by successive application of simulation and inversion decreases exponentially with an increasing fraction of data used in the inversion. It is therefore expected that also the forecasting power of such a model increases with the amount of data available, indicating substantial importance of the method for the future of probabilistic seismic hazard assessment.

How to cite: Mizrahi, L., Nandan, S., and Wiemer, S.: How ETAS Can Leverage Completeness of Modern Seismic Networks Without Renouncing Historical Data, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-5827, https://doi.org/10.5194/egusphere-egu2020-5827, 2020.

D1729 |
Joern Davidsen, Cole Lord-May, Jordi Baro, and David Eaton

Earthquakes can be induced by natural and anthropogenic processes involving the injection or migration of fluids within rock formations. A variety of field observations has led to the formulation of three different and apparently contradicting paradigms in the estimation of the seismic hazard associated with fluid injections. Based on a unified conceptual model accounting for the non-homogeneous pore-pressure stimulation caused by fluid injection in a prestressed region, we show here that all three paradigms naturally coexist. The loading history and heterogeneity of the host medium determine which of the three paradigms prevails. This can be understood as a consequence of a superposition of two populations of events triggered at different pore-pressure levels with different Gutenberg-Richter b-values.

How to cite: Davidsen, J., Lord-May, C., Baro, J., and Eaton, D.: Seismic hazard due to fluid injections, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-6334, https://doi.org/10.5194/egusphere-egu2020-6334, 2020.

D1730 |
Mark Naylor, Kirsty Bayliss, Finn Lindgren, Francesco Serafini, and Ian Main

Many earthquake forecasting approaches have developed bespoke codes to model and forecast the spatio-temporal evolution of seismicity. At the same time, the statistics community has been working on a range of point-process modelling codes. For example, motivated by ecological applications, inlabru models spatio-temporal point processes as log-Gaussian Cox processes and is implemented in R. Here we present an initial implementation of inlabru to model seismicity. This fully Bayesian approach is computationally efficient because it uses an integrated nested Laplace approximation (INLA): posteriors are assumed to be Gaussian, so their means and standard deviations can be estimated deterministically rather than constructed through sampling. Further, building on existing R packages for handling spatial data, it can construct covariate maps from diverse data types, such as fault maps, in an intuitive and simple manner.

Here we present an initial application to the California earthquake catalogue to determine the relative performance of different datasets for describing the spatio-temporal evolution of seismicity.

How to cite: Naylor, M., Bayliss, K., Lindgren, F., Serafini, F., and Main, I.: Modelling Seismicity in California as a Spatio-Temporal Point Process Using inlabru: Insights for Earthquake Forecasting, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-8814, https://doi.org/10.5194/egusphere-egu2020-8814, 2020.

D1731 |
Gina-Maria Geffers, Ian Main, and Mark Naylor

The Gutenberg-Richter (GR) b-value represents the relative proportion of small to large earthquakes in a scale-free population. For tectonic seismicity it is often close to unity, but some studies have shown the b-value to be elevated (>1) in both volcanic and induced seismicity. However, many of these studies have used relatively small datasets – in both sample size and magnitude range – which easily introduces biases, for example from catalogues that are incomplete above the completeness magnitude Mc, the threshold above which all events are assumed to be recorded. At high magnitudes, the scale-free behaviour must break down because natural tectonic and volcano-tectonic processes are incapable of an infinite release of energy, although the magnitude at which this breakdown occurs is difficult to estimate accurately. In particular, it can be challenging to distinguish between unlimited scale-free behaviour and a physical roll-off at larger magnitudes. The latter model is often referred to as the modified Gutenberg-Richter (MGR) distribution.

We use the MGR distribution to describe the breakdown of scale-free behaviour at large magnitudes, introducing the roll-off parameter (θ) into the incremental distribution. Applying a maximum likelihood method to estimate the b-value implicitly assumes that the underlying model is GR; if this assumption is violated, the method returns a biased b-value rather than indicating that it is inappropriate for the underlying model. Using synthetic data and testing on various earthquake catalogues, we show that with few data and a narrow magnitude bandwidth it is statistically challenging to test whether a sample is representative of scale-free GR behaviour or is controlled primarily by the finite-size roll-off seen in MGR.
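
The bias described above can be reproduced with a minimal synthetic experiment. Here a hard magnitude cutoff stands in for the MGR roll-off; this is a deliberate simplification of the θ-parameterised form used in the abstract.

```python
import numpy as np

def b_value_mle(mags, mc):
    """Aki maximum-likelihood b-value for continuous magnitudes >= mc."""
    m = mags[mags >= mc]
    return np.log10(np.e) / (m.mean() - mc)

# Synthetic GR catalogue with true b = 1: the estimator recovers it...
rng = np.random.default_rng(1)
mags = 2.0 + rng.exponential(scale=1 / np.log(10), size=50_000)
b_full = b_value_mle(mags, mc=2.0)

# ...but a hard upper cutoff (a crude stand-in for the MGR roll-off)
# biases the same GR-based estimator upward, inflating the b-value:
b_trunc = b_value_mle(mags[mags <= 4.0], mc=2.0)
```

The truncated sample returns a b-value above the true value of 1, even though nothing about the small-magnitude statistics has changed, illustrating how finite-size roll-off masquerades as an elevated b-value.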

How to cite: Geffers, G.-M., Main, I., and Naylor, M.: Estimating b-values and biases in small earthquake catalogues, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-11642, https://doi.org/10.5194/egusphere-egu2020-11642, 2020.

D1732 |
Andreas Tzanis and Angeliki Efstathiou

We examine the association of recurrence intervals with the dynamic (entropic) states of shallow (crustal) and deep (sub-crustal) seismogenetic systems, simultaneously testing whether earthquakes are generated by Poisson processes and are independent (uncorrelated), or by complex processes and are dependent (correlated). To this end, we apply the q-exponential distribution to the statistical description of interevent times, focusing on the temporal entropic index (a measure of dynamic state) in connection with the q-relaxation interval, a characteristic recurrence interval intrinsically dependent on the dynamic state. We examine systems in different geodynamic settings of the northern Circum-Pacific Belt: transform plate boundaries and inland seismic regions of California, Alaska and Japan, convergent boundaries and Wadati-Benioff zones of the Aleutian, Ryukyu, Izu-Bonin and Honshū arcs, and the divergent boundary of the Okinawa Trough.

Our results indicate that the q-exponential distribution is a universal descriptor of interevent time statistics. The duration of q-relaxation intervals is reciprocal to the level of correlation, and both may change with time and across boundaries, so that neighbouring systems may co-exist in drastically different states. Crustal systems at transform boundaries are generally correlated through short- and long-range interaction; very strong correlation is quasi-stationary, and q-relaxation intervals are very short and increase extremely slowly with magnitude: this means that on the occurrence of any event, such systems respond swiftly by generating any magnitude anywhere within their boundaries. These are attributes expected of self-organized criticality (SOC). Crustal systems at convergent and divergent margins are no more than moderately correlated, and sub-crustal seismicity is definitely uncorrelated (quasi-Poissonian). In these cases q-relaxation intervals increase exponentially, but in Poissonian or weakly correlated systems their escalation is much faster than in moderately to strongly correlated ones. In consequence, moderate to strong correlation is interpreted to indicate complexity that could be sub-critical or non-critical, without a means of telling (for now). Blending earthquake populations from dynamically different fault networks randomizes the statistics of the mixed catalogue.
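
The contrast between Poissonian and correlated interevent-time statistics can be sketched with the Tsallis q-exponential. The relaxation scale of 10 time units is arbitrary; the comparison illustrates how an entropic index q > 1 encodes a heavy (correlated) tail, while q → 1 recovers the Poissonian exponential.

```python
import numpy as np

def q_exp(x, q):
    """Tsallis q-exponential e_q(x); reduces to exp(x) as q -> 1."""
    if abs(q - 1.0) < 1e-12:
        return np.exp(x)
    base = np.maximum(1.0 + (1.0 - q) * np.asarray(x, dtype=float), 0.0)
    with np.errstate(divide="ignore"):
        return np.where(base > 0, base ** (1.0 / (1.0 - q)), 0.0)

# Survival model for interevent times: P(>tau) ~ e_q(-tau / tau0)
tau = np.linspace(0.0, 50.0, 6)
poisson_tail = q_exp(-tau / 10.0, 1.0001)    # near-Poissonian decay
correlated_tail = q_exp(-tau / 10.0, 1.5)    # power-law (correlated) tail
```

At long waiting times the q = 1.5 tail stays well above the exponential one, which is the signature of correlated, non-Poissonian seismicity in this framework.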

A possible partial explanation of the observations is based on simulations of small-world fault networks and posits that free boundary conditions at the surface allow self-organization, and possibly criticality, to develop, while fixed boundary conditions at depth do not; this applies particularly to crustal transform systems. The information introduced by the q-relaxation interval may help improve the analysis of earthquake hazards, but its utility remains to be clarified.

Acknowledgement: This presentation is financially supported by the Special Account for Research Grants of the National and Kapodistrian University of Athens

How to cite: Tzanis, A. and Efstathiou, A.: Earthquake Recurrence Intervals in Complex Seismogenetic Systems, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-11871, https://doi.org/10.5194/egusphere-egu2020-11871, 2020.

D1733 |
Zhenguo Zhang, Wenqiang Zhang, Jiankuan Xu, and Xiaofei Chen

Earthquakes recorded by instruments obey the Gutenberg-Richter law, which expresses the dependence of earthquake frequency on magnitude. The Gutenberg-Richter law reveals the physics of earthquake sources and is important for analyzing the seismicity of active fault systems and vulnerable areas. Based on rupture dynamics, we obtain, for the first time, a power-law distribution for the relationship between earthquake frequency and magnitude. The weight of an earthquake depends on its rupture area and recurrence interval. Our derived frequency-magnitude distribution agrees with the Gutenberg-Richter law as summarized from global and regional earthquake catalogs. This work provides a new way to understand the Gutenberg-Richter law and the physics of earthquake sources.

How to cite: Zhang, Z., Zhang, W., Xu, J., and Chen, X.: The Gutenberg-Richter law based on rupture dynamics, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-12534, https://doi.org/10.5194/egusphere-egu2020-12534, 2020.

D1734 |
Pablo Iturrieta, Danijel Schorlemmer, Fabrice Cotton, José Bayona, and Karina Loviknes

In earthquake forecasting, smoothed-seismicity models (SSMs) are based on the assumption that previous earthquakes serve as a guideline for future events. Different kernels are used to spatially extrapolate each moment tensor from a seismic catalog into a moment-rate density field. Nevertheless, governing mechanical principles remain absent throughout model conception, even though crustal stress is responsible for moment release, mainly on pre-existing faults. Furthermore, a recently developed SSM by Hiemer et al., 2013 (SEIFA) incorporates active-fault characterization and deformation rates stochastically, so that a geological estimate of moment release can also be taken into account. Motivated by this innovative approach, we address the question: how representative is the stochastic temporal/spatial averaging of SEIFA of the long-term crustal deformation and stress? In this context, physics-based modeling provides insights into the energy, stress, and strain-rate fields within the crust due to the discontinuities found therein. In this work, we aim to understand the temporal window required for SEIFA to mechanically satisfy its underlying assumption of stationarity. We build various SEIFA models from different spatio-temporal subsets of a catalog and confront them with a physics-based model of long-term seismic energy/moment rate. Subsequently, we develop a method based on the moment-balance principle and information theory to compare the spatial similarity between these two types of models. The models are built from two spatially conforming layers of information: a complete seismic catalog and a computerized 3-D geometry of mapped faults along with their long-term slip rates. SEIFA uses both datasets to produce a moment-rate density field, from which a forecast can later be derived. A simple physics-based model, the steady-state Boundary Element Method (BEM), is used as proof of concept.
It uses the 3-D fault geometry and slip rates to calculate the long-term interseismic energy rate and the elastic stress and strain tensors accumulated both along the faults and within the crust. The SHARE European Earthquake Catalog and the European Database of Seismogenic Faults are used as a case study, constrained to crustal faults and different spatio-temporal subsets of the Italy region in the 1000-2006 time window. The moment-balance principle is analyzed in terms of its spatial distribution by calculating the spatial mutual information (SMI) between both models as a similarity measure. Finally, by using the SMI as a minimization function, we determine the optimal catalog time window for which the moment rate predicted by the SSM is closest to the geomechanical prediction. We emphasize that, regardless of the usefulness of the stationarity assumption in seismicity forecasting, we provide a simple method that places a physical constraint on data-driven seismicity models. This framework may be used in the future to combine seismicity data and geophysical modeling for earthquake forecasting.
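
The SMI similarity measure can be sketched as a histogram-based mutual-information estimator on two co-registered scalar maps. The bin count and the random fields below are illustrative assumptions, not the study's configuration.

```python
import numpy as np

def spatial_mutual_information(field_a, field_b, bins=16):
    """Mutual information (nats) between two co-registered scalar fields.

    Each field (e.g. a moment-rate density map) is flattened, and the
    joint histogram of the two serves as an empirical joint pmf.
    """
    joint, _, _ = np.histogram2d(field_a.ravel(), field_b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)   # marginal of field_a
    py = pxy.sum(axis=0, keepdims=True)   # marginal of field_b
    mask = pxy > 0
    return np.sum(pxy[mask] * np.log(pxy[mask] / (px @ py)[mask]))

# Identical maps share maximal information; independent maps share almost none
field = np.random.default_rng(0).normal(size=(50, 50))
mi_self = spatial_mutual_information(field, field)
mi_other = spatial_mutual_information(field, np.random.default_rng(1).normal(size=(50, 50)))
```

Used as a minimization target, a higher SMI marks the catalog subset whose smoothed moment-rate map most resembles the geomechanical one.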

How to cite: Iturrieta, P., Schorlemmer, D., Cotton, F., Bayona, J., and Loviknes, K.: A physical constraint on smoothed-seismicity models and the stationary seismicity assumption in long-term forecasting, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-17890, https://doi.org/10.5194/egusphere-egu2020-17890, 2020.

D1735 |
Paolo Gasperini, Emanuele Biondini, Antonio Petruccelli, Barbara Lolli, and Gianfranco Vannucci

In some recent works it has been hypothesized that the slope (b-value) of the magnitude-frequency distribution of earthquakes may be related to the differential stress inside the crust. In particular, it has been observed that low b-values are associated with high stress values and therefore with a high probability of occurrence of strong seismic shocks. In this paper we formulate a predictive hypothesis based on temporal variations of the b-value. We test and optimize this hypothesis retrospectively using the homogenized Italian instrumental seismic catalog (HORUS) from 1995 to 2018. A comparison is also made with a similar predictive hypothesis based on the occurrence of strong foreshocks.
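
A moving-window b-value series of the kind such a predictive hypothesis builds on can be sketched as follows. The window length and the choice of the Aki estimator are illustrative assumptions, not details from the abstract.

```python
import numpy as np

def rolling_b_value(mags, mc, window=200):
    """b-value time series from a sliding window of `window` events.

    Applies the Aki maximum-likelihood estimator to each window of
    chronologically ordered magnitudes; low excursions of the resulting
    series are candidate precursors under a b-value-based hypothesis.
    """
    m = mags[mags >= mc]
    b = np.empty(len(m) - window + 1)
    for i in range(len(b)):
        b[i] = np.log10(np.e) / (m[i:i + window].mean() - mc)
    return b

# Synthetic catalogue with constant true b = 1: the series hovers near 1
rng = np.random.default_rng(7)
mags = 2.0 + rng.exponential(scale=1 / np.log(10), size=2000)
b_series = rolling_b_value(mags, mc=2.0)
```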


How to cite: Gasperini, P., Biondini, E., Petruccelli, A., Lolli, B., and Vannucci, G.: Short-term retrospective forecasting of earthquakes based on temporal variations of the b-value of the magnitude-frequency distribution, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-20053, https://doi.org/10.5194/egusphere-egu2020-20053, 2020.

D1736 |
Robert Churchill, Maximilian Werner, Juliet Biggs, and Ake Fagereng

Aftershock sequences following large tectonic earthquakes exhibit considerable spatio-temporal complexity and suggest causative mechanisms beyond co-seismic, elasto-static Coulomb stress changes in the crust. Candidate mechanisms include dynamic triggering and postseismic processes such as viscoelastic relaxation, poroelastic rebound and aseismic afterslip, which has garnered particular interest recently. Aseismic afterslip – whereby localized frictional sliding within velocity-strengthening rheologies acts to redistribute lithospheric stresses in the postseismic phase – has been suggested by numerous studies to exert dominant control on aftershock sequence evolution, including productivity, spatial distribution and temporal decay.

As evidence is based overwhelmingly on individual case-study analysis, we wish to systematically compare key metrics of aseismic afterslip and the corresponding aftershock sequences to investigate this relationship. We specifically look for any empirical relationship between the seismic-equivalent moment of aseismic afterslip episodes and the corresponding aftershock sequence productivity. We first compile published afterslip models into a database containing moment estimates over varying time periods, as well as spatial distributions, temporal decays and modelling methodology as a supplementary resource. We then identify the corresponding aftershock sequence from the globally comparable USGS PDE catalog. As expected, coseismic moment exerts an obvious control on both afterslip moment and aftershock productivity – an effect we control for by normalising by mainshock moment and expected productivity (the Utsu-Seki law), respectively. Preliminary results suggest broad variability of both afterslip moment and aftershock productivity, with no obvious control of afterslip on aftershocks beyond the scaling with mainshock size, including when separated by mainshock mechanism or region. As this study is insensitive to spatial and temporal distributions, we cannot rule out the potential influence afterslip exerts on these, but we find no evidence that afterslip drives the overall productivity of aftershock sequences.
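
The productivity normalisation described above can be sketched with a Utsu-Seki-type scaling. The productivity exponent alpha and the reference magnitude are illustrative choices, not values from the study.

```python
def relative_productivity(n_aftershocks, m_mainshock, alpha=1.0, m_ref=7.0):
    """Aftershock count normalised by the productivity expected from
    mainshock size, assuming Utsu-Seki-type scaling N ~ 10**(alpha * M).

    Dividing out the expected count removes the trivial dependence on
    mainshock magnitude, so sequences of different mainshock sizes can
    be compared on a common scale.
    """
    expected = 10.0 ** (alpha * (m_mainshock - m_ref))
    return n_aftershocks / expected
```

For example, 100 aftershocks after an M 7.0 and 10 aftershocks after an M 6.0 yield the same relative productivity under alpha = 1, so neither sequence is anomalously productive relative to the other.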

How to cite: Churchill, R., Werner, M., Biggs, J., and Fagereng, A.: The role of afterslip in driving aftershock sequences, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-20235, https://doi.org/10.5194/egusphere-egu2020-20235, 2020.