Over the last two decades the number of high-quality seismic instruments installed around the world has grown exponentially, and it will probably continue to grow in the coming decades. This has led to a dramatic increase in the volume of available seismic data and has exposed the limits of standard routine seismic analysis, which is often performed manually by seismologists. Exploiting this massive amount of data is a challenge that can be overcome with new-generation, fully automated and noise-robust seismic processing techniques. In recent years, waveform-based detection and location methods have grown in popularity, and their application has dramatically improved seismic monitoring capability. Moreover, machine learning techniques, dedicated methods for data-intensive applications, are showing promising results in seismicity characterization, opening new horizons for the development of innovative, fully automated and noise-robust seismic analysis methods. Such techniques are particularly useful for data sets characterized by large numbers of weak events with low signal-to-noise ratio, such as those collected in induced seismicity, seismic swarm and volcanic monitoring operations. This session aims to bring to light new methods that can be applied to large data sets, either retroactively or in (near) real time, to characterize seismicity (i.e., perform detection, location, magnitude and source mechanism estimation) at different scales and in different environments. We thus encourage contributions that demonstrate how the proposed methods help improve our understanding of earthquake and/or volcanic processes.

- Invited presentation by Dr. Sebastian Heimann (GFZ Potsdam, Germany) on probabilistic characterization of earthquake sources from a combination of seismic and geodetic observations

Convener: Nima Nooshiri | Co-conveners: Natalia Poiata, Francesco Grigoli, Simone Cesca, Federica Lanza
Attendance: Mon, 04 May, 08:30–10:15 (CEST)


Chat time: Monday, 4 May 2020, 08:30–10:15

D1676 |
Dario Baturan, Bruce Townsend, and Andrew Moores

Traditionally, the ability to study seismic phenomena depends on both the available hardware and the time required to process the data into a research-grade catalogue. Consequently, shortages in either of these resources constrain the scope of studies available to the research scientist. This is becoming especially challenging as networks become larger and more dense, and as the community moves towards Large-N networks and arrays. We will look at alternative solutions that address these resource constraints and open the scientist up to a broader field of study.

Ownership of equipment and waiting in a queue for loan-pool assets are the two most common ways of acquiring the hardware necessary to conduct a scientific study. Further, once the data have been collected, a good deal of time is spent processing them to produce a catalogue before the scientific inquiry can begin.

There is now an alternative model for acquiring and processing data in seismology that shortens the time and effort necessary to produce a research-grade catalogue. We will demonstrate how we can customize acquisition arrays to meet experimental goals and apply proven processing models and AI techniques to deliver a bespoke research-grade catalogue at a fraction of the time and cost of traditional acquisition and processing methods. This removes several of the challenging aspects of running an experiment, enabling researchers to get straight to their science and shortening the time to publication.

How to cite: Baturan, D., Townsend, B., and Moores, A.: A new operational model that increases experiment diversity and shortens time to publication for research Seismology, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-12535, https://doi.org/10.5194/egusphere-egu2020-12535, 2020

D1677 |
Kevin Mayeda, Rengin Gok, Justin Barno, William Walter, and Jorge Roman-Nieves

The coda magnitude method of Mayeda and Walter (1996) provides stable source spectra and moment magnitudes (Mw) for local to regional events from as few as one station that are virtually insensitive to source and path heterogeneity. The method allows for a consistent measure of Mw over a broad range of event sizes rather than relying on empirical magnitude relationships that attempt to tie various narrowband relative magnitudes (e.g., ML, MD, mb, etc.) to absolute Mw derived from long-period waveform modeling. The use of S-coda and P-coda envelopes has been well documented over the past several decades for stable source spectra, apparent stress scaling, and hazard studies. However, until recently the method required an extensive calibration effort, and routine operational use was limited to proprietary US NDC software. The Coda Calibration Tool (CCT) stems from a multi-year collaboration between US NDC and LLNL scientists with the goal of developing a fast and easy Java-based, platform-independent coda envelope calibration and processing tool. We present an overview of the tool and the advantages of the method, along with several calibration examples, all of which are freely available to the public via GitHub (https://github.com/LLNL/coda-calibration-tool). Once a region is calibrated, the tool can be used in routine processing to obtain stable source spectra and associated source information (e.g., Mw, radiated seismic energy, apparent stress, corner frequency, source discrimination on event type and/or depth). As more events are recorded or new stations added, simple updates to the calibration can be performed. All calibration and measurement information (e.g., site and path correction terms, raw and measured amplitudes, errors, etc.) is stored within an internal database that can be queried for future use. We welcome future collaboration, testing and suggestions by the geophysical community.

How to cite: Mayeda, K., Gok, R., Barno, J., Walter, W., and Roman-Nieves, J.: The Coda Calibration and Processing Tool: Java-Based Freeware for the Geophysical Community, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-5874, https://doi.org/10.5194/egusphere-egu2020-5874, 2020

D1678 |
Alon Ziv and Itzhak Lior

A generic approach for real-time magnitude and stress-drop estimation is introduced, based on the omega-squared model (Brune, 1970) and on results from Lior and Ziv (2018). This approach leads to approximate expressions for earthquake magnitude and stress drop as functions of epicentral distance and ground-motion root-mean-squares (rms). Because the rms of the ground motion (acceleration, velocity and displacement) may be calculated directly from the seismogram in the time domain, the use of this approach for automated real-time processing is rather straightforward. Once the seismic moment and stress drop are known, they may be plugged into the ground motion prediction equations (GMPE) of Lior and Ziv (2018) to map the predicted peak shaking.
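
The time-domain rms measurement the approach rests on is easy to sketch. The snippet below uses a synthetic trace and made-up parameters, and does not reproduce the actual magnitude and stress-drop expressions of Lior and Ziv (2018); it only illustrates that the rms observables are available sample-by-sample, without a Fourier transform:

```python
import numpy as np

def ground_motion_rms(trace):
    """Root-mean-square of a ground-motion trace, computed directly
    in the time domain (no Fourier transform needed in real time)."""
    return float(np.sqrt(np.mean(np.asarray(trace) ** 2)))

# Synthetic velocity seismogram: a decaying sinusoid sampled at 100 Hz.
dt = 0.01
t = np.arange(0.0, 20.0, dt)
vel = np.exp(-0.2 * t) * np.sin(2.0 * np.pi * 1.5 * t)

disp = np.cumsum(vel) * dt        # displacement by simple time integration
v_rms = ground_motion_rms(vel)    # rms of velocity
d_rms = ground_motion_rms(disp)   # rms of displacement
```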

This method is generic in the sense that it is readily implementable in any tectonic environment, without having to go through a calibration phase. The potential of these results for automated early warning applications is demonstrated using a large dataset of about 6000 seismograms recorded by strong-motion and broadband velocity sensors from different tectonic environments. Optimal real-time performance is achieved by integrating magnitude and stress drop estimates into an evolutionary algorithm. The result of such an evolutionary calculation for the Mw 7.1 Ridgecrest earthquake indicates close agreement with the true magnitude.

How to cite: Ziv, A. and Lior, I.: Generic Source Parameter Determination for Earthquake Early Warning: Theory, Observations and Implications for the Mw 7.1 Ridgecrest Earthquake, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-15078, https://doi.org/10.5194/egusphere-egu2020-15078, 2020

D1679 |
Marisol Monterrubio-Velasco, José Carlos Carrasco-Jimenez, Otilio Rojas, Juan Esteban Rodríguez, and Josep de la Puente

Earthquake and tsunami early warning systems and post-event urgent computing simulations require fast and accurate quantification of earthquake parameters such as magnitude, location and focal mechanism (FM). Methodologies to estimate earthquake location and magnitude are well established and in place. However, automatic FM solutions are not always provided by operational institutions and are, in some cases, available only after a time-consuming inversion of the waveforms needed to determine the moment tensor components. This precludes urgent seismic simulations, which aim at providing ground-shaking maps under severe time constraints. We propose a new strategy for fast (<60 s) determination of FMs based on historical data sets, tested in five active seismic regions: Japan, New Zealand, California, Iceland and Italy. The methodology applies the k-nearest neighbours algorithm in the spatial domain to search for the most similar FMs within the data set. In our research, we focus on moderate to large earthquakes. The comparison algorithm considers the four closest events, as well as a hypothetical event built from the median values of strike, dip and rake of the k neighbours. The validation stage uses the minimum rotation angle to measure the similarity between a pair of FMs. We identify three model parameters that could improve the statistical similarity results: the minimum number of neighbours, the threshold radius that defines the neighbouring sphere, and the magnitude threshold. Our fast methodology shows 75%–90% agreement with traditional inversion methods, depending on the particular tectonic region and data set size. Our work is a key component of an urgent computing workflow, in which the FM information will be used as input for ground motion simulations. Future work will assess the sensitivity of the resulting ground-shaking maps to FM uncertainty.
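
The spatial k-nearest-neighbours search at the heart of the method can be sketched as below. This is an illustrative Python sketch, not the authors' code: the function name, toy catalog, Cartesian coordinates and search radius are all hypothetical, and the plain median of angles ignores the 0°/360° wraparound that a full implementation must handle.

```python
import numpy as np

def knn_focal_mechanism(catalog, query_xyz, k=4, radius_km=50.0):
    """Median (strike, dip, rake) of the k nearest catalog events
    within a search radius around a query hypocentre.

    catalog   : array of shape (n, 6) -> x, y, z [km], strike, dip, rake
    query_xyz : (x, y, z) of the new event [km]
    Angles are treated as plain numbers; near the 0/360 wraparound a
    proper angular median would be required.
    """
    d = np.linalg.norm(catalog[:, :3] - np.asarray(query_xyz), axis=1)
    inside = np.where(d <= radius_km)[0]
    if inside.size < k:
        return None                      # too few neighbours -> no estimate
    nearest = inside[np.argsort(d[inside])[:k]]
    return np.median(catalog[nearest, 3:], axis=0)

# Toy catalog: four clustered thrust-like events plus one distant outlier.
cat = np.array([
    [0.0, 0.0, 10.0, 90.0, 45.0,  90.0],
    [1.0, 0.5, 11.0, 95.0, 40.0,  85.0],
    [0.5, 1.0,  9.0, 85.0, 50.0,  95.0],
    [1.5, 0.2, 10.5, 92.0, 44.0,  88.0],
    [500.0, 500.0, 10.0, 10.0, 80.0, -170.0],   # far away, ignored
])
fm = knn_focal_mechanism(cat, (0.5, 0.5, 10.0))
```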

How to cite: Monterrubio-Velasco, M., Carrasco-Jimenez, J. C., Rojas, O., Rodríguez, J. E., and de la Puente, J.: Fast acquisition of focal mechanism based on statistical analysis, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-3224, https://doi.org/10.5194/egusphere-egu2020-3224, 2020

D1680 |
Sebastian Heimann, Marius Isken, Daniela Kühn, Hannes Vasyura-Bathke, Henriette Sudhaus, Andreas Steinberg, Gesa Petersen, Marius Kriegerowski, Simon Daout, Simone Cesca, and Torsten Dahm

Seismic source and moment tensor waveform inversion is often ill-posed or non-unique if station coverage is poor or signals are weak. Three key ingredients can help in these situations: (1) probabilistic inference and global search of the full model space, (2) joint optimisation with datasets yielding complementary information, and (3) robust source parameterisation or additional source constraints. These demands lead to vast technical challenges for the performance of forward modelling and optimisation algorithms, as well as for visualisation, optimisation configuration, and management of the datasets. Implementing a high degree of automation is inevitable.

To tackle all these challenges, we are developing a sophisticated new seismic source optimisation framework, Grond. With its innovative Bayesian bootstrap optimiser, it is able to efficiently explore large model spaces, the trade-offs and the uncertainties of source parameters. The program is highly flexible with respect to the adaptation to specific source problems, the design of objective functions, and the diversity of empirical datasets.

It uses an integrated, robust waveform data processing, and allows for interactive visual inspection of many aspects of the optimisation problem, including visualisation of the result uncertainties. Grond has been applied to CMT moment tensor and finite-fault optimisations at all scales, to nuclear explosions, to a meteorite atmospheric explosion, and to volcano-tectonic processes during caldera collapse and magma ascent. Hundreds of seismic events can be handled in parallel given a single optimisation setup.

Grond can be used to simultaneously optimise seismic waveforms, amplitude spectra, waveform features, phase picks, static displacements from InSAR and GNSS, and gravitational signals.

Grond is developed as an open-source package and community effort. It builds on and integrates with other established open-source packages, like Kite (for InSAR) and Pyrocko (for seismology).

How to cite: Heimann, S., Isken, M., Kühn, D., Vasyura-Bathke, H., Sudhaus, H., Steinberg, A., Petersen, G., Kriegerowski, M., Daout, S., Cesca, S., and Dahm, T.: A user-friendly probabilistic earthquake source inversion framework for joint inversion of seismic, geodetic, and gravitational signals - The Grond toolkit, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-18973, https://doi.org/10.5194/egusphere-egu2020-18973, 2020

D1681 |
Conor Bacon, Amy Gilligan, Nicholas Rawlinson, Felix Tongkul, David Cornwell, Simone Pilia, Omry Volk, and Tim Greenfield

The aim of the Northern Borneo Orogeny Seismic Survey (nBOSS) is to better understand the mechanisms driving the processes that occur in a post-subduction setting. A network of 46 seismometers was deployed across Sabah, Borneo, between March 2018 and January 2020 (22 months) in order to investigate these mechanisms using a suite of seismic imaging techniques.


Mt. Kinabalu (~4100 m) is a large granitic pluton that was emplaced between ~7.9 and 7.2 Ma. The region around the mountain experiences infrequent earthquakes, with the M6.0 Sabah earthquake in 2015 being the second largest earthquake to strike the region in the past century. This earthquake caused the loss of 18 lives and an estimated 100 million Ringgit (~€22 million) of damage to buildings, roads and infrastructure. The 2015 earthquake has highlighted the importance of improving our understanding of seismic hazards in northern Borneo. Although a network of faults striking along the spine of the Crocker Range and a complex network of faults around the Kinabalu massif have both been mapped, which of these are currently active remains poorly understood. Using data from the nBOSS seismic network, together with additional data from the Malaysian Meteorological Service, we aim to quantify and categorise the seismicity associated with this fault system.


We have used QuakeMigrate, a new, modular, open-source Python package for waveform backprojection to efficiently, automatically and robustly detect and locate microseismicity in the region around Mt. Kinabalu. We provided QuakeMigrate with continuous raw seismic data, a velocity model derived using nBOSS seismic data, and a list of station locations. A realistic estimate of the event location uncertainty, phase picks with uncertainties, and a suite of visual outputs allows for rigorous selection of real events at a sub-SNR detection threshold.
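
The backprojection principle that such a workflow builds on can be illustrated with a toy example: onset functions from each station are shifted back by the travel time predicted for every candidate source node and stacked, and coherent energy piles up at the true hypocentre. The sketch below uses a uniform velocity model and unit-spike onset functions; it is not QuakeMigrate code, and all parameters are invented.

```python
import numpy as np

# Grid of candidate source nodes and a homogeneous velocity model.
dt = 0.1                                   # sample interval [s]
vel = 5.0                                  # P velocity [km/s]
stations = np.array([[0.0, 0.0], [40.0, 0.0], [0.0, 40.0], [40.0, 40.0]])
nodes = np.array([[x, y] for x in range(0, 41, 5) for y in range(0, 41, 5)],
                 dtype=float)

# Synthetic onset functions: a unit spike at each predicted P arrival
# for a source at (20, 25) km with origin time 4 s.
true_src, t0, n = np.array([20.0, 25.0]), 4.0, 600
onsets = np.zeros((len(stations), n))
for i, s in enumerate(stations):
    tt = np.linalg.norm(true_src - s) / vel
    onsets[i, int(round((t0 + tt) / dt))] = 1.0

# Backprojection: undo each node's predicted travel time and stack.
stack = np.zeros(len(nodes))
for j, node in enumerate(nodes):
    stacked = np.zeros(n)
    for i, s in enumerate(stations):
        shift = int(round(np.linalg.norm(node - s) / vel / dt))
        stacked[:n - shift] += onsets[i, shift:]
    stack[j] = stacked.max()               # best origin-time alignment

best = nodes[np.argmax(stack)]             # recovered epicentre
```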


Using data from 7 March 2018 to 28 August 2018, we have detected and located over 1500 events, with hypocentres highly concentrated beneath the Kinabalu massif. Given that existing catalogues for the area around Mt. Kinabalu record only on the order of tens of events between 1990 and the present day, our results demonstrate that these catalogues are highly incomplete at low magnitudes and that existing tectonic and hazard models for the area therefore need to be revised.

How to cite: Bacon, C., Gilligan, A., Rawlinson, N., Tongkul, F., Cornwell, D., Pilia, S., Volk, O., and Greenfield, T.: Seismicity of the Mt. Kinabalu fault system in Sabah, Borneo, revealed using waveform backprojection, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-20783, https://doi.org/10.5194/egusphere-egu2020-20783, 2020

D1682 |
Fabrizio Bosco, Daniele Spallarossa, and Anne Deschamps

The Alpine chain marks the border between several nations, making cooperation, data sharing and coordination essential among the institutions in contiguous regions and countries that are involved in observing and managing natural hazards, such as earthquakes, affecting large portions of the territory.

As part of the Interreg Alcotra cross-border programme, one of the objectives of the RISVAL project concerns improving seismic hazard assessment and, more generally, the knowledge of seismicity in the Western Alps. In this area, Italian, French and Swiss stations operate within various interconnected national and regional networks, also sharing data with European services (e.g. EIDA). Streamed raw data are the basic type of data shared, since each institution produces its own analyses and derived products, resulting for instance in different seismic catalogs with different characteristics, including different spatio-temporal boundaries.

Furthermore, the monitoring and analysis systems have undergone technological developments over the years, so that the volume of available data grows exponentially, and the catalogs derived from near-real-time surveillance show several internal inhomogeneities across different time intervals, also reflecting the different sensitivity and subjectivity of the operators who alternate in carrying out the manual review.

There is therefore a need to process the increasingly large amounts of available data, which could be re-analyzed and updated in a homogeneous way as new developments arrive. To address this, we tested the performance of a complete automatic procedure (Scafidi et al., 2019) by re-compiling a portion (2012–2019) of the seismic catalog derived from the operating routines of the RSNI network (Regional Seismic network of Northwestern Italy), including travel-time and strong-motion parameter datasets.

The procedure, driven by a customizable set of parameters suited to the network geometry and seismicity features, relies on a multistep algorithm, which in this work we tested while skipping the initial steps concerning event detection on continuous raw data. We therefore ran it on 21391 already-detected waveform traces for 1549 events: 1) automatic P- and S-phase picking, 2) hypocenter location (using the NonLinLoc package and a 3D velocity model), 3) magnitude and strong-motion parameter calculation.

We first evaluate the results of the re-compiled catalog both in terms of the distributions of errors and other quality parameters, and in terms of travel-time-residual distributions as a function of azimuth for each station, distinguishing shorter and longer epicentral distances, in order to identify anomalies in the propagation velocity pattern.

We then compare the new catalog with manual catalogs available in the area to point out differences in the calculated source and station parameters: primarily the original RSNI catalog, confirming the reliability of the method; then the Italian national CPTI catalog by INGV; and, with a closer view of the cross-border Alpine area, the French ones (RéNaSS, Sismoazur, SISmalp).

Scafidi, D., et al. (2019). A Complete Automatic Procedure to Compile Reliable Seismic Catalogs and Travel-Time and Strong-Motion Parameters Datasets, Seismological Research Letters, Volume XX, Number XX, DOI: 10.1785/02201802

How to cite: Bosco, F., Spallarossa, D., and Deschamps, A.: Assessment of performance of an automatic procedure for a review of recent seismicity in Western Alps compiling an homogeneous and reliable catalog, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-16477, https://doi.org/10.5194/egusphere-egu2020-16477, 2020

D1683 |
Athanasios Lois, Fotis Kopsaftopoulos, Dimitrios Giannopoulos, Katerina Polychronopoulou, and Nikos Martakis

Methodologies for detecting micro-earthquakes and accurately estimating body-wave arrival times have been a topic of ongoing research over the last decades. Extracting and efficiently analysing the useful information in continuous recordings is of great importance, since it is a prerequisite for reliable interpretations. Small-magnitude seismic events, either naturally occurring or induced, have been increasingly used in a wide range of industrial fields, with applications ranging from hydrocarbon and geothermal reservoir exploration to passive seismic tomography surveys.

A great number of algorithms have been proposed and applied for seismic event detection, exploiting specific properties of seismic signals in both the time and frequency domains, with energy-based detectors (STA/LTA) being the most commonly used due to their simplicity and low computational cost. A significant obstacle in seismological identification problems lies in the fact that such processes usually suffer from a number of false alarms, which increases significantly in extremely noisy environments.
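
An energy-based STA/LTA characteristic function of the kind referred to above can be sketched in a few lines. This is an illustrative NumPy version on a synthetic trace; the window lengths and trigger threshold are arbitrary choices, not recommendations:

```python
import numpy as np

def sta_lta(x, nsta, nlta):
    """Classic STA/LTA characteristic function on the squared trace.

    Short- and long-term moving averages of the signal energy are
    computed with cumulative sums; a trigger is declared wherever the
    ratio exceeds a chosen threshold.
    """
    e = np.asarray(x, dtype=float) ** 2
    csum = np.concatenate(([0.0], np.cumsum(e)))
    sta = (csum[nsta:] - csum[:-nsta]) / nsta     # short-term average
    lta = (csum[nlta:] - csum[:-nlta]) / nlta     # long-term average
    m = min(len(sta), len(lta))                   # align window ends
    return sta[-m:] / (lta[-m:] + 1e-12)

rng = np.random.default_rng(0)
trace = 0.1 * rng.standard_normal(2000)
trace[1200:1260] += np.sin(np.linspace(0.0, 30.0, 60))  # embedded event
cf = sta_lta(trace, nsta=20, nlta=200)
triggered = bool(np.any(cf > 5.0))
```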

To that end, we propose a decision-making mechanism, independent of the applied detection algorithm, which screens the results of the detection process, minimizing false detections and providing the best possible outcome for further analysis. The approach is based on a comparison between autoregressive models estimated on isolated seismic-noise recordings and on the intervals flagged during the event identification procedure. A number of examples, based on applying the proposed scheme to real data, are presented in order to evaluate its performance. Several issues are discussed, including the isolation of seismic noise from the raw data, the estimation of the autoregressive models, and the choice of the orders of the stochastic models.
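
The autoregressive screening idea can be sketched as follows: fit an AR model to noise-only data, then measure how well a detected interval is predicted by that model — a window that fits the noise model poorly is more likely a genuine event. The sketch below uses a simple least-squares AR fit on synthetic coloured noise; it is not the authors' estimator, and every name and parameter is hypothetical.

```python
import numpy as np

def fit_ar(x, p):
    """Least-squares AR(p) fit: x[t] ~ sum_k a[k] * x[t-1-k]."""
    X = np.column_stack([x[p - k - 1:len(x) - k - 1] for k in range(p)])
    a, *_ = np.linalg.lstsq(X, x[p:], rcond=None)
    return a

def prediction_rms(x, a):
    """rms one-step-ahead prediction error of x under AR coefficients a."""
    p = len(a)
    X = np.column_stack([x[p - k - 1:len(x) - k - 1] for k in range(p)])
    return float(np.sqrt(np.mean((x[p:] - X @ a) ** 2)))

# Coloured background noise simulated as an AR(2) process.
rng = np.random.default_rng(1)
noise = np.zeros(3000)
for t in range(2, 3000):
    noise[t] = 1.2 * noise[t - 1] - 0.4 * noise[t - 2] + rng.standard_normal()

a = fit_ar(noise[:2000], p=4)            # noise model from "noise only" data
quiet = noise[2000:2500]                 # detected interval without an event
event = noise[2500:3000].copy()          # detected interval with a transient
event[200:300] += 5.0 * rng.standard_normal(100)

# The event interval fits the noise model poorly -> large prediction error.
misfit_ratio = prediction_rms(event, a) / prediction_rms(quiet, a)
```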

How to cite: Lois, A., Kopsaftopoulos, F., Giannopoulos, D., Polychronopoulou, K., and Martakis, N.: Remarks on the micro-earthquake detection problem: Refining the outcome using stochastic modeling, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-9095, https://doi.org/10.5194/egusphere-egu2020-9095, 2020

D1684 |
Yousef Rajaeitabrizi, Robabeh Salehiozoumchelouei, Luca D'Auria, and José Luis Sánchez de la Rosa

The detection of microearthquakes is an important task in various seismological applications such as volcano seismology, induced seismicity, and mining safety. In this work we have developed a novel technique to improve the quality and efficiency of STA/LTA-based detection of microearthquakes. The technique consists of several filtering stages employing an adaptive spectral subtraction method, which greatly improves the signal-to-noise ratio.

The implemented technique consists of a preliminary band-pass filtering of the signal followed by several stages of adaptive spectral subtraction. Spectral subtraction is a non-linear filtering technique that takes the actual shape of the noise spectrum into account, achieving good filtering even where the signal and noise spectra overlap. To account for temporal variations in the background-noise spectrum, we designed an adaptive scheme. We first divide the incoming signals into short temporal windows. Each window is classified as “noise only” or “meaningful signal” (which can be either a microearthquake or any other relevant transient signal) using features such as the signal energy and the zero-crossing rate. Windows classified as “noise only” are continuously accumulated in a dynamic buffer, which allows the average noise spectrum to be estimated and updated adaptively. The procedure can be applied in successive stages to further improve the signal-to-noise ratio. It has been implemented in Python for the automatic detection of microearthquakes in both offline and near-real-time data.
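
A single stage of magnitude spectral subtraction might look like the following sketch. All data are synthetic and the window classification and adaptive buffer update of the real system are only mimicked by a fixed list of noise windows; this is an assumption-laden illustration, not the authors' implementation.

```python
import numpy as np

def spectral_subtract(window, noise_mag):
    """One stage of magnitude spectral subtraction: subtract the mean
    noise amplitude spectrum, floor negative magnitudes at zero, and
    keep the window's original phase."""
    spec = np.fft.rfft(window)
    mag = np.maximum(np.abs(spec) - noise_mag, 0.0)
    return np.fft.irfft(mag * np.exp(1j * np.angle(spec)), n=len(window))

rng = np.random.default_rng(2)
nwin = 512

# "Noise only" windows accumulated in a buffer give the average noise
# amplitude spectrum (updated adaptively in the real system).
buffer = [0.3 * rng.standard_normal(nwin) for _ in range(50)]
noise_mag = np.mean([np.abs(np.fft.rfft(w)) for w in buffer], axis=0)

# A window holding a microearthquake-like transient plus similar noise.
t = np.arange(nwin)
signal = np.exp(-0.02 * t) * np.sin(2.0 * np.pi * 0.05 * t)
window = signal + 0.3 * rng.standard_normal(nwin)
cleaned = spectral_subtract(window, noise_mag)

# A pure-noise window should lose most of its energy.
noise_in = 0.3 * rng.standard_normal(nwin)
noise_out = spectral_subtract(noise_in, noise_mag)
```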

To assess the efficiency of the approach, we compared STA/LTA-based automatic detections on the initial band-pass-filtered signal and on the spectrally subtracted signals after different stages of filtering. A notable improvement in the quality of the detection process is observed when repeated spectral subtraction stages are applied.

We applied this procedure to seismic data recorded by the Red Sísmica Canaria, managed by the Instituto Volcanológico de Canarias (INVOLCAN), on Tenerife (Canary Islands), comparing results from the proposed detection algorithm with standard approaches as well as with manual detections. We present an extensive statistical analysis of the results, determining the percentage of correct detections, novel detections, false positives and false negatives after each stage of filtering. First results show that the technique can automatically detect microearthquakes that went undetected in manual analysis.
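
Scoring detections against a reference catalog, as in such a statistical analysis, can be sketched with a simple greedy matcher. The function name, tolerance and event times below are hypothetical, for illustration only:

```python
def score_detections(auto_times, reference_times, tol=2.0):
    """Greedily match automatic detections to reference events.

    A detection within `tol` seconds of a not-yet-matched reference
    event counts as correct; unmatched detections are false positives
    and unmatched reference events are missed (false negatives).
    """
    used = [False] * len(reference_times)
    correct = 0
    for t in sorted(auto_times):
        for i, r in enumerate(reference_times):
            if not used[i] and abs(t - r) <= tol:
                used[i] = True
                correct += 1
                break
    return {"correct": correct,
            "false_positives": len(auto_times) - correct,
            "missed": len(reference_times) - correct}

stats = score_detections([10.2, 55.0, 99.9, 130.0], [10.0, 100.5, 200.0])
```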

How to cite: Rajaeitabrizi, Y., Salehiozoumchelouei, R., D'Auria, L., and Sánchez de la Rosa, J. L.: Multistage adaptive spectral subtraction of seismic signals, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-4853, https://doi.org/10.5194/egusphere-egu2020-4853, 2020

D1685 |
Tuija Luhta, Kari Komminaho, Kati Oinonen, Timo Tiira, Marja Uski, Toni Veikkolainen, and Tommi Vuorinen

The Kouvola area, part of the Vyborg rapakivi batholith in southeastern Finland, has been experiencing an intraplate earthquake swarm since December 2011. The events have magnitudes ranging from ML -1.2 to 2.8 and occur in the uppermost two kilometers of the crust. The Vyborg batholith has a long history of earthquake swarms, with macroseismic data from 1751 onwards and the first instrumentally recorded swarm in 2003–2004.

Prompted by the ongoing activity, the Institute of Seismology of the University of Helsinki (ISUH) has installed temporary seismic stations in the area to complement the stations of the Finnish National Seismic Network (FNSN). The detection threshold of the FNSN is ML 1.0, not low enough to catch the smallest earthquakes of the swarm.

Several tailored cross-correlators have been developed at the ISUH to lower the event detection threshold. These can be used to detect even very small seismic events well below the current FNSN detection threshold. The method is especially well suited to swarm events, which generate nearly identical signals due to their common origin.
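
At its core, such a cross-correlator slides a normalized cross-correlation of a known event template over the continuous data and declares a detection where the correlation is high. The sketch below is illustrative only — synthetic wavelet, arbitrary threshold, and a plain Python loop rather than an optimized implementation:

```python
import numpy as np

def normalized_cc(template, data):
    """Sliding normalized cross-correlation (Pearson correlation) of a
    template against continuous data; values lie in [-1, 1]."""
    n = len(template)
    t = (template - template.mean()) / (template.std() * n)
    out = np.empty(len(data) - n + 1)
    for i in range(len(out)):
        w = data[i:i + n]
        s = w.std()
        out[i] = 0.0 if s == 0 else float(np.dot(t, (w - w.mean()) / s))
    return out

rng = np.random.default_rng(3)
template = np.sin(np.linspace(0.0, 12.0, 80)) * np.hanning(80)

data = 0.2 * rng.standard_normal(3000)
for onset in (500, 1700):                 # two repeats of the same source
    data[onset:onset + 80] += template

cc = normalized_cc(template, data)
detections = np.where(cc > 0.8)[0]
```

Because the correlation is normalized, the threshold is independent of event amplitude, which is what lets nearly identical swarm signals be found well below the network's energy-detection threshold.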

Only the largest events of the swarm can be used to reliably calculate focal mechanisms or other event parameters. One way to use all the data is waveform clustering: event groups with nearly identical signals can be formed, allowing e.g. the calculation of composite focal mechanisms for each event cluster.

How to cite: Luhta, T., Komminaho, K., Oinonen, K., Tiira, T., Uski, M., Veikkolainen, T., and Vuorinen, T.: Kouvola earthquake swarm - using a cross-correlator to find very small events and cluster them, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-19417, https://doi.org/10.5194/egusphere-egu2020-19417, 2020

D1686 |
Konstantinos Michailos, Calum J. Chamberlain, and John Townend

The Alpine Fault is a major plate boundary oblique strike-slip fault, known to fail in large M 7-8 earthquakes, posing a significant seismic hazard to southern and central New Zealand. The central part of the Alpine Fault exhibits low seismic activity when compared to adjacent areas. We have examined the smaller-magnitude earthquake activity occurring along the central portion of the Alpine Fault using data from five temporary seismic networks from late 2008 to early 2017.

We have created the most complete and accurate earthquake catalog for the central Alpine Fault to date (9,111 earthquake locations with magnitudes ranging from ML -1.2 to 4.6). We then used the events of this catalog as templates in a matched-filter earthquake detection method to further extend the catalog. This even more comprehensive catalog will provide more definitive evidence for the observed seismicity characteristics and better insights into the fault zone’s geometry.

Taking advantage of this extensive earthquake catalog, we also aim to examine whether there are any repeating highly similar seismic signals (repeating earthquakes). These repeating earthquakes can potentially help better determine the locked and creeping sections of the Alpine Fault and possibly quantify the total amount of creep taking place with respect to seismic deformation.

How to cite: Michailos, K., Chamberlain, C. J., and Townend, J.: Waveform cross-correlation-based earthquake detection applied to microseismicity near the central Alpine Fault, New Zealand, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-5099, https://doi.org/10.5194/egusphere-egu2020-5099, 2020

D1687 |
Dragos Tataru, Natalia Poiata, and Bogdan Grecu

In September–November 2013 a seismic swarm occurred in the Galati region of southeastern Romania. The area was previously known to be characterized by low seismic activity along the major crustal faults. During the swarm, between 23 September and 5 November, over 1000 events with magnitudes (Ml) of 0.2–4.0, located at depths of 5–10 km, were detected. Despite their relatively small magnitudes, the events generated ground motions that were well felt by local people, leading to panic in the area. The proximity of active oil fields caused additional public concern.

Advanced seismic monitoring in the region started in 2013 with the deployment of mobile seismic stations immediately after the beginning of the swarm. Additionally, active seismic measurements were performed to characterize the shallow velocity structure at specific sites. Starting from July 2015, new permanent stations were installed in the area, marking the beginning of the Galati local network. The routine seismic catalog, derived from the acquired data using standard detection and location techniques, indicated that the area remains seismically active, albeit with a low rate of activity and small event magnitudes. This made it a perfect study case for developing new advanced schemes for seismic monitoring of regions with low and complex seismicity, aiming at an understanding of the phenomena underlying the 2013 seismic swarm as well as of the current seismic activity in the area.

We developed an automatic monitoring scheme based on the network-based full-waveform detection and location method BackTrackBB (Poiata et al., 2016), which exploits the coherency of statistical features of the signals recorded across the seismic network. Once extracted from the flux of continuous data, seismic events are compared against the database of previously detected events using waveform coherency, allowing potential repeaters or multiplets to be identified. The earthquake catalog produced by the system since 2017 was compared to the routine ROMPLUS catalog of NIEP, showing an increase in the number of detected events by a factor of about three. We present the details of the implementation and discuss its advantages and drawbacks.

How to cite: Tataru, D., Poiata, N., and Grecu, B.: Automatic monitoring of crustal seismic activity in Galati region of southeastern Romania using full waveform-based approach, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-21361, https://doi.org/10.5194/egusphere-egu2020-21361, 2020

D1688 |
Luca Scarabello, Tobias Diehl, Philipp Kästli, John Clinton, and Stefan Wiemer

In order to assess the fault geometry and the spatio-temporal evolution of natural and induced seismicity, high-precision (relative) micro-seismic hypocenter locations are key. From such precise relative hypocenter locations we can infer, e.g., the spatial extent of a seismic sequence, the seismogenic volume affected by stimulation procedures, and the geometries (orientation, segmentation) of potentially activated faults. Additionally, in the case of induced seismicity, the spatio-temporal evolution of seismicity (e.g. migration velocities, r-t diagrams) can be indicative of fluid-flow processes and provides first-order estimates of the hydraulic properties of the reservoir, as well as of the existence of possible hydraulic connections. Information on the spatial extent, geometries and spatio-temporal evolution of seismogenic structures can help improve the seismic hazard assessment of natural and induced seismicity in real time or near real time.

However, making prompt use of the information provided by such high-precision hypocenter locations requires relative relocations computed in near real time. This can be rather challenging, especially at the beginning of a seismic sequence, when little or no background seismicity is available for relative relocation. In addition, an automated relative relocation process requires differential times derived from precise and reliable (absolute) automatic picks as well as from waveform cross-correlation.

In this work, we present our strategy towards a near-real-time relative relocation procedure. The procedure follows the methodology described by Waldhauser 2009 (BSSA; doi:10.1785/0120080294) and combines differential times derived from automatic as well as manual picks with waveform cross-correlation measurements. Differential times of new events are inverted for relative locations with respect to a background reference catalog using the double-difference algorithm. We present results obtained with a Python-based prototype applied to natural and induced earthquake sequences. In addition, the prototype is fully implemented in a new SeisComP3 (SC3) module “scrtdd”, which allows application in a full real-time environment, using detections and locations from various existing SC3 modules (“scautoloc”, “scanloc”, “screloc”) as input for relative relocation. We outline our implementation strategy and compare the SC3 results with results derived by our software for natural and induced earthquakes monitored by dense near-fault monitoring networks in the Valais (SW Switzerland), St. Gallen (Switzerland) and the Hengill Geothermal Field (SW Iceland).
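The double-difference algorithm minimizes, for event pairs observed at common stations, the residual between observed and predicted differential travel times. A minimal sketch of that residual (hypothetical function names; a homogeneous velocity model is assumed purely for illustration, whereas the actual module follows Waldhauser 2009):

```python
import numpy as np

def travel_time(src, sta, v=5.8):
    """Straight-ray travel time (s) in a homogeneous medium, coordinates in km."""
    return np.linalg.norm(np.asarray(sta) - np.asarray(src)) / v

def dd_residuals(t_obs_i, t_obs_j, t_calc_i, t_calc_j):
    """Double-difference residual per station:
    dr = (t_i - t_j)_obs - (t_i - t_j)_calc."""
    return (np.asarray(t_obs_i) - np.asarray(t_obs_j)) - \
           (np.asarray(t_calc_i) - np.asarray(t_calc_j))
```

In the full inversion these residuals, weighted by pick and cross-correlation quality, form the right-hand side of the linearized system that is solved for relative location perturbations.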

How to cite: Scarabello, L., Diehl, T., Kästli, P., Clinton, J., and Wiemer, S.: Towards Real-Time Double-Difference Hypocenter Relocation of Natural and Induced Seismicity, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-13058, https://doi.org/10.5194/egusphere-egu2020-13058, 2020

D1689 |
Márta Kiszely, Süle Bálint, and István Bondár

Contamination of earthquake catalogues with anthropogenic events greatly complicates seismotectonic interpretation. This is especially true for areas of relatively low seismicity, such as Hungary. In the present study, we analyze the characteristics of the waveforms of earthquakes and quarry blasts that occurred in the close vicinity of the Csokako (CSKK) station between 2017 and 2019 in the Vértes Hills, Hungary.

The objective of this study was to determine the linear discrimination line between the earthquake and explosion populations. We investigated the effectiveness of P/S amplitude ratios using waveforms filtered in different frequency bands. We applied waveform cross-correlation to build correlation matrices at CSKK and performed hierarchical cluster analysis to identify event clusters. Because most of the quarry blasts were carried out with ripple-fire technology, we computed spectrograms and examined the spectral ratio between low and high frequencies and the steepness of the spectra.
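The correlation-matrix and hierarchical-clustering step can be sketched as follows; this is a generic illustration with hypothetical names (not the authors' code), using SciPy's standard agglomerative clustering on distances defined as one minus the correlation coefficient:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

def cluster_events(waveforms, cc_threshold=0.7):
    """Group events by waveform similarity: build a correlation matrix,
    convert to distances (1 - cc), and cut the dendrogram at 1 - threshold."""
    n = len(waveforms)
    cc = np.eye(n)
    for i in range(n):
        for j in range(i + 1, n):
            cc[i, j] = cc[j, i] = np.corrcoef(waveforms[i], waveforms[j])[0, 1]
    dist = 1.0 - cc
    np.fill_diagonal(dist, 0.0)
    z = linkage(squareform(dist, checks=False), method="average")
    return fcluster(z, t=1.0 - cc_threshold, criterion="distance")
```

Events in the same cluster receive the same integer label, so multiplets from a common quarry end up grouped together.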

Overall, the classes of earthquakes and quarry blasts separated well from each other when combining the amplitude ratio, waveform similarity and the different spectral methods. We created a set of master events for individual quarries to run correlation detectors on past waveforms and to identify explosions from the analyzed quarries that had been misclassified as earthquakes in the annual Hungarian National Seismological Bulletins.

How to cite: Kiszely, M., Bálint, S., and Bondár, I.: Discrimination between earthquakes and quarry blasts in the Vertes Hills, Hungary using a correlation detector, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-7052, https://doi.org/10.5194/egusphere-egu2020-7052, 2020

D1690 |
Julian Kuehnert, Anne Mangeney, Yann Capdeville, Emmanuel Chaljub, Eleonore Stutzmann, and Jean-Pierre Vilotte

Rockfall-generated seismic signals have been shown to be of great utility for detecting and monitoring rockfall activity. Furthermore, event locations have been successfully estimated using methods that rely on the arrival times, amplitudes or polarization of the seismic signal. However, strong surface topography can significantly influence seismic wave propagation and thus bias these estimates if not taken into account correctly.

On the upside, the imprint of topography on the seismic signal can be characteristic of the source position. We show that this additional information can be used to obtain a more detailed rockfall location estimate. To do so, the seismic impulse response is modeled on a domain with 3D topography using the Spectral Element Method. Subsequently, to locate events, station energy ratios of the synthetic seismograms are compared with energy ratios of rockfall signals in a sliding time window.
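Comparing observed station energy ratios against precomputed synthetic ratios amounts to a grid search over candidate source positions. A minimal sketch under that reading (hypothetical names; the authors' synthetics come from 3D spectral element simulations, replaced here by a plain lookup table):

```python
import numpy as np

def energy_ratios(traces):
    """Per-station signal energy, normalized so the ratios are independent
    of the absolute source amplitude."""
    e = np.array([np.sum(tr ** 2) for tr in traces])
    return e / e.sum()

def locate(observed_ratios, synthetic_ratios_by_source):
    """Pick the candidate source whose synthetic energy ratios best match
    the observed ones (L2 misfit)."""
    misfits = {src: np.sum((np.asarray(r) - observed_ratios) ** 2)
               for src, r in synthetic_ratios_by_source.items()}
    return min(misfits, key=misfits.get)
```

Applied in a sliding time window, this yields a location estimate per window and hence a rough source trajectory.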

We test the method on rockfalls that occurred at the Dolomieu crater of Piton de la Fournaise, La Réunion. The sensitivity of the method to the resolution of the modeled topography and to the underlying velocity model is tested. We propose that the method can be applied for monitoring rockfall activity in a specific area with multiple seismic stations, after computing the impulse response for the corresponding topography once.

How to cite: Kuehnert, J., Mangeney, A., Capdeville, Y., Chaljub, E., Stutzmann, E., and Vilotte, J.-P.: Localization of Rockfalls at Dolomieu Crater, La Réunion, through Simulation of Seismic Waves on Real Topography, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-7103, https://doi.org/10.5194/egusphere-egu2020-7103, 2020

D1691 |
Davide Scafidi, Daniele Spallarossa, Matteo Picozzi, and Dino Bindi

Understanding the dynamics of faulting is a crucial target in earthquake source physics (Yoo et al., 2010). To study earthquake dynamics it is necessary to look at source complexity from different perspectives; in this regard, useful information is provided by the seismic moment (M0), which is a static measure of earthquake size, and the radiated seismic energy (ER), which is connected to the rupture kinematics and dynamics (e.g. Bormann & Di Giacomo, 2011a). Studying the spatial and temporal evolution of the scaling between the scaled energy (i.e., e = ER/M0) and the static measure of source dimension (M0) can provide valuable indications for understanding earthquake generation processes and for singling out precursors of stress concentrations, foreshocks and the nucleation of large earthquakes (Picozzi et al., 2019).

In the last ten years, seismology has undergone a rapid development. Evolution in data telemetry opened the new research field of real-time seismology (Kanamori, 2005), whose targets are the rapid determination of earthquake location and size, the timely implementation of emergency plans and, under favourable conditions, earthquake early warning. On the other hand, the availability of denser, high-quality seismic networks deployed near faults has made it possible to observe very large numbers of micro-to-small earthquakes, which is pushing the seismological community to look for novel big-data analysis strategies.

Large earthquakes in Italy have the peculiar characteristic of being followed, within seconds to months, by large aftershocks of magnitude similar to, or even larger than, the initial quake, demonstrating the complexity of the Apennines’ fault system (Gentili and Giovanbattista, 2017). Picozzi et al. (2017) estimated the radiated seismic energy and seismic moment from P-wave signals for almost forty of the largest-magnitude earthquakes of the 2016-2017 Central Italy seismic sequence. Focusing on S-wave signals recorded by local networks, Bindi et al. (2018) analysed more than 1400 earthquakes in the magnitude range 2.5 ≤ Mw ≤ 6.5 that occurred in the same region from 2008 to 2017 and estimated both ER and M0, from which the energy magnitude (Me) and Mw were derived, in order to investigate the impact of different magnitude scales on the aleatory variability associated with ground motion prediction equations.

In this work, building on the first steps made in this direction by Picozzi et al. (2017) and Bindi et al. (2018), we developed a novel approach for the real-time, robust estimation of the seismic moment and radiated energy of small to large magnitude earthquakes recorded at local scales. In the first part of the work, we describe the procedure for extracting from S-wave signals robust estimates of the peak displacement (PDS) and the cumulative squared velocity (IV2S). Then, exploiting a calibration data set of about 6000 earthquakes for which well-constrained M0 and theoretical ER values were available, we describe the calibration of empirical attenuation models. The coefficients and parameters obtained by calibration were then used to determine ER and M0 for a testing dataset.
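For context, M0 and ER are conventionally mapped to the moment magnitude Mw and the energy magnitude Me via the standard IASPEI relations (Hanks & Kanamori, 1979; Choy & Boatwright, 1995); the scaled energy is e = ER/M0. A minimal sketch of these conversions (not part of the authors' procedure):

```python
import math

def moment_magnitude(M0):
    """Mw from seismic moment M0 in N·m: Mw = (2/3)(log10 M0 - 9.1)."""
    return (2.0 / 3.0) * (math.log10(M0) - 9.1)

def energy_magnitude(ER):
    """Me from radiated energy ER in J: Me = (2/3)(log10 ER - 4.4)."""
    return (2.0 / 3.0) * (math.log10(ER) - 4.4)

def scaled_energy(ER, M0):
    """Dimensionless scaled energy e = ER / M0."""
    return ER / M0
```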

How to cite: Scafidi, D., Spallarossa, D., Picozzi, M., and Bindi, D.: Real-time monitoring of seismic moment and radiated energy, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-19253, https://doi.org/10.5194/egusphere-egu2020-19253, 2020

D1692 |
Maria Catania, Antonino D'Alessandro, Luca Greco, Raffaele Martorana, and Salvatore Scudero

The Italian Seismic Network (IV) consists of more than 500 stations located throughout the Italian territory.

The detection capability of the network is constrained by its location performance, which is affected by variations in seismic noise levels that depend on the characteristics of the dominant noise source. Characterizing the noise level at each station may help improve network performance, by down-weighting noisy stations so that even low-energy seismic events, sometimes hidden by high noise levels, can be detected. The main goal of this research has been to establish the characteristics (frequency content) and origin of the background seismic noise at these sites and, secondly, to assess its effect on the performance of the network.

For this purpose, we estimated the Power Spectral Density (PSD) of seismic noise, selecting only a subset of 233 stations equipped with broadband velocimeters (with a minimum period of 40 s and, in some cases, high sensitivity up to 120 s) and operating for at least three consecutive years of available data (2015-2017).

The variations of the seismic background noise have also been investigated using the corresponding Probability Density Functions (PDFs). The signal processing was carried out with the robust method proposed by McNamara and Buland (2004). In this study, the analysis was limited to the frequency band from 0.025 to 30 Hz, in accordance with the bandwidth of the seismic sensors. Four frequency bands were identified: 0.025-0.12, 0.12-1.2, 1.2-10 and 10-30 Hz. Each of these has been associated with a main type of source, in agreement with the literature.

A preliminary data analysis was carried out to understand the statistical properties of the noise power in the four identified classes, both in the space and frequency domains. By combining the PDFs of all stations, we produced a representative seismic noise model that can be considered a new noise reference for the Italian territory. Histograms have been computed for each band, for the vertical and horizontal components and their ratio. In addition, a spatial-statistical analysis was performed, showing a good correlation of the noise level with some weather conditions and anthropogenic sources. Several clustering techniques were applied to the data to identify groups of stations with similar PSD levels attributable to the same noise source. Furthermore, the noise found at the different stations was correlated with spatial data (maps of rainfall, winds, coastlines, etc.) for a better characterization of the type of source.
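The PSD/PDF approach of McNamara and Buland (2004) accumulates, for every frequency bin, a histogram of the PSD values observed over many time segments. A simplified sketch of that bookkeeping (hypothetical names; the real processing uses overlapping hour-long windows, instrument-response correction and octave-averaged periods):

```python
import numpy as np
from scipy.signal import welch

def psd_db(trace, fs):
    """One-sided PSD of a segment, in dB, dropping the zero-frequency bin."""
    f, pxx = welch(trace, fs=fs, nperseg=4096)
    return f[1:], 10.0 * np.log10(pxx[1:])

def noise_pdf(segments, fs, db_bins=np.arange(-200, 20, 1.0)):
    """Per-frequency-bin probability density of PSD levels across segments."""
    hists = None
    for seg in segments:
        f, p = psd_db(seg, fs)
        idx = np.clip(np.digitize(p, db_bins) - 1, 0, len(db_bins) - 2)
        if hists is None:
            hists = np.zeros((len(f), len(db_bins) - 1))
        for k, b in enumerate(idx):
            hists[k, b] += 1
    # normalize each frequency row so it integrates to 1 (a PDF per frequency)
    return f, db_bins, hists / hists.sum(axis=1, keepdims=True)
```

The mode of each frequency row then traces out the most probable noise level, which is what is typically compared against Peterson's low- and high-noise models.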

How to cite: Catania, M., D'Alessandro, A., Greco, L., Martorana, R., and Scudero, S.: Seismic noise analysis of broadband stations of the Italian Seismic Network by Power Spectral Density, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-20268, https://doi.org/10.5194/egusphere-egu2020-20268, 2020

D1693 |
Fakhraddin Gadirov (Kadirov), Luciano Telesca, Gulam Babayev, Gurban Yetirmishli, and Rafig Safarov

Reservoir-induced seismicity (RIS) has been studied worldwide due to its potential to damage buildings and infrastructure and, more importantly, to cause human losses. RIS is normally related to additional static loading (the weight of the water reservoir and its seasonal variations), tectonic faults, liquefaction and pore-pressure variations. The Mingechevir reservoir is located in the north-west of Azerbaijan on the Kur River. The reservoir extends from north-west to south-east along the Kur river valley for 75 km; its area is 625 km2, with an average width of 6-8 km, and its volume is 16 km3. Filling of the reservoir started in 1953. This reservoir is the largest in the Caucasus and carries a number of geo-hazards related to geodynamic and technogenic factors.

The aim of the present study is to investigate in more detail the relationship between the fluctuations of the water level and the onset of seismicity in the area around the dam, using several independent statistical methods. The temporal variations of the instrumental seismicity (0.5 ≤ ML ≤ 3.5) recorded in the Mingechevir area (Azerbaijan) between January 2010 and April 2018 and their relationship with the level variations of the water reservoir were analysed. Due to the relatively high completeness magnitude (MC = 1.6) of the seismic catalogue of the area, only 136 events were selected over a period of more than 8 years. Thus, the monthly number of events was analysed by using the correlogram-based periodogram, singular spectrum analysis (SSA) and empirical mode decomposition (EMD), which are robust against the short size of the time series.

Our results point to the following findings: 1) an annual periodicity was found in one SSA reconstructed component of the monthly number of events; 2) a quasi-annual periodicity was found in one EMD intrinsic mode function of the monthly number of earthquakes. These results support, in a statistically rigorous manner, the hypothesis that the seismicity occurring in the Mingechevir area could be triggered by the yearly cycle of the water level of the reservoir.
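The periodicity search on the monthly event counts can be illustrated with a plain periodogram (the study additionally uses SSA, EMD and a correlogram-based periodogram; the function below is a generic sketch with hypothetical names):

```python
import numpy as np
from scipy.signal import periodogram

def dominant_period(monthly_counts):
    """Dominant period (in months) of a monthly event-count time series."""
    x = np.asarray(monthly_counts, dtype=float)
    # fs = 1 sample per month; demean to suppress the zero-frequency term
    f, pxx = periodogram(x - x.mean(), fs=1.0)
    k = np.argmax(pxx[1:]) + 1  # skip the zero-frequency bin
    return 1.0 / f[k]
```

For a series driven by the annual reservoir cycle, the dominant period comes out near 12 months.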


Keywords: water reservoir, induced seismicity, water level change, Mingechevir reservoir, Azerbaijan

How to cite: Gadirov (Kadirov), F., Telesca, L., Babayev, G., Yetirmishli, G., and Safarov, R.: Relationship between water level temporal changes and seismicity in the Mingechevir reservoir (Azerbaijan), EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-1846, https://doi.org/10.5194/egusphere-egu2020-1846, 2019