In the last two decades the number of high-quality seismic instruments installed around the world has grown exponentially and will probably continue to grow in the coming decades. This has led to a dramatic increase in the volume of available seismic data and has exposed the limits of standard routine seismic analysis, which is often performed manually by seismologists. Exploiting this massive amount of data is a challenge that can be met by using new-generation, fully automated and noise-robust seismic processing techniques. In recent years, waveform-based detection and location methods have grown in popularity, and their application has dramatically improved seismic monitoring capabilities. Moreover, machine learning techniques, which are dedicated methods for data-intensive applications, are showing promising results in seismicity characterization, opening new horizons for the development of innovative, fully automated and noise-robust seismic analysis methods. Such techniques are particularly useful when working with data sets characterized by large numbers of weak events with low signal-to-noise ratio, such as those collected in induced seismicity, seismic swarm and volcanic monitoring operations. This session aims to bring to light new methods, as well as optimizations of existing approaches, that make use of High Performance Computing resources (CPU, GPU) and can be applied to large data sets, either retroactively or in (near) real-time, to characterize seismicity (i.e. perform detection, location, magnitude and source mechanism estimation) at different scales and in different environments. We thus encourage contributions that demonstrate how the proposed methods help improve our understanding of earthquake and/or volcanic processes.
vPICO presentations: Wed, 28 Apr
Conventionally, storing and exchanging numerical data has relied mainly on binary data files in compressed form. In this era of Big Data and machine learning systems, and with the accumulation of data of different forms and types, it is important to find an alternative way of handling data. Binary data are software-dependent: their content and type cannot be inspected without accessing the data through the proper software, and they offer no built-in encryption. To solve this, we propose a new concept that handles digital data in a descriptive, encrypted, compressed form that can also be previewed. The idea is to pack the binary bits into a bitmap image using a specific coding scheme. This approach employs the Steim scheme as the primary compression tool together with a 128-bit encryption method, and then packs the encrypted codes into a WebP image file. The WebP image has the advantage of being an independent, web-friendly, and highly compressed file. In order to make the file describe its contents, we reserved some pixels as coded descriptive pixels. In this way, the packed data reveals its contents and type during image preview.
Our tests show that the Data-In-Image format, even when encrypted, occupies less storage space than other image formats, and can be easily handled, stored, and shared safely through clouds and devices at lower cost. For seismic data, the size of the WebP image is ~20% of the corresponding binary size, with a bit rate of ~5.6 b/s, which is smaller than that of the Steim form (27% and 8.9 b/s, respectively). Regarding compression speed, the code compresses data at a rate of ~11,118 samples/s, or ~44 Kbytes/s on average.
In addition, the data image can be digitally scanned and, with some modifications, accessed remotely like a quick-response (QR) code, which is not possible in binary form. Moreover, the descriptive pixels in the image enable smart tools for archiving and classifying data using machine learning and recognition algorithms.
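The descriptive-pixel packing idea can be sketched in a few lines. This is a hypothetical illustration, not the authors' actual coding scheme: the header layout and type code are invented, and the Steim compression and encryption steps are omitted.

```python
# Hypothetical sketch of the "descriptive pixel" idea: payload bytes are laid
# out row-major in a square grayscale pixel grid, with the first pixels
# reserved as a coded header (here: 1 type-code byte + 4 length bytes).
# The real scheme additionally applies Steim compression and 128-bit
# encryption before packing, which this sketch skips.
import math

HEADER_BYTES = 5  # illustrative choice, not the authors' layout

def pack_to_pixels(payload: bytes, type_code: int) -> list[list[int]]:
    header = bytes([type_code]) + len(payload).to_bytes(4, "big")
    data = header + payload
    side = math.isqrt(len(data) - 1) + 1          # smallest square that fits
    padded = data + bytes(side * side - len(data))  # zero-pad the last row(s)
    return [list(padded[r * side:(r + 1) * side]) for r in range(side)]

def unpack_from_pixels(pixels: list[list[int]]) -> tuple[int, bytes]:
    flat = bytes(v for row in pixels for v in row)
    type_code = flat[0]
    length = int.from_bytes(flat[1:HEADER_BYTES], "big")
    return type_code, flat[HEADER_BYTES:HEADER_BYTES + length]
```

A viewer that knows the header convention can then report the data type directly from the pixel values, which is the "preview" property the abstract describes.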
How to cite: Abdelwahed, M.: Data-In-Image: a novel concept for data manipulation and encryption implementation, EGU General Assembly 2021, online, 19–30 Apr 2021, EGU21-309, https://doi.org/10.5194/egusphere-egu21-309, 2021.
In recent decades, the discovery of seismic signals other than classical earthquakes has changed our understanding of the dynamic process of lithosphere fracturing, with implications for seismic monitoring. These signals are hypothesized to be generated by slow slip on faults and/or by the motion of fluids in the crust. The earthquakes that occur in the Azores archipelago are thought to result from the interaction between a tectonic triple junction and a low-velocity (possibly hot) anomalous mantle. Although most of the seismic activity in this region is tectonic, there is also evidence of seismic activity related to hydrothermal and magmatic activity, which makes the Azores region a privileged natural observatory for studying different types of seismic signals. In this work we focus on the spatio-temporal evolution of the February 2018 seismic sequence that occurred on the island of São Miguel. We will carry out detection and preliminary location of seismic events using Lassie, an open-source software for earthquake detection. We will also perform waveform similarity and clustering analysis to understand the detailed spatio-temporal evolution of the crisis.
This work is funded by FCT through projects UIDB/50019/2020 – IDL and PTDC/CTAGEF/6674/2020 (RESTLESS).
How to cite: Soares, A., Custodio, S., and Cesca, S.: Tools for monitoring the spatio-temporal evolution of seismic sequences: An application to the Azores triple junction, EGU General Assembly 2021, online, 19–30 Apr 2021, EGU21-16378, https://doi.org/10.5194/egusphere-egu21-16378, 2021.
A novel geotomography technique has been applied to the epicentral area around Skopje, the capital of Macedonia, using selected earthquakes that occurred over a period of 57 years and were recorded on temporary and permanent seismograph stations. This study tests the tomography method for the first time in the investigation of the crustal shape and structures in our tectonic environment, using specially designed datasets covering the 1964-1967 and 2016-2020 periods.
In the initial phase, the analysis shows the potential of geotomography for revealing detailed velocity perturbations in the lithosphere. The events are then relocated in the 3-D models, and new cross-sections of the crust are produced by a simultaneous approach. These images can help constrain the velocity-depth relationship and thus contribute towards a redefinition of the earthquake zones. The results are discussed in terms of the general stress and seismic regime and their temporal changes.
A better understanding of the seismicity and tectonic processes in the Skopje region will lead to an overall improvement of earthquake hazard assessment at the local and national levels, as well as further integration into research programs with other geophysical methods.
How to cite: Sinadinovski, C., Pekevski, L., Cernih, D., Drogreska, K., and Najdovska, J.: Seismicity Detection in Skopje Region Using Tomographic Methods and 3-D Modelling, EGU General Assembly 2021, online, 19–30 Apr 2021, EGU21-6950, https://doi.org/10.5194/egusphere-egu21-6950, 2021.
Earthquake location is a primary function of volcano observatories worldwide, and the resulting catalogs of seismicity are integral to interpretations and forecasts of volcanic activity. Ensuring earthquake location accuracy is therefore of critical importance. However, accurate earthquake locations require accurate velocity models, which are not always available. In addition, the difficulty of applying traditional velocity modeling methods often means that earthquake locations at volcanoes are computed using velocity models that are not specific to the volcano in question.
Traditional linearized methods that jointly invert for earthquake locations, velocity structure, and station corrections depend critically on having reasonable starting values for the unknown parameters, which are then iteratively updated to minimize the data misfit. However, these deterministic methods are susceptible to local minima and divergence, issues exacerbated by sparse seismic networks and/or poor data quality common at volcanoes. In cases where independent prior constraints on local velocity structure are not available, these methods may result in systematic errors in velocity models and hypocenters, especially if the full range of possible starting values is not explored. Furthermore, such solutions depend on subjective choices for model regularization and parameterization.
In contrast, Bayesian methods promise to avoid all these pitfalls. Although these methods traditionally have been difficult to implement due to additional computational burdens, the increasing use and availability of High-Performance Computing resources mean widespread application of these methods is no longer prohibitively expensive. In this presentation, we apply a Bayesian, hierarchical, trans-dimensional Markov chain Monte Carlo method to jointly solve for hypocentral parameters, 1D velocity structure, and station corrections using data from monitoring networks of varying quality at several volcanoes in the U.S. and South America. We compare the results with those from a more traditional deterministic approach and show that the resulting velocity models produce more accurate earthquake locations. Finally, we chart a path forward for more widespread adoption of the Bayesian approach, which may improve catalogs of volcanic seismicity at observatories worldwide.
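The Metropolis core of such a sampler can be sketched in a few lines. This is a fixed-dimension toy with a homogeneous velocity, not the hierarchical, trans-dimensional sampler described above; the station geometry, noise level and priors are invented for illustration.

```python
# Toy Bayesian hypocenter + velocity sampling with random-walk Metropolis.
# The abstract's method is hierarchical and trans-dimensional and jointly
# solves for a layered 1D model and station corrections; this sketch only
# shows the basic MCMC machinery on an invented 2-D problem.
import math, random

random.seed(0)
stations = [(0.0, 0.0), (10.0, 0.0), (20.0, 0.0), (5.0, 0.0)]   # x, y in km
true_x, true_z, true_v = 8.0, 6.0, 5.5                          # km, km, km/s
obs = [math.hypot(sx - true_x, true_z) / true_v for sx, _ in stations]  # P times

def log_likelihood(x, z, v, sigma=0.05):
    """Gaussian misfit between observed and predicted P arrival times."""
    pred = [math.hypot(sx - x, z) / v for sx, _ in stations]
    return -sum((o - p) ** 2 for o, p in zip(obs, pred)) / (2 * sigma ** 2)

def metropolis(n_steps=20000, step=0.3):
    x, z, v = 0.0, 1.0, 4.0                  # arbitrary starting model
    ll = log_likelihood(x, z, v)
    samples = []
    for _ in range(n_steps):
        xp = x + random.gauss(0, step)
        zp = abs(z + random.gauss(0, step))                   # keep depth positive
        vp = min(max(v + random.gauss(0, step), 2.0), 9.0)    # uniform prior bounds
        llp = log_likelihood(xp, zp, vp)
        if math.log(random.random()) < llp - ll:              # Metropolis acceptance
            x, z, v, ll = xp, zp, vp, llp
        samples.append((x, z, v))
    return samples[n_steps // 2:]                             # discard burn-in

posterior = metropolis()
```

The posterior samples then characterize the joint uncertainty of location and velocity, rather than a single deterministic solution.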
How to cite: Pesicek, J., Ryberg, T., Machacca, R., and Raigosa, J.: Velocity models for routine earthquake monitoring at volcanoes: from deterministic to Bayesian, EGU General Assembly 2021, online, 19–30 Apr 2021, EGU21-3310, https://doi.org/10.5194/egusphere-egu21-3310, 2021.
The growing capability to detect earthquakes of ever smaller magnitudes demands efficient source localization, especially in seismic monitoring. This work is a step towards automatic high-resolution earthquake localization in a seismic monitoring setup that uses Distributed Acoustic Sensing (DAS) as its primary measuring technique. With DAS, the dense spatial sampling of the seismic wavefield improves both event detection and earthquake localization. The advantage of DAS is easy and cost-effective deployment compared to traditional seismic instruments (especially in boreholes). However, the single-component nature and large storage requirements of DAS data demand novel methods for efficient analysis of the recorded events.
We apply a new seismic event location method to DAS data, based on a distance-geometry problem used in biochemistry for protein structure determination (HADES1). From the distances between individual earthquakes and a seismic station, the relative distances between the events can be computed. This approach allows us to first determine the relative locations of earthquakes within a seismic cluster, and subsequently place the cluster in its correct absolute location. The technique has already been successfully applied to a single traditional seismometer. The densely spaced channels in DAS measurements allow accurate relative distance computation, but cannot constrain the azimuth of the seismic cluster. Therefore, after finding the relative locations within the cluster, the position and orientation of the cluster with respect to the fiber-optic cable are calculated by minimizing the difference between observed and calculated P- and S-wave first-arrival times, using a grid search approach (multi-event location). In this way, the absolute locations of all earthquakes in the cluster are found efficiently. We first test this DAS-adapted method on synthetics, then move towards a real-data application.
1 HADES: https://github.com/wulwife/HADES
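The second, grid-search step described above can be sketched as follows. This is a 2-D toy with homogeneous velocities and a translation-only search; the channel geometry, velocities and event positions are invented, and the rotation search is omitted for brevity. Note that, as the abstract points out for azimuth, events mirrored across a straight fiber are indistinguishable from arrival times alone, so the search here is restricted to one side of the cable.

```python
# Minimal sketch of placing an already-known relative cluster in absolute
# coordinates by grid-searching the shift that best fits P and S first
# arrivals along the fiber (homogeneous medium; rotation omitted).
import math

VP, VS = 5.0, 2.9                                     # km/s, assumed values
channels = [(x * 0.2, 0.0) for x in range(25)]        # fiber channels every 200 m
rel_events = [(0.0, 0.0), (0.3, 0.2), (-0.2, 0.4)]    # in-cluster relative positions

def arrivals(ex, ey):
    """P and S first-arrival times (origin time zero) at every channel."""
    d = [math.hypot(cx - ex, cy - ey) for cx, cy in channels]
    return [t / VP for t in d] + [t / VS for t in d]

true_shift = (1.7, 3.1)                               # synthetic "unknown" shift
observed = [arrivals(rx + true_shift[0], ry + true_shift[1]) for rx, ry in rel_events]

def locate_cluster(grid_step=0.1, extent=5.0):
    best, best_misfit = None, float("inf")
    n = int(extent / grid_step)
    for i in range(n + 1):                            # search y >= 0 only (mirror
        for j in range(n + 1):                        # ambiguity across the fiber)
            sx, sy = i * grid_step, j * grid_step
            misfit = 0.0
            for (rx, ry), obs_t in zip(rel_events, observed):
                pred = arrivals(rx + sx, ry + sy)
                misfit += sum((o - p) ** 2 for o, p in zip(obs_t, pred))
            if misfit < best_misfit:
                best, best_misfit = (sx, sy), misfit
    return best

shift = locate_cluster()
```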
How to cite: Tuinstra, K., Lanza, F., Grigoli, F., Rinaldi, A. P., Fichtner, A., and Wiemer, S.: Towards automatic microseismic cluster localization with DAS, EGU General Assembly 2021, online, 19–30 Apr 2021, EGU21-6592, https://doi.org/10.5194/egusphere-egu21-6592, 2021.
Dense seismic swarms usually show a high number of earthquakes per hour; events may overlap, and in most cases the seismic records have a low signal-to-noise ratio. As a result, the manual characterization performed by seismic and volcanic observatories can become very complicated or impossible. To solve this problem, we have developed a set of algorithms that detect events, pick their phases, and locate (both absolutely and relatively) the earthquakes associated with a known swarm.
These algorithms have been tested in two different tectonic environments: the volcano-tectonic pre-eruptive swarm of El Hierro, Spain (2011) and the tectonic seismic series of Torreperogil, Spain (2012-2013). The two crises differ mainly in the distances from the seismic stations to the hypocentres: in the case of El Hierro, the data correspond to local epicentral distances (5-20 km), while the Torreperogil series involves regional distances (10-180 km). However, both series saw a similar evolution of the seismic network: as the number of earthquakes increased, more stations were deployed and the network became denser.
To analyze these series, we used two sets of well-relocated earthquakes from the two swarms as masters, considering events manually analyzed by the National Geographic Institute (IGN) with magnitude mbLg greater than 1.5. After applying the new algorithms, we increased the number of earthquakes in the IGN seismic catalog by a factor of 4.5 for Torreperogil and 2.9 for El Hierro. Similarly, the number of picked phases for these two series increased by factors of 4.5 and 3.5, respectively.
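The matched-filtering core of such a detector can be sketched as a sliding normalized cross-correlation of a master-event template against the continuous trace, with detections declared above a threshold. The threshold and local-maximum window below are illustrative choices, not the authors' settings.

```python
# Sketch of matched-filter detection: slide a master-event template over the
# continuous trace; a detection is a local maximum of the normalized
# cross-correlation above a chosen threshold.
import math

def normalized_xcorr(template, trace):
    n = len(template)
    t_mean = sum(template) / n
    t0 = [v - t_mean for v in template]
    t_norm = math.sqrt(sum(v * v for v in t0))
    out = []
    for i in range(len(trace) - n + 1):
        w = trace[i:i + n]
        w_mean = sum(w) / n
        w0 = [v - w_mean for v in w]
        w_norm = math.sqrt(sum(v * v for v in w0)) or 1e-12  # guard flat windows
        out.append(sum(a * b for a, b in zip(t0, w0)) / (t_norm * w_norm))
    return out

def detect(template, trace, threshold=0.8):
    cc = normalized_xcorr(template, trace)
    return [i for i, c in enumerate(cc) if c >= threshold
            and c == max(cc[max(0, i - 10):i + 10])]   # simple local-maximum pick
```

Because the correlation is amplitude-normalized, the same template also detects much weaker repeats of the master event, which is how the catalogs above were densified.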
How to cite: Díaz Suárez, E. A., Domínguez Cerdeña, I. F., Del Fresno Rodríguez-Portugal, C., Cantavella Nadal, J. V., and Barco De La Torre, J.: Automatic swarm analyzer based on matched filtering algorithms: El Hierro 2011 and Torreperogil 2012-2013, EGU General Assembly 2021, online, 19–30 Apr 2021, EGU21-2672, https://doi.org/10.5194/egusphere-egu21-2672, 2021.
When an earthquake rupture is complex and ruptures of multiple fault segments contribute to the total energy release, the produced wavefield is the superposition of the signals from the individual subevents. Resolving the source complexity of large, shallow earthquakes can improve ground shaking and surface slip estimations, and thus tsunami models. The 2018 Mw 7.9 Alaska earthquake showed such complexity: according to previous studies, the rupture initiated as right-lateral strike-slip on a N-S oriented fault plane, but then jumped onto a left-lateral strike-slip fault oriented westward. Rupture complexity and the presence of multiple subevents may characterize a number of other earthquakes. However, even when individual subevents are spatially and/or temporally separated, it is very difficult to identify them from far-field recordings. In order to model complex earthquakes, we have implemented a multiple double-couple inversion scheme within Grond, a tool for the robust characterization of earthquake source parameters included in the Pyrocko software. Given the large magnitude of the target earthquake, we perform our source inversions using broadband body-wave data (P and S phases) at teleseismic distances. Our approach starts with a standard moment tensor inversion, which provides first insights into the centroid location and overall moment release. These values are then used to constrain the double-source inversion. We discuss the performance of the inversion for the Alaska earthquake using synthetic and real data. First, we generated realistic synthetic waveforms for a two-subevent source, assuming double-couple sources with the strike-slip mechanisms proposed for the Alaska earthquake.
We model the synthetic dataset using both a moment tensor and a double double-couple source, and demonstrate the stability of the double double-couple inversion, which reconstructs the two focal mechanisms, the moment ratio and the relative centroid locations of the two subevents. Synthetic tests show that the inversion accuracy can be reduced in some cases, in the presence of noisy data or when the inter-event time between subevents is short. A larger noise level affects the retrieval of the focal mechanism orientations only in some cases; in general, all parameters are well retrieved. We then test our tool using real data for the earthquake. The single-source inversion shows that the centroid is shifted by 27 s in time and 40 km towards the NE with respect to the originally assumed location retrieved from the gCMT catalogue. The subsequent double double-couple inversion resolves two subevents with right-lateral and left-lateral strike-slip focal mechanisms and Mw 7.6 and 7.8, respectively. The subevent centroids are separated by less than 40 km in space and less than 20 s in time.
How to cite: Carrillo Ponce, A., Dahm, T., Cesca, S., Tilmann, F., Babeyko, A., and Heimann, S.: Bayesian multiple rupture plane inversion to assess rupture complexity: application to the 2018 Mw 7.9 Alaska earthquake, EGU General Assembly 2021, online, 19–30 Apr 2021, EGU21-1583, https://doi.org/10.5194/egusphere-egu21-1583, 2021.
The aim of this study is to implement at the CEA a grid search moment tensor inversion scheme (GRiD MT) for the rapid detection and characterization of seismic events, in order to monitor low-to-moderate magnitude earthquakes in France. Given the heterogeneity of the seismic network and of the crustal geology over the country, we propose to use a combination of source grids focusing on specific regions. The GRiD MT approach requires extensive preliminary analysis to define the inversion parameters (velocity model, frequency band, and stations) and the grid spacing (in latitude, longitude and depth). Here, we present the advances made towards the GRiD MT implementation for France. Using recent earthquakes, such as the 2019 ML 5.4 Le Teil earthquake near Montelimar in southern France, we discuss the validity of this detection and characterization tool for routine earthquake analysis. The GRiD MT results for moderate earthquakes in the south-east region are close to those published by other agencies (USGS, IPGP, OCA, INGV) in terms of location, magnitude and focal mechanism. Nonetheless, more care is needed for the smallest events. We also discuss the precision and uncertainties in constraining the source parameters with this method, especially when the goodness of fit is used as the unique criterion to identify a potential source, and when different Earth structures are considered. Finally, we show that the GRiD MT approach may be an interesting tool for analyzing seismic events in France within only a few minutes of their occurrence. However, their low magnitude range raises some challenging questions to be answered.
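The goodness-of-fit criterion used to rank candidate grid sources in such schemes is commonly a variance reduction between observed and synthetic waveforms. A generic sketch (not the CEA implementation) of this selection step:

```python
# Variance reduction as a waveform goodness-of-fit measure, and selection of
# the grid source whose synthetics fit best. A VR of 1.0 is a perfect fit;
# values near or below 0 indicate a poor candidate.
def variance_reduction(observed, synthetic):
    """VR = 1 - sum((d - s)^2) / sum(d^2)."""
    num = sum((d - s) ** 2 for d, s in zip(observed, synthetic))
    den = sum(d * d for d in observed)
    return 1.0 - num / den

def best_grid_source(observed, candidates):
    """candidates: dicts holding precomputed synthetics for each grid point."""
    return max(candidates, key=lambda c: variance_reduction(observed, c["synthetic"]))
```

As the abstract cautions, ranking by fit alone can be misleading for small events, since noise-contaminated data may yield comparable VR values for several candidate sources.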
How to cite: Menager, M., Trilla, A., and Delouis, B.: Toward implementing a grid search moment tensor (GRiD MT) tool for the rapid detection and characterization of seismic events in Metropolitan France, EGU General Assembly 2021, online, 19–30 Apr 2021, EGU21-2000, https://doi.org/10.5194/egusphere-egu21-2000, 2021.
Automatic Moment Tensor (MT) determination for regional areas is essential for real-time seismological applications such as stress inversion, shakemap generation, and tsunami warning. In recent years, the combination of two powerful tools, SeisComP and ISOLA (Sokos and Zahradník, 2008), paved the way for the release of Scisola (Triantafyllis et al., 2016), an open-source Python-based software for automatic, real-time MT calculation of seismic events provided by SeisComP, a well-known software package. ISOLA is a widely used manual MT retrieval utility based on multiple point-source representation and iterative deconvolution; the full wavefield is taken into account, and Green's functions are calculated with the discrete wavenumber method as implemented in the Axitra Fortran code (Cotton and Coutant, 1997). The MT of each subevent is found by least-squares minimization of the misfit between observed and synthetic waveforms, while the position and time of the subevents are optimized through a grid search. Scisola monitors SeisComP and passes the event information, the respective waveform data and the station information to ISOLA for the Green's function and MT computation. Gisola is a highly evolved version of Scisola, oriented towards High-Performance Computing. Unlike Scisola, the new program applies enhanced algorithms for waveform data filtering via quality metrics such as signal-to-noise ratio, waveform clipping, data and metadata inconsistency, long-period ("mouse") disturbances, and station evaluation based on comparing each station's daily Power Spectral Density (PSD) against reference metrics for the frequency bands of interest, featuring a CPU multiprocessing implementation for faster calculations. For the Green's function computation, Gisola uses a newer version of Axitra, highlighting the power of simultaneous processing in the CPU domain.
Likewise, the inversion code has been substantially improved by exploiting the performance of GPU-based multiprocessing (with an automatic fallback to CPU-based multiprocessing when no GPU hardware is present) and by unifying sub-programs to minimize I/O operations. In addition, a fine-grained, adjustable 4D (space-time) source grid search is available for more accurate MT solutions. Moreover, Gisola expands its seismic data input resources by connecting to FDSN Web Services. Furthermore, the new software can export its results in the well-known QuakeML standard, and in this way provide clients such as SeisComP with MT results attached to the seismic event information. Finally, the operator has full control of all calculation aspects through an extensive configuration adapted to regional data. The program can be installed on any computer running a Linux OS with access to FDSN Web Services, and the source code will be open and free to the scientific community.
Cotton, F. and Coutant, O. (1997). Dynamic stress variations due to shear faults in a plane-layered medium, Geophysical Journal International, 128, 676–688, doi: 10.1111/j.1365-246X.1997.tb05328.x.
Sokos, E. N., and J. Zahradník (2008). ISOLA a FORTRAN code and a MATLAB GUI to perform multiple-point source inversion of seismic data, Comp. Geosci. 34, no. 8, 967–977, doi: 10.1016/j.cageo.2007.07.005.
Triantafyllis, N., Sokos, E., Ilias, A., & Zahradník, J. (2016). Scisola: automatic moment tensor solution for SeisComP3. Seismological Research Letters, 87(1), 157-163, doi: 10.1785/0220150065.
How to cite: Triantafyllis, N., Venetis, I., Fountoulakis, I., Pikoulis, E.-V., Sokos, E., and Evangelidis, C.: Gisola: Real-Time Moment Tensor computation optimized for multicore and manycore architectures, EGU General Assembly 2021, online, 19–30 Apr 2021, EGU21-15888, https://doi.org/10.5194/egusphere-egu21-15888, 2021.
Despite advanced seismological methods, source characterization of micro-seismic events remains challenging, since current inversion and modelling of high-frequency waveforms are complex and time consuming. For real-time applications such as induced-seismicity monitoring, these methods are too slow to deliver true real-time information because they require repeated evaluation of an often computationally expensive forward operation. Moreover, because of the low amplitude and high-frequency content of the recorded micro-seismic signals, routine inversion procedures can become unstable, and manual parameter tuning is often required. Real-time, automatic source inversion is therefore difficult and not standard. A promising alternative to current methods for rapid source parameter inversion is a deep-learning neural network model calibrated on a data set of past and/or possible future observations. Such a data-driven model, once trained, offers the potential for rapid real-time information on seismic sources in a monitoring context.
In this study, we investigate how a supervised deep-learning model trained on a data set of synthetic seismograms can be used to rapidly invert for source parameters. The inversion is represented in compact form by a convolutional neural network that yields the seismic moment tensor. In other words, a neural network is trained to encapsulate the relationship between observations and the underlying point-source models. The learning-based model allows rapid inversion once seismic waveforms are available. Moreover, we find that the method is robust with respect to perturbations such as observational noise and missing data. We seek to demonstrate that this approach is viable for real-time estimation of micro-seismicity source parameters. As a demonstration test, we plan to apply the new approach to data collected at the geothermal field system in the Hengill area, Iceland, within the framework of the COSEISMIQ project funded through the EU GEOTHERMICA programme.
How to cite: Nooshiri, N., Bean, C. J., Grigoli, F., and Dahm, T.: Towards Real-Time Moment Tensor Inversions in a Data Rich Micro-Seismic Environment using Deep Learning, EGU General Assembly 2021, online, 19–30 Apr 2021, EGU21-12686, https://doi.org/10.5194/egusphere-egu21-12686, 2021.
In seismology, when dealing with low signal-to-noise recordings, traditional event detection methods are often unable to recognise all the weak events hidden within the seismic noise. We investigate how machine learning techniques can improve automatic event detection by recognising the similarity between events. We focus on areas where anthropogenic activity related to the exploitation of subsoil resources can generate induced seismicity. In such areas, increasing the detection of weak events is essential to improve knowledge of the local seismicity and its related consequences.
The Self-Organizing Map (SOM) is an unsupervised machine learning approach widely used for clustering, visualization and data-exploration tasks in various applications. The SOM carries out a nonlinear mapping of data onto a two-dimensional map, preserving the most important topological and metric relationships of the data. Indeed, one of the reasons for using the SOM for clustering is to benefit from its topological structure when interpreting the data clusters.
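The SOM training loop described above can be sketched in plain Python. Grid size, learning-rate and radius schedules below are illustrative defaults, not the study's settings.

```python
# Minimal SOM sketch: a small 2-D grid of weight vectors is pulled toward the
# data; each sample updates its best-matching unit (BMU) and, with a Gaussian
# neighborhood, nearby nodes, so topology on the map reflects data similarity.
import math, random

def train_som(data, rows=4, cols=4, epochs=200, lr0=0.5, radius0=2.0, seed=1):
    rng = random.Random(seed)
    dim = len(data[0])
    w = {(r, c): [rng.random() for _ in range(dim)]
         for r in range(rows) for c in range(cols)}
    for t in range(epochs):
        frac = t / epochs
        lr = lr0 * (1 - frac)                 # decaying learning rate
        radius = radius0 * (1 - frac) + 0.5   # shrinking neighborhood
        for x in data:
            bmu = min(w, key=lambda n: sum((a - b) ** 2 for a, b in zip(w[n], x)))
            for (r, c), wv in w.items():
                d2 = (r - bmu[0]) ** 2 + (c - bmu[1]) ** 2
                h = math.exp(-d2 / (2 * radius ** 2))   # neighborhood function
                w[(r, c)] = [a + lr * h * (b - a) for a, b in zip(wv, x)]
    return w

def map_sample(w, x):
    """Return the map node (BMU) a feature vector falls onto."""
    return min(w, key=lambda n: sum((a - b) ** 2 for a, b in zip(w[n], x)))
```

After training, dissimilar signal classes (e.g. earthquakes vs. noise) fall onto distant groups of nodes, which is the separation exploited in the results below.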
In the preprocessing stage, feature extraction is performed using both the linear prediction coding (LPC) technique to encode the spectrograms and a waveform parameterization to characterize amplitude features in the time domain, for each of the three components.
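The LPC step can be sketched as an autocorrelation followed by the Levinson-Durbin recursion; the prediction order and framing here are illustrative, not the study's actual settings.

```python
# LPC sketch: fit an order-p linear predictor x[t] ~ -sum(a[k] * x[t-k]) by
# solving the autocorrelation normal equations with Levinson-Durbin. The
# coefficients compactly encode the spectral envelope of a frame.
def lpc(frame, order):
    """Return `order` prediction coefficients and the residual error."""
    n = len(frame)
    # Autocorrelation up to lag `order`
    r = [sum(frame[i] * frame[i + k] for i in range(n - k)) for k in range(order + 1)]
    coeffs = [1.0] + [0.0] * order   # coeffs[0] is fixed to 1
    err = r[0]
    for i in range(1, order + 1):    # Levinson-Durbin recursion
        acc = r[i] + sum(coeffs[j] * r[i - j] for j in range(1, i))
        k = -acc / err               # reflection coefficient
        new = coeffs[:]
        for j in range(1, i):
            new[j] = coeffs[j] + k * coeffs[i - j]
        new[i] = k
        coeffs, err = new, err * (1 - k * k)
    return coeffs[1:], err
```

For a frame dominated by a single decaying resonance, a low-order fit already captures the envelope, which is why LPC yields a compact feature vector per spectrogram frame.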
The SOM was trained on a dataset recorded at the St Gallen geothermal site, composed of 388 records of seismic noise and 347 earthquakes with magnitude (MLcorr) between -1.2 and 3.5, collected by the Swiss Seismological Service in 2013 during well control measures after drilling and acidizing the GT-1 well.
We obtained promising first results: the SOM strategy correctly discriminates all known earthquakes, clustering them into nodes distant from the group of nodes where noise falls. We also tested synthetic traces in which event waveforms were hidden within recorded seismic noise or artificially generated noise. We studied the signals of each cluster individually, assessing the similarities of the waveform and spectral characteristics across the three components. In addition, the results are evaluated in terms of event location, hypocentral distance, magnitude, and origin time.
This work has been supported by PRIN-2017 MATISSE project, No 20177EPPN2, funded by the Italian Ministry of Education and Research.
How to cite: Forlenza, P., Scarpetta, S., Amoroso, O., Capuano, P., and Scarpa, R.: An application of automatic event detection based on neural network at St Gallen (Switzerland) deep geothermal field, EGU General Assembly 2021, online, 19–30 Apr 2021, EGU21-9627, https://doi.org/10.5194/egusphere-egu21-9627, 2021.
Seismology is one of the main techniques used to monitor volcanic activity worldwide. Seismicity analysis through several seismic sensor deployments has been used to monitor the Mayotte volcano crisis since its beginning in May 2018. Because volcanic activity can evolve rapidly, efficient and accurate seismicity detectors are crucial to assess the activity level of the volcano in real time and, if needed, to issue timely warnings.
Traditional real-time seismic processing software, such as EarthWorm or SeisComP, uses phase-onset pickers followed by a phase association algorithm to declare an event and proceed with its location. Real-time phase pickers usually cannot identify whether a detected phase is a P or S arrival, so this decision or assumption is made by the associator. The lack of S arrivals has an obvious impact on hypocentral location quality. S phases can also help detect small earthquakes whose weak P phases may be missed.
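Why a priori P/S labels help the associator can be illustrated with a toy sketch: when both phases at a station are labeled, the S-P time fixes the source distance and hence an origin-time estimate, and picks whose estimates agree can be grouped into one event. Velocities and the tolerance below are invented constants; real associators such as binder_ew work with travel-time grids and many more consistency checks.

```python
# Toy phase association with labeled P/S picks: an S-P time gives distance,
# distance gives an origin-time estimate per station, and estimates that
# agree within a tolerance are grouped into candidate events.
VP, VS = 6.0, 3.46   # km/s, illustrative crustal values

def origin_time_from_pair(tp, ts):
    # From d/VS - d/VP = ts - tp, solve for the source distance d.
    dist = (ts - tp) * VP * VS / (VP - VS)
    return tp - dist / VP, dist

def associate(pairs, tol=1.0):
    """pairs: (tp, ts) per station; group picks by consistent origin time."""
    estimates = sorted(origin_time_from_pair(tp, ts)[0] for tp, ts in pairs)
    events, current = [], [estimates[0]]
    for t in estimates[1:]:
        if t - current[-1] <= tol:
            current.append(t)
        else:
            events.append(current)
            current = [t]
    events.append(current)
    return events
```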
We implemented the deep neural network-based method PhaseNet to identify seismic P and S waves in real time on 3-component seismometers deployed on Mayotte island. We also built an interface to subsequently process PhaseNet results and send pick objects to EarthWorm. We use the EarthWorm binder_ew associator module, specifically tuned for PhaseNet's a priori phase identification, to detect and locate the events, which are finally archived in a SeisComP database. We implemented this innovative real-time processing system for the REVOSIMA (Reseau de surveillance Volcanologique et Sismologique de Mayotte) hosted at OVPF (Observatoire Volcanologique du Piton de la Fournaise). We assess the robustness of the algorithm by comparing the results to existing automatic and manually detected seismicity catalogs.
We show that our new algorithm outperforms the existing SeisComP automatic system, both in the number of earthquake detections and in location reliability. Our implementation also detects more events than the daily manual data screening. While this promising new processing system was first applied to study the Mayotte seismicity, it can be used in any seismically active zone, whether of volcanic or tectonic origin. Indeed, it will be installed at the Martinique volcanic and seismic observatory later this year.
How to cite: Saurel, J.-M., Retailleau, L., Zhu, W., Issartel, S., Satriano, C., and Beroza, G. C.: Implementation of a new real time seismicity detector for the Mayotte crisis, EGU General Assembly 2021, online, 19–30 Apr 2021, EGU21-10646, https://doi.org/10.5194/egusphere-egu21-10646, 2021.
Seismic networks record the vibrations captured by their stations. These vibrations can come from various sources, such as tectonic tremors, quarry blasts and other anthropogenic activity. Separating earthquakes from other sources may require human intervention and can be labor-intensive. Without such separation, seismic hazard may be miscalculated. Our goal is to discriminate earthquakes from quarry blasts by using artificial neural networks. To do so, we used two different databases for earthquake signals and quarry blasts. Neither of them contains data from our region of interest, north-east Italy. We used three-channel signals from the stations as input to the neural networks. The signals are fed to the networks both as time series and through their spectral characteristics. We then separated earthquakes from quarry blasts in north-east Italy using our algorithm. We conclude that our algorithm discriminates earthquakes from quarry blasts with high accuracy, and that the database can be used in different regions with different earthquake and quarry-blast sources over a wide range of distances.
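One kind of spectral characteristic commonly used to separate blasts from earthquakes is the ratio of low- to high-frequency power (blasts tend to be depleted in high frequencies). The feature and band choices below are assumptions for illustration, not necessarily the inputs used by the authors' networks.

```python
# Illustrative spectral feature for blast/earthquake discrimination: power in
# a low band divided by power in a high band, via a naive DFT (O(n^2), fine
# for short windows; real code would use an FFT).
import cmath, math

def power_spectrum(x):
    n = len(x)
    return [abs(sum(x[t] * cmath.exp(-2j * cmath.pi * k * t / n)
                    for t in range(n))) ** 2 for k in range(n // 2)]

def band_ratio(x, fs, low=(1.0, 5.0), high=(10.0, 20.0)):
    """Low/high band power ratio; band edges in Hz are illustrative."""
    ps = power_spectrum(x)
    df = fs / len(x)   # frequency resolution per DFT bin
    def band_power(f1, f2):
        return sum(p for k, p in enumerate(ps) if f1 <= k * df < f2)
    return band_power(*low) / (band_power(*high) + 1e-12)
```

Scalar features like this, computed per channel, are one way the "spectral characteristics" mentioned above can be presented to a network alongside the raw time series.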
How to cite: Ertuncay, D., De Lorenzo, A., and Costa, G.: Discrimination of quarry blasts from earthquakes using artificial neural networks, EGU General Assembly 2021, online, 19–30 Apr 2021, EGU21-1285, https://doi.org/10.5194/egusphere-egu21-1285, 2021.
Rapidly estimating an earthquake's source mechanism is essential for near-real-time hazard assessment, which is based on shakemaps and further downstream analysis such as physics-based aftershock probability calculations. The model and data uncertainties associated with the estimated source mechanism are also crucial. We propose a Bayesian machine learning algorithm, trained on normalized synthetic waveforms, for estimating the full moment tensor of earthquakes almost instantaneously, together with the associated source parameter uncertainties.
A prior assumption is an appropriate location of the earthquake along with its associated uncertainties. Here, this is obtained by already established machine learning-based algorithms. The training data set is computed by forward calculation of synthetic waveforms based on Green's functions, calculated for a specified 1-D velocity model using the Pyrocko software package. The learned labels, i.e. the information the machine learning algorithm associates with the data, are the moment tensor components, described with only five unique parameters. For predefined locations in an area of interest we train a fully independent Bayesian Convolutional Neural Network (BNN).
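On the five-parameter label space: a symmetric moment tensor has six independent components, and one plausible reading of the abstract is that, with amplitudes normalized away in the training waveforms, the scalar moment is removed and five free parameters remain. A toy numpy sketch under that assumption (not the authors' parameterization):

```python
import numpy as np

def normalize_mt(m6):
    """Normalize the six independent components of a symmetric moment
    tensor (Mxx, Myy, Mzz, Mxy, Mxz, Myz) to unit Euclidean norm.

    After normalization the component vector lies on a 5-sphere, i.e.
    only five parameters remain free once the overall scale (scalar
    moment) is discarded.  Illustrative assumption: the Euclidean norm
    of the 6-vector is used as the scale, not a formal tensor norm.
    """
    m6 = np.asarray(m6, dtype=float)
    return m6 / np.linalg.norm(m6)

# Example: normalization discards the scalar moment (here ~3e17 N m).
m = normalize_mt([0.0, 0.0, 0.0, 3.0e17, 0.0, 0.0])
print(m)   # -> [0. 0. 0. 1. 0. 0.]
```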
With variational inference, the weights of the network are not scalars but represent distributions of weights for the activation of neurons. Each evaluation of input data by our BNN therefore yields a set of predictions with associated probabilities. This allows us to evaluate an ensemble of possible source mechanisms for each evaluation of input waveform data.
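The idea of drawing weight realizations and collecting an ensemble of predictions can be sketched with a single toy variational layer; this is illustrative only and unrelated to the authors' architecture.

```python
import numpy as np

rng = np.random.default_rng(42)

def bayesian_linear(x, w_mu, w_sigma, n_samples=100):
    """Monte Carlo forward passes through one variational linear layer.

    Weights are drawn from N(w_mu, w_sigma^2) on every pass, so a single
    input yields an ensemble of outputs instead of a point estimate; the
    spread of the ensemble reflects the weight uncertainty.
    """
    outs = []
    for _ in range(n_samples):
        w = rng.normal(w_mu, w_sigma)   # sample one weight realization
        outs.append(x @ w)
    return np.array(outs)

x = np.ones(4)
w_mu = np.full((4, 2), 0.5)             # posterior means
w_sigma = np.full((4, 2), 0.1)          # posterior standard deviations
ens = bayesian_linear(x, w_mu, w_sigma)
print(ens.mean(axis=0))                 # close to [2.0, 2.0]
print(ens.std(axis=0))                  # spread from weight uncertainty
```

In the moment tensor application, each ensemble member would correspond to one candidate source mechanism, and the ensemble statistics give the uncertainty estimate.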
As a test case, we trained our models for an area south of the Coso geothermal field in California, for a fixed set of broadband stations at most 150 km away. We validate our approach with a subset of earthquakes from the 2019-2020 Ridgecrest sequence. For this data set we compare the estimates of our machine learning-based approach with independently determined focal mechanisms and moment tensors. Overall, we benchmark our approach with data unseen during the training of the machine learning models and show that it generates source mechanism estimates similar to those of independent studies within only a few seconds of processing time per earthquake. We finally apply the method to seismic data from a research network monitoring the area around two South German geothermal power plants. Our approach demonstrates the potential of machine learning to be implemented in operational frameworks for fast earthquake source mechanism estimation with associated uncertainties.
How to cite: Steinberg, A., Vasyura-Bathke, H., Gaebler, P., and Ceranna, L.: Estimating Seismic Moment Tensors based on Bayesian Machine Learning, EGU General Assembly 2021, online, 19–30 Apr 2021, EGU21-707, https://doi.org/10.5194/egusphere-egu21-707, 2021.
Machine learning methods have seen widespread adoption within the seismological community in recent years due to their ability to effectively process large amounts of data, while equalling or surpassing the performance of human analysts or classic algorithms. In the wider machine learning world, for example in imaging applications, the open availability of extensive high-quality datasets for training, validation, and the benchmarking of competing algorithms is seen as a vital ingredient in the rapid progress observed throughout the last decade. Within seismology, vast catalogues of labelled data are readily available, but collecting the waveform data for millions of records and assessing the quality of training examples is a time-consuming, tedious process. The natural variability in source processes and seismic wave propagation also presents a critical problem during training: the performance of models trained on different regions, distance ranges and magnitude ranges is not easily comparable. The inability to easily compare and contrast state-of-the-art machine learning-based detection techniques on varying seismic data sets is currently a barrier to further progress within this emerging field. We present SeisBench, an extensible open-source framework for training, benchmarking, and applying machine learning algorithms. SeisBench provides access to various benchmark data sets and models from the literature, along with pre-trained model weights, through a unified API. Built to be extensible and modular, SeisBench allows the simple addition of new models and data sets, which can be easily interchanged with existing pre-trained models and benchmark data. Standardising access to data and metadata of varying quality simplifies comparison workflows, enabling the development of more robust machine learning algorithms.
We initially focus on phase detection, identification and picking, but the framework is designed to be extended for other purposes, for example direct estimation of event parameters. Users will be able to contribute their own benchmarks and (trained) models. In the future, it will thus be much easier both to compare the performance of new algorithms against published machine learning models and architectures, and to check the performance of established algorithms against new data sets. We hope that the ease of validation and inter-model comparison enabled by SeisBench will serve as a catalyst for the development of the next generation of machine learning techniques within the seismological community. The SeisBench source code will be published under an open license, and we explicitly encourage community involvement.
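The interchangeability that such a unified API provides can be illustrated with a toy model registry; all class, function and method names below are stand-ins for the sketch, not SeisBench's actual interface.

```python
# Toy sketch: any model implementing a common interface can be run on
# any data set through one benchmarking entry point.

class PickerModel:
    """Common interface every registered picker implements."""
    name = "base"

    def annotate(self, waveform):
        raise NotImplementedError

class ThresholdPicker(PickerModel):
    name = "threshold"

    def annotate(self, waveform):
        # Trivial stand-in: "pick" sample indices exceeding 0.9.
        return [i for i, v in enumerate(waveform) if v > 0.9]

# A registry lets models be swapped by name without changing workflows.
REGISTRY = {cls.name: cls for cls in (ThresholdPicker,)}

def benchmark(model_name, dataset):
    """Run any registered model on any dataset of named traces."""
    model = REGISTRY[model_name]()
    return {trace_id: model.annotate(w) for trace_id, w in dataset.items()}

dataset = {"trace_a": [0.1, 0.95, 0.2], "trace_b": [0.0, 0.0, 1.0]}
print(benchmark("threshold", dataset))  # {'trace_a': [1], 'trace_b': [2]}
```

Because every model satisfies the same interface and every data set is served in the same format, adding a new model or benchmark requires no change to existing comparison code, which is the design point the abstract describes.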
How to cite: Woollam, J., Münchmeyer, J., Giunchi, C., Jozinovic, D., Diehl, T., Saul, J., Michelini, A., Haslinger, F., Lange, D., Tilmann, F., and Rietbrock, A.: SeisBench: A toolbox for benchmarking and applying machine learning in seismology., EGU General Assembly 2021, online, 19–30 Apr 2021, EGU21-12218, https://doi.org/10.5194/egusphere-egu21-12218, 2021.