A deep learning-based workflow for microseismic event detection
- 1 Department of Earth Sciences, University of Pisa, Italy
- 2 Swiss Seismological Service (SED), ETH-Zurich, Switzerland
In the last few years, the number of dense seismic networks deployed around the world has grown rapidly and will continue to grow in the coming years, producing ever larger datasets. Among the seismological applications in which such massive datasets are routinely collected, microseismic monitoring operations are certainly the most relevant and provide a perfect playground for data-intensive techniques. These applications generally involve seismic sequences characterized by a large number of weak earthquakes that overlap each other or occur with short inter-event times; in these cases, pick-based detection and location methods may struggle to correctly associate picks with phases and events, and association errors can lead to missed detections and/or reduced location resolution.

Among the seismological data analysis methods developed in recent years, waveform-based approaches have gained popularity because they detect and locate earthquakes without explicit phase picking and association steps. These approaches exploit the information of the entire network to simultaneously detect and locate seismic events, producing coherence matrices whose maximum corresponds to the coordinates of the seismic event. Such methods are particularly powerful at locating microseismic events strongly contaminated by noise; however, despite their excellent performance as locators, waveform-based methods still show several disadvantages when used as detectors. In particular, waveform-based earthquake detectors depend strongly on the detection threshold selected for a given application: if it is too high, small events may be missed; if it is too low, false events may be detected.

To overcome this problem, deep learning techniques developed for image classification can be used to remove the dependence on threshold levels during the detection process. When applied to continuous seismic data, waveform stacking methods produce coherence matrices with clear patterns that can be used to distinguish true events from false ones (i.e. noise): the coherence matrices for a seismic event generally show a single, well-focused maximum, while pure noise waveforms produce blurred images with low coherence values or many poorly focused maxima. Deep learning algorithms are therefore a natural tool to classify these kinds of images and improve the detection capability of waveform-based techniques.

The aim of this work is the development of a workflow that detects seismic events by classifying the coherence matrices with a Convolutional Neural Network (CNN). We train the CNN on synthetic coherence matrices. To generate realistic coherence matrices for both events and noise, we use a stochastic modeling approach that produces synthetic noise records with the same spectral properties as the observed ones. For each synthetic event or pure-noise recording, we then use waveform stacking to generate the coherence matrices used to train the CNN. An important feature of the workflow presented here is that the training is performed entirely on synthetics, without the need for large labeled datasets, which are often unavailable when new microseismic networks are deployed. To test the workflow we apply it to the recently released dataset collected in Iceland within the COSEISMIQ project.
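The stochastic generation of noise with the same spectral properties as the observed records can be illustrated with a simple phase-randomization surrogate. This is a minimal sketch assuming NumPy; the function name and the `observed_noise` array are illustrative and not the implementation used in the study.

```python
# Minimal sketch (illustrative, not the authors' code): build a synthetic noise
# trace that shares the amplitude spectrum of an observed noise record by
# keeping the Fourier amplitudes and randomising the phases.
import numpy as np

def synthetic_noise_from_spectrum(observed_noise, rng=None):
    """Return a noise realisation with the same amplitude spectrum as the input."""
    rng = np.random.default_rng() if rng is None else rng
    spectrum = np.fft.rfft(observed_noise)              # observed amplitude + phase
    amplitude = np.abs(spectrum)                        # keep the amplitude spectrum
    random_phase = rng.uniform(0.0, 2.0 * np.pi, size=amplitude.shape)
    surrogate = amplitude * np.exp(1j * random_phase)   # impose random phases
    return np.fft.irfft(surrogate, n=len(observed_noise))
```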
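The delay-and-stack computation that produces a coherence value for each trial source location can be sketched as follows. The characteristic functions `char_funcs` and the integer travel-time table `travel_times` (in samples) are assumed inputs for illustration, and the actual waveform-stacking operator used in the workflow may differ.

```python
# Minimal, simplified sketch of delay-and-stack coherence: for each trial grid
# node, station characteristic functions are shifted by the predicted travel
# times and summed over an origin-time scan; a well-focused maximum in the
# resulting coherence matrix indicates a seismic event.
import numpy as np

def stack_coherence(char_funcs, travel_times):
    """char_funcs: (n_sta, n_samp); travel_times: (n_grid, n_sta) in samples."""
    n_sta, n_samp = char_funcs.shape
    n_grid = travel_times.shape[0]
    n_origin = n_samp - travel_times.max()        # valid origin-time samples
    coherence = np.zeros(n_grid)
    for g in range(n_grid):
        stack = np.zeros(n_origin)
        for s in range(n_sta):
            dt = travel_times[g, s]
            stack += char_funcs[s, dt:dt + n_origin]   # delay-and-sum
        coherence[g] = stack.max() / n_sta             # best origin time, normalised
    return coherence   # reshape onto the 3-D search grid to obtain the coherence matrix
```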
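A small CNN that classifies coherence-matrix images as event versus noise could look like the sketch below, written here with the Keras API; the input size, layer configuration, and hyperparameters are illustrative assumptions rather than the architecture adopted in this work.

```python
# Minimal sketch (assumed architecture): binary classifier for 2-D coherence
# matrices, trained on synthetic examples labelled as event (1) or noise (0).
import tensorflow as tf
from tensorflow.keras import layers, models

def build_coherence_classifier(input_shape=(64, 64, 1)):
    model = models.Sequential([
        layers.Input(shape=input_shape),
        layers.Conv2D(16, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(32, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(64, activation="relu"),
        layers.Dense(1, activation="sigmoid"),   # P(event) vs noise
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy",
                  metrics=["accuracy"])
    return model

# Usage on synthetic training data (X: coherence images, y: 0/1 labels):
# model = build_coherence_classifier()
# model.fit(X_train, y_train, epochs=20, validation_split=0.1)
```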
How to cite: De solda, M., Grigoli, F., Shi, P., Lanza, F., and Wiemer, S.: A deep learning-based workflow for microseismic event detection, EGU General Assembly 2023, Vienna, Austria, 24–28 Apr 2023, EGU23-1638, https://doi.org/10.5194/egusphere-egu23-1638, 2023.