EGU21-2502, updated on 03 Mar 2021
https://doi.org/10.5194/egusphere-egu21-2502
EGU General Assembly 2021
© Author(s) 2021. This work is distributed under
the Creative Commons Attribution 4.0 License.

A self-supervised Deep Learning approach for improving signal coherence in Distributed Acoustic Sensing

Martijn van den Ende1,2, Itzhak Lior1, Jean Paul Ampuero1, Anthony Sladen1, and Cédric Richard2
  • 1Université Côte d'Azur, Géoazur, France (vanden@geoazur.unice.fr)
  • 2Université Côte d'Azur, OCA, UMR Lagrange, France

Fibre-optic Distributed Acoustic Sensing (DAS) is an emerging technology for vibration measurements with numerous applications in seismic signal analysis as well as in monitoring of urban and marine environments, including microseismicity detection, ambient noise tomography, traffic density monitoring, and maritime vessel tracking. A major advantage of DAS is its ability to turn fibre-optic cables into large and dense seismic arrays. As a cornerstone of seismic array analysis, beamforming relies on the relative arrival times of coherent signals along the optical fibre array to estimate the direction-of-arrival of the signals, and can hence be used to locate earthquakes as well as moving acoustic sources (e.g. maritime vessels). Naturally, this technique can only be applied to signals that are sufficiently coherent in space and time, and so beamforming benefits from signal processing methods that enhance the signal-to-noise ratio of the spatio-temporally coherent signal components. However, DAS measurements often suffer from waveform incoherence, which makes submarine DAS data particularly challenging to process.
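To make the beamforming step concrete, the sketch below illustrates frequency-domain delay-and-sum beamforming of a plane wave over a linear array: per-channel spectra are phase-shifted according to a trial slowness and stacked, and the slowness that maximises the beam power gives the apparent direction-of-arrival along the fibre. This is not the processing code of the study; the channel spacing, sampling rate, and slowness grid are placeholder assumptions.

```python
# Minimal sketch of plane-wave delay-and-sum beamforming for a linear DAS array.
# Channel spacing (dx), sampling rate (fs), and the slowness grid are
# illustrative assumptions, not parameters of the study.
import numpy as np

def beamform_power(data, dx=10.0, fs=50.0, slowness_grid=None):
    """Return beam power as a function of apparent horizontal slowness (s/m).

    data : array of shape (n_channels, n_samples), strain(-rate) records
    dx   : channel spacing along the fibre (m)
    fs   : sampling rate (Hz)
    """
    if slowness_grid is None:
        slowness_grid = np.linspace(-2e-3, 2e-3, 201)

    n_ch, n_t = data.shape
    offsets = np.arange(n_ch) * dx                 # channel positions along the fibre (m)
    freqs = np.fft.rfftfreq(n_t, d=1.0 / fs)       # frequency axis (Hz)
    spectra = np.fft.rfft(data, axis=1)            # per-channel spectra

    power = np.empty(slowness_grid.size)
    for i, p in enumerate(slowness_grid):
        # Phase shifts that align a plane wave with apparent slowness p
        shifts = np.exp(2j * np.pi * freqs[None, :] * p * offsets[:, None])
        beam = np.mean(spectra * shifts, axis=0)   # stack the aligned spectra
        power[i] = np.sum(np.abs(beam) ** 2)       # beam energy for this slowness
    return power
```

The slowness with the highest beam power corresponds to the most coherent plane-wave arrival, which is why this procedure degrades rapidly when the waveforms lack coherence across channels.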

In this work, we adopt a self-supervised deep learning algorithm to extract locally-coherent signal components. Owing to the similarity of coherent signals along a DAS system, one can predict the coherent part of the signal at a given channel based on the signals recorded at other channels, a property referred to as "J-invariance". Following the recent approach proposed by Batson & Royer (2019), we leverage the J-invariant property of earthquake signals recorded by a submarine fibre-optic cable. A U-net auto-encoder is trained to reconstruct the earthquake waveforms recorded at one channel based on the waveforms recorded at neighbouring channels. Repeating this procedure for every measurement location along the cable yields a J-invariant reconstruction of the dataset that maximises the local coherence of the data. When we apply standard beamforming techniques to the output of the deep learning model, we indeed obtain higher-fidelity estimates of the direction-of-arrival of the seismic waves, and spurious solutions resulting from a lack of waveform coherence and local seismic scattering are suppressed.
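As an illustration of this training scheme, the PyTorch sketch below blanks one channel, asks the network to reconstruct it from its neighbours, and computes the loss only on the blanked channel, which is the essence of a J-invariant (Noise2Self-style) reconstruction. The small convolutional model, channel count, and window length are placeholder assumptions standing in for the U-net auto-encoder and data dimensions of the study.

```python
# Minimal sketch of a J-invariant training step (not the authors' code).
# A tiny 1-D convolutional model stands in for the U-net auto-encoder.
import torch
import torch.nn as nn

class TinyDenoiser(nn.Module):
    """Maps (batch, n_channels, n_samples) to one reconstructed channel."""
    def __init__(self, n_channels=11):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(n_channels, 32, kernel_size=9, padding=4), nn.ReLU(),
            nn.Conv1d(32, 32, kernel_size=9, padding=4), nn.ReLU(),
            nn.Conv1d(32, 1, kernel_size=9, padding=4),
        )

    def forward(self, x):
        return self.net(x)

def j_invariant_step(model, optimiser, batch, target_idx):
    """One training step: reconstruct the blanked target channel from its neighbours.

    batch      : tensor of shape (batch, n_channels, n_samples)
    target_idx : index of the channel to blank and reconstruct
    """
    masked = batch.clone()
    masked[:, target_idx, :] = 0.0                 # hide the target channel from the input
    prediction = model(masked).squeeze(1)          # predict it from the neighbouring channels
    loss = nn.functional.mse_loss(prediction, batch[:, target_idx, :])
    optimiser.zero_grad()
    loss.backward()
    optimiser.step()
    return loss.item()

# Usage: cycle target_idx over all channels so that every measurement location
# receives a J-invariant reconstruction of its coherent signal component.
model = TinyDenoiser(n_channels=11)
optimiser = torch.optim.Adam(model.parameters(), lr=1e-3)
dummy_batch = torch.randn(8, 11, 2048)             # 8 windows, 11 channels, 2048 samples
loss = j_invariant_step(model, optimiser, dummy_batch, target_idx=5)
```

Because the target channel never appears in the input, the network can only reproduce the part of its signal that is predictable from the neighbouring channels, i.e. the spatially coherent component, while channel-local noise is suppressed.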

While the present application focuses on earthquake signals, the deep learning method is completely general, self-supervised, and directly applicable to other DAS-recorded signals. This approach facilitates the analysis of signals with low signal-to-noise ratio that are spatio-temporally coherent, and can work in tandem with existing time-series analysis techniques.

References:
Batson J., Royer L. (2019), "Noise2Self: Blind Denoising by Self-Supervision", Proceedings of the 36th International Conference on Machine Learning (ICML), Long Beach, California

How to cite: van den Ende, M., Lior, I., Ampuero, J. P., Sladen, A., and Richard, C.: A self-supervised Deep Learning approach for improving signal coherence in Distributed Acoustic Sensing, EGU General Assembly 2021, online, 19–30 Apr 2021, EGU21-2502, https://doi.org/10.5194/egusphere-egu21-2502, 2021.