Can deep learning help understand and characterize earthquakes? An example with deep learning optical satellite image correlation.
- 1Univ. Grenoble Alpes, Univ. Savoie Mont Blanc, CNRS, IRD, Univ. Gustave Eiffel, ISTerre, 38000 Grenoble, France
- 2Univ. Grenoble Alpes, CNRS, Grenoble INP, GIPSA-lab, 38000 Grenoble, France
- 3Department of Earth and Environmental Sciences, Ludwig-Maximilians-Universität München, Munich, Germany
Recent advances in machine learning are revolutionizing our understanding of the Solid Earth, in particular through the automatic detection of geophysical events and objects (such as volcanic deformation in InSAR [Anantrasirichai et al. 2018], landslides in optical satellite imagery [Mohan et al. 2021], and earthquakes in seismic recordings [Zhu et al. 2019]). Yet understanding geophysical phenomena also requires accurately characterizing them: automating such characterization tasks with machine learning is the challenge for the coming years. One main difficulty is the availability of a high-quality labeled database, i.e. a database containing both input data (such as remote sensing acquisitions) and the corresponding ground truth (the quantity we are looking for). In this context, the problem of ground deformation estimation by sub-pixel optical satellite image registration (or correlation) is a good example.
Precise estimation of ground displacement at regional scales from optical satellite imagery is fundamental for understanding earthquake ruptures. Current methods use correlation techniques between two image acquisitions in order to retrieve a fractional pixel shift [Rosu et al. 2014, Leprince et al. 2007]. However, the precision and accuracy of image correlation can be limited by various problems, such as differences in local lighting conditions between acquisitions, seasonal changes in image reflectance, and stereoscopic and resampling artifacts, which can all bias the displacement estimate, especially in the sub-pixel domain.
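For intuition, the classical approach can be sketched as windowed sub-pixel offset estimation between the two acquisitions. The snippet below is a minimal illustration using phase correlation (scikit-image's phase_cross_correlation); it is not the specific correlator of the cited methods, and the window size, step, and upsampling factor are illustrative assumptions only.

```python
# Minimal sketch of windowed sub-pixel offset estimation between two
# co-registered optical images, assuming NumPy and scikit-image are available.
import numpy as np
from skimage.registration import phase_cross_correlation

def windowed_offsets(pre, post, win=32, step=16, upsample=100):
    """Estimate per-window sub-pixel (dy, dx) shifts between two images."""
    rows = np.arange(0, pre.shape[0] - win + 1, step)
    cols = np.arange(0, pre.shape[1] - win + 1, step)
    dy = np.zeros((rows.size, cols.size))
    dx = np.zeros((rows.size, cols.size))
    for i, r in enumerate(rows):
        for j, c in enumerate(cols):
            # Phase correlation with Fourier upsampling gives a nominal
            # precision of 1/upsample pixel within each window.
            shift, _, _ = phase_cross_correlation(
                pre[r:r + win, c:c + win],
                post[r:r + win, c:c + win],
                upsample_factor=upsample,
            )
            dy[i, j], dx[i, j] = shift
    return dy, dx
```

Note that each window returns a single shift, which is precisely why a locally homogeneous displacement must be assumed inside the window.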
Image correlation is a valuable and unique source of information on coseismic strain, particularly in the near field of earthquake ruptures, where InSAR often decorrelates. However, the correlation process is limited by the underlying assumption of a locally homogeneous displacement within the correlation window (typically 3x3 to 32x32 pixels wide), which leads to a bias when the correlation window crosses a fault discontinuity. Data-driven methods may provide a way to overcome these errors; yet no ground-truth displacement field exists for real-world datasets. Starting from a realistic simulated database built from Landsat-8 satellite image pairs with added simulated sub-pixel shifts, we developed a Convolutional Neural Network (CNN) able to retrieve sub-pixel displacements. In particular, we show how to specifically design discontinuities in the training set in order to reduce the near-field bias where the correlation window crosses the fault. Comparisons are made with state-of-the-art correlation methods both on synthetic (yet realistic) data and on real images of the Ridgecrest area.
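As a hedged illustration of the synthetic-database idea, the sketch below warps a single real image patch (e.g. a Landsat-8 crop) with a known sub-pixel displacement field containing a fault-like step, yielding an image pair together with its ground truth. The field amplitudes, fault geometry, and interpolation choices are assumptions made for illustration, not the actual parameters of our database.

```python
# Sketch: build one training sample (image pair + ground-truth displacement)
# by resampling a real image patch with a simulated sub-pixel shift field
# that contains a sharp discontinuity across a synthetic "fault".
import numpy as np
from scipy.ndimage import map_coordinates

def make_discontinuous_pair(patch, step_px=0.4, ramp_px=0.1, seed=0):
    """Return (reference, shifted, ux, uy): an image pair and the displacement
    field (in pixels) used to resample the reference image."""
    rng = np.random.default_rng(seed)
    patch = np.asarray(patch, dtype=float)
    ny, nx = patch.shape
    yy, xx = np.mgrid[0:ny, 0:nx]

    # Smooth sub-pixel background displacement plus a step across a vertical
    # fault trace at x = nx // 2 (the discontinuity the CNN must learn).
    ux = ramp_px * (xx / nx) + step_px * (xx > nx // 2)
    uy = ramp_px * (yy / ny) * rng.uniform(0.5, 1.5)

    # Resample the patch at the displaced coordinates (cubic spline).
    shifted = map_coordinates(patch, [yy + uy, xx + ux], order=3, mode="nearest")
    return patch, shifted, ux, uy
```

Because the displacement field is imposed rather than measured, every pixel has an exact label, including pixels adjacent to the discontinuity, which is what allows the network to be trained to reduce the near-fault bias.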
This preliminary study provides an example of how realistic synthetic data generation (combining real data with numerical simulation) can be used to train a machine learning model that estimates fault displacement fields. Such an approach could be applied to other characterization tasks, e.g. when realistic numerical simulation data is available but sufficient ground truth data is not.
How to cite: Giffard-Roisin, S., Montagnon, T., Pathier, E., Dalla Mura, M., Marchandon, M., and Hollingsworth, J.: Can deep learning help understand and characterize earthquakes? An example with deep learning optical satellite image correlation., EGU General Assembly 2023, Vienna, Austria, 23–28 Apr 2023, EGU23-7571, https://doi.org/10.5194/egusphere-egu23-7571, 2023.