Using Deep Learning for Sentinel-1-based Landslide Mapping
- 1 German Remote Sensing Data Center (DFD), German Aerospace Center (DLR), Muenchener Strasse 20, 82234 Wessling, Germany
- 2 German Climate Computing Center (DKRZ), Bundesstraße 45a, 20146 Hamburg, Germany
- 3 Remote Sensing Technology Institute (IMF), German Aerospace Center (DLR), Muenchener Strasse 20, 82234 Wessling, Germany
Every year, landslides kill or injure thousands of people worldwide and substantially impact human livelihoods. With the increasing number of extreme weather events due to the changing climate, urban sprawl and the intensification of human activities, the number of deadly landslide events is expected to grow. Landslides often occur unexpectedly because their location and timing are difficult to predict. In such cases, providing information on the spatial extent of the landslide hazard is essential for organising and executing first-response actions on the ground.
This study explores the advantages and limitations of using high-resolution Synthetic Aperture Radar (SAR) data from Sentinel-1 within a deep learning framework for rapidly mapping landslide events. The objectives of the research are four-fold: 1) to investigate how Sentinel-1-based landslide mapping can be improved using deep learning; 2) to explore whether adding up to three pre-event scenes improves SAR-based classification accuracy; 3) to test if and how much polarimetric decomposition features and interferometric coherence improve classification accuracy; 4) to test whether data augmentation affects the final results.
We adopt a semantic segmentation model, U-Net, and a novel deep network, U2-Net, to map landslides based on limited but globally distributed landslide inventory data. In total, 306 image patches of 128x128 pixels were split into 80% for training/validation of the model and 20% for testing. We calculate radar backscatter (gamma nought VV and VH), polarimetric decomposition features (alpha angle, entropy, anisotropy) and interferometric coherence between temporally adjacent scenes. These features are calculated for three pre-event scenes and one post-event scene. Copernicus Digital Elevation Model (DEM) data are used to integrate land surface elevation and slope information into the classification process.
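To illustrate the data setup described above, the following is a minimal sketch of how the per-patch Sentinel-1 and DEM features could be stacked into model inputs and split 80/20; the channel layout, array shapes and variable names are illustrative assumptions, not the authors' actual implementation.

```python
import numpy as np

# Illustrative layout: per scene we assume 6 features (gamma nought VV/VH,
# alpha angle, entropy, anisotropy, coherence); 3 pre-event + 1 post-event
# scenes plus DEM elevation and slope give 26 channels per 128x128 patch.
N_PATCHES, PATCH_SIZE = 306, 128
N_CHANNELS = 6 * 4 + 2

rng = np.random.default_rng(42)
# Stand-in arrays in place of the real feature stacks and landslide masks.
X = rng.random((N_PATCHES, N_CHANNELS, PATCH_SIZE, PATCH_SIZE), dtype=np.float32)
y = (rng.random((N_PATCHES, 1, PATCH_SIZE, PATCH_SIZE)) > 0.9).astype(np.float32)

# 80% of patches for training/validation, 20% held out for testing.
idx = rng.permutation(N_PATCHES)
split = int(0.8 * N_PATCHES)
X_trainval, y_trainval = X[idx[:split]], y[idx[:split]]
X_test, y_test = X[idx[split:]], y[idx[split:]]

print(X_trainval.shape, X_test.shape)  # (244, 26, 128, 128) (62, 26, 128, 128)
```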
Using all Sentinel-1 features, the best deep learning model achieved a Dice coefficient of 0.96 on the validation data. Landslide detection based on U2-Net gave slightly better results than the U-Net-based approach. The accuracies of models based on one, two or three pre-event scenes did not differ substantially, indicating no added value from additional pre-event SAR features. Higher accuracies were reached when polarimetric decomposition features were combined with interferometric coherence than in runs with only radar backscatter. Increasing the sample size with image augmentation methods such as four-directional rotation and flipping also improved accuracy.
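For clarity, the evaluation metric and the augmentation scheme referred to above could be implemented as in the following sketch; the function names are hypothetical and the rotation/flip combination is one plausible reading of "four-directional rotation and flipping", not the authors' exact procedure.

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7) -> float:
    """Dice coefficient for binary segmentation masks (1 = landslide pixel)."""
    pred, target = pred.astype(bool), target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

def augment_patch(image: np.ndarray, mask: np.ndarray):
    """Yield eight rotation/flip variants of a (C, H, W) patch and its mask."""
    for k in range(4):                               # 0, 90, 180, 270 degree rotations
        img_r = np.rot90(image, k, axes=(-2, -1))
        msk_r = np.rot90(mask, k, axes=(-2, -1))
        yield img_r, msk_r                           # rotated only
        yield img_r[..., ::-1], msk_r[..., ::-1]     # rotated and horizontally flipped
```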
Future research is directed towards (i) increasing and diversifying the landslide examples, (ii) performing landslide-event-based resampling and (iii) adding pre- and post-event optical data from Sentinel-2.
How to cite: Orynbaikyzy, A., Albrecht, F., Yao, W., Plank, S., Camero, A., and Martinis, S.: Using Deep Learning for Sentinel-1-based Landslide Mapping, EGU General Assembly 2023, Vienna, Austria, 24–28 Apr 2023, EGU23-6884, https://doi.org/10.5194/egusphere-egu23-6884, 2023.