EGU26-13408, updated on 14 Mar 2026
https://doi.org/10.5194/egusphere-egu26-13408
EGU General Assembly 2026
© Author(s) 2026. This work is distributed under
the Creative Commons Attribution 4.0 License.
Poster | Tuesday, 05 May, 14:00–15:45 (CEST), Display time Tuesday, 05 May, 14:00–18:00
 
Hall X3, X3.104
Interpreting landslide susceptibility models using explainable machine learning
Carla Mae Arellano1, Daniel Hölbling1, Elena Nafieva1, Jachin Jonathan van Ek1, Stéphane Henriod1, Yann Rebois2, Albert Schwingshandl3, Sarah Forcieri4, Raimund Heidrich3, Isabella Hörbe3, and Lorena Abad1
  • 1University of Salzburg, Z_GIS, Salzburg, Austria (carlamae.arellano@plus.ac.at)
  • 2Médecins Sans Frontières (MSF), Vienna, Austria 
  • 3RIOCOM – Ingenieurbüro für Kulturtechnik und Wasserwirtschaft DI Albert Schwingshandl, Vienna, Austria
  • 4National School of Geographic Sciences - Geomatics (ENSG - Géomatique), Champs-sur-Marne, France

Machine learning approaches are increasingly applied to landslide susceptibility mapping. Despite their growing use, limited insight into model behavior and variable influence remains a major challenge, particularly in data-scarce settings where landslide inventories are incomplete and input data are heterogeneous.

This study explores how explainability methods can be used to analyze and interpret machine learning-based landslide susceptibility models. First, a landslide susceptibility dataset is constructed by combining an available landslide inventory with commonly used environmental conditioning factors. These include topographic data (e.g. elevation, slope, curvature, flow accumulation), proximity variables (e.g. distance to rivers and roads), and land cover or vegetation proxies derived from Earth Observation (EO) data, such as the Normalized Difference Vegetation Index (NDVI). Our focus is on understanding how different input variables influence model predictions and how these influences vary spatially.  
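One of the EO-derived inputs mentioned above, the NDVI, is computed per pixel from the red and near-infrared reflectance bands. As a minimal sketch (the band arrays below are synthetic; in practice they would come from satellite imagery aligned to the study area grid):

```python
import numpy as np

def ndvi(nir, red):
    """Normalized Difference Vegetation Index: (NIR - Red) / (NIR + Red)."""
    nir = np.asarray(nir, dtype=float)
    red = np.asarray(red, dtype=float)
    # Guard against division by zero where both bands are ~0
    return (nir - red) / np.clip(nir + red, 1e-9, None)

# Synthetic 2x2 reflectance tiles standing in for EO raster bands
red = np.array([[0.10, 0.20], [0.30, 0.10]])
nir = np.array([[0.50, 0.40], [0.30, 0.60]])

index = ndvi(nir, red)  # values lie in [-1, 1]; higher means denser vegetation
```

Each conditioning factor (slope, curvature, distance to rivers, NDVI, ...) would be stacked as one such raster layer, then sampled at landslide and non-landslide locations to build the tabular training dataset.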

For this, explainability techniques are applied to assess variable importance and spatial patterns in model responses. Feature attribution methods such as SHapley Additive exPlanations (SHAP) are used to quantify the contribution of individual conditioning factors, both globally across the model and locally in space. The results are examined for consistency with established geomorphological understanding, and sensitivities related to data limitations, inventory characteristics, and sampling strategies are identified.
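The principle behind SHAP can be illustrated without any dedicated library: the Shapley value of a feature is its average marginal contribution over all feature subsets, with absent features replaced by baseline values. The sketch below computes exact Shapley values for a single prediction of a hypothetical linear "susceptibility" model (the model, instance, and baseline are illustrative assumptions, not the study's actual model; real applications use approximations such as TreeSHAP rather than this exponential enumeration):

```python
from itertools import combinations
from math import factorial

def shapley_values(f, x, baseline):
    """Exact Shapley attribution of f(x) relative to f(baseline).

    f: prediction function taking a feature list; x: instance to explain;
    baseline: reference feature values standing in for 'feature absent'.
    """
    n = len(x)
    feats = list(range(n))
    phi = [0.0] * n

    def blend(subset):
        # Keep features in `subset` at their observed values, rest at baseline
        return [x[j] if j in subset else baseline[j] for j in feats]

    for i in feats:
        others = [j for j in feats if j != i]
        for k in range(n):
            for S in combinations(others, k):
                # Shapley kernel weight for a coalition of size |S|
                w = factorial(len(S)) * factorial(n - len(S) - 1) / factorial(n)
                phi[i] += w * (f(blend(set(S) | {i})) - f(blend(set(S))))
    return phi

# Hypothetical linear model over three conditioning factors:
# slope, NDVI, and (normalized) distance to river
model = lambda v: 0.6 * v[0] - 0.3 * v[1] + 0.1 * v[2]
phi = shapley_values(model, x=[0.8, 0.2, 0.5], baseline=[0.0, 0.0, 0.0])
# For a linear model, each phi equals that term's contribution,
# and the phis sum to f(x) - f(baseline).
```

Computing such local attributions at every mapped location is what allows the spatial variation in factor influence described above to be examined.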

This study provides insight into the strengths and limitations of machine learning-based landslide susceptibility modelling in data-scarce contexts and demonstrates how explainability can support more transparent and critically assessed susceptibility analyses. This work contributes to the development of interpretable susceptibility mapping approaches suited to preparedness and decision-support applications. 

How to cite: Arellano, C. M., Hölbling, D., Nafieva, E., van Ek, J. J., Henriod, S., Rebois, Y., Schwingshandl, A., Forcieri, S., Heidrich, R., Hörbe, I., and Abad, L.: Interpreting landslide susceptibility models using explainable machine learning, EGU General Assembly 2026, Vienna, Austria, 3–8 May 2026, EGU26-13408, https://doi.org/10.5194/egusphere-egu26-13408, 2026.