EGU25-7584, updated on 14 Mar 2025
https://doi.org/10.5194/egusphere-egu25-7584
EGU General Assembly 2025
© Author(s) 2025. This work is distributed under
the Creative Commons Attribution 4.0 License.
Oral | Friday, 02 May, 09:45–09:55 (CEST)
Room N2
Improving Landslide Susceptibility Mapping with Explainable AI: Enhancing Prediction and Interpretability
Mohamed Abdelkader1,2 and Árpád Csámer1,3
  • 1Debrecen University, Institute of Earth Sciences, Department of Mineralogy and Geology, Debrecen, Hungary (mohamed.abdelkader@science.unideb.hu)
  • 2Geology Department, Faculty of Science, Ain Shams University, Cairo, Egypt
  • 3Cosmochemistry and Cosmic Methods Research Group, University of Debrecen, Debrecen, Hungary

Landslides are among the most serious natural disasters, causing numerous fatalities and extensive damage to infrastructure. In developing countries with rapidly growing cities, accurate landslide susceptibility maps (LSMs) are crucial for predicting landslides and minimizing risks, and they play a key role in effective disaster management and mitigation strategies. While advanced machine learning models such as Random Forest (RF) and XGBoost have significantly improved LSM accuracy, their complexity and "black box" nature make them challenging to interpret. This study uses SHapley Additive exPlanations (SHAP) as an explainable artificial intelligence (XAI) approach to enhance the interpretability of these ensemble models in an arid region in East Cairo, Egypt. A total of 183 landslides were identified using field surveys and satellite imagery, with 70% of the data allocated for training and 30% for validation. Fourteen predictor variables were incorporated from different categories. Both RF and XGBoost were used to generate LSMs, and their accuracies were compared to identify the more effective model. SHAP values provided a detailed evaluation of the contribution of each variable to landslide susceptibility, offering insights into the models' decision-making processes and identifying the most influential features. The results showed that SHAP not only improved the transparency of complex models but also facilitated the identification of key factors driving susceptibility, resulting in a more efficient and interpretable LSM framework. Models trained with SHAP-informed feature selection achieved high performance, with an AUC of up to 0.96. This study highlights the dual potential of explainable AI in addressing the complexity of modern machine learning models and improving their practical applicability in landslide hazard assessments.
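The workflow above (train a tree ensemble on a 70/30 split, validate with AUC, then attribute the prediction to individual predictors with Shapley values) can be sketched as follows. This is a minimal illustration, not the authors' code: the data are synthetic stand-ins for the 14 predictor variables, the feature names are hypothetical, and for transparency the Shapley values are computed exactly over all feature subsets with pure NumPy rather than with the optimized TreeSHAP algorithm that the `shap` package provides; the quantity computed is the same interventional Shapley attribution.

```python
import itertools
import math

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)

# Synthetic stand-in data: 4 features instead of the study's 14, so the
# exact Shapley sum over all 2^4 feature subsets stays cheap.
# Feature roles here are hypothetical (e.g. slope, curvature, distance
# to drainage, lithology code); susceptibility is driven by features 0 and 2.
n, d = 400, 4
X = rng.normal(size=(n, d))
y = (2.0 * X[:, 0] - 1.5 * X[:, 2] + rng.normal(scale=0.5, size=n) > 0).astype(int)

# 70/30 train/validation split, as in the abstract.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])

def shapley_values(f, x, background):
    """Exact interventional Shapley values for one instance x.

    The value of a feature subset S is the mean prediction when features
    in S are fixed to x and the rest are drawn from background rows.
    TreeSHAP computes this same attribution efficiently for tree ensembles.
    """
    d = len(x)
    phi = np.zeros(d)

    def v(S):
        Z = background.copy()
        Z[:, list(S)] = x[list(S)]
        return f(Z).mean()

    for i in range(d):
        others = [j for j in range(d) if j != i]
        for r in range(d):
            for S in itertools.combinations(others, r):
                # Standard Shapley weight |S|! (d - |S| - 1)! / d!
                w = math.factorial(r) * math.factorial(d - r - 1) / math.factorial(d)
                phi[i] += w * (v(S + (i,)) - v(S))
    return phi

f = lambda Z: model.predict_proba(Z)[:, 1]
background = X_tr[:50]
phi = shapley_values(f, X_te[0], background)

# Efficiency property of Shapley values: attributions sum to the
# prediction for x minus the mean prediction over the background set.
gap = phi.sum() - (f(X_te[0:1])[0] - f(background).mean())
```

Ranking features by the mean absolute Shapley value across validation instances gives the global importance ordering used for the SHAP-informed feature selection mentioned above.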

Keywords: Landslide susceptibility, Explainable AI, Random Forest, XGBoost, Arid regions

How to cite: Abdelkader, M. and Csámer, Á.: Improving Landslide Susceptibility Mapping with Explainable AI: Enhancing Prediction and Interpretability, EGU General Assembly 2025, Vienna, Austria, 27 Apr–2 May 2025, EGU25-7584, https://doi.org/10.5194/egusphere-egu25-7584, 2025.