NH6.7 | Integrating Digital Technologies and eXplainable AI (XAI) in Natural Hazard and Disaster Management
EDI
Co-organized by ESSI1
Convener: Paraskevas Tsangaratos | Co-conveners: Raffaele Albano, Ioanna Ilia, Haoyuan Hong, Elena Xoplaki, Ivanka Pelivan, Yi Wang
Orals | Tue, 16 Apr, 08:30–09:55 (CEST) | Room 0.15
Posters on site | Attendance Tue, 16 Apr, 10:45–12:30 (CEST) | Display Tue, 16 Apr, 08:30–12:30 | Hall X4
Posters virtual | Attendance Tue, 16 Apr, 14:00–15:45 (CEST) | Display Tue, 16 Apr, 08:30–18:00 | vHall X4
This groundbreaking session merges the forefront of digital technology and explainable artificial intelligence (XAI) to redefine our approach to natural hazard management and resilience. As natural hazards such as earthquakes, floods, landslides, and wildfires become more frequent and severe, leveraging advanced digital solutions is crucial. This session delves into the synergistic application of remote sensing, machine learning, geographic information systems (GIS), IoT, quantum computing, digital twins, and VR/AR in understanding, predicting, and managing natural disasters.
We place a special emphasis on the role of eXplainable AI (XAI) in demystifying AI-driven predictive models. By exploring algorithms like SHapley Additive exPlanations (SHAP) and Local Interpretable Model-agnostic Explanations (LIME), we aim to make AI predictions in natural hazard assessment transparent and trustworthy. This approach not only enhances the predictive accuracy but also fosters trust and understanding among stakeholders.
Attendees will gain insights into cutting-edge research and practical applications, showcasing how these integrated technologies enable real-time monitoring, early warning systems, and effective communication strategies for disaster management. The session will feature case studies highlighting the successful application of these technologies in diverse geographic regions and hazard scenarios. This interdisciplinary platform is dedicated to advancing our capabilities in mitigating the risks and impacts of natural hazards, paving the way for safer, more resilient communities in the face of increasing environmental challenges.
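The SHAP technique highlighted in this session description can be made concrete with a toy example. The sketch below computes exact Shapley values for a two-feature model by brute force over feature orderings; the model and feature values are hypothetical stand-ins, not taken from any of the contributions in this session (real studies would use the `shap` library against a trained model).

```python
import math
from itertools import permutations

def shapley_values(model, x, baseline):
    """Exact Shapley values: average each feature's marginal contribution
    to the model output over all feature orderings (tractable only for a
    handful of features; the shap library approximates this at scale)."""
    n = len(x)
    phi = [0.0] * n
    for order in permutations(range(n)):
        z = list(baseline)          # start from the reference input
        prev = model(z)
        for i in order:
            z[i] = x[i]             # reveal feature i
            cur = model(z)
            phi[i] += cur - prev    # marginal contribution of feature i
            prev = cur
    k = math.factorial(n)
    return [p / k for p in phi]

# hypothetical susceptibility score driven by two features (e.g. slope, rainfall)
score = lambda z: 2.0 * z[0] + 0.5 * z[1]
phi = shapley_values(score, x=[3.0, 4.0], baseline=[0.0, 0.0])
```

For this additive toy model the attributions are simply each term's contribution (6.0 and 2.0), and they always sum to the difference between the prediction and the baseline prediction, which is the additivity property that makes SHAP summaries interpretable to stakeholders.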

Orals: Tue, 16 Apr | Room 0.15

Chairpersons: Paraskevas Tsangaratos, Raffaele Albano, Elena Xoplaki
08:30–08:35
08:35–08:45 | EGU24-16 | ECS | Virtual presentation
Jinjoo Kim and Bapon Shm Fakhruddin

The escalating threats of climate change, compounded by seismicity along the ring of fire, pose significant challenges to the Pacific Island Countries (PICs), making them particularly susceptible to the impacts of natural hazards. This commentary explores the potential of Artificial Intelligence (AI) and satellite technology in enhancing resilience, focusing on their application in early warning systems and response/recovery for these vulnerable regions. The integration of these digital technologies can revolutionize the way PICs predict, respond to, and recover from climate- and seismic-induced catastrophes, thereby strengthening their resilience. It also discusses the future prospects for AI and satellite technology in PICs and concludes by highlighting the importance of international cooperation to ensure that PICs can benefit from these technologies.

How to cite: Kim, J. and Shm Fakhruddin, B.: Empowering Pacific Island Countries against Climate Change: The Role of AI and Satellite Technology, EGU General Assembly 2024, Vienna, Austria, 14–19 Apr 2024, EGU24-16, https://doi.org/10.5194/egusphere-egu24-16, 2024.

08:45–08:55 | EGU24-1408 | ECS | Highlight | Virtual presentation
Ronnen Avny and Menachem Friedman

The present article delves into the necessity of employing artificial intelligence (AI) in locating individuals trapped during natural disasters such as earthquakes and floods. By utilizing unmanned equipment for reconnaissance and support tasks during search and rescue missions, lives can be saved, and the process expedited. Natural disasters have resulted in significant financial losses and loss of human lives, making it imperative to develop efficient and effective methods for rescue operations. The article emphasizes the benefits of using Trapped Victims Location (TVL) systems, including improved response times, increased accuracy, enhanced situational awareness, and improved safety for first responders. Furthermore, the article discusses the current TVL technologies available, such as visual cameras, acoustic sensors, thermal imaging cameras, radar sensors, GPRS, Cellular receivers, and more. The article also highlights the operational gaps within first responders' systems for locating trapped victims and discusses the specific operational needs for various scenarios. This work serves as a basis for further scientific and engineering projects that can overcome existing gaps and enhance the operational process of locating victims during emergencies, significantly improving accuracy and the likelihood of locating live individuals while expediting the entire procedure.

How to cite: Avny, R. and Friedman, M.: The Need for Utilizing AI in Locating Trapped Victims Following Earthquakes and Floods, EGU General Assembly 2024, Vienna, Austria, 14–19 Apr 2024, EGU24-1408, https://doi.org/10.5194/egusphere-egu24-1408, 2024.

08:55–09:05 | EGU24-9063 | ECS | On-site presentation
Mohit Mohanty and Vaibhav Tripathi

The world has experienced the profound and devastating consequences of floods on human life, prompting a shift from mere academic examination to a critical socio-political imperative. To initiate effective flood risk management, many nations are working on user-friendly tools to identify flood-prone areas across extensive watersheds. Recently, Geomorphic Flood Descriptors (GFDs), which rely on the characteristics of the river drainage and are computationally less demanding, have been used as an efficient alternative to complex hydraulic models. However, validating the flood inundation maps from GFDs remains a major challenge, especially for ungauged watersheds, which limit the adoption of data-intensive hydraulic modeling. In addition, as weather patterns and climate variations induce significant heterogeneity in flood patterns over large watersheds, error-free benchmark maps are needed to validate the GFDs. The present study explores the suitability of Ensemble Machine Learning (ML) models to represent flooding at high resolution over large ungauged watersheds, thus bridging a major research gap: validating GFD-derived flood maps against ground truth in ungauged basins. A suite of about 25 flood-influencing factors incorporating geomorphological, climatological, and soil parameters, such as the Geomorphic Flood Index (GFI), Topographic Wetness Index (TWI), Height Above the Nearest Drainage (HAND), slope, Stream Power Index (SPI), rainfall, soil type, and horizontal distance from the stream, was derived from a high-resolution DEM (CartoDEM, ~30 m resolution). Two prominent tree-based machine learning (ML) techniques, Random Forest (RF) and Extreme Gradient Boosting (XGBoost), were employed to simulate flood inundation at a fine scale of 30 m in the severely flood-prone Mahanadi basin.
An ensemble of linear, random forest, and support vector machine models was further tested for geographical extrapolation, quantifying the flood hazard in an ungauged basin where the tree-based models lagged. These ML models were trained using a flood inundation map derived from LISFLOOD-FP with the ERA5 reanalysis dataset. The performance of the GFD-derived flood map is tested against the LISFLOOD-FP flood map through a set of performance statistics. The models were evaluated using the Area Under the receiver operating characteristic curve (AUC), kappa coefficient, precision, recall, and F1 score, while RMSE and KGE were used for the regression models. The opaque nature of the ML models was also addressed using SHAP values to quantify the degree of influence of each GFD on flood depth. The ongoing research also motivates the development of a global flood inundation atlas using RCMs, which can be used to compare and validate inundation over large regions through geomorphic analysis. Any uncertainty in flood inundation estimates may be strongly amplified when quantifying flood risk, including its vulnerability and exposure dimensions.
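The classification metrics named in this abstract have precise, dependency-free definitions. The sketch below illustrates AUC (via pairwise ranking) and F1 on hypothetical labels and scores; production work would typically use `sklearn.metrics` instead.

```python
def auc(labels, scores):
    """AUC = probability that a random positive outscores a random
    negative (ties count half); equals the area under the ROC curve."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def f1(labels, preds):
    """Harmonic mean of precision and recall for the positive class."""
    tp = sum(y == 1 and p == 1 for y, p in zip(labels, preds))
    fp = sum(y == 0 and p == 1 for y, p in zip(labels, preds))
    fn = sum(y == 1 and p == 0 for y, p in zip(labels, preds))
    precision, recall = tp / (tp + fp), tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# hypothetical flooded (1) / dry (0) cells with model scores
y_true = [1, 1, 0, 0]
roc_auc = auc(y_true, [0.9, 0.4, 0.6, 0.1])  # 3 of 4 pos/neg pairs ranked correctly
score_f1 = f1(y_true, [1, 0, 0, 1])
```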

Keywords: Flood hazard, Geomorphic Flood Descriptors, LISFLOOD-FP, Machine Learning, SHAP

How to cite: Mohanty, M. and Tripathi, V.: Can Catchment attributes coupled with an Ensemble of Machine Learning improve Flood Hazard mapping over large data-scarce catchments?, EGU General Assembly 2024, Vienna, Austria, 14–19 Apr 2024, EGU24-9063, https://doi.org/10.5194/egusphere-egu24-9063, 2024.

09:05–09:15 | EGU24-12368 | On-site presentation
Nikolaos S. Bartsotas, Stella Girtsou, Alexis Apostolakis, Themistocles Herekakis, and Charalampos Kontoes

In a changing climate, the growing frequency and intensity of wildfires require innovative services to efficiently mitigate their catastrophic socioeconomic threat. Under the framework of the MedEWSa project, we capitalise on the reliability of the FireHub platform to further enhance its capabilities and features along the full spectrum of pre-event to post-event time scales, covering: (i) prevention and preparedness, (ii) detection and response, as well as (iii) restoration and induced cascading effects.

During the pre-event stage, fire risk over the Attica Region is mapped daily at five risk levels on a detailed 500 m grid through a combination of high-resolution numerical weather predictions and advanced ML models that utilize the historic wildfire record from 2010 onwards, a number of associated atmospheric parameters (temperature, wind speed and direction, precipitation, dew point), and additional datasets (DEM, land use / land cover). During the event, continuous monitoring is provided through MSG/SEVIRI image acquisitions every 5 minutes from NOA's in-house antenna, while spatiotemporal fire-spread information is simulated through dynamic modelling of the evolving fire. This feature is currently being further developed to perform "hot" starts during an incident, re-estimating the spread from new hotspot retrievals from VIIRS imagery. Finally, the post-event burnt-scar mapping procedure is being automated to provide rapid footprints of the affected areas by utilising MODIS, VIIRS and Sentinel imagery, and to examine potential cascading effects through hazard assessment maps for landslides, soil erosion and floods. The whole suite will be hosted on a new, fully responsive user interface that will provide detailed yet straightforward information to support the decision-making of policy makers and public bodies.

How to cite: Bartsotas, N. S., Girtsou, S., Apostolakis, A., Herekakis, T., and Kontoes, C.: The deployment of an integrated suite for wildfire prediction, near real-time fire monitoring and post-event mapping over Attica region, Greece., EGU General Assembly 2024, Vienna, Austria, 14–19 Apr 2024, EGU24-12368, https://doi.org/10.5194/egusphere-egu24-12368, 2024.

09:15–09:25 | EGU24-22176 | On-site presentation
Stella Girtsou, Alexis Apostolakis, Konstantinos Alexis, Mariza Kaskara, Giorgos Giannopoulos, and Charalampos Kontoes

Next-day wildfire prediction is a critical research problem with significant implications for the environment, society, and economy. This study addresses the challenges associated with accurately predicting fire occurrences and presents a machine learning methodology designed to achieve high sensitivity and specificity in predicting wildfires at a country-wide scale with high spatial granularity. The unique aspects of the problem, including extreme data imbalance, massive scale, heterogeneity, and absence of fire, are thoroughly examined.

The proposed methodology focuses on three key components:

  • Feature Set Enhancement: An extended set of fire driving factors, encompassing topography, meteorology, Earth Observation data, and historical fire occurrence information, is utilized. This comprehensive feature set provides a holistic view of the factors influencing fire risk.
  • State-of-the-Art Classification Algorithms: A set of well-established classification algorithms, including Random Forest, Extremely Randomized Trees, XGBoost, and shallow Neural Networks, is employed for benchmarking. These algorithms are carefully tuned and optimized to strike a balance between sensitivity and specificity. Furthermore, state-of-the-art Deep Learning methodologies such as Semantic Segmentation and Metric Learning are employed and tuned for this specific task.
  • Effective Cross-Validation and Model Selection: Two alternative cross-validation schemes and custom validation measures are introduced to ensure optimal training of classification models. This allows for the selection of diverse models based on the desired trade-off between sensitivity and specificity.

The paper addresses specific challenges, such as extreme data imbalance, massive scale of data, heterogeneity, and absence of fire. The scale of the dataset, with over 830 million instances covering a 500m grid cell resolution for the entire Greek territory, necessitates careful undersampling for model training. Heterogeneity and concept drifts in different months are acknowledged, and the absence of fire instances is discussed in the context of unpredictable factors.

The study explores pitfalls, best practices, and directions for further investigation, providing valuable insights into the complexities of next-day wildfire prediction. The impact of class_weights hyperparameter in compensating for data imbalance is highlighted, emphasizing its significance in cost-sensitive learning.
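The class_weights mechanism referred to above can be illustrated with the common "balanced" heuristic, in which each class is weighted inversely to its frequency so that rare fire events are not drowned out by the non-fire majority. The labels below are hypothetical; the formula matches the convention behind scikit-learn's `class_weight='balanced'`.

```python
from collections import Counter

def balanced_class_weights(labels):
    """'balanced' heuristic w_c = n_samples / (n_classes * n_c):
    rare classes receive proportionally larger weights in a
    cost-sensitive loss."""
    counts = Counter(labels)
    n, k = len(labels), len(counts)
    return {c: n / (k * nc) for c, nc in counts.items()}

# hypothetical, heavily imbalanced no-fire (0) / fire (1) labels
labels = [0] * 99 + [1]
weights = balanced_class_weights(labels)
# the single fire instance is weighted 99x more than each no-fire instance
```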

In conclusion, the proposed machine learning methodology demonstrates effectiveness and efficiency in next-day fire prediction, aligning with real-world fire prediction system requirements. Our proposed methods achieve adequately high effectiveness scores (sensitivity > 90%, specificity > 80%) and are realized within a pre-operational environment that is continuously assessed under real-world conditions and improved based on feedback from the Greek Fire Service. The study contributes insights that can guide future research in addressing the challenges associated with wildfire prediction, paving the way for more accurate and reliable models in the field.

Acknowledgement: "This work has been supported by the national research project PREFERRED, which is co-funded by Greece and the European Union through the Regional Operational Programme of Attiki, under the call "Research and Innovation Synergies in the Region of Attica” (Project code: ΑΤΤΡ4-0340489)"

How to cite: Girtsou, S., Apostolakis, A., Alexis, K., Kaskara, M., Giannopoulos, G., and Kontoes, C.: Machine Learning Approach for Next-Day Wildfire Prediction: Challenges, Solutions, and Insights, EGU General Assembly 2024, Vienna, Austria, 14–19 Apr 2024, EGU24-22176, https://doi.org/10.5194/egusphere-egu24-22176, 2024.

09:25–09:35 | EGU24-15738 | ECS | Highlight | Virtual presentation
Saman Ghaffarian, Fakhereh Alidoost, Umut Lagap, Pranav Chandramouli, Yifat Dzigan, Meiert Grootes, Fatemeh Jalayer, and Ilan Kelman

In today's dynamic landscape of social change and technological advancement, the utilization of innovative solutions for disaster early warning systems (and for other forms of warning) has become paramount. This study explores the incorporation of Digital Twins (DT), dynamic digital replicas of physical entities, into disaster warning. Drawing from insights obtained through a comprehensive literature review and perspectives gleaned from a workshop, we investigate the technical challenges and needs of the research communities engaged in developing DTs for disaster risk management. Additionally, we propose a novel framework for employing DTs in early (and beyond) warning systems.

The implementation of DTs for early warning involves several intricacies and challenges. For instance, achieving seamless data fusion is crucial for enhancing the accuracy and timeliness of early warnings. However, the real-time integration of diverse and large data sources, including geospatial data, environmental sensors, social media feeds, and demographic and census data, is not a straightforward task. Another intricacy involves the need for robust predictive modelling within the DT framework. Overcoming this challenge requires the development of dynamic models that can adapt to evolving disaster scenarios. Machine Learning plays a pivotal role in this context, enabling the DT to continuously learn and improve its predictive capabilities. Privacy concerns and ethical considerations are paramount in the use of DTs for early warning, especially when leveraging data from diverse sources, where trust and credibility must be ensured. Solutions include the development of privacy-preserving methods and transparent communication strategies to gain public trust and ensure responsible model development and data usage. Furthermore, user interaction and community involvement are essential aspects of a successful DT-based early warning system. Tailoring communication strategies to diverse audiences and fostering community engagement through user-friendly interfaces contribute to the effectiveness of early warnings.

Accordingly, we propose solutions and strategies for addressing these challenges. For instance, leveraging edge computing capabilities for real-time data processing, integrating explainable artificial intelligence (AI) techniques to enhance model interpretability and transparency, and adopting decentralized data governance frameworks like Blockchain address key challenges in DT implementation for early warning systems.

This study provides valuable insights into the current state of DT integration for disaster early warning, highlighting intricacies and offering examples of solutions. By understanding the challenges and proposing a new integration framework, we pave the way for the realization of the full potential of Digital Twins in advancing disaster resilience, early warning capabilities, and contributing to the United Nations’ initiative ‘Early Warnings for All’.

How to cite: Ghaffarian, S., Alidoost, F., Lagap, U., Chandramouli, P., Dzigan, Y., Grootes, M., Jalayer, F., and Kelman, I.: Digital Twins for Early Warning Systems: Intricacies and Solutions, EGU General Assembly 2024, Vienna, Austria, 14–19 Apr 2024, EGU24-15738, https://doi.org/10.5194/egusphere-egu24-15738, 2024.

09:35–09:45 | EGU24-16528 | ECS | On-site presentation
Ziming Wang and Ce Zhang

As global extreme weather events become more frequent and urbanization accelerates, the probability of urban flooding has significantly increased, posing a grave threat to both property and lives. Creating accurate flood maps is a critical component of effective emergency management for urban floods. However, current research primarily focuses on urban flood extent, with little consideration given to flood type. Different types of floods often carry varying water components and sediments, necessitating the identification of flood types during mapping to provide targeted relief. This paper proposes a method using multiple Convolutional Neural Networks (CNNs) that combines U-Net and ResNet architectures for urban flood extent extraction and type classification. The proposed method achieved 97.1% accuracy in flood extent extraction and 91% accuracy in flood type classification, demonstrating its accuracy in urban flood mapping. Furthermore, the research was validated using a global dataset covering six continents and 20 countries, encompassing samples with diverse dimensions and geographical features, showcasing the robustness and practicality of the model across regions.

Keywords: Urban flood mapping, Flood type, Deep learning, CNN, Classification

How to cite: Wang, Z. and Zhang, C.: Urban Flood Extent Extraction and Type Recognition Based on Multiple Convolutional Neural Networks, EGU General Assembly 2024, Vienna, Austria, 14–19 Apr 2024, EGU24-16528, https://doi.org/10.5194/egusphere-egu24-16528, 2024.

09:45–09:55 | EGU24-7006 | ECS | Virtual presentation
Xia Zhao and Wei Chen

Xiaojin County, Sichuan Province, China was selected as the study area of this paper, and twelve conditioning factors were determined from a literature review. The spatial correlation between landslides and the conditioning factors was analyzed using the weights-of-evidence (WoE) model, and the landslide susceptibility of Xiaojin County was predicted. Landslide susceptibility in this region was mainly assessed with a WoE-based random forest (RF) model. The radial basis function network (RBFNetwork) model was also applied to map landslide susceptibility with the identical datasets. Finally, the landslide susceptibility maps were produced, and the comprehensive performance of the three models was quantitatively evaluated and compared using receiver operating characteristic (ROC) curves and area under the curve (AUC) values. The results show that all three models are suitable for landslide susceptibility evaluation in the study area, and that the WoE model performs better than the RF and RBFNetwork models. More concretely, the goodness-of-fit values of the WoE, RF and RBFNetwork models on the training dataset are 0.899, 0.880 and 0.866, respectively. In terms of prediction accuracy, the AUC values are 0.892, 0.874 and 0.863, respectively. Additionally, mean decrease accuracy (MDA) and mean decrease Gini (MDG) are used to quantify the importance of the landslide conditioning factors. Elevation, soil, distance to roads and distance to rivers are considered the most important conditioning factors in landslide susceptibility modeling. Consequently, the findings of this paper provide a reference for the development and exploitation of land resources in Xiaojin County.
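The weights-of-evidence model used here reduces, per factor class, to two log-likelihood ratios. The sketch below computes W+, W-, and their contrast from hypothetical cell counts; the numbers are illustrative only, not taken from the Xiaojin County dataset.

```python
import math

def weights_of_evidence(n_class_slide, n_slide, n_class, n_total):
    """W+ = ln[P(class | landslide) / P(class | no landslide)];
    W- is the same ratio for cells outside the class. Their difference
    (the contrast C) summarizes the class's association with landslides."""
    n_noslide = n_total - n_slide
    # landslide / non-landslide cells inside the class
    w_plus = math.log((n_class_slide / n_slide) /
                      ((n_class - n_class_slide) / n_noslide))
    # landslide / non-landslide cells outside the class
    out_slide = n_slide - n_class_slide
    out_noslide = n_total - n_class - out_slide
    w_minus = math.log((out_slide / n_slide) / (out_noslide / n_noslide))
    return w_plus, w_minus, w_plus - w_minus

# hypothetical: 1000 cells, 100 with landslides; one slope class covers
# 200 cells, 40 of which contain landslides
wp, wm, contrast = weights_of_evidence(40, 100, 200, 1000)
# wp > 0 and wm < 0: the class is positively associated with landslides
```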

How to cite: Zhao, X. and Chen, W.: Landslide susceptibility modeling using data-driven weights-of-evidence based random forest and radial basis function network, EGU General Assembly 2024, Vienna, Austria, 14–19 Apr 2024, EGU24-7006, https://doi.org/10.5194/egusphere-egu24-7006, 2024.

Posters on site: Tue, 16 Apr, 10:45–12:30 | Hall X4

Display time: Tue, 16 Apr 08:30–Tue, 16 Apr 12:30
Chairpersons: Ioanna Ilia, Ivanka Pelivan
X4.105 | EGU24-22456
Joel De Plaen, Elco Koks, and Philip Ward

Detailed information on the exposure of critical infrastructure (CI), such as power assets, is a necessity for accurate risk assessment of natural and human-made hazards. Currently, large-scale risk assessment mostly relies on Volunteered Geographic Information to establish the exposure of CI, which limits reliability due to inherent information gaps. Deep Learning offers the possibility to fill such gaps through the extraction of CI from remote sensing imagery.

Here we present a comprehensive high-resolution geospatial database encompassing key elements of the power grid, namely power towers, electrical substations, and power plants. The dataset is derived from a workflow using Worldview-2 0.4-meter resolution satellite imagery for the most populated urban areas along the European coastlines.

The method extracts infrastructure location from OpenStreetMap to create annotations. Subsequently, the satellite imagery raster and annotations undergo processing to constitute training data. Data augmentation is employed on the raster tiles to enhance the training dataset. The method then trains a Mask R-CNN model to automate the detection of CI. Additionally, saliency maps are generated to validate the proper functioning of the model.

Performance metrics, specifically mean Average Precision and F-scores of the tile classification, are presented to evaluate the model's ability to correctly identify and classify power infrastructure. Furthermore, to assess the completeness of the geospatial database, a comparative analysis is conducted with OpenStreetMap on “unseen” locations. This comparative study sheds light on potential gaps and discrepancies, offering insights into the overall reliability and comprehensiveness of the dataset.
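Detection metrics such as mean Average Precision hinge on an intersection-over-union criterion for matching predicted and reference objects. A minimal sketch, with hypothetical box coordinates:

```python
def box_iou(a, b):
    """IoU of two axis-aligned boxes (x1, y1, x2, y2): the overlap
    criterion (commonly IoU >= 0.5) behind mAP for detectors such as
    Mask R-CNN."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    union = ((a[2] - a[0]) * (a[3] - a[1]) +
             (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union

# two hypothetical power-tower detections overlapping in one corner
iou = box_iou((0, 0, 2, 2), (1, 1, 3, 3))  # intersection 1, union 7
```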

How to cite: De Plaen, J., Koks, E., and Ward, P.: A Coastal European Dataset of Critical Infrastructure:  Leveraging Deep Learning to Enhance Power Infrastructure Exposure Information for Disaster Risk Assessment., EGU General Assembly 2024, Vienna, Austria, 14–19 Apr 2024, EGU24-22456, https://doi.org/10.5194/egusphere-egu24-22456, 2024.

X4.106 | EGU24-20549 | Highlight
Kyriacos Themistocleous and Dante Abate

There is a need for digital twins of cultural heritage sites, especially those affected by natural hazards, for documentation, monitoring and management. This study examines the use of digital twins through the EXCELSIOR and TRIQUETRA projects, employing a 3D digital volumetric representation model and Augmented Reality applications to create a digital twin for monitoring natural hazards in archaeological settings. The EXCELSIOR H2020 Widespread Teaming project (Grant Agreement No. 857510) and the TRIQUETRA Horizon Europe project (Grant Agreement No. 101094818) will study the effects of climate change and natural hazards on cultural heritage and their remediation using state-of-the-art techniques. Through the TRIQUETRA project, Choirokoitia, Cyprus is used as one of the pilot studies applying these techniques. Choirokoitia is a UNESCO World Heritage Site and one of the best-preserved Neolithic sites in the Mediterranean. The project will also examine the potential risk of rockfall at the Choirokoitia site, as the topology of the site is vulnerable to movements resulting from extreme climate change as well as from daily/seasonal stressing actions. Rockfall poses a significant danger to visitor safety as well as damage to cultural heritage sites.

Digital twins provide a dynamic visualization of the site and can also be used to monitor any changes resulting from natural hazards. A digital twin model can also be shared with visitors in order to provide an alternative approach and a visualization experience for viewing the site.

How to cite: Themistocleous, K. and Abate, D.: The Use of Digital Twins for the Management of Cultural Heritage Sites, EGU General Assembly 2024, Vienna, Austria, 14–19 Apr 2024, EGU24-20549, https://doi.org/10.5194/egusphere-egu24-20549, 2024.

X4.107 | EGU24-16620 | ECS
Margit Kurka

The presented study implements existing deep learning (DL) algorithms, an artificial intelligence approach, to extract geotechnical properties of unconsolidated material from photographs. The ultimate goal of this approach lies in facilitating, aiding and simplifying the collection of often missing data about unconsolidated bedrock cover relevant to regional landslide susceptibility studies. Current research aims at answering whether existing DL algorithms (e.g. Buscombe's (2020) Sedinet algorithm), developed for granular, often well-sorted sediments, can also perform well with poorly-sorted sediments. It also asks which geotechnical properties, as described in soil classification standards such as ISO 14688-1:2017-12 (EU) and ASTM D2487-17e1 (USA), can be directly or indirectly obtained through DL analysis of photographs, and how well. The study approaches these questions by initially building a DL model based on several thousand photographs of 240 samples of unconsolidated material plus their several hundred laboratory sieve residue samples. In a previous project, the 240 samples of mostly alluvial, colluvial, eolian and glacial sediments had been collected from different geological environments within the state of Styria, Austria. Grain size distribution (GSD) and other soil classification parameters, obtained through field and laboratory testing, exist for these samples and have been provided courtesy of the Land Steiermark (State of Styria). In the current study this knowledge of the samples' geotechnical properties allows attribution of this information to each of the several thousand photographs, which were taken with three different cameras under controlled conditions. The DL model uses several hundred of these photographs with their associated attributes as training and test data to build a prediction model.
The performance of the derived model is validated with selected photographs not yet used in training and testing. The results of this approach allow a discussion of applicability, emerging limitations and possible improvements with regard to predicting geotechnical parameters, particularly GSD, for unconsolidated material using existing DL algorithms. As a consequence, the results and conclusions also warrant an outlook on whether and how the method can aid and simplify field mapping and the collection of relevant input data for regional landslide susceptibility studies.

How to cite: Kurka, M.: Performance of deep learning algorithms on obtaining geotechnical properties of unconsolidated material to improve input data for regional landslide susceptibility studies, EGU General Assembly 2024, Vienna, Austria, 14–19 Apr 2024, EGU24-16620, https://doi.org/10.5194/egusphere-egu24-16620, 2024.

X4.108 | EGU24-18310 | ECS | Highlight
Yulia Grushetskaya, Mike Sips, Reyko Schachtschneider, and Mohammadmehdi Saberioon

In geosciences, machine learning (ML) has become essential for solving complex problems, such as predicting natural disasters or analysing the impact of extreme temperatures on mortality rates. However, the integration of ML into geoscience scenarios faces significant challenges, especially in explaining the influence of hyperparameters (HP) on model performance and model behaviour in specific scenarios. The Explainable Artificial Intelligence (XAI) system ClarifAI developed at GFZ addresses these challenges by combining XAI concepts with interactive visualisation. 

ClarifAI currently provides users with two interactive XAI methods: HyperParameter Explorer (HPExplorer) and Hypothetical Scenario Explorer (HSExplorer). 

HPExplorer allows interactive exploration of the HP space by computing an interactive tour through stable regions of the HP space. We define a stable region as a subspace of the HP space in which ML models show similar performance. We also employ HP importance analysis to deepen the understanding of the impact of individual HPs on model performance. The Hypothetical Scenario Explorer (HSExplorer) helps users explore model behaviour by allowing them to test how changes in input data affect the model's response.
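The notion of a stable region can be sketched for the one-dimensional case: scan a grid of HP settings and cut a new region whenever the spread of scores within the current run exceeds a tolerance. This is only a toy stand-in for ClarifAI's actual method, which is not detailed in this abstract; the scores and tolerance below are hypothetical.

```python
def stable_regions(scores, tol):
    """Split a 1-D sweep of model scores into maximal contiguous runs
    whose max-min spread stays within tol ("similar performance")."""
    regions, start = [], 0
    lo = hi = scores[0]
    for i in range(1, len(scores)):
        lo, hi = min(lo, scores[i]), max(hi, scores[i])
        if hi - lo > tol:               # spread broken: close the run
            regions.append((start, i - 1))
            start, lo, hi = i, scores[i], scores[i]
    regions.append((start, len(scores) - 1))
    return regions

# hypothetical validation scores along a learning-rate grid
runs = stable_regions([0.80, 0.81, 0.79, 0.60, 0.61], tol=0.05)
# two stable regions: indices 0-2 (~0.80) and 3-4 (~0.60)
```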

In our presentation, we will demonstrate how HSExplorer helps users understand the impact of individual HPs on model performance. As ClarifAI is an important research area in our lab, we are interested in discussing relevant XAI challenges with the XAI community in ESSI.

 Our goal is to create a comprehensive set of tools that explain the mechanics of ML models and allow practitioners to apply ML to a wide range of geoscience applications.

How to cite: Grushetskaya, Y., Sips, M., Schachtschneider, R., and Saberioon, M.: ClarifAI: Interactive XAI Methods for Geosciences, EGU General Assembly 2024, Vienna, Austria, 14–19 Apr 2024, EGU24-18310, https://doi.org/10.5194/egusphere-egu24-18310, 2024.

Posters virtual: Tue, 16 Apr, 14:00–15:45 | vHall X4

Display time: Tue, 16 Apr 08:30–Tue, 16 Apr 18:00
Chairpersons: Paraskevas Tsangaratos, Raffaele Albano, Haoyuan Hong
vX4.6 | EGU24-9683 | ECS
Maria Sotiria Frousiou, Ioanna Ilia, Dimitrios Kasmas, and Ioanna Petropoulou

Landslide phenomena, acknowledged as significant geohazards affecting both human infrastructure and the natural environment, have been the subject of intensive research aimed at pinpointing areas at risk of instability. This task involves the complex modelling of variables related to landslides, which requires both knowledge-based and data-driven methodologies. The challenge is heightened by the often intricate and obscure processes that trigger landslides, be they natural or anthropogenic. Over the past two decades, the application of artificial intelligence, specifically machine learning algorithms, has brought a transformative approach to landslide susceptibility evaluations. These advanced methodologies, encompassing fuzzy logic, decision trees, artificial neural networks, ensemble methods, and evolutionary algorithms, have demonstrated notable accuracy and dependability. A significant recent development in this field is the incorporation of eXplainable AI (XAI) techniques into landslide susceptibility models. XAI tools, such as SHapley Additive exPlanations (SHAP) and Local Interpretable Model-agnostic Explanations (LIME), offer a window into the previously opaque decision-making processes of AI models, thus demystifying the "black box" aspect of conventional AI systems.

The primary aim of this study was to employ the XGBoost algorithm and integrate SHAP methods for an in-depth landslide susceptibility assessment. The methodology was divided into five distinct phases: (i) the creation of the inventory map, (ii) the selection, classification, and weighting of landslide-influencing variables, (iii) conducting multicollinearity analysis, (iv) applying and testing the developed model, and (v) evaluating the predictive performance of various models and analyzing the results.

The computational work was performed in the programming languages R and Python, while ArcGIS 10.5 was instrumental in compiling data and producing detailed landslide susceptibility maps. The approach was tested in the North-western Peloponnese region of Greece, known for its frequent landslide occurrences. Nine specific variables were considered: elevation, slope angle, aspect, plan and profile curvature, distance to faults, distance to river networks, lithology and hydrolithology cover; together with the landslide locations, these contributed to the generation of the training and test datasets. The Frequency Ratio method was applied to discern the correlation among these variables and assign weight values to each class. Multicollinearity analysis further helped in identifying any collinearity among the variables.
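As a concrete illustration of the weighting step, the Frequency Ratio of a class is the share of landslide pixels falling in that class divided by the class's share of the study area; the pixel counts below are invented for the sketch:

```python
# Frequency Ratio sketch for one variable (e.g. slope-angle classes).
# FR > 1 marks classes over-represented among landslide pixels.
import numpy as np

class_pixels     = np.array([4000, 3000, 2000, 1000])   # pixels per class
landslide_pixels = np.array([  40,   90,  120,   50])   # landslide pixels per class

fr = (landslide_pixels / landslide_pixels.sum()) / (class_pixels / class_pixels.sum())
print(np.round(fr, 2))
```

Each class's FR value then serves as its weight when the thematic layers are summed into a susceptibility index.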

SHAP values were utilized to rank features according to their importance, offering a transparent view of variable contributions. The evaluation phase involved calculating the model's predictive power using metrics such as classification accuracy, sensitivity, specificity, and the area under the success and predictive rate curves (AUC). This comprehensive approach, combining XGBoost and SHAP methods, presents a refined model for understanding and predicting landslide susceptibility, aiming for more accurate and interpretable hazard assessments. The results highlight the high performance of the XGBoost algorithm in terms of accuracy, sensitivity, specificity and AUC values. The SHAP method indicates that slope angle was the most important feature in this model for landslide susceptibility. Other features such as elevation, distance to river network, and lithology cover also contribute to the model's predictions, though to a lesser extent and with more mixed effects. Aspect, profile curvature, plan curvature, distance to fault, and hydrolithology cover appear to have a more moderate or minimal impact on the model's predictions.
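The SHAP ranking rests on Shapley values. For a toy additive model the exact computation can be written out directly; the feature names and coefficients here are hypothetical, and a real study would use the shap library with the trained model rather than this enumeration:

```python
# Exact Shapley values for a toy 3-feature additive "susceptibility" model.
from itertools import combinations
from math import factorial

features = ["slope_angle", "elevation", "lithology"]

def model(present, x, baseline):
    # Features absent from the coalition are replaced by baseline values.
    vals = {f: (x[f] if f in present else baseline[f]) for f in features}
    return 0.6 * vals["slope_angle"] + 0.3 * vals["elevation"] + 0.1 * vals["lithology"]

x = {"slope_angle": 1.0, "elevation": 0.5, "lithology": 0.2}
baseline = {f: 0.0 for f in features}

def shapley(f):
    # phi_f = sum over coalitions S of |S|!(n-|S|-1)!/n! * marginal contribution.
    n, total = len(features), 0.0
    others = [g for g in features if g != f]
    for k in range(n):
        for S in combinations(others, k):
            w = factorial(k) * factorial(n - k - 1) / factorial(n)
            total += w * (model(set(S) | {f}, x, baseline) - model(set(S), x, baseline))
    return total

phi = {f: shapley(f) for f in features}
ranked = sorted(phi, key=phi.get, reverse=True)
print(ranked)  # slope_angle ranks first under these toy coefficients
```

For an additive model each Shapley value reduces to coefficient times feature value, and the values sum to the difference between the full prediction and the baseline prediction.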

How to cite: Frousiou, M. S., Ilia, I., Kasmas, D., and Petropoulou, I.: Integrating XBoost and SHAP for Enhanced Interpretability in Landslide Susceptibility Assessment: A Case Study in North-western Peloponnese, Greece., EGU General Assembly 2024, Vienna, Austria, 14–19 Apr 2024, EGU24-9683, https://doi.org/10.5194/egusphere-egu24-9683, 2024.

vX4.7
|
EGU24-11739
|
ECS
Aikaterini-Alexandra Chrysafi, Paraskevas Tsangaratos, and Ioanna Ilia

Landslides, triggered by severe rainfall events, pose significant risks to both life and infrastructure. Timely and accurate detection of such landslides is crucial for effective disaster management and mitigation. This study presents an innovative approach combining near real-time remote sensing data with advanced machine learning techniques to rapidly identify landslide occurrences following severe rainfall events, specifically focusing on a recent case in Greece.
Our methodology harnesses the capabilities of pre- and post-event satellite imagery to capture the landscape's transformation due to landslides. We compute remote sensing indices, including the Normalized Difference Vegetation Index (NDVI) and the Normalized Difference Water Index (NDWI), among others, to detect changes indicative of potential landslide areas. This approach leverages the temporal resolution and wide-area coverage of satellite data, enabling a swift and comprehensive assessment immediately after a triggering rainfall event.
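The change-detection step can be sketched with NDVI alone; the band values and the drop threshold below are illustrative assumptions, not the study's calibrated values:

```python
# NDVI change detection: a sharp post-event NDVI drop flags vegetation loss
# consistent with a fresh landslide scar.
import numpy as np

def ndvi(nir, red):
    # Normalized Difference Vegetation Index, guarded against divide-by-zero.
    return (nir - red) / np.maximum(nir + red, 1e-6)

# Toy 3x3 reflectance patches before and after the rainfall event.
nir_pre,  red_pre  = np.full((3, 3), 0.6), np.full((3, 3), 0.1)
nir_post, red_post = nir_pre.copy(), red_pre.copy()
nir_post[1, 1], red_post[1, 1] = 0.2, 0.3   # vegetation stripped at one pixel

delta = ndvi(nir_post, red_post) - ndvi(nir_pre, red_pre)
candidates = delta < -0.3   # assumed threshold for a "strong" NDVI drop
print(int(candidates.sum()))
```

NDWI and the other indices would be combined with the same differencing logic to suppress false positives from, for example, standing water.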
To enhance the accuracy of our detection model and reduce false positives, we incorporate a landslide susceptibility map generated via a Weight of Evidence (WoE) model. This map is based on historical landslide occurrences and helps to exclude areas with very low to low susceptibility, thereby refining our detection process.
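A minimal sketch of the Weight of Evidence positive weight, with invented pixel counts (the study's actual factor classes and statistics are not reproduced here):

```python
# WoE positive weight W+: log-ratio of how often a factor class coincides
# with landslide pixels versus non-landslide pixels.
import math

npix = 10000          # total pixels in study area
nls = 200             # landslide pixels
in_class_ls = 80      # landslide pixels inside the factor class
in_class_all = 1500   # all pixels inside the factor class

w_plus = math.log((in_class_ls / nls) /
                  ((in_class_all - in_class_ls) / (npix - nls)))
print(round(w_plus, 2))  # positive -> class favours landslide occurrence
```

Summing such weights over all factor classes at each cell yields the susceptibility score used to mask out very-low-susceptibility areas.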
Central to our study is the implementation of an eXplainable AI (XAI) framework. This aspect is particularly crucial, as it provides insights into the influence of various landslide-related factors on the model's predictions. The factors considered include elevation, slope angle, aspect, plan and profile curvature, distance to faults and river networks, lithology, and hydrolithology cover. By employing XAI techniques, we unravel the complex interactions between these variables and their relative importance in predicting landslide occurrences. This not only enhances the trustworthiness and transparency of our model but also aids in understanding the underlying geophysical processes leading to landslides.
The model's architecture is built upon advanced machine learning algorithms capable of processing large datasets efficiently. This setup is particularly suited to handle the high-dimensional and multi-temporal nature of remote sensing data. Furthermore, the model's ability to rapidly process and analyze data aligns well with the urgency required in disaster response scenarios.
Our case study in Greece demonstrates the model's efficacy in rapidly identifying landslide-prone areas post-severe rainfall events. The results show a significant improvement over traditional methods in terms of speed and accuracy. Moreover, the inclusion of XAI provides valuable insights for local authorities and disaster management teams, enabling them to make informed decisions for emergency response and long-term land-use planning.
This research contributes to the evolving field of rapid landslide detection by integrating cutting-edge remote sensing technologies with the latest advancements in machine learning and AI interpretability. It offers a novel, efficient, and transparent approach to landslide detection, which is vital for enhancing disaster preparedness and resilience in landslide-prone regions.

How to cite: Chrysafi, A.-A., Tsangaratos, P., and Ilia, I.: Leveraging Near Real-Time Remote Sensing and Explainable AI for Rapid Landslide Detection: A Case Study in Greece, EGU General Assembly 2024, Vienna, Austria, 14–19 Apr 2024, EGU24-11739, https://doi.org/10.5194/egusphere-egu24-11739, 2024.

vX4.8
|
EGU24-16038
|
ECS
Ploutarchos Tzampoglou, Dimitrios Loukidis, Paraskevas Tsangaratos, Aristodemos Anastasiades, Elena Valari, and Konstantinos Karalis

Over the past decades, landslides have significantly affected extensive areas worldwide due to changing environmental conditions and human activities, causing major problems in the built environment and infrastructure and resulting in the loss of human lives and significant financial damages. The island of Cyprus and especially its southwestern part (which constitutes the study area) have experienced the severe impact of landslides due to the unfavorable geological/geotechnical conditions and mountainous geomorphology. According to the data obtained from the Geological Survey Department of Cyprus (GSD), 1842 landslides (active and inactive) of various types have been identified in an area covering 40% (546 km2) of the Paphos District (3.4 landslides per km2).

Knowledge of the location and extent of existing landslides plays a crucial role in landslide susceptibility and hazard assessment. The primary aim of this research is to develop an algorithm for the automatic detection of landslides at regional scale. This is achieved through the application of image recognition technology, utilizing the cascade method on the hillshade of a region as produced by ArcGIS. The database of recorded landslides of the GSD was split into an algorithm training dataset and a validation dataset. The study also explores the effect of the resolution of terrain data, expressed by the size of the grid cells. To comprehensively assess landslides, their morphology is classified into three types: active, dormant, and relict. The use of hillshade instead of a raster image of the elevation map was chosen because the latter usually results in relatively minor color variations between adjacent pixels, thus obscuring the most striking geomorphological features of landslides, namely the main scarp and the enveloping streams.

The results obtained suggest that a hillshade produced using a high-resolution Digital Elevation Model (DEM), i.e. one based on an elevation contour interval of 1 m and a cell size of 1 x 1 m (obtained from the Department of Land and Surveys of the Republic of Cyprus), yields better results for landslides with gentle geomorphology (relict). Nonetheless, analysis based on such a high-resolution DEM requires substantial computational resources and time. On the contrary, landslides associated with steeper geomorphologies (active) exhibited optimal performance with a cell size of 2 x 2 m, achieving success rates of 80% for DEMs based on contour intervals of 1 m and 5 m. In this case, the computational time is significantly reduced. Depending on the specific landslide types investigated in a particular area, the appropriate processing model can be selected, ultimately leading to significant time savings.
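For reference, a common hillshade formulation (the kind behind ArcGIS-style tools, with the usual defaults of azimuth 315° and altitude 45°) can be sketched as follows; this is an illustrative re-implementation, not the tool's exact code:

```python
# Hillshade from a DEM: illumination of each cell by a light source at a
# given compass azimuth and altitude above the horizon.
import numpy as np

def hillshade(dem, cellsize=1.0, azimuth_deg=315.0, altitude_deg=45.0):
    zenith = np.radians(90.0 - altitude_deg)
    azimuth = np.radians(360.0 - azimuth_deg + 90.0)  # compass -> math angle
    dz_dy, dz_dx = np.gradient(dem, cellsize)
    slope = np.arctan(np.hypot(dz_dx, dz_dy))
    aspect = np.arctan2(dz_dy, -dz_dx)
    shaded = (np.cos(zenith) * np.cos(slope)
              + np.sin(zenith) * np.sin(slope) * np.cos(azimuth - aspect))
    return np.clip(255.0 * shaded, 0, 255)

# A uniformly tilted plane shades to a single value everywhere.
dem = np.outer(np.arange(5, dtype=float), np.ones(5))
hs = hillshade(dem)
print(hs.shape)
```

Scarps and stream envelopes stand out in such an image because abrupt slope and aspect changes produce strong local shading contrast, which is what the cascade detector exploits.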

This research was funded by the European Commission (Marie Sklodowska-Curie Actions, Hybland-Society and Enterprise panel, Project No.: 101027880).

How to cite: Tzampoglou, P., Loukidis, D., Tsangaratos, P., Anastasiades, A., Valari, E., and Karalis, K.: Automatic regional identification of active and inactive landslides using satellite image analysis, EGU General Assembly 2024, Vienna, Austria, 14–19 Apr 2024, EGU24-16038, https://doi.org/10.5194/egusphere-egu24-16038, 2024.

vX4.9
|
EGU24-17169
|
Kleoniki Valvi, Constantinos Cartalis, Kostas Philippopoulos, Athina-Kyriaki Zazani, and Ilias Agathangelidis

The aim of the present study is the identification of the prevailing climate hazards (e.g., extreme heat, forest fires, drought, floods) and their changes, in terms of frequency, intensity, and trends, during multiple 30-year climate reference periods in Greece. The analysis involves the identification of climate hazards using a plethora of extreme event indices along with the application of extreme value theory (EVT). Changes in extremes over a period are often examined from two different perspectives, one that detects changes in the frequency of the extremes and the other in their intensity. For this purpose, high-resolution reanalysis data (ERA5-Land) are used, with a horizontal resolution of 0.1° x 0.1°. The sensitivity of diverse regions was determined through the analysis of Earth Observation data and products, alongside the examination of their geomorphological features. In the final stage of the work, all of the above were incorporated using Geographic Information Systems, and GIS tools were developed for the synthesis of the climate hazards. This analysis focuses on understanding how climate change may be impacting Greece and can provide valuable insights for policymakers, researchers, and the general public to adapt to and mitigate the effects of climate hazards on a regional scale.
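A minimal EVT sketch of the return-level idea: fit a Generalized Extreme Value (GEV) distribution to annual maxima and invert it at the desired exceedance probability. The data below are synthetic, not the study's ERA5-Land series:

```python
# Fit a GEV to synthetic annual temperature maxima and estimate the
# 50-year return level (the value exceeded on average once per 50 years).
import numpy as np
from scipy.stats import genextreme

rng = np.random.default_rng(42)
annual_max = 30 + 3 * rng.gumbel(size=60)   # 60 years of synthetic maxima (deg C)

shape, loc, scale = genextreme.fit(annual_max)
rl_50 = genextreme.ppf(1 - 1 / 50, shape, loc=loc, scale=scale)
print(round(float(rl_50), 1))
```

Mapping such return levels cell by cell over the ERA5-Land grid is one way the intensity perspective on extremes can be quantified alongside the frequency-based indices.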

How to cite: Valvi, K., Cartalis, C., Philippopoulos, K., Zazani, A.-K., and Agathangelidis, I.: Utilizing Geographic Information Systems to Identify and Map Climate Hazards in Greece: A Regional Analysis, EGU General Assembly 2024, Vienna, Austria, 14–19 Apr 2024, EGU24-17169, https://doi.org/10.5194/egusphere-egu24-17169, 2024.

vX4.10
|
EGU24-18870
Ploutarchos Tzampoglou, Dimitrios Loukidis, Konstantinos Karalis, Aristodemos Anastasiades, and Paraskevas Tsangaratos

The activation as well as the consequences of landslides are difficult to predict, as they depend on factors characterized by large variability and uncertainties. The aim of this study is to establish a correlation between geological, geotechnical and geomorphological characteristics and the spatial distribution of recorded landslides.

The study area is located in the southwestern (SW) part of the island of Cyprus, covering an area of 552 km2. During the past years, more than 1800 landslides, active and inactive (dormant and relict), have been recorded within this area through detailed mapping based on field observations, rendering the area an ideal test bed. At the beginning of this research study, all recorded landslides were digitized in raster format. Subsequently, the study area was partitioned into 15 x 15 m cells assigned to three classes: no landslides, inactive landslides and active landslides. Additionally, regarding the geological aspect, polygons encompassing 100% rock mass formations within recorded landslides were categorized as rock mass landslides, while the rest were characterized as landslides in argillaceous (soft rock and soil) materials. A series of correlation analyses were conducted using the Random Forest and SHAP (SHapley Additive exPlanations) methods.

Considering the outcomes of the Random Forest method in argillaceous materials, it turns out that the most important factors for both active and inactive landslides are the Plasticity Index (PI) and the clay fraction (CF), followed by the factors associated with the geomorphology and the bedding structure (e.g. slope angle and bedding dip). The ranking results for inactive and active landslides in rock mass show that the most important factor is the Uniaxial Compressive Strength (UCS), followed by the Geological Strength Index (GSI). Furthermore, the orientation (azimuth) difference between slope and bedding dip (dip direction difference) appears to be more important than the slope angle.

Similar ranking results were obtained using the SHAP method for argillaceous materials. Regarding the contribution of each factor in the inactive landslides, it appears that the PI and the slope angle increase proportionally to the possibility of landslide occurrence, while the CF does not exhibit a specific trend. Regarding the dip direction difference, small values contribute more to the occurrence of landslides. The active landslides show a similar picture, but with the CF exhibiting a stronger correlation than in the case of inactive landslides. According to the SHAP analysis for rock mass, the parameters of importance in both inactive and active landslides are UCS and GSI, followed by the slope angle and the dip direction difference.

This research was funded by the European Commission (Marie Sklodowska-Curie Actions, Hybland-Society and Enterprise panel, Project No.: 101027880).

How to cite: Tzampoglou, P., Loukidis, D., Karalis, K., Anastasiades, A., and Tsangaratos, P.: Spatial correlation between landslides and geotechnical factors using Random Forest and SHAP, EGU General Assembly 2024, Vienna, Austria, 14–19 Apr 2024, EGU24-18870, https://doi.org/10.5194/egusphere-egu24-18870, 2024.

vX4.11
|
EGU24-20051
Marios Vlachos, Chrysoula Papathanasiou, Valantis Tsiakos, Georgios Tsimiklis, and Angelos Amditis

Desert ecosystems are particularly vulnerable to global climate change, characterized by increased temperatures, variable precipitation intensity and frequency, and increased atmospheric CO2 levels. Under such conditions, substantial alterations in the structure and functioning of desert ecosystems are expected. This climate shift poses a serious threat to species adapted to deserts, especially endemic plants, which are susceptible to the potential loss of suitable habitats. Further to that, neighboring populated areas are also exposed to adverse conditions characterized by poor air quality, with direct impacts on human health, the economy and the environment overall. To address these challenges, the CiROCCO Project aims to implement a robust yet cost-effective Internet of Things (IoT) system for environmental measurements in harsh desert environments. Such a system not only enhances data accuracy but also enables continuous monitoring, reduces costs, and supports critical research and conservation efforts in the context of climate change and ecosystem challenges. The proposed IoT system primarily relies on a network of distributed low-cost Wireless Sensor Nodes (WSNs) that have the capability to monitor the surrounding environment and measure various crucial meteorological and air quality parameters, including inter alia air and sand/soil temperature, solar radiation, ozone, PM2.5 and PM10, with accuracy comparable to commercial high-end nodes offering similar measurements. Additionally, communication gateways are employed to collect measurements from the distributed WSNs using low-power protocols such as Bluetooth Low Energy (BLE) and LoRaWAN. The collected measurements are then standardized into JSON messages, including the unique identifier of the device, timestamp, and parameter values. Subsequently, the data are transmitted wirelessly to the cloud using the most suitable method based on network connectivity.
If a Wi-Fi network is available in the field, the data are prioritized for transmission through this network. Alternatively, the system utilizes the 4G or 5G network in the area. In cases where none of these networks is accessible, the data are transmitted to the cloud through satellite communications. This method involves an additional satellite device connected to the gateway, to which the formatted messages are loaded through serial communications. The satellite device awaits the next pass of the nanosatellite to upload the measurements. The nanosatellite continues its journey until it passes a base station, at which point the data are downloaded, stored in the base station portal, and made available to third-party applications through the portal API. In conclusion, the scientific approach outlined in this work addresses the pressing challenges of collecting valuable in-situ data for monitoring climatic conditions in hard-to-reach, under-sampled environments. The development of low-cost devices, including WSNs and gateways with IoT capabilities, is crucial for advancing research and conservation efforts in the context of climate change and the unique challenges posed to desert ecosystems.
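The message standardization step described above might look like the following; the field names are assumptions for illustration, and the project's actual schema may differ:

```python
# Wrap raw sensor readings into a standardized JSON payload carrying the
# device identifier, a UTC timestamp, and the parameter values.
import json
from datetime import datetime, timezone

def to_message(device_id, readings):
    """Serialize one batch of readings from a WSN into a JSON message."""
    return json.dumps({
        "device_id": device_id,
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "parameters": readings,
    })

msg = to_message("wsn-001", {"air_temp_c": 41.2, "pm2_5": 18.0, "ozone_ppb": 31.5})
print(msg)
```

A fixed, self-describing payload like this is what lets the gateway hand the same message to Wi-Fi, cellular, or satellite uplinks without per-channel reformatting.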

ACKNOWLEDGEMENTS

This research work is part of the CiROCCO Project. CiROCCO Project is funded by the European Union. Views and opinions expressed are however those of the author(s) only and do not necessarily reflect those of the European Union or REA. Neither the European Union nor the granting authority can be held responsible for them.

How to cite: Vlachos, M., Papathanasiou, C., Tsiakos, V., Tsimiklis, G., and Amditis, A.: Resilient Data Harvesting: A Low-Cost IoT Paradigm for Robust Measurement Collection in Challenging Environments, EGU General Assembly 2024, Vienna, Austria, 14–19 Apr 2024, EGU24-20051, https://doi.org/10.5194/egusphere-egu24-20051, 2024.

vX4.12
|
EGU24-22400
Nandini Menon, Salah Parastoo, and Raul Adriaensen

The exponential increase in flood intensity, causing loss of life and economic and structural damage to the connected environment, calls for strategic rescue and response solutions for risk mitigation. This study focuses on flood mapping using satellite imagery combined with machine learning (ML) and deep learning (DL) techniques. Remote sensing and Geographic Information Systems (GIS) serve as vital tools in this process, enabling the effective utilization of satellite data.
While academics consistently contribute novel flood mapping approaches, a research gap remains regarding the comparative performance of these ML and DL techniques, which this paper aims to address. This comparison is crucial as it highlights the strengths and limitations of each method, contributing valuable insights to the literature on flood risk management. The study focuses on the Ernakulam District of Kerala, chosen due to its frequent flooding and the availability of diverse datasets.
The methodology involves the use of satellite imagery for flood analysis, employing an array of techniques: a thresholding method recommended by the UN-SPIDER programme of the Office for Outer Space Affairs; statistical ML methods including Random Forest, Support Vector Classification (SVC) and Real AdaBoost; and a deep learning semantic segmentation method, UNet. Modelled in the JavaScript and Python languages, the models and packages are completely reusable. The dataset comprises pre- and post-flood satellite images: the thresholding method uses Sentinel-1 SAR images, while the ML and DL methods use Sentinel-2 MSI Level-1C imagery and a digital elevation model from SRTM for feature engineering, processed to identify flood-affected areas. The data are normalized and cleaned to account for cloud cover and missing data before the analysis. Alongside, we sourced labelled flood data from the Kerala State Disaster Management Authority (KSDMA) and filtered and rasterized it in QGIS.
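The UN-SPIDER-style step reduces to ratioing and thresholding SAR backscatter between the two dates; the arrays and threshold value below are illustrative, not the recommended practice's calibrated settings:

```python
# SAR change detection: open water strongly darkens the backscatter, so a
# low post/pre ratio flags newly inundated pixels.
import numpy as np

before = np.array([[0.05, 0.06], [0.04, 0.05]])   # pre-flood VH backscatter
after  = np.array([[0.05, 0.01], [0.04, 0.05]])   # post-flood: one dark (water) pixel

ratio = after / before
flooded = ratio < 0.5   # assumed threshold for "newly flooded"
print(int(flooded.sum()))
```

The ML and DL methods replace this single global threshold with per-pixel classification over the Sentinel-2 and SRTM features, which is where the accuracy differences reported below arise.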
The results emphasize the varied effectiveness of these methods, with Random Forest outperforming the others at a 96.61% accuracy rate, while the UNet-Linear model lags at 75% accuracy, indicating the significant impact of hyperparameter tuning and dataset size on model performance. This comparative analysis not only delineates the strengths and weaknesses of traditional and advanced techniques but also sets a precedent for future studies to build upon in understanding flood risk management and rapid response strategies.

How to cite: Menon, N., Parastoo, S., and Adriaensen, R.: Flood Inundation Mapping of the 2018 Kerala Floods: A Comparative Study of Traditional Remote Sensing, Machine Learning, and Deep Learning Methods, EGU General Assembly 2024, Vienna, Austria, 14–19 Apr 2024, EGU24-22400, https://doi.org/10.5194/egusphere-egu24-22400, 2024.