ITS2.5/NH10.8 | EDI
Artificial Intelligence for Natural Hazard and Disaster Management
Co-organized by ESSI1/HS12/OS4
Convener: Ivanka Pelivan | Co-conveners: Jürg Luterbacher, Elena Xoplaki, Andrea Toreti, Raffaele Albano
Presentations: Wed, 25 May, 11:05–11:47 (CEST), 13:20–16:36 (CEST) | Room N1

Chairpersons: Raffaele Albano, Ivanka Pelivan
11:05–11:07
11:07–11:17 | EGU22-8 | solicited | On-site presentation
Monique Kuglitsch

The ITU/WMO/UNEP Focus Group on AI for Natural Disaster Management (FG-AI4NDM) explores the potential of AI to support the monitoring and detection, forecasting, and communication of natural disasters. Building on the presentation at EGU2021, we will show how detailed analysis of real-life use cases by an interdisciplinary, multistakeholder, and international community of experts is leading to the development of three technical reports (dedicated to best practices in data collection and handling, AI-based algorithms, and AI-based communications technologies, respectively), a roadmap of ongoing pre-standardization and standardization activities in this domain, a glossary of relevant terms and definitions, and educational materials to support capacity building. It is hoped that these deliverables will form the foundation of internationally recognized standards.

How to cite: Kuglitsch, M.: Nature can be disruptive, so can technology: ITU/WMO/UNEP Focus Group on AI for Natural Disaster Management, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-8, https://doi.org/10.5194/egusphere-egu22-8, 2022.

11:17–11:23 | EGU22-2879 | On-site presentation
Elisabeth D. Hafner, Patrick Barton, Rodrigo Caye Daudt, Jan Dirk Wegner, Konrad Schindler, and Yves Bühler

Safety-related applications like avalanche warning or risk management depend on timely information about avalanche occurrence. Knowledge of the locations and sizes of released avalanches is crucial for the responsible decision-makers. Today, such information is still collected in a non-systematic way by observers in the field, for example ski resort patrols or community avalanche services. Consequently, the existing avalanche mapping is strongly biased towards accessible terrain in proximity to (winter sport) infrastructure, in particular in situations with high avalanche danger.

Recently, remote sensing has been shown to be capable of partly filling this gap, providing spatially continuous information on avalanche occurrences over large regions. In previous work we used optical SPOT 6/7 satellite imagery to manually map two avalanche periods over a large part of the Swiss Alps (2018: 12,500 km² and 2019: 9,500 km²). Subsequently, we investigated the reliability of this mapping and proved its suitability by identifying almost three quarters of all avalanches that occurred (larger than size 1) in the SPOT 6/7 imagery. Optical SPOT data is therefore an excellent source for continuous avalanche mapping, currently restricted by the time-intensive manual mapping. To speed up this process we now propose a fully convolutional neural network (CNN) called AvaNet. AvaNet is based on a Deeplabv3+ architecture adapted to learn what avalanches look like, for example by explicitly including height information from a digital terrain model (DTM). Relying on the 24,737 manually mapped avalanches for training, validation and testing, AvaNet achieves an F1 score of 62.5% when thresholding the probabilities from the network predictions at 0.5. In this study we present the results from our network in more detail, including different model variations and results of predictions on data from a third avalanche period we did not train on.
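
As a rough illustration of this kind of setup, the sketch below widens the first convolution of a DeepLabv3 segmentation network (the closest torchvision stand-in for the DeepLabv3+ used by AvaNet) to accept a fourth, DTM-derived input channel and thresholds the output at 0.5; the channel count, tile size and all other details are illustrative assumptions, not the authors' code.

```python
import torch
import torch.nn as nn
from torchvision.models.segmentation import deeplabv3_resnet50

# Single-class ("avalanche") segmentation network.
model = deeplabv3_resnet50(weights=None, num_classes=1)

# Replace the first convolution so it accepts 4 channels,
# e.g. 3 optical bands plus 1 DTM-derived height channel.
old = model.backbone.conv1
model.backbone.conv1 = nn.Conv2d(4, old.out_channels,
                                 kernel_size=old.kernel_size,
                                 stride=old.stride,
                                 padding=old.padding,
                                 bias=False)

tiles = torch.randn(2, 4, 256, 256)    # batch of image+DTM tiles
logits = model(tiles)["out"]           # shape (2, 1, 256, 256)
mask = torch.sigmoid(logits) > 0.5     # threshold probabilities at 0.5
```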

The ability to automate the mapping and therefore quickly identify avalanches from satellite imagery is an important step forward in regularly acquiring spatially continuous avalanche occurrence data. This enables the provision of essential information to complement avalanche databases, making Alpine regions safer.

How to cite: Hafner, E. D., Barton, P., Caye Daudt, R., Wegner, J. D., Schindler, K., and Bühler, Y.: Automatically detecting avalanches with machine learning in optical SPOT6/7 satellite imagery, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-2879, https://doi.org/10.5194/egusphere-egu22-2879, 2022.

11:23–11:29 | EGU22-3422 | ECS | Virtual presentation
Thomas Gölles, Kathrin Lisa Kapper, Stefan Muckenhuber, and Andreas Trügler

Since its start in 2014, the Copernicus Sentinel-1 programme has provided free-of-charge, weather-independent, and high-resolution satellite Earth observations and has set major scientific advances in the detection of snow avalanches from satellite imagery in motion. Recently, operational avalanche detection from Sentinel-1 synthetic aperture radar (SAR) images was successfully introduced for some test regions in Norway. However, current state-of-the-art avalanche detection algorithms based on machine learning do not include weather history. We propose a novel way to encode weather data and include it in an automatic avalanche detection pipeline for the Austrian Alps. The approach consists of four steps. First, the raw data in netCDF format is downloaded, consisting of several meteorological parameters over several time steps. Second, the weather data is downscaled onto the pixel locations of the SAR image. Then the data is aggregated over time, which produces a two-dimensional grid with one value per SAR pixel at the time the SAR data was recorded. This aggregation function can range from simple averages to full snowpack models. In the final step, the grid is converted to an image with greyscale values corresponding to the aggregated values. The resulting image is then ready to be fed into the machine learning pipeline. We will include this encoded weather history data to increase the avalanche detection performance, and investigate contributing factors with model interpretability tools and explainable artificial intelligence.
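
A minimal xarray sketch of the four encoding steps, using a synthetic dataset; the variable name, grid extents, cut-off date and the simple time mean are illustrative assumptions (the abstract notes the aggregation could be anything up to a full snowpack model).

```python
import numpy as np
import pandas as pd
import xarray as xr

# 1) stand-in for the downloaded netCDF weather history
ds = xr.Dataset(
    {"snowfall": (("time", "lat", "lon"), np.random.rand(10, 20, 30))},
    coords={"time": pd.date_range("2021-01-22", periods=10),
            "lat": np.linspace(46.0, 48.0, 20),
            "lon": np.linspace(9.0, 12.0, 30)},
)

# 2) downscale/interpolate onto the SAR pixel locations
sar_lat = xr.DataArray(np.linspace(46.5, 47.5, 256), dims="y")
sar_lon = xr.DataArray(np.linspace(9.5, 11.0, 256), dims="x")
on_sar_grid = ds["snowfall"].interp(lat=sar_lat, lon=sar_lon)

# 3) aggregate over time up to the SAR acquisition (here: a simple mean)
agg = on_sar_grid.sel(time=slice(None, "2021-01-28")).mean(dim="time")

# 4) rescale to an 8-bit greyscale image for the detection pipeline
grey = ((agg - agg.min()) / (agg.max() - agg.min()) * 255).astype("uint8")
```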

How to cite: Gölles, T., Kapper, K. L., Muckenhuber, S., and Trügler, A.: Weather history encoding for machine learning-based snow avalanche detection, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-3422, https://doi.org/10.5194/egusphere-egu22-3422, 2022.

11:29–11:35 | EGU22-7313 | On-site presentation
Kathrin Lisa Kapper, Stefan Muckenhuber, Thomas Goelles, Andreas Trügler, Muhamed Kuric, Jakob Abermann, Jakob Grahn, Eirik Malnes, and Wolfgang Schöner

Each year, snow avalanches cause many casualties and tremendous damage to infrastructure. Prevention and mitigation mechanisms for avalanches are established for specific regions only. However, the full extent of the overall avalanche activity is usually barely known as avalanches occur in remote areas making in-situ observations scarce. To overcome these challenges, an automated avalanche detection approach using the Copernicus Sentinel-1 synthetic aperture radar (SAR) data has recently been introduced for some test regions in Norway. This automated detection approach from SAR images is faster and gives more comprehensive results than field-based detection provided by avalanche experts. The Sentinel-1 programme has provided - and continues to provide - free of charge, weather-independent, and high-resolution satellite Earth observations since its start in 2014. Recent advances in avalanche detection use deep learning algorithms to improve the detection rates. Consequently, the performance potential and the availability of reliable training data make learning-based approaches an appealing option for avalanche detection.  

In the framework of the exploratory project SnowAV_AT, we intend to build the basis for a state-of-the-art automated avalanche detection system for the Austrian Alps, including a "best practice" data processing pipeline and a learning-based approach applied to Sentinel-1 SAR images. As a first step towards this goal, we have compiled several labelled training datasets of previously detected avalanches that can be used for learning. Concretely, these datasets contain 19,000 avalanches that occurred during a large event in Switzerland in January 2018, around 6,000 avalanches that occurred in Switzerland in January 2019, and around 800 avalanches that occurred in Greenland in April 2016. The avalanche detection performance of our learning-based approach will be quantitatively evaluated against held-out test sets. Furthermore, we will provide qualitative evaluations using SAR images of the Austrian Alps to gauge how well our approach generalizes to unseen data that is potentially distributed differently from the training data. In addition, selected ground truth data from Switzerland, Greenland and Austria will allow us to validate the accuracy of the detection approach. As a particular novelty of our work, we will try to leverage high-resolution weather data and combine it with SAR images to improve the detection performance. Moreover, we will assess the possibilities of learning-based approaches in the context of the arguably more challenging avalanche forecasting problem.

How to cite: Kapper, K. L., Muckenhuber, S., Goelles, T., Trügler, A., Kuric, M., Abermann, J., Grahn, J., Malnes, E., and Schöner, W.: The potential of automated snow avalanche detection from SAR images for the Austrian Alpine region using a learning-based approach, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-7313, https://doi.org/10.5194/egusphere-egu22-7313, 2022.

11:35–11:41 | EGU22-4900 | ECS | On-site presentation
Ann-Kathrin Edrich, Anil Yildiz, Ribana Roscher, and Julia Kowalski

The spatial impact of a single shallow landslide is small compared to a deep-seated, impactful failure, and hence its damage potential is localized and limited. Yet their higher frequency of occurrence and spatio-temporal correlation in response to external triggering events, such as strong precipitation, result in dramatic risks for population, infrastructure and environment. It is therefore essential to continuously investigate and analyze the spatial hazard that shallow landslides pose. Its visualisation through regularly updated, dynamic hazard maps can be used by decision and policy makers. Even though a number of data-driven approaches for shallow landslide hazard mapping exist, a generic workflow has not yet been described. Therefore, we introduce a scalable and modular machine learning-based workflow for shallow landslide hazard prediction in this study. The scientific test case for the development of the workflow investigates the rainfall-triggered shallow landslide hazard in Switzerland. A benchmark dataset was compiled to train the data-driven model, based on a historic landslide database as presence data as well as a pseudo-random choice of absence locations. At the current stage, the features included in this dataset comprise 14 parameters from topography, soil type, land cover and hydrology. This work also focuses on the investigation of a suitable approach to choose absence locations and on the influence of this choice on the predicted hazard, as this influence has not been comprehensively studied. We aim to enable time-dependent and dynamic hazard mapping by incorporating time-dependent precipitation data into the training dataset with static features. Inclusion of temporal trigger factors, i.e. rainfall, enables a regularly updated landslide hazard map based on the precipitation forecast. Our approach includes the investigation of a suitable precipitation metric for the occurrence of shallow landslides at the absence locations, based on the statistical evaluation of the precipitation behavior at the presence locations. In this presentation, we will describe the modular workflow as well as the benchmark dataset and show preliminary results, including the above-mentioned approaches to handle absence locations and time-dependent data.
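
The abstract does not name the learner, so the sketch below illustrates the general presence/pseudo-absence setup with a random forest as a stand-in and invented features; only the 1/0 labelling scheme and the 14-feature count follow the text.

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(42)
cols = [f"feature_{i}" for i in range(14)]   # 14 static features, as in the text

# presence data: historic landslide locations joined with the features
presence = pd.DataFrame(rng.random((500, 14)), columns=cols).assign(label=1)

# pseudo-random absence locations; how to choose these is exactly the
# design question the abstract raises
absence = pd.DataFrame(rng.random((500, 14)), columns=cols).assign(label=0)

data = pd.concat([presence, absence], ignore_index=True)
X, y = data[cols], data["label"]

model = RandomForestClassifier(n_estimators=300, random_state=0).fit(X, y)
hazard = model.predict_proba(X)[:, 1]   # in a real workflow: mapped per grid cell
```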

How to cite: Edrich, A.-K., Yildiz, A., Roscher, R., and Kowalski, J.: A modular and scalable workflow for data-driven modelling of shallow landslide susceptibility, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-4900, https://doi.org/10.5194/egusphere-egu22-4900, 2022.

11:41–11:47 | EGU22-3212 | Presentation form not yet defined
Joel Efiong, Devalsam Eni, Josiah Obiefuna, and Sylvia Etu

Landslides continue to wreak havoc in many parts of the globe, yet comprehensive studies of landslide susceptibility in many of these areas are either lacking or inadequate. Hence, this study was aimed at predicting landslide susceptibility in Cross River State of Nigeria using machine learning. Specifically, the frequency ratio (FR) model was adopted. In adopting this approach, a landslide inventory map was developed using 72 landslide locations identified during fieldwork combined with other relevant data sources. Using appropriate geostatistical analyst tools within a geographical information environment, the landslide locations were randomly divided into two parts in a 7:3 ratio for the training and validation processes, respectively. A total of 12 landslide-causing factors (elevation, slope, aspect, profile curvature, plan curvature, topographic position index, topographic wetness index, stream power index, land use/land cover, geology, distance to waterbody and distance to major roads) were selected and used in the spatial relationship analysis of the factors influencing landslide occurrences in the study area. The FR model was then developed using the training sample of the landslides to investigate landslide susceptibility in Cross River State and was subsequently validated. It was found that the distribution of landslides in Cross River State of Nigeria was largely controlled by a combined effect of geo-environmental factors such as elevations of 250–500 m, slope gradients of >35°, slopes facing the southwest direction, decreasing degree of both positive and negative curvatures, increasing values of topographic position index, fragile sands, sparse vegetation (especially in settlement and bare-surface areas), and distances to waterbody and major road of <500 m. About 46% of the mapped area was found to lie in landslide susceptibility zones ranging from moderate to very high levels. The susceptibility model was validated with 90.90% accuracy. This study has provided a comprehensive investigation of landslide susceptibility in Cross River State, which will be useful in land use planning and mitigation measures against landslide-induced vulnerability in the study area, and the findings can be extrapolated to proffer solutions for other areas with similar environmental conditions. This is a novel use of a machine learning technique in hazard susceptibility mapping.
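
For readers unfamiliar with the statistic, this is a worked sketch of the frequency ratio for a single causative factor: the FR of a class is the share of landslide cells in that class divided by the share of all cells in that class, with FR > 1 marking over-representation; all numbers are invented.

```python
import numpy as np
import pandas as pd

rng1, rng2 = np.random.default_rng(1), np.random.default_rng(2)

# synthetic raster cells with one factor (slope class) attached
cells = pd.DataFrame({
    "slope_class": rng1.choice(["<15", "15-35", ">35"], size=10_000,
                               p=[0.50, 0.35, 0.15]),
})
# pretend steep cells slide more often
p_slide = cells["slope_class"].map({"<15": 0.002, "15-35": 0.01, ">35": 0.05})
cells["landslide"] = rng2.random(len(cells)) < p_slide

share_of_slides = cells.groupby("slope_class")["landslide"].sum() / cells["landslide"].sum()
share_of_area = cells["slope_class"].value_counts(normalize=True)
fr = (share_of_slides / share_of_area).rename("FR")
print(fr)  # a cell's susceptibility index sums the FRs of its factor classes
```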

 

Keywords: landslide; landslide susceptibility mapping; Cross River State, Nigeria; frequency ratio; machine learning

How to cite: Efiong, J., Eni, D., Obiefuna, J., and Etu, S.: Predicting Landslide Susceptibility in Cross River State of Nigeria using Machine Learning, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-3212, https://doi.org/10.5194/egusphere-egu22-3212, 2022.

Lunch break
Chairpersons: Elena Xoplaki, Ivanka Pelivan
13:20–13:22
13:22–13:28 | EGU22-4250 | ECS | On-site presentation
Luísa Vieira Lucchese, Guilherme Garcia de Oliveira, Alexander Brenning, and Olavo Correa Pedrollo

Landslide Susceptibility Mapping (LSM) and rainfall thresholds are well-documented tools used to model the occurrence of rainfall-induced landslides. For locations where only rainfall can be considered a main landslide trigger, both methodologies apply essentially to the same places, and a model that encompasses both would be an important step towards a better understanding and prediction of landslide-triggering rainfall events. In this research, we employ spatially cross-validated, hyperparameter-tuned Artificial Neural Networks (ANNs) to predict the susceptibility to landslides of an area in southern Brazil. As a next step, we plan to add the triggering rainfall to this Artificial Intelligence model, which will concurrently model the susceptibility and the triggering rainfall event for a given area. The ANN is of type Multi-Layer Perceptron with three layers. The number of neurons in the hidden layer was tuned separately for each cross-validation fold, using a method described in previous work. The study area is the escarpment at the limits of the municipalities of Presidente Getúlio, Rio do Sul, and Ibirama, in southern Brazil. For this area, 82 landslide scars related to the event of December 17th, 2020, were mapped. The metrics for each fold are presented and the final susceptibility map for the area is shown and analyzed. The evaluation metrics attained are satisfactory and the resulting susceptibility map highlights the escarpment areas as most susceptible to landslides. The ANN-based susceptibility mapping in the area is considered successful and seen as a baseline for identifying rainfall thresholds in susceptible areas, which will be accomplished with a combined susceptibility and rainfall model in our future work.
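
A minimal sketch of the spatial cross-validation idea with a one-hidden-layer perceptron; scikit-learn, the synthetic data, the block construction and the fixed hidden-layer size (which the authors instead tune per fold) are all stand-in assumptions.

```python
import numpy as np
from sklearn.model_selection import GroupKFold
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
X = rng.random((800, 10))                                    # terrain features
y = (X[:, 0] + rng.normal(0, 0.2, 800) > 0.7).astype(int)    # scar / no scar
blocks = (X[:, 1] * 4).astype(int)                           # spatial blocks

# Spatial cross-validation: whole blocks are held out, so nearby,
# spatially correlated samples never sit in both training and validation.
for k, (tr, va) in enumerate(GroupKFold(n_splits=4).split(X, y, groups=blocks)):
    net = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
    net.fit(X[tr], y[tr])
    auc = roc_auc_score(y[va], net.predict_proba(X[va])[:, 1])
    print(f"fold {k}: AUROC = {auc:.2f}")
```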

How to cite: Vieira Lucchese, L., Garcia de Oliveira, G., Brenning, A., and Correa Pedrollo, O.: Landslide Susceptibility Modeling of an Escarpment in Southern Brazil using Artificial Neural Networks as a Baseline for Modeling Triggering Rainfall, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-4250, https://doi.org/10.5194/egusphere-egu22-4250, 2022.

13:28–13:34 | EGU22-7308 | On-site presentation
Marta Béjar-Pizarro, Pablo Ezquerro, Carolina Guardiola-Albert, Héctor Aguilera Alonso, Margarita Patricia Sanabria Pabón, Oriol Monserrat, Anna Barra, Cristina Reyes-Carmona, Rosa Maria Mateos, Juan Carlos García López Davalillo, Juan López Vinielles, Guadalupe Bru, Roberto Sarro, Jorge Pedro Galve, Roberto Tomás, Virginia Rodríguez Gómez, Joaquín Mulas de la Peña, and Gerardo Herrera

The detection of areas of the Earth’s surface experiencing active deformation processes and the identification of the responsible phenomena (e.g. landslides activated after rainy events, subsidence due to groundwater extraction in agricultural areas, consolidation settlements, instabilities in active or abandoned mines) is critical for geohazard risk management and, ultimately, for mitigating the unwanted effects on the affected populations and the environment.

This will now be possible at the European level thanks to the Copernicus European Ground Motion Service (EGMS), which will provide ground displacement measurements derived from time series analyses of Sentinel-1 data using Interferometric Synthetic Aperture Radar (InSAR). The EGMS, which will be available to users in the first quarter of 2022 and will be updated annually, will be especially useful for identifying displacements associated with landslides, subsidence and deformation of infrastructure. To fully exploit the capabilities of these large InSAR datasets, it is fundamental to develop automatic analysis tools, such as machine learning algorithms, which require an InSAR-derived deformation database to train and improve them.

Here we present the preliminary InSAR-derived deformation database developed in the framework of the SARAI project, which incorporates the previous InSAR results of the IGME-InSARlab and CTTC teams in Spain. The database contains classified points of measurement with the associated InSAR deformation and a set of environmental variables potentially correlated with the deformation phenomena, such as geology/lithology, land-surface slope, land cover, meteorological data, population density, and inventories such as the mining registry, the groundwater database, and the IGME’s land movements database (MOVES). We discuss the main strategies used to identify and classify pixels and areas that are moving, the covariables used and some ideas to improve the database in the future. This work has been developed in the framework of project PID2020-116540RB-C22 funded by MCIN/ AEI /10.13039/501100011033 and e-Shape project, with funding from the European Union’s Horizon 2020 research and innovation program under grant agreement 820852.

How to cite: Béjar-Pizarro, M., Ezquerro, P., Guardiola-Albert, C., Aguilera Alonso, H., Sanabria Pabón, M. P., Monserrat, O., Barra, A., Reyes-Carmona, C., Mateos, R. M., García López Davalillo, J. C., López Vinielles, J., Bru, G., Sarro, R., Galve, J. P., Tomás, R., Rodríguez Gómez, V., Mulas de la Peña, J., and Herrera, G.: Building an InSAR-based database to support geohazard risk management by exploiting large ground deformation datasets, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-7308, https://doi.org/10.5194/egusphere-egu22-7308, 2022.

13:34–13:40 | EGU22-2011 | ECS | Virtual presentation
Yixin Zhang and Hossein Hashemi

Massive groundwater pumping for agricultural and industrial activities results in significant land subsidence in the arid world. In an acute water crisis, monitoring land subsidence and its key drivers is essential to support groundwater depletion mitigation strategies. Physical models for aquifer simulation related to land deformation are computationally expensive. The interferometric synthetic aperture radar (InSAR) technique provides precise deformation mapping, yet is affected by tropospheric and ionospheric errors. This study explores the capabilities of the deep learning approach, coupled with satellite-derived variables, for modeling subsidence spatially and temporally from 2016 to 2020 and predicting subsidence in the near future using a recurrent neural network (RNN) in the Shabestar basin, Iran. The basin is part of the Urmia Lake River Basin, home to 6.4 million people, which has largely desiccated due to the over-use of water resources. The deep learning model incorporates InSAR-derived land subsidence and its satellite-based key drivers, such as actual evapotranspiration, Normalized Difference Vegetation Index (NDVI), land surface temperature, and precipitation, to yield the importance of critical drivers and inform groundwater governance. The land deformation in the area varied between -93.2 mm/year and 16 mm/year on average in 2016-2020. Our findings reveal that precipitation, evapotranspiration, and vegetation coverage primarily affected land subsidence; furthermore, the subsidence rate is predicted to increase rapidly, following the same trend as the variation of the Urmia Lake level. This study demonstrates the potential of artificial intelligence incorporating satellite-based ancillary data in land subsidence monitoring and prediction, and contributes to future groundwater management.
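
A minimal sketch of an RNN regressor in the spirit of the abstract: a short monthly series of the named drivers in, a next-step subsidence value out; the shapes, layer sizes and synthetic data are illustrative assumptions, not the authors' configuration.

```python
import numpy as np
import tensorflow as tf

T, F = 12, 4   # 12 time steps of 4 drivers: ET, NDVI, LST, precipitation
X = np.random.rand(256, T, F).astype("float32")
y = np.random.rand(256, 1).astype("float32")   # scaled subsidence at t+1

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(T, F)),
    tf.keras.layers.LSTM(32),    # recurrent layer summarizing the driver history
    tf.keras.layers.Dense(1),    # subsidence prediction
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=2, batch_size=32, verbose=0)
forecast = model.predict(X[:1], verbose=0)   # next-step subsidence for one pixel
```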

How to cite: Zhang, Y. and Hashemi, H.: InSAR-Deep learning approach for simulation and prediction of land subsidence in arid regions, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-2011, https://doi.org/10.5194/egusphere-egu22-2011, 2022.

13:40–13:46 | EGU22-11422 | On-site presentation
Sansar Raj Meena, Mario Floris, and Filippo Catani

Landslide inventories are essential for landslide susceptibility mapping, hazard modelling, and further risk mitigation management. For decades, experts and organisations worldwide have preferred manual visual interpretation of satellite and aerial images. However, there are various problems associated with manual inventories, such as the manual extraction of landslide borders and their representation with polygons, which is a subjective process. Manual delineation is affected by the applied methodology, the preferences of the experts and interpreters, and how much time and effort are invested in the inventory-generation process. In recent years, a vast amount of research related to semi-automated and automatic mapping of landslide inventories has been carried out to overcome these issues. The automatic generation of landslide inventories using Artificial Intelligence (AI) techniques is still in its early phase, as currently there is no published research that can create a ground truth representation of the landslide situation after a triggering event. The evaluation metrics in the recent literature show F1-scores of 50-80% for landslide boundary delineation using AI-based models, and very few studies claim more than an 80% F1-score, except those that test their models in the same study area they were developed in. Therefore, there is still a research gap between the generation of AI-based landslide inventories and their usability for landslide hazard and risk studies. In this study, we explore several inventories developed by AI and manual delineation and test their usability for assessing landslide hazard.
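
For reference, the F1-score quoted above is the harmonic mean of precision and recall; a tiny worked sketch on invented pixel masks:

```python
import numpy as np

# manual (reference) and AI-derived landslide masks for the same tile
manual = np.array([[0, 1, 1],
                   [0, 1, 0],
                   [0, 0, 0]], dtype=bool)
ai_map = np.array([[0, 1, 0],
                   [0, 1, 1],
                   [0, 0, 0]], dtype=bool)

tp = np.sum(manual & ai_map)          # pixels both maps call landslide
precision = tp / ai_map.sum()
recall = tp / manual.sum()
f1 = 2 * precision * recall / (precision + recall)
print(f"F1 = {f1:.2f}")
```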

How to cite: Meena, S. R., Floris, M., and Catani, F.: Can landslide inventories developed by artificial intelligence substitute manually delineated inventories for landslide hazard and risk studies?, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-11422, https://doi.org/10.5194/egusphere-egu22-11422, 2022.

13:46–13:52 | EGU22-6690 | ECS | Virtual presentation
Xiaotong Zhu, Jinhui Jeanne Huang, Hongwei Guo, Shang Tian, and Zijie Zhang

The precise estimation of seawater quality parameters is crucial for decision-makers managing coastal water resources. Although various machine learning (ML)-based algorithms have been developed for seawater quality retrieval using remote sensing technology, the performance of these models remains significantly uncertain when applied to specific regions, due to the differing properties of coastal waters. Moreover, the prediction results of these ML models are unexplainable. To address these problems, an ML-based ensemble model was developed in this study. The model was applied to estimate chlorophyll-a (Chla), turbidity, and dissolved oxygen (DO) from Sentinel-2 satellite imagery in Shenzhen Bay, China. The optimal input features for each seawater quality parameter were selected from nine simulation scenarios generated from eight spectral bands and six spectral indices. A local explanation method called SHapley Additive exPlanations (SHAP) was introduced to quantify the contributions of the various features to the predictions of the seawater quality parameters. The results suggest that the ensemble model with feature selection enhanced the performance for all three seawater quality parameters (the errors were 1.7%, 1.5%, and 0.02% for Chla, turbidity, and DO, respectively). Furthermore, the reliability of the model was further verified by mapping the spatial distributions of the water quality parameters during the model validation period. The spatio-temporal patterns revealed that the distributions of seawater quality were mainly influenced by estuary input. Correlation analysis demonstrated that air temperature (Temp) and average air pressure (AAP) exhibited the closest relationship with Chla. DO was most strongly related to Temp, while turbidity was not sensitive to Temp, average wind speed (AWS), or AAP. This study enhances the prediction capability for seawater quality parameters and provides a scientific coastal water management approach for decision-makers.
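
A compact sketch of the explainability side: a tree ensemble (here gradient boosting, as a stand-in for the unnamed ensemble) maps band/index features to chlorophyll-a, and SHAP attributes each prediction to the inputs; feature names and data are invented.

```python
import numpy as np
import shap
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
features = ["B02", "B03", "B04", "B08", "NDCI", "NDTI"]   # bands + indices
X = rng.random((300, len(features)))
chla = 3.0 * X[:, 4] + 0.5 * X[:, 1] + rng.normal(0, 0.1, 300)  # synthetic target

model = GradientBoostingRegressor().fit(X, chla)

explainer = shap.TreeExplainer(model)     # fast exact SHAP for tree ensembles
shap_values = explainer.shap_values(X)    # (300, 6) per-sample contributions
global_importance = np.abs(shap_values).mean(axis=0)
print(dict(zip(features, global_importance.round(3))))
```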

How to cite: Zhu, X., Huang, J. J., Guo, H., Tian, S., and Zhang, Z.: A machine learning-based ensemble model for estimation of seawater quality parameters in coastal area, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-6690, https://doi.org/10.5194/egusphere-egu22-6690, 2022.

13:52–13:58 | EGU22-79 | ECS | Virtual presentation
Joko Sampurno, Valentin Vallaeys, Randy Ardianto, and Emmanuel Hanert

Compound flooding hazard in estuarine deltas is increasing due to mean sea-level rise (SLR) driven by climate change. Decision-makers need future hazard analyses to mitigate such events and design adaptation strategies. However, to date, no future hazard analysis has been made for the Kapuas River delta, a low-lying area on the west coast of the island of Borneo, Indonesia. Therefore, this study aims to assess future compound flooding hazards under SLR over the delta, particularly in Pontianak (the densest urban area in the region). Here we consider three SLR scenarios due to climate change: a low emission scenario (RCP2.6), a medium emission scenario (RCP4.5), and a high emission scenario (RCP8.5). We implement a machine-learning technique, the multiple linear regression (MLR) algorithm, to model the river water level dynamics within the city. We then predict future extreme river water levels due to interactions of river discharges, rainfall, winds, and tides. Furthermore, we create flood maps showing the likelihood of areas being flooded at the 100-year return period (1% annual exceedance probability) under the expected sea-level rise. We find that the extreme 1% return water level for the study area in 2100 increases from about 2.80 m (current flood frequency state) to 3.03 m under RCP2.6, 3.13 m under RCP4.5, and 3.38 m under RCP8.5.
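
A toy sketch of the two-stage logic: an MLR surrogate maps the four drivers to water level, and an extreme-value fit to annual maxima yields the 1% AEP level; the coefficients, the synthetic data and the GEV choice are illustrative assumptions, not the authors' method.

```python
import numpy as np
from scipy import stats
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
drivers = rng.random((1000, 4))     # discharge, rainfall, wind, tide (scaled)
level = drivers @ np.array([1.2, 0.8, 0.3, 1.5]) + rng.normal(0, 0.1, 1000)

mlr = LinearRegression().fit(drivers, level)        # MLR water-level surrogate

# pretend the 1000 predictions are 50 years of 20 samples; take annual maxima
annual_max = mlr.predict(drivers).reshape(50, 20).max(axis=1)

c, loc, scale = stats.genextreme.fit(annual_max)    # GEV fit to the maxima
level_100yr = stats.genextreme.ppf(0.99, c, loc=loc, scale=scale)
print(f"1% AEP (100-year) water level ~ {level_100yr:.2f} m (synthetic)")
```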

How to cite: Sampurno, J., Vallaeys, V., Ardianto, R., and Hanert, E.: Assessing the impact of sea-level rise on future compound flooding hazards in the Kapuas River delta, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-79, https://doi.org/10.5194/egusphere-egu22-79, 2022.

13:58–14:04 | EGU22-4730 | ECS | On-site presentation
Julian Hofmann and Holger Schüttrumpf

Recent urban flood events revealed how severe and fast the impacts of heavy rainfall can be. Pluvial floods pose an increasing risk to communities worldwide due to ongoing urbanization and changes in climate patterns. Still, pluvial flood warnings are limited to meteorological forecasts or water level monitoring, which are insufficient to warn people against local and terrain-specific flood risks. Therefore, rapid flood models are essential to implement effective and robust early warning systems that mitigate the risk of pluvial flooding. Although hydrodynamic (HD) models are state-of-the-art for simulating pluvial flood hazards, the required computation times are too long for real-time applications.

In order to overcome the computation time bottleneck of HD models, the deep learning model floodGAN has been developed. FloodGAN combines two adversarial Convolutional Neural Networks (CNNs) that are trained on high-resolution rainfall-flood data generated from rainfall generators and HD models. FloodGAN casts the flood forecasting problem as an image-to-image translation task, whereby the model learns the non-linear spatial relationships between rainfall and hydraulic data. Thus, it directly translates spatially distributed rainfall forecasts into detailed hazard maps within seconds. In addition to the inundation depth, the model can predict the velocities and timing of hydraulic peaks of an upcoming rainfall event. Due to its image-translation approach, the floodGAN model can be applied to large areas and run on standard computer systems, fulfilling the requirements of fast and practical flood warning systems.
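
A highly reduced sketch of the adversarial image-to-image idea (pix2pix-style): a generator maps a rainfall raster to a water-depth raster while a discriminator judges (rainfall, depth) pairs; real floodGAN networks are far deeper, and every layer size here is a placeholder.

```python
import torch
import torch.nn as nn

# toy generator (rainfall -> depth) and discriminator (pair -> real/fake score)
G = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
                  nn.Conv2d(16, 1, 3, padding=1))
D = nn.Sequential(nn.Conv2d(2, 16, 3, stride=2, padding=1), nn.LeakyReLU(0.2),
                  nn.Flatten(), nn.Linear(16 * 32 * 32, 1))
bce = nn.BCEWithLogitsLoss()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)

rain = torch.rand(4, 1, 64, 64)    # forecast rainfall fields
depth = torch.rand(4, 1, 64, 64)   # HD-model water depths (training target)

# one adversarial training step
fake = G(rain)
d_loss = bce(D(torch.cat([rain, depth], 1)), torch.ones(4, 1)) \
       + bce(D(torch.cat([rain, fake.detach()], 1)), torch.zeros(4, 1))
opt_d.zero_grad(); d_loss.backward(); opt_d.step()

g_loss = bce(D(torch.cat([rain, fake], 1)), torch.ones(4, 1)) \
       + nn.functional.l1_loss(fake, depth)    # adversarial + L1 fidelity term
opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```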

To evaluate the accuracy and generalization capabilities of the floodGAN model, numerous performance tests were carried out using synthetic rainfall events as well as a past heavy rainfall event from 2018, with the city of Aachen as a case study. The tests demonstrated a speedup factor of 10⁶ compared to HD models while maintaining high model quality and accuracy, and good generalization capabilities for highly variable rainfall events. Further improvements can be obtained by integrating recurrent neural network architectures and training with temporal rainfall series to forecast the dynamics of the flooding processes.

How to cite: Hofmann, J. and Schüttrumpf, H.: floodGAN – A deep learning-based model for rapid urban flood forecasting, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-4730, https://doi.org/10.5194/egusphere-egu22-4730, 2022.

14:04–14:10 | EGU22-4266 | On-site presentation
Raffaele Albano, Nicla Notarangelo, Kohin Hirano, and Aurelia Sole

Flood risk monitoring, alert and adaptation in urban areas require near-real-time fine-scale precipitation observations that are challenging to obtain from currently available measurement networks due to their costs and installation difficulties. In this sense, newly available data sources and computational techniques offer enormous potential, in particular, the exploiting of not-specific, widespread, and accessible devices.

This study proposes an unprecedented system for rainfall monitoring based on artificial intelligence, using deep learning for computer vision applied to camera images. In contrast to the existing literature, the method is not device-specific and exploits general-purpose cameras (e.g., smartphones, surveillance cameras, dashboard cameras), in particular low-cost devices, without requiring parameter setting, timeline shots, or videos. Rainfall is measured directly from single photographs through deep learning models based on transfer learning with Convolutional Neural Networks. A binary classification algorithm is developed to detect the presence of rain, and a multi-class classification algorithm is used to estimate a quasi-instantaneous rainfall intensity range. Open data, dash-cams in Japan coupled with the high-precision multi-parameter radar XRAIN, and experiments in the NIED Large Scale Rainfall Simulator were combined to form heterogeneous and verisimilar datasets for training, validation, and testing. Finally, a case study over the Matera urban area (Italy) was used to illustrate the potential and limitations of rainfall monitoring using camera-based detectors.
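
A minimal sketch of the transfer-learning setup described: a pretrained backbone is frozen and a small head retrained, once with 2 outputs (rain/no-rain) and once with 6 (intensity ranges); the backbone choice and all other details are assumptions, not the authors' configuration.

```python
import torch
import torch.nn as nn
from torchvision import models

def make_classifier(num_classes: int) -> nn.Module:
    net = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
    for p in net.parameters():
        p.requires_grad = False                 # keep pretrained features
    net.fc = nn.Linear(net.fc.in_features, num_classes)  # new trainable head
    return net

rain_detector = make_classifier(2)   # binary: is it raining?
intensity_net = make_classifier(6)   # 6-way rainfall-intensity ranges

photo = torch.rand(1, 3, 224, 224)   # one single photograph, any camera
print(rain_detector(photo).softmax(1))
print(intensity_net(photo).softmax(1))
```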

The prototype was deployed in a real-world operational environment using a pre-existing 5G surveillance camera. The results of the binary classifier showed great robustness and portability: accuracy was 85.28% and 85.13%, and the F1-score 0.86 and 0.85, for test and deployment respectively, whereas literature algorithms suffer drastic accuracy drops when the image source changes (e.g. from 91.92% to 18.82%). The 6-way classifier reached a test average accuracy of 77.71% and a macro-averaged F1 of 0.73, with the best performances for no-rain and heavy rainfall, which represent critical conditions for flood risk. Thus, the results of the tests and the use case demonstrate the model’s ability to detect a significant meteorological state for early warning systems. The classification can be performed on single pictures taken in disparate lighting conditions by common acquisition devices, i.e. by static or moving cameras without adjusted parameters. The system does not suit scenes that are also misleading for human visual perception. The proposed method features readiness level, cost-effectiveness, and limited operational requirements that allow an easy and quick implementation by exploiting pre-existent devices with a parsimonious use of economic and computational resources.

Altogether, this study corroborates the potential of non-traditional and opportunistic sensing networks for the development of hydrometeorological monitoring systems in urban areas, where traditional measurement methods encounter limitations, and in data-scarce contexts, e.g. where remote-sensed rainfall information is unavailable or too coarse for the scale of the proposed study. Future research will involve incremental learning algorithms and further data collection via experiments and crowdsourcing to improve accuracy and, at the same time, promote public resilience from a smart city perspective.

How to cite: Albano, R., Notarangelo, N., Hirano, K., and Sole, A.: Camera Rain Gauge Based on Artificial Intelligence, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-4266, https://doi.org/10.5194/egusphere-egu22-4266, 2022.

14:10–14:16 | EGU22-781 | Presentation form not yet defined
Ali Mostafavi and Faxi Yuan

Background and objective: The fields of urban resilience to flooding and data science are on a collision course, giving rise to the emerging field of smart resilience. The objective of this study is to propose and demonstrate a smart flood resilience framework that leverages heterogeneous community-scale big data and infrastructure sensor data to enhance predictive risk monitoring and situational awareness.

Smart flood resilience framework: The smart flood resilience framework focuses on four core capabilities that could be augmented through the use of heterogeneous community-scale big data and analytics techniques: (1) predictive flood risk mapping: the capability to predict imminent flood risks (such as overflow of channels) so that communities and emergency management agencies can take preparation and response actions; (2) automated rapid impact assessment: the ability to automatically and quickly evaluate the extent of flood impacts (i.e., physical, social, and economic impacts) to enable crisis responders and public officials to allocate relief and rescue resources on time; (3) predictive infrastructure failure monitoring: the ability to anticipate imminent failures in infrastructure systems as a flood event unfolds; and (4) smart situational awareness: the capability to derive proactive insights regarding the evolution of flood impacts on communities (e.g., disrupted access to critical facilities and spatio-temporal patterns of recovery).

Case study: We demonstrate the components of these core capabilities of the smart flood resilience framework in the context of the 2017 Hurricane Harvey in Harris County, Texas. First, with Bayesian network modeling and deep learning methods, we reveal the use of flood sensor data for the prediction of floodwater overflow in channel networks and inundation of co-located road networks. Second, we discuss the use of social media data and machine learning techniques for assessing the impacts of floods on communities and sensing emotion signals to examine societal impacts. Third, we illustrate the use of high-resolution traffic data in network-theoretic models for now-casting of flood propagation on road networks and the disrupted access to critical facilities such as hospitals. Fourth, we leverage location-based and credit card transaction data in advanced spatial data analytics to proactively evaluate the recovery of communities and the impacts of floods on businesses.

Significance: This study shows the significance of the different core capabilities of the smart flood resilience framework in helping emergency managers, city planners, public officials, responders, and volunteers better cope with the impacts of catastrophic flooding events.

How to cite: Mostafavi, A. and Yuan, F.: Smart Flood Resilience: Harnessing Community-Scale Big Data for Predictive Flood Risk Monitoring, Rapid Impact Assessment, and Situational Awareness, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-781, https://doi.org/10.5194/egusphere-egu22-781, 2022.

14:16–14:22 | EGU22-7561 | Virtual presentation
Ferda Ofli, Zainab Akhtar, Rizwan Sadiq, and Muhammad Imran

Flood events cause substantial damage to infrastructure and disrupt livelihoods. There is a need for an innovative, open-access, real-time disaster map pipeline that is automatically initiated at the time of a flood event to highlight flooded regions, potential damage and vulnerable communities. This can help in directing resources appropriately during and after a disaster to reduce disaster risk. To implement this pipeline, we explored the integration of three heterogeneous data sources: remote sensing data, social sensing data and geospatial sensing data. Remote sensing through satellite imagery is an effective method to identify flooded areas, and we utilized existing deep learning models to develop a pipeline that processes both optical and radar imagery. Whilst this can offer situational awareness right after a disaster, satellite-based flood extent maps lack important contextual information about the severity of structural damage or the urgent needs of the affected population. This is where the potential of social sensing through microblogging sites comes into play, as it provides insights directly from eyewitnesses and affected people in real time. Whilst social sensing data is advantageous, these streams are usually extremely noisy, so there is a need to build disaster-relevant taxonomies for both text and images. To develop a disaster taxonomy for social media texts, we conducted a literature review to better understand stakeholder information needs. The final taxonomy consists of 30 categories organized among three high-level classes. This taxonomy was then used to label a large number of tweet texts (~10,000) to train machine learning classifiers so that only relevant social media texts are visualized on the disaster map. Moreover, a disaster object taxonomy for social media images was developed in collaboration with a certified emergency manager and trained volunteers from the Montgomery County, MD Community Emergency Response Team. In total, 106 object categories were identified and organized as a hierarchical taxonomy with three high-level classes and 10 sub-classes. This taxonomy will be used to label a large set of disaster images so that machine learning classifiers can be trained to effectively detect disaster-relevant objects in social media imagery. The wide perspective provided by the satellite view combined with the ground-level perspective from locally collected textual and visual information helped us identify three types of signals: (i) confirmatory signals from both sources, which give greater confidence that a specific region is flooded; (ii) complementary signals that provide different contextual information, including needs and requests, disaster impact or damage reports and situational information; and (iii) novel signals when the data sources do not overlap and provide unique information. We plan to fuse the third component, geospatial sensing, to perform flood vulnerability analysis, allowing easy identification of the areas/zones most vulnerable to flooding. Thus, the fusion of remote sensing, social sensing and geospatial sensing for rapid flood mapping can be a powerful tool for crisis responders.
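
As a toy illustration of the text-classification step, the sketch below trains a classifier that filters tweets before they reach the map; the TF-IDF + logistic regression pipeline is a simple stand-in for the authors' classifiers, and the example tweets and two-way labels are invented.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

tweets = ["bridge on 5th street under water, need boats",
          "thoughts and prayers for everyone",
          "shelter at the high school has space and food",
          "what a week it has been"]
labels = ["relevant", "irrelevant", "relevant", "irrelevant"]

# vectorize the text and fit a linear classifier in one pipeline
clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
clf.fit(tweets, labels)

print(clf.predict(["road to the hospital is blocked"]))  # -> e.g. ['relevant']
```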

How to cite: Ofli, F., Akhtar, Z., Sadiq, R., and Imran, M.: Triangulation of remote sensing, social sensing, and geospatial sensing for flood mapping, damage estimation, and vulnerability assessment, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-7561, https://doi.org/10.5194/egusphere-egu22-7561, 2022.

14:22–14:28 | EGU22-1650 | ECS | On-site presentation
Hamish Steptoe and Theano Xirouchaki

Tropical Cyclones (TCs) are deadly but rare events that cause considerable loss of life and property damage every year. Traditional TC forecasting and tracking methods focus on numerical forecasting models, synoptic forecasting and statistical methods. However, in recent years there have been several studies investigating applications of Deep Learning (DL) methods for weather forecasting with encouraging results.

We aim to test the efficacy of several DL methods for TC nowcasting, particularly focusing on Generative Adversarial Networks (GANs) and Recurrent Neural Networks (RNNs). The strengths of these network types align well with the given problem: GANs are particularly apt at learning the form of a dataset, such as the typical shape and intensity of a TC, while RNNs are suited to learning time series data, enabling a prediction based on the past several timesteps.

The goal is to produce a DL based pipeline to predict the future state of a developing cyclone with accuracy that measures up to current methods.  We demonstrate our approach based on learning from high-resolution numerical simulations of TCs from the Indian and Pacific oceans and discuss the challenges and advantages of applying these DL approaches to large high-resolution numerical weather data.
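
For concreteness, a minimal sketch of the recurrent half of such a pipeline: a ConvLSTM consumes the past few gridded frames of a simulated cyclone and predicts the next frame. The frame size, history length and loss are illustrative, and the adversarial (GAN) component is omitted entirely.

```python
import numpy as np
import tensorflow as tf

H = W = 64
past = np.random.rand(8, 4, H, W, 1).astype("float32")      # 4 past frames
next_frame = np.random.rand(8, H, W, 1).astype("float32")   # frame to predict

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(4, H, W, 1)),
    tf.keras.layers.ConvLSTM2D(16, kernel_size=3, padding="same"),
    tf.keras.layers.Conv2D(1, kernel_size=1),   # map features to one field
])
model.compile(optimizer="adam", loss="mse")
model.fit(past, next_frame, epochs=1, verbose=0)
nowcast = model.predict(past[:1], verbose=0)    # next-step field, (1, 64, 64, 1)
```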

How to cite: Steptoe, H. and Xirouchaki, T.: Deep Learning for Tropical Cyclone Nowcasting: Experiments with Generative Adversarial and Recurrent Neural Networks, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-1650, https://doi.org/10.5194/egusphere-egu22-1650, 2022.

14:28–14:34 | EGU22-10495 | ECS | On-site presentation
Flash flood susceptibility modelling using advanced machine learning algorithms. Case study of the Rheraya watershed, Morocco (withdrawn)
Akram Elghouat, Ahmed Algouti, and Abdellah Algouti
14:34–14:40 | EGU22-6576 | Presentation form not yet defined
Silvia García, Raul Aquino, and Walter Mata

Natural disasters should be examined within a risk-perspective framework where both natural threat and vulnerability are considered as intricate components of an extremely complex equation. The trend toward more frequent floods and landslides in Mexico in recent decades is not only the result of more intense rainfall, but also a consequence of increased vulnerability. As a multifactorial element, vulnerability is a low-frequency modulating factor of the risk dynamics associated with intense rainfall. It can be described in terms of physical, social, and economic factors. For instance, deforested or urbanized areas are the physical and social factors that lead to the deterioration of watersheds and an increased vulnerability to intense rains. Increased watershed vulnerability due to land-cover changes is the primary factor leading to more floods, particularly over Pacific Mexico. In some parts of the country, such as Colima, the increased frequency of intense rainfall (i.e., the natural hazard) associated with high-intensity tropical cyclones and hurricanes is the leading cause of more frequent floods.

 

In this research, an intelligent rain-management system is presented. The system is built to forecast and simulate the components of risk, to establish communication between rescue/aid teams, and to help in preparedness activities (training). Detection, monitoring, analysis and forecasting of the hazards and scenarios that promote floods and landslides is its main task. The developed methodology is based on a database that makes it possible to relate heavy rainfall measurements to changes in land cover and use, terrain slope, basin compactness and community resilience as key vulnerability factors. A neural procedure is used for the spatial definition of exposure and susceptibility (intrinsic and extrinsic parameters), and machine learning techniques are applied to find the if-then relationships. The capability of the intelligent model for Colima, Mexico was tested by comparing the observed and modeled frequency of landslides and floods over a ten-year period. It was found that over most of the Mexican territory, more frequent floods are the result of a rapid deforestation process, and that landslides and their impact on communities are directly related to the unauthorized growth of populations in high geo-risk areas (due to forced migration driven by violence or extreme poverty) and to the development of civil infrastructure (mainly roads) with a high impact on the natural environment. Consequently, the intelligent rain-management system offers the possibility to redesign and plan land use and the spatial distribution of the poorest communities.

How to cite: García, S., Aquino, R., and Mata, W.: Swept Away: Flooding and landslides in Mexican poverty nodes, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-6576, https://doi.org/10.5194/egusphere-egu22-6576, 2022.

14:40–14:46 | EGU22-3283 | ECS | Virtual presentation
Rehenuma Lazin, Xinyi Shen, and Emmanouil Anagnostou

Every year, floods cause severe damage to cropland, leading to global food insecurity. As climate change continues, floods are predicted to become more frequent in the future. To cope with future climate impacts, mitigate damages, and ensure food security, it is now imperative to study future flood damage trends in cropland areas. In this study, we use a convolutional neural network (CNN) to estimate the damages (in acres) in the corn and soybean lands across the Midwestern USA with projections from climate models. Here, we extend the application of the CNN model developed by Lazin et al. (2021), which shows ~25% mean relative error for county-level flood-damaged crop loss estimation. The meteorological variables are derived from the reference gridMet dataset as predictors to train the model from 2008-2020. We then use downscaled climate projections from the Multivariate Adaptive Constructed Analogs (MACA) dataset in the trained CNN model to assess future flood damage patterns in cropland in the early (2011-2040), mid (2041-2070), and late (2071-2100) century, relative to the baseline historical period (1981-2010). Results derived from this study will help understand crop loss trends due to floods under climate change scenarios and plan the necessary arrangements to mitigate damages in the future.
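
A minimal sketch of the regression idea: gridded meteorological predictors for a county go through a small CNN that outputs flood-damaged crop acreage; grid size, channel count and layers are invented, not the configuration of Lazin et al. (2021).

```python
import numpy as np
import tensorflow as tf

X = np.random.rand(64, 32, 32, 5).astype("float32")  # 5 met variables on a grid
y = np.random.rand(64, 1).astype("float32")          # damaged area (scaled acres)

cnn = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(32, 32, 5)),
    tf.keras.layers.Conv2D(16, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(1),                        # county-level damage
])
cnn.compile(optimizer="adam", loss="mae")
cnn.fit(X, y, epochs=2, verbose=0)
# swapping gridMet-style inputs for MACA projections gives the future-trend runs
```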

 

Reference:

[1] Lazin, R., Shen, X., & Anagnostou, E. (2021). Estimation of flood-damaged cropland area using a convolutional neural network. Environmental Research Letters, 16(5), 054011.

How to cite: Lazin, R., Shen, X., and Anagnostou, E.: Assessment of Flood-Damaged Cropland Trends Under Future Climate Scenarios Using Convolutional Neural Network, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-3283, https://doi.org/10.5194/egusphere-egu22-3283, 2022.

Coffee break
Chairpersons: Andrea Toreti, Ivanka Pelivan
15:10–15:12
15:12–15:18 | EGU22-1510 | ECS | On-site presentation
Renaud Jougla, Manon Ahlouche, Morgan Buire, and Robert Leconte

Machine learning approaches for hydrological forecasting are nowadays common in research. The Artificial Neural Network (ANN) is one of the most popular, due to its good performance on watersheds with different hydrologic regimes and over several timescales. A short-term (1 to 7 days ahead) forecast model was explored to predict streamflow. This study focused on the summer season, defined as May to October. Cross-validation was done over a period of 16 years, each time keeping a single year as the validation set.

The ANN model was parameterized with a single hidden layer of 6 neurons. It was developed in a virtual environment based on datasets generated by the physically based distributed hydrological model Hydrotel (Fortin et al., 2012). In a preliminary analysis, several combinations of inputs were assessed, the best combining precipitation and temperature with surface soil moisture and antecedent streamflow. Different spatial discretizations were compared. A semi-distributed discretization was selected to facilitate transferring the ANN model from a virtual environment to real observations such as remote sensing soil moisture products or ground station time series.
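
A minimal sketch matching the stated configuration (a single hidden layer of six neurons; precipitation, temperature, surface soil moisture and antecedent streamflow as inputs); scikit-learn and the synthetic data are stand-ins for the authors' setup.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.random((2000, 4))   # [precipitation, temperature, soil moisture, Q(t-1)]
q_next = 0.6 * X[:, 3] + 0.3 * X[:, 0] + rng.normal(0, 0.02, 2000)  # streamflow

ann = MLPRegressor(hidden_layer_sizes=(6,), max_iter=3000, random_state=0)
ann.fit(X[:1600], q_next[:1600])    # train; the study leaves one year out instead
print("validation R2:", ann.score(X[1600:], q_next[1600:]))
```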

Four watersheds were under study: the Au Saumon and Magog watersheds located in southern Québec (Canada); the Androscoggin watershed in Maine (USA); and the Susquehanna watershed located in New York and Pennsylvania (USA). All but the Susquehanna watershed are mainly forested, the latter having 57% forest cover. To evaluate whether a model with a data-driven structure can mimic a deterministic model, ANN and Hydrotel simulated flows were compared. Results confirm that the ANN model can reproduce streamflow output from Hydrotel with confidence.

Soil moisture observation stations were deployed in the Au Saumon and Magog watersheds during the summers of 2018 to 2021. Meteorological data were extracted from the ERA5-Land reanalysis dataset. As the period of availability of observed data is short, the ANN model was trained in the virtual environment. Two validations were done: one in the virtual environment and one using real soil moisture observations and flows. The number and locations of the soil moisture probes differed slightly in each of the four summers; therefore, four models were trained depending on the number of probes and their location. Results highlight that the location of the soil moisture probes has a large influence on the ANN streamflow outputs and helps identify more representative sub-regions of the watershed.

The use of remote sensing data as inputs to the ANN model is promising. Soil moisture datasets from the SMOS and SMAP missions are available for the four watersheds under study, although downscaling approaches should be applied to bring the spatial resolution of those products to the watershed scale. Another future lead could be the development of a semi-distributed ANN model in a virtual environment based on a restricted selection of hydrological units with particular physiographic characteristics. The future L-band NISAR product could be relevant for this purpose, having a finer spatial resolution than SMAP and SMOS and a better penetration of the signal in forested areas than C-band SAR satellites such as Sentinel-1 and the RADARSAT Constellation Mission.

How to cite: Jougla, R., Ahlouche, M., Buire, M., and Leconte, R.: From virtual environment to real observations: short-term hydrological forecasts with an Artificial Neural Network model., EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-1510, https://doi.org/10.5194/egusphere-egu22-1510, 2022.

15:18–15:24 | EGU22-266 | ECS | Presentation form not yet defined
Annie Singla, Rajat Agrawal, and Aman Garg

According to UNDRR (2021), there were 389 reported disasters in 2020. These disasters claimed the lives of 15,080 people, affected 98.4 million people globally, and caused US$171.3 billion in economic damage. International agreements such as the Sendai Framework for Disaster Risk Reduction encourage the use of social media to strengthen disaster risk communication. With the advent of new technologies, social media has emerged as an important source of information in disaster management, and social media activity increases during disasters. Social media is the fourth most used platform for accessing emergency information. People seek to contact family and friends and search for food, water, transportation, and shelter. During cataclysmic events, the critical information posted on social media is immersed in irrelevant information. To assist and streamline emergency response, robust methodologies are required for extracting relevant information. This research study explores novel deep learning methods for automatically identifying the relevancy of disaster-related social media messages. The contributions of this study are three-fold. First, we present a hybrid deep learning-based framework to improve the classification of disaster-related social media messages. The data is gathered from the Twitter platform using the Search Application Programming Interface. Messages that contain information regarding the need for or availability of vital resources like food, water, and electricity, or that provide situational information, are categorized as relevant; the rest are categorized as irrelevant. To demonstrate the applicability and effectiveness of the proposed approach, it is applied to the thunderstorm and cyclone Fani datasets; both disasters occurred in India in 2019. Second, the performance of the proposed approach is compared with baseline methods, i.e., a convolutional neural network, a long short-term memory network, and a bidirectional long short-term memory network; the proposed approach outperforms the baselines. Performance is evaluated using multiple metrics: accuracy, precision, recall, F-score, area under the receiver operating characteristic curve, and area under the precision-recall curve. The accurate and inaccurate classifications are shown for both datasets. Third, to incorporate our evaluated models into a working application, we extend an existing application, DisDSS, which has been granted a copyright invention award by the Government of India. We call the newly extended system DisDSS 2.0, which integrates our framework to address the disaster-relevancy identification issue. The output from this research study helps disaster managers make effective and timely decisions. It bridges the gap between decision-makers and citizens during disasters through the lens of deep learning.
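
A toy sketch of one of the baselines named above (a bidirectional LSTM relevance classifier) in Keras; the vocabulary size, dimensions and the two example messages are invented, and the authors' hybrid model is not reproduced here.

```python
import tensorflow as tf

texts = ["need drinking water near the station", "lovely sunset today"]
labels = [1, 0]                      # 1 = relevant, 0 = irrelevant

vectorize = tf.keras.layers.TextVectorization(max_tokens=5000,
                                              output_sequence_length=30)
vectorize.adapt(texts)

model = tf.keras.Sequential([
    vectorize,                                          # text -> token ids
    tf.keras.layers.Embedding(5000, 64),                # ids -> vectors
    tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(32)),
    tf.keras.layers.Dense(1, activation="sigmoid"),     # relevance score
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(tf.constant(texts), tf.constant(labels), epochs=2, verbose=0)
```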

How to cite: Singla, A., Agrawal, R., and Garg, A.: DisDSS 2.0: A Multi-Hazard Web-based Disaster Management System to Identify Disaster-Relevancy of a Social Media Message for Decision-Making Using Deep Learning Techniques, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-266, https://doi.org/10.5194/egusphere-egu22-266, 2022.

15:24–15:30
|
EGU22-6758
|
Virtual presentation
Pankaj Kumar Dalela, Saurabh Basu, Sandeep Sharma, Anugandula Naveen Kumar, Suvam Suvabrata Behera, and Rajkumar Upadhyay

Effective communication systems supported by Information and Communication Technologies (ICTs) are integral components of comprehensive disaster management. Continuous warning monitoring, prediction, dissemination, and response coordination, along with public engagement, can build resilience and support Disaster Risk Reduction when the capabilities of emerging technologies, including Artificial Intelligence (AI), are utilized. For effective disaster management, an Integrated Alert System is therefore proposed that brings all concerned disaster management authorities and alert forecasting and dissemination agencies under a single umbrella for alerting the targeted public through various communication channels. An integral, AI-enhanced part of the system is a data-driven, citizen-centric Decision Support System, which can help disaster managers perform a complete impact assessment of disaster events through the configuration of decision models that learn the inter-relationships of different parameters. The system needs to be capable of identifying possible communication means for community outreach, predicting the scope of an alert, estimating the influence of an alert message on the targeted vulnerable population, performing crowdsourced data analysis, and evaluating disaster impact through threat maps and dashboards, thereby providing a complete analysis of the disaster event in all phases of disaster management. The system aims to address challenges posed by current systems, including limited utilization of communication channels and audience reach, language differences, and lack of ground information in decision-making, by utilizing state-of-the-art technologies.
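
The abstract does not specify a message format, but public alerting systems that span many channels commonly exchange warnings as OASIS Common Alerting Protocol (CAP) 1.2 messages; the following sketch (the sender, values, and helper function are hypothetical illustrations, not this system's implementation) shows what such a dissemination payload could look like:

    import uuid
    import xml.etree.ElementTree as ET
    from datetime import datetime, timezone

    CAP_NS = "urn:oasis:names:tc:emergency:cap:1.2"

    def build_cap_alert(event, severity, headline, area_desc):
        alert = ET.Element("alert", xmlns=CAP_NS)
        for tag, text in [
            ("identifier", str(uuid.uuid4())),
            ("sender", "alerts@example.org"),          # hypothetical sender
            ("sent", datetime.now(timezone.utc).isoformat()),
            ("status", "Actual"), ("msgType", "Alert"), ("scope", "Public"),
        ]:
            ET.SubElement(alert, tag).text = text
        info = ET.SubElement(alert, "info")
        for tag, text in [
            ("category", "Met"), ("event", event),
            ("urgency", "Immediate"), ("severity", severity),
            ("certainty", "Likely"), ("headline", headline),
        ]:
            ET.SubElement(info, tag).text = text
        area = ET.SubElement(info, "area")
        ET.SubElement(area, "areaDesc").text = area_desc
        return ET.tostring(alert, encoding="unicode")

    print(build_cap_alert("Cyclone", "Severe", "Cyclone warning", "Coastal district X"))

One payload in a standard format can then be fanned out to SMS gateways, cell broadcast, sirens, and web channels by separate dissemination adapters.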

How to cite: Dalela, P. K., Basu, S., Sharma, S., Kumar, A. N., Behera, S. S., and Upadhyay, R.: AI-enhanced Integrated Alert System for effective Disaster Management, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-6758, https://doi.org/10.5194/egusphere-egu22-6758, 2022.

15:30–15:36
|
EGU22-10276
|
On-site presentation
Elena Xoplaki, Andrea Toreti, Florian Ellsäßer, Muralidhar Adakudlu, Eva Hartmann, Niklas Luther, Johannes Damster, Kim Giebenhain, Andrej Ceglar, and Jackie Ma

The project DAKI-FWS (BMWi joint-project “Data and AI-supported early warning system to stabilise the German economy”; German: “Daten- und KI-gestütztes Frühwarnsystem zur Stabilisierung der deutschen Wirtschaft”) develops an early warning system (EWS) to strengthen economic resilience in Germany. The EWS enables better characterization of the development and course of pandemics or hazardous climate extreme events and can thus protect and support lives, jobs, land and infrastructures.

The weather and climate modules of DAKI-FWS use state-of-the-art seasonal forecasts for Germany and apply innovative AI approaches to prepare simulations at very high spatial resolution. These are used for the climate-related practical applications of the project, such as pandemics or subtropical/tropical diseases, and contribute to the estimation of the outbreak and evolution of health crises. Furthermore, the weather modules of the EWS objectively identify weather and climate extremes, such as heat waves, storms and droughts, as well as compound extremes, from a large pool of key datasets. The project work is complemented by the development and AI enhancement of the European Flood Awareness System model LISFLOOD and of a forecasting system for Germany at very high spatial resolution. Combined with the high-end output of the seasonal forecast, the model delivers accurate, high-resolution flood risk assessment. The final output of the EWS and the hazard maps not only support adaptation but also increase preparedness by providing a lead time of several months, thus increasing the resilience of economic sectors to the impacts of ongoing anthropogenic climate change. The weather and climate modules of the EWS provide economic, political, and administrative decision-makers and the general public with evidence on the probability of occurrence, intensity, and spatial and temporal extent of extreme events, as well as with critical information during a disaster.
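
As a minimal, hypothetical illustration of objective extreme identification (the DAKI-FWS criteria and data are not given in the abstract; file and variable names below are placeholders), heat waves can be flagged as runs of at least three days above a per-gridcell temperature percentile:

    import numpy as np
    import xarray as xr

    tmax = xr.open_dataset("tmax_germany.nc")["tasmax"]   # (time, lat, lon), hypothetical
    thresh = tmax.quantile(0.9, dim="time")               # per-gridcell 90th percentile
    hot = tmax > thresh                                   # boolean exceedance mask

    def run_flags(mask, min_len=3):
        """Flag time steps inside runs of at least min_len consecutive True values."""
        out = np.zeros(mask.shape, dtype=bool)
        run = 0
        for t in range(mask.shape[0]):
            run = run + 1 if mask[t] else 0
            if run >= min_len:
                out[t - min_len + 1 : t + 1] = True
        return out

    heatwave = xr.apply_ufunc(run_flags, hot,
                              input_core_dims=[["time"]],
                              output_core_dims=[["time"]],
                              vectorize=True)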

How to cite: Xoplaki, E., Toreti, A., Ellsäßer, F., Adakudlu, M., Hartmann, E., Luther, N., Damster, J., Giebenhain, K., Ceglar, A., and Ma, J.: Weather and climate in the AI-supported early warning system DAKI-FWS, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-10276, https://doi.org/10.5194/egusphere-egu22-10276, 2022.

15:36–15:42
|
EGU22-1662
|
On-site presentation
|
Stephen Haddad, Peter Killick, Aaron Hopkinson, Tomasz Trzeciak, Mark Burgoyne, and Susan Leadbetter

Digital Twins present a new user-centric paradigm for developing and using weather and climate simulations that is currently being widely embraced, for example through large projects such as Destination Earth led by ECMWF. In this project we have taken a smaller-scale approach to understanding the opportunities and challenges of translating the Digital Twin concept from its original domains of manufacturing and the built environment to modelling of the Earth's atmosphere.

We describe our approach to creating a Digital Twin based on the Met Office's atmospheric dispersion simulation package, NAME. We will discuss the advantages of doing so, such as enabling non-expert users to more easily produce scientifically valid simulations of dispersion events, such as industrial fires, and to obtain results that feed into downstream analysis, for example of health impacts. We will describe the requirements of each of the key components of a Digital Twin and potential implementation approaches.

We will describe how a Digital Twin framework enables multiple models to be joined together to model complex systems, as required for atmospheric concentrations around chemical spills or fires modelled by NAME. Overall, we outline a potential project blueprint for future work to improve the usability and scientific throughput of existing modelling systems by creating Digital Twins from current core modelling code and data-gathering systems.
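
As a sketch of the coupling idea only (the interfaces below are hypothetical and are not the Met Office or NAME design), a Digital Twin framework can wrap each model behind a common component interface so that a dispersion run feeds downstream impact analysis without expert glue code:

    from dataclasses import dataclass
    from typing import Dict, List, Protocol

    @dataclass
    class Field:
        name: str
        values: List[float]          # stand-in for a gridded concentration field

    class TwinComponent(Protocol):
        def run(self, inputs: Dict[str, Field]) -> Dict[str, Field]: ...

    class DispersionModel:
        """Toy stand-in for a NAME-like dispersion run with validated defaults."""
        def run(self, inputs):
            source = inputs["source_term"]
            return {"concentration":
                    Field("concentration", [v * 0.1 for v in source.values])}

    class HealthImpactModel:
        """Toy downstream component: flags exposure above a threshold."""
        def run(self, inputs):
            conc = inputs["concentration"]
            return {"exceedance":
                    Field("exceedance", [float(v > 1.0) for v in conc.values])}

    def run_pipeline(components, state):
        # Each component consumes what earlier components have produced
        for comp in components:
            state.update(comp.run(state))
        return state

    state = run_pipeline([DispersionModel(), HealthImpactModel()],
                         {"source_term": Field("source_term", [5.0, 20.0, 0.5])})
    print(state["exceedance"].values)    # [0.0, 1.0, 0.0]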

How to cite: Haddad, S., Killick, P., Hopkinson, A., Trzeciak, T., Burgoyne, M., and Leadbetter, S.: Exploring the challenges of Digital Twins for weather & climate through an Atmospheric Dispersion modelling prototype, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-1662, https://doi.org/10.5194/egusphere-egu22-1662, 2022.

15:42–15:48
|
EGU22-1230
|
Virtual presentation
|
Thomas Ward and Rinku Kanwar

Overview:

Operations Risk Insight (ORI) with Watson is an IBM AI application on the cloud. ORI analyzes thousands of news sources and alert services daily; there are too many data sources, warnings, watches and advisories for an individual to track. For example, during a week in 2021 with record wildfires, hurricanes and COVID hotspots across the US, thousands of risk events impacting key IBM points of interest globally were analyzed in real time.

Which events impacted IBM’s business, and which didn’t? ORI has saved IBM millions of dollars annually for the past 5 years.  Our non-profit disaster relief partners have used ORI to respond more effectively to the needs of the vulnerable groups impacted by disasters.  Find out how disaster response leaders identify severe risks using Watson, the Hybrid Cloud, Big Data, Machine Learning and AI.

Presentation Objectives:

The objectives of this session are:

  • Educate the audience on a pragmatic and relevant IBM internal use case for an AI-on-the-cloud application, using several Watson and The Weather Company APIs, plus machine learning running on IBM's cloud.
  • Obtain feedback and suggestions from the audience on how to expand and improve the machine learning and data analysis of this application to increase its value for natural disaster response leaders.
  • Inspire others to create their own grass-roots cognitive projects and learn more about AI and cloud technologies.
  • Discuss how this work relates to the Call for Code and how disaster relief agencies use it free of charge to assist the most vulnerable in society.

Reference links:

  • ORI has been featured in two Cloud Pak for Data (CP4D) workbooks: the CP4D Watson Studio Tutorial on Risk Analysis (https://dataplatform.cloud.ibm.com/analytics/notebooks/v2/f2ee8dbf-e6af-4b00-90ca-8f7fee77c377/view) and the Flood Risk Project (https://dataplatform.dev.cloud.ibm.com/exchange/public/entry/view/def444923c771f3f20285820dc072eac). Each demonstrates how machine learning can be applied to AI for Natural Disaster Management (NDM).
  • IBM use case for non-profit partners: https://newsroom.ibm.com/ORI-nonprofits-disaster
  • NC Tech article: https://www.ednc.org/nonprofits-and-artificial-intelligence-join-forces-for-covid-19-relief/
  • Supply Chain Management Review (SCMR) interview: https://www.scmr.com/article/nextgen_supply_chain_interview_tom_ward
  • Supply Chain navigator article: http://scnavigator.avnet.com/article/january-2017/the-missing-link/

How to cite: Ward, T. and Kanwar, R.: IBM Operations Risk Insights with Watson:  a multi-hazard risk, AI for Natural Disaster Management use case, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-1230, https://doi.org/10.5194/egusphere-egu22-1230, 2022.

15:48–15:54
|
EGU22-9406
|
On-site presentation
Luigi Cesarini, Rui Figueiredo, Xavier Romão, and Mario Martina

The built environment is constantly under the threat of natural hazards, and climate change will only exacerbate such perils. The assessment of natural hazard risk requires exposure models representing the characteristics of the assets at risk, which are crucial to subsequently estimate the damage and impacts of a given hazard to such assets. Studies addressing exposure assessment are expanding, in particular due to technological progress. In fact, several works introduce data collected from volunteered geographic information (VGI), user-generated content, and remote sensing. Although these methods generate large amounts of data, they typically require time-consuming extraction of the necessary information. Deep learning models are particularly well suited to perform this labour-intensive task due to their ability to handle massive amounts of data.

In this context, this work proposes a methodology that connects VGI obtained from OpenStreetMap (OSM), street-level imagery from Google Street View (GSV) and deep learning object detection models to create an exposure dataset of electrical transmission towers, an asset particularly vulnerable to strong winds, among other perils (e.g., ice loads and earthquakes). The main objective of the study is to establish and demonstrate a complete pipeline that first obtains the locations of transmission towers from the power grid layer of OSM's world infrastructure, and subsequently assigns relevant features to each tower based on the classification returned by an object detection model applied to street-level imagery of the tower obtained from GSV.

The study area for the initial application of the methodology is the Porto district (Portugal), which has an area of around 1360 km2 and 5789 transmission towers. The area was found to be representative given its diverse land use, containing both densely populated settlements and rural areas, and the different types of towers that can be found. A single-stage detector (YOLOv5) and a two-stage detector (Detectron2) were trained and used to perform identification and classification of towers. The first task tests the ability of a model to recognize whether a tower is present in an image, while the second assigns a category to each tower based on a taxonomy derived from a compilation of the most common tower types. Preliminary results on the test partition of the dataset are promising. For the identification task, YOLOv5 returned a mean average precision (mAP) of 87% at an intersection over union (IoU) of 50%, while Detectron2 reached a mAP of 91% at the same IoU. In the classification problem, the performance was also satisfactory, particularly when the models were trained on a sufficient number of images per class.

Additional analyses of the results can provide insights into the types of areas for which the methodology is more reliable. For example, in remote areas, the long distance of a tower from the street might prevent the object from being identified in the image. Nevertheless, the proposed methodology can in principle be used to generate exposure models of transmission towers at large spatial scales in areas for which the necessary datasets are available.
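
As an illustration of what the detection step can look like in code (a sketch only: the weights file, image names, and class label below are hypothetical placeholders for the study's fine-tuned models and GSV data), YOLOv5 models are commonly loaded and run through the public torch.hub interface:

    import torch

    # Hypothetical fine-tuned weights for tower detection/classification;
    # "custom" loading is part of the public YOLOv5 hub API.
    model = torch.hub.load("ultralytics/yolov5", "custom", path="tower_model.pt")
    model.conf = 0.5                      # confidence threshold for detections

    # Street-level images fetched beforehand, one per candidate tower
    # location taken from the OSM power-grid layer (file names hypothetical).
    images = ["tower_001.jpg", "tower_002.jpg"]

    results = model(images)               # batched inference
    df = results.pandas().xyxy[0]         # detections for the first image
    towers = df[df["name"] == "lattice_tower"]     # hypothetical class name
    print(towers[["xmin", "ymin", "xmax", "ymax", "confidence"]])

The predicted class of each detection can then be written back to the corresponding OSM tower location to populate the exposure dataset.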

How to cite: Cesarini, L., Figueiredo, R., Romão, X., and Martina, M.: Building exposure datasets using street-level imagery and deep learning object detection models, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-9406, https://doi.org/10.5194/egusphere-egu22-9406, 2022.

15:54–16:00
|
EGU22-6568
|
ECS
|
On-site presentation
Davide Mauro Ferrario, Elisa Furlan, Silvia Torresan, Margherita Maraschini, and Andrea Critto

In recent years there has been growing interest in Machine Learning (ML) for climate risk and multi-risk assessment, driven mainly by the growing amount of available data and the reduction of associated computational costs. Extracting information from spatio-temporal data is critically important for problems such as extreme event forecasting and assessing risks and impacts from multiple hazards. Typical challenges to which AI and ML are now being applied require understanding the dynamics of complex systems, which involve many features with non-linear relations and feedback loops; analysing the effects of phenomena happening at different time scales, such as slow-onset events (sea level rise) and short-term episodic events (storm surges, floods); and estimating the uncertainties of long-term predictions and scenarios.
While recent years have seen many successful applications of AI/ML, such as Random Forests or Long Short-Term Memory (LSTM) networks in flood and storm surge risk assessment, open questions and challenges remain. In fact, there is a lack of data for extreme events, and Deep Learning (DL) algorithms often need huge amounts of information to disentangle the relationships among the hazard, exposure and vulnerability factors contributing to the occurrence of risks. Moreover, the spatio-temporal resolution can be highly irregular and may need to be reconstructed to produce accurate and efficient models. For example, data from meteorological ground stations offer accurate records with fine temporal resolution but an irregular spatial distribution; on the other hand, satellite images give access to more spatially refined data but often lack the temporal dimension (fewer events available due to atmospheric disturbances).
Several techniques have been applied, ranging from classical multi-step forecasting, state-space and Hidden Markov models to DL techniques such as Artificial Neural Networks (ANN), Convolutional Neural Networks (CNN) and Recurrent Neural Networks (RNN). ANNs and Deep Generative Models (DGM) have been used to reconstruct spatio-temporal grids and model continuous time series, CNNs to exploit spatial relations, Graph Neural Networks (GNN) to extract multi-scale localized spatial features, and RNNs and LSTMs for multi-scale time series prediction.
To bridge these gaps, an in-depth state-of-the-art review of the mathematical and computer science innovations in ML/DL techniques that could be applied to climate and multi-risk assessment was undertaken. The review focuses on three possible ML/DL applications: analysis of the spatio-temporal dynamics of risk factors, with particular attention to applications for irregular spatio-temporal grids; multivariate analysis for multi-hazard interactions and multiple risk assessment endpoints; and analysis of future scenarios under climate change. We will present the main outcomes of the scientometric and systematic review of publications across the 2000–2021 timeframe, which allowed us to: i) summarize keywords and word co-occurrence networks, ii) highlight linkages, working relations and co-citation clusters, iii) compare ML and DL approaches with classical statistical techniques, and iv) explore applications at the forefront of the risk assessment community.
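
As a small illustration of the keyword co-occurrence networks mentioned above (the records below are hypothetical stand-ins, not data from the survey), such a network can be built by linking keywords that appear on the same publication record:

    from itertools import combinations
    import networkx as nx

    records = [                      # hypothetical keyword sets, one per paper
        {"LSTM", "flood", "risk assessment"},
        {"CNN", "storm surge", "risk assessment"},
        {"LSTM", "storm surge"},
    ]

    G = nx.Graph()
    for keywords in records:
        for a, b in combinations(sorted(keywords), 2):
            w = G.edges[a, b]["weight"] + 1 if G.has_edge(a, b) else 1
            G.add_edge(a, b, weight=w)     # edge weight = co-occurrence count

    # Rank keyword pairs by how often they co-occur across the corpus
    for a, b, d in sorted(G.edges(data=True), key=lambda e: -e[2]["weight"]):
        print(f"{a} -- {b}: {d['weight']}")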

How to cite: Ferrario, D. M., Furlan, E., Torresan, S., Maraschini, M., and Critto, A.: Harnessing Machine Learning and Deep Learning applications for climate change risk assessment: a survey, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-6568, https://doi.org/10.5194/egusphere-egu22-6568, 2022.

16:00–16:06
|
EGU22-11872
|
ECS
|
On-site presentation
María González-Calabuig, Jordi Cortés-Andrés, Miguel-Ángel Fernández-Torres, and Gustau Camps-Valls

Droughts constitute one of the costliest natural hazards and have seriously destructive effects on the ecological environment, agricultural production and socio-economic conditions. Their elusive and subjective definition, due to the complex physical, chemical and biological processes of the Earth system they involve, makes their management an arduous challenge for researchers as well as decision- and policy-makers. We present here our most recent advances in machine learning models along three complementary lines of research on droughts: monitoring, forecasting and understanding. While monitoring or detection is about obtaining time series of drought maps and discovering underlying patterns and correlations, forecasting or prediction aims to anticipate future droughts. Last but not least, understanding or explaining models by means of expert-comprehensible representations is as important as accurately addressing these tasks, especially for their deployment in real scenarios. Thanks to the emergence and success of deep learning, all of these tasks can be tackled by designing spatio-temporal data-driven approaches built on climate variables (soil moisture, precipitation, temperature, vegetation health, etc.) and/or satellite imagery. The possibilities are endless, from the design of convolutional architectures and attention mechanisms to the use of generative models such as Normalizing Flows (NFs) or Generative Adversarial Networks (GANs), trained in both supervised and unsupervised manners, among others. Different application examples in Europe from 2003 onwards are provided, with the aim of reflecting on the possibilities of the strategies proposed and of foreseeing alternatives and future lines of development. For that purpose, we make use of several variables at mesoscale (1 km) spatial and 8-day temporal resolution included in the Earth System Data Cube (ESDC) [Mahecha et al., 2020] for drought detection, while high-resolution (20 m, 5 days) Sentinel-2 data cubes, extracted from the extreme summer track in EarthNet2021 [Requena-Mesa et al., 2021], are considered for forecasting.
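
As a minimal, hypothetical sketch of one cube-based drought-detection ingredient (file and variable names are placeholders, and this is not one of the models described above), a standardized soil-moisture anomaly can be computed from an ESDC-like subset with xarray:

    import xarray as xr

    sm = xr.open_dataset("esdc_subset.nc")["soil_moisture"]   # (time, lat, lon)

    # Standardized anomaly with respect to the monthly climatology
    clim_mean = sm.groupby("time.month").mean("time")
    clim_std = sm.groupby("time.month").std("time")
    anom = (sm.groupby("time.month") - clim_mean).groupby("time.month") / clim_std

    drought = anom < -1.5                        # flag strong dry anomalies
    extent = drought.mean(["lat", "lon"])        # drought area fraction over time

Deep learning detectors and forecasters would then operate on, or be evaluated against, such indicator fields rather than the raw variables.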

References

Mahecha, M. D., Gans, F., Brandt, G., Christiansen, R., Cornell, S. E., Fomferra, N., ... & Reichstein, M. (2020). Earth system data cubes unravel global multivariate dynamics. Earth System Dynamics, 11(1), 201-234.

Requena-Mesa, C., Benson, V., Reichstein, M., Runge, J., & Denzler, J. (2021). EarthNet2021: A large-scale dataset and challenge for Earth surface forecasting as a guided video prediction task. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (pp. 1132-1142).

How to cite: González-Calabuig, M., Cortés-Andrés, J., Fernández-Torres, M.-Á., and Camps-Valls, G.: Recent Advances in Deep Learning for Spatio-Temporal Drought Monitoring, Forecasting and Model Understanding, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-11872, https://doi.org/10.5194/egusphere-egu22-11872, 2022.

16:06–16:12
|
EGU22-11787
|
ECS
|
Presentation form not yet defined
Michele Ronco, Ioannis Prapas, Spyros Kondylatos, Ioannis Papoutsis, Gustau Camps-Valls, Miguel-Ángel Fernández-Torres, Maria Piles Guillem, and Nuno Carvalhais

Deep learning models have been remarkably successful in a number of different fields, yet their application to disaster management is obstructed by the lack of transparency and trust that characterises artificial neural networks. This is particularly relevant in the Earth sciences, where fitting is only a tiny part of the problem and process understanding becomes more relevant [1,2]. In this regard, plenty of eXplainable Artificial Intelligence (XAI) algorithms have been proposed in the literature over the past few years [3]. We suggest that combining saliency maps with interpretable approximations, such as LIME, is useful to extract complementary insights and reach robust explanations. We address the problem of wildfire forecasting, for which interpreting the model's predictions is of crucial importance for putting effective mitigation strategies into action. Daily risk maps have been obtained by training a convolutional LSTM on ten years of spatio-temporal features, including weather variables, remote sensing indices and static layers for land characteristics [4]. We show how the use of XAI allows us to interpret the predicted fire danger, thereby narrowing the gap between black-box approaches and disaster management.
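
For concreteness, the gradient-based saliency ingredient can be sketched generically as follows (the toy model and input shapes below are stand-ins, not the trained convolutional LSTM of the study):

    import torch

    def saliency_map(model, x):
        """Absolute input gradient of the predicted fire-danger score."""
        model.eval()
        x = x.clone().requires_grad_(True)       # (1, features, H, W)
        score = model(x).squeeze()               # scalar danger score
        score.backward()
        return x.grad.abs().squeeze(0)           # per-feature, per-pixel relevance

    # Toy stand-in model so the sketch is self-contained
    model = torch.nn.Sequential(
        torch.nn.Conv2d(10, 1, kernel_size=3, padding=1),
        torch.nn.AdaptiveAvgPool2d(1),
        torch.nn.Flatten(),
    )
    sal = saliency_map(model, torch.randn(1, 10, 32, 32))
    print(sal.shape)                             # torch.Size([10, 32, 32])

An interpretable local surrogate such as LIME can then be fitted around the same inputs, and agreement between the two attributions taken as evidence of a robust explanation.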

References

[1] Camps-Valls, G., Tuia, D., Zhu, X. X., and Reichstein, M. (Eds.): Deep Learning for the Earth Sciences: A Comprehensive Approach to Remote Sensing, Climate Science and Geosciences, Wiley & Sons, 2021.

[2] Reichstein, M., Camps-Valls, G., Stevens, B., Denzler, J., Carvalhais, N., Jung, M., and Prabhat: Deep learning and process understanding for data-driven Earth System Science, Nature, 566, 195–204, 2019.

[3] Samek, W., Montavon, G., Vedaldi, A., Hansen, L. K., and Müller, K.-R. (Eds.): Explainable AI: Interpreting, Explaining and Visualizing Deep Learning, LNCS, vol. 11700, Springer, 2019.

[4] Prapas, I., Kondylatos, S., Papoutsis, I., Camps-Valls, G., Ronco, M., Fernández-Torres, M.-Á., Piles Guillem, M., and Carvalhais, N.: Deep Learning Methods for Daily Wildfire Danger Forecasting, arXiv:2111.02736, 2021.

How to cite: Ronco, M., Prapas, I., Kondylatos, S., Papoutsis, I., Camps-Valls, G., Fernández-Torres, M.-Á., Piles Guillem, M., and Carvalhais, N.: Explainable deep learning for wildfire danger estimation, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-11787, https://doi.org/10.5194/egusphere-egu22-11787, 2022.

16:12–16:18
|
EGU22-12432
|
ECS
|
On-site presentation
|
Maria João Sousa, Alexandra Moutinho, and Miguel Almeida

The increased frequency, intensity, and severity of wildfire events in several regions across the world has highlighted hindrances in disaster response infrastructure that call for enhanced intelligence-gathering pipelines. In this context, interest in the use of unmanned aerial vehicles for surveillance and active fire monitoring has been growing in recent years. However, several roadblocks challenge the implementation of these solutions due to their high autonomy requirements and energy-constrained nature. For these reasons, the focus of artificial intelligence development on large models hampers the development of models suitable for deployment onboard these platforms. Thus, while artificial intelligence approaches can be an enabling technology that effectively scales real-time monitoring services and optimizes emergency response resources, the design of these systems imposes: (i) data requirements, (ii) computing constraints and (iii) communications limitations. Here, we propose a decentralized approach, reflecting upon these three vectors.

Data-driven artificial intelligence is central both to handling multimodal sensor data in real time and to annotating the large amounts of data collected, which are necessary to build robust safety-critical monitoring systems. Nevertheless, these two objectives have distinct computational implications, because the first must happen on board, whereas the second can leverage higher processing capabilities off board. While the autonomy of robotic platforms drives mission performance, and is a key reason for the need for edge computing of onboard sensor data, the communications design is essential to mission endurance, as relaying large amounts of data in real time is unfeasible energy-wise.

For these reasons, real-time processing and data annotation must be tackled in a complementary manner, instead of the general practice of only targeting overall accuracy improvements. To build wildfire intelligence at the edge, we propose developments on two tracks: (i) data annotation and (ii) on-the-edge deployment. The need for considerable effort in these two avenues stems from their very distinct development requirements and performance evaluation metrics. On the one hand, improving data annotation capacity is essential to build high-quality databases that can provide better sources for machine learning. On the other hand, for on-the-edge deployment, architectures need to compromise on robustness and architectural parsimony in order to be efficient for edge processing. Whereas the first objective is driven foremost by accuracy, the second must emphasize timeliness.
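
As one example of the kind of deployment compromise discussed above (quantization is a standard technique chosen here for illustration, not a method named in the abstract), post-training dynamic quantization shrinks a model's linear weights to int8 for CPU-bound onboard hardware:

    import torch

    model = torch.nn.Sequential(        # toy stand-in for a detection-model head
        torch.nn.Linear(256, 128),
        torch.nn.ReLU(),
        torch.nn.Linear(128, 2),
    )

    quantized = torch.quantization.quantize_dynamic(
        model, {torch.nn.Linear}, dtype=torch.qint8
    )
    out = quantized(torch.randn(1, 256))   # same interface, smaller weights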

Acknowledgments
This work was supported by FCT – Fundação para a Ciência e a Tecnologia, I.P., through IDMEC, under project Eye in the Sky, PCIF/SSI/0103/2018, and through IDMEC, under LAETA, project UIDB/50022/2020. M. J. Sousa acknowledges the support from FCT, through the Ph.D. Scholarship SFRH/BD/145559/2019, co-funded by the European Social Fund (ESF).

How to cite: Sousa, M. J., Moutinho, A., and Almeida, M.: Building wildfire intelligence at the edge: bridging the gap from development to deployment, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-12432, https://doi.org/10.5194/egusphere-egu22-12432, 2022.

16:18–16:24
|
EGU22-7129
|
ECS
|
Virtual presentation
|
Stathis G Arapostathis

The main purpose of this research is to present the latest findings on automatic methods for processing social network data to develop seismic intensity maps. As a case study, the author selected the 2020 Samos earthquake (Mw = 7, 30 October 2020, Greece), which had significant consequences for the urban environment, along with 2 deaths and 19 injuries. Initially, an automatic approach recently presented in the international literature was applied, producing seismic intensity maps from tweets. Furthermore, initial findings on the use of machine learning in various parts of the automatic methodology are presented, along with the potential of using photos posted on social networks. The data comprise several thousand tweets and Instagram posts. The results provide vital findings on enriching data sources and data types and on effective rapid processing.
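
As a heavily simplified, hypothetical illustration of the flavour of text processing such pipelines can start from (the cue phrases and intensity levels below are invented for illustration and are not the published methodology):

    INTENSITY_CUES = {            # hypothetical cue -> tentative intensity level
        "felt slightly": 3,
        "everything shook": 5,
        "objects fell": 6,
        "walls cracked": 7,
        "building collapsed": 9,
    }

    def tentative_intensity(text):
        """Return the strongest intensity cue found in a post, or None."""
        text = text.lower()
        hits = [level for cue, level in INTENSITY_CUES.items() if cue in text]
        return max(hits) if hits else None

    print(tentative_intensity("Everything shook and objects fell off the shelves"))  # 6

Geolocating such per-post estimates and interpolating them spatially is what turns the stream of messages into an intensity map.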

How to cite: Arapostathis, S. G.: The Samos earthquake event (Mw = 7, 30 October 2020, Greece) as case study for applying machine learning on texts and photos scraped from social networks for developing seismic intensity maps., EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-7129, https://doi.org/10.5194/egusphere-egu22-7129, 2022.

16:24–16:30
|
EGU22-7915
|
ECS
|
On-site presentation
Hannah Vogel, Hamed Amiri, Oliver Plümper, Suzanne Hangx, and Martyn Drury

Induced subsidence and seismicity caused by the production of hydrocarbons in the Groningen gas field (the Netherlands) are a widely known issue facing this naturally aseismic region (Smith et al., 2019). Extraction reduces pore-fluid pressure, leading to the accumulation of small elastic and inelastic strains and an increase in effective vertical stress, driving compaction of the reservoir sandstones.

Recent studies (Pijnenburg et al., 2019a, b; Verberne et al., 2021) identify grain-scale deformation of intergranular and grain-coating clays as largely responsible for accommodating (permanent) inelastic deformation at the small strains relevant to production (≤1.0%). However, their distribution, microstructure, abundance, and contribution to inelastic deformation remain unconstrained, presenting challenges when evaluating grain-scale deformation mechanisms within a natural system. Traditional methods of mineral identification are costly, labor-intensive, and time-consuming. Digital imaging coupled with machine-learning-driven segmentation is necessary to accelerate the identification of clay microstructures and distributions within reservoir sandstones for later large-scale analysis and geomechanical modeling.

We performed digital imaging on thin sections taken from core recovered from the highly depleted Zeerijp ZRP-3a well, located at the most seismogenic part of the field. The core was kindly made available by the field operator, NAM. Optical digital images were acquired using the Zeiss AxioScan optical light microscope at 10x magnification with a resolution of 0.44 µm and compared to backscattered electron (BSE) digital images from the Zeiss EVO 15 Scanning Electron Microscope (SEM) at varying magnifications, with resolutions ranging from 0.09 µm to 2.24 µm. Digital images were processed in ilastik, an interactive machine-learning-based toolkit for image segmentation that uses a Random Forest classifier to separate clays from a digital image (Berg et al., 2019).

Comparisons between segmented optical and BSE digital images indicate that image resolution is the main limiting factor for successful mineral identification and image segmentation, especially for clay minerals. Lower-resolution digital images obtained using optical light microscopy may be sufficient to segment larger intergranular/pore-filling clays, but higher-resolution BSE images are necessary to segment smaller micron- to submicron-sized grain-coating clays. Comparing the same segmented optical image (~11.5% clay) and BSE image (~16.3% clay) reveals an error of ~30%, illustrating the risk of underestimating the clay content needed for geomechanical modeling.

Our analysis shows that coupling automated electron microscopy with machine-learning-driven image segmentation has the potential to provide statistically relevant and robust information to further constrain the role of clay films in the compaction behavior of reservoir rocks.
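
The pixel-classification idea behind ilastik-style segmentation can be sketched as follows (an illustrative stand-in using scikit-learn and scikit-image; the feature set, labels, and image are hypothetical, not the exact ilastik configuration or our data):

    import numpy as np
    from skimage import filters
    from sklearn.ensemble import RandomForestClassifier

    def pixel_features(img):
        """Stack intensity, edge and smoothed-intensity features per pixel."""
        feats = [img, filters.sobel(img), filters.gaussian(img, sigma=2)]
        return np.stack([f.ravel() for f in feats], axis=1)

    img = np.random.rand(256, 256)            # stand-in for a BSE image
    labels = np.zeros(img.size, dtype=int)    # 0 = unlabelled, 1 = clay, 2 = other
    labels[:200] = 1                          # hypothetical sparse annotations
    labels[200:400] = 2

    X = pixel_features(img)
    mask = labels > 0
    clf = RandomForestClassifier(n_estimators=100).fit(X[mask], labels[mask])

    segmented = clf.predict(X).reshape(img.shape)
    clay_fraction = (segmented == 1).mean()   # area fraction classified as clay

The appeal of this approach is that a few hand-labelled pixels per class are enough to propagate a segmentation over the full image, which is what makes large-scale clay quantification tractable.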

References:

Berg, S., et al.: Nat. Methods, 16, 1226–1232, 2019.

Nederlandse Aardolie Maatschappij BV (NAM), 2015.

Pijnenburg, R. P. J., et al.: J. Geophys. Res.: Solid Earth, 124, 2019a.

Pijnenburg, R. P. J., et al.: J. Geophys. Res.: Solid Earth, 124, 5254–5282, 2019b.

Smith, J. D., et al.: J. Geophys. Res.: Solid Earth, 124, 6165–6178, 2019.

Verberne, B. A., et al.: Geology, 49(5), 483–487, 2021.

How to cite: Vogel, H., Amiri, H., Plümper, O., Hangx, S., and Drury, M.: Applications of digital imaging coupled with machine-learning for aiding the identification, analysis, and quantification of intergranular and grain-coating clays within reservoirs rocks., EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-7915, https://doi.org/10.5194/egusphere-egu22-7915, 2022.

16:30–16:36
|
EGU22-7711
|
On-site presentation
|
Yueting Li, Claudia Zoccarato, Noemi Friedman, András Benczúr, and Pietro Teatini

Earth fissures associated with groundwater pumping are a severe geohazard jeopardizing many subsiding basins, generally in arid countries (e.g., Mexico, Arizona, Iran, China, Pakistan). Fissures up to 15 km long, 1–2 m wide, 15–20 m deep, and with more than 2 m of vertical dislocation have been reported. A common geological condition favoring the occurrence of earth fissures is the presence of a shallow bedrock ridge buried by compacting sedimentary deposits. This study aims to improve the understanding of this mechanism by evaluating the effects of various factors on the risk of fissure formation and development. Several parameters playing a role in fissure occurrence have been considered, such as the shape of the bedrock ridge, the aquifer thickness, the pressure depletion in the aquifer system, and its compressibility. A realistic case is developed in which the characteristics of the fissure, such as displacements and stresses, are quantified with the aid of a numerical approach based on finite elements for the continuum and interface elements for the discretization of the fissures. Modelling results show that the presence of a bedrock ridge causes tension to accumulate around its tip and results in fissure opening from the land surface downward after long-term piezometric depletion. Different global sensitivity analysis methods are applied to measure the importance of each single factor (or group of factors) on the quantity of interest, i.e., the fissure opening. A conventional variance-based method is first presented, with Sobol indices computed from Monte Carlo simulations, although its accuracy is only guaranteed with a high number of forward simulations. As alternatives, generalized polynomial chaos expansion and gradient boosting trees are introduced to approximate the forward model and implement the corresponding sensitivity assessment at a significantly reduced computational cost. All the measures provide similar results that highlight the importance of the bedrock ridge in earth fissuring. Generally, the steeper the bedrock ridge, the higher the risk of significant fissure opening. Pore pressure depletion is a secondary key factor that is essential for fissure formation.
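
As a sketch of the variance-based workflow (using the SALib library; the parameter names, ranges, and the cheap analytic stand-in for the finite-element model output below are illustrative assumptions, not the study's setup):

    import numpy as np
    from SALib.sample import saltelli
    from SALib.analyze import sobol

    problem = {
        "num_vars": 4,
        "names": ["ridge_slope", "aquifer_thickness", "depletion", "compressibility"],
        "bounds": [[10, 80], [50, 500], [0.5, 5.0], [1e-10, 1e-8]],
    }

    def fissure_opening(x):          # toy stand-in for the FE model output (m)
        slope, thick, dp, cm = x
        return 1e6 * cm * dp * slope / np.sqrt(thick)

    X = saltelli.sample(problem, 1024)             # Saltelli sampling scheme
    Y = np.apply_along_axis(fissure_opening, 1, X)
    Si = sobol.analyze(problem, Y)
    print(dict(zip(problem["names"], Si["S1"])))   # first-order Sobol indices

In the surrogate-based variants, the expensive forward evaluations in Y are replaced by predictions from a polynomial chaos expansion or a gradient boosting tree fitted to a much smaller set of finite-element runs.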

How to cite: Li, Y., Zoccarato, C., Friedman, N., Benczúr, A., and Teatini, P.: Global sensitivity analyses to characterize the risk of earth fissures in subsiding basins, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-7711, https://doi.org/10.5194/egusphere-egu22-7711, 2022.