ITS1.1/NH0.1 | Artificial Intelligence for Natural Hazard and Disaster Management
EDI
Co-organized by ESSI1/NP4
Convener: Raffaele Albano | Co-conveners: Ivanka Pelivan, Elena Xoplaki, Andrea Toreti, Monique Kuglitsch
Orals | Wed, 26 Apr, 08:30–10:12 (CEST) | Room 0.94/95
Posters on site | Attendance Wed, 26 Apr, 16:15–18:00 (CEST) | Hall X4
Posters virtual | Attendance Wed, 26 Apr, 16:15–18:00 (CEST) | vHall NH
Artificial intelligence (in particular, machine learning) can be used to predict and respond to natural disasters. The ITU/WMO/UNEP Focus Group on AI for Natural Disaster Management (FG-AI4NDM) is building a community of experts and stakeholders to identify best practices in the use of AI for data processing, improved modeling across spatiotemporal scales, and effective communication. This multidisciplinary FG-AI4NDM session invites contributions addressing challenges and opportunities related to the use of AI for the detection, forecasting, and communication of natural hazards and disasters. In particular, it welcomes presentations highlighting innovative approaches to data collection (e.g., via sensor networks), data handling (e.g., via automated annotation), data storage and transmission (e.g., via edge and cloud computing), novel modeling or explainability methods (e.g., integrating quantum computing methods), and outcomes of operational implementation.

Orals: Wed, 26 Apr | Room 0.94/95

Chairpersons: Raffaele Albano, Monique Kuglitsch, Ivanka Pelivan
08:30–08:32
08:32–08:42 | EGU23-5913 | ITS1.1/NH0.1 | ECS | On-site presentation
Nikolaos Ioannis Bountos, Dimitrios Michail, Themistocles Herekakis, Angeliki Thanasou, and Ioannis Papoutsis

Artificial intelligence (AI) methods have emerged as a powerful tool to study and, in some cases, forecast natural disasters [1,2]. Recent works have successfully combined deep learning with scientific knowledge from the SAR interferometry domain, propelling research on tasks such as monitoring volcanic activity and the ground deformation associated with it [3]. A milestone in this interdisciplinary field has been the release of the Hephaestus [4] InSAR dataset, facilitating automatic InSAR interpretation, volcanic activity localization, and the detection and categorization of atmospheric contributions in wrapped interferograms. Hephaestus contains annotations for approximately 20,000 InSAR frames, covering the 44 most active volcanoes in the world. The annotation was performed by a team of InSAR experts who manually examined each InSAR frame individually. However, even with such a large dataset, class imbalance remains a challenge: the InSAR samples containing volcano deformation fringes are orders of magnitude fewer than those that do not. This is expected, since natural hazards are inherently rare. To counter this, the authors of Hephaestus provide more than 100,000 unlabeled InSAR frames to be used for global large-scale self-supervised learning, which is more robust to class imbalance than supervised learning [5].

Motivated by the Hephaestus dataset and the insights provided by [2], we train global, task-agnostic models in a self-supervised fashion that can handle distribution shifts caused by spatio-temporal variability as well as major class imbalances. By fine-tuning such a model on the labeled part of Hephaestus, we obtain the backbone for a global volcanic activity alerting system, namely Pluto. Pluto is a novel end-to-end AI-based system that provides early warnings of volcanic unrest on a global scale.

Pluto automatically synchronizes its database with the COMET-LiCS [6] portal to receive newly generated Sentinel-1 InSAR data acquired over volcanic areas. The new samples are fed to our volcanic activity detection model. If volcanic activity is detected, an automatic email is sent to the service users containing information about the intensity, the exact location, and the type (Mogi, Sill, Dyke) of the event. To ensure a robust and ever-improving service, we augment Pluto with an iterative pipeline that collects samples that were misclassified in production and uses them to further improve the existing model.

 

[1] Kondylatos et al. "Wildfire danger prediction and understanding with Deep Learning." Geophysical Research Letters 49.17 (2022): e2022GL099368.

[2] Bountos et al. "Self-supervised contrastive learning for volcanic unrest detection." IEEE Geoscience and Remote Sensing Letters 19 (2021): 1-5.

[3] Bountos et al. "Learning from Synthetic InSAR with Vision Transformers: The case of volcanic unrest detection." IEEE Transactions on Geoscience and Remote Sensing (2022).

[4] Bountos et al. "Hephaestus: A large scale multitask dataset towards InSAR understanding." Proceedings of the IEEE/CVF CVPR. 2022.

[5] Liu et al. "Self-supervised learning is more robust to dataset imbalance." arXiv preprint arXiv:2110.05025 (2021).

[6] Lazecký et al. "LiCSAR: An automatic InSAR tool for measuring and monitoring tectonic and volcanic activity." Remote Sensing 12.15 (2020): 2430.

How to cite: Bountos, N. I., Michail, D., Herekakis, T., Thanasou, A., and Papoutsis, I.: Pluto: A global volcanic activity early warning system powered by large scale self-supervised deep learning on InSAR data, EGU General Assembly 2023, Vienna, Austria, 23–28 Apr 2023, EGU23-5913, https://doi.org/10.5194/egusphere-egu23-5913, 2023.

08:42–08:52 | EGU23-8419 | ITS1.1/NH0.1 | ECS | On-site presentation
Filippo Dainelli, Riccardo Taormina, Guido Ascenso, Enrico Scoccimarro, Matteo Giuliani, and Andrea Castelletti

Extra-Tropical Cyclones (ETCs) are major systems governing the atmospheric structure at mid-latitudes. They are characterised by strong winds and heavy precipitation, and can cause considerable storm surges that are potentially devastating for coastal regions. The availability of historical observations of the extreme events caused by intense ETCs is rather limited, hampering risk evaluation. Increasing the amount of significant data available would substantially help several fields of analysis influenced by these events, such as coastal management, agricultural production, energy distribution, air and maritime transportation, and risk assessment and management.

Here, we address the possibility of generating synthetic ETC atmospheric fields of mean sea level pressure, wind speed, and precipitation in the North Atlantic by training a Generative Adversarial Network (GAN). The purpose of a GAN is to learn the distribution of a training set through a game-theoretic scenario in which two networks, the generator and the discriminator, compete against each other. The former is trained to generate synthetic examples that are plausible and resemble the real ones. The input of the generator is a vector of random Gaussian values, whose domain is known as the “latent space”. The discriminator learns to distinguish whether an example comes from the dataset distribution or from the generator. The competition set up by the game-theoretic approach improves the networks until the counterfeits are indistinguishable from the originals.

To train the GAN, we use atmospheric fields extracted from the ERA5 reanalysis dataset in the geographic domain with boundaries 0°–90°N, 70°W–20°E and for the period 1 January 1979 to 1 January 2020. We analyse the generated samples’ histograms, the samples’ average fields, and the Wasserstein distance and Kullback-Leibler divergence between the generated samples and the test set distributions. Results show that the generative model has learned the distribution of the values of the atmospheric fields and the general spatial trends of the atmosphere in the domain. To better evaluate the atmospheric structure learned by the network, we perform linear and spherical interpolations in the latent space. Specifically, we consider four cyclones and compare the frames of their tracks to those of the synthetic tracks generated by interpolation. The interpolated tracks show interesting features consistent with the original tracks. These findings suggest that GANs can learn meaningful representations of ETC fields, encouraging further investigations to model the tracks’ temporal evolution.
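The spherical interpolation used to probe the latent space can be sketched in a few lines. This is an illustrative stand-alone implementation, not the authors' code; the short vectors below stand in for the (much higher-dimensional) Gaussian latent vectors of a GAN.

```python
import math

def slerp(t, v0, v1):
    """Spherical interpolation between latent vectors v0 and v1 for t in [0, 1].

    Follows the great-circle arc between the two vectors, which tends to keep
    interpolants on the high-density shell of a Gaussian latent prior
    (unlike straight linear interpolation, which cuts through the interior).
    """
    dot = sum(a * b for a, b in zip(v0, v1))
    n0 = math.sqrt(sum(a * a for a in v0))
    n1 = math.sqrt(sum(b * b for b in v1))
    # Clamp for numerical safety before taking the arccosine.
    cos_omega = max(-1.0, min(1.0, dot / (n0 * n1)))
    omega = math.acos(cos_omega)
    if omega < 1e-8:  # (near-)parallel vectors: fall back to linear interpolation
        return [(1 - t) * a + t * b for a, b in zip(v0, v1)]
    s0 = math.sin((1 - t) * omega) / math.sin(omega)
    s1 = math.sin(t * omega) / math.sin(omega)
    return [s0 * a + s1 * b for a, b in zip(v0, v1)]
```

Decoding a sequence of `slerp(t, z_a, z_b)` points for t from 0 to 1 with the trained generator yields the synthetic "tracks" between two cyclone states.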

How to cite: Dainelli, F., Taormina, R., Ascenso, G., Scoccimarro, E., Giuliani, M., and Castelletti, A.: Synthetic Generation of Extra-Tropical Cyclones’ fields with Generative Adversarial Networks, EGU General Assembly 2023, Vienna, Austria, 23–28 Apr 2023, EGU23-8419, https://doi.org/10.5194/egusphere-egu23-8419, 2023.

08:52–09:02 | EGU23-8944 | ITS1.1/NH0.1 | ECS | On-site presentation
Marthe Wens, Raed Hamed, Hans de Moel, Marco Massabo, and Anna Mapelli

Understanding the relationships between different drought drivers and observed drought impacts can provide important information for early warning systems and drought management planning. Moreover, these relationships can help inform the definition and delineation of drought events. Currently, however, drought hazards are often characterized by their frequency of occurrence rather than by the impacts they cause. A more data-driven depiction of “impactful drought events”, whereby droughts are defined by the hydrometeorological conditions that have historically led to observable impacts, has the potential to be more meaningful for drought risk assessments.

In our research, we apply a data-mining method based on association rules, namely fast-and-frugal decision trees, to link different drought hazard indices to agricultural impacts. This machine learning technique is able to select the most relevant drought hazard drivers (among both hydrological and meteorological indices) and their thresholds associated with “impactful drought events”. The technique can be used to assess the likelihood of occurrence of several impact severities, hence it supports the creation of a loss exceedance curve and estimates of average annual loss. An additional advantage is that such data-driven relations in essence reflect varying local drought vulnerabilities, which are difficult to quantify in data-scarce regions.
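To illustrate the structure of a fast-and-frugal decision tree, the sketch below checks one cue at a time and exits immediately on the first cue that fires. The cue names (an SPI-3 precipitation index and a soil-moisture index), thresholds, and impact labels here are purely hypothetical, not the drivers and thresholds selected in the study.

```python
def fft_classify(indices, cues):
    """Classify one drought season with a fast-and-frugal tree.

    `indices` maps index name -> observed value; `cues` is an ordered list of
    (name, threshold, exit_label) tuples. If a value falls below its threshold,
    the tree exits immediately with that label; otherwise the next cue is checked.
    """
    for name, threshold, exit_label in cues:
        if indices[name] < threshold:
            return exit_label
    return "no impact"  # final exit: no cue fired

# Hypothetical two-cue tree: a very dry SPI-3 alone signals severe loss;
# otherwise low soil moisture signals moderate loss.
tree = [("spi3", -1.5, "severe loss"),
        ("soil_moisture", 0.2, "moderate loss")]
```

Each exit label can be tied to an observed impact severity, so counting how often each exit fires over a historical record yields the loss exceedance curve mentioned above.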

This contribution exemplifies the use of fast-and-frugal decision trees to estimate (agricultural) drought risk in the Volta Basin and its riparian countries. We find that some agriculture-dependent regions in Ghana, Togo and Côte d’Ivoire face annual average drought-induced maize production losses of up to 3M USD, while in Burkina Faso losses average 50 USD/ha per year. In general, there is a clear north-south gradient in drought risk, which we find is augmented under projected climate conditions. Climate change is estimated to worsen drought impacts in the Volta Basin, with 11 regions facing increases in annual average losses of more than 50%.

We show that the proposed multi-variate, impact-based, non-parametric, machine learning approach can improve the evaluation of droughts, as this approach directly leverages observed drought impact information to demarcate impactful drought events. We evidence that the proposed technique can support quantitative drought risk assessments which can be used for geographic comparison of disaster losses at a sub-national scale.

How to cite: Wens, M., Hamed, R., de Moel, H., Massabo, M., and Mapelli, A.: Towards probabilistic impact-based drought risk analysis – a case study on the Volta Basin, EGU General Assembly 2023, Vienna, Austria, 23–28 Apr 2023, EGU23-8944, https://doi.org/10.5194/egusphere-egu23-8944, 2023.

09:02–09:12 | EGU23-9091 | ITS1.1/NH0.1 | On-site presentation
Mathieu Turgeon-Pelchat, Heather McGrath, Fatemeh Esfahani, Simon Tolszczuk-Leclerc, Thomas Rainville, Nicolas Svacina, Lingjun Zhou, Zarrin Langari, and Hospice Houngbo

The Canada Centre for Mapping and Earth Observation (CCMEO) uses RADARSAT Constellation Mission (RCM) data for near-real-time flood mapping. One of the many advantages of SAR sensors is that they are less affected by cloud coverage and atmospheric conditions than optical sensors. RCM has been used operationally since 2020 and employs 3 satellites, enabling lower revisit times and increased imagery coverage. The team responsible for producing flood maps in the context of emergency response is able to produce maps within four hours of data acquisition. Although the results from their automated system are good, there are some limitations, requiring manual intervention to correct the data before publication; the main limitations are in urban and vegetated areas. Work started in 2021 to make use of deep learning algorithms, namely convolutional neural networks (CNNs), to improve the performance of the automated production of flood inundation maps. The training dataset makes use of the former maps created by the emergency response team and comprises over 80 SAR images and corresponding digital elevation models (DEMs) at multiple locations in Canada. The training and test images were split into smaller tiles of 256 x 256 pixels, for a total of 22,469 training tiles and 6,821 test tiles. The current implementation uses a U-Net architecture from the NRCan geo-deep-learning pipeline (https://github.com/NRCan/geo-deep-learning). To measure the performance of the model, the intersection over union (IoU) metric is used. The model achieves 83% IoU for extracting water and flood from background areas over the test tiles. Next steps include increasing the number of different geographical contexts in the training set, towards the integration of the model into production.
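The IoU metric used to score the segmentation can be computed directly from the predicted and reference flood masks. A minimal sketch (not the geo-deep-learning implementation), with masks flattened to 0/1 sequences:

```python
def iou(pred, truth):
    """Intersection over union for binary masks given as flat 0/1 sequences.

    IoU = |pred AND truth| / |pred OR truth|; an IoU of 0.83 means the
    predicted flood pixels overlap the reference flood pixels by 83%.
    """
    inter = sum(1 for p, t in zip(pred, truth) if p == 1 and t == 1)
    union = sum(1 for p, t in zip(pred, truth) if p == 1 or t == 1)
    return inter / union if union else 1.0  # two empty masks agree perfectly
```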

How to cite: Turgeon-Pelchat, M., McGrath, H., Esfahani, F., Tolszczuk-Leclerc, S., Rainville, T., Svacina, N., Zhou, L., Langari, Z., and Houngbo, H.: Improving near real-time flood extraction pipeline from SAR data using deep learning, EGU General Assembly 2023, Vienna, Austria, 23–28 Apr 2023, EGU23-9091, https://doi.org/10.5194/egusphere-egu23-9091, 2023.

09:12–09:22 | EGU23-9426 | ITS1.1/NH0.1 | ECS | On-site presentation
Johanna Strebl, Julia Gottfriedsen, Dominik Laux, Max Helleis, and Volker Tresp

Over the past couple of years, changes in global climate have been turning wildfires into an increasingly unpredictable phenomenon. Many environmental parameters that have been linked to wildfires, such as the number of consecutive hot days, are becoming increasingly unstable. This leads to a twofold problem: adequate fire risk assessment is at the same time more important and more difficult than ever.

In the past, physical models were the prevalent approach to most questions in the domain of wildfire science. While they tend to provide accurate and transparent results, they require domain expertise and often tedious manual data collection.

In recent years, increased computation capabilities and the improved availability of remote sensing data associated with the new space movement have made deep learning a beneficial approach. Data-driven approaches often yield state-of-the-art performance without requiring expert knowledge, at a fraction of the complexity of physical models. The downside, however, is that they are often opaque and offer no insights into their inner algorithmic workings.

We want to shed some light on this interpretability/performance tradeoff and compare different approaches for predicting wildfire hazard. We evaluate their strengths and weaknesses with a special focus on explainability. We built a wildfire hazard model for South America based on a spatiotemporal CNN architecture that infers fire susceptibility from environmental conditions that led to fire in the past. The training data used contains selected ECMWF ERA5 Land variables and ESA world cover information. This means that our model is able to learn from actual fire conditions instead of relying on theoretical frameworks. Unlike many other models, we do not make simplifying assumptions such as a standard fuel type, but calculate hazard ratings based on actual environmental conditions. Compared to classical fire hazard models, this approach allows us to account for regional and atypical fire behavior and makes our model readily adaptable and trainable for other ecosystems, too.

The ground truth labels are derived by fusing active fire remote sensing data from 20 different satellites into one active wildfire cluster dataset. The problem itself is highly imbalanced, with non-fire pixels making up 99.78% of the training data. Therefore, we evaluate the ability of our model to correctly predict wildfire hazard using metrics for imbalanced data such as PR-AUC and F1 score. We also compare the results against selected standard fire hazard models such as the Canadian Fire Weather Index (FWI).
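The choice of imbalance-aware metrics can be motivated with a small sketch: F1 is built only from true positives, false positives, and false negatives, so the enormous true-negative (non-fire) count cannot inflate it the way it inflates plain accuracy.

```python
def f1_score(tp, fp, fn):
    """F1 from confusion counts; insensitive to the (huge) true-negative count."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# With 99.78% non-fire pixels, a model that never predicts fire scores
# ~99.78% accuracy yet F1 = 0 -- hence metrics like F1 and PR-AUC.
```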

In addition, we assess the computational complexity and speed of calculating the respective models and consider the accuracy/complexity/speed tradeoff of the different approaches. Furthermore, we aim to provide insights why and how our model makes its predictions by leveraging common explainability methods. This allows for insights into which factors tend to influence wildfire hazard the most and to optimize for relatively lightweight, yet performant and transparent architectures.

How to cite: Strebl, J., Gottfriedsen, J., Laux, D., Helleis, M., and Tresp, V.: Fire hazard modelling with remote sensing data for South America, EGU General Assembly 2023, Vienna, Austria, 23–28 Apr 2023, EGU23-9426, https://doi.org/10.5194/egusphere-egu23-9426, 2023.

09:22–09:32 | EGU23-13083 | ITS1.1/NH0.1 | On-site presentation
Agnieszka Indiana Olbert, Sogol Moradian, and Galal Uddin

Flood early warning systems are vital for preventing flood damage and reducing disaster risks. Such systems are particularly important for forecasting compound events, where multiple, often dependent flood drivers co-occur and interact. In this research, an early warning system for the prediction of coastal-fluvial floods is developed to provide a robust, cost-effective and time-efficient framework for the management of flood risks and impacts. This three-step method combines a cascade of three linked models: (1) a statistical model that determines the probabilities of multiple-driver flood events, (2) a hydrodynamic model forced by outputs from the statistical model, and (3) a machine learning (ML) model that uses hydrodynamic outputs from various probability flood events to train the ML algorithm to predict the spatially and temporally variable inundation patterns resulting from a combination of coastal and fluvial flood drivers occurring simultaneously.

The method has been applied to Cork City, located in the south-west of Ireland, which has a long history of fluvial-coastal flooding. The River Lee, channelling through the city centre, may generate a substantial flood when the downstream river flow draining to the estuary coincides with sea water propagating upstream on a flood tide. For this hydrological domain, the statistical model employs univariate extreme value analysis and copula functions to calculate the joint probabilities of river discharges and sea water levels (astronomical tides and surge residuals) occurring simultaneously. The return levels for these two components along a return level curve produced by the copula function are used to generate synthetic time series, which serve as water level boundary conditions for a hydrodynamic flood model. The multi-scale nested flood model (MSN_Flood) was configured for Cork City at 2 m resolution to simulate unsteady, non-uniform flow in the River Lee and flood wave propagation over urban floodplains. The ensemble hydrodynamic model outputs are ultimately used to train and test a range of machine learning models for the prediction of flood extents and water depths. In total, 23 machine learning algorithms, including Artificial Neural Network, Decision Tree, Gaussian Process Regression, Linear Regression, Radial Basis Function, Support Vector Machine, and Support Vector Regression, were employed to confirm that ML algorithms can be used successfully to predict flood inundation depths over urban floodplains for a given set of compound flood drivers. Here, the flood conditioning factors taken into account are limited to the upstream flood hydrographs and downstream sea water level time series. To evaluate model performance, different statistical skill scores were computed. Results indicate that in most pixels the Gaussian Process Regression model performs better than the other models.
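The copula step can be illustrated with a Gumbel copula, a common choice for upper-tail-dependent hydrological drivers (the abstract does not state which copula family was fitted, so this is an assumption for illustration only). Given the marginal non-exceedance probabilities u and v of river discharge and sea level, the copula gives the probability that both drivers exceed their quantiles simultaneously:

```python
import math

def gumbel_copula(u, v, theta):
    """Gumbel copula C(u, v) = exp(-[(-ln u)^theta + (-ln v)^theta]^(1/theta)).

    theta >= 1 controls upper-tail dependence; theta = 1 is independence.
    """
    s = (-math.log(u)) ** theta + (-math.log(v)) ** theta
    return math.exp(-(s ** (1.0 / theta)))

def joint_exceedance(u, v, theta):
    """P(U > u, V > v): both drivers exceed their marginal quantiles."""
    return 1.0 - u - v + gumbel_copula(u, v, theta)
```

Dependence matters: for two 100-year marginals (u = v = 0.99), independence (theta = 1) gives a joint exceedance of 1e-4, while a dependent pair (theta = 2) makes the compound event tens of times more likely. The reciprocal of the joint exceedance (scaled by the event inter-arrival time) gives the joint return period used to pick design scenarios along the return level curve.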

The main contribution of this research is to demonstrate that ML models can be used in early warning systems for flood prediction, and to give insight into the most suitable models in terms of robustness, accuracy, effectiveness, and speed. The findings demonstrate that ML models help in mapping flood water propagation and assessing flood risk under various compound flood scenarios.

How to cite: Olbert, A. I., Moradian, S., and Uddin, G.: Machine learning modelling of compound flood events, EGU General Assembly 2023, Vienna, Austria, 23–28 Apr 2023, EGU23-13083, https://doi.org/10.5194/egusphere-egu23-13083, 2023.

09:32–09:42 | EGU23-14126 | ITS1.1/NH0.1 | ECS | On-site presentation
Tobias Bauer, Julia Miller, Julia Gottfriedsen, Christian Mollière, Juan Durillo Barrionuevo, and Nicolay Hammer

Climate change is one of the most pressing challenges to humankind today. The number and severity of wildfires are increasing in many parts of the world, with record-breaking temperatures, prolonged heat waves, and droughts. We can minimize the risks and consequences of these natural disasters by providing accurate and timely wildfire progression predictions through fire spread modeling. Knowing the direction and rate of spread of wildfires over the next hours can help deploy firefighting resources more efficiently and warn nearby populations hours in advance to allow safe evacuation.
Physics-based spread models have proven their applicability on a regional scale but often require detailed spatial input data; in addition, running them in real-time scenarios can be slow, inhibiting fast output generation. Deep learning-based models have shown success in specific fire spread scenarios in recent years, but they are limited by their transferability to other regions, their explainability, and longer training times. Accurate active fire data products and a fast data pipeline are additional essential requirements of a wildfire spread early-warning system.
In this study, physical models are compared to a deep learning-based CNN approach in terms of computational speed, area accuracy, and spread direction. We use a dataset of the 30 largest wildfires in the US in the year 2021 to evaluate the performance of the model’s predictions.
This work focuses in particular on the optimization of a cloud-based fire spread modeling data pipeline for near-real-time fire progression over the next 2 to 24 hours. We describe our data pipeline, including the collection and pre-processing of ignition points derived from remote sensing-based active fire detections. Furthermore, we use data from SRTM-1 for topography, ESA Land Cover and Corine Land Cover for fuel composition, and ERA5 reanalysis products for weather data inputs. The physics-based models build on the open-source library ForeFire to create and execute physical wildfire spread models from single fire ignition points as well as fire fronts. The predictions of the ForeFire model serve as a benchmark for evaluating the performance of our Convolutional Neural Network (CNN), which forecasts the fire outline based on a spatiotemporal U-Net architecture.
The scaling of the algorithms to a global setting is enabled by the Leibniz Supercomputing Centre, which supports large-scale cloud-based machine learning to provide a time-sensitive solution for operational fire spread modeling in emergency management based on real-time remote sensing information.

How to cite: Bauer, T., Miller, J., Gottfriedsen, J., Mollière, C., Durillo Barrionuevo, J., and Hammer, N.: ML-based fire spread model and data pipeline optimization, EGU General Assembly 2023, Vienna, Austria, 23–28 Apr 2023, EGU23-14126, https://doi.org/10.5194/egusphere-egu23-14126, 2023.

09:42–09:52 | EGU23-15711 | ITS1.1/NH0.1 | On-site presentation
Filippo Catani, Sansar Raj Meena, Lorenzo Nava, Kushanav Bhuyan, Silvia Puliero, Lucas Pedrosa Soares, Helen Cristina Dias, and Mario Floris

Multiple-landslide events often occur across the world and have the potential to cause significant harm to both human life and property. Although a substantial amount of research has been conducted on mapping landslides using Earth Observation (EO) data, several gaps and uncertainties remain when developing models intended to be operational at the global scale. To address this issue, we present HR-GLDD, a high-resolution (HR) dataset for landslide mapping composed of landslide instances from ten different physiographical regions globally: South and South-East Asia, East Asia, South America, and Central America. The dataset contains five rainfall-triggered and five earthquake-triggered multiple-landslide events that occurred in varying geomorphological and topographical regions. HR-GLDD is one of the first datasets for landslide detection generated from high-resolution satellite imagery, useful for artificial intelligence applications in landslide segmentation and detection studies. Five state-of-the-art deep learning models were used to test the transferability and robustness of HR-GLDD. Moreover, two recent landslide events were used to test the performance and usability of the dataset for the detection of newly occurring significant landslide events. The deep learning models showed similar results at individual test sites, indicating the robustness of the dataset for such purposes. HR-GLDD is openly accessible and has the potential to calibrate and develop models that produce reliable inventories from high-resolution satellite imagery after the occurrence of new significant landslide events. HR-GLDD will be updated regularly by integrating data from new landslide events.

How to cite: Catani, F., Meena, S. R., Nava, L., Bhuyan, K., Puliero, S., Pedrosa Soares, L., Dias, H. C., and Floris, M.: A globally distributed dataset using generalized DL for rapid landslide mapping on HR satellite imagery, EGU General Assembly 2023, Vienna, Austria, 23–28 Apr 2023, EGU23-15711, https://doi.org/10.5194/egusphere-egu23-15711, 2023.

09:52–10:02 | EGU23-3928 | ITS1.1/NH0.1 | On-site presentation
Remy Vandaele, Sarah L Dance, and Varun Ojha

We investigate the use of CCTV cameras and deep learning to automatically monitor trash screen blockage. 

Trash screens are installed to prevent debris from entering critical parts of river networks (pipes, tunnels, locks, ...). When debris piles up at a trash screen, it may block the waterway and can cause flooding. It is thus crucial to clean blocked trash screens to avoid flooding and consequent damage. Currently, maintenance crews must manually check a camera or river level data, or go on site to inspect the screen, to know whether it needs cleaning. This wastes valuable time in emergency situations where blocked screens must be urgently cleaned (e.g., in case of forecast heavy rainfall). Some initial attempts at predicting trash screen blockage exist, but these have not been widely adopted in practice. CCTV cameras can easily be installed at any location and can thus be used to monitor the state of trash screens, but the images need to be processed by an automated algorithm to determine whether the screen is blocked.

With the help of UK-based practitioners (the Environment Agency and local councils), we have created a dataset of 40,000 CCTV trash screen images coming from 36 cameras, each labelled with blockage information. Using this database, we have compared three deep learning approaches to automate the detection of trash screen blockage:

  • A binary image classifier, which takes as input a single image and outputs a binary label estimating whether the trash screen is blocked.
  • An anomaly detection approach, which tries to reconstruct the input image with an auto-encoder trained on images of clean trash screens; blocked trash screens are consequently detected as anomalies by the auto-encoder.
  • An image similarity estimation approach based on a Siamese network, which takes as input two images and outputs a similarity index related, in our case, to whether both images contain trash.

Using performance criteria chosen in discussion with practitioners (overall accuracy, false alarm rate, resilience to luminosity changes and moving fields of view, computing capabilities), we show that deep learning can be used in practice to automate the identification of blocked trash screens. We also analyse the strengths and weaknesses of each of these approaches and provide guidelines for their application.

How to cite: Vandaele, R., Dance, S. L., and Ojha, V.: Comparison of deep learning approaches to monitor trash screen blockage from CCTV cameras, EGU General Assembly 2023, Vienna, Austria, 23–28 Apr 2023, EGU23-3928, https://doi.org/10.5194/egusphere-egu23-3928, 2023.

10:02–10:12 | EGU23-11756 | ITS1.1/NH0.1 | Highlight | On-site presentation
Shunichi Koshimura and Erick Mas

Digital twins are now recognized as digital copies of physical-world objects, stored in digital space and used to simulate the sequences and consequences of target phenomena. By incorporating physical-world data into the digital twin, developers and users have a full view of the target through real-time feedback. Recent advances in high-performance computing and large-scale data fusion of sensing and observations of both natural and social phenomena are enhancing the applicability of the digital twin paradigm to natural disaster research. Artificial intelligence (AI) and machine learning are also being applied ever more widely across the world and contribute as essential elements of digital twins. These have significant implications for disaster response and recovery, holding out the promise of dramatically improving our understanding of disaster-affected areas and responses in real time.

A project is underway to enhance the resilience of disaster response systems by constructing a “Disaster Digital Twin” to support disaster response teams in the anticipated tsunami disaster. The “Disaster Digital Twin” platform fuses real-time hazard simulation (e.g., tsunami inundation forecasting), social sensing to identify the dynamically exposed population, and multi-agent simulation of disaster response activities to find the optimal allocation or strategy of response efforts, thereby enhancing disaster resilience.

To achieve the goal of innovating digital twin computing for enhancing disaster resilience, four preliminary results are shown:

(1) Developing a nation-wide real-time tsunami inundation and damage forecast system. The priority target for forecasting is the Pacific coast of Japan, a region where a Nankai Trough earthquake is likely to occur.

(2) Establishing a real-time estimate of the population exposed in the inundation zone and clarifying the relationship between the exposed population and medical demand.

(3) Developing a reinforcement learning-based multi-agent simulation of medical activities in the affected areas that uses damage information, medical demands, and the resources of medical facilities to find the optimal allocation of medical response.

(4) Developing a digital twin computing platform to support disaster medical response activities and find optimal allocation of disaster medical services through what-if analysis of multi-agent simulation.

How to cite: Koshimura, S. and Mas, E.: Digital twin computing for enhancing resilience of disaster response system, EGU General Assembly 2023, Vienna, Austria, 23–28 Apr 2023, EGU23-11756, https://doi.org/10.5194/egusphere-egu23-11756, 2023.

Posters on site: Wed, 26 Apr, 16:15–18:00 | Hall X4

Chairpersons: Andrea Toreti, Elena Xoplaki, Raffaele Albano
Introduction
X4.1
|
EGU23-16626
|
ITS1.1/NH0.1
|
ECS
|
solicited
Grith Martinsen, Yann Sweeney, Jonas Wied Pedersen, Roxana Alexandru, Sergi Capape, Charlotte Harris, Michael Butts, and Maria Diaz

Fluvial and flash floods can have devastating effects if they occur without warning. In Denmark, managing flood risk and performing preventative emergency service actions have been the sole responsibility of local municipalities. However, motivated by the disastrous 2021 floods in Central Europe, the Danish government has recently appointed the Danish Meteorological Institute (DMI) as the national authority for flood warnings in Denmark, and DMI is in the process of building capacity to fulfill this role.

 

One of the most cost-effective ways to mitigate flood damages is a well-functioning early warning system. Flood warning systems can rely on various methods ranging from human interpretation of meteorological and hydrological data to advanced hydrological modelling. The aim of this study is to generate short-range streamflow predictions in Danish river systems with lead times of 4-12 hours. To do so, we train and test models with hourly data on 172 catchments.

 

Machine learning (ML) models have in many cases been shown to outperform traditional hydrological models and offer efficient ways to learn patterns in historical data. Here, we investigate streamflow predictions with LightGBM, which is a gradient boosting framework that employs tree-based ML algorithms and is developed and maintained by Microsoft (Ke et al., 2017). The main argument for choosing a tree-based algorithm is its inherent ability to represent rapid dynamics often observed during flash floods. The main advantages of LightGBM over other tree-based algorithms are efficiency in training and lower memory consumption. We benchmark LightGBM’s performance against persistence, linear regression and various LSTM setups from the Neural Hydrology library (Kratzert et al., 2022).

 

We evaluate the algorithm trained with different input features. This analysis includes model-explainability techniques such as SHAP, and the results indicate that simply using lagged real-time observations of streamflow together with precipitation yields the best-performing and most parsimonious models. The results show that the LightGBM setup outperforms the benchmarks and generates predictions with high Kling-Gupta Efficiency scores (> 0.9) in most catchments. Compared to the persistence benchmark, it shows especially strong improvements in peak-timing errors.

How to cite: Martinsen, G., Sweeney, Y., Pedersen, J. W., Alexandru, R., Capape, S., Harris, C., Butts, M., and Diaz, M.: Danish national early warning system for flash floods based on a gradient boosting machine learning framework, EGU General Assembly 2023, Vienna, Austria, 23–28 Apr 2023, EGU23-16626, https://doi.org/10.5194/egusphere-egu23-16626, 2023.

X4.2
|
EGU23-12716
|
ITS1.1/NH0.1
|
ECS
|
Nirlipta Pande and Wouter Dorigo

Humans significantly control the natural environment and natural processes. Global fire ignitions are a prime example of how human actions change the frequency of occurrence of otherwise rare events like wildfires. However, human controls on fire ignition are insufficiently characterised by global fire models because impacts are often indirect, complex, and collinear. Hence, modelling fire activity while considering the complex relationships amongst the input variables and their effect on global ignitions is crucial to developing fire models reflecting the real world. 

This presentation leverages causal inference and machine learning frameworks applied to global datasets of fire ignitions from Earth observations and their potential drivers to uncover anthropogenic pathways of fire ignition. Potential fire controls include human predictors from Earth observations and statistical data, combined with variables traditionally associated with fire activity, such as weather and vegetation abundance and state, derived from Earth observations and models.

Our research models causal relationships between fire-control variables and global ignitions using directed acyclic graphs (DAGs). Here, every edge symbolises a relation between two variables: its weight indicates the strength of the relationship, and its orientation signifies the direction of cause and effect. However, defining a fire-ignition distribution with DAGs is challenging owing to the large combinatorial sample space and the acyclicity constraint. We use Bayesian structure learning to make these approximations and to infer the extent of human intervention when combined with climate variables and vegetation properties. Our research demonstrates the need for causal modelling and for the inclusion of anthropogenic factors in global fire modelling.
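As a toy illustration of the DAG representation (the edges and weights here are invented for illustration, not the study's learned structure), pure Python suffices to encode a weighted causal graph, verify acyclicity, and rank the direct causes of ignitions:

```python
# Hypothetical causal graph of fire-ignition drivers:
# (source, target) -> assumed strength of the causal link.
edges = {
    ("population_density", "ignitions"): 0.6,
    ("road_density", "ignitions"): 0.4,
    ("air_temperature", "fuel_dryness"): 0.7,
    ("vegetation_cover", "fuel_dryness"): 0.3,
    ("fuel_dryness", "ignitions"): 0.8,
}

def is_acyclic(edges):
    """Kahn's algorithm: a graph is a DAG iff every node can be topologically sorted."""
    nodes = {n for e in edges for n in e}
    indeg = {n: 0 for n in nodes}
    for _, dst in edges:
        indeg[dst] += 1
    queue = [n for n in nodes if indeg[n] == 0]
    seen = 0
    while queue:
        n = queue.pop()
        seen += 1
        for (src, dst) in edges:
            if src == n:
                indeg[dst] -= 1
                if indeg[dst] == 0:
                    queue.append(dst)
    return seen == len(nodes)  # all nodes sorted => no cycle

assert is_acyclic(edges)  # a valid causal structure must contain no cycles

# Direct causes (parents) of the target variable, ranked by edge strength.
parents = sorted(((w, s) for (s, d), w in edges.items() if d == "ignitions"),
                 reverse=True)
print([s for _, s in parents])  # strongest assumed driver first
```

Bayesian structure learning would search over such graphs rather than fix one by hand; the acyclicity check above is the constraint every candidate must satisfy.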

How to cite: Pande, N. and Dorigo, W.: Investigating causal effects of anthropogenic factors on global fire modeling, EGU General Assembly 2023, Vienna, Austria, 23–28 Apr 2023, EGU23-12716, https://doi.org/10.5194/egusphere-egu23-12716, 2023.

X4.3
|
EGU23-4757
|
ITS1.1/NH0.1
Hansaem Kim

Earthquake-induced land deformation and structural failure are more severe over soft soils than over firm soils and rock, owing to seismic site effects and liquefaction. Site-specific seismic site effects related to the amplification of ground motion, liquefaction, and landslides carry spatial uncertainty that depends on local subsurface, surface-geological, and topographic conditions. When the 2017 Pohang earthquake (M 5.4), South Korea's second-strongest earthquake in decades, occurred, severe damage influenced by variable site response and vulnerability indicators was observed, concentrated in basin and basin-edge regions overlain by unconsolidated Quaternary sediments. Nationwide site characterization is therefore essential, considering empirical correlations between geotechnical site-response and hazard parameters and surface proxies. Furthermore, when many variables are involved and their correlations are tenuous, machine learning classification models can prove more precise than parametric methods. This study established a multivariate seismic site classification system using machine learning techniques on a geospatial big-data platform.

Supervised machine learning classification techniques, specifically random forest, support vector machine (SVM), and artificial neural network (ANN) algorithms, were adopted. Supervised algorithms analyze a set of labeled training data consisting of input data paired with desired output values and produce an inferred function that can be used for prediction from new input data. To optimize the classification criteria while accounting for geotechnical uncertainty and local site effects, training datasets transformed by principal component analysis (PCA) were verified with k-fold cross-validation. The best training algorithm was then selected using evaluation measures derived from the confusion matrix, namely the receiver operating characteristic (ROC) curve and the area under it (AUC).
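A minimal sketch of this workflow (synthetic data and scikit-learn stand-ins, not the study's actual models or features) chains standardization, PCA, and a random forest, scored by cross-validated AUC:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Synthetic stand-in for geospatial features (e.g. Vs30, slope, layer
# thickness) with a binary hazard label; the real study predicts
# multi-class hazard levels.
X, y = make_classification(n_samples=500, n_features=10, n_informative=5,
                           random_state=0)

# Standardize, reduce with PCA, then classify with a random forest;
# model quality is judged by AUC under k-fold cross-validation.
clf = make_pipeline(StandardScaler(),
                    PCA(n_components=5),
                    RandomForestClassifier(n_estimators=200, random_state=0))
scores = cross_val_score(clf, X, y, cv=5, scoring="roc_auc")
print(round(float(scores.mean()), 3))
```

For the multi-class hazard labels described below, the same pipeline would be scored with a one-vs-rest AUC instead of the binary one shown here.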

For the southeastern region of South Korea, borehole log information (strata, standard penetration tests, etc.), a geological map (1:50k scale), a digital terrain model (5 m × 5 m resolution), and a soil map (1:250k scale) were collected and assembled as geospatial big data. As a preliminary step, to build spatially coincident datasets of geotechnical response parameters and surface proxies, mesh-type geospatial information was constructed using advanced geostatistical interpolation and simulation methods.

Site classification systems use seismic hazard parameters related to the geotechnical characteristics of the study area as classification criteria. The current site classification systems in South Korea and the United States recommend Vs30, the average shear-wave velocity (Vs) over the uppermost 30 m of ground. This criterion uses only the dynamic characteristics of the site, without considering their geometric distribution. Thus, the geospatial information here included geo-layer thickness, surface proxies (elevation, slope, geological category, soil category), and Vs30. For liquefaction and landslide hazard estimation, liquefaction vulnerability indexes (e.g., liquefaction potential or severity index) and landslide vulnerability indexes (e.g., factor of safety or displacement) were also included as input features in the classifier. Finally, the composite status with respect to seismic site effects, liquefaction, and landslides was predicted as a hazard class (i.e., safe, slight, moderate, or extreme failure) using the best-fitting classifier.

How to cite: Kim, H.: Machine Learning-based Site Classification System for Earthquake-Induced Multi-Hazard in South Korea, EGU General Assembly 2023, Vienna, Austria, 23–28 Apr 2023, EGU23-4757, https://doi.org/10.5194/egusphere-egu23-4757, 2023.

X4.4
|
EGU23-4816
|
ITS1.1/NH0.1
|
ECS
Jordi Cortés-Andrés, Maria Gonzalez-Calabuig, Mengxue Zhang, Tristan Williams, Miguel-Ángel Fernández-Torres, Oscar J. Pellicer-Valero, and Gustau Camps-Valls

The automatic anticipation and detection of extreme events constitute a major challenge in the current context of climate change, which has altered their likelihood and intensity. One of the main objectives of the EXtreme Events: Artificial Intelligence for Detection and Attribution (XAIDA) project (https://xaida.eu/) is to develop novel approaches for the detection and localization of extreme events, such as tropical cyclones, severe convective storms, heat waves, droughts, and persistent winter extremes. Here we introduce the XAIDA4Detection toolbox, which tackles generic problems of detection and characterization. The open-source toolbox integrates a set of advanced ML models ranging in complexity, assumptions, and sophistication, and yields spatio-temporally explicit detection maps with probabilistic heatmap estimates. It includes supervised and unsupervised methods, deterministic and probabilistic models, neural networks based on convolutional and recurrent architectures, and density-based methods. The toolbox is intended for scientists, engineers, and students with basic knowledge of extreme events, outlier detection techniques, and deep learning (DL), as well as of Python programming with basic packages (NumPy, Scikit-learn, Matplotlib) and DL packages (PyTorch, PyTorch Lightning). This presentation will summarize the available features and their potential to be adapted to multiple extreme-event problems and use cases.
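The toolbox's own API is not reproduced here; as a generic sketch of the density-based detection idea it supports (synthetic data, with scikit-learn's IsolationForest standing in for the toolbox's methods), an anomaly score can be computed per grid cell and reshaped into a heatmap:

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Synthetic "temperature anomaly" field on a 20x20 grid with an embedded
# heat-wave-like hotspot; each grid cell is one sample.
field = rng.normal(0.0, 1.0, size=(20, 20))
field[5:9, 5:9] += 4.0  # the extreme event

# Density-based detection: score every cell, reshape into a heatmap.
X = field.reshape(-1, 1)
det = IsolationForest(random_state=0).fit(X)
heatmap = -det.score_samples(X).reshape(20, 20)  # higher = more anomalous

hot = heatmap[5:9, 5:9].mean()
background = heatmap[12:, 12:].mean()
print(hot > background)  # the hotspot scores as more anomalous
```

The real toolbox operates on spatio-temporal fields and multiple model families, but the output contract is the same: an explicit per-cell anomaly map.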

How to cite: Cortés-Andrés, J., Gonzalez-Calabuig, M., Zhang, M., Williams, T., Fernández-Torres, M.-Á., Pellicer-Valero, O. J., and Camps-Valls, G.: XAIDA4Detection: A Toolbox for the Detection and Characterization of Spatio-Temporal Extreme Events, EGU General Assembly 2023, Vienna, Austria, 23–28 Apr 2023, EGU23-4816, https://doi.org/10.5194/egusphere-egu23-4816, 2023.

X4.5
|
EGU23-7435
|
ITS1.1/NH0.1
Boris Chen and Che-Yuan Li

Increasing climatic extremes have raised the frequency and severity of urban flood events over the last several decades, and the resulting economic losses point out the urgency of effective flood response. In recent years, the government has gradually expanded the network of CCTV water-level monitoring facilities to support decision-making during flood events. However, it is difficult for decision makers to interpret multiple video feeds at the same time. Therefore, this study attempts to establish an automatic water-level recognition method for a given closed-circuit television (CCTV) system.

In recent years, many advances have been made in automatic image recognition with artificial intelligence methods, yet little has been published on real-time water-level recognition from CCTV systems for disaster management. The purpose of this study is to examine the practical potential of artificial intelligence for real-time water-level recognition with deep convolutional neural networks. The proposed methodology will be demonstrated with several case studies in Taichung. To address the potential shortage of training samples, a generative adversarial network (GAN) may be adopted. The results of this study could be useful to decision makers responsible for organizing response assignments during flood events.

How to cite: Chen, B. and Li, C.-Y.: A study on the establishment of computer vision for disaster identification based on existing closed-circuit television system, EGU General Assembly 2023, Vienna, Austria, 23–28 Apr 2023, EGU23-7435, https://doi.org/10.5194/egusphere-egu23-7435, 2023.

X4.6
|
EGU23-12240
|
ITS1.1/NH0.1
|
ECS
Jui-Ming Chang, Wei-An Chao, and Wei-Kai Huang

The Daman landslide blocked one of the three cross-island roads in Taiwan, and the affected road section has been under traffic control since last October. During this period, thousands of small-scale post-failures occurred, whose irregular patterns threatened the safety of engineering workers carrying out slope-protection construction and of road users. Therefore, we installed one time-lapse camera and two geophones, at the crown and close to the toe of the Daman landslide respectively, to train a classification model providing in-situ alarms. According to the time-lapse photos, the post-failures fall into two types: type I, rock/debris that moves and stops on the upper slope or the road; and type II, rock/debris that crosses the road and continues downslope. Type I was recorded almost exclusively by the crown station, whereas type II appeared at both stations with different arrival times and with the toe station's high-frequency signals gradually rising (up to 100 Hz). These distinct features are exhibited by spectrograms. To preserve both stations' characteristics simultaneously, we merge the two stations' spectrograms into a single image indicating the type of post-failure. However, frequent earthquakes degrade the discrimination of landslide signals and must therefore be included in the classification model. A total of three labels (type I, type II, and earthquake), comprising more than 15,000 spectrogram images, were used to train a deep neural network (DNN) as a two-station-based automatic classifier. In addition, user-defined parameters computed for specific frequency bands within fixed time windows, including the sum of power spectral density, the arrival time of peak amplitude, the cross-correlation coefficient, and the signal-to-noise ratio, were used in a decision-tree algorithm. Both models benefit the automatic classifier for post-failure alarms and can readily be extended to monitor other landslides with frequent post-failures via transfer learning.
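The two-station merging step can be sketched as follows (toy synthetic signals; parameters such as the 200 Hz sampling rate are assumptions, not the study's configuration): each station's spectrogram is computed and the two are stacked into a single image per event:

```python
import numpy as np
from scipy.signal import spectrogram

fs = 200  # Hz, assumed geophone sampling rate
t = np.arange(0, 10, 1 / fs)
rng = np.random.default_rng(0)

# Toy signals: the "crown" station records an early low-frequency burst,
# the "toe" station a later high-frequency one (a type-II-like pattern).
crown = rng.normal(0, 0.1, t.size)
crown[400:800] += np.sin(2 * np.pi * 10 * t[400:800])
toe = rng.normal(0, 0.1, t.size)
toe[1200:1600] += np.sin(2 * np.pi * 80 * t[1200:1600])

# Compute each station's spectrogram and stack them vertically into one
# image, mirroring the merged two-station input described in the abstract.
f1, ts1, S1 = spectrogram(crown, fs=fs, nperseg=128)
f2, ts2, S2 = spectrogram(toe, fs=fs, nperseg=128)
merged = np.vstack([S1, S2])  # one image per event, both stations

print(merged.shape)
```

The merged array (or its dB-scaled image) would then be the input sample for the DNN classifier, with one label per event.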

How to cite: Chang, J.-M., Chao, W.-A., and Huang, W.-K.: Classification Seismic Spectrograms from Deep Neural Network: Application to Alarm System of Post-failure Landslides, EGU General Assembly 2023, Vienna, Austria, 23–28 Apr 2023, EGU23-12240, https://doi.org/10.5194/egusphere-egu23-12240, 2023.

X4.7
|
EGU23-2564
|
ITS1.1/NH0.1
|
ECS
Yi-Ho Chen and Yuan-Chien Lin

Since Taiwan is located on the Pacific Ring of Fire, seismic activity of varying magnitudes occurs almost every day. Some of these events have caused severe disasters, resulting in loss of property, casualties, and damage to important public facilities. Investigating the long-term spatiotemporal pattern of seismic activity is therefore crucial for understanding its causes and predicting future activity, so that disaster-prevention measures can be taken in advance. Previous studies mostly focused on the causes of single seismic events at small spatiotemporal scales. In this study, data from 1987 to 2020 are used, including seismic events from the United States Geological Survey (USGS), ambient environmental factors such as daily air temperature from the Taiwan Central Weather Bureau (CWB), and daily sea surface temperature data from the National Oceanic and Atmospheric Administration (NOAA). The difference between land air temperature and sea surface temperature (SST) is then compared with the occurrence of seismic activity to examine its correlation with temperature-difference anomalies. The results show that many seismic events are accompanied by positive and negative temperature-difference anomalies from 21 days before to 7 days after the event. Moreover, the temperature-difference anomalies follow specific trends within different magnitude intervals. In the magnitude ranges of 2.5 to 4 and greater than 6, almost all seismic events show significant anomalies in the difference between land air temperature and SST compared with periods without seismic events. This study thus uncovers anomalous frequency signatures linking seismic activity to land-sea temperature differences.
Statistical analysis was used to compare the temperature differences between seismic and non-seismic periods. Additionally, a deep neural network (DNN), together with logistic regression and random forest machine learning models, was used to identify whether a seismic event will occur within different magnitude intervals. It is hoped that this can provide relevant information for predicting future seismic activity and thereby more effectively prevent the disasters it may cause.

How to cite: Chen, Y.-H. and Lin, Y.-C.: Investigating the Correlation between the Characteristics of Seismic Activity and Environmental Variables in Taiwan, EGU General Assembly 2023, Vienna, Austria, 23–28 Apr 2023, EGU23-2564, https://doi.org/10.5194/egusphere-egu23-2564, 2023.

X4.8
|
EGU23-5581
|
ITS1.1/NH0.1
Adrien Lagrange, Nicolas Dublé, François De Vieilleville, Aurore Dupuis, Stéphane May, and Aymeric Walker-Deemin

Damage assessment is a critical step in crisis management. It must be fast and accurate in order to organize and scale the emergency response in a manner suited to the real needs on the ground. The speed requirement motivates automating the analysis, at least in support of photo-interpretation. Deep learning (DL) seems the most suitable methodology for this problem: on the one hand for the speed with which answers are obtained, and on the other for the high performance these methods achieve in extracting information from images. Following previous studies that evaluated the potential contribution of DL methods to building damage assessment after a disaster, several conventional deep neural network (DNN) and Transformer (TF) architectures were compared.

Made available at the end of 2019, the xView2 database appears to be the most suitable database for this study. It gathers images of disasters between 2011 and 2018 covering six disaster types: earthquakes, tsunamis, floods, volcanic eruptions, fires, and hurricanes. For each disaster, pre- and post-disaster images are available with a ground truth containing the building footprints as well as the damage type divided into four classes (no damage, minor damage, major damage, destroyed), similar to those considered in this study.

This study compares a wide range of DNN architectures, all based on an encoder-decoder structure. Two encoder families were implemented: EfficientNet (B0 to B7 configurations) and Swin TF (Tiny, Small, and Base configurations). Three adaptable decoders were implemented: UNet, DeepLabV3+, and FPN. Finally, to benefit from both pre- and post-disaster images, the trained models process images with a Siamese approach: both images are processed independently by the encoder, and the extracted features are then concatenated by the decoder.

Taking advantage of global information present in the image (such as the type of disaster), the Swin TF, paired with the FPN decoder, reaches better performance than all the other encoder-decoder architectures. The shifted-windows mechanism enables the pipeline to process large images in a reasonable time, comparable to the processing time of EfficientNet-based architectures. An interesting additional result is that the models trained in this study do not seem to benefit much from extra-large configurations: both the Small and Tiny configurations reach the highest scores.
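The Siamese fusion described above can be shown schematically (a pure NumPy toy, not the study's EfficientNet/Swin implementation): one shared "encoder" processes the pre- and post-disaster images independently, and the decoder sees the concatenated features:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for shared encoder weights: in the real models the encoder is
# an EfficientNet or Swin Transformer applied with tied weights.
W = rng.normal(0, 0.1, size=(16, 8))

def encode(image, W):
    """Toy 'encoder': flatten the patch and project with shared weights."""
    return np.tanh(image.reshape(-1) @ W)

pre = rng.normal(size=(4, 4))   # pre-disaster image patch
post = rng.normal(size=(4, 4))  # post-disaster image patch

# Siamese fusion: same encoder on both inputs, features concatenated
# before being handed to the decoder.
feats = np.concatenate([encode(pre, W), encode(post, W)])
print(feats.shape)
```

Weight sharing is the key design choice: both epochs are embedded in the same feature space, so the decoder can attribute differences to damage rather than to encoder drift.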

How to cite: Lagrange, A., Dublé, N., De Vieilleville, F., Dupuis, A., May, S., and Walker-Deemin, A.: Vision Transformers for building damage assessment after natural disasters, EGU General Assembly 2023, Vienna, Austria, 23–28 Apr 2023, EGU23-5581, https://doi.org/10.5194/egusphere-egu23-5581, 2023.

X4.9
|
EGU23-5778
|
ITS1.1/NH0.1
|
ECS
|
solicited
Saurabh Gupta and Syam Nair

Natural and man-made disasters pose a threat to human life, flora and fauna, and infrastructure. It is critical to detect infrastructure damage quickly and accurately right after a disaster occurs, and detection and assessment of infrastructure damage also help in managing financial strategy. Recently, many researchers and agencies have made efforts to create high-resolution satellite imagery databases of pre- and post-disaster events. Advanced remote sensing satellites can image the Earth's surface at up to 30 cm spatial resolution on a daily basis. These high-spatial-resolution (HSR) images can help assess the damage of any natural hazard by comparing pre- and post-disaster data. However, such imagery has limitations, such as cloud occlusion: buildings under thick cloud cannot be recognized in optical images. Manual assessment of the severity of damage to buildings and infrastructure by comparing bi-temporal HSR or airborne imagery is tedious and subjective. On the other hand, unmanned aerial vehicles (UAVs) can be used to assess the situation precisely, and high-resolution UAV imagery and HSR satellite imagery can complement each other for critical-infrastructure damage assessment. In this study, a novel approach is used to integrate UAV data with HSR satellite imagery for building damage assessment using a convolutional neural network (CNN)-based deep learning model. The work is divided into two fundamental sub-tasks: first, building localization in the pre-event images; and second, damage classification, assigning each building instance in the post-disaster images a label reflecting its degree of damage.
For the study, HSR satellite images of 36 pairs of pre- and post-event scenes of natural hazards were acquired for 2021-22, and UAV-based data for these events were collected from open data sources. The data were pre-processed, and building damage was assessed using a deep object-based semantic change detection framework (ChangeOS). The model was trained on the xView2 building damage assessment dataset, comprising ~20,000 images with ~730,000 building polygons of pre- and post-disaster events across the globe from 2011-2018. The experimental setup includes training on the global dataset and testing building damage assessment at regional scale using HSR satellite imagery and at local scale using UAV imagery. The bi-temporal assessment of HSR images for the 2022 Indonesia earthquake achieved an F1 score of ~67%, while the 2021 Uttarakhand flooding event yielded an F1 score of ~64%. The HSR UAV images from the 2011 Haiti earthquake event showed a lower but promising F1 score of ~54%. We infer that merging satellite and UAV HSR imagery for building damage assessment with the ChangeOS framework offers a robust tool to promote future research in infrastructure maintenance strategy and policy management in disaster response.

How to cite: Gupta, S. and Nair, S.: A novel approach for infrastructural disaster damage assessment using high spatial resolution satellite and UAV imageries using deep learning algorithms., EGU General Assembly 2023, Vienna, Austria, 23–28 Apr 2023, EGU23-5778, https://doi.org/10.5194/egusphere-egu23-5778, 2023.

X4.10
|
EGU23-6790
|
ITS1.1/NH0.1
|
ECS
Arnaud Dupeyrat, Abdullah Almaksour, Joao Vinholi, and Tapio Friberg

 With the gradual warming of the global climate, natural catastrophes have caused billions of dollars in damage to ecosystems, economies, and property, and with that damage comes a very real risk of loss of life. With the unprecedented growth of the human population, large-scale development activities, and changes to the natural environment, the frequency and intensity of extreme natural events, and their consequent impacts, are expected to increase in the future.

 To be able to mitigate and to reduce the potential damage of the natural catastrophe, continuous monitoring is required. The collection of data using earth observation (EO) systems has been valuable for tracking the effects of natural hazards, especially with their near real-time capabilities for tracking extreme natural events. Remote sensing systems from different platforms also serve as an important decision support tool for devising response strategies, coordinating rescue operations, and making damage and loss estimations.

 Synthetic aperture radar (SAR) imagery provides highly valuable information about our planet that no other technology can provide. SAR sensors emit their own energy to illuminate objects or areas on Earth and record what is reflected back from the surface to the sensor. This allows data acquisition day and night, since no sunlight is needed. SAR also uses longer wavelengths than optical systems, giving it the unsurpassed advantage of being able to penetrate clouds, rain, fog, and smoke. All of this makes SAR imagery uniquely valuable in sudden events and crisis situations requiring a rapid response.

 In this talk we will focus on flood monitoring using our ICEYE SAR images, taking into account the multiple satellites, viewing angles, and resolutions inherent in our constellation and its capabilities. We will present the steps that have allowed us to improve the consistency of our generated flood maps.

How to cite: Dupeyrat, A., Almaksour, A., Vinholi, J., and friberg, T.: Deep learning for automatic flood mapping from high resolution SAR images, EGU General Assembly 2023, Vienna, Austria, 23–28 Apr 2023, EGU23-6790, https://doi.org/10.5194/egusphere-egu23-6790, 2023.

X4.11
|
EGU23-6898
|
ITS1.1/NH0.1
|
ECS
|
solicited
AI-based anomaly detection system for SAR data
(withdrawn)
João Gabriel Vinholi and Maria Arbenina
X4.12
|
EGU23-11636
|
ITS1.1/NH0.1
|
ECS
Jaehyun Kim and Donghwi Jung

In recent years, machine learning (ML) models have proven useful in solving problems in a wide variety of fields, such as medicine, economics, manufacturing, transportation, energy, and education. With increased interest in ML models and advances in sensor technologies, ML models are now widely applied in the civil engineering domain as well. ML models enable the analysis of large amounts of data, automation, and improved decision making, and provide more accurate predictions. While several state-of-the-art reviews have been conducted for individual sub-domains of civil engineering (e.g., geotechnical engineering, structural engineering) or for specific application problems (e.g., structural damage detection, water quality evaluation), little effort has been devoted to a comprehensive review of ML models across civil engineering with comparisons between sub-domains. A systematic but domain-specific literature review framework is needed to effectively classify and compare the models. To that end, this study proposes a novel review approach based on the hierarchical classification tree "D-A-M-I-E (Domain-Application problem-ML models-Input data-Example case)". The "D-A-M-I-E" classification tree classifies ML studies in civil engineering by (1) civil engineering domain, (2) application problem, (3) applied ML models, and (4) data used in the problem. Moreover, the data used by the ML models in each application example are examined in light of the specific characteristics of the domain and the application problem. For a comprehensive review, five domains (structural, geotechnical, water, transportation, and energy engineering) are considered, and the ML application problems are divided into five types (prediction, classification, detection, generation, and optimization). Based on the "D-A-M-I-E" classification tree, about 300 ML studies in civil engineering are reviewed.
For each domain, the following questions are analyzed and compared: (1) which problems are mainly solved with ML models, (2) which ML models are mainly applied in each domain and problem, (3) how advanced the ML models are, and (4) what kinds of data are used and what data processing is performed for the application of ML models. The paper also assesses the extension and applicability of the proposed methodology to other areas (e.g., Earth system modeling, climate science). Furthermore, based on identified research gaps for ML models in each domain, the paper suggests future directions for ML in civil engineering grounded in approaches to handling data (e.g., collection, handling, storage, and transmission) and aims to support the application of ML models in other fields.

How to cite: Kim, J. and Jung, D.: State-of-the-Art Review of Machine Learning Models in Civil Engineering: Based on DAMIE Classification Tree, EGU General Assembly 2023, Vienna, Austria, 23–28 Apr 2023, EGU23-11636, https://doi.org/10.5194/egusphere-egu23-11636, 2023.

Posters virtual: Wed, 26 Apr, 16:15–18:00 | vHall NH

Chairpersons: Ivanka Pelivan, Monique Kuglitsch, Raffaele Albano
Introduction
vNH.1
|
EGU23-338
|
ITS1.1/NH0.1
|
ECS
|
Leon Sim, Fang-Jung Tsai, and Szu-Yun Lin

Traditional post-disaster building damage assessments were performed manually by the response team, which was risky and time-consuming. With advanced remote sensing technology, such as an unmanned aerial vehicle (UAV), it would be possible to acquire high-quality aerial videos and operate at a variety of altitudes and angles.  The collected data would be sent into a neural network for training and validating. In this study, the Object Detection model (YOLO) was utilized, which is capable of predicting both bounding boxes and damage levels. The network was trained using the ISBDA dataset, which was created from aerial videos of the aftermath of Hurricane Harvey in 2017, Hurricane Michael and Hurricane Florence in 2018, and three tornadoes in 2017, 2018, and 2019 in the United States. The Joint Damage Scale was used to classify the buildings in this dataset into four categories: no damage, minor damage, major damage, and destroyed. However, the number of major damage and destroyed classes are significantly lower than the number of no damage and minor damage classes in the dataset. Also, the damage characteristics of minor and major damage classes are similar under such type of disaster. These caused the YOLO model prone to misclassify the intermediate damage levels, i.e., minor and major damage in our earlier experiments. This study aimed to improve the YOLO model using a stacking ensemble deep learning approach with a image classification model called Mobilenet. First, the ISBDA dataset was used and refined to train the YOLO network and the Mobilenet network separately, and the latter provides two classes predictions (0 for no damage or minor damage, 1 for major damage or destroyed) rather than the four classes by the former. 
In the inference phase, the initial predictions from the trained YOLO network, including bounding box coordinates, confidence scores for the four damage classes, and the predicted class, were extracted and passed to the trained MobileNet to generate secondary predictions for each building. Based on the secondary predictions, two hyperparameters were utilized to refine the initial predictions by modifying the confidence scores of each class, and these hyperparameters were trained during this phase. Lastly, the trained hyperparameters were applied to the testing dataset to evaluate the performance of the proposed method. The results show that our stacking ensemble method obtains more reliable predictions for the intermediate classes.
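The score-refinement step described above can be sketched as follows. The function name, the boost/suppress rule, and the default values of the two hyperparameters (here `alpha` and `beta`) are illustrative assumptions, not the authors' actual implementation:

```python
import numpy as np

def refine_scores(conf, binary_pred, alpha=1.5, beta=0.5):
    """Refine YOLO's four-class confidence scores using a binary
    low-damage/high-damage prediction from a second classifier.

    conf: scores for [no damage, minor, major, destroyed].
    binary_pred: 0 = no/minor damage, 1 = major/destroyed.
    alpha, beta: stand-ins for the two trainable hyperparameters.
    """
    conf = np.asarray(conf, dtype=float).copy()
    if binary_pred == 0:
        conf[:2] *= alpha   # boost the low-damage classes
        conf[2:] *= beta    # suppress the high-damage classes
    else:
        conf[:2] *= beta
        conf[2:] *= alpha
    return conf / conf.sum()  # renormalise to a probability-like vector

# A borderline box first labelled "major" can flip to "minor" after refinement.
refined = refine_scores([0.10, 0.35, 0.40, 0.15], binary_pred=0)
print(refined.argmax())  # → 1 (minor damage)
```

In practice the two hyperparameters would be fitted on held-out data, as the abstract describes, rather than fixed by hand.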

 

How to cite: Sim, L., Tsai, F.-J., and Lin, S.-Y.: A Stacking Ensemble Deep Learning Approach for Post Disaster Building Assessment using UAV Imagery, EGU General Assembly 2023, Vienna, Austria, 23–28 Apr 2023, EGU23-338, https://doi.org/10.5194/egusphere-egu23-338, 2023.

vNH.2
|
EGU23-2996
|
ITS1.1/NH0.1
|
ECS
Samuel Roeslin

The 2010-2011 Canterbury Earthquake sequence (CES) led to unprecedented building damage in the Canterbury region, New Zealand. Commercial and residential buildings were significantly affected. Due to New Zealand’s unique insurance setting, around 80% of the losses were covered by insurance (Bevere & Balz, 2012; King et al., 2014). The Insurance Council of New Zealand (ICNZ) estimated the total economic losses to be more than NZ$40 billion, with the Earthquake Commission (EQC) and private insurers covering NZ$10 billion and NZ$21 billion of the losses, respectively (ICNZ, 2021). As a result of the CES and the 2016 Kaikoura earthquake, EQC’s Natural Disaster Fund was depleted (EQC, 2022). This highlighted the need for improved tools enabling damage and loss analysis for natural hazards.
This research project used residential building claims collected by EQC following the CES to develop a rapid seismic loss prediction model for residential buildings in Christchurch. Geographic information systems (GIS) tools, data science techniques, and machine learning (ML) were used for the model development. Before training the ML model, the claims data was enriched with additional information from external data sources: the seismic demand, building characteristics, soil conditions, and information about liquefaction occurrence were added to the claims data. Once merged and pre-processed, the aggregated data was used to train ML models based on the main events in the CES. Emphasis was put on the interpretability and explainability of the model, which delivered valuable insights into the features contributing most to losses. Those insights align with engineering knowledge and observations from previous studies, confirming the potential of using ML for disaster loss prediction and management. Attention was also paid to the retrainability of the model, ensuring that new data from future earthquake events can rapidly be added.
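As a loose illustration of this workflow, the sketch below fits an interpretable tree-ensemble regressor on synthetic stand-ins for the enriched claims features named above (seismic demand, building age, soil class, liquefaction). The feature names, toy loss formula, and model choice are assumptions for illustration, not the study's actual data or model:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Synthetic stand-ins for the enriched claims features.
rng = np.random.default_rng(0)
n = 500
X = np.column_stack([
    rng.uniform(0.1, 1.2, n),     # peak ground acceleration (g)
    rng.integers(1900, 2011, n),  # construction year
    rng.integers(0, 4, n),        # soil class (label-encoded)
    rng.integers(0, 2, n),        # liquefaction observed (0/1)
])
# Toy loss ratio: driven mostly by shaking and liquefaction.
y = 0.5 * X[:, 0] + 0.3 * X[:, 3] + rng.normal(0, 0.05, n)

model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

# Feature importances give the kind of engineering-aligned insight
# into loss drivers that the study reports.
for name, imp in zip(["PGA", "year", "soil", "liquefaction"],
                     model.feature_importances_):
    print(f"{name}: {imp:.2f}")
```

Retraining on new events then amounts to appending the new claims to the feature table and refitting.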

How to cite: Roeslin, S.: Development of a Rapid Seismic Loss Prediction Model for Residential Buildings using Machine Learning - Christchurch, New Zealand, EGU General Assembly 2023, Vienna, Austria, 23–28 Apr 2023, EGU23-2996, https://doi.org/10.5194/egusphere-egu23-2996, 2023.

vNH.3
|
EGU23-6522
|
ITS1.1/NH0.1
Ilaria Pennino

Over the past few decades it has become increasingly apparent that environmental degradation is a common concern for humanity, and it is difficult to deny that present environmental problems are caused primarily by anthropogenic activities rather than natural processes.

To minimize disaster risk, geospatial science and technology can be a highly useful and necessary tool for hazard zone mapping during emergency conditions.

This approach can help not only to predict harmful events but also to mitigate environmental damage from events that cannot be reliably predicted.

With detailed information obtained from various datasets, decision making becomes simpler, which is crucial for a quick and effective response to any disaster. Remote sensing, in particular radar/SAR data, helps in managing a disaster at its various stages.

Prevention, for example, refers to the outright avoidance of adverse impacts of hazards and related disasters; preparedness refers to the knowledge and capacities needed to effectively anticipate, respond to, and recover from the impacts of likely, imminent, or current hazard events or conditions.

Finally, relief is the provision of emergency services after a disaster in order to reduce damage to the environment and people.

Thanks to the opportunity offered by ASI (Italian Space Agency) to use COSMO-SkyMed data, at NeMeA Sistemi srl we developed three projects: “Ventimiglia Legalità”, “Edilizia Spontanea”, and 3xA.

Their main objective is to detect illegal buildings not present in the legal land registry.

We developed new and innovative technologies using integrated data for the monitoring and protection of environmental and anthropogenic health, in coastal and nearby areas. 

The 3xA project addresses the highly challenging problem of automatically detecting changes from a time series of high-resolution synthetic aperture radar (SAR) images. In this context, to fully leverage the potential of such data, an innovative machine-learning-based approach has been developed.

The project is characterized by an end-to-end training and inference system which takes as input two raw images and produces a vectorized change map without any human supervision.

In more detail, it takes as input two SAR acquisitions at times t1 and t2; the acquisitions are first pre-processed and homogenised, and then undergo a completely self-supervised algorithm that uses DNNs to classify changed/unchanged areas. This method shows promising results in automatically producing a change map from two input SAR images (Stripmap or Spotlight COSMO-SkyMed data), with 98% accuracy.
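A classical unsupervised baseline for this changed/unchanged classification is a log-ratio image with an automatic threshold. The sketch below uses that simpler stand-in for the DNN-based self-supervised pipeline described here; the mean-plus-k-sigma threshold rule is an assumption for illustration:

```python
import numpy as np

def change_map(t1, t2, k=2.0):
    """Unsupervised change detection between two co-registered SAR
    intensity images via a thresholded log-ratio image."""
    t1 = np.asarray(t1, dtype=float)
    t2 = np.asarray(t2, dtype=float)
    # Log-ratio is the standard SAR difference operator: it is robust
    # to multiplicative speckle and overall intensity scaling.
    log_ratio = np.abs(np.log((t2 + 1e-6) / (t1 + 1e-6)))
    thresh = log_ratio.mean() + k * log_ratio.std()
    return log_ratio > thresh  # boolean change/no-change map

# Simulated pair: a bright 10x10 structure appears between t1 and t2.
t1 = np.ones((64, 64))
t2 = np.ones((64, 64))
t2[20:30, 20:30] = 8.0
print(change_map(t1, t2).sum())  # → 100 changed pixels
```

The project's end-to-end system replaces this hand-tuned thresholding with a self-supervised DNN, which is what allows it to operate without human supervision at scale.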

Because the process is automated, results are produced faster than similar products generated by human operators.

A similar approach has been followed to create an algorithm which performs semantic segmentation from the same kind of data.

This time, only one of the two SAR acquisitions is taken as input for the pre-processing steps and then fed to a supervised neural network. The result is a single image in which each pixel is labelled with the class predicted by the algorithm.

Also in this case, results are promising, reaching around 90% accuracy.

How to cite: Pennino, I.: A new approach for hazard and disaster prevention: deep learning algorithms for change detection and classification RADAR/SAR, EGU General Assembly 2023, Vienna, Austria, 23–28 Apr 2023, EGU23-6522, https://doi.org/10.5194/egusphere-egu23-6522, 2023.

vNH.4
|
EGU23-4455
|
ITS1.1/NH0.1
|
ECS
Michele Gazzea, Reza Arghandeh, and Amir Miraki

Roadways are critical infrastructure in our society, providing services for people within and between cities. However, they are prone to closures and disruptions, especially after extreme weather events such as hurricanes.

At the same time, traffic flow data are a fundamental type of information for any transportation system.

We tackle the problem of traffic sensor placement on roadways to address two tasks at the same time. The first is traffic data estimation in ordinary situations, which is vital for traffic monitoring and city planning; we design a graph-based method to estimate traffic flow on roads where sensors are not present. The second is enhanced observability of roadways during extreme weather events; we propose a satellite-based multi-domain risk assessment, accounting for vegetation and flood hazards, to locate roads at high risk of closure. We formalize the problem as a search over the network that suggests the minimum number and locations of traffic sensors to place while maximizing traffic estimation capability and observability of a city's at-risk areas.
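The search over the network can be illustrated with a greedy weighted-coverage sketch. The site names, coverage sets, and risk weighting below are invented for illustration, and the paper's actual objective and search method may differ:

```python
def place_sensors(candidates, high_risk, budget, risk_weight=2.0):
    """Greedily pick sensor sites that cover the most road segments,
    counting segments flagged as high-risk more heavily.

    candidates: {site: set of road segments observable from that site}.
    high_risk: set of segments flagged by the satellite risk assessment.
    budget: maximum number of sensors to place.
    """
    chosen, covered = [], set()
    for _ in range(budget):
        def gain(site):
            new = candidates[site] - covered
            return sum(risk_weight if r in high_risk else 1.0 for r in new)
        best = max(candidates, key=gain)
        if gain(best) == 0:
            break  # nothing left to cover
        chosen.append(best)
        covered |= candidates[best]
    return chosen

candidates = {
    "A": {"r1", "r2"},
    "B": {"r2", "r3", "r4"},
    "C": {"r5"},           # the only site observing high-risk road r5
}
print(place_sensors(candidates, high_risk={"r5"}, budget=2))  # → ['B', 'C']
```

The greedy rule first takes site B for raw coverage, then site C because the risk weight makes the single high-risk segment r5 worth more than A's remaining unmonitored road, mirroring the trade-off between estimation capability and observability of at-risk areas.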

How to cite: Gazzea, M., Arghandeh, R., and Miraki, A.: Traffic Monitoring System Design considering Multi-Hazard Disaster Risks, EGU General Assembly 2023, Vienna, Austria, 23–28 Apr 2023, EGU23-4455, https://doi.org/10.5194/egusphere-egu23-4455, 2023.