EGU23-5581
https://doi.org/10.5194/egusphere-egu23-5581
EGU General Assembly 2023
© Author(s) 2023. This work is distributed under
the Creative Commons Attribution 4.0 License.

Vision Transformers for building damage assessment after natural disasters

Adrien Lagrange1, Nicolas Dublé1, François De Vieilleville1, Aurore Dupuis2, Stéphane May2, and Aymeric Walker-Deemin2
  • 1AGENIUM Space, TOULOUSE, France
  • 2CNES, TOULOUSE, France

Damage assessment is a critical step in crisis management. It must be fast and accurate in order to organize and scale the emergency response according to the real needs on the ground. The speed requirement motivates automating the analysis, at least in support of photo-interpretation. Deep Learning (DL) appears to be the most suitable methodology for this problem, both for the speed with which results can be obtained and for the high performance these methods achieve when extracting information from images. Following previous studies evaluating the potential contribution of DL methods to building damage assessment after a disaster, several conventional Deep Neural Network (DNN) and Transformer (TF) architectures were compared.

Made available at the end of 2019, the xView2 database appears to be the most relevant database for this study. It gathers images of disasters that occurred between 2011 and 2018, covering six types of disasters: earthquakes, tsunamis, floods, volcanic eruptions, fires and hurricanes. For each of these disasters, pre- and post-disaster images are available with a ground truth containing the building footprints as well as an evaluation of the damage, divided into four classes (no damage, minor damage, major damage, destroyed) similar to those considered in the study.
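As an illustration only, a sample from such a database can be represented as a pre-/post-disaster image pair with a per-pixel damage mask; the field names and the background index below are assumptions for this sketch, not the official xView2 data loader.

```python
# Hypothetical representation of an xView2-style sample (illustrative only).
from dataclasses import dataclass
import numpy as np

# Index 0 is reserved here for "no building"; the four damage levels
# follow the classes listed in the abstract.
DAMAGE_CLASSES = {
    0: "background",
    1: "no damage",
    2: "minor damage",
    3: "major damage",
    4: "destroyed",
}

@dataclass
class DisasterSample:
    pre_image: np.ndarray    # H x W x 3, pre-disaster acquisition
    post_image: np.ndarray   # H x W x 3, post-disaster acquisition
    damage_mask: np.ndarray  # H x W, per-pixel label in DAMAGE_CLASSES
```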

This study compares a wide range of DNN architectures, all based on an encoder-decoder structure. Two encoder families were implemented: EfficientNet (B0 to B7 configurations) and Swin TF (Tiny, Small, and Base configurations). Three adaptable decoders were implemented: UNet, DeepLabV3+ and FPN. Finally, to benefit from both pre- and post-disaster images, the trained models were designed to process the images with a Siamese approach: both images are processed independently by the encoder, and the extracted features are then concatenated by the decoder, as sketched below.
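The following PyTorch sketch shows the Siamese encoder-decoder pattern described above. The tiny convolutional encoder and single-stage decoder are placeholders standing in for the Swin/EfficientNet encoders and UNet/DeepLabV3+/FPN decoders actually used in the study; only the shared-encoder and feature-concatenation structure is meant to be representative.

```python
# Minimal sketch of a Siamese encoder-decoder for damage segmentation.
# Encoder and decoder are simplified placeholders, not the study's models.
import torch
import torch.nn as nn


class SiameseSegmenter(nn.Module):
    def __init__(self, num_classes: int = 5, channels: int = 64):
        super().__init__()
        # Shared-weight encoder applied independently to both acquisitions.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, channels, 3, stride=2, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, stride=2, padding=1),
            nn.ReLU(inplace=True),
        )
        # Decoder consumes the concatenated pre/post features.
        self.decoder = nn.Sequential(
            nn.Conv2d(2 * channels, channels, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.ConvTranspose2d(channels, num_classes, 4, stride=4),
        )

    def forward(self, pre: torch.Tensor, post: torch.Tensor) -> torch.Tensor:
        # Both images go through the same encoder; features are concatenated
        # channel-wise before decoding to per-pixel damage logits.
        feats = torch.cat([self.encoder(pre), self.encoder(post)], dim=1)
        return self.decoder(feats)


# Usage: a 512x512 pre/post tile pair -> per-pixel class logits.
model = SiameseSegmenter()
logits = model(torch.randn(1, 3, 512, 512), torch.randn(1, 3, 512, 512))
print(logits.shape)  # torch.Size([1, 5, 512, 512])
```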

Taking advantage of global information present in the image (such as the type of disaster), the Swin TF associated with the FPN decoder reaches better performance than all other encoder-decoder architectures. The Shifted WINdows mechanism enables the pipeline to process large images in a reasonable time, comparable to the processing time of the EfficientNet-based architectures. An interesting additional result is that the models trained during this study do not seem to benefit much from extra-large configurations: the Small and Tiny configurations reach the highest scores.

How to cite: Lagrange, A., Dublé, N., De Vieilleville, F., Dupuis, A., May, S., and Walker-Deemin, A.: Vision Transformers for building damage assessment after natural disasters, EGU General Assembly 2023, Vienna, Austria, 24–28 Apr 2023, EGU23-5581, https://doi.org/10.5194/egusphere-egu23-5581, 2023.