EGU26-1641, updated on 13 Mar 2026
https://doi.org/10.5194/egusphere-egu26-1641
EGU General Assembly 2026
© Author(s) 2026. This work is distributed under
the Creative Commons Attribution 4.0 License.
PICO | Tuesday, 05 May, 08:57–08:59 (CEST)
 
PICO spot 1a, PICO1a.12
Advancing Building Exposure Modeling at Scale through Multimodal Geo-Imagery and AI
Patrick Aravena Pelizari1, Christian Geiß1,2, and Hannes Taubenböck1,3
  • 1German Aerospace Center (DLR), German Remote Sensing Data Center, 82234 Weßling, Germany (patrick.aravenapelizari@dlr.de)
  • 2University of Bonn, Department of Geography, 53115 Bonn, Germany (christian.geiss@dlr.de)
  • 3University of Würzburg, Institute of Geography and Geology, Department of Remote Sensing, 97074 Würzburg, Germany (hannes.taubenboeck@dlr.de)

Exposure models that provide up-to-date, spatially explicit information on buildings’ vulnerability-relevant characteristics are key to effective disaster mitigation and risk management. Because (i) different building attributes influence vulnerability to different natural hazards, and (ii) natural hazards vary in spatial scale and exhibit distinct spatial patterns, holistic multi-risk assessments place particularly high demands on thematic detail and spatial resolution. A generic yet detailed representation of the building stock enhances the flexibility of risk models to consistently address diverse hazard scenarios. However, given the vast number of buildings, their structural heterogeneity, and their high spatio-temporal dynamics, maintaining a comprehensive inventory across large areas remains a complex challenge. The rapid transformation of disaster risk regimes due to global change, coupled with limited exposure data, necessitates automated, data-driven approaches to efficiently infer building vulnerability at scale. This work investigates the potential of heterogeneous, multimodal geospatial image data—including street-level imagery (SLI), very high-resolution optical remote sensing data, and a normalized digital surface model—for generic building characterization using deep learning. To infer multiple building attributes from multimodal inputs, we introduce a deep multimodal multitask classification framework. It incorporates a feature-level fusion module designed to optimally exploit synergies among data modalities within a multitask learning setting. The common challenge of missing SLI is addressed through a dedicated submodel that learns spatio-contextual representations from available SLI and uses them as substitutes.
Using the earthquake-prone metropolis of Santiago de Chile as a case study, we evaluate the contribution of the employed geo-image modalities and the proposed methods to the reliable inference of five structural target variables: building height, lateral load-resisting system material, seismic building structural type, roof shape, and block position. Our results demonstrate that integrating ground-based and top-view geo-image data with tailored deep learning models offers a promising path toward the automated generation of detailed, area-wide exposure models.
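The feature-level fusion and multitask setup described above can be sketched schematically. The following is a minimal illustration, not the authors' implementation: encoder dimensions, class counts, and the random-projection "encoders" are all assumed placeholders standing in for the learned CNN encoders, fusion module, and classification heads of the actual framework.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-modality input dimensions (illustrative only):
# street-level imagery, very high-resolution optical data, normalized DSM.
DIMS = {"sli": 512, "vhr": 256, "ndsm": 64}

# The five target attributes, with assumed (not published) class counts.
TASKS = {
    "height": 4,
    "llrs_material": 5,
    "seismic_type": 6,
    "roof_shape": 3,
    "block_position": 3,
}

def encoder(x, out_dim, key):
    """Stand-in for a learned encoder: a fixed random projection."""
    seed = sum(map(ord, key))  # deterministic per-encoder weights
    w = np.random.default_rng(seed).standard_normal((x.shape[-1], out_dim))
    return np.tanh(x @ w / np.sqrt(x.shape[-1]))

def fuse_and_classify(features):
    """Encode each modality, concatenate the features (feature-level
    fusion), and feed the shared fused representation to one
    classification head per task (multitask learning)."""
    encoded = [encoder(v, 64, name) for name, v in sorted(features.items())]
    fused = np.concatenate(encoded, axis=-1)  # feature-level fusion
    return {task: encoder(fused, n_cls, task) for task, n_cls in TASKS.items()}

# One building for which all three modalities are available.
sample = {m: rng.standard_normal(d) for m, d in DIMS.items()}
logits = fuse_and_classify(sample)  # one logit vector per target attribute
```

In the framework described in the abstract, a dedicated submodel would additionally produce a substitute SLI representation from spatio-contextual information whenever the `"sli"` entry is missing, so the fusion step always receives all three feature vectors.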

How to cite: Aravena Pelizari, P., Geiß, C., and Taubenböck, H.: Advancing Building Exposure Modeling at Scale through Multimodal Geo-Imagery and AI, EGU General Assembly 2026, Vienna, Austria, 3–8 May 2026, EGU26-1641, https://doi.org/10.5194/egusphere-egu26-1641, 2026.