- ¹ Data Driven Computing in Civil Engineering, RWTH Aachen University, Aachen, Germany
- ² Department of Civil, Building and Environmental Engineering, Sapienza University of Rome, Rome, Italy
Floods are highly dynamic hazards whose spatial extent can change rapidly within hours. Timely and accurate monitoring is therefore essential for early warning, emergency response, and post-disaster assessment. A major challenge for current Earth Observation (EO) based approaches is capturing the complete evolution of a flood event, including its maximum extent: this information is often missing due to temporal gaps in Synthetic Aperture Radar (SAR) acquisitions and cloud cover in optical imagery. Missing the peak extent limits the accuracy of impact assessments and poses challenges for applications such as parametric insurance, which depend on reliable measurements of flood magnitude. Although daily flood products exist, they are typically based on large-scale multispectral sensors, struggle under persistent cloud cover, and lack the spatial resolution to capture smaller events, creating an urgent need for a more reliable method of daily flood estimation from higher-resolution SAR datasets. To address these challenges, we propose a novel deep learning framework that fuses coarse, dynamic EO-based hydrometeorological data with static geospatial datasets to produce high-resolution daily flood extent maps. Our approach integrates static flood-conditioning inputs, including elevation, Height Above Nearest Drainage, Urban Development Area, flow direction, Normalized Difference Vegetation Index, Normalized Difference Built-up Index, soil clay and sand content, and pre-flood SAR and multispectral imagery, with dynamic hydrometeorological variables such as daily precipitation and soil moisture. The model adopts a multi-stage vision transformer architecture: encoders extract multi-level latent representations from all inputs, which are then fused using cosine similarity, normalization, and temporal attention mechanisms. A decoder reconstructs the high-resolution flood extent, followed by a Gaussian filter to suppress high-frequency noise.
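The fusion step described above can be loosely illustrated as follows. This is a minimal NumPy sketch, not the authors' implementation: the function names, feature shapes, and the choice of mean-activation scoring for the temporal attention are all assumptions made purely for illustration.

```python
import numpy as np

def cosine_gate(static_feat, dynamic_feat, eps=1e-8):
    """Per-pixel cosine similarity between two (C, H, W) feature maps,
    used here as a gate for fusing them (illustrative choice only)."""
    num = (static_feat * dynamic_feat).sum(axis=0)
    den = np.linalg.norm(static_feat, axis=0) * np.linalg.norm(dynamic_feat, axis=0) + eps
    return num / den  # (H, W)

def temporal_attention(daily_feats):
    """Softmax attention over the time axis of (T, C, H, W) daily features,
    scoring each day by its mean activation (a strong simplification)."""
    scores = daily_feats.mean(axis=(1, 2, 3))           # (T,)
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()
    return np.tensordot(weights, daily_feats, axes=1)   # (C, H, W)

def fuse(static_feat, daily_feats):
    """Pool the dynamic features over time, gate them by cosine
    similarity with the static features, and add them residually."""
    dyn = temporal_attention(daily_feats)                # (C, H, W)
    gate = cosine_gate(static_feat, dyn)                 # (H, W)
    return static_feat + gate[None] * dyn                # (C, H, W)

rng = np.random.default_rng(0)
static = rng.standard_normal((8, 16, 16))   # hypothetical static encoder output
daily = rng.standard_normal((5, 8, 16, 16)) # hypothetical 5-day dynamic features
fused = fuse(static, daily)
print(fused.shape)  # (8, 16, 16)
```

In the abstract's actual model these operations act on multi-level transformer latents rather than raw arrays, and the fused representation is passed to a decoder before Gaussian smoothing.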
The framework is fully supervised using the globally available KuroSiwo flood mask dataset, supporting transferability across diverse geographic regions and climate zones. In addition, this research provides a complete data preparation workflow that converts flood mask shapefiles into standardized image patch datasets directly suitable for deep learning training, including a modular input-selection interface that removes dependence on dataset-specific inputs and enables straightforward implementation and practical application. The model is trained and evaluated across three distinct climate zones on multiple continents, demonstrating a robust capability to overcome the temporal limitations of SAR data and cloud-induced gaps in optical observations. Held-out region tests with strict geographic separation, which minimizes data leakage induced by spatial autocorrelation, further ensure unbiased evaluation and true transferability. Preliminary tests across multiple continents yield stable performance, with cross-site metric variations remaining within approximately 5–7 percent. This study introduces the first deep learning framework for daily fine-scale flood extent mapping based purely on globally accessible EO data, providing a scalable and transferable solution for real-time flood monitoring, disaster management, and potential applications in parametric insurance by improving flood mapping cadence and reliably estimating maximum flood extents.
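The patch-preparation step of such a workflow can be sketched as below. This assumes the flood mask shapefile has already been rasterized to a 2D array (e.g., with rasterio/geopandas, not shown); the patch size, stride, and edge-dropping convention are illustrative assumptions, not the published workflow.

```python
import numpy as np

def tile_patches(arr, patch=256, stride=256):
    """Split a (H, W) or (C, H, W) raster into patches of shape
    (patch, patch). Edge rows/columns that do not fill a full patch
    are dropped (one common convention; padding is an alternative)."""
    if arr.ndim == 2:
        arr = arr[None]                      # promote to (1, H, W)
    c, h, w = arr.shape
    patches = []
    for i in range(0, h - patch + 1, stride):
        for j in range(0, w - patch + 1, stride):
            patches.append(arr[:, i:i + patch, j:j + patch])
    return np.stack(patches)                 # (N, C, patch, patch)

# Toy binary flood mask standing in for a rasterized shapefile.
mask = np.zeros((600, 520), dtype=np.uint8)
mask[100:300, 50:400] = 1
patches = tile_patches(mask, patch=256, stride=256)
print(patches.shape)  # (4, 1, 256, 256)
```

In a full pipeline the same tiling would be applied identically to every static and dynamic input layer so that each training sample stacks co-registered patches across all selected channels.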
Keywords: spatio-temporal fusion, vision transformer, high-resolution flood mapping
How to cite: Surojaya, A., Kumar, R., and Dasgupta, A.: DeepFuse2.0: Novel Deep Learning-based Fusion of Satellite-based Hydroclimatic Data and Flood Conditioning Factors for Daily Flood Extent Mapping, EGU General Assembly 2026, Vienna, Austria, 3–8 May 2026, EGU26-1047, https://doi.org/10.5194/egusphere-egu26-1047, 2026.