- ¹Faculty of Science and Engineering, Macquarie University, Sydney, New South Wales, Australia
- ²Delft Institute of Applied Mathematics, Delft University of Technology, Delft, The Netherlands
- ³Hydroinformatics Institute, Singapore
- ⁴Department of Computer Science, School of Computing, National University of Singapore, Singapore
- ⁵Transport, Health and Urban Systems Research Lab, Melbourne School of Design, The University of Melbourne, Melbourne, Victoria, Australia
- ⁶Faculty of Engineering and Information Technology, The University of Melbourne, Melbourne, Victoria, Australia
Diffusion models have emerged as state-of-the-art generative AI models in computer vision, excelling in generating high-fidelity and diverse images. These models surpass previous architectures in quality, generalizability, and stability. However, their potential remains largely untapped in water resource applications, including flood mapping.
This research introduces a diffusion model-based super-resolution approach that upscales coarse-grid hydrodynamic model outputs to fine-grid accuracy in a computationally efficient manner. The proposed model functions as a hybrid model within the theory-guided data science (TGDS) paradigm.
The process begins by running a coarse-grid hydrodynamic model over the area of interest, with the mesh resolution selected so that the simulation completes within several minutes. Acting as a corrective layer, the diffusion model then refines these coarse estimates to align with high-resolution model outputs. In this study, the HEC-RAS model is employed to generate both coarse- and fine-grid flood maps for model training and testing. The subgrid formulation within HEC-RAS incorporates fine-scale topographic details within each grid cell, significantly enhancing computational efficiency and accuracy. Additionally, the subgrid topography maps both coarse- and fine-grid mesh-level water level estimates onto the underlying terrain resolution, enabling compatibility with both structured and unstructured meshes.
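To make the subgrid mapping concrete, the sketch below shows how cell-averaged water surface elevations from a mesh (coarse or fine) can be broadcast onto the underlying terrain grid and converted to flood depths. The variable names and the pixel-to-cell index raster are illustrative assumptions, not details of the HEC-RAS implementation.

```python
# Minimal sketch of the subgrid mapping: mesh-level water surface
# elevations (WSE) are broadcast to the fine-resolution terrain grid,
# and flood depth is recovered by subtracting the DEM. All names are hypothetical.
import numpy as np

def mesh_wse_to_depth_raster(wse_per_cell, cell_index_raster, dem):
    """wse_per_cell      : (n_cells,) water surface elevation per mesh cell
       cell_index_raster : (H, W) int raster mapping each DEM pixel to its mesh cell
       dem               : (H, W) fine-resolution terrain elevations
       returns           : (H, W) flood depth raster at DEM resolution
    """
    wse_raster = wse_per_cell[cell_index_raster]  # broadcast WSE to the terrain grid
    return np.maximum(wse_raster - dem, 0.0)      # pixels above the water surface stay dry
```

Because both the coarse- and fine-grid runs are mapped onto the same terrain grid in this way, their depth rasters are directly comparable, which is what allows the diffusion model to learn a pixel-wise correction between them.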
The primary objective is to rapidly produce high-resolution flood maps, addressing the impracticality of fine-grid hydrodynamic models for operational flood management and probabilistic flood design due to their high computational demands. Once the coarse-grid model has been executed at catchment scale, the diffusion model can quickly generate high-resolution flood maps for user-specified areas. Within the diffusion model, digital elevation models (DEMs) and the corresponding coarse-grid flood depth estimates serve as conditioning signals. The model operates on 128×128-pixel raster patches of both flood depth and DEM data, a patch size that balances spatial coverage against computational cost.
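The following is a minimal sketch of this conditioning setup, assuming a standard DDPM-style noise-prediction objective in which the DEM and coarse-grid depth patches are concatenated with the noisy fine-grid patch as input channels. The denoiser interface `model(inp, t)`, the linear noise schedule, and the function names are assumptions for illustration, not the authors' exact configuration.

```python
import torch
import torch.nn.functional as F

T = 1000                                    # number of diffusion timesteps (assumed)
betas = torch.linspace(1e-4, 0.02, T)       # linear noise schedule (assumed)
alphas_cum = torch.cumprod(1.0 - betas, dim=0)

def training_step(model, fine_depth, coarse_depth, dem):
    """One denoising training step.

    fine_depth   : (B, 1, 128, 128) fine-grid flood depth patch (target)
    coarse_depth : (B, 1, 128, 128) coarse-grid estimate resampled to the patch grid
    dem          : (B, 1, 128, 128) DEM patch
    """
    B = fine_depth.shape[0]
    t = torch.randint(0, T, (B,), device=fine_depth.device)
    noise = torch.randn_like(fine_depth)
    a = alphas_cum.to(fine_depth.device)[t].view(B, 1, 1, 1)

    # Forward (noising) process: x_t = sqrt(abar_t) * x_0 + sqrt(1 - abar_t) * eps
    x_t = a.sqrt() * fine_depth + (1.0 - a).sqrt() * noise

    # Conditioning: the network sees the noisy fine-grid depth alongside the
    # coarse-grid depth and DEM as extra input channels.
    inp = torch.cat([x_t, coarse_depth, dem], dim=1)   # (B, 3, 128, 128)
    pred_noise = model(inp, t)                         # assumed denoiser interface
    return F.mse_loss(pred_noise, noise)
```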
The proposed approach is tested on four large Australian catchments: Wollombi, Chowilla, Burnett River, and Lismore. Compared with general-purpose diffusion models trained on natural images, the models trained for these catchments converged faster, owing to the strong correlation between coarse- and fine-grid model outputs. The resulting flood depth maps closely matched fine-grid model predictions and outperformed popular U-Net-based super-resolution models in accuracy. Notably, a model trained on data from one catchment demonstrated strong generalizability, performing well on other catchments with minimal transfer learning.
While diffusion models traditionally suffer from slow inference because images are generated iteratively from random noise, this study substantially reduced inference time by initiating the denoising process from noisy coarse-grid images instead of pure random noise. Future research will focus on further reducing inference times by transitioning from pixel-space to latent-space diffusion models.
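A minimal sketch of this accelerated sampling, assuming a DDPM ancestral sampler truncated in the style of SDEdit: the coarse-grid patch is noised only up to an intermediate timestep `t_start` and denoised from there, so only `t_start` steps are needed rather than the full schedule. The schedule, the `t_start` value, and the `model` interface follow the assumptions of the training sketch above.

```python
import torch

T = 1000
betas = torch.linspace(1e-4, 0.02, T)            # same assumed schedule as in training
alphas_cum = torch.cumprod(1.0 - betas, dim=0)

@torch.no_grad()
def refine_patch(model, coarse_depth, dem, t_start=250):
    """Refine a coarse-grid depth patch by truncated ancestral sampling."""
    # Noise the coarse estimate to level t_start instead of sampling x_T ~ N(0, I).
    a_start = alphas_cum[t_start - 1]
    x = a_start.sqrt() * coarse_depth + (1.0 - a_start).sqrt() * torch.randn_like(coarse_depth)

    # Standard DDPM reverse steps, but starting from t_start - 1 rather than T - 1.
    for t in reversed(range(t_start)):
        tt = torch.full((x.shape[0],), t, device=x.device, dtype=torch.long)
        eps = model(torch.cat([x, coarse_depth, dem], dim=1), tt)
        alpha_t, abar_t = 1.0 - betas[t], alphas_cum[t]
        x = (x - betas[t] / (1.0 - abar_t).sqrt() * eps) / alpha_t.sqrt()
        if t > 0:
            x = x + betas[t].sqrt() * torch.randn_like(x)  # sigma_t^2 = beta_t variance
    return x
```

The intuition behind starting from the noised coarse image is that the coarse estimate already carries most of the low-frequency structure of the flood map, so comparatively few denoising steps are needed to recover fine-grid detail.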
How to cite: Herath Mudiyanselage, V. V. H., Marshall, L., Saha, A., Neo, S. H., Rasnayaka, S., and Seneviratne, S.: Physics-Informed Generative AI for High-Resolution Flood Mapping, EGU General Assembly 2025, Vienna, Austria, 27 Apr–2 May 2025, EGU25-9285, https://doi.org/10.5194/egusphere-egu25-9285, 2025.