EGU26-19225, updated on 14 Mar 2026
https://doi.org/10.5194/egusphere-egu26-19225
EGU General Assembly 2026
© Author(s) 2026. This work is distributed under
the Creative Commons Attribution 4.0 License.
Poster | Wednesday, 06 May, 08:30–10:15 (CEST), Display time Wednesday, 06 May, 08:30–12:30
 
Hall A, A.72
Multi-Sensor Terrain Reconstruction for High-Resolution Urban Flood Modelling
Yaxin Zhang1, Huili Cheng2, Qiuhua Liang1,2, Yifei Zong1, and Baoshan Shi1
  • 1Zhengzhou University, School of Water Conservancy and Transportation, China (1514906491@qq.com)
  • 2School of Architecture, Building and Civil Engineering, Loughborough University, Loughborough, UK (q.liang@lboro.ac.uk)

High-resolution (approximately 1 m) flood modelling is increasingly recognised as essential for resolving flow pathways and hydraulic connectivity in complex urban environments. At this scale, flood dynamics are strongly controlled by micro-topographic features, including hydraulically permeable elements beneath vegetation and bridge structures, as well as small-scale obstructions such as kerbs and surface discontinuities. However, conventional Digital Terrain Model (DTM) generation approaches struggle to reliably represent such features due to vegetation occlusion and sensor-specific terrain acquisition limitations, often necessitating extensive manual intervention.

This study presents a semi-automated terrain reconstruction framework that integrates Unmanned Aerial Vehicle (UAV)-borne LiDAR, UAV oblique photogrammetry, handheld LiDAR Simultaneous Localization and Mapping (SLAM), and Real-Time Kinematic Global Navigation Satellite System (RTK-GNSS) measurements to generate flood-ready DTMs for high-resolution hydrodynamic modelling. Rather than treating multi-source datasets as interchangeable inputs, the framework explicitly exploits their differing information characteristics and spatial sensitivities to occlusion and ground accessibility. UAV LiDAR provides spatially continuous but occlusion-prone surface measurements, handheld LiDAR SLAM offers dense ground-level observations in vegetated and structurally complex areas, and RTK-GNSS provides sparse but high-accuracy elevation control.

An initial DTM is established through adaptive fusion of morphologically filtered UAV-derived DTMs and SLAM-derived ground observations, supported by vegetation indices extracted from digital orthophoto maps and SLAM point-density metrics. To address residual elevation errors arising from partial occlusion and sensor limitations, a residual learning strategy based on a U-Net architecture is employed to predict local elevation corrections relative to RTK-GNSS ground truth. The learning component is explicitly constrained to operate as a local correction mechanism rather than an end-to-end terrain predictor, thereby preserving physical plausibility and spatial consistency of the reconstructed terrain.
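As a minimal sketch of the adaptive fusion idea described above, the snippet below blends a UAV-derived DTM with SLAM-derived ground elevations on a shared grid, weighting the SLAM surface more heavily where a vegetation index suggests UAV occlusion and where SLAM point density is sufficient. The function name, thresholds, and the specific weighting form are illustrative assumptions, not the study's calibrated scheme.

```python
import numpy as np

def fuse_dtms(uav_dtm, slam_dtm, veg_index, slam_density,
              veg_threshold=0.4, density_ref=50.0):
    """Adaptive fusion of two DTM rasters (hypothetical weighting scheme).

    Where vegetation cover is high (UAV LiDAR likely occluded) and the
    handheld SLAM point density is adequate, the SLAM-derived ground
    elevation dominates; elsewhere the UAV-derived DTM dominates.
    Thresholds and the linear weighting are illustrative only.
    """
    # Confidence in the SLAM surface grows with local point density (pts/m^2).
    density_conf = np.clip(slam_density / density_ref, 0.0, 1.0)
    # Vegetation index above the threshold flags likely UAV occlusion.
    occlusion = np.clip((veg_index - veg_threshold) / (1.0 - veg_threshold),
                        0.0, 1.0)
    # Per-cell weight for the SLAM surface; the UAV DTM gets the remainder.
    w_slam = occlusion * density_conf
    fused = (1.0 - w_slam) * uav_dtm + w_slam * slam_dtm
    # Fall back to the UAV DTM where SLAM coverage is missing (NaN cells).
    return np.where(np.isnan(slam_dtm), uav_dtm, fused)
```

In this sketch the subsequent residual-learning stage would then predict a local correction raster (bounded in magnitude) to be added to the fused DTM, keeping the learned component a correction rather than an end-to-end terrain predictor.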

The framework is demonstrated over the Zhengzhou University campus (approximately 1 km²), encompassing diverse building typologies, vegetation densities, and pedestrian and vehicular infrastructure. The hydrodynamic relevance of the reconstructed DTM is evaluated using the High-Performance Integrated Hydrodynamic Modelling System (HiPIMS) through comparative two-dimensional simulations of historical extreme flood events. Results demonstrate that improved representation of micro-topographic controls significantly enhances simulated flow connectivity, inundation extent, and inundation timing relative to conventional terrain products, and the study provides a transferable workflow for campus- and neighbourhood-scale flood modelling, risk assessment, and urban digital twin applications.

How to cite: Zhang, Y., Cheng, H., Liang, Q., Zong, Y., and Shi, B.: Multi-Sensor Terrain Reconstruction for High-Resolution Urban Flood Modelling, EGU General Assembly 2026, Vienna, Austria, 3–8 May 2026, EGU26-19225, https://doi.org/10.5194/egusphere-egu26-19225, 2026.