EGU24-16923, updated on 11 Mar 2024
https://doi.org/10.5194/egusphere-egu24-16923
EGU General Assembly 2024
© Author(s) 2024. This work is distributed under
the Creative Commons Attribution 4.0 License.

Flood Segmentation with Optical Satellite Images Under Clouds Using Physically Constrained Machine Learning

Chloe Campo1, Paolo Tamagnone1, Guillaume Gallion1, and Guy Schumann1,2
  • 1RSS-Hydro, Research and Education Department (ccampo@rss-hydro.lu)
  • 2School of Geographical Sciences, University of Bristol, Bristol, UK

Timely and accurate flood map production plays a key role in effective flood risk assessment and management. Satellite imagery is frequently employed in flood mapping because it can capture flooding across vast spatial and temporal scales. However, floods are usually caused by prolonged or heavy precipitation, which is typically accompanied by dense cloud cover, posing challenges for accurate mapping.

Synthetic Aperture Radar (SAR) is a popular option because, as an active sensor, it is weather-agnostic: it penetrates clouds, fog, and darkness, providing images for the detection of flooded areas regardless of the weather conditions. However, this advantage comes at the expense of lower temporal resolution and double-bounce scattering in urban and heavily vegetated areas, which complicates signal processing and can lead to misinterpretation. Passive microwave radiometry has also been explored for flood mapping, but its coarse spatial resolution limits the utility of the resulting flood maps. Multispectral optical imagery offers a balanced trade-off between temporal and spatial resolution, with the limitation that the acquired images may be obstructed by clouds. Capitalizing on the utility of optical imagery, FloodSENS, a machine-learning (ML) algorithm combining a SENet and a UNet, delineates flooded from non-flooded areas in clear and partially clouded optical imagery. Although the current version of the algorithm incorporates topography-derived information in the ML processing to constrain flood delineation, it cannot detect floods under clouds; we therefore propose a new iteration of FloodSENS that uses auxiliary data in post-processing to improve the inferred flood maps.
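To make the SENet + UNet combination more concrete, the sketch below shows a small U-Net segmentation network whose convolutional blocks are augmented with squeeze-and-excitation (SE) channel reweighting. It is an illustrative assumption, not the published FloodSENS architecture: channel counts, depth, and the five input bands (e.g. four spectral bands plus a topography-derived layer) are hypothetical choices for this example.

```python
# Minimal sketch of an SE-augmented U-Net for per-pixel flood segmentation.
# All hyperparameters (in_ch, base, reduction) are illustrative assumptions.
import torch
import torch.nn as nn

class SEBlock(nn.Module):
    """Squeeze-and-excitation: reweight channels using global image context."""
    def __init__(self, channels, reduction=8):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)                  # squeeze
        self.fc = nn.Sequential(                             # excitation
            nn.Linear(channels, channels // reduction), nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels), nn.Sigmoid())

    def forward(self, x):
        b, c, _, _ = x.shape
        w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * w                                          # channel reweighting

def conv_block(in_ch, out_ch):
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True),
        SEBlock(out_ch))

class SEUNet(nn.Module):
    """Tiny U-Net with SE blocks; outputs one flood-probability channel."""
    def __init__(self, in_ch=5, base=32):
        super().__init__()
        self.enc1, self.enc2 = conv_block(in_ch, base), conv_block(base, base * 2)
        self.bottleneck = conv_block(base * 2, base * 4)
        self.pool = nn.MaxPool2d(2)
        self.up2 = nn.ConvTranspose2d(base * 4, base * 2, 2, stride=2)
        self.dec2 = conv_block(base * 4, base * 2)
        self.up1 = nn.ConvTranspose2d(base * 2, base, 2, stride=2)
        self.dec1 = conv_block(base * 2, base)
        self.head = nn.Conv2d(base, 1, 1)

    def forward(self, x):
        e1 = self.enc1(x)
        e2 = self.enc2(self.pool(e1))
        b = self.bottleneck(self.pool(e2))
        d2 = self.dec2(torch.cat([self.up2(b), e2], dim=1))
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))
        return torch.sigmoid(self.head(d1))                  # per-pixel flood probability
```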

The post-processing pipeline uses the flood map inferred by FloodSENS and a Digital Elevation Model (DEM) of the target area to delineate the flood extent beneath clouds while adhering to the physical constraints imposed by the topography. First, pixels at elevations equal to or lower than the water level are designated as candidate flooded pixels. These pixels are then refined with geoprocessing to enforce hydrological connectivity and topographic consistency. Pixels that are both marked as flooded and hydrologically connected are confirmed as flooded in the final flood map.
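A minimal sketch of this post-processing step is given below, using NumPy and SciPy. The water-level estimate (a high percentile of elevations under visibly flooded pixels), the connectivity rule, and the function and array names are assumptions made for illustration; the actual FloodSENS geoprocessing may differ.

```python
# Illustrative DEM-based refinement of an ML flood map under clouds.
import numpy as np
from scipy import ndimage

def extend_flood_under_clouds(flood_pred, cloud_mask, dem, water_level=None):
    """flood_pred: boolean ML flood map (unreliable under clouds).
    cloud_mask: True where the optical image was cloud-covered.
    dem: elevation in metres. Returns a refined boolean flood map."""
    # 1. Estimate the water level from visibly flooded pixels (assumption:
    #    a high percentile of their elevations approximates the water surface).
    if water_level is None:
        visible_flood = flood_pred & ~cloud_mask
        water_level = np.percentile(dem[visible_flood], 95)

    # 2. Mark cloud-covered pixels at or below the water level as candidates.
    candidates = flood_pred | (cloud_mask & (dem <= water_level))

    # 3. Enforce hydrological connectivity: keep only candidate regions that
    #    touch at least one pixel the ML model already classified as flooded.
    labels, _ = ndimage.label(candidates)
    connected_ids = np.unique(labels[flood_pred & (labels > 0)])
    return np.isin(labels, connected_ids) & (labels > 0)
```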

The post-processing proves essential in tropical and subtropical regions, which frequently experience high cloud cover during monsoon seasons, making it imperative to map affected areas during flooding events. FloodSENS combined with the post-processing pipeline has been tested on partially clouded optical imagery of the autumn 2023 flooding in southern Somalia.

How to cite: Campo, C., Tamagnone, P., Gallion, G., and Schumann, G.: Flood Segmentation with Optical Satellite Images Under Clouds Using Physically Constrained Machine Learning, EGU General Assembly 2024, Vienna, Austria, 14–19 Apr 2024, EGU24-16923, https://doi.org/10.5194/egusphere-egu24-16923, 2024.