EGU24-12801, updated on 09 Mar 2024
https://doi.org/10.5194/egusphere-egu24-12801
EGU General Assembly 2024
© Author(s) 2024. This work is distributed under
the Creative Commons Attribution 4.0 License.

Relative performance of super-resolved Sentinel-2 images for built-up area mapping using deep learning

Misganu Debella-Gilo
  • Norwegian University of Life Sciences (NMBU), misganu.debella-gilo(at)nmbu.no

Early detection of built-up areas is important for planning and for understanding the impacts of urbanization. Although the high temporal resolution and open accessibility of major spaceborne remote sensing missions offer great opportunities for such purposes, their spatial resolution remains a limitation. Enhancing the spatial resolution of such remote sensing images using deep learning algorithms is currently an active research topic. Multiple studies have shown that impressive results can be achieved on spaceborne optical images such as Sentinel-2 using variants of Generative Adversarial Networks (GANs) for image super-resolution, when evaluated with quantitative metrics such as signal-to-noise ratio and perceptual metrics such as the structural similarity index. The practical performance of such super-resolved images for downstream tasks, compared with original images of comparable resolution, is however less researched. In this work, we investigate the relative performance of super-resolved Sentinel-2 images for detecting built-up areas, in comparison to Very High Resolution (VHR) optical multispectral images from the Copernicus Contributing Missions (CCM) and the original Sentinel-2 images at 10 m spatial resolution. A GAN-based super-resolution model is trained on ten Sentinel-2 tiles from southern Norway to enhance the resolution to 2.5 m. The Sentinel-2 images were acquired during the summer of 2021; VHR images from the same season, with a spatial resolution of 2 m, are obtained from the CCM. A building footprint map from the national database is acquired for the same region. A U-Net-type semantic segmentation model is then trained on the three datasets separately, and built-up areas are predicted for a test region. The results are compared in terms of pixel accuracy, intersection over union (Jaccard index), and the number of building footprints detected.
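The abstract does not include code, but the two pixel-level evaluation metrics it names can be illustrated concretely. The following is a minimal sketch, assuming binary built-up masks stored as NumPy arrays; the function names and the toy masks are illustrative, not taken from the study.

```python
import numpy as np

def pixel_accuracy(pred, target):
    """Fraction of pixels where the predicted mask matches ground truth."""
    pred = np.asarray(pred, dtype=bool)
    target = np.asarray(target, dtype=bool)
    return (pred == target).mean()

def jaccard_index(pred, target):
    """Intersection over union (IoU) for binary built-up masks."""
    pred = np.asarray(pred, dtype=bool)
    target = np.asarray(target, dtype=bool)
    intersection = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    # By convention, two empty masks are a perfect match.
    return intersection / union if union > 0 else 1.0

# Toy 4x4 masks: 1 = built-up pixel. Prediction over-detects two pixels.
pred  = np.array([[1, 1, 0, 0],
                  [1, 1, 0, 0],
                  [0, 0, 0, 0],
                  [0, 0, 1, 1]])
truth = np.array([[1, 1, 0, 0],
                  [1, 0, 0, 0],
                  [0, 0, 0, 0],
                  [0, 0, 1, 0]])

print(pixel_accuracy(pred, truth))  # 14/16 = 0.875
print(jaccard_index(pred, truth))   # 4/6 ≈ 0.667
```

Pixel accuracy rewards correct background pixels, so it can look high even when buildings are missed; the Jaccard index, computed only over the union of predicted and true built-up pixels, is the stricter of the two, which is presumably why the study reports both alongside the count of detected building footprints.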

How to cite: Debella-Gilo, M.: Relative performance of super-resolved Sentinel-2 images for built-up area mapping using deep learning, EGU General Assembly 2024, Vienna, Austria, 14–19 Apr 2024, EGU24-12801, https://doi.org/10.5194/egusphere-egu24-12801, 2024.