- 1Department of Environment and Energy, Jeonbuk National University, Jeonju 54896, Republic of Korea
- 2Department of Physics, Research Institute for Materials and Energy Sciences, Jeonbuk National University
- 3Department of Earth and Environmental Sciences, Jeonbuk National University, 567 Baekje-daero, Deokjin-gu, Jeonju-si, Jeollabuk-do, 54896, Republic of Korea
- 4Department of Landscape Architecture and Rural Systems Engineering, College of Agriculture and Life Sciences, Seoul National University
- 5Satellite Application Division, Korea Aerospace Research Institute, 169-84 Gwahak-ro, Yuseong-gu, Daejeon
Numerous Earth observation satellites have been developed and launched in recent years, yet low Earth orbit satellites offer high spatial resolution at the cost of long revisit cycles (low temporal resolution), whereas geostationary satellites provide high temporal resolution but low spatial resolution. As a result, reliably acquiring satellite imagery with both high spatial and high temporal resolution remains difficult. To overcome this trade-off, super-resolution fusion of images from multiple satellites has been actively studied. However, challenges such as data loss due to clouds, differences in revisit cycles between satellites, and sensor characteristic mismatches make it difficult to produce fused super-resolution images. This study therefore proposes a multi-satellite fusion framework that addresses these issues and reliably generates spatiotemporally super-resolved fusion images.
To this end, this study utilized various satellite images, including GK2A (high temporal frequency, 2 km resolution), MODIS (high temporal frequency, 500 m resolution), GOCI-II (high temporal frequency, 250 m resolution), Landsat-8 (30 m resolution), Sentinel-2 (10 m resolution), PlanetScope (2.8 m resolution), and KOMPSAT-3 (3 m resolution). Each satellite image underwent preprocessing steps, including geometric correction, radiometric correction, BRDF (Bidirectional Reflectance Distribution Function) correction, and normalization, to ensure spatial alignment and radiometric consistency.
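The normalization step described above can take several forms; as one illustrative possibility (not the authors' stated method), a simple relative radiometric normalization matches the mean and standard deviation of a target sensor's reflectance to a reference sensor over a co-registered overlap area. The function name and array inputs below are hypothetical:

```python
import numpy as np

def relative_normalization(target, reference):
    """Linearly rescale target reflectance so its mean and standard deviation
    match the reference sensor over a co-registered overlap region.
    target, reference: numpy arrays of surface reflectance (same shape)."""
    target = np.asarray(target, dtype=float)
    reference = np.asarray(reference, dtype=float)
    gain = reference.std() / target.std()          # scale factor
    offset = reference.mean() - gain * target.mean()  # additive shift
    return gain * target + offset
```

In practice, a regression over pseudo-invariant pixels or histogram matching is often preferred; the mean/std form above is only the simplest variant of the idea.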
Subsequently, a deep learning model based on DeepLabV3+ with a ResNet101 backbone was used to generate cloud mask label data, producing mask labels for clouds and missing areas in the imagery. These labels were then used to apply a gap-filling technique that reconstructs the cloud-contaminated and missing regions. Finally, a step-by-step resolution enhancement image fusion method, ordered by spatial resolution, was employed to produce a spatiotemporal super-resolution fused image.
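The abstract does not specify the gap-filling algorithm, but a minimal temporal variant can be sketched: given a time stack of images and the cloud/missing masks, each masked pixel is filled from the temporally nearest clear observation. The function below is an illustrative assumption, not the study's actual implementation:

```python
import numpy as np

def gap_fill(stack, cloud_mask):
    """Fill masked pixels from the temporally nearest clear observation.
    stack:      (T, H, W) reflectance time series
    cloud_mask: (T, H, W) bool, True where cloudy or missing."""
    T = stack.shape[0]
    filled = stack.copy()
    times = np.arange(T)
    for t in range(T):
        missing = cloud_mask[t].copy()
        if not missing.any():
            continue
        # visit other timesteps in order of temporal distance from t
        order = np.argsort(np.abs(times - t))
        for s in order[1:]:
            candidates = missing & ~cloud_mask[s]  # clear at s, missing at t
            filled[t][candidates] = stack[s][candidates]
            missing &= cloud_mask[s]
            if not missing.any():
                break
    return filled
```

Operational gap-filling usually adds radiometric adjustment between dates (or spatial interpolation as a fallback); this nearest-in-time rule only conveys the core idea.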
The final super-resolution fused image will be validated using spectral data collected from a ground observation tower located in Naju, Jeollanam-do, South Korea. The multi-satellite fusion framework proposed in this study efficiently overcomes spatiotemporal resolution limitations by combining deep learning and physics-based models across the processing stages. The fusion results are expected to be applicable to various remote sensing tasks, such as detecting climate change and environmental variations.
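Validation against tower-measured spectra typically reduces to comparing two reflectance time series. As a hedged sketch of what such a comparison might compute (the metric choice is an assumption, not stated in the abstract), the helper below reports bias, RMSE, and Pearson correlation between fused-image reflectance sampled at the tower pixel and the tower measurements:

```python
import numpy as np

def validation_metrics(fused, tower):
    """Compare fused-image reflectance at the tower pixel with tower-measured
    reflectance (both 1-D time series of matched observations)."""
    fused = np.asarray(fused, dtype=float)
    tower = np.asarray(tower, dtype=float)
    diff = fused - tower
    return {
        "bias": float(diff.mean()),                 # mean error
        "rmse": float(np.sqrt((diff ** 2).mean())), # root-mean-square error
        "r": float(np.corrcoef(fused, tower)[0, 1]) # Pearson correlation
    }
```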
Acknowledgements: This work was supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIT) (RS-2025-00515357).
How to cite: Han, D., Hahn, S., Ryu, Y., Jeong, S., Ha, J., and Yeom, J.: A Deep Learning and Physics-Based Multi-Satellite Fusion Framework for Spatiotemporal Super-Resolution Image Generation, EGU General Assembly 2026, Vienna, Austria, 3–8 May 2026, EGU26-9172, https://doi.org/10.5194/egusphere-egu26-9172, 2026.