EGU26-14956, updated on 14 Mar 2026
https://doi.org/10.5194/egusphere-egu26-14956
EGU General Assembly 2026
© Author(s) 2026. This work is distributed under
the Creative Commons Attribution 4.0 License.
Oral | Thursday, 07 May, 16:15–16:25 (CEST)
Room -2.15
Super-Resolving Any Place on Earth - Implicit Neural Representations for Sentinel-2 Time Series
Sander Jyhne1, Christian Igel2, Morten Goodwin1, Per-Arne Andersen1, Serge Belongie2, and Nico Lang2
  • 1Department of Information and Communication Technology (ICT), University of Agder, Grimstad, Norway
  • 2Department of Computer Science, University of Copenhagen, Copenhagen, Denmark

High-resolution imagery is limited by sensor technology, atmospheric effects, and acquisition costs. This is a well-known challenge in satellite remote sensing, but it also applies to ground-level imaging with handheld devices such as smartphones. Super-resolution seeks to overcome these limitations by enhancing image resolution algorithmically. Single-image super-resolution, however, is an ill-posed inverse problem and therefore depends on strong priors, typically learned from high-resolution training data or imposed through auxiliary information such as high-resolution guidance from another modality. While these methods often produce visually appealing results, they are prone to hallucinating structures that do not reflect the true scene content.

Multi-image super-resolution (MISR) addresses this issue by exploiting multiple low-resolution views of the same scene that are captured with sub-pixel shifts. In this work, we introduce SuperF, a test-time optimization approach for MISR based on coordinate-based neural networks, also known as neural fields. By representing images as continuous signals using implicit neural representations (INRs), neural fields are well suited for reconstructing high-resolution images from multiple aligned observations. The central idea of SuperF is to share a single INR across all low-resolution frames while jointly optimizing the image representation and the sub-pixel alignment between frames.
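The authors' implementation is not included in the abstract; the following is a minimal NumPy sketch of the core idea only, with hypothetical names (`TinyINR`, `affine_warp`) and toy random weights standing in for a trained coordinate network. It shows how a single shared implicit representation can render every low-resolution frame by warping the query coordinates with per-frame alignment parameters before evaluating the network.

```python
import numpy as np

def make_coord_grid(h, w):
    """Pixel-center coordinates in [-1, 1] x [-1, 1]."""
    ys = (np.arange(h) + 0.5) / h * 2 - 1
    xs = (np.arange(w) + 0.5) / w * 2 - 1
    gy, gx = np.meshgrid(ys, xs, indexing="ij")
    return np.stack([gx, gy], axis=-1).reshape(-1, 2)  # (h*w, 2)

class TinyINR:
    """Toy coordinate MLP with random Fourier features (untrained stand-in)."""
    def __init__(self, hidden=32, n_freqs=8, seed=0):
        rng = np.random.default_rng(seed)
        self.B = rng.normal(size=(2, n_freqs))           # Fourier feature matrix
        self.W1 = rng.normal(size=(2 * n_freqs, hidden)) * 0.1
        self.W2 = rng.normal(size=(hidden, 1)) * 0.1

    def __call__(self, coords):                          # coords: (N, 2)
        z = coords @ self.B
        feats = np.concatenate([np.sin(z), np.cos(z)], axis=-1)
        return np.maximum(feats @ self.W1, 0.0) @ self.W2  # (N, 1)

def affine_warp(coords, theta):
    """Apply a 2x2 linear map plus translation (6 params) to coordinates."""
    A = theta[:4].reshape(2, 2)
    t = theta[4:]
    return coords @ A.T + t

# One shared INR; one optimizable affine parameter vector per LR frame.
inr = TinyINR()
h = w = 16
coords = make_coord_grid(h, w)
frames_theta = [np.array([1, 0, 0, 1, 0.01 * k, -0.02 * k]) for k in range(4)]

# Render each frame by warping coordinates before querying the shared INR.
frames = [inr(affine_warp(coords, th)).reshape(h, w) for th in frames_theta]
```

In the actual method both the network weights and the per-frame `theta` would be optimized jointly against the observed frames; here the renderings merely illustrate that sub-pixel shifts yield distinct views of one continuous signal.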

Compared to prior INR-based approaches adapted from burst fusion and layer separation, SuperF directly parameterizes the sub-pixel alignment using optimizable affine transformation parameters and performs the optimization on a super-sampled coordinate grid corresponding to the target output resolution. We evaluate the proposed method on simulated bursts of satellite imagery as well as on ground-level images captured with handheld cameras, and observe consistent improvements for upsampling factors of up to 8. A key advantage of SuperF is that it operates entirely at test time and does not rely on any high-resolution training data.
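To make the super-sampled grid concrete, here is a hedged NumPy sketch (function names `supersampled_grid` and `downsample` are illustrative, not from the paper): the INR is queried on a grid at the target output resolution, and an average-pooling degradation maps the high-resolution rendering back to the low-resolution frame size, where a reconstruction loss against the observed frame would drive the test-time optimization.

```python
import numpy as np

def supersampled_grid(h_lr, w_lr, s):
    """Pixel-center coordinates in [-1, 1] at s times the LR resolution."""
    h, w = h_lr * s, w_lr * s
    ys = (np.arange(h) + 0.5) / h * 2 - 1
    xs = (np.arange(w) + 0.5) / w * 2 - 1
    gy, gx = np.meshgrid(ys, xs, indexing="ij")
    return np.stack([gx, gy], axis=-1)                 # (h, w, 2)

def downsample(hr, s):
    """Average-pool an HR rendering back to LR for the reconstruction loss."""
    h, w = hr.shape
    return hr.reshape(h // s, s, w // s, s).mean(axis=(1, 3))

s = 4                        # upsampling factor (the paper reports up to 8)
h_lr = w_lr = 8
grid = supersampled_grid(h_lr, w_lr, s)

# Stand-in for the INR's output on the super-sampled grid: a smooth function.
hr_pred = np.sin(3 * grid[..., 0]) * np.cos(2 * grid[..., 1])
lr_pred = downsample(hr_pred, s)

# MSE against an observed LR frame; gradients of this loss (w.r.t. the INR
# weights and the affine parameters) would be used at test time.
lr_obs = np.zeros_like(lr_pred)
loss = np.mean((lr_pred - lr_obs) ** 2)
```

Because the loss is computed only against the low-resolution observations, no high-resolution training data enters the pipeline, matching the abstract's claim that the method operates entirely at test time.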

How to cite: Jyhne, S., Igel, C., Goodwin, M., Andersen, P.-A., Belongie, S., and Lang, N.: Super-Resolving Any Place on Earth - Implicit Neural Representations for Sentinel-2 Time Series, EGU General Assembly 2026, Vienna, Austria, 3–8 May 2026, EGU26-14956, https://doi.org/10.5194/egusphere-egu26-14956, 2026.