ESSI1.5 | Multi-Modal, Multi-Sensor, Multi-Resolution and Multi-Temporal Approaches for Environmental Remote Sensing
EDI
Convener: Gencer Sümbül (ECS) | Co-conveners: D. Tuia, Marc Rußwurm (ECS), Nikolaos Dionelis (ECS), Javiera Castillo Navarro (ECS)
Orals | Fri, 02 May, 14:00–15:45 (CEST) | Room -2.92
Posters on site | Attendance Fri, 02 May, 10:45–12:30 (CEST) | Display Fri, 02 May, 08:30–12:30 | Hall X4
Posters virtual | Attendance Tue, 29 Apr, 14:00–15:45 (CEST) | Display Tue, 29 Apr, 08:30–18:00 | vPoster spot 4
Recent breakthroughs in machine learning, notably deep learning, which enable data-driven AI models to exploit massive amounts of data, have created unprecedented potential for large-scale environmental monitoring through remote sensing. Despite the success of existing deep learning-based approaches in many remote sensing applications, their shortcomings in jointly leveraging the various facets of Earth observation data prevent them from fully exploiting the potential of remote sensing for the environment. In particular, integrating multiple data modalities and remote sensing sensors, applying deep learning methods to Earth observation data of multiple spatial and spectral resolutions, and modeling space and time together offer remarkable opportunities for a comprehensive and accurate understanding of the environment. In this session, we aim to gather the community to delve into the latest scientific advances that leverage these multi-dimensional approaches to tackle pressing environmental challenges.

Orals: Fri, 2 May | Room -2.92

The oral presentations are given in a hybrid format supported by a Zoom meeting featuring on-site and virtual presentations. The button to access the Zoom meeting appears just before the time block starts.
Chairperson: Gencer Sümbül
14:00–14:05
Multi-Resolution Approaches
14:05–14:15
|
EGU25-2475
|
On-site presentation
Lizhen Lu and Yuqi Du

The rapid expansion of Plastic-Mulched Landcover (PML), characterized by its relatively small size and short lifespan, necessitates precisely mapping PML using High-Resolution Remote Sensing Imagery (HRRSI). However, the high costs and limited temporal resolution of acquiring HRRSI pose significant challenges for precise PML identification. Remote Sensing Image Super-Resolution (RSISR) offers a viable solution by reconstructing high-resolution images from lower-resolution inputs, enhancing PML detection capabilities. Building on the hybrid attention transformer, this study develops a Multi-Scale Gated Feedforward Attention Network (MSG-FAN) for super-resolution reconstruction of Sentinel-2 data to meter-level resolution. The main contributions include: (1) Construction of a PML remote sensing dataset comprising 5300 pairs of 10-m Sentinel-2 and corresponding 2.5-m Gaofen-2 and Planetscope images from eight globally selected plastic-mulched planting regions. (2) Development of the MSG-FAN model, which enhances multi-channel, multi-scale and global attention by integrating a Gated Multi-Scale Feedforward Layer (GMS-FL), a Top-k Token Selective Attention (TTSA) module and a Global Context Attention (GCA) module. (3) Demonstration that MSG-FAN outperforms nine state-of-the-art deep learning-based super-resolution networks, achieving an average PSNR of 30.81 and SSIM of 0.7287. Our proposed MSG-FAN model advances RSISR techniques and addresses critical challenges in monitoring plastic-mulched planting regions.

How to cite: Lu, L. and Du, Y.: A Multi-Scale Gated Feedforward Attention Network for Super-Resolution Reconstruction of Remote Sensing Images in Plastic-Mulched Planting Regions, EGU General Assembly 2025, Vienna, Austria, 27 Apr–2 May 2025, EGU25-2475, https://doi.org/10.5194/egusphere-egu25-2475, 2025.
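As a concrete illustration of the evaluation metric quoted in the abstract above, a minimal PSNR computation can be sketched as follows (illustrative code, not the authors' implementation; function names and toy pixel values are ours; pixel values assumed in [0, 1]):

```python
# Sketch: peak signal-to-noise ratio (PSNR) between a super-resolved
# image and a high-resolution reference, as used to evaluate MSG-FAN.
import math

def psnr(reference, reconstruction, max_val=1.0):
    """PSNR in dB between two equally sized 2D images (lists of rows)."""
    flat_ref = [p for row in reference for p in row]
    flat_rec = [p for row in reconstruction for p in row]
    mse = sum((a - b) ** 2 for a, b in zip(flat_ref, flat_rec)) / len(flat_ref)
    if mse == 0:
        return float("inf")  # identical images
    return 10 * math.log10(max_val ** 2 / mse)

ref = [[0.2, 0.4], [0.6, 0.8]]
rec = [[0.2, 0.5], [0.6, 0.8]]   # one pixel off by 0.1
print(round(psnr(ref, rec), 2))  # 26.02
```

Higher PSNR means a reconstruction closer to the reference; the 30.81 reported above is an average over the test set.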

14:15–14:25
|
EGU25-18922
|
On-site presentation
Manuel Traub, Matthias Karlbauer, Florian M. Hellwig, Thomas Jagdhuber, and Martin V. Butz

Remote sensing data from satellites offer real-world observations on large spatial scales without incorporating model biases and simplifications such as those contained in reanalysis datasets. Numerical weather prediction models benefit greatly from data with high temporal and spatial resolution, as provided by Earth observation remote sensing missions. Yet, while geostationary (GEO) satellites provide data at high temporal, e.g., 15 minutes, but low spatial resolution, e.g., 5 km, low Earth orbit (LEO) satellites deliver data at low temporal, e.g., 16 days, but high spatial resolution, e.g., 90 m. In this research study, we therefore train a combination of a masked autoencoder and a ResNet model to learn a mapping from GEO to LEO Land-Surface Temperature (LST) products. The model receives the coarse-resolution 5 km LST from the Copernicus Global Land Service (alongside other static inputs) to approximate the fine-grained 70 m LST product from NASA’s ECOSTRESS mission. We use the spatial domain extent over Europe defined by the Land Atmosphere Feedback Initiative (LAFI). In theory, our algorithm allows the generation of 70 m LST estimates at a temporal resolution of 15 minutes. However, missing or corrupted input patches, when covered by clouds or in the event of missing sensor coverage or outages, challenge this optimal resolution. Therefore, we aim at 70 m daily LST estimates across continental Europe. We will present examples of the super-resolution results from different biome regions across Europe, highlighting the potential and limitations of our approach.

How to cite: Traub, M., Karlbauer, M., Hellwig, F. M., Jagdhuber, T., and Butz, M. V.: Land-Surface Temperature Super Resolution from Geostationary to Low Earth Orbit Satellite Products with a Masked Autoencoder, EGU General Assembly 2025, Vienna, Austria, 27 Apr–2 May 2025, EGU25-18922, https://doi.org/10.5194/egusphere-egu25-18922, 2025.
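The masked-autoencoder component above hides a large fraction of the input patches and trains the network to reconstruct them from the visible remainder. A minimal sketch of the masking step (the 75% ratio and all names are illustrative assumptions, not the authors' configuration):

```python
# Sketch: random patch masking as used by masked autoencoders.
import random

def mask_patches(patches, mask_ratio=0.75, seed=0):
    """Return (visible_patches, masked_indices): hide a random subset."""
    rng = random.Random(seed)
    n = len(patches)
    n_masked = int(n * mask_ratio)
    masked = set(rng.sample(range(n), n_masked))
    visible = [p for i, p in enumerate(patches) if i not in masked]
    return visible, sorted(masked)

patches = list(range(16))          # e.g. a 4x4 grid of image patches
visible, masked = mask_patches(patches)
print(len(visible), len(masked))   # 4 12
```

The encoder sees only the visible patches; the reconstruction loss is computed on the masked ones.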

Multi-Temporal Approaches
14:25–14:35
|
EGU25-11619
|
ECS
|
On-site presentation
Elif Donmez Altindal, Johannes Leonhardt, Ribana Roscher, Thomas Heckelei, and Hugo Storm

Crop maps provide valuable insights for a range of applications, including water resources management, crop yield prediction, and the planning of domestic and foreign policies. Information on seasonal or yearly agricultural land cover can help governments and organizations make informed decisions to address agricultural challenges and promote environmental sustainability. However, large-scale land cover mapping remains a significant challenge due to the high computational demands of processing remote sensing data, especially when using high-resolution imagery for large-scale applications such as country-wide mapping.

This high computational requirement can be partially mitigated by performing object-based classification, where data is summarized into segments or fields. A challenge in object-level mapping is selecting an appropriate method for analyzing and interpreting the data. For instance, convolutional neural networks (CNNs), a commonly used deep learning algorithm, are not directly applicable in their basic form because they require a gridded structure. Graph Neural Networks (GNNs) present a novel approach, effectively analyzing relationships between objects represented as nodes and edges in a graph. The ability of GNNs to capture complex relationships between segments in a non-grid structure offers distinct advantages, such as handling irregular or non-Euclidean data and exploiting spatial and temporal dependencies within a region. This makes GNNs particularly well-suited for high-resolution remote sensing tasks where traditional grid-based methods may struggle with spatial context and object interactions.

This study applies GNNs to multitemporal Sentinel-1 Interferometric Wide Swath data (VV and VH polarizations), leveraging ten-day composites from May to September to capture seasonal crop growth dynamics. Training, testing, and validation datasets cover 40×40 km², 20×20 km², and 20×20 km² areas, respectively, within North Rhine-Westphalia, Germany. Sentinel-1 images are segmented using the Felzenszwalb-Huttenlocher algorithm, grouping pixels into objects. Each segment’s average backscatter values are calculated, and crop class labels are assigned using InVeKos ground truth data, which includes field boundaries and crop information. This data is transformed into a graph, where nodes represent segments, and edges define adjacency. The GraphSAGE framework is employed to train the GNN model.

Performance comparisons include segment-level and pixel-level neural networks (NNs). Preliminary results show that GNNs achieved the highest accuracy (88.01%), outperforming segment-level NN (86.02%) and pixel-level NN (78.89%). GNNs also demonstrated efficient computational performance, with shorter inference times (0.19 seconds) compared to pixel-based methods (10.7 seconds), and generated more homogeneous maps, minimizing the salt-and-pepper effect.

These results highlight the potential of GNNs for scalable, object-based mapping at high resolution. The approach will be expanded to classify cropland across Germany, generating a 10-meter spatial resolution crop map. By leveraging the temporal dynamics of Sentinel-1 data and incorporating 2022 data, this method offers an efficient and robust framework for large-scale applications in crop management, land-use monitoring, and resource planning.

How to cite: Donmez Altindal, E., Leonhardt, J., Roscher, R., Heckelei, T., and Storm, H.: Graph Neural Networks for Crop Cover Mapping, EGU General Assembly 2025, Vienna, Austria, 27 Apr–2 May 2025, EGU25-11619, https://doi.org/10.5194/egusphere-egu25-11619, 2025.
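The GraphSAGE framework mentioned above updates each node by aggregating features from its neighbours. A toy sketch of one mean-aggregation step over scalar segment features (weights, graph, and values are illustrative, not the study's data or code):

```python
# Sketch: GraphSAGE-style mean aggregation. Each segment (node) combines
# its own feature with the mean of its adjacent segments' features.

def sage_mean_layer(features, adjacency, w_self=0.5, w_neigh=0.5):
    """One mean-aggregation step over per-node scalar features."""
    out = {}
    for node, feat in features.items():
        neigh = adjacency.get(node, [])
        mean_n = sum(features[n] for n in neigh) / len(neigh) if neigh else 0.0
        out[node] = w_self * feat + w_neigh * mean_n
    return out

feats = {"a": 1.0, "b": 3.0, "c": 5.0}            # toy backscatter means
adj = {"a": ["b"], "b": ["a", "c"], "c": ["b"]}   # segment adjacency
print(sage_mean_layer(feats, adj))  # {'a': 2.0, 'b': 3.0, 'c': 4.0}
```

In the real model these are learned linear layers over vector features; stacking such layers lets each segment see a growing neighbourhood.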

14:35–14:45
|
EGU25-9376
|
ECS
|
Virtual presentation
Gioacchino Alex Anastasi, Giuseppe Piparo, and Alessia Rita Tricomi

The integration of remote sensing and deep learning has revolutionized environmental monitoring, leveraging cutting-edge technologies to assist the decision-making processes in resource management and offering advanced tools for rapid disaster response. Our work employs satellite imagery to address pressing challenges in Earth observation, integrating multi-sensor, multi-resolution, and multi-temporal data for studying the aftermath of disastrous events by means of deep learning models, capable of handling such diverse data modalities.

We focused on the segmentation of wildfire-affected areas, using multispectral images from the Sentinel-2 satellites combined with information from the Copernicus Emergency Management Service, in particular the geolocation and impact assessments, for more than 100 events, occurring mostly in the European Mediterranean region. This dataset is further enriched with observations from the Sentinel-1 and Sentinel-3 satellites, ensuring a comprehensive representation of the effects of each wildfire event by integrating measurements from multiple sensors with varying resolutions and revisit times. To streamline the workflow, a custom library based on the SentinelHub API has been developed, facilitating the download, preprocessing, and combination of data from different sources.

The study is performed on time-series of images, incorporating pre-event and post-event data, processed with a deep learning approach that combines Convolutional Long Short-Term Memory (ConvLSTM) layers in a UNet-like architecture. The results demonstrate the effectiveness of our model in accurately segmenting the affected areas, thus providing actionable insights for emergency management and recovery. Furthermore, the varied dataset, which comprises wildfire events occurring in diverse geographical conditions, enhances the robustness and generalizability of the described methodology.

This work is supported by ICSC – Centro Nazionale di Ricerca in High Performance Computing, Big Data and Quantum Computing, funded by European Union – NextGenerationEU, and it has been carried out within the Spoke 2 (“Fundamental Research and Space Economy”) as part of the activities in the Working Group 6 (“Cross-Domain Initiatives and Space Economy”) under the flagship use-case “AI algorithms for (satellite) imaging reconstruction”.

How to cite: Anastasi, G. A., Piparo, G., and Tricomi, A. R.: Advancing environmental monitoring through deep learning: wildfire segmentation using time-series of images from the Sentinel constellation, EGU General Assembly 2025, Vienna, Austria, 27 Apr–2 May 2025, EGU25-9376, https://doi.org/10.5194/egusphere-egu25-9376, 2025.

Multi-Sensor Approaches
14:45–14:55
|
EGU25-16027
|
On-site presentation
Yeji Choi, Hyun Gon Ryu, Minseok Seo, and Doyi Kim

Geostationary satellites such as GOES, GK2A, Himawari, and MTG/MSG have been providing a wealth of observational data over the past few decades, capturing the evolution and movement of clouds and precipitation systems with unprecedented detail. This extensive record has been instrumental in advancing our understanding of atmospheric dynamics and supporting the continuous monitoring of extreme weather events and natural disasters. Despite their capability to observe wide areas and the advantage of covering the entire globe with data from just three satellites, utilizing these geostationary satellites to model the Earth system on a global scale has proven challenging due to differences in observation intervals, spectral channels, and coverage footprints. To address these challenges, we propose a zero-shot Video Frame Interpolation (VFI) framework designed to harmonize imagery from multiple geostationary satellites. This method adapts the Many-to-Many Splatting VFI model, originally developed for RGB video processing, to work with single-channel infrared satellite imagery. By generating intermediate frames at high temporal resolutions (e.g., 2–5-minute intervals), our approach enables near-synchronous global coverage with improved temporal consistency. This method offers two key benefits. First, it enhances uniform sampling across satellites, creating a cohesive global view that is particularly valuable in data-sparse regions. Second, the interpolated frames improve the ability to capture and track critical meteorological phenomena. In addition, we address practical considerations such as computational efficiency, consistency with radiative or brightness temperature fields, and the robustness of zero-shot generalization. Our findings suggest that this zero-shot VFI framework can significantly advance global nowcasting, providing a pathway to more accurate and timely Earth system modeling.

How to cite: Choi, Y., Ryu, H. G., Seo, M., and Kim, D.: Harmonizing Multisatellite Geostationary Observations Using Zero-Shot Video Frame Interpolation, EGU General Assembly 2025, Vienna, Austria, 27 Apr–2 May 2025, EGU25-16027, https://doi.org/10.5194/egusphere-egu25-16027, 2025.
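The Many-to-Many Splatting model is considerably more sophisticated, but the core idea of synthesising an intermediate frame between two observations can be illustrated with linear blending, a common VFI baseline (our sketch with toy brightness-temperature values, not the authors' method):

```python
# Sketch: pixel-wise linear interpolation between two satellite frames
# at fractional time alpha in [0, 1] (0 = first frame, 1 = second).

def blend_frames(frame_t0, frame_t1, alpha):
    """Interpolate two equally sized 2D frames pixel by pixel."""
    return [
        [(1 - alpha) * p0 + alpha * p1 for p0, p1 in zip(r0, r1)]
        for r0, r1 in zip(frame_t0, frame_t1)
    ]

# Two frames 10 minutes apart (toy brightness temperatures, K); midpoint.
f0 = [[280.0, 282.0], [284.0, 286.0]]
f1 = [[282.0, 284.0], [286.0, 288.0]]
print(blend_frames(f0, f1, 0.5))  # [[281.0, 283.0], [285.0, 287.0]]
```

Splatting-based VFI instead estimates motion and warps pixels along it, which handles moving cloud systems far better than this purely temporal blend.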

14:55–15:05
|
EGU25-1195
|
On-site presentation
Guillaume Eynard-Bontemps, Stéphane May, Dawa Derksen, Nicolas Dublé, Pierre-Jean Coquard, and Pauline Audenino

Traditional approaches for classifying hydrological surfaces - water, turbid water, salt pan, snow, and ice - usually rely on only one remote sensing dataset (often optical data such as Sentinel-2). They face limitations in cloud-covered areas and often confuse similar surface types (snow and salt pans, water and shadows). To overcome this, the study explores the use of Convolutional Neural Networks that can integrate spatial context, trained with multiple data sources such as SAR (e.g., Sentinel-1), optical imagery, and exogenous inputs (weather, elevation).

Deep Neural Networks are well-suited for texture extraction in remote sensing imagery and can efficiently handle inputs with multiple spectral bands. However, processing data from various sensor modalities introduces the challenge of aligning these inputs within a shared feature space where correlations can be effectively captured. To address this challenge, we developed a classical encoder-decoder architecture and explored the use of multiple encoders feeding into a single shared decoder. Two types of encoder families – EfficientNet and Swin Transformer – and two types of decoders – UNET and FPN – alongside various fusion methods were tried and showed similar performances.

For this study, a global multimodal database was gathered using open-source data from the Copernicus program. Initial trials with 17 labelled scenes (50GB) showed poor generalisation capabilities, leading to the extension of the dataset to 57 different scenes worldwide. Additional products were integrated, including Sentinel-1 (GRD VV+VH) data, 30m digital elevation models (ASTER GDEM), and meteorological data (from the ECMWF) to build the final 350GB database. Segmentation masks were generated semi-automatically (using a first version of our DL network) and then refined through visual inspection of Sentinel-2 images.

Results showed improved classification performance for all target classes when elevation data was included, and a dedicated dual-encoder-decoder model architecture proved particularly effective. On the other hand, the integration of Sentinel-1 SAR data did not improve performance, likely due to the low temporal correlation between Sentinel-1 and Sentinel-2 acquisitions (3-day average gap). Similarly, adding meteorological information did not enhance results, as our experiments showed that the model consistently disregarded scalar inputs regardless of the integration approach.

Our model demonstrated notable robustness on the global database and was compared to existing CNES classification chains, including SurfWater (surface water detection) and Let-It-Snow (snow segmentation in mountains). Classification performance was comparable to SurfWater, though snow classification showed limitations in comparison to Let-It-Snow, particularly in the French Pyrenees.

The findings from this study underscore the potential of a multimodal approach in improving hydrological surface classification, particularly by incorporating data such as elevation. Future work could focus on increasing the volume of labelled data used to train the network to further enhance the model’s global applicability and precision across varied geographic and climatic conditions. Additionally, to fully leverage SAR imagery, reworking the database with more precise, directly annotated products would be essential. Finally, other approaches should be explored to take meteorological data into account, for example using seasonality or more complex inputs. Other exogenous data, such as terrain shadows, could also be added.

How to cite: Eynard-Bontemps, G., May, S., Derksen, D., Dublé, N., Coquard, P.-J., and Audenino, P.: Hydrological surfaces classification with Deep Learning using multiple sensors and exogeneous data, EGU General Assembly 2025, Vienna, Austria, 27 Apr–2 May 2025, EGU25-1195, https://doi.org/10.5194/egusphere-egu25-1195, 2025.
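The multi-encoder design described above can be caricatured in a few lines: each modality passes through its own encoder, and the resulting features are concatenated as input to a shared decoder. Everything here (the toy `encode` function, scale factors, and values) is a hypothetical illustration, not the study's architecture:

```python
# Sketch: per-modality encoding followed by feature concatenation,
# mimicking a multi-encoder / shared-decoder fusion scheme.

def encode(bands, scale):
    """Stand-in 'encoder': one scaled summary feature per input band."""
    return [scale * sum(band) / len(band) for band in bands]

optical = [[0.3, 0.4], [0.5, 0.6]]   # two toy optical bands (reflectance)
sar = [[-12.0, -11.0]]               # one toy SAR backscatter band (dB)

# Concatenated feature vector a shared decoder would consume.
fused = encode(optical, 1.0) + encode(sar, 0.1)
print([round(f, 3) for f in fused])  # [0.35, 0.55, -1.15]
```

In the real model the encoders are EfficientNet or Swin Transformer branches and fusion is attention-weighted, but the shared-feature-space idea is the same.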

Multi-Modal Approaches
15:05–15:15
|
EGU25-16891
|
ECS
|
On-site presentation
Stella Girtsou, Lilli Freischem, Kyriaki-Margarita Bintsi, Guiseppe Castiglione, Emiliano Diaz Salas-Porras, Michael Eisinger, Emmanuel Johnson, William Jones, Anna Jungbluth, and Joppe Massant

Clouds affect Earth’s radiation balance by reflecting incoming sunlight (cooling effect) and trapping outgoing infrared radiation (warming effect). Their vertical distribution in the atmosphere significantly influences their radiative properties and overall climate impacts. However, how clouds will respond to climate change remains unknown: cloud feedbacks are the largest source of uncertainty in climate projections. Global 3D cloud data can help reduce these uncertainties, improve climate predictions, and support better decision-making.

Clouds are observed globally from space using satellites, which provide insights into their distribution, structure, and evolution. Observations from the Cloud Profiling Radar (CPR) aboard NASA’s CloudSat mission have provided valuable information on the vertical distribution of clouds. However, its long revisit times (~ 16 days), narrow swath (1.4 km) and observations limited to the same local time each day hinder our ability to study the temporal evolution of clouds or their diurnal cycle. In contrast, imaging instruments observe larger regions with higher temporal resolution but only offer a top-down view with limited vertical information.

In this work, we apply deep learning to images observed by geostationary satellites paired with vertical cloud profiles to extrapolate the vertical profiles beyond the observed tracks. Specifically, we use 11-channel imagery from the MSG/SEVIRI instrument, colocated with CPR vertical profiles. First, we pre-train models using self-supervised learning methods, specifically (geospatially-aware) Masked Autoencoders, applied to MSG/SEVIRI data from 2010. The pre-trained models are then fine-tuned for the 3D cloud reconstruction task using paired image-profile data. As only a small fraction of images overlap with CloudSat observations, the pre-training step enables us to exploit the full information contained in the MSG/SEVIRI images. We find that pre-training consistently improves reconstruction performance, particularly in complex regions such as the inter-tropical convergence zone. Notably, geospatially-aware pre-trained models incorporating time and coordinate encodings outperform both randomly initialized networks and simpler U-Net architectures, leading to improved reconstruction results compared to previous work.

In the future, we plan to extend this method to longer time periods and apply it to ESA’s EarthCARE data, once available, to further improve 3D reconstructions and enable the development of long-term 3D cloud products.

How to cite: Girtsou, S., Freischem, L., Bintsi, K.-M., Castiglione, G., Diaz Salas-Porras, E., Eisinger, M., Johnson, E., Jones, W., Jungbluth, A., and Massant, J.: Reconstructing 3D cloud fields from multispectral satellite images using deep learning, EGU General Assembly 2025, Vienna, Austria, 27 Apr–2 May 2025, EGU25-16891, https://doi.org/10.5194/egusphere-egu25-16891, 2025.

15:15–15:25
|
EGU25-8714
|
ECS
|
On-site presentation
Mareike Dorozynski, Franz Rottensteiner, Thorsten Dahms, and Michael Hovenbitzer

Analyzing the evolution of landscapes requires determining not only the current state of the Earth’s surface but also past states. Sources of information on historic land cover are historic remote sensing imagery and scanned historic topographic maps. To make the contained information explicitly available for subsequent computer-aided spatio-temporal analysis, classification techniques can be exploited. Against this background, multi-modal land cover classification from maps and aerial orthoimages is developed in the context of the Gauss Center (Gauss Centre, 2025), aiming to benefit from both the textural and geometrical details contained in aerial images and the small intra-class variability in topographic maps.

The proposed deep learning-based classifier is a variant of a UPerNet (Xiao et al., 2018) with four down-sampling stages and takes aerial orthoimagery and topographic maps of the same epoch as input. Each input modality is processed by an individual encoder, either based on convolutions, e.g. a ResNet (He et al., 2016), or on attention mechanisms, e.g. a Swin Transformer (Liu et al., 2022). This results in uni-modal map features and uni-modal aerial image features at four levels of detail. As the aerial images provide finer details about the texture and boundaries of the land cover objects, the aerial features of the first three stages are directly presented to the decoder, while the highest-level aerial image features are fused with those of the topographic maps in a mid-level fusion. To focus on the most relevant features of the two modalities in both the spatial and feature dimensions, locally and globally, features are weighted by attention weights that are learned following the strategy in (Song et al., 2022). The lower-level aerial features and the high-level multi-modal features are presented to the decoder to predict the multi-modal land cover.

Experiments are conducted on two multi-modal datasets: one for binary building classification and one for multi-class vegetation classification. Both datasets consist of pixel-aligned aerial orthoimages, topographic maps and reference data at a ground sampling distance of 1 m. For all experiments, weights obtained in a pre-training on ImageNet (Russakovsky et al., 2015) are selected for the two encoder branches, while all remaining network weights are randomly initialized based on variance scaling (He et al., 2015). Training uses the ADAM optimizer (Kingma & Ba, 2015) with standard parameters and a learning rate of 10⁻² until the validation F1-score does not improve for 30 epochs. For both datasets, multi-modal predictions are compared to uni-modal predictions. Furthermore, attention-based feature extraction is compared to the one based on convolutions. The achieved mean F1-scores are highest for the multi-modal variants of the classifier: convolutions perform best on the building dataset with a score of 90.1% (multi-modal, attention: 86.9%; aerial, convolution: 89.2%; map, convolution: 84.6%), while attention is preferable for vegetation classification, resulting in a mean F1-score of 83.0% (multi-modal, convolution: 82.2%; aerial, attention: 82.1%; map, attention: 54.0%).

How to cite: Dorozynski, M., Rottensteiner, F., Dahms, T., and Hovenbitzer, M.: Leveraging multi-modal classification of historical aerial images and topographic maps to derive past land cover, EGU General Assembly 2025, Vienna, Austria, 27 Apr–2 May 2025, EGU25-8714, https://doi.org/10.5194/egusphere-egu25-8714, 2025.
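The mean F1-scores reported above are the macro average of per-class F1-scores. A minimal sketch with toy labels (not the study's evaluation code):

```python
# Sketch: per-class F1 and macro-averaged (mean) F1 over reference labels.

def mean_f1(y_true, y_pred):
    """Macro-averaged F1 over the classes present in the reference labels."""
    scores = []
    for c in sorted(set(y_true)):
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == c and p == c)
        fp = sum(1 for t, p in zip(y_true, y_pred) if t != c and p == c)
        fn = sum(1 for t, p in zip(y_true, y_pred) if t == c and p != c)
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        f1 = (2 * precision * recall / (precision + recall)
              if precision + recall else 0.0)
        scores.append(f1)
    return sum(scores) / len(scores)

y_true = ["building", "building", "other", "other"]
y_pred = ["building", "other", "other", "other"]
print(round(mean_f1(y_true, y_pred), 3))  # 0.733
```

Macro averaging weights every class equally, so rare classes (like the map-only vegetation case above) can pull the mean down sharply.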

15:25–15:35
|
EGU25-12681
|
ECS
|
On-site presentation
Ayushi Sharma, Daniel Lusk, Johanna Trost, and Teja Kattenborn

As primary producers in the Earth system, plants drive global matter and energy fluxes. Understanding the global distribution of plant functional traits and their biodiversity is, therefore, critical for understanding ecosystem behavior and Earth system dynamics in the face of climate and global change. However, we lack observations for various plant functional traits, such as plant height, leaf size, and nitrogen content, at a global scale.

These data gaps could be addressed through citizen science projects, where thousands of individuals have already recorded millions of plant photographs for species identification purposes. While these photographs do not include direct information about plant traits, trait data for thousands of plant species can be accessed from scientific databases. By linking these two data sources—crowd-sourced plant photographs and trait information from scientific databases—through plant species, we can supervise computer vision models to infer plant traits from plant images. The principle of "form follows function" suggests that a plant's appearance can provide valuable insights into its functional properties.

To assess the potential of citizen science data for plant trait estimation, we propose testing the feasibility of using weak and noisy labels for effective trait prediction. Considering that different plant traits are not independent of each other, we leverage multi-trait learning. Additionally, our approach incorporates plant images along with ancillary environmental data, such as soil conditions and Earth observation satellite data, to provide crucial context on factors like climate or land surface properties.

To fairly evaluate model performance, we curate a clean dataset spanning diverse geographic regions, as well as taxonomic and phylogenetic groups. We conduct a comprehensive study on the resilience of trained models across these distribution shifts. Furthermore, we assess which traits can be effectively learned from noisy labels and explore the extent of trait transferability under different conditions.

Our findings indicate that models trained on noisy data can, to a notable extent, predict a series of plant traits, including plant height, leaf area, and specific leaf area. This approach provides an efficient, scalable, and non-destructive method for estimating important plant functional traits. It could lay the groundwork for large-scale biodiversity monitoring and ecosystem assessment, with the potential to revolutionize how we track the functional properties of ecosystems at a global scale.

How to cite: Sharma, A., Lusk, D., Trost, J., and Kattenborn, T.: PlantTraitNet: A Multi-Modal, Multi-Task Approach to Learning Global Plant Trait Patterns Using Citizen Science Data and Noisy Labels, EGU General Assembly 2025, Vienna, Austria, 27 Apr–2 May 2025, EGU25-12681, https://doi.org/10.5194/egusphere-egu25-12681, 2025.

15:35–15:45
|
EGU25-4523
|
ECS
|
Highlight
|
On-site presentation
Guillaume Astruc, Nicolas Gonthier, Clément Mallet, and Loic Landrieu

Learning rich and robust representations of Earth Observation (EO) data is critical for effective and accessible geoanalytics. While the ever-growing volume of EO data suggests high potential for self-supervised learning, most approaches are limited to fixed scales, resolutions, or modalities—thus failing to generalize beyond their original sensor configurations. To address these shortcomings, we introduce AnySat, a novel multimodal framework capable of self-supervised training on multiple, diverse EO datasets simultaneously.

AnySat’s design centers on two key innovations. First, we propose a Joint Embedding Predictive Architecture (JEPA) adapted for multimodal EO. Unlike pixel-level reconstruction methods, JEPA operates in latent space—making it inherently more resilient to cloud cover, time-of-day shifts, and varying acquisition angles. Second, scale-adaptive spatial encoders allow a single network to handle variable spatial and temporal resolutions. Notably, more than 75% of AnySat’s 100M parameters are shared across all supported modalities, scales, and resolutions, enabling the model to fully exploit diverse training corpora—a fundamental requirement for developing a true EO foundation model.

To train AnySat, we compile GeoPlex, a collection of five multimodal datasets (PASTIS-HD, TreeSatAI-TS, PLANTED, FLAIR, and S2NAIP), aiming for diversity: 11 distinct sensors including radar and optical modalities, 0.2–250 m resolution, single-image and time series, and 0.3–2600 ha per input sample. Thanks to its versatility, a single AnySat model can learn powerful representations by training on all five datasets simultaneously. We use only cross-modal alignment as a source of self-supervision and do not require labels for pretraining.

We fine-tune and evaluate our model on the datasets of GeoPlex, as well as four external datasets to evaluate generalization.  We report state-of-the-art results on seven downstream tasks, including land cover mapping, crop-type classification, tree-species identification, deforestation detection, and disaster mapping. Notably, AnySat yields significant performance gains across multiple benchmarks, such as +2.8 mIoU on PASTIS-HD, +3.6 mIoU on SICKLE, +11.0 accuracy on TimeSen2Crop, and +10.2 IoU on BraDD-S1TS.

A major benefit of AnySat is its high performance when performing linear probing with fixed representations—even for semantic segmentation tasks. This combination of versatility, generalizability, and ease of use positions AnySat as a valuable tool for practitioners facing diverse sensor types, specialized data distributions, and limited annotations.

How to cite: Astruc, G., Gonthier, N., Mallet, C., and Landrieu, L.: AnySat: a Multi-Resolution/Modality/Scale Earth Observation Model, EGU General Assembly 2025, Vienna, Austria, 27 Apr–2 May 2025, EGU25-4523, https://doi.org/10.5194/egusphere-egu25-4523, 2025.

Posters on site: Fri, 2 May, 10:45–12:30 | Hall X4

The posters scheduled for on-site presentation are only visible in the poster hall in Vienna. If authors uploaded their presentation files, these files are linked from the abstracts below.
Display time: Fri, 2 May, 08:30–12:30
Multi-Resolution Approaches
X4.128
|
EGU25-10406
|
ECS
Charly Zimmer, Anja Neumann, Miguel Mahecha, and Josefine Umlauft

Many applications in Earth system sciences require continuous, gap-free data sets. However, remote sensing data in particular are plagued by gaps due to clouds, incomplete coverage, or low-quality flags. Gap-filling in remote sensing data often requires model architectures that are tailored specifically to underlying dataset characteristics such as scale, resolution, or range of values. This limits the transferability to other gap-filling scenarios. Training these models is further hindered by the lack of adequate training samples, as they must be gathered from gap-afflicted data themselves. In this work, we present a spatiotemporal, univariate and multiscale gap-filling method that is independent of any specific dataset. A modular implementation allows for the customization of system parameters, so that the method can be adjusted and applied to various datasets, even outside the Earth Science domain. By employing a patch-wise gap-filling approach, introducing masked loss functions, and providing effective methods for synthetic gap generation, we are able to leverage gap-afflicted datasets and gather large amounts of training samples from them. To demonstrate the flexibility of the system, we perform gap-filling on multiple climatic variables from Earth System Data Cubes (ESDC) (Mahecha et al. 2020) using a 3D CNN architecture, making this the first global-scale gap-filling solution on ESDC. By capturing both spatial and temporal relations, the model is able to generate predictions that are coherent on large scales and across patches, thus demonstrating the potential of the patch-wise gap-filling framework and the use of 3D CNN architectures for spatiotemporal gap-filling tasks.
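The synthetic-gap and masked-loss ideas in the abstract can be sketched as follows. This is not the authors' implementation: the patch shape, gap fraction, and the trivial mean-fill "model" are placeholders standing in for the 3D CNN.

```python
import numpy as np

def make_synthetic_gaps(patch, gap_fraction=0.3, rng=None):
    """Mask out random pixels of a complete patch, yielding a training
    pair: a corrupted input and the original patch as target."""
    rng = rng or np.random.default_rng()
    mask = rng.random(patch.shape) < gap_fraction  # True = artificial gap
    corrupted = patch.copy()
    corrupted[mask] = np.nan
    return corrupted, mask

def masked_mse(pred, target, valid):
    """MSE computed only over pixels where the target is valid, so real
    (non-synthetic) gaps in the training data never enter the loss."""
    return np.mean((pred[valid] - target[valid]) ** 2)

rng = np.random.default_rng(42)
patch = rng.random((16, 16))                  # a gap-free slice of a data cube
corrupted, gap_mask = make_synthetic_gaps(patch, 0.3, rng)
# A trivial stand-in "model": fill gaps with the mean of observed pixels
pred = np.where(np.isnan(corrupted), np.nanmean(corrupted), corrupted)
loss = masked_mse(pred, patch, valid=~np.isnan(patch))
```

This construction is what lets gap-afflicted archives serve as their own training data: synthetic gaps provide targets, while the validity mask excludes genuinely missing pixels.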

How to cite: Zimmer, C., Neumann, A., Mahecha, M., and Umlauft, J.: PATCH-FILL: Multiscale and Univariate Gap-Filling in Remote Sensing Data, EGU General Assembly 2025, Vienna, Austria, 27 Apr–2 May 2025, EGU25-10406, https://doi.org/10.5194/egusphere-egu25-10406, 2025.

X4.129
|
EGU25-46
Guillermo E. Ponce-Campos, Philip Heilman, Cynthia L. Norton, Shang Gao, Michael A. Crimmins, and Mitchel P. McClaran

Integrating fine-scale measurements with broad-scale monitoring is a challenge for environmental monitoring, but it is a critical advancement in the face of increasing climate variability. We addressed this challenge by using fine-scale measures from Unoccupied Aerial Systems (UAS) to train machine learning algorithms applied to broad-scale satellite imagery. We applied this integration to detect how the spatial patchiness of bare ground varies over five years across a 100 km² semi-arid landscape in southern Arizona, USA. We used the Largest Patch Index (LPI) as the measure of spatial patchiness of bare ground. Our findings reveal three key advances in monitoring spatial patchiness over time and across a large landscape. First, the UAS-trained satellite estimates of LPI effectively represented the expected bare ground response to extreme climate events: LPI increased during severe drought (-2.47 Standardized Precipitation-Evapotranspiration Index (SPEI)) and decreased during exceptionally wet periods (+1.95 SPEI). Second, the estimates of LPI were consistently 30–60% greater at lower and drier elevations, validating the ability to represent known ecological gradients. Third, and most notably, we confirmed that LPI is a scale-sensitive measure that differs between 3-m and 30-m grids, and that the magnitude of the differences is inversely related to the density of data in the satellite imagery. LPI was greatest using the 30-m grid Landsat 8 data with a density of 0.02 B/m² and least when using the 3-m grid PlanetScope data with a density of 0.9 B/m². But we found intermediate LPI values when resampling PlanetScope to a 30-m grid while maintaining the greater data density. This previously unrecognized role of data density enriches the understanding of scale effects in landscape pattern analysis.
In the end, we demonstrated a practical solution for integrating fine-scale UAS and broad-scale satellite observations via machine learning to support broad-scale environmental monitoring.
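For readers unfamiliar with the metric, the Largest Patch Index can be sketched on a toy grid: the area of the biggest connected patch of the target class as a percentage of the landscape. This is a simplified, 4-neighbour form of the FRAGSTATS definition; the grid and class coding below are illustrative, not the study's data.

```python
def largest_patch_index(grid, target=1):
    """LPI: area of the largest connected patch of `target` cells as a
    percentage of total landscape area (4-neighbour connectivity)."""
    rows, cols = len(grid), len(grid[0])
    seen = [[False] * cols for _ in range(rows)]
    best = 0
    for i in range(rows):
        for j in range(cols):
            if grid[i][j] == target and not seen[i][j]:
                # Flood-fill one patch and measure its size
                stack, size = [(i, j)], 0
                seen[i][j] = True
                while stack:
                    r, c = stack.pop()
                    size += 1
                    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        nr, nc = r + dr, c + dc
                        if (0 <= nr < rows and 0 <= nc < cols
                                and grid[nr][nc] == target and not seen[nr][nc]):
                            seen[nr][nc] = True
                            stack.append((nr, nc))
                best = max(best, size)
    return 100.0 * best / (rows * cols)

# 1 = bare ground, 0 = vegetated; the largest bare patch spans 3 of 9 cells
grid = [[1, 1, 0],
        [0, 1, 0],
        [1, 0, 0]]
lpi = largest_patch_index(grid)
```

Because patch connectivity depends on cell size, the same landscape yields different LPI values on 3-m and 30-m grids, which is the scale sensitivity the abstract reports.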

How to cite: Ponce-Campos, G. E., Heilman, P., Norton, C. L., Gao, S., Crimmins, M. A., and McClaran, M. P.: Integrating Unoccupied Aerial Systems and Satellite Data to Map the Patchiness of Bare Ground at the Landscape Scale, EGU General Assembly 2025, Vienna, Austria, 27 Apr–2 May 2025, EGU25-46, https://doi.org/10.5194/egusphere-egu25-46, 2025.

Multi-Temporal Approaches
X4.130
|
EGU25-5955
|
ECS
Muhammad Afif Fauzan, Holger Virro, and Evelyn Uuemaa

Agricultural landscape features are small fragments of natural or semi-natural vegetation in agricultural land which, despite their relatively small size, are essential in providing various ecosystem services and supporting biodiversity in the agricultural landscape. The Common Agricultural Policy (CAP) includes landscape features in its payment instruments, allowing farmers to receive incentives for preserving landscape features on their land. However, effectively managing and monitoring the status of landscape features requires mapping them, which is often done manually. Deep learning methods have shown promise in automatically segmenting particular objects in remote sensing images, but they require large amounts of labelled data to train the model, which is time-consuming to prepare manually.

The aim of our study was to develop a deep learning methodology to automate the detection of landscape features in agricultural lands. We leveraged a publicly available dataset of landscape feature polygons created manually by farmers in Estonia to produce labelled training data. To ensure that all landscape features in the database still actually exist, we filtered the dataset by applying a threshold to the difference in Normalized Difference Vegetation Index (NDVI) between each field island and its surrounding arable land, computed from three Sentinel-2 seasonal composites. Additionally, we checked the digitization quality of field island polygons by comparing them to orthophotos and the digital elevation model. The labelled training data were used to train a U-Net deep learning model to detect landscape features from orthophotos. We also experimented with adding elevation data as input to improve detection accuracy. We used the F1-score and Intersection over Union (IoU) to evaluate model performance. The results showed that the model is reliable for automated landscape feature detection and can be adopted by relevant stakeholders to automate their workflow in delineating landscape features for incentive schemes that preserve small landscape features.
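The NDVI-based filtering step can be sketched as follows: a field-island polygon is kept only if its NDVI differs sufficiently from the surrounding arable land in at least one seasonal composite. The threshold, season names, and band values are illustrative assumptions, not the study's actual parameters.

```python
def ndvi(nir, red):
    """Normalized Difference Vegetation Index from NIR and red reflectance."""
    return (nir - red) / (nir + red + 1e-9)

def keep_existing_islands(islands, threshold=0.1):
    """Keep a field island only if |NDVI(island) - NDVI(surroundings)|
    exceeds `threshold` in at least one seasonal composite."""
    kept = []
    for isl in islands:
        diffs = [abs(ndvi(*isl["island"][s]) - ndvi(*isl["surround"][s]))
                 for s in ("spring", "summer", "autumn")]
        if max(diffs) > threshold:
            kept.append(isl["id"])
    return kept

# Island 1 is vegetated against bare surroundings; island 2 matches its
# field in every season and is assumed to no longer exist.
# Band values are (nir, red) reflectance pairs, purely illustrative.
islands = [
    {"id": 1,
     "island":   {"spring": (0.6, 0.1), "summer": (0.7, 0.1), "autumn": (0.5, 0.2)},
     "surround": {"spring": (0.3, 0.3), "summer": (0.3, 0.3), "autumn": (0.3, 0.3)}},
    {"id": 2,
     "island":   {"spring": (0.3, 0.3), "summer": (0.3, 0.3), "autumn": (0.3, 0.3)},
     "surround": {"spring": (0.3, 0.3), "summer": (0.3, 0.3), "autumn": (0.3, 0.3)}},
]
kept = keep_existing_islands(islands)
```

Using several seasonal composites guards against a single acquisition in which a real island happens to match its surroundings.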

How to cite: Fauzan, M. A., Virro, H., and Uuemaa, E.: Deep learning for detecting landscape features in agricultural lands, EGU General Assembly 2025, Vienna, Austria, 27 Apr–2 May 2025, EGU25-5955, https://doi.org/10.5194/egusphere-egu25-5955, 2025.

Multi-Sensor Approaches
X4.131
|
EGU25-12622
|
ECS
Anna Jungbluth, Lilli Freischem, J. Emmanuel Johnson, Robert Jarolim, Christoph Schirninger, and Anne Spalding

Climate change is fundamentally altering Earth's natural systems, from shifting weather patterns and sea level rise to increasingly frequent extreme events. Understanding and responding to these changes demands continuous, reliable observations of our planet. While Earth-observing satellites have collected terabytes of data in recent decades with ever-increasing temporal, spatial, and spectral resolution, synthesizing these diverse data sources into homogeneous, long-term records remains a significant challenge for climate monitoring and situational awareness. 

We address this challenge with Instrument-to-Instrument Translation (ITI), an artificial intelligence framework that learns to translate between different satellite imaging domains. Building on unpaired image-to-image translation techniques, ITI overcomes a fundamental limitation in satellite data integration: it does not require the instruments to observe the same location at the same time. This flexibility enables ITI to perform instrument intercalibration, enhance image quality, mitigate sensor degradation, and achieve super-resolution asynchronously across multiple wavelength bands to enable multi-vantage-point observations.

Building on ITI's proven success in harmonizing solar observations, we extend the framework to address the unique challenges of Earth observation and atmospheric monitoring. More specifically, we demonstrate ITI’s capability by harmonizing observations from two geostationary weather satellites with complementary coverage: the Meteosat Second Generation (MSG) monitoring Europe and Africa with 11 spectral bands, and the Geostationary Operational Environmental Satellite (GOES-16) observing the Americas with 16 spectral bands. For this, we developed rs_tools, a comprehensive software package that streamlines the creation of machine learning-ready datasets, and adapted the ITI pipeline to handle the specific complexities of Earth observation data, e.g. missing observations of visible bands at night. 

Our results reveal good agreement between the ITI-translated imagery and actual high-quality observations, especially for infrared spectral channels. We conduct a multi-faceted performance analysis using image quality metrics (PSNR, histogram distributions, power spectra) across varying spatial scales, spectral bands, and geographic features (land/ocean). The unique overlap in MSG and GOES-16 coverage over the Atlantic Ocean enables additional validation through paired metrics (MSE, Pearson correlation, SSIM) after projecting both observing systems into a common reference frame.
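The paired validation over the Atlantic overlap region can be sketched with three of the metrics named above (MSE, PSNR, and Pearson correlation; SSIM is omitted here to keep the sketch dependency-free). The arrays are synthetic and assumed already reprojected to a common grid, which the abstract describes as a separate step.

```python
import numpy as np

def paired_metrics(translated, reference, data_range=1.0):
    """Pixel-wise comparison metrics for two co-registered images,
    e.g. ITI-translated imagery versus actual observations."""
    t, r = translated.ravel(), reference.ravel()
    mse = np.mean((t - r) ** 2)
    psnr = 10 * np.log10(data_range ** 2 / mse) if mse > 0 else np.inf
    pearson = np.corrcoef(t, r)[0, 1]
    return {"mse": mse, "psnr": psnr, "pearson": pearson}

rng = np.random.default_rng(1)
ref = rng.random((64, 64))                                   # "observed" image
trans = np.clip(ref + rng.normal(0, 0.05, ref.shape), 0, 1)  # mildly noisy "translation"
m = paired_metrics(trans, ref)
```

Such paired metrics are only meaningful where both instruments see the same scene, which is why the MSG/GOES-16 overlap is the natural validation region.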

The ITI tool is available as open-source software for the research community, and can easily be adapted to novel datasets and research applications. This research outcome is supported by NASA award 22-MDRAIT22-0018 (No. 80NSSC23K1045) and managed by Trillium Technologies, Inc.

How to cite: Jungbluth, A., Freischem, L., Johnson, J. E., Jarolim, R., Schirninger, C., and Spalding, A.: Instrument-to-Instrument translation: An AI tool to intercalibrate and homogenize observations from Earth-observing satellites, EGU General Assembly 2025, Vienna, Austria, 27 Apr–2 May 2025, EGU25-12622, https://doi.org/10.5194/egusphere-egu25-12622, 2025.

Multi-Modal Approaches
X4.132
|
EGU25-6757
|
ECS
Ziming Li and Bin Chen

Detailed and up-to-date information on urban land use plays a key role in understanding the urban environment, evaluating urban planning, and promoting the development of sustainable cities and communities. Recent years have witnessed many efforts dedicated to developing effective land use classification methods and generating products at different scales. Nevertheless, an accurate and fine-grained delineation of parcel-level urban land use for the whole of China is still lacking. In this study, we developed a novel urban land use mapping framework to identify accurate land use categories by integrating a multimodal deep learning model and multisource geospatial data. With complete and precise land parcels generated by road networks from two public sources as minimum classification units, we produced a nationwide Urban Essential Land Use Categories (EULUC) map covering all cities in China for 2022, named EULUC 2.0. The mapping results show that residential, industrial, and park and greenspace are the dominant land use categories across China, collectively accounting for nearly 80% of the urban area. The spatially explicit information provided by EULUC 2.0 can reveal distinct spatial patterns of the heterogeneous land use landscape in each city. The evaluation found that the overall accuracies of Level-I and Level-II classification reach 72% and 79%, respectively, with substantial improvements across all categories over the previous product. The advancements can be mainly attributed to the effectiveness of deep learning for multi-modal input, especially the graph modeling of point-of-interest (POI) data. The free-access product and insights in this study can potentially help researchers and practitioners investigate and address pressing urban challenges in the process of urbanization.

How to cite: Li, Z. and Chen, B.:  Mapping Nationwide Essential Urban Land Use Categories by Integrating Multimodal Deep Learning and Multi-source Geospatial Data, EGU General Assembly 2025, Vienna, Austria, 27 Apr–2 May 2025, EGU25-6757, https://doi.org/10.5194/egusphere-egu25-6757, 2025.

X4.133
|
EGU25-6248
|
ECS
Kourosh Ahmadi and Amir Naghibi

Groundwater quality is a critical concern in agricultural regions, where nitrate contamination poses environmental and health risks, and ammonium levels play a pivotal role in nitrogen cycling processes. This study introduces a multi-task learning (MTL) framework designed to jointly predict nitrate and ammonium levels in groundwater, addressing the interdependencies between these variables. Conducted in Odense, Denmark, the study leverages spatial and temporal data, including hydrological, environmental, and anthropogenic variables, alongside land-use maps. The MTL approach outperforms traditional single-task models by capturing shared environmental and hydrological variables. By sharing information across tasks, the model identifies overlapping spatial patterns, enabling robust predictions even in data-scarce scenarios. Additionally, the shared layers of the MTL model reduce overfitting, improving generalizability and providing deeper insights into the drivers of groundwater quality. The dataset used in this study includes geospatial nitrate and ammonium measurements, which were modeled alongside predictor variables such as land use, soil characteristics, and topographical variables. Model evaluation metrics demonstrated the superiority of the MTL approach, with higher accuracy and R² and a lower root mean squared error (RMSE) than separate models. The results highlight the potential of MTL to improve predictions and foster integrated groundwater management strategies. This study underscores the importance of advanced machine learning techniques in environmental modeling, showcasing a novel approach to jointly predicting interrelated water quality variables.
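The shared-layer idea behind such an MTL model can be sketched as hard parameter sharing: one hidden representation feeds two task-specific regression heads, so the joint loss pushes gradients from both targets through the same layer. Dimensions, initialization, and the forward-only setup below are illustrative, not the study's architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(0, x)

class SharedMTL:
    """Hard parameter sharing: one shared hidden layer feeding two
    task-specific regression heads (nitrate and ammonium)."""
    def __init__(self, n_features, hidden=16):
        self.W_shared = rng.normal(0, 0.1, (n_features, hidden))
        self.W_nitrate = rng.normal(0, 0.1, (hidden, 1))
        self.W_ammonium = rng.normal(0, 0.1, (hidden, 1))

    def forward(self, X):
        h = relu(X @ self.W_shared)        # representation shared by both tasks
        return h @ self.W_nitrate, h @ self.W_ammonium

    def loss(self, X, y_no3, y_nh4):
        p_no3, p_nh4 = self.forward(X)
        # Joint objective: the shared layer receives error signal from both tasks
        return (np.mean((p_no3.ravel() - y_no3) ** 2)
                + np.mean((p_nh4.ravel() - y_nh4) ** 2))

X = rng.random((32, 8))                    # predictors: land use, soil, topography, ...
y_no3, y_nh4 = rng.random(32), rng.random(32)
model = SharedMTL(n_features=8)
l = model.loss(X, y_no3, y_nh4)
```

Because `W_shared` serves both heads, a pattern learned from dense nitrate data also benefits ammonium prediction where samples are sparse, which is the data-scarcity argument the abstract makes.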

How to cite: Ahmadi, K. and Naghibi, A.: Integrated Multi-Task Learning Framework for Groundwater Nitrate and Ammonium Prediction in Odense, Denmark, EGU General Assembly 2025, Vienna, Austria, 27 Apr–2 May 2025, EGU25-6248, https://doi.org/10.5194/egusphere-egu25-6248, 2025.

X4.134
|
EGU25-14877
|
ECS
Qianbao Hou, Ce Hou, Fan Zhang, and Qihao Weng

Urban dwellers today frequently document their daily experiences through digital photography, sharing these moments across popular social networking platforms. While these platforms host vast collections of images, the geographical data associated with many photos is often imprecise or missing entirely. Accurately determining the geographic coordinates of user-submitted photographs adds substantial value to these visual records, offering practical applications in city development planning, architectural studies, and public safety monitoring. Despite its potential benefits, the process of precise image geo-localization remains technically complex and challenging. This study presents an innovative approach to geo-localize crowd-sourced images in urban settings, addressing the limitations of traditional methods. By combining street-view panoramas and satellite imagery through a novel contrastive learning framework, we significantly improve localization accuracy. Using Hong Kong as a case study, we demonstrate substantial improvements over existing approaches, reducing median and average errors by 77.4% and 63.6%, respectively. Surprisingly, our findings reveal that satellite imagery alone outperforms street-view data in geo-localization tasks, challenging previous assumptions. This research not only advances the field of urban image geo-localization but also provides a valuable multi-source benchmark, paving the way for future innovations in urban sensing, mapping, and analysis across various disciplines.
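The cross-view contrastive idea can be sketched with an InfoNCE-style loss: photo embeddings and satellite-tile embeddings of the same location (matching rows) are pulled together, all other pairings pushed apart. This is an illustrative loss on synthetic embeddings, not the paper's exact formulation or temperature.

```python
import numpy as np

def info_nce(ground_emb, sat_emb, temperature=0.07):
    """Contrastive loss matching each ground-photo embedding to the
    satellite-tile embedding of the same location (row i of both
    arrays is the positive pair)."""
    g = ground_emb / np.linalg.norm(ground_emb, axis=1, keepdims=True)
    s = sat_emb / np.linalg.norm(sat_emb, axis=1, keepdims=True)
    logits = g @ s.T / temperature               # cosine similarities
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))           # maximize matched-pair probability

rng = np.random.default_rng(0)
sat = rng.normal(size=(8, 32))
aligned = sat + rng.normal(0, 0.01, sat.shape)   # photo embeddings near their tiles
shuffled = rng.normal(size=(8, 32))              # unrelated embeddings
loss_good = info_nce(aligned, sat)
loss_bad = info_nce(shuffled, sat)
```

At inference time, a query photo is localized by retrieving the satellite tile whose embedding maximizes this same similarity.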

How to cite: Hou, Q., Hou, C., Zhang, F., and Weng, Q.: Crowd-sourced images geo-localization method based on multi-modal deep learning, EGU General Assembly 2025, Vienna, Austria, 27 Apr–2 May 2025, EGU25-14877, https://doi.org/10.5194/egusphere-egu25-14877, 2025.

X4.135
|
EGU25-19171
|
ECS
Lorenzo Beltrame, Jules Salzinger, Jasmin Lampert, and Phillipp Fanta-Jende

Frequent cloud cover and terrain-induced shadows pose significant challenges for reliable forest monitoring. Traditional monitoring methods, such as ground-based observations and aerial surveys, often suffer from low temporal resolution, making it difficult to track seasonal changes or detect sudden forest anomalies, such as windthrow damage. Earth Observation (EO), particularly Sentinel-2 imagery, offers the potential for high revisit rates and global coverage, but these advantages are diminished by the persistent presence of clouds and shadows, particularly during winter months in mountainous areas. The tasks of forest anomaly detection and windthrow damage assessment particularly benefit from the increased temporal resolution provided by cloud- and shadow-free Sentinel-2 imagery.

The SAFIR project aims to develop a scalable and robust framework for comprehensive forest monitoring, with a focus on resilience in complex terrains, including mountainous regions. Within the project and to fully leverage the advantages of EO, it is crucial to implement effective preprocessing techniques to address cloud and shadow disturbances. These challenges can be overcome by employing a method that predicts missing image information by reconstructing the albedo. This process involves integrating spatial, spectral, temporal, and physical priors into the image restoration, allowing for the extraction of meaningful information from partially obscured satellite measurements. 

This contribution introduces a concept for a modular deep learning framework designed to process cloudy or shadowed satellite images and predict the corresponding albedo values. The framework consists of two core modules: a shadow remover and a cloud remover. Both modules undergo pretraining on large cloud-free satellite datasets to build robust spatiotemporal embeddings. They are subsequently fine-tuned using physics-based methods to improve accuracy in restoring obscured and clouded image areas. Unlike traditional approaches that prioritize visual clarity, this framework is optimized for machine learning. The objective is to create enhanced data products for downstream forest monitoring applications. The effectiveness of this approach is validated by comparing the results with non-enhanced Sentinel-2 data, making the downstream tasks a methodological validation step.

Validation is also conducted using multimodal data, integrating satellite imagery with high-resolution Unmanned Aerial Vehicle (UAV) data. The planned UAV campaigns, conducted in Portugal, Germany and Austria, capture low-altitude imagery at 120 m. Hence, they provide ground-truth validation by revealing surface conditions beneath cloud cover. This validation step supports the fine-tuning of the image restoration models and ensures that restored satellite images align closely with real-world conditions.  

By leveraging heterogeneous data sources, including high-quality in situ UAV data, this contribution introduces a scalable concept for high-frequency satellite monitoring. The framework aims to go beyond experimental setups and achieve operational deployment in the GTIF initiative by ESA, making EO more efficient. 

How to cite: Beltrame, L., Salzinger, J., Lampert, J., and Fanta-Jende, P.: Towards a Scalable Deep Learning Framework for Forest Monitoring under Challenging Conditions with Multimodal Data, EGU General Assembly 2025, Vienna, Austria, 27 Apr–2 May 2025, EGU25-19171, https://doi.org/10.5194/egusphere-egu25-19171, 2025.

Posters virtual: Tue, 29 Apr, 14:00–15:45 | vPoster spot 4

The posters scheduled for virtual presentation are visible in Gather.Town. Attendees are asked to meet the authors during the scheduled attendance time for live video chats. If authors uploaded their presentation files, these files are also linked from the abstracts below. The button to access Gather.Town appears just before the time block starts. Onsite attendees can also visit the virtual poster sessions at the vPoster spots (equal to PICO spots).
Display time: Tue, 29 Apr, 08:30–18:00
Chairpersons: Filippo Accomando, Andrea Vitale

EGU25-7785 | ECS | Posters virtual | VPS19

Ecosystem Services Trade-Offs in the Chaohu Lake Basin Based on Land-Use Scenario Simulations 

Aibo Jin
Tue, 29 Apr, 14:00–15:45 (CEST) | vP4.11

Amid global environmental degradation, understanding the spatiotemporal dynamics and trade-offs of ecosystem services (ESs) under varying land-use scenarios is critical for advancing the sustainable development of social–ecological systems. This study analyzed the Chaohu Lake Basin (CLB), focusing on four scenarios: natural development (ND), economic priority (ED), ecological protection (EP), and sustainable development (SD). Using the PLUS model and multi-objective genetic algorithm (MOGA), land-use changes for 2030 were simulated, and their effects on ESs were assessed quantitatively and qualitatively. The ND scenario led to significant declines in cropland (3.73%) and forest areas (0.18%), primarily due to construction land expansion. The EP scenario curbed construction land growth, promoted ecosystem recovery, and slightly increased cropland by 0.05%. The SD scenario achieved a balance between ecological and economic goals, maintaining relative stability in ES provision. Between 2010 and 2020, construction land expansion, mainly concentrated in central Hefei City, led to a marked decline in habitat quality (HQ) and landscape aesthetics (LA), whereas water yield (WY) and soil retention (SR) improved. K-means clustering analysis identified seven ecosystem service bundles (ESBs), revealing significant spatial heterogeneity. Bundles 4 through 7, concentrated in mountainous and water regions, offered high biodiversity maintenance and ecological regulation. In contrast, critical ES areas in the ND and ED scenarios faced significant encroachment, resulting in diminished ecological functions. The SD scenario effectively mitigated these impacts, maintaining stable ES provision and ESB distribution. This study highlights the profound effects of different land-use scenarios on ESs, offering insights into sustainable planning and ecological restoration strategies in the CLB and comparable regions.

How to cite: Jin, A.: Ecosystem Services Trade-Offs in the Chaohu Lake Basin Based on Land-Use Scenario Simulations, EGU General Assembly 2025, Vienna, Austria, 27 Apr–2 May 2025, EGU25-7785, https://doi.org/10.5194/egusphere-egu25-7785, 2025.

EGU25-16549 | Posters virtual | VPS19

Monitoring Long-Term Land Cover Transformations in the Danube Delta using Landsat Satellite Imagery 

Albert Scrieciu and Andrei Toma
Tue, 29 Apr, 14:00–15:45 (CEST) | vP4.12

The STARS4Water project addresses the critical need to understand the impacts of climate change and anthropogenic activities on freshwater availability and ecosystem resilience at the river basin scale. By developing innovative data services and models tailored to stakeholder needs, the project will improve decision-making processes for sustainable water resource management. A distinctive feature of STARS4Water is its focus on co-creating solutions with local stakeholders using a living lab approach, ensuring that newly developed tools remain relevant and usable beyond the life of the project.

This extension of the original project—funded with a special grant from Unitatea Executivă pentru Finanțarea Învățământului Superior, a Cercetării, Dezvoltării și Inovării (UEFISCDI) from Romania—focuses on a detailed change detection analysis to monitor and quantify land cover transformations in the emblematic Danube Delta region. The objective is to assess how environmental and anthropogenic changes have influenced this ecologically significant wetland over several decades. To achieve this, a comprehensive database of multispectral satellite images from the Landsat archive, spanning from 1985 to 2023, will be constructed. The long-term dataset enables a detailed temporal analysis, important for detecting land cover dynamics over time.

The methodology involves several key phases: (1) data collection and preprocessing of Landsat satellite images to correct errors and align imagery for consistent comparative analysis; (2) sampling and training a deep learning model using convolutional neural network (CNN) architectures, to classify various land cover types; (3) performing land cover classification on the processed images using the trained model, followed by accuracy assessment; and (4) conducting a comprehensive change detection analysis to quantify and interpret the observed transformations in land use and land cover.

The results of this analysis will deliver important knowledge on the long-term dynamics of the Danube Delta landscape, highlighting critical changes with implications for biodiversity, water management and ecosystem services. This approach will support adaptive ecosystem management and contribute to the scientific understanding of climate-related and anthropogenic changes in fragile wetland ecosystems.

Acknowledgments

This work was supported by a grant of the Ministry of Research, Innovation and Digitization, CNCS/CCCDI - UEFISCDI, project number PN-IV-P8-8.1-PRE-HE-ORG-2023-0094, within PNCDI IV.

How to cite: Scrieciu, A. and Toma, A.: Monitoring Long-Term Land Cover Transformations in the Danube Delta using Landsat Satellite Imagery, EGU General Assembly 2025, Vienna, Austria, 27 Apr–2 May 2025, EGU25-16549, https://doi.org/10.5194/egusphere-egu25-16549, 2025.