ESSI4.9 | Novel methods and applications of satellite and aerial imagery
Fri, 14:00
EDI Poster session
Novel methods and applications of satellite and aerial imagery
Convener: Ionut Cosmin Sandric (ECS) | Co-conveners: George P. Petropoulos, Marina Vîrghileanu (ECS), Juha Lemmetyinen
Posters on site
| Attendance Fri, 02 May, 14:00–15:45 (CEST) | Display Fri, 02 May, 14:00–18:00
 
Hall X4
Posters virtual
| Attendance Fri, 02 May, 14:00–15:45 (CEST) | Display Fri, 02 May, 08:30–18:00
 
vPoster spot 4
Fri, 14:00

Posters on site: Fri, 2 May, 14:00–15:45 | Hall X4

The posters scheduled for on-site presentation are only visible in the poster hall in Vienna. If authors uploaded their presentation files, these files are linked from the abstracts below.
Display time: Fri, 2 May, 14:00–18:00
Chairperson: Ionut Cosmin Sandric
X4.85 | EGU25-20365 | ECS
Minghui Chang and Shihua Li

Semantic segmentation of cropland is critical for accurately extracting crop distribution from satellite remote sensing (RS) images. However, the dynamic temporal patterns caused by crop rotations and the heterogeneous spatial characteristics of cropland pose significant challenges for high-precision segmentation. To tackle these issues, we propose a novel spatiotemporal feature-enhanced network (STFE) designed specifically for cropland segmentation in remote sensing time-series images (RSTI). The STFE network integrates temporal and spatial features through three key innovations. First, we design an edge-guided spatial attention (EGSA) module to enhance spatial detail extraction, particularly for delineating ambiguous boundaries. Second, a progressive feature enhancement (PFE) strategy captures and fuses multi-scale features progressively across network layers. Third, for temporal feature extraction, we incorporate a differential awareness attention (DAA) module, built on ConvLSTM, to dynamically aggregate temporal information, enabling the model to better capture crop rotation patterns and temporal variations. Experimental results on three benchmark datasets (PASTIS, ZueriCrop, and DENETHOR) demonstrate the superior performance of STFE compared to state-of-the-art methods, with a mean IoU improvement of 3.2% over the best-performing baseline. The model excels particularly in challenging scenarios such as irregular crop shapes and mixed cropping patterns. Its adaptability to complex and evolving agricultural landscapes provides a scalable and reliable solution for supporting sustainable farming practices and informed decision-making.
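The edge-guided attention idea can be illustrated with a toy NumPy sketch, where a hand-crafted Sobel gate stands in for the learned EGSA module (all names and the sigmoid gating here are illustrative simplifications, not the paper's architecture):

```python
import numpy as np

def sobel_edges(img):
    """Approximate edge magnitude with Sobel filters (toy stand-in for a learned edge branch)."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T
    pad = np.pad(img, 1, mode="edge")
    gx = np.zeros_like(img, dtype=float)
    gy = np.zeros_like(img, dtype=float)
    h, w = img.shape
    for i in range(h):
        for j in range(w):
            win = pad[i:i + 3, j:j + 3]
            gx[i, j] = np.sum(win * kx)
            gy[i, j] = np.sum(win * ky)
    return np.hypot(gx, gy)

def edge_guided_attention(features, img):
    """Re-weight a feature map so that boundary pixels receive higher attention."""
    edges = sobel_edges(img)
    attn = 1.0 / (1.0 + np.exp(-edges))   # sigmoid gate: 0.5 in flat areas, near 1 at edges
    return features * attn

# toy example: a sharp vertical field boundary
img = np.zeros((8, 8)); img[:, 4:] = 1.0
feat = np.ones((8, 8))
out = edge_guided_attention(feat, img)    # boundary columns are amplified relative to the interior
```

In the real network the gate would be learned end-to-end; the sketch only shows why an edge prior sharpens responses exactly where field delineation is ambiguous.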

How to cite: Chang, M. and Li, S.: Cropland segmentation leveraging a synergistic edge enhancement and temporal difference-aware network with Sentinel-2 time-series imagery, EGU General Assembly 2025, Vienna, Austria, 27 Apr–2 May 2025, EGU25-20365, https://doi.org/10.5194/egusphere-egu25-20365, 2025.

X4.86 | EGU25-19140 | ECS
Yomna Eid and Edzer Pebesma

Remote sensing analysis is often used to provide supporting information for evidence-informed policy-making. Typically, such analysis presents results as classification maps, such as a land cover classification used to estimate deforestation areas in a region. For such analyses, where aggregated areal values of specific classes are the primary targets, a critical question arises: do the results significantly degrade when lower spatial resolution Earth Observation (EO) products are used instead of higher-resolution ones?

EO products like Dynamic World land use and land cover maps, produced at a high temporal and spatial resolution (5 days and 10 m, respectively), are built on the widely held belief that higher resolutions inherently yield better results. However, with the exponential growth in data volumes and the computational demands of high-resolution workflows, it becomes increasingly important to determine where these resource-intensive approaches provide meaningful advantages, and where they do not, in order to balance computational efficiency with the need for accuracy in remote sensing workflows.

To address this question, we examine two case studies: deforestation in the Cerrado Biome of Brazil, and the imperviousness of sealed surfaces in Germany. Classification maps from each study are systematically downsampled from their native resolutions in steps up to 10 km spatial resolution. Using Ripley's equation [1], numerically approximated with a Gaussian-quadrature approach, we compute standard errors to assess the impact of spatial resolution on classification accuracy.
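The quadrature step can be sketched generically (the specific integrand of Ripley's Eq. 3.4 is not reproduced here; the sketch only shows the Gauss-Legendre approximation of a definite integral, which is the numerical core of the approach):

```python
import numpy as np

def gauss_legendre_integral(f, a, b, n=16):
    """Approximate the integral of f over [a, b] with an n-point Gauss-Legendre rule."""
    nodes, weights = np.polynomial.legendre.leggauss(n)  # nodes/weights on [-1, 1]
    x = 0.5 * (b - a) * nodes + 0.5 * (b + a)            # affine map to [a, b]
    return 0.5 * (b - a) * np.sum(weights * f(x))

# sanity check: the n-point rule is exact for polynomials up to degree 2n - 1
approx = gauss_legendre_integral(lambda x: x**2, 0.0, 1.0, n=4)  # exact value is 1/3
```

Gauss-Legendre is a natural choice here because it reaches high accuracy with few evaluations of the (potentially expensive) spatial-covariance integrand.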

We report our findings on how the aggregated target values derived from lower-resolution data compare to those from higher-resolution inputs. We also seek to identify the resolution thresholds beyond which the final product no longer acceptably represents the phenomena in the selected use cases.

[1] See Eq. 3.4, page 23, in Ripley, B.D. (1981), "Spatial Sampling", Spatial Statistics.

How to cite: Eid, Y. and Pebesma, E.: When is a finer spatial resolution justified in remote sensing analysis?, EGU General Assembly 2025, Vienna, Austria, 27 Apr–2 May 2025, EGU25-19140, https://doi.org/10.5194/egusphere-egu25-19140, 2025.

X4.87 | EGU25-17511
Hye-Won Kim and Sang Cherl Lee

KARI (Korea Aerospace Research Institute) operates three geostationary satellites, including GK-2A (Geo-KOMPSAT-2A), and plays a key role in ensuring their stable operation, contributing to the continuous acquisition of satellite imagery and providing around-the-clock support for satellite operations and monitoring. GK-2A was launched on December 4, 2018, from the Guiana Space Centre in French Guiana. It is part of Korea's next-generation geostationary meteorological satellite program, designed to enhance weather forecasting capabilities, and is equipped with advanced payloads: the Advanced Meteorological Imager (AMI) and the Korea Space Environment Monitor (KSEM). The AMI observes atmospheric conditions in real time, providing high-resolution imagery for weather analysis and forecasting, while the KSEM monitors space weather phenomena, such as solar radiation and geomagnetic storms, which can affect satellite operations and communication systems. The AMI operates across multiple spectral bands, enabling detailed observations of clouds, precipitation, and other atmospheric phenomena. It covers a wide area, including the East Asia region, with a temporal resolution that allows frequent imaging of the Earth's atmosphere. One of the AMI observation modes, Local Area (LA), typically covers the Korean Peninsula; for special observations, however, the AMI can perform LA imaging of any region within its Field of View (FOV), beyond the standard observation area. This flexibility enhances its capacity for targeted monitoring, making it particularly useful for high-priority events, localized weather phenomena, and global disasters requiring rapid observations.
This paper presents an overview of the operational results since the launch of the GK-2A, with a particular focus on special observations conducted using the AMI. The results from the special observation operations during the normal operational period of GK-2A are expected to provide insights into the future direction for the development of special observation operations using the AMI. 

How to cite: Kim, H.-W. and Lee, S. C.: Operational Results of GK-2A AMI Special Observations, EGU General Assembly 2025, Vienna, Austria, 27 Apr–2 May 2025, EGU25-17511, https://doi.org/10.5194/egusphere-egu25-17511, 2025.

X4.88 | EGU25-19832
Yuze Wang, Mariana Belgiu, Aoran Hu, Rong Xiao, and Chao Tao

Dense Satellite Image Time Series (SITS) play an important role in agricultural semantic segmentation. However, in real-world scenarios, cloud contamination and temporary sensor outages can leave significant gaps in SITS, which degrades the performance of models trained under ideal conditions. A common approach is to reconstruct the complete SITS before prediction, with the reconstruction independent of the prediction. This not only accumulates errors from reconstruction into prediction; the detailed rebuilding of the complete SITS may also be redundant for the prediction task. In this paper, we propose a feature reconstruction and prediction joint learning framework. The collaborative optimization of the two tasks encourages the model to efficiently reconstruct, from incomplete SITS, the complete features that benefit prediction. Specifically, we simulate data-missing scenarios with masks. The prediction task on masked data is supervised by labels. Meanwhile, using a model well trained under ideal conditions as a teacher, we take its temporal features extracted from the data before masking as the target of the feature reconstruction task. The gradient flow of the two tasks is shared, enabling mutual supervision: feature reconstruction prevents the model from acquiring incorrect shortcut reasoning during prediction, whereas prediction keeps the reconstructed information reliable and non-redundant. Furthermore, after training with the proposed framework, the model architecture remains unchanged and retains its robustness on complete SITS, which enhances the model's feasibility in practical applications. Experiments were conducted on multiple agricultural semantic segmentation datasets with incomplete SITS, sourced from Sentinel-2 and Planet satellites. We also validate robustness across common model architectures, and visualize intermediate features to explore the mutual influence between the two tasks.
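The joint objective can be sketched in NumPy (a toy illustration: `recon_weight`, the uniform dummy probabilities, and the zero-filled "reconstruction" are hypothetical stand-ins for the paper's learned components):

```python
import numpy as np

rng = np.random.default_rng(0)

# toy SITS features: (timesteps, feature_dim); a mask simulates missing acquisitions
teacher_feats = rng.normal(size=(10, 4))       # features from the teacher trained on complete SITS
mask = rng.random(10) > 0.3                    # True = observed, False = missing (e.g. clouds)
student_feats = np.where(mask[:, None], teacher_feats, 0.0)  # stand-in for reconstructed features

def joint_loss(student, teacher, pred_probs, labels, recon_weight=0.5):
    """Joint objective: prediction cross-entropy + feature-reconstruction MSE.
    recon_weight is a hypothetical balancing hyperparameter."""
    recon = np.mean((student - teacher) ** 2)              # supervise features against the teacher
    ce = -np.mean(np.log(pred_probs[np.arange(len(labels)), labels] + 1e-9))
    return ce + recon_weight * recon                       # one shared gradient flow in practice

pred_probs = np.full((10, 3), 1.0 / 3.0)       # dummy per-timestep class probabilities
labels = rng.integers(0, 3, size=10)
loss = joint_loss(student_feats, teacher_feats, pred_probs, labels)
```

The point of sharing one scalar objective is that gradients from both terms flow through the same encoder, which is what lets the two tasks supervise each other.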

How to cite: Wang, Y., Belgiu, M., Hu, A., Xiao, R., and Tao, C.: A Features Reconstruction and Prediction Joint Learning Framework with Incomplete SITS for Agriculture Semantic Segmentation, EGU General Assembly 2025, Vienna, Austria, 27 Apr–2 May 2025, EGU25-19832, https://doi.org/10.5194/egusphere-egu25-19832, 2025.

X4.89 | EGU25-18783 | ECS
Leo Helling, Barbara Belletti, Mathis Messager, Louis Rey, Hervé Parmentier, and Hervé Piégay

Global water occurrence data derived from satellite imagery provide critical insights into surface water dynamics, informing science and management of key issues like climate change, water scarcity, and biodiversity loss. The Landsat-based Global Surface Water (GSW) dataset (Pekel et al., 2016) has notably provided an important archive of global surface water extent and its changes over time. However, its 30-m resolution limits its applicability to smaller river systems. Since the launch of the Copernicus Sentinel-2 (S2) program, higher-resolution (10 m) imagery with a 5-day revisit time has been available, but has not yet been fully exploited. The only large-scale, temporally explicit layer of water occurrence based on S2 was provided by Yang et al. (2020) for Metropolitan France, but it is limited by noise from clouds, terrain shadows, and seasonal snow.

The recently developed Dynamic World (DW) database (Brown et al., 2022) provides a probabilistic, pixel-scale land cover classification of S2 images updated globally in near-real time, potentially enabling computationally efficient, temporally continuous water mapping at high resolution. Here we evaluate DW's water detection capabilities and propose a workflow for large-scale, monthly surface water occurrence mapping. Our approach integrates probabilistic and physics-based water classification, topographic filtering, and cloud masking to overcome limitations of GSW and existing Sentinel-2 applications. DW's water probabilities were compared to spectral indices (NDWI, MNDWI) and combinations of these metrics were explored. We also assessed the potential for topographic data (FABDEM) and pixel-quality measures (CloudScore+) to reduce misclassification and allow the inclusion of more observations. The analysis is applied to the French Rhône-Mediterranean basin, a region chosen for its diverse hydrological, climatic and geomorphological conditions. Verification is performed using a recently developed high-resolution annual land use product for mainland France (Manière, 2023) and results are compared to the GSW layer.

Preliminary results demonstrate that DW natively detects water well in most areas, but noise from shadows remains a challenge. Combining DW with NDWI and further filtering with topographic data yields significant classification improvements. In addition, pixel-based cloud filtering with CloudScore+ enables the inclusion of more observations than previous methods. We implemented this approach in Google Earth Engine with a simple, efficient algorithm providing monthly water occurrence observations for a whole year. This scalable workflow has the potential to address significant limitations of prior methods and to facilitate large-scale surface water mapping at high resolution. The results are especially significant in areas where in-situ hydrological monitoring is scarce.
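The combination of DW water probability, NDWI, and per-pixel cloud screening can be sketched as follows (thresholds and array names are illustrative, not the study's tuned values):

```python
import numpy as np

def water_mask(dw_water_prob, green, nir, prob_thresh=0.5, cloud_score=None, cs_thresh=0.6):
    """Combine Dynamic World water probability with NDWI and an optional
    per-pixel clear-sky score (all thresholds illustrative)."""
    ndwi = (green - nir) / (green + nir + 1e-9)           # McFeeters NDWI from S2 bands
    mask = (dw_water_prob > prob_thresh) & (ndwi > 0.0)   # require agreement of both criteria
    if cloud_score is not None:
        mask &= cloud_score > cs_thresh                   # keep only confidently clear pixels
    return mask

# toy 2x2 scene: one clear water pixel, one shadow-like false positive
prob  = np.array([[0.9, 0.8], [0.2, 0.1]])   # DW water probability
green = np.array([[0.30, 0.05], [0.20, 0.25]])
nir   = np.array([[0.05, 0.30], [0.30, 0.40]])
mask = water_mask(prob, green, nir)           # only the pixel where DW and NDWI agree survives
```

Requiring agreement between the probabilistic DW layer and the physics-based NDWI index is exactly what suppresses shadow pixels that fool either criterion alone.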

 

References

Brown, C. F., Brumby, S. P., Guzder-Williams, B., et al. (2022). Dynamic World, Near real-time global 10 m land use land cover mapping. Scientific Data, 9(1), Article 1. https://doi.org/10.1038/s41597-022-01307-4

Manière, L. (2023). Projet MAPD’O. https://bassinversant.org/wp-content/uploads/2023/03/presentation_mapdo.pdf

Pekel, J.-F., Cottam, A., Gorelick, N., & Belward, A. S. (2016). High-resolution mapping of global surface water and its long-term changes. Nature, 540(7633), Article 7633. https://doi.org/10.1038/nature20584

Yang, X., Qin, Q., Yésou, H., et al. (2020). Monthly estimation of the surface water extent in France at a 10-m resolution using Sentinel-2 data. Remote Sensing of Environment, 244, 111803. https://doi.org/10.1016/j.rse.2020.111803

How to cite: Helling, L., Belletti, B., Messager, M., Rey, L., Parmentier, H., and Piégay, H.: Towards Sentinel-2-based monthly water occurrence mapping with the Dynamic World data suite, EGU General Assembly 2025, Vienna, Austria, 27 Apr–2 May 2025, EGU25-18783, https://doi.org/10.5194/egusphere-egu25-18783, 2025.

X4.90 | EGU25-16846 | ECS
Ziqian Li

Accurate and efficient mapping of crop spatial distribution is crucial for agricultural monitoring, yield prediction, and environmental sustainability. In this study, we developed a novel workflow, the GEDI-Guided Crop Mapping Framework (GGCMF), for high-resolution mapping of corn and sorghum by integrating GEDI data, Sentinel-2 imagery, and machine learning classifiers within the Google Earth Engine (GEE) platform. The GGCMF workflow begins by using historical CDL crop type maps to extract canopy height and vertical structural differences from GEDI L2A vector data, which are processed within a newly developed GEE-compatible framework. This ensures minimal geolocation errors and allows accurate differentiation of high- and low-vegetation classes (e.g., corn + sorghum vs. other crops). Subsequently, Sentinel-2 imagery is employed to capture unique phenological and spectral features, enabling the generation of high-quality training samples for fine-scale differentiation of corn and sorghum.

This automated approach was applied to multiple years (2019–2022) and regions (China and the U.S.), assessing its transferability and robustness. Validation of corn classification achieved an average overall accuracy (OA) of 0.91, with strong correlations to independent labels, published mapping products (R² = 0.98), and official statistics (R² = 0.96). The current results for corn show that the GGCMF method is not only highly accurate but also robust across different temporal and spatial scales. The integration of GEDI and Sentinel-2 data within GEE offers a cost-effective and scalable solution for mapping structurally distinct crops. By leveraging GEDI's canopy height data for automatic labeling and combining it with Sentinel-2's high-resolution imagery, GGCMF presents a novel, automated workflow for crop mapping. This approach has significant potential for large-scale agricultural monitoring, providing timely and reliable data to support sustainable agricultural management.
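The height-based automatic labeling step can be sketched as follows (the CDL codes, the 2 m threshold, and the discard rule are illustrative assumptions, not the study's actual parameters):

```python
import numpy as np

def gedi_structural_labels(rh98, cdl_class, tall_crops=(1, 4), height_thresh=2.0):
    """Auto-label GEDI footprints as high- or low-vegetation training samples.
    rh98: GEDI relative-height-98 canopy metric (m); cdl_class: historical CDL code.
    tall_crops and height_thresh are hypothetical values for illustration."""
    is_tall_crop = np.isin(cdl_class, tall_crops)    # e.g. corn/sorghum in the CDL history
    is_tall_canopy = rh98 > height_thresh            # structural confirmation from GEDI
    labels = np.full(rh98.shape, -1, dtype=int)      # -1 = discard (sources disagree)
    labels[is_tall_crop & is_tall_canopy] = 1        # high-vegetation sample
    labels[~is_tall_crop & ~is_tall_canopy] = 0      # low-vegetation sample
    return labels

rh98 = np.array([3.1, 0.4, 2.8, 0.2])
cdl  = np.array([1, 23, 23, 24])                     # hypothetical CDL codes
labels = gedi_structural_labels(rh98, cdl)           # footprint 2 is discarded (disagreement)
```

Keeping only footprints where the CDL history and the GEDI canopy structure agree is what makes the auto-generated training labels robust to geolocation and map errors.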

How to cite: Li, Z.: Automated and Scalable Corn and Sorghum Mapping Across Diverse Regions Using GEDI and Time-Series Sentinel-2 Imagery, EGU General Assembly 2025, Vienna, Austria, 27 Apr–2 May 2025, EGU25-16846, https://doi.org/10.5194/egusphere-egu25-16846, 2025.

X4.91 | EGU25-13947
Daniel Price, Philipp Sueltrop, Mark Rocket, Matthew Fladeland, and Stefan Baumgartner

High Altitude Platform Stations (HAPS) are an emerging Earth Observation technology that is beginning to reach fruition. Operating for weeks at a time in the lower stratosphere (~20 km, 65,000 ft), these solar-powered long-endurance aircraft provide a transformative ability to monitor areas of interest at unprecedented temporal and spatial resolution. At these improved resolutions, HAPS provide a significant advantage over satellite-based sensors and have a broad range of scientific and operational applications. Large-scale deployment of HAPS technology will revolutionise Earth Sciences, with direct benefit to any science question attempting to improve understanding of Earth-surface system processes. Key industry applications include environmental monitoring, precision agriculture, forestry, smart cities and atmospheric sounding. The technology could also play a critical operational role in advancing maritime domain awareness and disaster response.

At Kea Aerospace we are currently conducting flight operations with our Mk1 Kea Atmos aircraft, capable of stratospheric flight to an optimal altitude of ~50,000 ft. The Mk1 has a 12.5 m wingspan and can carry a 2.5 kg payload within a 200 mm (L) × 200 mm (W) × 300 mm (H) volume. Average payload power consumption influences the mission profile and thermal-control requirements; power availability is mission- and payload-specific. We aim to deploy optical hyperspectral, synthetic aperture radar, and atmospheric sampling instrumentation with key scientific and industry partners including the National Aeronautics and Space Administration (NASA) and the German Aerospace Center (DLR).

We present preliminary findings from our Mk1 flight test programme in New Zealand and an overview of our future aspirations and upcoming Mk2 stratospheric long endurance aircraft programme.

How to cite: Price, D., Sueltrop, P., Rocket, M., Fladeland, M., and Baumgartner, S.: High Altitude Platform Stations: A Novel Earth Observation Technique from the Stratosphere, EGU General Assembly 2025, Vienna, Austria, 27 Apr–2 May 2025, EGU25-13947, https://doi.org/10.5194/egusphere-egu25-13947, 2025.

X4.92 | EGU25-12032
Katharina Schleidt and Stefan Jetschny

As the Copernicus program matures, ever more gridded data becomes available to researchers to incorporate into their studies. This data is partially raw satellite data, but an increasing amount of derived products are becoming available. In addition, data from various terrestrial sources is being aggregated to gridded formats, enabling integration with products derived from satellite data.

The technologies available for the provision, sharing and processing of gridded data have traditionally been developed by the EO community, with functionality tailored to the requirements of satellite data. Where these technologies have been applied to more terrestrial products, both those derived from satellite data and those generated from terrestrial sources, gaps become apparent in the metadata provided.

These gaps pertain to concepts not required for satellite data, because they are either not relevant or have clear default values. For example, in ISO 19123-1:2023 one can define whether the provided value pertains to the center of the grid cell or to one of its corners (pixel-in-center, pixel-in-corner), but it is not possible to indicate that the value pertains to the entire area of the cell, as required for land cover or population grids.
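The half-pixel ambiguity that this metadata must resolve can be made concrete with a small sketch (a north-up raster is assumed; note that the missing "area" semantics has no single point representation at all):

```python
import numpy as np

def cell_coords(origin_x, origin_y, dx, dy, ncols, nrows, convention="center"):
    """Coordinates attached to each grid value under the two point-anchoring
    conventions discussed above (pixel-in-center vs pixel-in-corner)."""
    cols = np.arange(ncols)
    rows = np.arange(nrows)
    if convention == "center":
        xs = origin_x + (cols + 0.5) * dx
        ys = origin_y - (rows + 0.5) * dy   # y decreases downward, as in most rasters
    elif convention == "corner":            # upper-left corner of each cell
        xs = origin_x + cols * dx
        ys = origin_y - rows * dy
    else:
        raise ValueError("unknown convention")
    return xs, ys

# same grid, two interpretations: every value shifts by half a cell
xs_c, _ = cell_coords(0.0, 100.0, 10.0, 10.0, 3, 3, "center")  # [5, 15, 25]
xs_k, _ = cell_coords(0.0, 100.0, 10.0, 10.0, 3, 3, "corner")  # [0, 10, 20]
```

When the metadata omits the convention, every downstream analysis silently inherits a half-cell georeferencing error, which is exactly the kind of gap the abstract describes.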

A further gap becomes apparent regarding the Observable Property conveyed by the provided data. When dealing with satellite data, the only Observable Property being provided tends to be radiance; the only additional metadata to be provided details the individual frequency bands. When dealing with terrestrial products, there are almost infinite lists of Observable Properties for which data is collected or generated, so a clean indication of what exactly the data represents is essential.

In some cases such information is provided through relevant extensions, e.g. the STAC raster extension, which foresees a link to a semantic resource defining what the data actually represents. Often, however, this information is not provided in structured form, and the user must extract it from textual documentation to understand what the data actually represents.

Proper provision of Observable Property concepts with gridded data would greatly enhance both data discoverability and reuse, as essential concepts describing the data are cleanly exposed, not requiring the user to guess from titles or poorly defined keywords. Proper integration of Observable Property concepts in core metadata structures would greatly increase the FAIRness of provided data.

Further issues encountered in sharing gridded data from diverse sources concern the currently available standardized web services and APIs. OGC WCS has been shown to have inherent errors when providing data over time, while work on OGC API - Coverages has yet to be completed. The openEO API is an interesting alternative, but as it is a processing API, deploying it purely for data access entails a great deal of unnecessary overhead.

In conclusion, to reap the potential of the diverse gridded data products emerging from both terrestrial and satellite sources, a number of issues remain to be resolved in both their description and their accessibility.

This work was enabled by the FAIRiCUBE EU Horizon Project.

How to cite: Schleidt, K. and Jetschny, S.: Between Heaven and Earth, EGU General Assembly 2025, Vienna, Austria, 27 Apr–2 May 2025, EGU25-12032, https://doi.org/10.5194/egusphere-egu25-12032, 2025.

X4.93 | EGU25-6359
Yungyo Im and Yangwon Lee

Satellite imagery is essential for continuously monitoring Earth phenomena, detecting disasters and hazards, and effectively identifying large- and small-scale changes across wide areas. Over the past few decades, advancements in satellite technology have significantly increased the use of satellite imagery. In particular, in change detection studies or disaster monitoring research utilizing multi-temporal and multi-satellite imagery, the fusion of images from two or more time periods for the same region is indispensable. However, because satellite imagery is inherently captured from a distance, geometric distortions are likely to occur, potentially resulting in misalignment between the images and the actual ground surface. The accuracy of high-resolution satellite imagery is determined by the precision of geometric corrections, which becomes an even more critical factor when using multi-satellite and multi-temporal imagery. Consequently, image registration is an essential process in studies that fuse high-resolution satellite imagery. In this study, we propose a highly accurate image registration method using high-resolution satellite imagery from CAS500-1, KOMPSAT-3A, and KOMPSAT-3. To overcome the limitations of feature point detection, a ResShift-based super-resolution technique was applied to generate a dataset with higher resolution than the original data, maximizing the performance of the feature matching models. For deep learning-based feature point detection and matching, the SuperPoint, SuperGlue, LightGlue, and RoMa models were utilized. Notably, the RoMa model demonstrated exceptional performance, recording over 2,300 correct matches on the super-resolved dataset. The results of this study are expected to contribute to effective image registration in various fields that utilize multi-temporal and multi-satellite imagery.
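Once a matcher such as RoMa or LightGlue returns corresponding point pairs, the registration transform itself can be estimated by least squares; a minimal sketch without outlier rejection (a RANSAC loop would normally wrap this step, and the affine model here is an illustrative simplification):

```python
import numpy as np

def fit_affine(src, dst):
    """Least-squares 2-D affine transform from matched keypoints.
    src, dst: (N, 2) arrays of corresponding points, N >= 3."""
    n = src.shape[0]
    A = np.hstack([src, np.ones((n, 1))])       # homogeneous coordinates [x, y, 1]
    # solve A @ M = dst for the 3x2 parameter matrix M
    M, *_ = np.linalg.lstsq(A, dst, rcond=None)
    return M

def apply_affine(M, pts):
    return np.hstack([pts, np.ones((pts.shape[0], 1))]) @ M

# recover a known transform from synthetic matches: slight scale plus a shift
rng = np.random.default_rng(42)
src = rng.uniform(0, 100, size=(50, 2))
M_true = np.array([[1.01, 0.0], [0.0, 0.99], [10.0, -5.0]])
dst = apply_affine(M_true, src)
M_est = fit_affine(src, dst)                    # recovers M_true on noise-free matches
```

With thousands of correct matches, as reported for RoMa, this estimation is heavily overdetermined, which is what makes the final registration sub-pixel accurate.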

This work is supported by the Korea Agency for Infrastructure Technology Advancement (KAIA) grant funded by the Ministry of Land, Infrastructure and Transport (Grant RS-2022-00155763).

 

How to cite: Im, Y. and Lee, Y.: Image Registration of CAS500-1 and KOMPSAT-3/3A Satellite Images Using Deep Learning-Based Feature Matching and Super-Resolution Techniques, EGU General Assembly 2025, Vienna, Austria, 27 Apr–2 May 2025, EGU25-6359, https://doi.org/10.5194/egusphere-egu25-6359, 2025.

Posters virtual: Fri, 2 May, 14:00–15:45 | vPoster spot 4

The posters scheduled for virtual presentation are visible in Gather.Town. Attendees are asked to meet the authors during the scheduled attendance time for live video chats. If authors uploaded their presentation files, these files are also linked from the abstracts below. The button to access Gather.Town appears just before the time block starts. Onsite attendees can also visit the virtual poster sessions at the vPoster spots (equal to PICO spots).
Display time: Fri, 2 May, 08:30–18:00
Chairpersons: Davide Faranda, Valerio Lembo

EGU25-17156 | Posters virtual | VPS20

Leveraging EO for Security and Resilience 

Michela Corvino and team
Fri, 02 May, 14:00–15:45 (CEST) | vP4.5

The ESA Directorate of Earth Observation Programmes has been actively leveraging satellite-based environmental information to address fragility contexts, focusing on areas such as environmental crimes, crimes against humanity, cross-border crimes, and onset of crises. Over the past decade, ESA has explored digital intelligence crime analysis by employing advanced data mining and machine learning tools to uncover hidden patterns and relationships in historical crime datasets, enabling better detection, prediction, and prevention of criminal activities.

Despite these advancements, the integration of Earth Observation (EO) capabilities into investigative practices remains limited. This is due to several challenges, including low awareness of EO's potential, a lack of illustrative use cases showcasing its benefits, inconsistencies between satellite data collection and investigative needs, the high cost of very high-resolution imagery, and restricted access to national intelligence sources. To overcome these barriers, ESA has been investigating strategies to systematically incorporate EO-derived information into investigative frameworks, including as legal evidence, aiming to enhance situational awareness and support stakeholders in developing procedures to exploit EO and OSINT for addressing international crimes and assessing fragility contexts, in cooperation with international organizations including Interpol, UNODC, and the ICC.

Recent developments in EO technology and methodologies have created significant opportunities for more impactful applications. ESA has focused on tailoring EO-based services and OSINT to meet the case-sensitive requirements of security and development end-users, enabling better integration of EO-derived insights into intelligence models. These efforts include developing advanced EO information products that go beyond routine offerings, testing and evaluating these products in collaboration with end-users, and demonstrating their value in operational settings.

The GDA Fragility, Conflict, and Security initiative has been a cornerstone of ESA's work, involving partnerships with International Financial Institutions (IFIs) to co-design tools that provide precise and timely information. These tools have supported initiatives aimed at reducing inequalities, promoting economic development, and enhancing environmental safety in fragile and conflict- or post-conflict-affected areas. By combining geospatial data with diverse data sources, ESA has delivered customized analyses and reports to improve emerging-threat analysis and decision-making processes.

Several ESA initiatives have demonstrated the benefits of EO services for assessing fragility risk exposure, characterizing dynamic needs in fragile contexts, planning post-conflict reconstruction, and managing natural resources. ESA constantly engages with stakeholders, including the OECD, security organizations, and humanitarian actors, and its community of industries and research centres to promote the adoption of EO in international development, humanitarian aid, and peacebuilding. Through these efforts, ESA continues to advance the role of EO in supporting justice, accountability, and sustainable recovery in fragile settings.

How to cite: Corvino, M. and team: Leveraging EO for Security and Resilience, EGU General Assembly 2025, Vienna, Austria, 27 Apr–2 May 2025, EGU25-17156, https://doi.org/10.5194/egusphere-egu25-17156, 2025.

EGU25-11808 | ECS | Posters virtual | VPS20

The use of InSAR and DInSAR for detecting land subsidence in Albania 

Pietro Belba
Fri, 02 May, 14:00–15:45 (CEST) | vP4.6

INTRODUCTION. InSAR, or Interferometric Synthetic Aperture Radar, is a technique for mapping ground deformation using radar images of the Earth's surface collected from orbiting satellites. DInSAR, or Differential SAR Interferometry, is an active remote sensing technique based on the principle that, thanks to the very high stability of satellite orbits, it is possible to exploit the information carried by the phase difference between two SAR images viewing the same scene from comparable geometries.

AIM. In this setting, the main objective of this study is to evaluate ground deformation in the region around the closed rock-salt mine in southern Albania. Our input for this exercise is two SAR images of the land near the former rock-salt mine in Dhrovjan, near the Blue Eye (Saranda, Albania).

RESULTS. By combining the phases of two images we produce an interferogram in which the phase is correlated with terrain topography and deformation; if the phase shifts related to topography are removed from the interferogram, the resulting product shows the surface deformation that occurred between the two acquisition dates. This methodology, differential interferometry, comprises interferometric processing, phase unwrapping, and finally creation of the displacement map. In our study we work step by step with these operators: Read (the two split products), Apply Orbit File, Back-Geocoding, Enhanced Spectral Diversity, Interferogram, TOPSAR Deburst, and Write. The resulting phase difference, the interferogram, contains all the information on the relative geometry; removing the topographic and orbital contributions may reveal ground movements along the line of sight between the radar and the target.

In the next processing chain we worked with these operators: Read (the debursted interferogram), TopoPhaseRemoval, Multilook, Goldstein Filtering, and Write. From Goldstein Filtering we also add the Snaphu Export operator.

To extract accurate information from the signal, correct phase-unwrapping procedures must be performed to retrieve the absolute phase value by adding integer multiples of 2π to each pixel. In this study we use SNAPHU, a two-dimensional phase-unwrapping algorithm; the workflow consists of these operators: Read (the wrapped image), Read(2) (the unwrapped image), Snaphu Import, PhaseToDisplacement, and Write. The result can be saved as .kmz and displayed in Google Earth, and a profile of the displacements can be produced.
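The PhaseToDisplacement step amounts to scaling the unwrapped differential phase by the radar wavelength; a sketch assuming a Sentinel-1-like C-band wavelength of ~5.55 cm (the sign convention and wavelength depend on the sensor and processor actually used):

```python
import numpy as np

WAVELENGTH_M = 0.0555  # assumed C-band radar wavelength (~5.55 cm)

def phase_to_los_displacement(unwrapped_phase):
    """Convert unwrapped differential phase (radians) to line-of-sight displacement
    in metres. One full 2*pi fringe corresponds to half a wavelength of LOS motion,
    because the signal travels the range change twice (out and back)."""
    return -WAVELENGTH_M * unwrapped_phase / (4.0 * np.pi)

# 0, one fringe, two fringes of unwrapped phase
d = phase_to_los_displacement(np.array([0.0, 2.0 * np.pi, 4.0 * np.pi]))
# one fringe -> lambda/2 = 2.775 cm of LOS displacement
```

This also explains the sensitivity of the technique: millimetre-scale subsidence is a measurable fraction of a fringe.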

DISCUSSION AND CONCLUSIONS

One of the SAR interferometry applications is deformation mapping and change detection. This work demonstrates the capability of interferometric processing for the observation and analysis of relative surface deformations in the radar LOS direction. When two observations are made from the same location in space but at different times, the interferometric phase is directly proportional to any change in the range of a surface feature. All three stages of the work are important and require accurate interpretation knowledge, especially when working with the SNAPHU program.

KEY-WORDS

InSAR, DInSAR, Interferogram

How to cite: Belba, P.: The use of InSAR and DInSAR for detecting land subsidence in Albania, EGU General Assembly 2025, Vienna, Austria, 27 Apr–2 May 2025, EGU25-11808, https://doi.org/10.5194/egusphere-egu25-11808, 2025.

EGU25-6240 | ECS | Posters virtual | VPS20

Enhancing the use of Geoinformation technologies to assess the socioeconomic impacts of climate change in the Arctic: Insights from the EO-PERSIST Project 

Georgios-Nektarios Tselos, Spyridon E. Detsikas, Beata Kroszka, Patryk Grzybowski, and George P. Petropoulos
Fri, 02 May, 14:00–15:45 (CEST) | vP4.7

In today's changing climate, there is an urgent need to understand the adverse impacts of climate change on natural environments, infrastructures, and industries. Permafrost regions in the Arctic in particular are highly vulnerable to global warming, which affects both the environment and socioeconomic conditions. Thus, systematic monitoring of such environments is of paramount significance. Advances in Geoinformation technologies, and in particular in Earth Observation (EO), cloud computing, GIS, and web cartography, create new opportunities and challenges for Arctic research examining the socioeconomic impact of climate change. The rapid advancements in EO in particular have led to an exponential increase in the volume of geospatial data coming from spaceborne EO sensors. This surge, combined with the fast developments in GIS and web cartography, presents significant challenges for effective management, access, and utilization by researchers, policymakers, and the public. Consequently, there is a growing need for advanced methodologies to organize, process, and deliver geospatial information from EO satellites in an accessible and user-friendly manner.

Recognizing the promising potential of geoinformation technologies, the European Union (EU) has funded several research projects that leverage advanced technologies such as geospatial databases and WebGIS platforms to streamline EO data handling and dissemination. One such project is EO-PERSIST (http://www.eo-persist.eu), which aims to create a collaborative research and innovation environment focused on leveraging existing services, datasets, and emerging technologies to achieve a consistently updated ecosystem of EO-based datasets for permafrost applications. To formulate the socioeconomic indicators, the project exploits state-of-the-art cloud processing resources, innovative Remote Sensing (RS) algorithms, and Geographic Information Systems (GIS)-based models, while also exchanging multidisciplinary knowledge. EO-PERSIST's innovative approach is anticipated to contribute to more informed decision-making and broader data accessibility for researchers, policymakers, and other stakeholders.

The aim of the present contribution is two-fold: first, to provide an overview of the EO-PERSIST Marie Curie Staff Exchanges EU-funded research project; second, to present some of the key project outputs delivered so far, relevant to the selected Use Cases of the project and to the geospatial database developed for assessing the socioeconomic impacts of climate change in permafrost Arctic regions.

This study is supported by the EO-PERSIST project, which has received funding from the European Union's Horizon 2020 research and innovation programme under grant agreement No. 101086386.

KEYWORDS: earth observation, cloud platform, Arctic, socioeconomic impact

How to cite: Tselos, G.-N., Detsikas, S. E., Kroszka, B., Grzybowski, P., and Petropoulos, G. P.: Enhancing the use of Geoinformation technologies to assess the socioeconomic impacts of climate change in the Arctic: Insights from the EO-PERSIST Project, EGU General Assembly 2025, Vienna, Austria, 27 Apr–2 May 2025, EGU25-6240, https://doi.org/10.5194/egusphere-egu25-6240, 2025.

EGU25-5359 | ECS | Posters virtual | VPS20

Innovating Coral Reef Mapping with Drones & NASA Fluid Lensing Technology in the Mariana Islands
(withdrawn)

Jonelle Sayama and Keanno Fausto
Fri, 02 May, 14:00–15:45 (CEST) | vP4.9

EGU25-8063 | ECS | Posters virtual | VPS20

Deploying UAV technology to assess typhoon impacts in vulnerable communities in Guam  

Keanno Fausto and Jonelle Sayama
Fri, 02 May, 14:00–15:45 (CEST) | vP4.8

The U.S. territory of Guam is threatened annually by high-intensity storms and typhoons due to its location in the western Pacific Ocean. The island’s infrastructure – buildings, roads, and utilities – bears the brunt of typhoon damage, which in turn affects public health, the economy, and natural resources. Traditionally, these impacts have been observed via satellite, radar, and official weather stations. Damages are assessed in the aftermath of the typhoon with a manual, on-the-ground approach led by the National Weather Service (NWS). This is often exhausting and time-consuming for the assessment team. Observations from the ground can inadvertently create data gaps in damage assessments due to areas made inaccessible by vegetative and construction debris and by flooded roads and pathways. This may leave many impacts eligible for local or federal assistance uncaptured. To address these data gaps and augment damage assessments, the University of Guam (UOG) Drone Corps program aims to assist local and federal government agencies (e.g., utility companies, public health, emergency services, and natural resource management) by collecting high-resolution aerial imagery to help prioritize and allocate limited resources. This presentation highlights the results of this novel collaboration of UOG, NWS, Guam Homeland Security (GHS), and the Office of the Governor of Guam in the creation of the damage assessment of Typhoon Mawar, which ravaged Guam on 24–25 May 2023. Following the typhoon, UOG worked with NWS to identify and capture imagery of vulnerable sites that were heavily impacted. This presentation will also share how the UOG Drone Corps’ data were disseminated among other agencies as supplemental data for natural disaster recovery efforts. The presentation will conclude with a summary of the UOG Drone Corps program model as a resource for developing resiliency strategies for vulnerable island communities using advanced and emerging technologies.

How to cite: Fausto, K. and Sayama, J.: Deploying UAV technology to assess typhoon impacts in vulnerable communities in Guam , EGU General Assembly 2025, Vienna, Austria, 27 Apr–2 May 2025, EGU25-8063, https://doi.org/10.5194/egusphere-egu25-8063, 2025.