ESSI4.2 | Novel methods and applications of satellite and aerial imagery
Convener: Ionut Cosmin Sandric | Co-conveners: George P. Petropoulos, Marina Vîrghileanu, Juha Lemmetyinen
Orals
| Fri, 19 Apr, 14:00–15:45 (CEST)
 
Room G2
Posters on site
| Attendance Fri, 19 Apr, 16:15–18:00 (CEST) | Display Fri, 19 Apr, 14:00–18:00
 
Hall X2
Posters virtual
| Attendance Fri, 19 Apr, 14:00–15:45 (CEST) | Display Fri, 19 Apr, 08:30–18:00
 
vHall X2
Understanding Earth's natural processes, particularly in the context of global climate change, has gained widespread recognition as an urgent and central research priority that requires further exploration. Recent advancements in satellite technology, characterized by new platforms with high revisit times and the growing capabilities for collecting repetitive ultra-high-resolution aerial images through unmanned aerial vehicles (UAVs), have ushered in exciting opportunities for the scientific community. These developments pave the way for developing and applying innovative image-processing algorithms to address longstanding and emerging environmental challenges.
The primary objective of the proposed session is to convene scientific researchers dedicated to the field of satellite and aerial time-series imagery. The aim is to showcase ongoing research efforts and novel applications in this dynamic area. This session is specifically focused on presenting studies centred around the creation and utilization of pioneering algorithms for processing satellite time-series data, as well as their applications in various domains of remote sensing, aimed at investigating long-term processes across all Earth's realms, including the sea, ice, land, and atmosphere.
In today's era of unprecedented environmental challenges and the ever-increasing availability of data from satellite and aerial sources, this session serves as a platform to foster collaboration and knowledge exchange among experts working on the cutting edge of Earth observation technology. By harnessing the power of satellite and aerial time-series imagery, we can unlock valuable insights into our planet's complex systems, ultimately aiding our collective efforts to address pressing global issues such as climate change, natural resource management, disaster mitigation, and ecosystem preservation.
The session organizers welcome contributions from researchers engaged in applied and theoretical research. These contributions should emphasize fresh methods and innovative satellite and aerial time-series imagery applications across all geoscience disciplines. This inclusivity encompasses aerial and satellite platforms and the data they acquire across the electromagnetic spectrum.

Orals: Fri, 19 Apr | Room G2

Chairperson: Ionut Cosmin Sandric
14:00–14:05
14:05–14:15
|
EGU24-15052
|
On-site presentation
Hongzhao Tang, Chenchao Xiao, Kun Shang, and Taixia Wu

Water quality is crucial for human health and the sustainable development of the ecological environment. Traditional water quality monitoring methods rely on discrete in-situ measurements, limiting our understanding of water quality variations at large temporal and spatial scales. While remote sensing technology offers efficient water quality observation, it is mostly confined to monitoring optically active substances, making it challenging to assess water quality changes caused by chemical indicators. This study proposes the hypothesis that changes in water quality status caused by chemical indicators are, within a certain range, reflected in the water-leaving reflectance. To validate this hypothesis, the water quality index (WQI) was initially calculated using data from water quality monitoring stations, resulting in water quality status data (ranging from excellent to severe pollution). Following this, an information extraction and classification inversion approach was proposed to establish a connection between ZY1-02D hyperspectral imagery and water quality status, leading to the development of a robust water quality status identification model. Validation results showed an average model accuracy of up to 82%, confirming the hypothesis of this study. Subsequently, this model was used to assess the water quality status of 180 large lakes and reservoirs (hereafter referred to as lakes) within China from 2019 to 2023 for the first time. The results indicated that 76.1% of the lakes exhibited excellent to good water quality conditions, with a spatial distribution pattern showing a "better in the west, worse in the east" trend. Over the 4-year period, 33.33% of the lakes showed improvement, while 50% remained stable, with the western and eastern regions primarily exhibiting stability and improvement, respectively.
The long-term changes in water quality status are influenced by various interacting factors, with different patterns of influence existing in different time periods and regions. In the early years, natural factors (average elevation) played a dominant role. However, over time, the impact of meteorological factors (precipitation and wind speed) and anthropogenic factors (gross domestic product) gradually increased. These influences can be attributed to significant climate changes and effective management measures over the past two decades. The findings support rapid assessment of environmental conditions and sustainable resource management, highlighting the potential of remote sensing technology in water quality monitoring.
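[Editor's note] The WQI-to-status step described above can be sketched as follows. The indicator set, weights, normalization ranges, and class thresholds here are illustrative assumptions, not the authors' specification.

```python
import numpy as np

# Hypothetical weights for three chemical indicators (e.g., dissolved oxygen,
# ammonia nitrogen, permanganate index) -- illustrative only.
WEIGHTS = {"do": 0.4, "nh3n": 0.3, "codmn": 0.3}

def sub_index(value, worst, best):
    """Scale one measurement to 0-100, where 100 is the best quality."""
    s = 100.0 * (value - worst) / (best - worst)
    return float(np.clip(s, 0.0, 100.0))

def water_quality_index(measurements, ranges):
    """Weighted-sum WQI over the chemical indicators."""
    return sum(WEIGHTS[k] * sub_index(v, *ranges[k]) for k, v in measurements.items())

def status_class(wqi):
    """Map a WQI score onto ordinal status labels (thresholds are illustrative)."""
    for threshold, label in [(80, "excellent"), (60, "good"), (40, "moderate"), (20, "poor")]:
        if wqi >= threshold:
            return label
    return "severe pollution"

# worst/best value pairs per indicator; for nh3n and codmn, lower is better
ranges = {"do": (0.0, 10.0), "nh3n": (5.0, 0.0), "codmn": (15.0, 0.0)}
wqi = water_quality_index({"do": 8.0, "nh3n": 0.5, "codmn": 3.0}, ranges)
print(status_class(wqi))
```

The per-station status labels produced this way would then serve as training targets for the imagery-based identification model.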

How to cite: Tang, H., Xiao, C., Shang, K., and Wu, T.: A new method for remote sensing assessment of water quality status based on ZY1-02D hyperspectral imagery—A case study of large lakes and reservoirs in China, EGU General Assembly 2024, Vienna, Austria, 14–19 Apr 2024, EGU24-15052, https://doi.org/10.5194/egusphere-egu24-15052, 2024.

14:15–14:25
|
EGU24-5528
|
On-site presentation
Wang Shanshan

Airborne hyperspectral remote sensing data provide a wide range of rapid, non-destructive and near-laboratory-quality reflectance spectra for mineral mapping and lithological discrimination, thereby ushering in an innovative era of remote sensing. In this study, NEO HySpex cameras, which comprise 504 spectral channels in the spectral ranges of 0.4–1.0 μm and 1.0–2.5 μm, were mounted on a delta wing XT-912 aircraft. The designed flexibility and modular nature of the HySpex aircraft hyperspectral imaging system made it relatively easy to test, transport, install, and remove the system multiple times before the acquisition flights. Following the designed flight plan, including the route distance, length, height, and flight speed, we acquired airborne hyperspectral images of the Yudai porphyry Cu (Au, Mo) mineralization (Kalatag District, Eastern Tianshan terrane, Northwest China) at high spectral and spatial resolutions.

Using hyperspectral images from our own HySpex airborne flight, we extracted and identified alteration mineral assemblages of the Yudai porphyry Cu (Au, Mo) mineralization (Kalatag District, northwest China). The main objectives of this study were to (1) acquire HySpex airborne hyperspectral images of the Yudai porphyry Cu (Au, Mo) mineralization, (2) determine a workflow for processing HySpex images, and (3) identify alteration minerals using a random forest (RF) algorithm and a comprehensive field survey.

By comparing the features of the HySpex hyperspectral data and standard spectra from the United States Geological Survey database, endmember pixels with spectral signatures of most alteration mineral assemblages (goethite, hematite, jarosite, kaolinite, calcite, epidote, and chlorite) were extracted. After the HySpex data processing workflow, the distribution of alteration mineral assemblages (iron oxide/hydroxide, clay, and propylitic alterations) was mapped using the random forest (RF) algorithm. The experiments demonstrated that the data-processing workflow and RF algorithm are feasible and effective, showing good classification accuracy. The overall classification accuracy and Kappa coefficient of alteration mineral identification were 73.08% and 65.73%, respectively. The main alteration mineral assemblages were primarily distributed around pits and grooves, consistent with field-measured data. Our results confirm that HySpex airborne hyperspectral data have potential applications in basic geological surveys and mineral exploration, providing a viable alternative for mineral mapping and identifying lithological units at high spatial resolution over large areas and inaccessible terrains.
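[Editor's note] The pixel-wise RF classification step can be sketched with scikit-learn. The endmember spectra here are random synthetic stand-ins for labelled HySpex pixels; the noise level and sample counts are illustrative assumptions, not the study's data.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, cohen_kappa_score

rng = np.random.default_rng(0)
n_bands = 504  # HySpex VNIR + SWIR channels
classes = ["goethite", "hematite", "jarosite", "kaolinite",
           "calcite", "epidote", "chlorite"]

# Synthetic endmember spectra plus per-pixel noise stand in for labelled pixels.
endmembers = rng.random((len(classes), n_bands))
X = np.vstack([em + rng.normal(0, 0.05, (200, n_bands)) for em in endmembers])
y = np.repeat(np.arange(len(classes)), 200)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3,
                                          random_state=0, stratify=y)
rf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
pred = rf.predict(X_te)
print(accuracy_score(y_te, pred), cohen_kappa_score(y_te, pred))
```

In practice the training pixels would come from the extracted endmember regions, and the fitted forest would be applied to every image pixel to produce the alteration map.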

How to cite: Shanshan, W.: Identifying and Mapping Alteration Minerals Using HySpex Airborne Hyperspectral Data in the Yudai Porphyry Cu (Au, Mo) Mineralization, Kalatag District, NW China, EGU General Assembly 2024, Vienna, Austria, 14–19 Apr 2024, EGU24-5528, https://doi.org/10.5194/egusphere-egu24-5528, 2024.

14:25–14:35
|
EGU24-19819
|
Virtual presentation
Evaluating the MULESME downscaling scheme in retrieving soil moisture content: A case study from Greece
(withdrawn)
Spyridon E. Detsikas and George Petropoulos
14:35–14:45
|
EGU24-8334
|
ECS
|
On-site presentation
Nick Schüßler, Jewgenij Torizin, Michael Fuchs, Dirk Kuhn, Dirk Balzer, Claudia Gunkel, Steffen Prüfer, Kai Hahne, and Karsten Schütze

Geological mapping in dynamic coastal areas is crucial, but traditional methods are laborious and expensive. Employing uncrewed aerial vehicle (UAV)-based mapping for high-resolution imagery, combined with deep learning for texture classification and segmentation, offers a promising improvement.

In the “AI-aided Assessment of Mass Movement Potentials Along the Steep Coast of Mecklenburg Western Pomerania” project, we explore the use of deep learning for geological mapping. We conduct repetitive UAV surveys across five distinct coastal areas, documenting various cliff types under different lighting and seasonal conditions. The imagery yields texture patterns for categories such as vegetation, chalk, glacial till, sand, water and cobble.

We apply two strategies: classification and semantic segmentation. Classification predicts one label per texture patch, while semantic segmentation labels each pixel. Classification requires distinct files with pre-labeled textures, whereas segmentation needs a training dataset with label masks, assigning class values to each texture pixel.

We employ Convolutional Neural Networks (CNN) for classification tasks, designing custom nets with convolutional blocks and attention layers, and testing existing architectures like ResNet50. We evaluate classification performance using accuracy measures and run sensitivity analysis to identify the smallest effective patch size for texture recognition. The effective patch size determines the final mapped class resolution. Classification is less detailed than segmentation but potentially more generalizable.

For semantic segmentation, we employ UNet architectures with encoder-decoder structures and attention gates for improved image context interpretation. We evaluate segmentation using the intersection over union (IoU) index. Due to the need for extensive, accurate training data, we employ data augmentation to create artificial datasets blending real-world textures, inspired by the Prague texture dataset.

Classification results show about 95% accuracy across target classes using RGB image input. Notably, the pre-trained ResNet50 exhibits moderate performance in texture recognition and is outperformed by simpler net designs trained from scratch; however, when its pre-trained weights are discarded and it is trained from scratch, it performs adequately. For overall improvement, we anticipate that adding a Near Infrared (NIR) band will enhance classification, particularly for vegetation and glacial till, which are currently prone to misclassification.

Semantic segmentation yields IoUs of around 0.94 on artificial datasets. However, when applied to real-world imagery, the models show a noisy performance, yielding significant misclassifications. Thus, better generalization requires further fine-tuning and possibly integrating real-world data along with artificial datasets. Also, further experiments with data augmentation by extending the dataset and introducing different complexity levels could provide better generalization to real-world data.
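[Editor's note] The per-class IoU metric used to score the segmentation models can be computed as follows; this is a generic sketch on a toy mask, not the project's evaluation code.

```python
import numpy as np

def iou_per_class(pred, truth, n_classes):
    """Intersection-over-union for each class of a semantic segmentation mask."""
    ious = []
    for c in range(n_classes):
        inter = np.logical_and(pred == c, truth == c).sum()
        union = np.logical_or(pred == c, truth == c).sum()
        ious.append(inter / union if union else float("nan"))
    return ious

# Toy 3x3 masks with three texture classes (0, 1, 2).
truth = np.array([[0, 0, 1], [1, 1, 2], [2, 2, 2]])
pred  = np.array([[0, 1, 1], [1, 1, 2], [2, 2, 0]])
print(iou_per_class(pred, truth, 3))
```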

In summary, combining UAV mapping with AI techniques holds significant potential for improving geological mapping efficiency and accuracy in dynamic coastal environments, providing reliable parametrization for data-driven models that require up-to-date geological information in high resolution.

How to cite: Schüßler, N., Torizin, J., Fuchs, M., Kuhn, D., Balzer, D., Gunkel, C., Prüfer, S., Hahne, K., and Schütze, K.: Rapid geological mapping based on UAV imagery and deep learning texture classification and segmentation, EGU General Assembly 2024, Vienna, Austria, 14–19 Apr 2024, EGU24-8334, https://doi.org/10.5194/egusphere-egu24-8334, 2024.

14:45–14:55
|
EGU24-6725
|
ECS
|
On-site presentation
Yi Xu, Tiejun Wang, and Andrew Skidmore

As fine-resolution remotely sensed data rapidly evolves, individual trees are increasingly becoming a prevailing unit of analysis in many scientific disciplines such as forestry, ecology, and urban planning. Fusion of airborne LiDAR and aerial photography is a promising means for improving the accuracy of individual tree mapping. However, local misalignments between these two datasets are frequently ignored. Solving this problem using traditional pixel-based image registration methods requires extensive computation and is extremely challenging on large scales. In our earlier research, we proposed an approach that involved determining the optimal offset vector for a local area and using it to rectify the spatial positions of all individual trees in that area. Although the approach is effective in addressing mismatch issues, it still exhibits large errors for some trees and is susceptible to changes in scale. Here, we propose an enhanced algorithm by constructing a data structure called a k-dimensional tree (also known as K-D Tree) to efficiently search for each tree’s unique offset vector and assigning the closest determined offset vector to candidate trees that lack corresponding counterparts in the reference data. The enhanced algorithm significantly improves the matching accuracy of individual trees, elevating it from 0.861 ± 0.152 to 0.911 ± 0.126 (p < 0.01, t-test). Moreover, it substantially reduces the computational time by approximately 70% and successfully overcomes limitations associated with scale changes. The example data, source code, and instructions for the enhanced algorithm are publicly available on GitHub*.
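[Editor's note] The K-D tree search for per-tree offset vectors can be sketched with SciPy's cKDTree. The point sets and offset below are synthetic, and the median aggregation is a simplification of the published per-tree assignment, so treat this as an illustration rather than the released algorithm (see the GitHub link above for the real code).

```python
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(1)
reference = rng.uniform(0, 100, (500, 2))      # tree positions from aerial photos
true_offset = np.array([0.5, -0.3])            # simulated local misalignment
candidates = reference + true_offset + rng.normal(0, 0.05, reference.shape)  # LiDAR trees

tree = cKDTree(reference)                      # build K-D tree once over the reference
dist, idx = tree.query(candidates)             # nearest reference tree per candidate
offsets = candidates - reference[idx]          # per-tree offset vectors
estimated = np.median(offsets, axis=0)         # robust local offset estimate
corrected = candidates - estimated             # rectified tree positions
print(estimated)
```

The tree query makes each nearest-neighbour lookup O(log n), which is where the reported ~70% runtime reduction over pixel-based registration plausibly comes from.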

*https://github.com/XUYIRS/Individual_trees_matching

How to cite: Xu, Y., Wang, T., and Skidmore, A.: An enhanced algorithm for co-registering individual trees extracted from airborne LiDAR and aerial photographs, EGU General Assembly 2024, Vienna, Austria, 14–19 Apr 2024, EGU24-6725, https://doi.org/10.5194/egusphere-egu24-6725, 2024.

14:55–15:05
|
EGU24-5766
|
ECS
|
On-site presentation
Laura Sotomayor, Teja Kattenborn, Florent Poux, Darren Turner, and Arko Lucieer

Semi-arid terrestrial ecosystems exhibit sparse vegetation, characterised by herbaceous non-woody plants (e.g., forbs or grass) and woody plants (e.g., trees or shrubs). These ecosystems encounter challenges from global climate change, including shifts in rainfall, temperature variations, and elevated atmospheric carbon dioxide (CO2) levels. Effective monitoring is essential for informed decision-making and sustainable natural resource management in the context of rapid environmental changes.

Fractional Vegetation Cover (FVC) is a key biophysical parameter for monitoring ecosystems, indicating their balance and resilience. The assessment of FVC is important for evaluating vegetation biomass and carbon stocks, pivotal components of ecosystem health. The precise mapping of FVC across various scales involves three key cover types: photosynthetic vegetation (PV), representing ground covered by green leaves facilitating photosynthesis; non-photosynthetic vegetation (NPV), encompassing branches, woody stems, standing litter, and dry leaves with reduced or no chlorophyll content; and bare earth (BE), representing the uncovered ground surface without vegetation. FVC offers a quantitative measure of the relative contribution of each cover type to the total ground surface, aiding in characterising vegetation composition.

Efficient and accurate remote sensing techniques are essential to complement conventional field-based methods for performing FVC measurements.  Drone remote sensing technologies provide opportunities to capture fine-scale spatial variability in vegetation, enabling the derivation of ecological (e.g., FVC), biophysical (e.g., aboveground biomass), and biochemical variables (e.g., leaf chlorophyll content). Local calibration and validation of drone products enhance upscaling to coarser spatial scales defined by satellite observations, improving the understanding of vegetation dynamics at the national scale for subsequent change detection analyses.

The research project applies deep learning methods in remote sensing to enhance understanding of ecosystem composition, structure, and function features, with a specific focus on diverse terrestrial ecosystems in Australia. Leveraging drone technologies and advanced deep learning algorithms, the project develops automated workflows for systematic ecosystem health assessments, thereby making a significant contribution to the validation of satellite observations. The research framework emphasises the potential of Deep Learning methods in generating FVC products from RGB and multispectral imagery obtained through drone data. The conclusion highlights the benefits of integrating LiDAR data with Deep Learning approaches for analysing denser vegetation structure scenarios, offering a holistic approach for a comprehensive understanding of ecosystem health and dynamics. This approach provides valuable insights for environmental monitoring and management.

How to cite: Sotomayor, L., Kattenborn, T., Poux, F., Turner, D., and Lucieer, A.: Investigating Deep Learning Techniques to Estimate Fractional Vegetation Cover in the Australian Semi-arid Ecosystems combining Drone-based RGB imagery, multispectral Imagery and LiDAR data., EGU General Assembly 2024, Vienna, Austria, 14–19 Apr 2024, EGU24-5766, https://doi.org/10.5194/egusphere-egu24-5766, 2024.

15:05–15:15
|
EGU24-14124
|
ECS
|
On-site presentation
Tian Zhao, Wanjuan Song, and Xihan Mu

Fractional Vegetation Cover (FVC) is an important vegetation structure factor for agriculture, forestry, and ecology. Due to its simplicity and reasonable precision, the vegetation index-based (VI-based) mixture model is commonly used to estimate vegetation cover from remotely sensed data. Improving the accuracy and computational efficiency of FVC estimation requires rapidly and precisely calculating the model's two most important parameters, namely the pure vegetation index values of fully-vegetated and bare soil pixels. However, no mapping of pure normalized difference vegetation index (NDVI) values has yet been produced. When pure pixels are lacking, as in many ecosystems, traditional empirical statistical approaches for obtaining pure vegetation index values are unreliable and challenging. In this study, pure NDVI values are mapped over China by combining the traditional empirical statistical method with the multi-angle remotely sensed inversion method (MultiVI); the result can be adapted to various application scenarios for vegetation cover estimation when used with vegetation indices of different spatial and temporal resolutions. When the pure NDVI values extracted from a total of 19 GF-2 images in various parts of China were compared to those obtained in this study, the findings showed a good degree of accuracy. Furthermore, in semi-arid areas where fully-vegetated pixels are lacking and evergreen vegetation areas where bare soil pixels are lacking, this study compensates for the inability of empirical statistical methods to obtain accurate pure NDVI values and provides reasonable endmember NDVI values for vegetation cover estimation using the VI-based mixture model.
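[Editor's note] The VI-based (dimidiate pixel) mixture model that consumes these endmember values is a standard linear unmixing of each pixel's NDVI between the pure soil and pure vegetation values; the numbers below are illustrative.

```python
import numpy as np

def fvc_dimidiate(ndvi, ndvi_soil, ndvi_veg):
    """VI-based mixture model:
    FVC = (NDVI - NDVI_soil) / (NDVI_veg - NDVI_soil), clipped to [0, 1]."""
    fvc = (ndvi - ndvi_soil) / (ndvi_veg - ndvi_soil)
    return np.clip(fvc, 0.0, 1.0)

# Illustrative per-pixel NDVI and endmember values.
ndvi = np.array([0.10, 0.45, 0.80, 0.92])
print(fvc_dimidiate(ndvi, ndvi_soil=0.10, ndvi_veg=0.90))
```

The study's contribution is precisely the spatial mapping of `ndvi_soil` and `ndvi_veg`, which this one-line model then turns into per-pixel cover fractions.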

How to cite: Zhao, T., Song, W., and Mu, X.: Mapping pure vegetation index values based on multisource remote sensing data over China for estimation of fractional vegetation cover, EGU General Assembly 2024, Vienna, Austria, 14–19 Apr 2024, EGU24-14124, https://doi.org/10.5194/egusphere-egu24-14124, 2024.

15:15–15:25
|
EGU24-12119
|
ECS
|
On-site presentation
Owen Smith, Xiaojie Gao, and Josh Gray

As the volume of satellite observation data experiences exponential growth, our ability to process this data and extract meaningful insights is struggling to keep pace. This challenge is particularly pronounced when dealing with dynamic and variable phenomena across diverse spatiotemporal scales. Achieving accurate representation of these nuances necessitates data generation at high spatial and temporal resolutions, resulting in significant redundancy in computation and storage.

This issue is notably evident in the case of products that monitor plant phenology over time, which are crucial for assessing the impacts of climate change and monitoring agriculture. Computational complexities often limit these products to coarse resolutions (500m-1km) or short time frames, distorting our understanding of phenology across scales. In contrast, various approaches in hydrology and land surface modeling have utilized tiled grids and meshes to capture spatial heterogeneity and reduce dimensionality for complex modeling. This is accomplished through decomposing or aggregating modeling surfaces into response units representative of system drivers and have been shown to enable improved computational capabilities while still maintaining accurate approximations. We believe that similar modeling techniques can be leveraged to enable phenological modeling at higher resolutions. 

Building on these advancements, we develop a variable resolution scheme to represent land surface heterogeneity for modeling Land Surface Phenology (LSP) and decompose Landsat and Sentinel-2 Enhanced Vegetation Index (EVI) into adaptive areal units. Through this method we operationalize the Bayesian Land Surface Phenology (BLSP) model, a hierarchical Bayesian algorithm capable of constructing LSP data for the complete Landsat archive. While BLSP produces highly valuable results, it faces computational challenges for large-scale applications, as its current time series approach requires each pixel to be computed individually. Our approach reduces the dimensionality of modeling LSP by an order of magnitude to improve computational efficiency and enable the production of a 30 m BLSP product. These improvements are key to providing a region-wide, long-term phenometrics product at 30 m resolution, necessary to support studies of long-term changes at a fine scale.



How to cite: Smith, O., Gao, X., and Gray, J.: Overcoming Big Data Challenges in Satellite Observation: A Variable Resolution Scheme for Modeling Land Surface Phenology, EGU General Assembly 2024, Vienna, Austria, 14–19 Apr 2024, EGU24-12119, https://doi.org/10.5194/egusphere-egu24-12119, 2024.

15:25–15:35
|
EGU24-22091
|
ECS
|
Virtual presentation
Andrei Toma, Ionut Sandric, Bogdan Mihai, and Albert Scrieciu

Shorelines, as interfaces between land and water, are subject to continuous transformation due to various natural phenomena and human-induced activities. Natural processes, such as erosion and sedimentation, play an important role in shaping coastal areas, while human activities, like urban expansion, also exert significant stress on coastal areas. An illustrative instance of human-induced change is the Mamaia beach enlargement project, initiated along the coast of Romania at the Black Sea by the end of 2020 and executed throughout 2021. The analysis of this coastal transformation started in 2020, preceding the actual implementation of the beach enlargement, and extended until late 2023. This timeframe was selected to capture the entirety of the dynamic changes observable in the study region. Utilizing the advanced multi-temporal CoastSat toolkit, the analysis involved a detailed examination of 130 high-resolution images acquired by Copernicus Sentinel-2 satellites. Implemented within a Jupyter notebook environment using Python, CoastSat showcased its efficacy in extracting shorelines from the multi-temporal dataset, enabling a thorough understanding of the coastal dynamics observed in the Mamaia beach enlargement project. The analysis reveals an expansion of over 200 m on the southern part of Mamaia beach. This transformation underscores the significant impact of human activities, emphasizing the need for sustainable coastal management practices in the face of evolving environmental challenges.
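[Editor's note] CoastSat's land/water separation builds on a water index computed from Sentinel-2 bands. The sketch below is a heavily simplified stand-in (a fixed-threshold MNDWI mask on a toy transect), not CoastSat's actual implementation, which adds per-image thresholding and sub-pixel shoreline contouring.

```python
import numpy as np

def mndwi(green, swir):
    """Modified Normalized Difference Water Index: water pixels score high."""
    return (green - swir) / (green + swir + 1e-12)

def water_mask(green, swir, threshold=0.0):
    """Classify water where MNDWI exceeds a threshold (illustrative)."""
    return mndwi(green, swir) > threshold

# Toy 1-D cross-shore transect: land (bright SWIR) on the left, water on the right.
green = np.array([0.10, 0.11, 0.12, 0.10, 0.09])
swir  = np.array([0.20, 0.18, 0.10, 0.03, 0.02])
mask = water_mask(green, swir)
shoreline_index = int(np.argmax(mask))  # first water pixel along the transect
print(shoreline_index)
```

Tracking this land/water boundary across the 130-image time series is what yields the shoreline-change signal reported above.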

How to cite: Toma, A., Sandric, I., Mihai, B., and Scrieciu, A.: Automatic analysis of shoreline dynamics on Sentinel-2 datasets using CoastSat software toolkit, EGU General Assembly 2024, Vienna, Austria, 14–19 Apr 2024, EGU24-22091, https://doi.org/10.5194/egusphere-egu24-22091, 2024.

15:35–15:45
|
EGU24-22442
|
On-site presentation
Alfonso Valerio Ragazzo, Alessandro Mei, Valeria Tomaselli, Francesca Carruggio, Andrea Berton, Giuliano Fontinovo, Fabio Michele Rana, and Maria Adamo

Biodiversity monitoring through remote sensing is of crucial importance for assessing ecosystem integrity and resilience. This study features the integration of optical and LiDAR data from Unmanned Aerial Vehicles (UAV) for Digital Elevation Model (DEM) retrieval of the Lesina (Puglia, Italy) Coastal Dune Systems (CDS), aiming to support ecosystem monitoring for the habitat type 2250* "Coastal dunes with Juniperus spp.". This work aims to provide a Free and Open-Source Software (FOSS) workflow able to extract and calculate a Digital Surface Model (DSM), Digital Terrain Model (DTM), and Digital Difference Model (DDM) from LiDAR and optical data in a very densely vegetated environment. Using RStudio, CloudCompare, and Quantum GIS software, we developed a methodology for DDM extraction to compute Juniperus spp. architecture (areas and volumes), which can reflect habitat reduction and fragmentation when compared at different timescales. Accordingly, the point clouds from the two datasets (optical and LiDAR) were integrated. An orthophoto and a DSM were then generated, needed for the extraction of a vegetation mask using spectral indices (e.g., Excess Green) and for the choice of a pixel threshold, both able to isolate as much as possible the contribution of the vegetation within the DSM. Scripts in RStudio simplified and sped up the processing, with additional code for further isolating the vegetation matrix from the terrain. Areas covered by vegetation were consequently assigned "NoData" values. To fill these areas with meaningful elevation values from their surroundings, linear interpolation with the Inverse Distance Weighting (IDW) interpolator was applied, yielding a "raw" DTM with less signal noise due to wind disturbance and shading.
Subsequent processing involved removing persistent noise from the point cloud extracted from the "raw" DTM. Using the segmentation tool in CloudCompare, extraneous points were removed, eliminating altimetric errors in the elevation model. A final DTM was then extracted from the point cloud, representing the terrain altimetry in the study area more accurately. Finally, to obtain the height of the canopies, the DDM was computed with the expression "DSM − DTM = DDM" in the QGIS Raster Calculator. The canopies were treated as 2.5D geometries, so the resulting heights represent only the contribution of the above-ground biomass. Vegetation areas and volumes were then derived from the DDM, with the canopies' total volume calculated by summing the results for each pixel of interest. This methodology allowed us to monitor biomass parameters (areas and volumes) with a FOSS workflow in a CDS context with very dense Juniperus spp. vegetation.
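[Editor's note] The "DSM − DTM = DDM" step and the per-pixel area/volume computation can be sketched in a few lines; the arrays, height threshold, and ground sampling distance below are illustrative, not the study's values.

```python
import numpy as np

pixel_area = 0.05 * 0.05  # m^2 per pixel; assumed 5 cm ground sampling distance

dsm = np.array([[12.0, 12.4], [12.1, 13.0]])  # canopy-top surface (m a.s.l.)
dtm = np.array([[11.8, 11.9], [12.0, 12.0]])  # interpolated bare-earth surface

ddm = dsm - dtm                       # canopy height model ("DSM - DTM = DDM")
veg = ddm > 0.1                       # keep pixels above a minimal height threshold
area = veg.sum() * pixel_area         # vegetated area in m^2
volume = ddm[veg].sum() * pixel_area  # 2.5-D volume: height x pixel footprint
print(area, volume)
```

Summing height times pixel footprint is exactly the 2.5D geometry treatment described above: each canopy pixel contributes a vertical prism.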

How to cite: Ragazzo, A. V., Mei, A., Tomaselli, V., Carruggio, F., Berton, A., Fontinovo, G., Rana, F. M., and Adamo, M.: Digital Terrain Model retrieval within a Coastal Dune Systems by integrating Unmanned Aerial Vehicles’ optical and LiDAR sensors by using a FOSS workflow, EGU General Assembly 2024, Vienna, Austria, 14–19 Apr 2024, EGU24-22442, https://doi.org/10.5194/egusphere-egu24-22442, 2024.

Posters on site: Fri, 19 Apr, 16:15–18:00 | Hall X2

Display time: Fri, 19 Apr, 14:00–Fri, 19 Apr, 18:00
X2.15
|
EGU24-3127
|
ECS
Miloš Pandžić, Dejan Pavlović, Sanja Brdar, Milan Kilibarda, and Oskar Marko

Transfer learning (TL) has rapidly gained popularity in recent years across research disciplines due to its practicality, modest resource requirements, and often quite promising results. The same holds for Earth observation, especially for tasks such as crop mapping, where TL has already shown potential. Our focus in this research was on temporal transfer learning for a single agricultural region. We built an initial CNN-1D crop mapping model for Vojvodina province, Serbia, using SAR satellite imagery and ground truth (GT) data collected for 2017-2020, following a leave-one-year-out approach in which each year served once as the validation dataset. The top-performing model was further employed for transfer learning analysis, utilising a limited set of target-season ground truth data. The aim was to diminish reliance on labour-intensive and time-consuming large-scale ground truth collection, typically carried out through hands-on field inspections. Instead of collecting ground truth all over Vojvodina for the 2021 season, we focused on a limited area around a departure point. Three options were analysed, i.e., approximately 20, 25, and 30 km radii around the departure point, for which the province capital Novi Sad was taken. From a practical standpoint, labels of these parcels are easier to record than those more distant (distributed), so it seems reasonable to visit only these locations to reduce the costs of ground truth collection. Visited parcels that fell within these radii served for retraining the model, and the remaining parcels (those outside the 30 km radius) served for testing and accuracy assessment. For each parcel, 50 randomly selected pixels were used for the analysis. After 5 retraining cycles, the average F1 score of the transfer learning approach for the CNN-1D model with 20, 25 and 30 km buffer zones was 74%, 79% and 83%, respectively.
Training the same CNN-1D model from scratch reached 69%, 73% and 78%, respectively, i.e., an approximately 5% lower score on average. Inference with the pre-trained model as such (without adaptation) achieved an F1 score of 78%, which rendered TL unjustified for the 20 km radius case, while the other two buffer areas were justified as they achieved comparably better results. Also, the three buffer cases achieved between 3% and 9% lower F1 scores than their respective pairs in which the same number of retraining parcels were randomly distributed over the whole test area. This was likely related to the characteristics of the restricted sampling region (uniform soil type, management practice, weather conditions) and the distribution of classes in that region, which may not have properly represented the entire test area. In addition, the comparison of these two approaches showed that adding more samples for retraining scaled down the difference. The viability of the presented approach was confirmed within the experiment and, therefore, practitioners are encouraged to weigh the trade-off between practicality and accuracy in their future work.
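[Editor's note] The buffer-based split of retraining versus test parcels can be sketched as follows. The parcel coordinates are synthetic, and a planar distance in km stands in for whatever geodesic computation the authors used.

```python
import numpy as np

def within_radius(parcels_xy, centre_xy, radius_km):
    """Select parcels whose centroids fall inside a buffer around the
    departure point (planar approximation; coordinates in km)."""
    d = np.linalg.norm(parcels_xy - centre_xy, axis=1)
    return d <= radius_km

rng = np.random.default_rng(0)
parcels = rng.uniform(-80, 80, (1000, 2))  # synthetic parcel centroids
centre = np.array([0.0, 0.0])              # departure point (Novi Sad in the study)

retrain = within_radius(parcels, centre, 25.0)    # labelled near the departure point
held_out = ~within_radius(parcels, centre, 30.0)  # tested beyond the 30 km radius
print(int(retrain.sum()), int(held_out.sum()))
```

Parcels between 25 and 30 km fall in neither set, mirroring the study's design where only parcels outside the largest buffer are used for accuracy assessment.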

How to cite: Pandžić, M., Pavlović, D., Brdar, S., Kilibarda, M., and Marko, O.: Cracking Ground Truth Barriers: Harnessing the Power of Transfer Learning for Crop Mapping, EGU General Assembly 2024, Vienna, Austria, 14–19 Apr 2024, EGU24-3127, https://doi.org/10.5194/egusphere-egu24-3127, 2024.

X2.16
|
EGU24-18868
Zeyu Xu, Tiejun Wang, Andrew Skidmore, and Richard Lamprey

The point and the bounding box are the two most widely used annotation techniques for deep learning-based wild animal detection in remote sensing. However, the impact of these two annotation methods on deep learning performance is still unknown. Here, using the publicly available Aerial Elephant Dataset, we evaluate the effect of the two annotation methods on model accuracy for two commonly used neural networks (YOLO and U-Net). The results show that with YOLO there is no statistically significant difference between point- and bounding box-based annotation, with overall F1-scores of 82.7% and 82.8% (df = 4, P = 0.683, t-test), respectively. With U-Net, in contrast, the accuracy based on bounding boxes (overall F1-score of 82.7%) is significantly higher than that of the point-based annotation (overall F1-score of 80.0%; df = 4, P < 0.001, t-test). Our study demonstrates that the effectiveness of the two annotation methods depends on the choice of deep learning model. This suggests that the deep learning method should be taken into account when deciding on annotation techniques for animal detection in remote sensing images.
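The reported comparisons rest on paired t-tests with df = 4, i.e., five paired runs per configuration. A minimal sketch of such a test follows; the per-run F1 values used below are illustrative, not the study's raw scores:

```python
import math
from statistics import mean, stdev

def paired_t_test(a, b):
    """Two-sided paired t statistic on per-run F1 scores (df = n - 1).
    Returns (t, df); in practice the p-value would come from the t
    distribution with df degrees of freedom, e.g. scipy.stats.ttest_rel."""
    diffs = [x - y for x, y in zip(a, b)]
    n = len(diffs)
    t = mean(diffs) / (stdev(diffs) / math.sqrt(n))
    return t, n - 1

# Five hypothetical per-run F1 scores for two annotation methods.
f1_bbox  = [0.82, 0.83, 0.81, 0.84, 0.83]
f1_point = [0.80, 0.80, 0.79, 0.81, 0.80]
t, df = paired_t_test(f1_bbox, f1_point)  # df = 4, matching the abstract
```

A large |t| at df = 4 corresponds to a small p-value, i.e., a significant difference between the two annotation methods for that model.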

How to cite: Xu, Z., Wang, T., Skidmore, A., and Lamprey, R.: A comparison of point and bounding box annotation methods to detect wild animals using remote sensing and deep learning, EGU General Assembly 2024, Vienna, Austria, 14–19 Apr 2024, EGU24-18868, https://doi.org/10.5194/egusphere-egu24-18868, 2024.

X2.17
|
EGU24-5791
|
ECS
Maximilian Hell and Melanie Brandmeier

The region of Amazônia Legal in Brazil is in constant change due to deforestation, degradation of forest, and conversion into arable land for farming or cattle ranching. Monitoring these changes is important with respect to global climate change and to aid political decision-makers. The changes are best captured and analyzed through openly accessible satellite data, such as the products of ESA's Sentinel missions. Land use and land cover (LULC) classification of remotely sensed data is often performed with supervised learning algorithms that rely on precise labels to produce accurate results. However, such labels are often not available, and creating them at the required accuracy level through image interpretation is time-consuming. This can be alleviated by using existing LULC maps from other sources, such as the classification maps produced by the MapBiomas Brasil project used in our work. These maps are created from Landsat time series data with multiple machine and deep learning models, classifying the whole of Brazil into five macro classes and multiple micro classes. This data has its own bias and is not correct everywhere, or is even highly inaccurate, especially compared to data of higher spatial resolution, such as the aforementioned Sentinel data, which reveals more detail in the land coverage. It is therefore a critical step to investigate the noise in the label data. There are multiple approaches in the literature to learning with noisy labels; most rely on robust loss functions or learned models to identify the noise. We present a novel approach in which the satellite imagery is split pixel-wise according to the five given macro-class labels. For each class, a self-organizing map (SOM) is trained to cluster the data in the spectral domain and thus identify representative prototypes of each class.
Each class is represented by the same number of prototypes, which overcomes the problem of imbalanced classes. Neighborhood rules then check whether each label belongs to its given class; otherwise the label is marked as unsure or even switched to another class.
In our study, approximately 79.5% of the pixels keep their given class, while the rest are reassigned or discarded. To validate the approach, the results are compared to a manually created validation set and inspected visually for qualitative correctness. The MapBiomas LULC maps reach an overall accuracy of 62.6% in the created validation areas; after relabeling the data with the presented approach, the overall accuracy reaches 81.3%, a significant increase. The approach is independent of any specifically trained model and leverages only the relationship between the training data and the given label data, i.e., the Sentinel-2 imagery and the MapBiomas LULC map, respectively.
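The per-class prototype idea can be sketched minimally as follows; the tiny 1-D SOM, its hyperparameters, and the ratio-based keep/switch/unsure rule are illustrative stand-ins for the paper's actual SOM configuration and neighborhood rules:

```python
import random

def train_som(samples, n_prototypes, epochs=20, lr0=0.5, seed=0):
    """Minimal 1-D self-organizing map over spectral vectors: one SOM per
    class yields `n_prototypes` representative spectra for that class."""
    rng = random.Random(seed)
    protos = [list(rng.choice(samples)) for _ in range(n_prototypes)]
    for epoch in range(epochs):
        lr = lr0 * (1 - epoch / epochs)  # linearly decaying learning rate
        for x in samples:
            # Best-matching unit: closest prototype in spectral space.
            bmu = min(range(n_prototypes),
                      key=lambda i: sum((p - a) ** 2 for p, a in zip(protos[i], x)))
            # Pull the BMU (and, weakly, its 1-D grid neighbours) toward x.
            for i in range(n_prototypes):
                h = 1.0 if i == bmu else (0.5 if abs(i - bmu) == 1 else 0.0)
                protos[i] = [p + lr * h * (a - p) for p, a in zip(protos[i], x)]
    return protos

def relabel(pixel, protos_by_class, given_label, keep=1.5, switch=4.0):
    """Keep, switch, or discard a pixel's label by comparing its distance
    to the nearest prototype of each class (illustrative decision rule)."""
    dist = {c: min(sum((p - a) ** 2 for p, a in zip(proto, pixel)) for proto in ps)
            for c, ps in protos_by_class.items()}
    best = min(dist, key=dist.get)
    if best == given_label:
        return given_label
    ratio = dist[given_label] / max(dist[best], 1e-12)
    if ratio <= keep:
        return given_label   # competitive: keep the MapBiomas label
    if ratio >= switch:
        return best          # clearly wrong: switch the class
    return "unsure"          # ambiguous: discard from training
```

In the relabeling step, pixels returned as "unsure" would be dropped from the training set, mirroring the roughly 20.5% of pixels reassigned or discarded in the study.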

How to cite: Hell, M. and Brandmeier, M.: Detection of noise in supervised label data: a practical approach in the Amazonas region of Brazil using land use and land cover maps, EGU General Assembly 2024, Vienna, Austria, 14–19 Apr 2024, EGU24-5791, https://doi.org/10.5194/egusphere-egu24-5791, 2024.

X2.18
|
EGU24-2675
|
ECS
Decai Jin

Satellites must strike a delicate balance between temporal and spatial resolution, which makes achieving high resolution in both aspects challenging. Earth observations at sub-daily intervals are particularly difficult. Spatiotemporal fusion algorithms have emerged as a promising solution to this challenge. However, current spatiotemporal fusion methods still face a critical problem: accurately and efficiently predicting fine images in large-scale applications while ensuring robustness. To address this, the study proposes a multiscale Attention-Guided deep optimization network for Spatiotemporal Data Fusion (AGSDF). An optimization strategy is employed to directly predict the high-resolution image at multiple scales from the coarse-resolution image. Specifically, a variation attention module is proposed to focus on the edges and textures of abrupt land cover changes, and a spatiotemporal fusion kernel is developed to provide the essential spatial details for fusion. Furthermore, implementing spatiotemporal fusion at multiple scales improves the reliability of the prediction. The performance and robustness of AGSDF were evaluated against nine methods at six sites worldwide. The experimental results indicate that AGSDF achieves better overall performance in quantitative accuracy assessment, transfer robustness, predictive stability, and efficiency. Consequently, AGSDF holds high potential for producing accurate remote sensing products with high temporal and spatial resolution across extensive regions.
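For context, the task that spatiotemporal fusion addresses can be stated as a naive additive baseline (classical STARFM-style reasoning, not AGSDF itself): predict the fine image at time t2 from the fine image at t1 plus the upsampled coarse-resolution temporal change. AGSDF replaces this simplistic assumption with learned multiscale attention.

```python
def naive_fusion(fine_t1, coarse_t1, coarse_t2, scale):
    """Baseline spatiotemporal fusion: fine@t2 ~= fine@t1 + upsampled
    (coarse@t2 - coarse@t1). Images are 2-D lists; `scale` is the ratio
    of fine to coarse pixel size (nearest-neighbour upsampling)."""
    h, w = len(fine_t1), len(fine_t1[0])
    pred = [[0.0] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            ci, cj = i // scale, j // scale  # coarse pixel covering (i, j)
            pred[i][j] = fine_t1[i][j] + (coarse_t2[ci][cj] - coarse_t1[ci][cj])
    return pred
```

This baseline fails exactly where the abstract says the attention modules help: at edges and textures of abrupt land cover change, where the temporal change is not uniform within a coarse pixel.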

How to cite: Jin, D.: AGSDF: A Multiscale Attention Guided Deep Optimization Network for Spatiotemporal Fusion, EGU General Assembly 2024, Vienna, Austria, 14–19 Apr 2024, EGU24-2675, https://doi.org/10.5194/egusphere-egu24-2675, 2024.

X2.19
|
EGU24-7821
Jewgenij Torizin, Nick Schüßler, Michael Fuchs, Dirk Kuhn, Dirk Balzer, Kai Hahne, Steffen Prüfer, Claudia Gunkel, Karsten Schütze, and Lars Tiepolt

Coastal areas are dynamic zones where geological, marine, and atmospheric processes interact. Coastal shapes constantly evolve due to both natural factors and human activity. Gravitational mass movements, commonly called landslides, are prominent indicators of coastal dynamics. With current climate projections indicating increasing stormy weather and extreme water levels, coastal communities face an escalating hazard of more frequent and severe landslides on steep coastlines. Mecklenburg-Western Pomerania has a cliff coast of approximately 140 km, most of which is assessed to be actively receding.

The project, titled “AI-aided Assessment of Mass Movement Potentials Along the Coast of Mecklenburg-Western Pomerania,” focuses on developing advanced methods for quantitatively evaluating the hazard potential of mass movements in these ever-changing environments. This approach should enhance the efficiency and effectiveness of hazard assessment routines. The project covers five small study areas exhibiting different cliff types composed of chalk, glacial till sediments, and sand.

As the coast retreats, the exposure of the complex geological conditions may change. One of the most significant challenges is therefore the accurate mapping of the current geological conditions that control, among other factors, the occurrence of landslides. In some parts, the average coastal retreat is about 0.5 m annually. At the same time, detailed geological mappings conducted years or even decades ago no longer adequately represent the current geological conditions that could be fed into landslide susceptibility models, since some mapped features no longer exist.

Because traditional detailed field mapping by experts is time-consuming and costly, we seek options to enhance the mapping by employing uncrewed aerial vehicles (UAVs) equipped with multispectral sensors. Through repetitive surveying missions, these UAVs gather detailed data that enable precise change detection in photogrammetric point clouds. This data is essential for accurate calculation of coastal retreat, mass balancing, and structural analysis. AI algorithms interpret the UAV imagery, performing semantic segmentation to classify the surface into meaningful categories for further modeling. Given the need for extensive labeled datasets to train AI algorithms, we also explore data augmentation strategies. These strategies aim to generate extensive artificial datasets based on real-world data, which are crucial for effectively training the desired models.

Overall, we aim to design a workflow that streamlines the analysis steps, starting with UAV flight campaigns and classical photogrammetric processing paired with AI components to derive geological information. The derived parameters provide input to data-driven landslide susceptibility models. Furthermore, the generated spatio-temporal time series will be used for pre-failure pattern analysis with advanced AI for the long-term outlook.

How to cite: Torizin, J., Schüßler, N., Fuchs, M., Kuhn, D., Balzer, D., Hahne, K., Prüfer, S., Gunkel, C., Schütze, K., and Tiepolt, L.: AI-aided Assessment of Mass Movement Potentials Along the Coast of Mecklenburg-Western Pomerania – Project Introduction and Outlook, EGU General Assembly 2024, Vienna, Austria, 14–19 Apr 2024, EGU24-7821, https://doi.org/10.5194/egusphere-egu24-7821, 2024.

X2.20
|
EGU24-14941
Chenchao Xiao, Hongzhao Tang, and Kun Shang

China's first hyperspectral operational satellite constellation, launched in 2023, has significantly enhanced comprehensive Earth observation capabilities by integrating extensive quantitative data from space and ground sources. The constellation comprises the GF5-02, GF5-01A, ZY1-02D and ZY1-02E satellites, which operate in sun-synchronous orbits and constitute a medium-resolution Earth observation system. Each satellite is equipped with visible and near-infrared as well as hyperspectral imagers, enabling wide-swath observations and the acquisition of intricate spectral data. Notably, ZY1-02E additionally carries a thermal infrared camera, broadening its detection scope. The satellite team, collaborating with specialists across various fields, conducted 32 operational tests in areas such as land resources, geology, mapping, and marine monitoring, adhering to standards for natural resources survey and monitoring. After a year of operation, the constellation has shown robust functionality, stability, and data quality, meeting the requirements of diverse applications such as resource enforcement, geological surveys, ecological restoration, geospatial updates, coastal surveillance, and industrial capacity reduction. The success of quantitative application tests of the hyperspectral and thermal infrared payloads demonstrates the constellation's potential to provide critical insights for global users in the hyperspectral domain.

How to cite: Xiao, C., Tang, H., and Shang, K.: Advancing Earth Monitoring: China's hyperspectral Operational Satellite Constellation, EGU General Assembly 2024, Vienna, Austria, 14–19 Apr 2024, EGU24-14941, https://doi.org/10.5194/egusphere-egu24-14941, 2024.

Posters virtual: Fri, 19 Apr, 14:00–15:45 | vHall X2

Display time: Fri, 19 Apr, 08:30–Fri, 19 Apr, 18:00
vX2.1
|
EGU24-10351
Graham Wilkes, Aliyan Haq, and Anastasiia Khokhriakova

ISO's Technical Committee 211's Working Group 6 (WG6) standardizes geographic information, focusing on imagery, gridded data, and coverage data, along with their associated metadata. With an emphasis on remote sensing and Earth observation, WG6 provides standards for geopositioning, calibration, and validation. These combined efforts are foundational in creating structured, multidimensional data for use in data cubes and other gridded data endpoints. Upstream structured grid data is foundational, providing consistency for downstream AI analytics. WG6's standards foster interoperability across diverse systems, enabling machines to process and interpret data over spatial, temporal, and spectral dimensions. Such work is critical in advancing open standards for interoperable, multi-dimensional, analysis-ready data for future geospatial and Earth observation data analysis. We will present some of the fundamental standards that exist or are in development to support multi-dimensional analysis-ready data.

How to cite: Wilkes, G., Haq, A., and Khokhriakova, A.: Advancing Geospatial and Earth Observation Data Analysis: The Role of ISO's Technical Committee 211 in Standardizing Imagery and Gridded Data, EGU General Assembly 2024, Vienna, Austria, 14–19 Apr 2024, EGU24-10351, https://doi.org/10.5194/egusphere-egu24-10351, 2024.