ESSI1.7 | Novel methods and applications of satellite and aerial imagery
Convener: Ionut Cosmin Sandric | Co-conveners: George P. Petropoulos, Marina Vîrghileanu, Dionissios Hristopulos
Orals | Thu, 27 Apr, 10:45–12:30 (CEST) | Room 0.51
Posters on site | Attendance Fri, 28 Apr, 10:45–12:30 (CEST) | Hall X4
Posters virtual | Attendance Fri, 28 Apr, 10:45–12:30 (CEST) | vHall ESSI/GI/NP
Understanding the Earth system’s natural processes, especially in the context of global climate change, has been recognized globally as an urgent and central research direction that needs further exploration. With the launch of new satellite platforms with high revisit times, combined with the increasing capability to collect repetitive ultra-high-resolution aerial images using unmanned aerial vehicles, the scientific community has new opportunities for developing and applying new image processing algorithms to solve both old and new environmental issues.

The purpose of the session is to bring together researchers working on this topic and to highlight ongoing research and new applications in the field of satellite and aerial time-series imagery. The session focuses on studies aimed at developing or exploiting novel satellite time-series processing algorithms and applying them to different types of remote sensing data for investigating long-term processes in all branches of the Earth system (sea, ice, land, atmosphere).

The conveners encourage both applied and theoretical research contributions focusing on novel methods and applications of satellite and aerial time-series imagery across all geoscience disciplines, covering both aerial and satellite platforms (optical and SAR) and data acquired in all regions of the electromagnetic spectrum.

Orals: Thu, 27 Apr | Room 0.51

Chairpersons: Ionut Cosmin Sandric, Lorraine Tighe
10:45–10:50
10:50–11:00 | EGU23-3434 | ESSI1.7 | ECS | On-site presentation
Johannes Balling, Martin Herold, and Johannes Reiche

Cloud-penetrating Synthetic Aperture Radar (SAR) imagery has proven effective for tropical forest monitoring at national and pan-tropical scales. Current SAR-based disturbance detection methods rely on identifying decreased post-disturbance backscatter values as an indicator of forest disturbances. However, these methods suffer from a major shortcoming, as they show omission errors and delayed detections for some disturbance types (e.g., logging or fires). In these cases, post-disturbance debris or tree remnants result in stable SAR backscatter values similar to those of stable forest. Despite fairly stable backscatter values, we hypothesize that the different orientation and arrangement of tree remnants lead to an increased heterogeneity of adjacent disturbed pixels. Increased heterogeneity can be quantified by textural features. We assessed six Gray-Level Co-Occurrence Matrix (GLCM) textural features utilizing Sentinel-1 C-band SAR time series. We used a pixel-based probabilistic change detection algorithm to detect forest disturbances based on each GLCM feature and compared them against forest disturbances detected using only backscatter data. We further developed a method to combine backscatter and GLCM features to detect forest disturbances. GLCM Sum Average (SAVG) performed best among the tested GLCM features. By applying the combination of backscatter and GLCM SAVG, omission errors were reduced by up to 36% and the timeliness of detections was improved by up to 30 days. Test sites characterized by large unfragmented disturbance patches (e.g., large-scale clearings, fires and mining) showed the greatest spatial and temporal improvement. A GLCM kernel size of 5 led to the best trade-off between improving the timeliness of detections and reducing omission errors without introducing commission errors. The robustness of the developed method was verified for a variety of natural and human-induced forest disturbance types in the Amazon Biome. Our results show that combining SAR-based textural features and backscatter can overcome omission errors caused by post-disturbance tree remnants. Combining textural features and backscatter can support law enforcement activities by improving the spatial and temporal accuracy of operational SAR-based disturbance monitoring and alerting systems.
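
A minimal sketch of how a GLCM Sum Average value can be computed for a SAR backscatter window with scikit-image (not the authors' implementation; window size, quantization levels and offsets are assumed values):

```python
# Sketch: GLCM Sum Average (SAVG) for a quantized SAR backscatter patch.
import numpy as np
from skimage.feature import graycomatrix

def glcm_sum_average(patch, levels=32):
    """SAVG = sum over (i, j) of (i + j) * p(i, j), with zero-based gray levels."""
    # Quantize backscatter (e.g., gamma0 in dB) into integer gray levels.
    bins = np.linspace(patch.min(), patch.max(), levels)
    q = (np.digitize(patch, bins) - 1).astype(np.uint8)
    glcm = graycomatrix(q, distances=[1], angles=[0, np.pi / 2],
                        levels=levels, symmetric=True, normed=True)
    p = glcm.mean(axis=(2, 3))                 # average over distances and angles
    i, j = np.indices(p.shape)
    return float(np.sum((i + j) * p))

# Example: a 5 x 5 kernel around pixel (r, c) of a Sentinel-1 backscatter array
# savg = glcm_sum_average(vv_db[r - 2:r + 3, c - 2:c + 3])
```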

How to cite: Balling, J., Herold, M., and Reiche, J.: The benefit of textural features for SAR-based tropical forest disturbance mapping, EGU General Assembly 2023, Vienna, Austria, 23–28 Apr 2023, EGU23-3434, https://doi.org/10.5194/egusphere-egu23-3434, 2023.

11:00–11:10 | EGU23-8825 | ESSI1.7 | ECS | On-site presentation
Rodrigo Pueblas, Jann Weinand, Patrick Kuckertz, and Detlef Stolten

Photovoltaic (PV) and wind are currently the fastest-growing renewable energy sources according to the annual World Energy Outlook 2022. However, in the case of solar PV in Europe, this growth is mainly driven by utility-scale installations. Distributed residential generation has many benefits, such as relieving the electrical grid and increasing self-sufficiency. One key challenge is to accurately estimate the rooftop PV potential of different regions, in order to best allocate economic resources and regulate accordingly. Multiple approaches have been proposed in the past, such as inferring potential from proxy variables like population density, automatically analyzing residential 3D point clouds, or automatically analyzing satellite images. The latter has gained popularity in recent years given the increased availability of satellite imagery and the improvement of computer vision methods. However, in research, the analysis of satellite imagery is impeded by the lack of transparency, reproducibility, and standardization of methods. Studies are heterogeneous, target different types of potential with redundant efforts, and are mostly either not open source or trained on private datasets. This makes it challenging for users of various backgrounds to find and use the existing approaches.

For these reasons, this paper proposes a conceptual framework that describes and categorizes the tasks that need to be considered when estimating PV potential, thus creating a clear framework along which the contents of this research report can be classified. Additionally, the open-source workflow PASSION is introduced, which integrates the assessment of the geographical, technical and economic potentials of the regions under consideration along with the calculation of surface areas, orientations and slopes of individual rooftop sections. It also includes the detection of obstacles and existing PV installations. It is based on a novel two-look approach, in which three independent models are deployed in parallel for the identification of rooftops, sections and superstructures. The three models show a mean Intersection over Union (IoU) between classes of 0.847, 0.753 and 0.462, respectively, and, more importantly, show consistent results on non-selected real-life samples.
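
A minimal sketch of the mean Intersection over Union (IoU) metric reported above, for integer class maps (a generic formulation, not code from the PASSION workflow):

```python
# Sketch: per-class IoU and mean IoU for a semantic segmentation result.
import numpy as np

def mean_iou(pred, target, n_classes):
    """pred, target: integer class maps of identical shape."""
    ious = []
    for c in range(n_classes):
        inter = np.logical_and(pred == c, target == c).sum()
        union = np.logical_or(pred == c, target == c).sum()
        if union > 0:                       # ignore classes absent from both maps
            ious.append(inter / union)
    return float(np.mean(ious))
```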

How to cite: Pueblas, R., Weinand, J., Kuckertz, P., and Stolten, D.: PASSION: a workflow for the estimation of rooftop photovoltaic potential from satellite imagery., EGU General Assembly 2023, Vienna, Austria, 23–28 Apr 2023, EGU23-8825, https://doi.org/10.5194/egusphere-egu23-8825, 2023.

11:10–11:20 | EGU23-13610 | ESSI1.7 | ECS | On-site presentation
Vibolroth Sambath, Nicolas Viltard, Laurent Barthès, Audrey Martini, and Cécile Mallet

Due to climate change, understanding changes in the water cycle has become a pressing issue. It is increasingly important to study prolonged periods of intense precipitation or dry spells in order to better manage water supply, infrastructure and agriculture. However, obtaining fine-scale precipitation data is challenging due to the intermittent nature of rain in time and space. Ground-based instruments can show mismatches between different regions due to spatial distribution, calibration, and complex topography. On the other hand, space-borne observations have uncertainties in their retrieval algorithms. This study proposes to work directly with microwave images from space remote sensing, as this type of data makes it possible to study the evolution of the atmospheric water cycle on a global scale, with a temporal coverage of several decades, while avoiding the uncertainties of retrieval methods. In recent years, convolutional neural networks have shown promising capabilities in identifying cyclones and weather fronts in large labelled climate datasets. However, these models require large labelled datasets for training and testing. The present study aims to test unsupervised approaches for segmenting microwave images into different classes. Instead of focusing on only one aspect, for example precipitation, the obtained classes capture many physical properties. This is because microwave brightness temperatures contain essential information relative to the atmospheric water cycle that can be used to derive many products such as rain intensity, water vapour, cloud fraction, and sea surface temperature. The unsupervised segmentation model consists of blocks of fully convolutional networks serving as feature extractors. Without labels, pseudo-targets from the feature extractors are used to train the model. The performance of the model in terms of intra-class and inter-class distances is compared with that of simpler models such as K-means. A major challenge in the unsupervised approach is validating and interpreting the resulting classes. Most of the obtained cluster patterns form geographically coherent regions whose modes of variability of geophysical quantities can be highlighted. The presented study will then explore how the different classes computed by the unsupervised methods can be labelled and how the properties of these classes change through time and space.
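
A minimal sketch of the kind of K-means baseline mentioned above, clustering multi-channel brightness temperatures pixel by pixel (not the authors' pipeline; the number of classes and the input layout are assumptions):

```python
# Sketch: K-means baseline segmentation of microwave brightness temperatures.
import numpy as np
from sklearn.cluster import KMeans

def kmeans_segmentation(tb, k=8):
    """tb: array of shape (height, width, channels) of brightness temperatures."""
    h, w, c = tb.shape
    x = tb.reshape(-1, c)                                    # one sample per pixel
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(x)
    return labels.reshape(h, w)                              # class map
```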

How to cite: Sambath, V., Viltard, N., Barthès, L., Martini, A., and Mallet, C.: Unsupervised Segmentation Of Microwave Brightness Temperatures To Study The Changes In The Water Cycle, EGU General Assembly 2023, Vienna, Austria, 23–28 Apr 2023, EGU23-13610, https://doi.org/10.5194/egusphere-egu23-13610, 2023.

11:20–11:30 | EGU23-14375 | ESSI1.7 | ECS | On-site presentation
Ritu Yadav, Andrea Nascetti, Hossein Azizpour, and Yifang Ban

Flooding is a natural disaster that has been increasing in recent years due to climate and land-use changes. Earth observations, such as Synthetic Aperture Radar (SAR) data, are valuable for assessing and mitigating the negative impacts of flooding. Cloud cover is highly correlated with flooding events, making SAR a preferable choice over optical data for flood mapping and monitoring.

Traditional methods for flood mapping and monitoring using SAR data, such as Otsu thresholding and change vector analysis (CVA), can be affected by noise, false detections due to shadows and occlusions, and geometric distortions. While automatic thresholding can be effective with these methods, manual adjustment of the threshold is often required to produce an accurate change map.

Supervised deep learning methods using large amounts of labeled data could potentially improve the accuracy of flood mapping and monitoring. Earth observation data are abundant, but labeled data are limited, and labeling is time-consuming and requires domain expertise. Moreover, supervised models trained on small datasets suffer from severe generalizability issues when inference is performed on a new site.

To address these challenges, we propose a novel self-supervised method for mapping and monitoring floods with Sentinel-1 SAR time-series data. We propose a probabilistic model trained on unlabeled data using self-supervised techniques, such as reconstruction and contrastive learning. The model is trained to learn the spatiotemporal features of the area. It monitors change by comparing the latent feature distributions at each time stamp and generates change maps that reflect the changes in the area.
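
A simplified sketch of the general idea of comparing per-pixel latent features between two acquisitions to obtain a change map (an illustration, not the authors' model; the encoder, feature layout and threshold are assumptions):

```python
# Sketch: change map from cosine distance between pre- and post-event features.
import numpy as np

def change_map(feat_t0, feat_t1, threshold=0.5):
    """feat_*: arrays of shape (height, width, dim) of latent features."""
    num = (feat_t0 * feat_t1).sum(axis=-1)
    den = np.linalg.norm(feat_t0, axis=-1) * np.linalg.norm(feat_t1, axis=-1)
    cos_sim = num / np.clip(den, 1e-8, None)
    return (1.0 - cos_sim) > threshold       # True where features have diverged
```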

We also propose a framework for flood monitoring that continuously monitors the area using time-series data. This framework automatically detects the change point, i.e. the time at which a major change first becomes visible in the available SAR data. Combined with a temporal resolution better than that of Sentinel-1, our continuous monitoring framework can potentially detect flood events at an early stage, allowing more time for evacuation planning.

The model is evaluated on nine recent flood events from the ‘Mekong’, ‘Somalia’, ‘Scotland’, ‘Australia’, ‘Bosnia’, ‘Germany’, ‘Spain’, ‘Bolivia’, and ‘Slovakia’ sites. We compared our results with traditional methods and with existing supervised and unsupervised methods. Our detailed evaluation indicates that our model is more accurate and generalizes better to new sites. The model achieves an average Intersection over Union (IoU) value of 70% and an F1 score of 81.14%, both higher than the scores of the previous best-performing method. Overall, our proposed model’s improvements range from 7–26% in terms of F1 score and 8–31% in terms of IoU.

How to cite: Yadav, R., Nascetti, A., Azizpour, H., and Ban, Y.: Self-Supervised Contrastive Model for Flood Mapping and Monitoring on SAR Time-Series, EGU General Assembly 2023, Vienna, Austria, 23–28 Apr 2023, EGU23-14375, https://doi.org/10.5194/egusphere-egu23-14375, 2023.

11:30–11:40 | EGU23-17425 | ESSI1.7 | On-site presentation
Huiran Jin and Xiaonan Tai

Riparian ecosystems are biodiversity hotspots and provide crucial services to human wellbeing. Currently, knowledge of how riparian ecosystems respond to, and in turn influence, environmental variations remains considerably limited. As a first step toward filling this gap, this research aims to characterize the dynamics of riparian vegetation during the past several decades across multiple aquatic sites operated by the National Ecological Observatory Network (NEON) of the US. Specifically, it leverages high-resolution hyperspectral and lidar data collected by NEON’s Airborne Observation Platform (AOP) surveys, the long-term records of satellite optical and radar imagery, and advanced data fusion and classification techniques to generate a time-series record of riparian vegetation on a seasonal-to-yearly basis. The derived maps will provide a new basis for understanding how riparian vegetation has changed across the continental US, and for predicting how it is likely to change in the future. This work is sponsored by NSF’s Macrosystems Biology and NEON-Enabled Science (MSB-NES) Program (2021/9–2024/8), and the overarching goal of the project is to mechanistically link riparian vegetation dynamics to hydroclimate variations and assess the functional importance of riparian ecosystems to macrosystem fluxes of carbon and water.

How to cite: Jin, H. and Tai, X.: Spatiotemporal mapping of riparian vegetation through multi-sensor data fusion and deep learning techniques, EGU General Assembly 2023, Vienna, Austria, 23–28 Apr 2023, EGU23-17425, https://doi.org/10.5194/egusphere-egu23-17425, 2023.

11:40–11:50 | EGU23-8451 | ESSI1.7 | On-site presentation
Emma Izquierdo-Verdiguier, Álvaro Moreno-Martínez, Jordi Muñoz-Mari, Nicolas Clinton, Francesco Vuolo, Clement Atzberger, and Gustau Camps-Valls

The presence of clouds and aerosols in satellite imagery hampers its use to monitor, observe and analyze the Earth's surface. Multisensor fusion can alleviate this problem. The HISTARFM algorithm developed by Moreno-Martinez et al. (2020) can generate monthly gap-filled reflectance data at 30 m spatial resolution by blending Landsat (30 m pixel size every 16 days) and MODIS (500 m pixel size, daily) data using a bias-aware Kalman filter.

Cloud computing platforms such as Google Earth Engine (GEE) help us to efficiently process public data archives from different remote sensing data sources. GEE therefore allows us to adapt the HISTARFM algorithm to obtain gap-filled data at higher spatial resolution. To reduce the massive number of images involved in the process, the bias-aware Kalman filter blends the available, preprocessed HISTARFM monthly gap-filled reflectance (30 m pixel size, monthly) and Sentinel-2 (10 m pixel size, every five days) data. The resulting very high resolution gap-filled images provide reflectance information at scales suitable for deriving new products that improve decision-making in variable territories with complex topographies. In addition, new derived products (e.g. land cover maps, biophysical parameters, or phenological indicators) will give the scientific community a better understanding and monitoring of the bio-geographical and ecoclimatic characteristics of the Earth.

Additionally, the temporal sampling of the series can be refined with this approach by linear interpolation, producing gap-filled Sentinel-2 reflectance at five-day intervals. The proposed approach shows promising preliminary results and provides gap-free Sentinel-2 reflectance images with their associated uncertainties. These results foster the development of improved near-real-time applications for crop and natural vegetation monitoring at continental scales.
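
A minimal, generic sketch of the scalar Kalman update used in this kind of bias-aware blending of two reflectance estimates (not the HISTARFM implementation; the variances in the example are assumptions):

```python
# Sketch: fuse a prior reflectance estimate with a new observation.
def kalman_update(prior, prior_var, obs, obs_var):
    gain = prior_var / (prior_var + obs_var)   # Kalman gain
    post = prior + gain * (obs - prior)        # blended reflectance
    post_var = (1.0 - gain) * prior_var        # updated uncertainty
    return post, post_var

# e.g. blend a monthly HISTARFM estimate (prior) with a Sentinel-2 observation
post, post_var = kalman_update(0.23, 0.0004, 0.25, 0.0009)
```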

How to cite: Izquierdo-Verdiguier, E., Moreno-Martínez, Á., Muñoz-Mari, J., Clinton, N., Vuolo, F., Atzberger, C., and Camps-Valls, G.: Enhanced and gap-free Sentinel-2 reflectance data at vast scales with GEE, EGU General Assembly 2023, Vienna, Austria, 23–28 Apr 2023, EGU23-8451, https://doi.org/10.5194/egusphere-egu23-8451, 2023.

11:50–12:00 | EGU23-15020 | ESSI1.7 | ECS | On-site presentation
Maximilian Hell, Melanie Brandmeier, and Andreas Nüchter

Mapping land use and land cover (LULC) changes over time requires automated processes and has been investigated using various machine learning algorithms and, more recently, deep learning models for semantic classification. New applications of these models to different satellite data and areas are regularly published. However, studies on the transfer of these models to other data and study areas are rather scarce. In a previous study [1], we used multi-modal and multi-temporal Sentinel data for LULC classification with traditional and novel deep learning models. The data covered parts of the Amazon basin and comprised a twelve-month time series of radar imagery (Sentinel-1) combined with a single multi-spectral image (Sentinel-2). All satellite images were captured throughout the year 2018. The label map (Collection 4) of the Amazon produced by the MapBiomas project [2] was used for training and test labels. Besides state-of-the-art models, we developed five variations of a deep learning model, DeepForest, which leverages the multi-temporal and multi-modal aspects of the data. The best model variation (DF1c) reached an overall accuracy of 74.4% on the test data.

Currently, we are investigating the transferability of these models to more recent data of the same region. The new dataset was processed in the same way as in the previous study. It comprises a Sentinel-1 time series and a single Sentinel-2 image from 2020, with an updated version of the MapBiomas label map (Collection 6). This posed some challenges, as the classification scheme changed and is not fully backwards compatible with the one used to train the DeepForest models. A test dataset was chosen in the state of Mato Grosso, as the satellite scenes cover most classes used in the classification scheme. However, this data exhibits some class imbalance, as two of the eleven classes dominate the scene. All five DeepForest variations reached accuracies higher than 79% and thus generalize well on the major LULC classes. For comparison, and to further improve our models, we are currently retraining the models on the new, larger dataset (114,376 training image tiles compared to 18,074). Preliminary results will be shown during the session.
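
A generic transfer-learning sketch of the kind of adaptation described above, reusing a pretrained backbone and fitting a new classification head to the updated label scheme (not the DeepForest code; the `classifier` attribute and class count are assumptions):

```python
# Sketch: freeze a pretrained backbone and retrain only a new classifier head.
import torch
import torch.nn as nn

def prepare_for_finetuning(model, n_new_classes):
    for p in model.parameters():                 # freeze pretrained weights
        p.requires_grad = False
    in_feats = model.classifier.in_features      # assumes a linear 'classifier' head
    model.classifier = nn.Linear(in_feats, n_new_classes)
    return model

# the optimizer then updates only the new head:
# opt = torch.optim.Adam(model.classifier.parameters(), lr=1e-3)
```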

 

References

  • [1] Cherif, E.; Hell, M.; Brandmeier, M. DeepForest: Novel Deep Learning Models for Land Use and Land Cover Classification Using Multi-Temporal and -Modal Sentinel Data of the Amazon Basin. Remote Sensing 2022, 14, 5000, doi:10.3390/rs14195000.
  • [2] MapBiomas Brasil. Available online: https://mapbiomas.org/en

How to cite: Hell, M., Brandmeier, M., and Nüchter, A.: Transfer Learning for LULC Classification on multi-modal data in the Amazon Basin, EGU General Assembly 2023, Vienna, Austria, 23–28 Apr 2023, EGU23-15020, https://doi.org/10.5194/egusphere-egu23-15020, 2023.

12:00–12:10 | EGU23-8916 | ESSI1.7 | ECS | On-site presentation
Wanting Yang, Daniel Ortiz Gonzalo, Xiaoye Tong, Dimitri Pierre Johannes Gominski, Martin Brandt, Ankit Kariryaa, Florian Reiner, and Rasmus Fensholt

Distinguishing trees on agricultural land from forests is essential for a better understanding of the relationship between forests and human farming activities. However, it is difficult to separate them with remote sensing imagery since they share similar canopy cover, especially at the edge of the Amazon rainforest, where agricultural patterns are highly complex. Besides annual crops and pasture, there is also widespread agroforestry and shifting cultivation, both of which integrate many tree systems. These tree systems are not well separated from forest in existing land cover maps. Recent techniques allow for the mapping of single trees outside of forests; here we take the next step by identifying these diverse tree-involved systems on agricultural land. We aim to develop a robust, cost-efficient method to distinguish trees within agricultural land from forest. We started our exploration in the Peruvian Amazon, where competition for land has increased in recent decades, with possible adverse effects on livelihoods and ecosystem services. Deep learning models, data sampling, and fine-tuning strategies are tested and optimized with PlanetScope satellite imagery. Our target is to provide a tool for separating tree systems in farmland from forest. It can also serve as a base map to explore the dynamics of agricultural transition and its impact on livelihoods and ecosystem services.

How to cite: Yang, W., Ortiz Gonzalo, D., Tong, X., Pierre Johannes Gominski, D., Brandt, M., Kariryaa, A., Reiner, F., and Fensholt, R.: Separating tree systems in agricultural lands from forests using Deep learning, EGU General Assembly 2023, Vienna, Austria, 23–28 Apr 2023, EGU23-8916, https://doi.org/10.5194/egusphere-egu23-8916, 2023.

12:10–12:20 | EGU23-5175 | ESSI1.7 | ECS | Virtual presentation
Spyridon E. Detsikas, George P. Petropoulos, Nikos Koutsias, Dionisios Gasparatos, Vasilis Pisinaras, Heye Bogena, Frank Wendland, Frank Herrmann, and Andreas Panagopoulos

Obtaining Soil Moisture Content (SMC) over large scales is of key importance in several environmental and agricultural applications, especially in the context of climate change and the transition to digital farming. Remote sensing (RS) has a demonstrated capability in retrieving SMC over large areas, with several operational products already available at different spatiotemporal resolutions. At the same time, cosmic-ray neutron sensing is a recently emerged approach for retrieving high temporal resolution SMC at intermediate spatial scales. The present study conducts an intercomparison between different RS-based soil moisture products, daily SMC retrievals from a cosmic-ray neutron sensor (CRNS) station, and a network of in situ SoilNet wireless sensors installed at the Pinios Hydrologic Observatory ILTER site in central Greece for the period 2018–2019. The RS-based soil moisture products included herein are from NASA’s Soil Moisture Active Passive (SMAP) and the Metop-A/B Advanced Scatterometer (ASCAT) satellite missions. The methodological workflow adopted includes standardized validation procedures employing a series of statistical measures to quantify the agreement between the different RS-based soil moisture products, the CRNS-based SMC and the SoilNet ground truth data. Our study results contribute towards global efforts aiming at exploiting CRNS data in the context of soil moisture retrievals and their potential synergies with RS-based products. Furthermore, our findings provide valuable insights into assessing the capability of CRNS to retrieve more accurate SMC estimates in arid and semi-arid environments such as those found in the Mediterranean basin, while also supporting ongoing global validation efforts.
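
A minimal sketch of the kind of standardized agreement statistics such intercomparisons typically report (not the study's validation code; the metric selection here is an assumption):

```python
# Sketch: bias, RMSE, unbiased RMSE and correlation between collocated series.
import numpy as np

def agreement_stats(sat, ref):
    """sat, ref: 1-D arrays of collocated soil moisture (m3/m3)."""
    diff = sat - ref
    bias = diff.mean()
    rmse = np.sqrt((diff ** 2).mean())
    ubrmse = np.sqrt(rmse ** 2 - bias ** 2)        # unbiased RMSE
    r = np.corrcoef(sat, ref)[0, 1]                # Pearson correlation
    return {"bias": bias, "RMSE": rmse, "ubRMSE": ubrmse, "R": r}
```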

Keywords: Cosmic Ray Neutron Sensors; SMAP; ASCAT; SoilNet; Soil Moisture Content

How to cite: Detsikas, S. E., Petropoulos, G. P., Koutsias, N., Gasparatos, D., Pisinaras, V., Bogena, H., Wendland, F., Herrmann, F., and Panagopoulos, A.: A comparative study of SMAP and ASCAT satellite soil moisture products with cosmic-ray neutron sensing and in-situ data in a Mediterranean setting, EGU General Assembly 2023, Vienna, Austria, 23–28 Apr 2023, EGU23-5175, https://doi.org/10.5194/egusphere-egu23-5175, 2023.

12:20–12:30 | EGU23-15875 | ESSI1.7 | ECS | On-site presentation
Marine Floating Plastic Detection using Open Source Earth Observation resources
(withdrawn)
Srikanta Sannigrahi and Francesco Pilla

Posters on site: Fri, 28 Apr, 10:45–12:30 | Hall X4

Chairpersons: Marina Vîrghileanu, Ionut Cosmin Sandric, George P. Petropoulos
X4.152 | EGU23-6800 | ESSI1.7 | ECS
Jinfeng Xu, Xiaoyi Wang, Guanting Lv, and Tao Wang

The upper range limit of trees is the most conspicuous boundary on Earth. However, publicly available forest extent and forest cover datasets systematically underestimate sparse tree cover, which hinders our understanding of the tree-limit distribution and its drivers over cold and arid regions. Here, we built a three-step upscaling strategy that integrates in situ measured vegetation types with spaceborne Light Detection and Ranging (LiDAR), microwave, and Landsat images in a Convolutional Neural Network (CNN) classification algorithm, to develop a new map of the upper range limit of trees over the Three-River-Source National Park circa 2020 at 30 m resolution. The new multi-satellite product incorporates vertical structure information, which allows sparse trees to be better detected and shrub, grass, and forest to be better distinguished. Validation shows that our result is highly consistent with manual interpretations from Google Earth high-resolution images (R2 = 0.97, slope = 0.99, ME = 18 m). Our proposed method provides a fast and effective tree-limit mapping solution at the global scale.
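
A minimal sketch of the validation metrics quoted above (R2, regression slope and mean error) computed against manual interpretations (not the authors' code; array names are placeholders):

```python
# Sketch: R^2, slope and mean error of mapped tree limits vs. reference values.
import numpy as np
from scipy.stats import linregress

def validate(mapped, reference):
    """mapped, reference: 1-D arrays of tree-limit elevations (m)."""
    fit = linregress(reference, mapped)
    return {
        "R2": fit.rvalue ** 2,
        "slope": fit.slope,
        "ME": float(np.mean(mapped - reference)),   # mean error in metres
    }
```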

How to cite: Xu, J., Wang, X., Lv, G., and Wang, T.: High-resolution map of the upper range limit of trees over the cold and arid region, a case study in the Three-River-Source National Park, Tibetan Plateau, EGU General Assembly 2023, Vienna, Austria, 23–28 Apr 2023, EGU23-6800, https://doi.org/10.5194/egusphere-egu23-6800, 2023.

X4.153 | EGU23-11873 | ESSI1.7 | ECS
PM10 concentration data analysis monitoring in Volos during the period 2001-2018, Greece
(withdrawn)
Georgios Proias, Kostantinos Moustris, and Panagiotis Nastos

Posters virtual: Fri, 28 Apr, 10:45–12:30 | vHall ESSI/GI/NP

Chairpersons: Ionut Cosmin Sandric, Marina Vîrghileanu, Dionissios Hristopulos
vEGN.1 | EGU23-1668 | ESSI1.7
Triantafyllia Petsini and George P. Petropoulos

Soil moisture is an important parameter of the Earth system and plays a key role in understanding soil-atmosphere interactions through the energy balance and the hydrological cycle. Information on its spatiotemporal variability is of crucial importance in several research topics and applications. Remote sensing today provides a very promising avenue for obtaining information on the variability of soil moisture at varying spatial and temporal resolutions, and a number of relevant operational products are currently available from different satellite sensors.

The objective of the present study has been to evaluate one such product, specifically that from the SMOS satellite, in a typical Mediterranean setting located in Greece. In particular, this study examines the agreement of the SMOS soil moisture product with collocated field measurements from the Prefecture of Larisa for the calendar year 2020, acquired from Neuropublic S.A. The agreement between the two datasets was evaluated on the basis of several statistical measures. In addition, the effects of topographical and geomorphological features, land use/cover, the relative satellite orbit type and Radio Frequency Interference (RFI) were examined as part of our analysis.

To our knowledge, this study is one of the few providing insight into the accuracy of the SMOS soil moisture product in a Greek setting. The findings of our study can provide important insights towards understanding the practical value of such products in agricultural and arid/semi-arid Mediterranean environments such as that of Greece, and can also help efforts directed towards improving their retrieval accuracy.

Keywords: soil moisture; operational product; remote sensing; SMOS; validation; agriculture; Mediterranean setting

How to cite: Petsini, T. and Petropoulos, G. P.: Assessing the retrieval accuracy of SMOS soil moisture product in a Greek agricultural setting, EGU General Assembly 2023, Vienna, Austria, 23–28 Apr 2023, EGU23-1668, https://doi.org/10.5194/egusphere-egu23-1668, 2023.

vEGN.2 | EGU23-1761 | ESSI1.7 | ECS
Christina Lekka, Spyridon E. Detsikas, and George P. Petropoulos

The Environmental Mapping and Analysis Program (EnMAP) is a new German spaceborne hyperspectral satellite mission for monitoring and characterizing the Earth’s environment on a global scale. The EnMAP mission provides high-quality, detailed spectral information in the VNIR and SWIR ranges over large areas, with wide temporal coverage and high spatial resolution. With such high-quality data freely available to the scientific community, great potential opens up for a wide range of ecological and environmental applications, such as accurate and up-to-date LULC thematic maps.

The objective of the present study is to explore the accuracy of EnMAP for land cover mapping over a heterogeneous landscape, using a typical Mediterranean setting located in Greece as a case study. The methodology is based on the synergistic use of machine learning techniques and EnMAP imagery coupled with other ancillary data, and was carried out in EnMAP-Box 3, a toolbox designed for open-source GIS software. Validation of the derived LULC maps has been carried out using the standard error matrix approach and also via comparisons against existing operational LULC products.
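
A minimal sketch of the standard error-matrix accuracy assessment mentioned above (a generic formulation, not the study's workflow):

```python
# Sketch: error matrix, overall accuracy, kappa, producer's and user's accuracy.
import numpy as np
from sklearn.metrics import confusion_matrix, accuracy_score, cohen_kappa_score

def accuracy_assessment(y_true, y_pred):
    cm = confusion_matrix(y_true, y_pred)       # rows: reference, cols: predicted
    producers = np.diag(cm) / cm.sum(axis=1)    # producer's accuracy (recall)
    users = np.diag(cm) / cm.sum(axis=0)        # user's accuracy (precision)
    return {
        "error_matrix": cm,
        "overall_accuracy": accuracy_score(y_true, y_pred),
        "kappa": cohen_kappa_score(y_true, y_pred),
        "producers_accuracy": producers,
        "users_accuracy": users,
    }
```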

To our knowledge, this research is among the first to explore the advantages of the hyperspectral EnMAP satellite mission in the context of LULC mapping. The results of the present study are expected to provide valuable input for LULC mapping applications and to demonstrate the potential of hyperspectral EnMAP data for improved performance and accuracy in LULC mapping.

 

KEYWORDS: EnMAP, Land cover, Land use, Hyperspectral remote sensing, Machine Learning

How to cite: Lekka, C., Detsikas, S. E., and Petropoulos, G. P.: Exploring the synergy of EnMAP hyperspectral imagery with Machine Learning for land use- land cover mapping in a Mediterranean setting, EGU General Assembly 2023, Vienna, Austria, 23–28 Apr 2023, EGU23-1761, https://doi.org/10.5194/egusphere-egu23-1761, 2023.

vEGN.3 | EGU23-1785 | ESSI1.7 | ECS
Katerina Dermosinoglou and George P. Petropoulos

Information on Impervious Surface Areas (ISA) is required in various studies related to the urban environment. The continuous expansion of these surfaces is evident in large urban centers as a result of urbanization. The development of automated methodologies for mapping ISAs using remote sensing data has grown considerably in recent years.

The aim of the present study is the long-term mapping of ISA changes in Athens, Greece, from 1984 to 2022, exploiting the Landsat archive and contemporary geospatial data processing methods such as machine learning. The study is implemented in the Google Earth Engine cloud platform, and the final results are presented in a WebGIS environment.
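
A hedged sketch of the general Earth Engine pattern for such a workflow (not the study's actual script): build an annual Landsat composite over Athens and classify it with a random forest. The training asset, class property and parameter values below are assumptions for illustration.

```python
# Sketch: annual Landsat 8 composite + random-forest classification in GEE.
import ee
ee.Initialize()

aoi = ee.Geometry.Point([23.7275, 37.9838]).buffer(20000)   # Athens, ~20 km radius
composite = (ee.ImageCollection('LANDSAT/LC08/C02/T1_L2')
             .filterBounds(aoi)
             .filterDate('2022-01-01', '2022-12-31')
             .median())

bands = ['SR_B2', 'SR_B3', 'SR_B4', 'SR_B5', 'SR_B6', 'SR_B7']
samples = ee.FeatureCollection('users/example/athens_training')  # hypothetical asset
training = composite.select(bands).sampleRegions(
    collection=samples, properties=['class'], scale=30)

rf = ee.Classifier.smileRandomForest(100).train(training, 'class', bands)
isa_map = composite.select(bands).classify(rf)    # ISA vs. non-ISA class map
```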

The results of the present study can contribute to a better understanding of urban expansion dynamics and the key drivers of urban sprawl affecting cities such as Athens. Furthermore, they can serve as a reference for the further development of applications related to urban environments that combine machine learning techniques with remote sensing data.

 

KEYWORDS: ISA, urban sprawl, Landsat, GEE, WebGIS, Greece

How to cite: Dermosinoglou, K. and Petropoulos, G. P.: Long term monitoring of the changes in Impervious Surface Areas in a Greek setting using Machine Learning and Remote Sensing data: the case of Athens Greece, EGU General Assembly 2023, Vienna, Austria, 23–28 Apr 2023, EGU23-1785, https://doi.org/10.5194/egusphere-egu23-1785, 2023.

vEGN.4 | EGU23-14137 | ESSI1.7 | ECS
Yang Xu and Lian Feng

The development of aquaculture ponds worldwide provides valuable socio-economic benefits in the Anthropocene epoch, but also causes potential environmental and ecological impacts. However, the extent and trajectory of aquaculture ponds over the past 37 years remain unknown on a global scale. Our study maps the global distribution of aquaculture ponds over 9 periods (1984-1994, 1995-2000, and every 3 years from 2001 to 2021) based on a deep-learning method and Landsat observations. The total area of global aquaculture ponds expanded from 10043.3 km2 to 18779.7 km2, with a slowing growth rate. Fishpond area in Asia accounts for up to 82% of the global total. The extent of aquaculture ponds in Asia and South America has doubled in size since 1984. China, Vietnam, and Indonesia, the three countries with the largest fishpond area, exhibited their largest fishpond area in 2004-2006. Our study provides a critical basis for assessing the spatio-temporal trajectory and potential influences of aquaculture ponds.

How to cite: Xu, Y. and Feng, L.: The Global Distribution and Trajectory of Aquaculture Ponds, EGU General Assembly 2023, Vienna, Austria, 23–28 Apr 2023, EGU23-14137, https://doi.org/10.5194/egusphere-egu23-14137, 2023.