ESSI1.4 | Advances in Geospatial Artificial Intelligence for large-scale, regional and continental mapping
Convener: Nouri Sabo | Co-conveners: Michael Tischler, Alexandre Hippert-Ferrer (ECS), Elisa Mariarosaria Farella (ECS), Ewelina Rupnik
Orals | Mon, 15 Apr, 14:00–15:40 (CEST) | Room -2.16
Posters on site | Attendance Mon, 15 Apr, 10:45–12:30 (CEST) | Display Mon, 15 Apr, 08:30–12:30 | Hall X4
Posters virtual | Attendance Mon, 15 Apr, 14:00–15:45 (CEST) | Display Mon, 15 Apr, 08:30–18:00 | vHall X4
Geospatial artificial intelligence (GeoAI) has gained popularity for creating maps, performing analyses, and developing geospatial applications that are national, international or global in scope, thanks to its capacity to process and understand large geospatial datasets and to infer valuable patterns and information. Rapid geo-information updates, public safety improvement, smart city development, the green-deal transition, and climate change mitigation and adaptation are among the challenges that can now be studied and addressed using GeoAI.
Along with the acceleration of GeoAI adoption, a new set of implementation challenges is rising to the top of the agenda for leaders in mapping technologies. These challenges relate to “operationalizing large GeoAI systems”, including automating the AI lifecycle, tracking and adapting models to new contexts and landscapes, upscaling models in time and space, improving explainability, balancing cost and performance, creating resilient and future-proof AI and IT operations, and managing activities across cloud and on-premises environments.
This session aims to provide a venue to present the latest applications of GeoAI for mapping at national, international and global scales as well as their operationalization challenges. The themes of the session include, but are not limited to:
· Requirements of GeoAI methods for national mapping agencies, their relationship with industrial/commercial stakeholders, and the role of national agencies in establishing GeoAI standards.
· GeoAI interoperability and research translation.
· Extracting core geospatial layers and enhancing national basemaps from multi-scale, multi-modal remote-sensing data sources.
· Large-scale point cloud analysis for use cases in infrastructure development, urban planning, forest inventory, energy consumption/generation modeling, and natural resources management.
· Measuring rates and trends of changes in landscape patterns and processes such as land-cover/land-use change detection and disaster damage proxy mapping.
· Modernizing national archives, including geo-referencing, multi-temporal co-registration, super-resolution, colorization, and analysis of historical air photos.

Orals: Mon, 15 Apr | Room -2.16

Chairpersons: Nouri Sabo, Michael Tischler, Elisa Mariarosaria Farella
14:00–14:20 | EGU24-6477 | solicited | On-site presentation
Anatol Garioud

The National Institute of Geographic and Forest Information (IGN) has developed Artificial Intelligence (AI) models that describe land cover at the pixel level from IGN aerial images, as part of the production process of the French large-scale land cover and land use reference (OCS GE). This contribution is threefold:

Methodology: the training strategy and the use of these models will be reviewed by focusing on i) the selection of the task performed by the models, ii) the approach for choosing and producing learning samples and iii) the training strategy to generalize to the scale of Metropolitan France. The evaluation of the models using various metrics will also be discussed. Visuals will be provided to illustrate the quality of the results. Furthermore, we will explain how AI products are incorporated into the production of the OCS GE.

Continuous improvement: the models are continuously improved, in particular through FLAIR (French Land cover from Aerospace ImageRy) challenges opened to the scientific community. The FLAIR#1 and FLAIR#2 challenges dealt with model generalization and domain adaptation as well as data fusion, i.e., how to develop an AI model that can take as input both very high spatial resolution images (e.g., IGN aerial acquisitions) and satellite image time series (Sentinel-2 images). We will review both the implementation of the challenges and the results obtained, leveraging convolutional and attention-based models, ensembling methods and pseudo-labelling. As the AI model for land cover goes far beyond the context of OCS GE production, additional experiments outside the challenges will be discussed, covering the development of further AI models to process other modalities (very high spatial resolution satellite images, historical images, etc.).

Open access: all source code and data, including AI land cover prediction maps, are openly distributed. These resources are released via the challenges and as products (CoSIA: Land Cover by Artificial Intelligence) on a dedicated platform, which is of interest to AI users and non-specialists alike, including users from the geoscience and remote sensing community.

How to cite: Garioud, A.: Artificial intelligence for country-scale land cover description., EGU General Assembly 2024, Vienna, Austria, 14–19 Apr 2024, EGU24-6477, https://doi.org/10.5194/egusphere-egu24-6477, 2024.

14:20–14:30 | EGU24-20985 | On-site presentation
Samantha Arundel, Michael Von Pohle, Ata Akbari Asanjan, Nikunj Oza, and Aaron Lott

Landform mapping (also referred to as geomorphology or geomorphometry) can be divided into two domains: general and specific (Evans 2012). Whereas general landform mapping categorizes all elements of the study area into landform classes, such as ridges, valleys, peaks, and depressions, the mapping of specific landforms requires the delineation (even if fuzzy) of individual landforms. The former is mainly driven by physical properties such as elevation, slope, and curvature.  The latter, however, must consider the cognitive (human) reasoning that discriminates individual landforms in addition to these physical properties (Arundel and Sinha 2018).

Both mapping forms are important. General geomorphometry is needed to understand geological and ecological processes and as boundary layer input to climate and environmental models. Specific geomorphometry supports such activities as disaster management and recovery, emergency response, transportation, and navigation.

In the United States, individual landforms of interest are named in the U.S. Geological Survey (USGS) Geographic Names Information System, a point dataset captured specifically to digitize geographic names from the USGS Historical Topographic Map Collection (HTMC). Named landform extent is represented only by the name placement in the HTMC.

Recent work has investigated CNN-based deep learning methods to capture these extents in machine-readable form. These studies first relied on physical properties (Arundel et al. 2020) and then, in limited testing, included the HTMC as a band in RGB images (Arundel et al. 2023). Results from the HTMC dataset surpassed those using physical properties alone. The HTMC by itself performed best, owing to the hillshading and elevation (contour) data incorporated into the topographic maps. However, results fell short of an operational capacity to map all named landforms in the United States. Our current work therefore expands upon past research by using the HTMC and physical information as inputs and the named landform extents as labels.

Specifically, we propose to leverage pre-trained foundation models for segmentation and optical character recognition (OCR) models to jointly map landforms in the United States. Our approach aims to bridge the disparities among independent information sources to facilitate informed decision-making. The modeling pipeline performs (1) segmentation using the physical information and (2) information extraction using OCR in parallel. Then, a computer vision approach merges the two branches into a labeled segmentation. 
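A toy sketch of how such a merge might work is given below: each OCR-detected name is assigned to the segment that contains its placement point. The function, array layout, and inputs (`instances`, `ocr_hits`) are illustrative assumptions, not the authors' pipeline.

```python
# Hypothetical sketch of the merge step: assign each OCR-detected feature
# name to the landform segment containing its placement point.
import numpy as np

def label_segments(instances: np.ndarray, ocr_hits: list) -> dict:
    """instances: (H, W) array of instance ids (0 = background).
    ocr_hits: list of (name, row, col) tuples from the OCR branch."""
    labels = {}
    for name, r, c in ocr_hits:
        seg_id = int(instances[r, c])
        if seg_id != 0:          # the name point falls inside a segmented landform
            labels[seg_id] = name
    return labels

# Example: a toy 4x4 mask with two segments and one OCR hit.
mask = np.array([[0, 1, 1, 0],
                 [0, 1, 1, 0],
                 [2, 2, 0, 0],
                 [2, 2, 0, 0]])
print(label_segments(mask, [("Mount Example", 0, 1)]))  # {1: 'Mount Example'}
```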

References

Arundel, Samantha T., Wenwen Li, and Sizhe Wang. 2020. “GeoNat v1.0: A Dataset for Natural Feature Mapping with Artificial Intelligence and Supervised Learning.” Transactions in GIS 24 (3): 556–72. https://doi.org/10.1111/tgis.12633.

Arundel, Samantha T., and Gaurav Sinha. 2018. “Validating GEOBIA Based Terrain Segmentation and Classification for Automated Delineation of Cognitively Salient Landforms.” In Proceedings of Workshops and Posters at the 13th International Conference on Spatial Information Theory (COSIT 2017), Lecture Notes in Geoinformation and Cartography, edited by Paolo Fogliaroni, Andrea Ballatore, and Eliseo Clementini, 9–14. Cham: Springer International Publishing.

Arundel, Samantha T., Gaurav Sinha, Wenwen Li, David P. Martin, Kevin G. McKeehan, and Philip T. Thiem. 2023. “Historical Maps Inform Landform Cognition in Machine Learning.” Abstracts of the ICA 6 (August): 1–2. https://doi.org/10.5194/ica-abs-6-10-2023.

Evans, Ian S. 2012. “Geomorphometry and Landform Mapping: What Is a Landform?” Geomorphology 137 (1): 94–106. https://doi.org/10.1016/j.geomorph.2010.09.029.

How to cite: Arundel, S., Von Pohle, M., Akbari Asanjan, A., Oza, N., and Lott, A.: GeoAI advances in specific landform mapping, EGU General Assembly 2024, Vienna, Austria, 14–19 Apr 2024, EGU24-20985, https://doi.org/10.5194/egusphere-egu24-20985, 2024.

14:30–14:40 | EGU24-2378 | On-site presentation
Mozhdeh Shahbazi, Mikhail Sokolov, Ella Mahoro, Victor Alhassan, Evangelos Bousias Alexakis, Pierre Gravel, and Mathieu Turgeon-Pelchat

The Canadian National Air Photo Library (NAPL) comprises millions of historical airborne photographs spanning more than 100 years. Historical photographs are rich chronicles of countrywide geospatial information. They can be used to create long-term time series and to support various analytics, such as monitoring expansion/shrinkage rates of built-up areas, measuring forest structure change, measuring thinning and retreat rates of glaciers, and determining coastline erosion rates. Various technical solutions have been developed at Natural Resources Canada (NRCan) to generate analysis-ready mapping products from the NAPL.

Photogrammetric Processing with a Focus on Automated Georeferencing of Historical Photos: The main technical challenge of photogrammetric processing is identifying reference observations, such as ground control points (GCPs). Reference observations are crucial to accurately georeference historical photos and ensure the spatial alignment of historical and modern mapping products, which is critical for creating time series and performing multi-temporal change analytics. In our workflow, GCPs are identified by automatically matching historical images to modern optical satellite/airborne ortho-rectified images. In the matching process, we first use convolutional neural networks (D2Net) for joint feature detection and description in the intensity space. Then, we convert intensity images to phase congruency maps, which are less sensitive to nonlinear radiometric differences between the images, and we extract an additional set of features using the FAST detector and describe them using the radiation-invariant feature transform (RIFT). Feature-matching outliers are detected and removed via random sample consensus (RANSAC), enforcing a homographic transformation between corresponding images. The remaining control points are manually verified through a graphical interface built as a QGIS plugin. The verified control points are then used in a bundle block adjustment, where the exterior orientation parameters of the historical images and the intrinsic calibration parameters of the cameras are refined, followed by dense matching and generation of digital elevation models and ortho-rectified mosaics using conventional photogrammetric approaches. These solutions are implemented using our in-house libraries as well as the MicMac open-source software. Examples of the generated products and their quality will be shown during the presentation.
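As an illustration of the RANSAC step described above, the sketch below uses OpenCV's `findHomography` to flag matches consistent with a homography; the matched-point arrays are random placeholders, and the authors' actual implementation relies on their in-house libraries and MicMac.

```python
# Minimal sketch of RANSAC-based outlier rejection under a homography.
import numpy as np
import cv2

# Placeholder matched keypoints, standing in for the D2Net / RIFT matches.
pts_hist = (np.random.rand(100, 1, 2) * 1000).astype(np.float32)
pts_ref = pts_hist + np.random.randn(100, 1, 2).astype(np.float32)

# Estimate a homography; inlier_mask flags matches consistent with it.
H, inlier_mask = cv2.findHomography(pts_hist, pts_ref, cv2.RANSAC,
                                    ransacReprojThreshold=3.0)
inliers_hist = pts_hist[inlier_mask.ravel() == 1]
inliers_ref = pts_ref[inlier_mask.ravel() == 1]
print(f"kept {len(inliers_hist)} of {len(pts_hist)} candidate control points")
```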

Deep Colourization, Super Resolution and Semantic Segmentation: Since the NAPL mostly contains grayscale photos, their visual appeal and interpretability are lower than those of modern colour images. In addition, the automated extraction of colour-sensitive features from them, e.g. water bodies, is more complicated than from colour images. In this regard, we have developed fully automated approaches to colourize historical ortho-rectified mosaics based on image-to-image translation models. During the presentation, the performance of a variety of solutions, such as conditional generative adversarial networks (GANs), encoder-decoder networks, vision transformers, and probabilistic diffusion models, will be compared. In addition, using a customized GAN, we improve the spatial resolution of historical images that were scanned from printed photos at low resolution (as opposed to being scanned directly from film rolls at high resolution). Our semantic segmentation models, trained initially on optical satellite and airborne imagery, are also adapted to historical air photos for extracting water bodies, road networks, building outlines, and forested areas. The performance of these models on historical photos will be demonstrated during the presentation.

How to cite: Shahbazi, M., Sokolov, M., Mahoro, E., Alhassan, V., Bousias Alexakis, E., Gravel, P., and Turgeon-Pelchat, M.: Applications of GeoAI in Extracting National Value-Added Products from Historical Airborne Photography, EGU General Assembly 2024, Vienna, Austria, 14–19 Apr 2024, EGU24-2378, https://doi.org/10.5194/egusphere-egu24-2378, 2024.

14:40–14:50 | EGU24-9729 | ECS | On-site presentation
Riccardo Contu, Valerio Marsocci, Virginia Coletta, Roberta Ravanelli, and Simone Scardapane

The ability to detect changes occurring on the Earth's surface is essential for comprehensively monitoring and understanding evolving landscapes and environments.

To achieve a comprehensive understanding, it is imperative to employ methodologies capable of efficiently capturing and analyzing both two-dimensional (2D) and three-dimensional (3D) changes across various periods.

Artificial Intelligence (AI) stands out as a primary resource for investigating these alterations and, when combined with remote sensing (RS) data, has demonstrated superior performance compared to conventional Change Detection (CD) algorithms.

The recent introduction of the MultiTask Bitemporal Images Transformer [1] (MTBIT) network has made it possible to simultaneously solve 2D and 3D CD tasks leveraging bi-temporal optical images.

However, this network has certain limitations that must be considered: a tendency to overfit the training distribution and difficulty inferring extreme values [1]. To address these shortcomings, this work introduces a series of custom augmentations, including Random Crop, Crop or Resize, Mix-up, Gaussian Noise on the 3D CD maps, and Radiometric Transformation. Applied individually or in specific combinations, these augmentations aim to bolster MTBIT's ability to discern intricate geometries and subtle structures that are otherwise difficult to detect.

Furthermore, the evaluation metrics used to assess MTBIT, such as the Root Mean Squared Error (RMSE) and the change RMSE (cRMSE), have their limitations. In response, we introduce the true positive RMSE (tpRMSE), which offers a more targeted evaluation of MTBIT's efficacy in the 3D CD task by considering only the pixels affected by actual elevation changes.
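A plausible formalization of tpRMSE, inferred from this description (the abstract itself gives no formula), restricts the usual RMSE to the set of truly changed pixels:

```latex
% Assumed formalization: h_i is the true elevation change at pixel i,
% \hat{h}_i the predicted change, and \Omega the set of truly changed pixels.
\mathrm{tpRMSE} = \sqrt{\frac{1}{\lvert \Omega \rvert} \sum_{i \in \Omega} \left( \hat{h}_i - h_i \right)^2},
\qquad \Omega = \{\, i : h_i \neq 0 \,\}
```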

The custom augmentations, particularly when applied in synergy (e.g., Crop or Resize combined with Gaussian Noise on the 3D map), yielded substantial improvements. With the best augmentation configuration, the cRMSE dropped to 5.88 meters and the tpRMSE to 5.34 meters, compared to baseline (standard MTBIT) values of 6.33 meters and 5.60 meters, respectively.

The proposed augmentations significantly bolster the practical usability and reliability of MTBIT in real-world applications, effectively addressing critical challenges within the realm of Remote Sensing CD. 

References:

[1] Marsocci, V., Coletta, V., Ravanelli, R., Scardapane, S., Crespi, M., 2023. Inferring 3D change detection from bitemporal optical images. ISPRS Journal of Photogrammetry and Remote Sensing, 196, 325-339.

How to cite: Contu, R., Marsocci, V., Coletta, V., Ravanelli, R., and Scardapane, S.: Urban 3D Change Detection with Deep Learning: Custom Data Augmentation Techniques, EGU General Assembly 2024, Vienna, Austria, 14–19 Apr 2024, EGU24-9729, https://doi.org/10.5194/egusphere-egu24-9729, 2024.

14:50–15:00 | EGU24-10034 | On-site presentation
Onur Can Bayrak, Ma Zhenyu, Elisa Mariarosaria Farella, and Fabio Remondino

Urban and natural landscapes are distinguished by different built and vegetated elements with unique features, and their proper identification is crucial for many applications, from urban planning to forestry inventory or natural resources management. With the rapid evolution and deployment of high-resolution airborne and Unmanned Aerial Vehicle (UAV) technologies, large areas can be easily surveyed to create high-density point clouds. Photogrammetric cameras and LiDAR sensors can offer unprecedented high-quality 3D data (a few cm on the ground) that allows for discriminating and mapping even small objects. However, the semantic enrichment of these 3D data is still far from being a fully reliable, accurate, unsupervised, explainable and generalizable process deployable at large scale, on data acquired with any sensor, and at any possible spatial resolution.

This work reports on the state of the art and recent developments in urban and natural point cloud classification, with a particular focus on the following:

  • Standardization in defining semantic classes through a multi-resolution and multi-scale approach: a hierarchical, multi-level concept is introduced to improve and optimize the learning process and to accommodate a large number of classes.
  • Instance segmentation in very dense areas: closely located and overlapping individual objects require precise segmentation to be accurately identified and classified. We are developing a hierarchical segmentation method specifically designed for urban furniture with few training samples, to improve the completeness of classification in dense urban areas.
  • Generalization of the procedures and transferability of developed models from a fully-labelled domain to an unseen scenario.
  • Handling of under-represented objects (e.g., pole-like objects, pedestrians, and other urban furniture): classifying under-represented objects presents a unique set of challenges due to their sparse occurrence and similar geometric characteristics. We introduce a new method that specifically targets the effective identification and extraction of these objects in combination with knowledge-based methods and deep learning.
  • Available datasets and benchmarks to evaluate and compare learning-based methods and algorithms in 3D semantic segmentation: urban-level aerial 3D point cloud datasets can be classified according to the presence of color information, the number of classes, or the type of sensor used for data gathering. The ISPRS Vaihingen, DublinCity, DALES, LASDU and CENAGIS-ALS datasets, although extensive in size, do not provide color information. Conversely, Campus3D, Swiss3DCities, and Hessigheim3D include color data but feature limited coverage and few class labels. SensatUrban, STPLS3D, and HRHD-HK were collected across extensive urban regions but also present a reduced number of classes. YTU3D surpasses the other datasets in class diversity but covers less extensive areas than SensatUrban, STPLS3D, and HRHD-HK. Despite these differences, the common deficiencies of all datasets are the presence of under-represented object classes, limited generalization, and low accuracy in classifying unbalanced categories, making these models difficult to use in real-life scenarios.

The presentation will highlight the importance of semantic enrichment processes in the geospatial and mapping domain and for providing more understandable data to end-users and policy-makers. Available learning-based methods, open issues in point cloud classification and recent progress will be explored over urban and forestry scenarios.

How to cite: Bayrak, O. C., Zhenyu, M., Farella, E. M., and Remondino, F.: Operationalize large-scale point cloud classification: potentials and challenges, EGU General Assembly 2024, Vienna, Austria, 14–19 Apr 2024, EGU24-10034, https://doi.org/10.5194/egusphere-egu24-10034, 2024.

15:00–15:10 | EGU24-15565 | On-site presentation
Elena Viero, Donatella Gubiani, Massimiliano Basso, Marco Marin, and Giovanni Sgrazzutti

Regulations banning the use of asbestos were introduced in Italy with Law no. 257 of 1992, and their implementation has taken place over time. The Regional Asbestos Plan was put in place in 1996 and is updated periodically.

Modern remote sensing techniques are an essential tool for studies at environmental and territorial scales. These systems can record, for each pixel of the acquired image, from tens to hundreds of bands of the electromagnetic spectrum. This is useful because every material has a characteristic spectral signature that can be exploited for different types of investigation.

This work experimented with a neural network for the classification of airborne hyperspectral images, in order to identify and map the asbestos-cement roofing present in some Municipalities of the Autonomous Region of Friuli Venezia Giulia.

The Region covers an area of approximately 8,000 square kilometres. To survey the entire area, it was necessary to carry out flights in different directions, on different days, and under different solar exposure conditions; as a result, the radiometric quality of the images is not uniform. Moreover, the images have high geometric resolution (1-meter pixels) and radiometric resolution (over 180 bands), which required particular attention in their management: more than 4,000 images, for a total size of 25-30 TB.

Starting from these hyperspectral images and using the information already available from the mapping of the asbestos roofs of 25 Municipalities of the Region, we generated adequate ground truth to train, test and validate a neural network implemented using the Keras library.
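As a rough illustration of this step, a minimal per-pixel spectral classifier in Keras might look like the sketch below; the band count, layer sizes, and training data are placeholder assumptions, not the authors' published architecture.

```python
# Minimal Keras sketch of a per-pixel spectral classifier for
# asbestos-cement roofing; all dimensions are assumptions.
import numpy as np
from tensorflow import keras

n_bands = 186  # hyperspectral bands per pixel (assumed)

model = keras.Sequential([
    keras.layers.Input(shape=(n_bands,)),
    keras.layers.Dense(128, activation="relu"),
    keras.layers.Dropout(0.3),
    keras.layers.Dense(64, activation="relu"),
    keras.layers.Dense(1, activation="sigmoid"),  # P(asbestos roof)
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])

# Placeholder ground truth: pixel spectra (rows) with binary asbestos labels.
X = np.random.rand(1024, n_bands).astype("float32")
y = np.random.randint(0, 2, size=(1024, 1))
model.fit(X, y, epochs=5, batch_size=64, validation_split=0.2)
```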

Given the differences among the territories of the various Municipalities, in the first processing step we computed three models trained on different datasets for each Municipality considered: a total and a partial model, both independent of the Municipality, and a third adapted to the specific Municipality. Combining these predictions allowed us to obtain a raster result expected to better fit the characteristics of the Municipality considered.

Once these data were obtained, the raster results were converted into vector data through a zonal analysis of the buildings available in the Regional Numerical Map. An initial automatic classification, determined by defining adequate thresholds, was then refined manually using additional tools, such as Google Street View and the 10 cm regional orthophoto, to obtain the final classification.
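The raster-to-vector step could be sketched as follows with geopandas and rasterstats; the file names, the "mean" statistic, and the 0.5 threshold are illustrative assumptions.

```python
# Hypothetical sketch: zonal statistics of a per-pixel asbestos-probability
# raster over building footprints, followed by a simple threshold.
import geopandas as gpd
from rasterstats import zonal_stats

buildings = gpd.read_file("regional_numerical_map_buildings.gpkg")  # assumed file
stats = zonal_stats(buildings, "asbestos_probability.tif", stats=["mean"])

buildings["asbestos_score"] = [s["mean"] for s in stats]
buildings["asbestos_flag"] = buildings["asbestos_score"] > 0.5  # initial automatic class
buildings.to_file("buildings_classified.gpkg", driver="GPKG")
```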

The results obtained for the 5 pilot Municipalities provide a clear indication of the presence of asbestos material on some building roofs. This work demonstrated an operational workflow using data at the regional scale, one that could easily be extended to other territorial entities. It has the great advantage of allowing the government authority to save at least an order of magnitude in costs compared with traditional investigations. Finally, the automated neural network is a useful tool for the programming, planning and management of the territory, also in terms of human health.

How to cite: Viero, E., Gubiani, D., Basso, M., Marin, M., and Sgrazzutti, G.: Identification of asbestos roofing from hyperspectral images, EGU General Assembly 2024, Vienna, Austria, 14–19 Apr 2024, EGU24-15565, https://doi.org/10.5194/egusphere-egu24-15565, 2024.

15:10–15:20 | EGU24-8420 | On-site presentation
Nikola Besic, Nicolas Picard, Cédric Vega, Jean-Pierre Renaud, Martin Schwartz, Milena Planells, and Philippe Ciais

The development of high-resolution mapping models of forest attributes based on machine or deep learning techniques has accelerated over the last couple of years. The consequence is the widespread availability of multiple sources of information, which can either lead to potential confusion or offer the possibility of an “extended” insight into the state of our forests by interpreting these sources jointly. This contribution addresses the latter, relying on the Bayesian model averaging (BMA) approach.

BMA is a method for building a consensus from an ensemble of different model predictions. It can be seen as a weighted mean of the different predictions, with weights reflecting the predictive performance of each model, or as a finite mixture model estimating the probability that each observation in an independent validation dataset was generated by one of the models in the ensemble. BMA can thus be used to diagnose and understand differences among the predictions and possibly to interpret them.
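The finite-mixture view can be made concrete with a schematic EM fit of the model weights, assuming Gaussian errors with a shared variance; this is a generic illustration, not the authors' code.

```python
# Schematic EM fit of BMA weights: preds is (n_plots, n_models) model
# heights, obs the NFI reference (e.g. h_m or h_dom). All data are placeholders.
import numpy as np

def bma_em(preds, obs, n_iter=200):
    n, k = preds.shape
    w = np.full(k, 1.0 / k)                   # initial equal weights
    sigma2 = np.var(obs[:, None] - preds)     # shared error variance
    for _ in range(n_iter):
        # E-step: probability that each observation came from each model.
        lik = np.exp(-(obs[:, None] - preds) ** 2 / (2 * sigma2)) \
              / np.sqrt(2 * np.pi * sigma2)
        resp = w * lik
        resp /= resp.sum(axis=1, keepdims=True)
        # M-step: update weights and variance.
        w = resp.mean(axis=0)
        sigma2 = np.sum(resp * (obs[:, None] - preds) ** 2) / n
    return w, sigma2

preds = np.random.rand(500, 5) * 30           # placeholder: 5 canopy-height models
obs = preds @ np.array([0.4, 0.3, 0.1, 0.1, 0.1]) + np.random.randn(500)
weights, var = bma_em(preds, obs)
print("BMA weights:", weights.round(3))
```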

The predictions in our case are forest canopy height estimates for metropolitan France from 5 different AI models [1-5], while the independent validation dataset comes from the French National Forest Inventory (NFI), which comprises some 6,000 plots per year distributed across the territory of interest. For every plot we have several measurements/estimates of forest canopy height, of which the following two are considered in this study: h_m, the maximum total height (from the tree's base to the terminal bud of the main stem) measured within the plot, and h_dom, the average height of the seven largest dominant trees per hectare.

In this contribution we present, for every plot considered, the dominant model with respect to both references, i.e. the model with the highest probability of having generated the measurements/estimates at the NFI plot (h_m and h_dom). We also present the respective inter-model and intra-model variance estimates, which allow us to propose a series of hypotheses concerning the observed differences between the predictions of individual models as a function of their specificities.

[1] Schwartz, M., et al.: FORMS: Forest Multiple Source height, wood volume, and biomass maps in France at 10 to 30 m resolution based on Sentinel-1, Sentinel-2, and Global Ecosystem Dynamics Investigation (GEDI) data with a deep learning approach, Earth Syst. Sci. Data, 15, 4927–4945, 2023, https://doi.org/10.5194/essd-15-4927-2023

[2] Lang, N., et al.: A high-resolution canopy height model of the Earth, Nat Ecol Evol 7, 1778–1789, 2023. https://doi.org/10.1038/s41559-023-02206-6

[3] Morin, D. et al.: Improving Heterogeneous Forest Height Maps by Integrating GEDI-Based Forest Height Information in a Multi-Sensor Mapping Process, Remote Sens., 14, 2079, 2022, https://doi.org/10.3390/rs14092079

[4] Potapov, P., et al.: Mapping global forest canopy height through integration of GEDI and Landsat data, Remote Sensing of Environment, 253, 2021, https://doi.org/10.1016/j.rse.2020.112165.

[5] Liu, S. et al.: The overlooked contribution of trees outside forests to tree cover and woody biomass across Europe, Sci. Adv. 9, eadh4097, 2023, https://doi.org/10.1126/sciadv.adh4097.

How to cite: Besic, N., Picard, N., Vega, C., Renaud, J.-P., Schwartz, M., Planells, M., and Ciais, P.: Bayesian model averaging of AI models for the high resolution mapping of the forest canopy height, EGU General Assembly 2024, Vienna, Austria, 14–19 Apr 2024, EGU24-8420, https://doi.org/10.5194/egusphere-egu24-8420, 2024.

15:20–15:30 | EGU24-13166 | ECS | On-site presentation
Agata Walicka, Jesper Bladt, and Jesper Erenskjold Moeslund

Deadwood is a vital part of the habitat of many threatened species of animals, plants and fungi. Thus, the presence of deadwood is an important indicator of the probability that a given site harbors threatened species. Nowadays, fieldwork is the most common method for monitoring dead trees. However, it is time-consuming, costly and labor-intensive, so there is a need for an automatic method for mapping and monitoring deadwood. The combination of fine-resolution remote sensing and deep learning techniques has the potential to provide exactly this. Unfortunately, because lying deadwood is typically located under the canopy, this is a challenging task: the visibility of lying trees is limited, notably for optical remote sensing techniques. Laser scanning data therefore seem the most appropriate for this purpose, as they can penetrate the canopy to some extent and hence capture the forest floor.

In this work we aim to develop methods enabling the detection of lying deadwood at the national scale in protected forests, focusing on the presence of deadwood in 15-meter-radius circular plots. To achieve this goal, we use Airborne Laser Scanning (ALS) data that are publicly available for the whole of Denmark and, as a reference, almost 6,000 forestry plots acquired as part of the Danish national habitat monitoring program. The binary classification into plots that contain deadwood and those that do not is performed using a SparseCNN deep neural network. We show that it is possible to detect plots containing deadwood with an overall accuracy of around 61%, although the accuracy of the classifier depends on the volume of deadwood present in a plot.

How to cite: Walicka, A., Bladt, J., and Moeslund, J. E.: Remote sensing techniques for habitat condition mapping: deadwood monitoring using airborne laser scanning data, EGU General Assembly 2024, Vienna, Austria, 14–19 Apr 2024, EGU24-13166, https://doi.org/10.5194/egusphere-egu24-13166, 2024.

15:30–15:40 | EGU24-1396 | On-site presentation
Karem Chokmani, Haythem Zidi, Anas El Alem, and Jasmin Gill-Fortin

The study addresses the need for flood-risk anticipation and planning through the development of a flood zone mapping approach for different return periods, in order to best prevent floods and protect populations. Today, traditional methods are too costly, too slow or too demanding to be applied over large areas. As part of a project funded by the Canadian Space Agency, Geosapiens and the Institut National de la Recherche Scientifique set themselves the goal of designing an automatic process to generate water-presence maps for different return periods at a resolution of 30 m, based on the historical archive of the Landsat missions from 1982 to the present day. This involved the design, implementation and training of a deep learning model based on the U-Net architecture for the detection of water pixels in Landsat imagery. The resulting maps were used as the basis for a frequency analysis model that fits a probability-of-occurrence function for the presence of water at each pixel. The frequency analysis data were then used to obtain maps of water occurrence at different return periods, such as 2, 5 and 20 years.
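A simplified per-pixel frequency analysis along these lines might look like the sketch below, where the empirical annual probability of water presence p maps to a return period T = 1/p; the authors' fitted probability-of-occurrence function is more sophisticated, and the water-mask stack here is a placeholder.

```python
# Schematic per-pixel frequency analysis from a stack of annual binary
# water masks (years, H, W); a T-year zone is the set of pixels with p >= 1/T.
import numpy as np

annual_water = np.random.rand(40, 200, 200) < 0.1   # placeholder: 40 years of masks

p_water = annual_water.mean(axis=0)                 # empirical annual probability
with np.errstate(divide="ignore"):
    return_period = np.where(p_water > 0, 1.0 / p_water, np.inf)

for T in (2, 5, 20):
    zone = p_water >= 1.0 / T                       # T-year water-presence zone
    print(f"{T}-year zone covers {zone.mean():.1%} of pixels")
```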

How to cite: Chokmani, K., Zidi, H., El Alem, A., and Gill-Fortin, J.: Development of an approach based on historical Landsat data for delineating Canadian flood zones at different return periods, EGU General Assembly 2024, Vienna, Austria, 14–19 Apr 2024, EGU24-1396, https://doi.org/10.5194/egusphere-egu24-1396, 2024.

Posters on site: Mon, 15 Apr, 10:45–12:30 | Hall X4

Display time: Mon, 15 Apr, 08:30–Mon, 15 Apr, 12:30
Chairpersons: Alexandre Hippert-Ferrer, Ewelina Rupnik
X4.91 | EGU24-7635
Ilaria Fava, Alvaro Lopez Garcia, Dick Schaap, Tjerk Krijer, Gergely Sipos, and Valentin Kozlov

Aquatic ecosystems are vital in regulating climate and providing resources, but they face threats from global change and local stressors. Understanding their dynamics is crucial for sustainable use and conservation. The iMagine AI Platform offers a suite of AI-powered image analysis tools for researchers in aquatic sciences, facilitating a better understanding of scientific phenomena and applying AI and ML for processing image data.

The platform supports the entire machine learning cycle, from model development to deployment, leveraging data from underwater platforms, webcams, microscopes, drones, and satellites, and utilising distributed resources across Europe. With a serverless architecture and DevOps approach, it enables easy sharing and deployment of AI models. Four providers within the pan-European EGI federation power the platform, offering substantial computational resources for image processing.

Five use cases focus on image analytics services, which will be available to external researchers through Virtual Access. Additionally, three new use cases are developing AI-based image processing services, and two external use cases are kickstarting through recent Open Calls. The iMagine Competence Centre aids use case teams in model development and deployment, resulting in various models hosted on the iMagine AI Platform, including third-party models like YoloV8.

Operational best practices derived from the platform providers and use case developers cover data management, quality control, integration, and FAIRness. These best practices aim to harmonise approaches across Research Infrastructures and will be disseminated through various channels, benefitting the broader European and international scientific communities.

How to cite: Fava, I., Lopez Garcia, A., Schaap, D., Krijer, T., Sipos, G., and Kozlov, V.: iMagine, AI-supported imaging data and services for ocean and marine science, EGU General Assembly 2024, Vienna, Austria, 14–19 Apr 2024, EGU24-7635, https://doi.org/10.5194/egusphere-egu24-7635, 2024.

X4.92 | EGU24-11571
Lyndon Estes, Sam Khallaghi, Rahebe Abedi, Mary Asipunu, Nguyen Ha, Boka Luo, Cat Mai, Amos Wussah, Sitian Xiong, and Yao-Ting Yao

Tracking how agricultural systems are changing is critical to answering important questions related to socioeconomic (e.g. food security) and environmental (e.g. carbon emissions) sustainability, particularly in rapidly changing regions such as Africa. Monitoring agricultural dynamics requires satellite-based approaches that can accurately map individual fields at frequent (e.g. annual) intervals over national to regional extents, yet mapping Africa's smallholder-dominated agricultural systems is difficult, as the small and indistinct nature of fields promotes mapping error, while frequent cloud cover leads to coverage gaps. Fortunately, the increasing availability of high spatio-temporal resolution imagery and the growing capabilities of deep learning models now make it possible to accurately map crop fields over large extents. However, making consistently reliable maps for more than one time point remains difficult, given the substantial domain shift between images collected in different seasons or years, which arises from variations in atmospheric and land surface conditions and results in less accurate maps for times beyond those for which the model was trained. To cope with this domain shift, a model's parameters can be adjusted through fine-tuning on training data from the target time period, but collecting such data typically requires manual annotation of images, which is expensive and often impractical. Alternatively, the approach used to develop the model can be adjusted to improve its overall generalizability. Here we show how combining several fairly standard architectural and input techniques, including careful selection of the image normalization method, increasing the model's width, adding regularization techniques, using modern optimizers, and choosing an appropriate loss function, can significantly enhance the ability of a convolutional neural network to generalize across time, while eliminating the need to collect additional labels. A key component of this approach is the use of Monte Carlo dropout, a regularization technique applied during inference that provides a measure of model uncertainty while producing more robust predictions. We demonstrate this procedure by training an adapted U-Net, a widely used encoder-decoder architecture, with a relatively small number of labels (~5,000 224×224 image chips) collected from 3 countries on 3.7 m PlanetScope composite imagery collected primarily in 2018, and use the model, without fine-tuning, to make reliable maps of Ghana's annual croplands (240,000 km2) for the years 2018-2023 on 4.8 m Planet basemap mosaics. We further show how this approach helps to track agricultural dynamics by providing a country-wide overview of cropping frequency, while highlighting hotspots of cropland expansion and intensification during the 6-year period (2018-2023).
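The Monte Carlo dropout step can be illustrated generically: dropout stays active at inference, and the mean and standard deviation over repeated stochastic forward passes yield a prediction and an uncertainty map. The tiny model below is a placeholder, not the authors' adapted U-Net.

```python
# Generic PyTorch sketch of Monte Carlo dropout at inference time.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Conv2d(4, 16, 3, padding=1), nn.ReLU(),
    nn.Dropout2d(p=0.2),                      # kept active at inference
    nn.Conv2d(16, 1, 3, padding=1), nn.Sigmoid(),
)

def mc_dropout_predict(model, x, n_samples=20):
    model.train()                             # keeps dropout layers stochastic
    with torch.no_grad():
        samples = torch.stack([model(x) for _ in range(n_samples)])
    return samples.mean(dim=0), samples.std(dim=0)  # prediction, uncertainty

x = torch.randn(1, 4, 224, 224)               # one 4-band 224x224 image chip
mean_map, uncertainty_map = mc_dropout_predict(model, x)
print(mean_map.shape, uncertainty_map.shape)
```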

How to cite: Estes, L., Khallaghi, S., Abedi, R., Asipunu, M., Ha, N., Luo, B., Mai, C., Wussah, A., Xiong, S., and Yao, Y.-T.: Simple temporal domain adaptation techniques for mapping the inter-annual dynamics of smallholder-dominated croplands over large extents, EGU General Assembly 2024, Vienna, Austria, 14–19 Apr 2024, EGU24-11571, https://doi.org/10.5194/egusphere-egu24-11571, 2024.

X4.93 | EGU24-12472 | ECS
Wen Zhou, Claudio Persello, and Alfred Stein

Effective urban planning, city digital twins, and informed policy formulation rely heavily on precise building use information. While existing research often focuses on broad categories of building use, there is a noticeable gap in the classification of buildings' detailed use. This study addresses this gap by concurrently extracting both broad and detailed hierarchical information on building use. Our approach leverages multiple data sources: high spatial resolution remote sensing images (RS), digital surface models (DSM), street view images (SVI), and textual information from point of interest (POI) data. Given the complexity of mixed-use buildings, where different functions coexist, we treat hierarchical building use classification as a multi-label task, determining the presence of specific categories within a building. To maximize the utility of features across diverse modalities and their interrelationships, we introduce a novel multi-label multimodal Transformer-based feature fusion network. This network simultaneously predicts four broad categories and thirteen detailed categories, representing the first use of these four modalities for building use classification. Experimental results demonstrate the effectiveness of our model, achieving a weighted average F1 score (WAF) of 91% for broad categories, 77% for detailed categories, and 84% for hierarchical categories; the macro average F1 scores (MAF) are 81%, 48%, and 56%, respectively. Ablation experiments highlight RS data as the cornerstone of hierarchical building use classification. DSM and POI provide slight supplementary information, while SVI data may introduce more noise than useful information. Our analysis of hierarchy consistency, supplementarity, and exclusiveness between broad and detailed categories shows that our model can effectively learn these relations. We compared two ways to obtain broad categories: classifying them directly, and scaling up detailed categories by associating them with their broad counterparts. Experiments show that the WAF and MAF of the former are 3.8% and 6% higher than those of the latter. Notably, our research visualizes the attention of the models for the different modalities, revealing the synergy among them. Despite the model's emphasis on SVI and POI data, the critical role of RS and DSM in hierarchical building use classification is underscored. By considering hierarchical use categories and accommodating mixed-use scenarios, our method provides more accurate and comprehensive insights into land use patterns.
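The multi-label treatment can be illustrated with a minimal sigmoid prediction head: each of the 4 broad and 13 detailed categories gets an independent presence probability, so a mixed-use building can activate several classes at once. Dimensions and the 0.5 threshold are assumptions, not the authors' network.

```python
# Minimal PyTorch illustration of multi-label building-use prediction.
import torch
import torch.nn as nn

fused_dim, n_broad, n_detailed = 512, 4, 13
head = nn.Linear(fused_dim, n_broad + n_detailed)
criterion = nn.BCEWithLogitsLoss()            # one binary decision per class

features = torch.randn(8, fused_dim)          # placeholder fused RS/DSM/SVI/POI features
targets = torch.randint(0, 2, (8, n_broad + n_detailed)).float()
loss = criterion(head(features), targets)

probs = torch.sigmoid(head(features))         # presence probability per category
present = probs > 0.5                         # a building may belong to several classes
print(loss.item(), present.shape)
```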

How to cite: Zhou, W., Persello, C., and Stein, A.: Building hierarchical use classification based on multiple data sources with a multi-label multimodal transformer network, EGU General Assembly 2024, Vienna, Austria, 14–19 Apr 2024, EGU24-12472, https://doi.org/10.5194/egusphere-egu24-12472, 2024.

X4.94 | EGU24-12615 | ECS
Wildness estimation of large rivers
(withdrawn)
Shuo Zong, Théophile Sanchez, Nicolas Mouquet, and Loïc Pellissier
X4.95 | EGU24-12745 | ECS
Wanting Yang, Xiaoye Tong, Sizhuo Li, Daniel Ortiz Gonzalo, and Rasmus Fensholt

Shifting cultivation, in which primary or secondary forest plots are converted to agriculture for one to two years and then left fallow, is often deemed responsible for tropical deforestation. However, the general attribution of deforestation to areas under shifting cultivation is debatable if one also considers forest regrowth during the fallow phase, an essential part of a mature shifting cultivation system. Yet little is known about the extent of small cropped fields and fallow stages, which is needed to derive information about the temporal development between small cropped fields and fallow in shifting cultivation landscapes.

The primary objective of our study is to develop a deep learning-based framework to quantify land use intensity in tropical forest nations such as the Democratic Republic of Congo (DRC) using 4.7-m multi-temporal Planet Basemaps from 2015 to 2023. By employing a convolutional neural network image classification model, we first identified the shifting cultivation landscapes. Secondly, utilizing two-phase imagery, we delve into the temporal development of shifting cultivation, determining whether the landscape continues to be characterized by this practice. Thirdly, the shifting cultivation landscapes were segmented into cropped fields, young fallow, old fallow and old-growth forest/primary forest. Lastly, we used a deep learning regression model to quantify the intensity of shifting cultivation within identified areas. This last step adds depth to our analysis, by offering nuanced insights into the varying practices associated with shifting cultivation practices. Our study in DRC offers a detailed spatio-temporal dataset of the dynamics of shifting cultivation serving as a stepping stone to better understand its impacts on forest loss.

How to cite: Yang, W., Tong, X., Li, S., Ortiz Gonzalo, D., and Fensholt, R.: Mapping the extent and land use intensity of shifting cultivation with Planet Scope imagery and deep learning in the Democratic Republic of Congo, EGU General Assembly 2024, Vienna, Austria, 14–19 Apr 2024, EGU24-12745, https://doi.org/10.5194/egusphere-egu24-12745, 2024.

X4.96 | EGU24-15347 | ECS
Shahabaldin Shojaeezadeh, Abdelrazek Elnashar, and Tobias Karl David Weber

Monitoring crop growth and development is important for agricultural management and for policy interventions enhancing food security worldwide. Traditional methods of examining crop phenology (the timing of plant growth stages) at large scales are often not sufficiently accurate to support informed decisions about crops. In this study, we propose an approach that uses a satellite data fusion and Machine Learning (ML) modeling framework to predict the phenology of eight major crops at field scale (30 meters) across all of Germany. The observed phenology used in this study is based on a citizen science dataset of phenological observations covering all of Germany. By fusing optical data from Landsat and Sentinel-2 images with radar data from Sentinel-1, our method effectively captures information from each publicly available remote sensing source, resulting in precise estimates of phenology timing. A fusion analysis showed that combining optical and radar images improves the ML model's ability to predict phenology with high accuracy, with R2 > 0.95 and a mean absolute error of less than 2 days for all crops. Further analysis of uncertainties confirmed that adding radar data to optical images improves the reliability of satellite-based predictions of crop phenology. These improvements are expected to be useful for crop model calibration, to facilitate informed agricultural decisions, and to contribute to sustainable food production addressing the increasing global food demand.
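A generic stand-in for the fusion-plus-ML step is sketched below: per-field optical and radar time-series features are concatenated and regressed against an observed phenology date (day of year). The random-forest choice and all data are placeholders; the abstract does not specify the model used.

```python
# Schematic fusion of optical and radar features for phenology regression.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split

n_fields = 2000
optical = np.random.rand(n_fields, 24)        # placeholder Landsat/Sentinel-2 features
radar = np.random.rand(n_fields, 24)          # placeholder Sentinel-1 backscatter features
X = np.hstack([optical, radar])               # data fusion by concatenation
y = np.random.uniform(100, 160, n_fields)     # observed phenology DOY (placeholder)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print("MAE (days):", mean_absolute_error(y_te, rf.predict(X_te)))
```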

How to cite: Shojaeezadeh, S., Elnashar, A., and Weber, T. K. D.: Estimating Crop Phenology from Satellite Data using Machine Learning, EGU General Assembly 2024, Vienna, Austria, 14–19 Apr 2024, EGU24-15347, https://doi.org/10.5194/egusphere-egu24-15347, 2024.

X4.97 | EGU24-18546
Xin Xu, Xiaowei Tong, Martin Brandt, Yuemin Yue, Maurice Mugabowindekwe, Sizhuo Li, and Rasmus Fensholt

Because forests provide essential ecosystem goods and services, forest monitoring has attracted considerable attention within the academic community. However, the majority of remote sensing studies covering large areas focus primarily on tree cover, owing to resolution limitations, and it is necessary to integrate innovative spatial methods and tools into the monitoring of forest ecosystems. Forest structure complexity, representing the spatial heterogeneity within forest structures, plays a pivotal role in ecosystem processes and functions. In this study, we use multi-spectral remote sensing imagery to extract the crown information of single trees through deep learning; subsequently, we analyze the relationship between each tree and its neighboring trees and explore structural characteristics at the tree level. Finally, we developed a canopy structural complexity index and applied it to Nordic forests, urban areas, savanna, rainforest, and the highly complex tree plantations and natural forests of the China Karst region. This study aims to deepen the understanding of forest structure complexity in diverse ecosystems and to provide valuable information for sustainable forestry management and ecosystem conservation. The method developed here eliminates the need for additional field measurements and radar data, offering robust tool support for extensive and efficient monitoring of forest structure complexity, with wide application prospects.

How to cite: Xu, X., Tong, X., Brandt, M., Yue, Y., Mugabowindekwe, M., Li, S., and Fensholt, R.: A novel index for forest structure complexity mapping from single multispectral images, EGU General Assembly 2024, Vienna, Austria, 14–19 Apr 2024, EGU24-18546, https://doi.org/10.5194/egusphere-egu24-18546, 2024.

X4.98 | EGU24-18679 | ECS
Gyula Mate Kovács, Xiaoye Tong, Dimitri Gominski, Stefan Oehmcke, Stéphanie Horion, and Rasmus Fensholt

Wetlands are crucial carbon sinks for climate change mitigation, yet historical land use changes have resulted in carbon losses and increased CO2 emissions. To combat this, the European Union aims to restore 30% of degraded wetlands in Europe by 2030. However, comprehensive continental-scale inventories are essential for prioritizing restoration and assessing high carbon stock wetlands, revealing the inadequacy of existing datasets. Leveraging 10-meter satellite data and machine learning, our study achieved 94±0.5% accuracy in mapping six wetland types across Europe in 2018. Our analysis identifies that over 40% of European wetlands experience anthropogenic disturbances, with 32.7% classified as highly disturbed due to urban and agricultural activities. Country-level assessments highlight an uneven distribution of restoration needs, emphasizing the urgent importance of data-informed approaches for meaningful restoration. This study underscores the critical need to address land use impact to preserve and enhance wetland carbon storage capabilities.

How to cite: Kovács, G. M., Tong, X., Gominski, D., Oehmcke, S., Horion, S., and Fensholt, R.: Large-scale satellite mapping unveils uneven wetland restoration needs across Europe, EGU General Assembly 2024, Vienna, Austria, 14–19 Apr 2024, EGU24-18679, https://doi.org/10.5194/egusphere-egu24-18679, 2024.

X4.99 | EGU24-19198 | ECS
Niklas Sprengel, Martin Hermann Paul Fuchs, and Begüm Demir

Advances in hyperspectral imaging have led to a significant increase in the volume of hyperspectral image archives. Therefore, the development of efficient and effective hyperspectral image compression methods is an important research topic in remote sensing. Recent studies show that learning-based compression methods are able to preserve the reconstruction quality of images at lower bitrates compared to traditional methods [1]. Existing learning-based image compression methods usually employ spatial compression per image band or for all bands jointly. However, hyperspectral images contain a high amount of spectral correlation, which necessitates more complex compression architectures that can reduce both spatial and spectral correlations for a more efficient compression. To address this problem, we propose a novel Spatio-Spectral Compression Network (S2C-Net).

S2C-Net is a flexible architecture for hyperspectral image compression that exploits both spatial and spectral dependencies of hyperspectral images. It combines different spectral and spatial autoencoders into a joint model. To this end, a learning-based pixel-wise spectral autoencoder is initially pre-trained. Then, a spatial autoencoder network is added into the bottleneck of the spectral autoencoder for further compression of the spatial correlations. This is done by applying the spatial autoencoder to the output of the spectral encoder and then applying the spectral decoder to the output of the spatial autoencoder. The model is then trained using a novel mixed loss function that combines the losses of the spectral and the spatial model. Since the spatial model is applied to the output of the spectral encoder, spatial compression methods optimised for 2D image compression can be used in S2C-Net in the context of hyperspectral image compression.

In the experiments, we have evaluated S2C-Net on HySpecNet-11k, a large-scale hyperspectral image dataset [2]. Experimental results show that S2C-Net outperforms both spectral and spatial state-of-the-art compression methods for bitrates lower than 1 bit per pixel per channel (bpppc). Specifically, it can achieve lower distortion for similar compression rates and offers the possibility to reach much higher compression rates with only slightly reduced reconstruction quality.
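The nesting described above can be sketched schematically in PyTorch: a 1×1 convolution acts as a pixel-wise spectral autoencoder, a strided convolution compresses its bottleneck spatially, and decoding reverses both. Layer sizes, band count, and the loss weighting are illustrative assumptions, not the S2C-Net configuration.

```python
# Schematic nesting of a spatial autoencoder inside a spectral bottleneck.
import torch
import torch.nn as nn

B, C_lat = 202, 16                                   # bands, spectral latent size (assumed)
spec_enc = nn.Conv2d(B, C_lat, kernel_size=1)        # 1x1 conv = per-pixel spectral encoder
spec_dec = nn.Conv2d(C_lat, B, kernel_size=1)
spat_enc = nn.Conv2d(C_lat, C_lat, 3, stride=2, padding=1)          # spatial downsampling
spat_dec = nn.ConvTranspose2d(C_lat, C_lat, 4, stride=2, padding=1) # spatial upsampling

x = torch.randn(1, B, 64, 64)                        # one hyperspectral patch
z_spec = spec_enc(x)                                 # spectral compression
z = spat_enc(z_spec)                                 # spatial compression of the bottleneck
x_hat = spec_dec(spat_dec(z))                        # decode: spatial first, then spectral

# Mixed loss combining spectral-only and full spatio-spectral reconstruction.
loss = nn.functional.mse_loss(x_hat, x) + nn.functional.mse_loss(spec_dec(z_spec), x)
print(x_hat.shape, loss.item())
```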

References
[1] F. Zhang, C. Chen, and Y. Wan, “A survey on hyperspectral remote sensing image compression,” in IEEE IGARSS, 2023, pp. 7400–7403.
[2] M. H. P. Fuchs and B. Demir, “HySpecNet-11k: A large-scale hyperspectral dataset for benchmarking learning-based hyperspectral image compression methods,” in IEEE IGARSS, 2023, pp. 1779–1782.

How to cite: Sprengel, N., Fuchs, M. H. P., and Demir, B.: Learning-Based Hyperspectral Image Compression Using A Spatio-Spectral Approach, EGU General Assembly 2024, Vienna, Austria, 14–19 Apr 2024, EGU24-19198, https://doi.org/10.5194/egusphere-egu24-19198, 2024.

Posters virtual: Mon, 15 Apr, 14:00–15:45 | vHall X4

Display time: Mon, 15 Apr, 08:30–Mon, 15 Apr, 18:00
Chairperson: Nouri Sabo
vX4.21 | EGU24-7160 | ECS
Monitoring global 30-m impervious surface changes from 1985 to 2020 using dense time-series Landsat imagery
(withdrawn after no-show)
Xiao Zhang