ESSI1.1 | Strategies and Applications of AI and ML in a Spatiotemporal Context
Co-organized by CL5/GI2/NP4/PS1
Convener: Christopher Kadow | Co-conveners: Jens Klump, Hanna Meyer
PICO | Wed, 26 Apr, 14:00–15:45 (CEST) | PICO spot 2
Modern challenges in climate change, disaster management, public health and safety, resource management, and logistics can only be addressed through big-data analytics. A variety of modern technologies generate massive volumes of conventional and non-conventional geospatial data at local and global scales. Most of these data include a geospatial component and are analysed with spatial algorithms. Ignoring the geospatial component of big data can lead to misinterpretation of the extracted information. This gap has been recognised and has led to the development of new spatiotemporally aware strategies and methods.
This session discusses advances in spatiotemporal machine learning methods and the software and infrastructures that support them.

PICO: Wed, 26 Apr | PICO spot 2

14:00–14:05
AI/ML, Spatial Statistics, and Model Evaluation
14:05–14:07 | PICO2.1 | EGU23-8479 | ECS | On-site presentation
Lily-belle Sweet, Christoph Müller, Mohit Anand, and Jakob Zscheischler

Machine learning models are able to capture highly complex, nonlinear relationships and have been used in recent years to accurately predict crop yields at regional and national scales. This success suggests that the use of ‘interpretable’ or ‘explainable’ machine learning (XAI) methods may facilitate improved scientific understanding of the compounding interactions between climate, crop physiology and yields. However, studies have identified implausible, contradictory or ambiguous results from the use of these methods. At the same time, researchers in fields such as ecology and remote sensing have called attention to issues with robust model evaluation on spatiotemporal datasets. This suggests that XAI methods may produce misleading results when applied to spatiotemporal datasets, but the impact of model evaluation strategy on the results of such methods has not yet been examined.

In this study, machine learning models are trained to predict simulated crop yield, and the impact of model evaluation strategy on the interpretation and performance of the resulting models is assessed. Using data from a process-based crop model allows us to then comment on the plausibility of the explanations provided by common XAI methods. Our results show that the choice of evaluation strategy has an impact on (i) the interpretations of the model using common XAI methods such as permutation feature importance and (ii) the resulting model skill on unseen years and regions. We find that use of a novel cross-validation strategy based on clustering in feature-space results in the most plausible interpretations. Additionally, we find that the use of this strategy during hyperparameter tuning and feature selection results in improved model performance on unseen years and regions. Our results provide a first step towards the establishment of best practices for model evaluation strategy in similar future studies.
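
As a minimal illustration of the kind of evaluation strategy described above (not the authors' code; data and predictors are synthetic placeholders), the sketch below builds cross-validation folds from k-means clusters in feature space and computes permutation feature importance on the held-out folds:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance
from sklearn.model_selection import GroupKFold

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 6))                              # placeholder climate/soil predictors
y = X[:, 0] * X[:, 1] + rng.normal(scale=0.1, size=500)    # synthetic "yield" with an interaction

# Cross-validation groups defined by clustering the samples in feature space
groups = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(X)

for train_idx, test_idx in GroupKFold(n_splits=5).split(X, y, groups):
    model = RandomForestRegressor(n_estimators=200, random_state=0)
    model.fit(X[train_idx], y[train_idx])
    # Permutation feature importance evaluated on the held-out cluster
    imp = permutation_importance(model, X[test_idx], y[test_idx],
                                 n_repeats=10, random_state=0)
    print(np.round(imp.importances_mean, 3))
```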

How to cite: Sweet, L., Müller, C., Anand, M., and Zscheischler, J.: Model evaluation strategy impacts the interpretation and performance of machine learning models, EGU General Assembly 2023, Vienna, Austria, 23–28 Apr 2023, EGU23-8479, https://doi.org/10.5194/egusphere-egu23-8479, 2023.

14:07–14:09 | PICO2.2 | EGU23-16096 | ECS | On-site presentation
Jens Heinke, Christoph Müller, and Dieter Gerten

Machine learning algorithms have become popular tools for the analysis of spatial data. However, a number of studies have demonstrated that the application of machine learning algorithms in a spatial context has limitations. New geographic locations may lie outside the data range for which the model was trained, and estimates of model performance may be too optimistic when spatial autocorrelation of geographic data is not properly accounted for in cross-validation. Here we use artificially created spatial data fields to conduct a series of experiments that further investigate the potential pitfalls of random forest regression applied to spatial data. We provide new insights into previously reported limitations and identify additional ones. We demonstrate that the same mechanism that leads to overoptimistic estimates of model performance (when based on ordinary random k-fold cross-validation) can also lead to a deterioration of model performance. When covariates contain sufficient information to deduce spatial coordinates, the model can reproduce any spatial pattern in the training data even if it is entirely or partly unrelated to the covariates. The presence of spatially correlated residuals in the training data changes how the model utilizes the information of the covariates and impedes the identification of the actual relationship between covariates and response. This reduces model performance when the model is applied to data with a different spatial structure. Under such conditions, machine learning methods that are sufficiently flexible to fit to autocorrelated residuals (such as random forest) may not be an optimal choice. Better models may be obtained using less flexible but more transparent approaches such as generalized linear models or additive models.
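
A minimal sketch of the mechanism discussed above, using synthetic autocorrelated fields rather than the authors' experiments: a random forest whose covariates include the coordinates is scored with ordinary random k-fold cross-validation and with spatially blocked cross-validation, and the random-CV score is typically the more optimistic one:

```python
import numpy as np
from scipy.ndimage import gaussian_filter
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import GroupKFold, KFold, cross_val_score

rng = np.random.default_rng(1)
n = 60                                                     # 60 x 60 synthetic grid
xx, yy = np.meshgrid(np.arange(n), np.arange(n))
cov = gaussian_filter(rng.normal(size=(n, n)), sigma=5)    # smooth covariate signal
resid = gaussian_filter(rng.normal(size=(n, n)), sigma=10) # spatially correlated residual
y = (cov + resid).ravel()
X = np.column_stack([cov.ravel(), xx.ravel(), yy.ravel()]) # covariate plus coordinates

rf = RandomForestRegressor(n_estimators=100, random_state=0)
random_cv = cross_val_score(rf, X, y, cv=KFold(n_splits=5, shuffle=True, random_state=0))
blocks = ((xx // 20) * 3 + yy // 20).ravel()               # 3 x 3 spatial blocks as CV groups
block_cv = cross_val_score(rf, X, y, cv=GroupKFold(n_splits=5), groups=blocks)
print(random_cv.mean(), block_cv.mean())                   # random CV is typically over-optimistic
```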

How to cite: Heinke, J., Müller, C., and Gerten, D.: Limitations of machine learning in a spatial context, EGU General Assembly 2023, Vienna, Austria, 23–28 Apr 2023, EGU23-16096, https://doi.org/10.5194/egusphere-egu23-16096, 2023.

14:09–14:11 | PICO2.3 | EGU23-9437 | On-site presentation
Mikhail Kanevski

Predictive learning from data is usually formulated as the problem of finding the best mapping between input and output spaces by optimizing well-defined cost or risk functions.

In geo-environmental studies, the input space is usually constructed from geographical coordinates and from features generated from different sources of available information (feature engineering), by applying expert knowledge, using deep learning technologies and taking into account the objectives of the study. Often it is not known in advance whether the input space is complete or contains redundant features. Therefore, unsupervised learning (UL) is essential in environmental data analysis, modelling, prediction and visualization. UL also helps in better understanding the data and the phenomena they describe, as well as in interpreting and communicating modelling strategies and results in the decision-making process.

The main objective of the present investigation is to review some important topics in unsupervised learning from environmental data: 1) quantitative description of the input space (“monitoring network”) structure using global and local topological and fractal measures, 2) dimensionality reduction, 3) unsupervised feature selection and clustering by applying a variety of machine learning algorithms (kernel-based, ensemble learning, self-organizing maps) and visualization tools.

Major attention is paid to simulated and real spatial data (pollution, permafrost, geomorphological and wind-field data). The case studies considered differ in input-space dimensionality, topology and number of measurements. It is confirmed that UL should be considered an integral part of a generic methodology for environmental data analysis. Comprehensive comparisons and discussion of the results conclude the study.
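
Purely as an illustration of the building blocks named above (random placeholder features, not the author's data or workflow), a short sketch chaining standardization, dimensionality reduction and clustering, with a silhouette score as a simple diagnostic:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA
from sklearn.metrics import silhouette_score
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(2)
X = rng.normal(size=(300, 8))                    # 300 monitoring sites x 8 placeholder features

Xs = StandardScaler().fit_transform(X)           # put features on a comparable scale
Z = PCA(n_components=3).fit_transform(Xs)        # dimensionality reduction
labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(Z)
print("silhouette score:", round(silhouette_score(Z, labels), 3))
```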

How to cite: Kanevski, M.: On Unsupervised Learning from Environmental Data, EGU General Assembly 2023, Vienna, Austria, 23–28 Apr 2023, EGU23-9437, https://doi.org/10.5194/egusphere-egu23-9437, 2023.

14:11–14:13 | PICO2.4 | EGU23-3267 | ECS | On-site presentation
Ximeng Cheng, Jost Arndt, Emilia Marquez, and Jackie Ma

New models are emerging from Artificial Intelligence (AI) and its sub-fields, in particular Machine Learning and Deep Learning, and are being applied in different areas, including geography (e.g., land cover identification and traffic volume forecasting based on spatial data). Unlike the well-known datasets often used to develop AI models (e.g., ImageNet for image classification), spatial data have an intrinsic feature, spatial heterogeneity, which means that the relationship between the independent variables (the model input X) and the dependent variable (the model output Y) varies across regions. This makes it difficult to conduct large-scale studies with a single robust AI model. In this study, we draw on the idea of modular learning, i.e., decomposing a large-scale task into sub-tasks for specific sub-regions and using multiple AI models to solve them. The decomposition is based on spatial characteristics, so that the relationship between independent and dependent variables is similar within each sub-region. We explore this approach for forecasting COVID-19 cases in Germany using spatiotemporal data (e.g., weather data and human mobility data) and compare a single-model approach with the proposed decomposition learning procedure in terms of accuracy and efficiency. This study is part of the DAKI-FWS project, funded by the Federal Ministry for Economic Affairs and Climate Action in Germany, which develops an early warning system to help stabilise the German economy.
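
A minimal sketch of the decomposition idea (synthetic data and hypothetical region traits, not the study's COVID-19 pipeline): regions are clustered by their spatial characteristics, one model is fitted per cluster, and the result is compared against a single global model:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)
n_regions = 400
traits = rng.normal(size=(n_regions, 3))                      # e.g. density, mobility, climate (hypothetical)
X = rng.normal(size=(n_regions, 5))                           # per-region predictors
w = traits[:, :1] * np.array([[1.0, -1.0, 0.5, 0.0, 0.0]])    # X-y relationship varies with the traits
y = (X * w).sum(axis=1) + rng.normal(scale=0.1, size=n_regions)

clusters = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(traits)
idx_tr, idx_te = train_test_split(np.arange(n_regions), random_state=0)

preds = np.empty(len(idx_te))
for c in np.unique(clusters):
    tr = idx_tr[clusters[idx_tr] == c]                        # train one model per sub-region cluster
    te_mask = clusters[idx_te] == c
    model = GradientBoostingRegressor(random_state=0).fit(X[tr], y[tr])
    preds[te_mask] = model.predict(X[idx_te][te_mask])

single = GradientBoostingRegressor(random_state=0).fit(X[idx_tr], y[idx_tr])
print("single model MAE:", mean_absolute_error(y[idx_te], single.predict(X[idx_te])))
print("per-cluster MAE :", mean_absolute_error(y[idx_te], preds))
```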

How to cite: Cheng, X., Arndt, J., Marquez, E., and Ma, J.: Decomposition learning based on spatial heterogeneity: A case study of COVID-19 infection forecasting in Germany, EGU General Assembly 2023, Vienna, Austria, 23–28 Apr 2023, EGU23-3267, https://doi.org/10.5194/egusphere-egu23-3267, 2023.

14:13–14:15 | PICO2.5 | EGU23-16768 | Virtual presentation
Armita Davarpanah, Anthony L. Nguy Robertson, Monica Lipscomb, Jacob W. McCord, and Amy Morris

Levee systems are designed to reduce the risk of water-related natural hazards (e.g., flooding) in areas behind levees. Most levees in the U.S. are designed to protect people and facilities against the impacts of 100-year floods. However, climate change is increasing the probability of 500-year flood events, which in turn increases the likelihood of economic loss, environmental damage, and fatalities that disproportionately impact communities of color and low-income groups facing socio-economic inequities in leveed areas. The increased frequency and intensity of flooding puts extra pressure on emergency responders, who often require diverse, multi-dimensional data originating from different sources to make sound decisions. Currently, integrating these heterogeneous data, acquired by diverse sensors and emergency agencies, on environmental, hydrological, and demographic indicators requires costly and complex programming and analysis that hinders rapid disaster management efforts. Our domain Levee System Ontology (LSO) addresses these data integration and software interoperability issues by semantically modeling the static aspects, dynamic processes, and information content of levee systems, extending the well-structured, top-level Basic Formal Ontology (BFO) and mid-level Common Core Ontologies (CCO). LSO's class and property names follow the terminology of the National Levee Database (NLD), allowing data scientists using NLD data to constrain their classifications based on the knowledge represented in LSO. In addition to modeling information related to the characteristics and status of the structural components of the levee system, LSO represents the residual risk in leveed areas, economic and environmental losses, and damage to facilities in case of breaching and/or overtopping of levees. LSO enables reasoning to infer the components and places along levees and floodwalls where the system requires inspection, maintenance, and repair based on the status of system components. The ontology also represents the impact of flood management activities on different groups of people from an environmental justice perspective, based on the principles of DEI (diversity, equity, inclusion) as defined by the U.N. Sustainable Development Goals.
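
For readers unfamiliar with ontology tooling, the toy sketch below builds a few RDF triples with rdflib; the namespace, class and property names are hypothetical illustrations, not the actual LSO, BFO or CCO terms:

```python
from rdflib import Graph, Literal, Namespace, RDF, RDFS

EX = Namespace("https://example.org/levee#")     # placeholder namespace, not the LSO IRI
g = Graph()
g.bind("ex", EX)

# A tiny class hierarchy and one instance with an inspection-status property
g.add((EX.LeveeSegment, RDF.type, RDFS.Class))
g.add((EX.Floodwall, RDFS.subClassOf, EX.LeveeSegment))
g.add((EX.segment42, RDF.type, EX.Floodwall))
g.add((EX.segment42, EX.inspectionStatus, Literal("maintenance required")))

print(g.serialize(format="turtle"))
```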

How to cite: Davarpanah, A., Nguy Robertson, A. L., Lipscomb, M., McCord, J. W., and Morris, A.: Knowledge Representation of Levee Systems - an Environmental Justice Perspective, EGU General Assembly 2023, Vienna, Austria, 23–28 Apr 2023, EGU23-16768, https://doi.org/10.5194/egusphere-egu23-16768, 2023.

AI and ML in Remote Sensing
14:15–14:17 | PICO2.6 | EGU23-14716 | ECS | Virtual presentation
Jayan Wijesingha, Ilze Dzene, and Michael Wachendorf

To assess the impact of anthropogenic and natural causes on land use and land cover change, mapping of spatial and temporal changes is increasingly applied. Thanks to the availability of satellite image archives, machine learning models based on remote sensing (RS) data are particularly suitable for mapping and analysing land use and land cover changes. Most often, models trained with current RS data are employed to estimate past land cover and land use from available RS data, under the assumption that the trained model predicts past data with an accuracy similar to that achieved for present data. However, machine learning models trained on RS data from particular locations and times may not transfer well to new locations and time periods for various reasons. This study aims to assess the spatial-temporal transferability of RS data models in the context of agricultural land use mapping. The study was designed to map agricultural land use (5 classes: maize, grasslands, summer crops, winter crops, and mixed crops) in two regions in Germany (North Hesse and Weser Ems) between 2010 and 2018 using Landsat archive data (i.e., Landsat 5, 7, and 8). Three model transferability scenarios were evaluated: a) temporal (S1), b) spatial (S2), and c) spatial-temporal (S3). Two machine learning models (random forest, RF, and a Convolutional Neural Network, CNN) were trained. For each transferability scenario, class-level F1 and macro F1 values were compared between the reference setting and the transfer setting. Moreover, to explain the results of the transferability scenarios, they were further explored using the dissimilarity index and area of applicability (AOA) concepts. The average macro F1 value of the trained model for the reference scenario (no transferability) was 0.75. For the assessed transferability scenarios, the average macro F1 values were 0.70, 0.65 and 0.60 for S1, S2, and S3, respectively. This shows that model performance decreases when predicting data from a different spatial-temporal context. In contrast, the average proportion of the data inside the AOA did not show a clear pattern across scenarios. In the context of RS-based model building, spatial-temporal transferability is essential because of the limited availability of labelled data. Thus, the results from this case study provide an understanding of how model performance changes when a model is transferred to new settings with data from different temporal and spatial domains.
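
A minimal sketch of the transferability experiment layout (synthetic features standing in for the Landsat data, with an artificial domain shift; not the authors' code): a classifier trained on a reference region and year is scored with macro F1 under temporal, spatial and spatial-temporal transfer:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import f1_score

rng = np.random.default_rng(4)

def make_domain(shift):
    # Crude stand-in for a region/year: class-dependent features plus a domain shift
    X = rng.normal(size=(400, 6)) + shift
    y = rng.integers(0, 5, size=400)            # five land-use classes
    X[np.arange(400), y] += 2.0                 # make the classes separable
    return X, y

X_ref, y_ref = make_domain(shift=0.0)           # reference region and year
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_ref, y_ref)

for name, shift in {"temporal": 0.3, "spatial": 0.6, "spatial-temporal": 0.9}.items():
    X_t, y_t = make_domain(shift)
    print(name, round(f1_score(y_t, clf.predict(X_t), average="macro"), 2))
```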

How to cite: Wijesingha, J., Dzene, I., and Wachendorf, M.: Spatial-temporal transferability assessment of remote sensing data models for mapping agricultural land use, EGU General Assembly 2023, Vienna, Austria, 23–28 Apr 2023, EGU23-14716, https://doi.org/10.5194/egusphere-egu23-14716, 2023.

14:17–14:19 | PICO2.7 | EGU23-11601 | On-site presentation
Francesco Nattino, Ou Ku, Meiert W. Grootes, Emma Izquierdo-Verdiguier, Serkan Girgin, and Raúl Zurita-Milla

Unsupervised classification techniques are becoming essential to extract information from the wealth of data that Earth observation satellites and other sensors currently provide. These datasets are inherently complex to analyze due to their extent across multiple dimensions (spatial, temporal, and often a spectral or band dimension), their size, and the high resolution of current sensors. Traditional one-dimensional cluster analysis approaches, which are designed to find groups of similar elements in datasets such as rasters or time series, may come short of identifying patterns in these higher-dimensional datasets, often referred to as data cubes. In this context, we present our Clustering Geodata Cubes (CGC) software, an open-source Python package that implements a set of co- and tri-clustering algorithms to simultaneously group elements across two and three dimensions, respectively. The package includes different implementations to efficiently tackle both datasets that fit into the memory of a single machine and very large datasets that require cluster computing. A refining strategy to facilitate data pattern identification is also provided. We apply CGC to investigate gridded datasets representing the predicted day of the year when spring onset events (first leaf, first bloom) occur according to a well-established phenological model. Specifically, we consider spring indices computed at high spatial resolution (1 km) and continental scale (conterminous United States) for the last 40+ years and extract the main spatiotemporal patterns present in the data via the CGC co-clustering functionality.
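
The snippet below is not the CGC API but a generic illustration of the co-clustering idea on a space × time matrix, alternating k-means on rows and columns until the assignments settle:

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(5)
Z = rng.normal(size=(200, 40))                  # e.g. 200 grid cells x 40 years

n_row, n_col = 4, 3
col_labels = KMeans(n_clusters=n_col, n_init=10, random_state=0).fit_predict(Z.T)
for _ in range(10):
    # Cluster rows on their per-column-cluster means, then columns on per-row-cluster means
    row_feat = np.column_stack([Z[:, col_labels == c].mean(axis=1) for c in range(n_col)])
    row_labels = KMeans(n_clusters=n_row, n_init=10, random_state=0).fit_predict(row_feat)
    col_feat = np.column_stack([Z[row_labels == r].mean(axis=0) for r in range(n_row)])
    new_cols = KMeans(n_clusters=n_col, n_init=10, random_state=0).fit_predict(col_feat)
    if np.array_equal(new_cols, col_labels):    # stop once the co-cluster assignments settle
        break
    col_labels = new_cols

print(np.bincount(row_labels), np.bincount(col_labels))
```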

How to cite: Nattino, F., Ku, O., Grootes, M. W., Izquierdo-Verdiguier, E., Girgin, S., and Zurita-Milla, R.: Clustering Geodata Cubes (CGC) and Its Application to Phenological Datasets, EGU General Assembly 2023, Vienna, Austria, 23–28 Apr 2023, EGU23-11601, https://doi.org/10.5194/egusphere-egu23-11601, 2023.

14:19–14:21 | PICO2.8 | EGU23-2843 | ECS | On-site presentation
Lukas Kondmann, Caglar Senaras, Yuki M. Asano, Akhil Singh Rana, Annett Wania, and Xiao Xiang Zhu

Increasing coverage of commercial and public satellites allows us to monitor the pulse of the Earth at ever-shorter intervals (Zhu et al., 2017). Together with the rise of deep learning in artificial intelligence (AI) (LeCun et al., 2015), the field of AI for Earth Observation (AI4EO) is growing rapidly. However, many supervised deep learning techniques are data-hungry, which means that annotated data in large quantities are necessary to help these algorithms reach their full potential. In many Earth Observation applications such as change detection, this is often infeasible because high-quality annotations require manual labeling, which is time-consuming and costly.

Self-supervised learning (SSL) can help tackle the issue of limited label availability in AI4EO. In SSL, an algorithm is pretrained on tasks that only require the input data without annotation. Notably, Masked Autoencoders (MAE), in which a Vision Transformer learns to reconstruct a full image from only 25% of it as input, have recently shown promising performance. We hypothesize that the success of MAEs also extends to satellite imagery and evaluate this with a change detection downstream task. In addition, we provide a multitemporal DINO baseline, another widely successful SSL method. Further, we test a second version of MAEs, which we call GeoMAE. GeoMAE incorporates the location and date of the satellite image as auxiliary information in self-supervised pretraining. The coordinate and date information is passed to the MAE model as additional tokens, similar to the positional encoding.
The pretraining dataset used is the RapidAI4EO corpus, which contains multi-temporal Planet Fusion imagery for a variety of locations across Europe. The dataset for the downstream task also uses Planet Fusion pairs as input data. These are provided at a 600 m × 600 m patch level, three months apart, together with a label indicating whether the respective patch changed in this period. Self-supervised pretraining is done for up to 150 epochs, and we take the model with the best validation performance on the downstream task as the starting point for the test set.

We find that the regular MAE model scores best on the test set with an accuracy of 81.54%, followed by DINO with 80.63% and GeoMAE with 80.02%. Pretraining the MAE with ImageNet data instead of satellite images results in a notable performance loss, down to 71.36%. Overall, our current pretraining experiments cannot yet confirm our hypothesis that GeoMAE is advantageous compared to the regular MAE. However, in a similar spirit, Cong et al. (2022) recently introduced SatMAE, which shows that for other remote sensing applications the combination of auxiliary information and novel masking strategies is a key factor. Therefore, a combination of location and time inputs together with adapted masking may also hold the most potential for change detection. There is ample potential for future research into geo-specific applications of MAEs, and our experimental results for change detection provide a starting point.
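
A conceptual numpy sketch of the pretraining input described above (toy image and hypothetical metadata, not the authors' model): patches are sampled as in MAE-style masking, and extra tokens encoding location and acquisition date are prepended, in the spirit of the GeoMAE variant:

```python
import numpy as np

rng = np.random.default_rng(6)
img = rng.random((96, 96, 3))                                # toy image with 3 bands
p = 16                                                       # patch size
patches = img.reshape(6, p, 6, p, 3).swapaxes(1, 2).reshape(36, -1)   # 36 patch tokens

keep = rng.permutation(36)[: int(0.25 * 36)]                 # MAE-style: keep ~25% of the patches
visible = patches[keep]

lat, lon, doy = 48.2, 16.4, 116                              # hypothetical location and day of year
aux = np.array([np.sin(np.radians(lat)), np.cos(np.radians(lon)),
                np.sin(2 * np.pi * doy / 365), np.cos(2 * np.pi * doy / 365)])
aux_token = np.pad(aux, (0, visible.shape[1] - aux.size))    # pad to the patch-token width
tokens = np.vstack([aux_token, visible])                     # encoder input: [geo/date token; visible patches]
print(tokens.shape)                                          # (10, 768)
```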

How to cite: Kondmann, L., Senaras, C., Asano, Y. M., Rana, A. S., Wania, A., and Zhu, X. X.: Geography-Aware Masked Autoencoders for Change Detection in Remote Sensing, EGU General Assembly 2023, Vienna, Austria, 23–28 Apr 2023, EGU23-2843, https://doi.org/10.5194/egusphere-egu23-2843, 2023.

14:21–14:23 | PICO2.9 | EGU23-12933 | ECS | On-site presentation
Daniel Kinalczyk, Christine Wessollek, and Matthias Forkel

Land ecosystems dampen the increase of atmospheric CO2 by storing carbon in soils and vegetation. To estimate how long carbon stays in land ecosystems, detailed knowledge of the distribution of carbon in different vegetation components is needed. Current Earth observation products provide estimates of total above-ground biomass but do not further separate carbon stored in trees, understory vegetation, shrubs, grass, litter or woody debris. Here we present an approach in which we link several Earth observation products with a ground-based database to estimate biomass in various vegetation components. To this end, we use information on the statistical distribution of biomass components provided by the North American Wildland Fuels Database (NAWFD), which is, however, not available as geocoded data. We use ESA CCI AGB version 3 data from 2010 as a proxy to link the NAWFD data to the spatial information from Earth observation products. The biomass and corresponding uncertainty from ESA CCI AGB and a map of vegetation types are used to select the likely distribution of vegetation biomass components from the set of in-situ measurements of tree biomass. We then apply Isolation Forest outlier detection and bootstrapping for a robust comparison of both datasets and for uncertainty estimation. We use Random Forest and Gaussian Process regression to predict the biomass of trees, shrubs, snags, herbaceous vegetation, coarse and fine woody debris, duff and litter from ESA CCI AGB and land cover, GEDI canopy height, Sentinel-3 LAI and bioclimatic data. The regression models reach high predictive power and also allow extrapolation to other regions. Our derived estimates of vegetation carbon stock components provide a more detailed view of land carbon storage and contribute to an improved estimate of potential carbon emissions from respiration, disturbances and fires.
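
A minimal sketch of the outlier-screening-plus-regression step (synthetic values and hypothetical predictor names, not the authors' data or pipeline): an Isolation Forest flags outliers, and a random forest then regresses a biomass component on the remaining samples:

```python
import numpy as np
from sklearn.ensemble import IsolationForest, RandomForestRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(7)
# Placeholder predictors: e.g. total AGB, canopy height, LAI, two bioclimatic variables
X = rng.normal(size=(1000, 5))
y = 0.4 * X[:, 0] + 0.2 * X[:, 1] + rng.normal(scale=0.1, size=1000)   # e.g. shrub biomass

inliers = IsolationForest(random_state=0).fit_predict(X) == 1          # 1 = inlier, -1 = outlier
rf = RandomForestRegressor(n_estimators=300, random_state=0)
print("CV R2 on screened samples:", cross_val_score(rf, X[inliers], y[inliers], cv=5).mean())
```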

How to cite: Kinalczyk, D., Wessollek, C., and Forkel, M.: Estimating vegetation carbon stock components by linking ground databases with Earth observations, EGU General Assembly 2023, Vienna, Austria, 23–28 Apr 2023, EGU23-12933, https://doi.org/10.5194/egusphere-egu23-12933, 2023.

AI and ML in Climate and Earth System Science
14:23–14:25 | PICO2.10 | EGU23-12773 | On-site presentation
Anton Sokolov, Hervé Delbarre, Daniil Boldyriev, Tetiana Bulana, Bohdan Molodets, and Dmytro Grabovets

Industrial pollution remains a major challenge in spite of recent technological developments and purification procedures. To monitor atmospheric contamination effectively, data from air quality networks should be coupled with advanced spatiotemporal statistical methods.

Our previous studies showed that standard interpolation techniques (such as inverse distance weighting, linear or spline interpolation, and kernel-based Gaussian Process Regression, GPR) are quite limited for simulating smoke-like, narrowly directed industrial pollution in the vicinity of the source (a few tens of kilometers). In this work, we apply GPR based on statistically estimated covariances. These covariances are calculated using the CALPUFF atmospheric pollution dispersion model for a one-year simulation in the Kryvyi Rih region. The application of GPR makes it possible to take into account the high correlations between pollution values at neighboring points revealed by the modeling. The results of the covariance-based GPR technique are compared with other interpolation techniques. The approach can then be used in the evaluation and optimization of air quality networks.
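
A conceptual sketch of the covariance-based prediction (synthetic numbers, not the CALPUFF output): station-to-station and station-to-target covariances are estimated empirically from simulated concentrations and plugged into the simple-kriging/GPR weights:

```python
import numpy as np

rng = np.random.default_rng(8)
n_stations, n_hours = 8, 2000

# Stand-in for modelled concentrations at the stations plus one unmonitored target point
common = rng.normal(size=(n_hours, 1))                       # shared plume-driven signal
sim = 0.8 * common + 0.6 * rng.normal(size=(n_hours, n_stations + 1))

C = np.cov(sim, rowvar=False)                                # empirical covariance matrix
K = C[:n_stations, :n_stations] + 1e-6 * np.eye(n_stations)  # station-station block (jittered)
k_star = C[:n_stations, -1]                                  # station-target covariances

obs = rng.normal(size=n_stations)                            # observed concentration anomalies
weights = np.linalg.solve(K, k_star)                         # simple-kriging / GPR weights
print("predicted anomaly at the target point:", float(weights @ obs))
```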

How to cite: Sokolov, A., Delbarre, H., Boldyriev, D., Bulana, T., Molodets, B., and Grabovets, D.: Industrial Atmospheric Pollution Estimation Using Gaussian Process Regression, EGU General Assembly 2023, Vienna, Austria, 23–28 Apr 2023, EGU23-12773, https://doi.org/10.5194/egusphere-egu23-12773, 2023.

14:25–14:27 | PICO2.11 | EGU23-4929 | On-site presentation
Ilaria Fava, Peter Thijsse, Gergely Sipos, and Dick Schaap

The iMagine project is devoted to developing and delivering imaging data and services for aquatic science. Started in September 2022, the project will provide a portfolio of image data collections, high-performance image analysis tools empowered with Artificial Intelligence, and best practice documents for scientific image analysis. These services and documentation will enable better and more efficient processing and analysis of imaging data in marine and freshwater research, accelerating scientific insights into processes and measures relevant to healthy oceans, seas, and coastal and inland waters. By building on the European Open Science Cloud compute platform, iMagine delivers a generic framework for AI model development, training, and deployment, which researchers can adopt for refining their AI-based applications for water pollution mitigation, biodiversity and ecosystem studies, climate change analysis and beach monitoring, but also for developing and optimising other AI-based applications in this field. The iMagine AI development and testing framework offers neural networks, parallel post-processing of extensive data, and analysis of massive online data streams in distributed environments. The synergies among the eight aquatic use cases in the project will lead to common solutions in data management, quality control, performance, integration, provenance, and FAIRness, and will contribute to harmonisation across research infrastructures (RIs). The resulting iMagine AI development and testing platform and the iMagine use case applications will add another component to the European marine data management landscape, relevant to the Digital Twin of the Ocean, EMODnet, Copernicus, and international initiatives.

How to cite: Fava, I., Thijsse, P., Sipos, G., and Schaap, D.: Using AI and ML to support marine science research, EGU General Assembly 2023, Vienna, Austria, 23–28 Apr 2023, EGU23-4929, https://doi.org/10.5194/egusphere-egu23-4929, 2023.

14:27–14:29 | PICO2.12 | EGU23-13196 | ECS | On-site presentation
Maximilian Witte, Danai Filippou, Étienne Plésiat, Johannes Meuer, Hannes Thiemann, David Hall, Thomas Ludwig, and Christopher Kadow

High resolution has always been a common and ongoing goal of the weather and climate community. In this regard, machine learning techniques have complemented numerical and statistical methods in recent years. Here we demonstrate that artificial intelligence can skilfully downscale low-resolution climate model data when combined with numerical climate model data. We show that a recently developed image inpainting technique performs accurate super-resolution via transfer learning using the HighResMIP experiments of CMIP6 (Coupled Model Intercomparison Project Phase 6). Their huge database offers a unique training opportunity for machine learning approaches. Transfer learning also allows downscaling other CMIP6 experiments and models, as well as observational data such as HadCRUT5. Combined with the technology of Kadow et al. (2020) for infilling missing climate data, we obtain a neural network that reconstructs and downscales this important observational dataset (IPCC AR6) at the same time. We further investigate the application of our method to downscaling quantities predicted by a numerical ocean model (ICON-O) to improve computation times. In this process we focus on the ability of the model to predict eddies from low-resolution data.

An extension to:

Kadow, C., Hall, D.M. & Ulbrich, U. Artificial intelligence reconstructs missing climate information. Nature Geoscience 13, 408–413 (2020). https://doi.org/10.1038/s41561-020-0582-5
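
As a hedged illustration of the data preparation typical for such super-resolution experiments (toy fields, not the authors' pipeline), low-resolution/high-resolution training pairs can be built by block-averaging high-resolution fields:

```python
import numpy as np

rng = np.random.default_rng(9)
hi = rng.random((64, 128, 128))                  # 64 toy high-resolution fields

def coarsen(field, factor=4):
    """Block-average a (H, W) field by an integer factor."""
    h, w = field.shape
    return field.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

lo = np.stack([coarsen(f) for f in hi])          # shape (64, 32, 32): the low-resolution inputs
pairs = list(zip(lo, hi))                        # (input, target) pairs for a super-resolution model
print(lo.shape, hi.shape)
```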

How to cite: Witte, M., Filippou, D., Plésiat, É., Meuer, J., Thiemann, H., Hall, D., Ludwig, T., and Kadow, C.: From Super-Resolution to Downscaling - An Image-Inpainting Deep Neural Network for High Resolution Weather and Climate Models, EGU General Assembly 2023, Vienna, Austria, 23–28 Apr 2023, EGU23-13196, https://doi.org/10.5194/egusphere-egu23-13196, 2023.

14:29–14:31 | PICO2.13 | EGU23-6818 | ECS | On-site presentation
Adili Abulaitijiang, Eike Bolmer, Ribana Roscher, Jürgen Kusche, and Luciana Fenoglio-Marc

Eddies are rotating water masses that are usually generated near large ocean currents, e.g., the Gulf Stream. Monitoring eddies and gaining knowledge of eddy statistics over large regions is important for fisheries, marine biology studies, and the testing of ocean models.

At the mesoscale, eddies are observed in radar altimetry, and methods have been developed to identify, track and classify them in gridded maps of sea surface height derived from multi-mission data sets. However, this procedure has drawbacks, since much information is lost in the gridded maps: inevitably, the spatial and temporal resolution of the original altimetry data degrades during the gridding process. Moreover, identifying eddies has so far been a post-analysis step on the gridded dataset, which is not suitable for near-real-time applications or forecasts. In the EDDY project at the University of Bonn, we aim to develop methods for identifying eddies directly from along-track altimetry data via a machine (deep) learning approach.

Since eddy signatures (the eddy boundary and the highs and lows of the sea level anomaly, SLA) cannot be extracted directly from along-track altimetry data, gridded altimetry maps from AVISO are used to detect eddies; these serve as the reference data for machine learning. The eddy detection on the 2D grid maps is produced by an open-source geometry-based approach (e.g., py-eddy-tracker; Mason et al., 2014) with additional constraints such as the Okubo-Weiss parameter. Sea Surface Temperature (SST) maps of the same region and date (also available from AVISO) are then used to manually clean the reference data. Since altimetry grid maps and SST maps have different temporal and spatial resolutions, we also use a high-resolution (~6 km) ocean model simulation dataset (FESOM, the Finite Element Sea ice Ocean Model). The FESOM dataset provides coherent, high-resolution SLA, SST and salinity maps for the study area and is a potential test basis for developing the deep learning network.

The single-modal training via a Convolutional Neural Network (CNN) on the 2D altimetry grid maps produced an excellent Dice score of 86%, meaning the network detects almost all eddies in the Gulf Stream consistently with the reference data. For the multi-modal training, two different networks are developed for the 1D along-track altimetry data and the 2D grid maps of SLA and SST, respectively, and they are then combined to give the final classification output. A transformer model is deemed efficient for encoding the spatiotemporal information in the 1D along-track altimetry data, while a CNN is sufficient for the 2D multi-sensor grid maps.

In this presentation, we show the eddy classification results from the multi-modal deep learning approach based on along-track and gridded multi-source datasets for the Gulf Stream area for the period 2017–2019. Results show that multi-modal deep learning improves the classification by more than 20% compared with a transformer model trained on along-track data alone.
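
As a worked illustration of the Okubo-Weiss criterion mentioned above (toy grid, constant Coriolis parameter, idealized Gaussian sea level anomaly; not the project's code), geostrophic velocities are derived from SLA and the Okubo-Weiss parameter (squared strain minus squared vorticity) flags vorticity-dominated eddy cores where it is strongly negative:

```python
import numpy as np

g, f = 9.81, 1e-4                                 # gravity (m/s2), Coriolis parameter (1/s)
dx = dy = 25e3                                    # 25 km grid spacing

yy, xx = np.meshgrid(np.arange(80), np.arange(80), indexing="ij")
sla = 0.2 * np.exp(-((xx - 40) ** 2 + (yy - 40) ** 2) / (2 * 10 ** 2))   # one Gaussian "eddy" (m)

deta_dy, deta_dx = np.gradient(sla, dy, dx)       # axis 0 = y, axis 1 = x
u = -(g / f) * deta_dy                            # geostrophic velocities
v = (g / f) * deta_dx

du_dy, du_dx = np.gradient(u, dy, dx)
dv_dy, dv_dx = np.gradient(v, dy, dx)
s_n = du_dx - dv_dy                               # normal strain
s_s = dv_dx + du_dy                               # shear strain
omega = dv_dx - du_dy                             # relative vorticity
W = s_n ** 2 + s_s ** 2 - omega ** 2              # Okubo-Weiss parameter

eddy_core = W < -0.2 * W.std()                    # common threshold for eddy cores
print("flagged grid cells:", int(eddy_core.sum()))
```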

How to cite: Abulaitijiang, A., Bolmer, E., Roscher, R., Kusche, J., and Fenoglio-Marc, L.: Eddy identification from along-track altimeter data with multi-modal deep learning, EGU General Assembly 2023, Vienna, Austria, 23–28 Apr 2023, EGU23-6818, https://doi.org/10.5194/egusphere-egu23-6818, 2023.

14:31–14:33 | PICO2.14 | EGU23-5379 | ECS | Virtual presentation
An artificial intelligence reconstruction of global gridded surface winds
(withdrawn)
Lihong Zhou, Haofeng Liu, and Zhenzhong Zeng
14:33–15:45