ESSI1.5 | EDI | Digital Twins of the Earth
Convener: Rochelle Schneider (ECS) | Co-conveners: Mariana Clare (ECS), Simon Baillarin, Jacqueline Le Moigne, Matthew Chantry
Orals | Wed, 26 Apr, 14:00–15:25 (CEST) | Room 0.51
Posters on site | Attendance Thu, 27 Apr, 08:30–10:15 (CEST) | Hall X4
Posters virtual | Attendance Thu, 27 Apr, 08:30–10:15 (CEST) | vHall ESSI/GI/NP
A Digital Twin of the Earth (DTE) is an interactive, dynamic digital replica of our planet that combines observations with simulations from physical models and advanced AI-based analysis. It aims to replicate the Earth's complex ecosystem, allowing us to estimate our planet's response to changes under both the current climate state and future climate projections. A DTE is an emerging concept capable of simulating what-if scenarios before they occur, which is crucial for natural hazard mitigation and adaptation plans (e.g. floods, heatwaves, wildfires, droughts). A DTE calls for an advanced, federated, multi-computing architecture that works across organizational and agency boundaries. For a DTE to be successful, it needs to be open source and developed for and by the community. It needs to be an extendable framework that encourages continuous contributions, integration, and development of advanced AI solutions in order to maintain an accurate digital representation of the physical environment. This session welcomes presentations on current open-source frameworks and enabling technologies for DTE including, but not restricted to:
• Open-source DTE framework
• Computer infrastructure to move virtual data from DTE repositories to service platforms
• Surrogate models for missing observations and unresolved physical processes
• Hybrid AI / physics-based modelling
• Extreme value predictions
• Uncertainty quantification and representation
• Post-processing (event detection and downscaling/super-resolution)

Orals: Wed, 26 Apr | Room 0.51

Chairpersons: Rochelle Schneider, Jacqueline Le Moigne, Simon Baillarin
14:00–14:05
14:05–14:15 | EGU23-15455 | ESSI1.5 | ECS | solicited | On-site presentation
Towards a Digital Twin of the Carbon Cycle in Europe
(withdrawn)
Cristina Ruiz Villena, Rob Parker, Tristan Quaife, Natalie Douglas, and Andy Wiltshire
14:15–14:25 | EGU23-10138 | ESSI1.5 | Highlight | On-site presentation
Open-Source Framework For Earth System Digital Twins Applied to Surface Water Hydrology
Thomas Huang and the NASA AIST IDEAS and SCO FloodDAM Teams

An Earth System Digital Twin (ESDT) is a dynamic, interactive, digital replica of the state and temporal evolution of Earth systems. It integrates multiple models along with observational data, and connects them with analysis, AI, and visualization tools. Together, these enable users to explore the current state of the Earth system, predict future conditions, and run hypothetical scenarios to understand how the system would evolve under various assumptions. The NASA Advanced Information Systems Technology (AIST) program’s Integrated Digital Earth Analysis System (IDEAS) project partners with the Space for Climate Observatory (SCO) (https://www.spaceclimateobservatory.org/) FloodDAM Digital Twin effort led by CNES to establish an extensible open-source framework to develop digital twins of our physical environment for Earth Science with an initial focus on surface water hydrology in Earth’s rivers and lakes. The joint effort delivers an open-source system architecture with mechanisms for the outputs of one model to feed into others, for driving models with observation data, and for harmonizing observation data and model outputs for analysis. Water resource science is multidisciplinary in nature, and it not only assesses the impact from our changing climate using measurements and modeling, but it also offers opportunities for science-guided, data-driven decision support. The joint effort uses flood prediction and analysis as its primary use case. The work presents a multi-agency joint effort to define and develop a federated Earth System Digital Twin solution between NASA and CNES that powers advanced immersive science and custom user applications for scenario-based analysis.
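
A toy sketch of the federation idea described above (not IDEAS/FloodDAM code; the component names and numbers are made up for illustration): components share a harmonized state so that the output of one model feeds the next.

    # Hypothetical sketch: chaining Earth-system model components behind a
    # shared interface so one model's output drives another.
    from dataclasses import dataclass, field
    from typing import Dict

    @dataclass
    class State:
        """Harmonized state exchanged between twin components (toy example)."""
        fields: Dict[str, float] = field(default_factory=dict)

    class Component:
        def step(self, state: State) -> State:
            raise NotImplementedError

    class PrecipitationModel(Component):
        def step(self, state: State) -> State:
            # Toy forcing: prescribe rainfall for the next time step.
            state.fields["precip_mm"] = 12.0
            return state

    class RiverRoutingModel(Component):
        def step(self, state: State) -> State:
            # Toy routing: convert rainfall into a discharge increment.
            state.fields["discharge_m3s"] = 0.8 * state.fields.get("precip_mm", 0.0)
            return state

    def run_pipeline(components, state: State, n_steps: int = 3) -> State:
        for _ in range(n_steps):
            for component in components:
                state = component.step(state)
        return state

    if __name__ == "__main__":
        final = run_pipeline([PrecipitationModel(), RiverRoutingModel()], State())
        print(final.fields)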

How to cite: Huang, T. and the NASA AIST IDEAS and SCO FloodDAM Teams: Open-Source Framework For Earth System Digital Twins Applied to Surface Water Hydrology, EGU General Assembly 2023, Vienna, Austria, 23–28 Apr 2023, EGU23-10138, https://doi.org/10.5194/egusphere-egu23-10138, 2023.

14:25–14:35 | EGU23-13921 | ESSI1.5 | Highlight | On-site presentation
Towards a local, dated and thematic digital twins factory
Jean-Marc Delvit, Pierre-Marie Brunet, Pierre Lassalle, Dimitri Lallement, and Simon Baillarin

The notion of a digital twin can be ambiguous because it is defined in various ways. Recent months have seen the emergence of many global digital twin initiatives. The challenge for these global digital twins is to create a qualified digital replica of our planet, making it possible to monitor, simulate and anticipate natural phenomena and human activities. The target users are either scientists or decision makers. Through the digital twin, they have access to a digital representation of an environment built from all available spatial and non-spatial data, accompanied by a set of physical and statistical models to calculate projections, replay past events or simulate future ones.

 

Refining and evaluating the accuracy of these projections is a major challenge for digital twins. In addition to knowledge of physical modeling, suitable data must also be available. Complementary to the global approach, the notion of local and dated digital twins then appears essential. Considering a digital representation of a restricted geographical area of interest (an urban area, watershed, coastline, etc.) gives access to very high-resolution "fresh" data in 2D and 3D, in-situ data and fine-mesh physical models. This user-centered and naturally thematic approach addresses the stated objectives more finely and more pragmatically. These local, dated and thematic digital twins are by essence ephemeral: a way to meet a specific need.

 

The challenge is therefore to set up a Digital Twin Factory (DTF). The DTF relies on a data lake and high computing capacity via clouds and/or HPC, and provides thematic algorithms and methodologies able to generate registered and coherent layers of information that enrich a datacube from which physical indicators can be computed spatially. Thanks to its thematic, local and on-demand characteristics, the DTF can mitigate the need for a universal metadata model. The datacube allows local physical and artificial intelligence models to be applied. The overarching architecture of the DTF will be presented, along with specific examples on coastal, urban and risk topics. These digital twins rely on a wide range of expertise in both data and modeling involving various French (CNES, IGN, SHOM, IRD, CEA, INRAE, METEOFRANCE, CERFACS, BRGM, etc.) and international organizations (ESA, NASA, NOAA, …).
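
As a purely illustrative sketch of the datacube idea (the layer names and thresholds below are assumptions, not the CNES DTF implementation), co-registered layers can be stacked in an xarray Dataset and a physical indicator computed spatially from it:

    # Illustrative only: stacking co-registered layers into a datacube and
    # deriving a simple spatial indicator from it.
    import numpy as np
    import xarray as xr

    ny, nx = 100, 120
    rng = np.random.default_rng(0)

    # Hypothetical co-registered layers for a local area of interest.
    cube = xr.Dataset(
        {
            "elevation_m": (("y", "x"), rng.uniform(0, 50, (ny, nx))),
            "water_depth_m": (("y", "x"), rng.uniform(0, 2, (ny, nx))),
            "land_cover": (("y", "x"), rng.integers(0, 5, (ny, nx))),
        },
        coords={"y": np.arange(ny), "x": np.arange(nx)},
    )

    # Example indicator computed spatially: flooded urban fraction, assuming
    # (hypothetically) that land-cover class 3 denotes built-up areas.
    urban = cube["land_cover"] == 3
    flooded = cube["water_depth_m"] > 0.5
    flooded_urban_fraction = float((urban & flooded).mean())
    print(f"Flooded urban fraction: {flooded_urban_fraction:.2%}")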

For coastal areas, the goal is to accurately describe the bathymetry-topography continuum by taking into account the intertidal zones and specialized dynamic models, together with 3D coastal land cover characterisation. For urban areas, the ambition is first to automatically produce a qualified 3D map together with its additional layers of information: 3D objects and related semantics (land cover and land use), including temporal dynamics and thermal information. Then, for issues related to the management of natural risks (such as floods or fires), similar data layers can be used. Finally, new hypotheses can be injected into these digital replicas and multiple scenarios can be applied to assess causal relationships between hypotheses and predictions. Very promising results will also be presented.

How to cite: Delvit, J.-M., Brunet, P.-M., Lassalle, P., Lallement, D., and Baillarin, S.: Towards a local, dated and thematic digital twins factory, EGU General Assembly 2023, Vienna, Austria, 23–28 Apr 2023, EGU23-13921, https://doi.org/10.5194/egusphere-egu23-13921, 2023.

14:35–14:45 | EGU23-5443 | ESSI1.5 | On-site presentation
Earth System Deep Learning towards a Global Digital Twin of Wildfires
Ioannis Prapas, Ilektra Karasante, Akanksha Ahuja, Spyros Kondylatos, Eleanna Panagiotou, Charalampos Davalas, Lazaro Alonso, Rackhun Son, Michail Dimitrios, Nuno Carvalhais, and Ioannis Papoutsis

Due to climate change, we expect an exacerbation of fire in Europe and around the world, with major wildfire events extending to northern latitudes and boreal regions [1]. In this context, it is important to improve our capabilities to anticipate fire danger and understand its driving mechanisms at a global scale. As the Earth is an interconnected system, large-scale processes can affect the global climate and fire seasons. For example, extreme fires in Siberia have been linked to previous-year surface moisture conditions and anomalies in the Arctic Oscillation [2]. As part of the ESA-funded project SeasFire (https://seasfire.hua.gr), we gather and harmonize data related to seasonal fire drivers and develop deep learning models that are able to capture spatiotemporal associations, with the goal of forecasting burned area sizes on a seasonal scale, globally. We publish a global analysis-ready datacube for seasonal fire forecasting for the years 2001-2021 at a spatiotemporal resolution of 0.25 deg x 0.25 deg x 8 days [3]. The datacube includes a combination of variables describing the seasonal fire drivers, namely climate, vegetation, oceanic indices, human factors, land cover and burned areas. We leverage the availability of big EO data and advances in deep learning modeling [4, 5] to forecast global burned areas, capture the spatio-temporal interactions of Earth system variables and identify potential teleconnections that determine wildfire regimes in the light of climate change. We present deep learning models that handle the Earth as a system, such as graph neural networks and transformer-based architectures. Applied to the prediction of wildfires at different temporal horizons, these models skillfully predict burned area patterns. Exploring the models' explanations, we reveal important spatio-temporal links.
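
A minimal sketch of how such a datacube can be framed for supervised seasonal forecasting (the variable names and lead time below are placeholders, not the exact SeasFire cube schema; a tiny synthetic cube stands in for the real one):

    # Sketch under stated assumptions: pair driver variables at time t with
    # burned area at time t + lead to frame seasonal forecasting as a
    # supervised learning problem.
    import numpy as np
    import xarray as xr

    # Tiny synthetic stand-in for an 8-daily, 0.25-degree datacube.
    rng = np.random.default_rng(42)
    cube = xr.Dataset(
        {
            "t2m": (("time", "lat", "lon"), rng.normal(288, 10, (46, 8, 16))),
            "vpd": (("time", "lat", "lon"), rng.gamma(2.0, 0.5, (46, 8, 16))),
            "burned_area": (("time", "lat", "lon"), rng.exponential(1.0, (46, 8, 16))),
        },
        coords={
            "time": np.arange(46),
            "lat": np.linspace(-60, 70, 8),
            "lon": np.linspace(-180, 180, 16),
        },
    )

    lead = 4  # forecast lead in 8-day steps (roughly one month ahead)
    X = cube[["t2m", "vpd"]].isel(time=slice(None, -lead))   # drivers at time t
    y = cube["burned_area"].isel(time=slice(lead, None))     # target at t + lead
    print(X.sizes, y.sizes)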

Our approach, using AI to model the Earth as a system and capture long spatio-temporal interactions, showcases the potential of an application-specific digital twin. The SeasFire datacube can be exploited as a baseline digital twin for modeling different natural hazards, including floods, heatwaves, and droughts. We will therefore discuss insights and future directions for digital twins in anticipating climate extremes, inspired by our global wildfire prediction paradigm.

 

[1] Wu, Chao, et al. "Historical and future global burned area with changing climate and human demography." One Earth 4.4 (2021): 517-530.

[2] Kim, Jin-Soo, et al. "Extensive fires in southeastern Siberian permafrost linked to preceding Arctic Oscillation." Science advances 6.2 (2020): eaax3308.

[3] Alonso, Lazaro, et al. SeasFire Cube: A Global Dataset for Seasonal Fire Modeling in the Earth System. Zenodo, 15 July 2022, doi:10.5281/zenodo.6834584.

[4] Kondylatos, Spyros et al. “Wildfire Danger Prediction and Understanding with Deep Learning.” Geophysical Research Letters, 2022.  doi: 10.1029/2022GL099368

[5] Prapas, Ioannis et al. “Deep Learning for Global Wildfire Forecasting.” NeurIPS 2022 workshop on Tackling Climate Change with Machine Learning, doi:  10.48550/arXiv.2211.00534

How to cite: Prapas, I., Karasante, I., Ahuja, A., Kondylatos, S., Panagiotou, E., Davalas, C., Alonso, L., Son, R., Dimitrios, M., Carvalhais, N., and Papoutsis, I.: Earth System Deep Learning towards a Global Digital Twin of Wildfires, EGU General Assembly 2023, Vienna, Austria, 23–28 Apr 2023, EGU23-5443, https://doi.org/10.5194/egusphere-egu23-5443, 2023.

14:45–14:55 | EGU23-8777 | ESSI1.5 | ECS | On-site presentation
Deep Learning for Verification of Earth's surfaces
Margarita Choulga, Tom Kimpson, Matthew Chantry, Gianpaolo Balsamo, Souhail Boussetta, Peter Dueben, and Tim Palmer

Ever-increasing computing capabilities and the demand for high-resolution numerical weather prediction and climate information make the representation of Earth's surfaces especially interesting. Accurate and up-to-date knowledge of the surface state of ecosystems such as forests, agriculture, lakes and cities strongly influences skin temperatures and turbulent latent and sensible heat fluxes, providing the lower boundary conditions for energy and moisture availability near the surface. A quick and automatic tool was developed to assess the benefits of updating different surface fields; it makes use of a neural network regression model trained to simulate satellite-observed surface skin temperatures. This tool was deployed to determine the accuracy of several global datasets for lake, forest, and urban distributions, and comparison results will be shown. The neural network regression model has proven to be useful and easily adaptable for assessing unforeseen impacts of ancillary datasets, and it also detects erroneous regional areas over the globe, making it a valuable support for model development.
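
A minimal sketch of the kind of assessment described above, assuming synthetic data and a scikit-learn regressor rather than the ECMWF tool itself: a neural network is trained to map surface descriptors to skin temperature, and an alternative surface field is scored by the change in prediction error.

    # Synthetic illustration only: score a candidate surface dataset by the
    # skin-temperature error of a neural-network emulator.
    import numpy as np
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(0)
    n = 5000

    # Synthetic "truth": skin temperature responds to cover fractions and forcing.
    lake, forest, urban = rng.uniform(0, 1, (3, n))
    swdown = rng.uniform(100, 900, n)
    tskin = 285 + 0.02 * swdown - 4 * lake - 2 * forest + 3 * urban + rng.normal(0, 0.5, n)

    X_ref = np.column_stack([lake, forest, urban, swdown])
    model = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=500, random_state=0)
    model.fit(X_ref, tskin)

    # Candidate dataset: same fields but with a degraded (noisier) urban fraction.
    X_candidate = X_ref.copy()
    X_candidate[:, 2] = np.clip(urban + rng.normal(0, 0.3, n), 0, 1)

    rmse_ref = np.sqrt(np.mean((model.predict(X_ref) - tskin) ** 2))
    rmse_cand = np.sqrt(np.mean((model.predict(X_candidate) - tskin) ** 2))
    print(f"RMSE reference: {rmse_ref:.2f} K, candidate: {rmse_cand:.2f} K")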

How to cite: Choulga, M., Kimpson, T., Chantry, M., Balsamo, G., Boussetta, S., Dueben, P., and Palmer, T.: Deep Learning for Verification of Earth's surfaces, EGU General Assembly 2023, Vienna, Austria, 23–28 Apr 2023, EGU23-8777, https://doi.org/10.5194/egusphere-egu23-8777, 2023.

14:55–15:05 | EGU23-2944 | ESSI1.5 | On-site presentation
Foundation AI Models for Science
Manil Maskey, Rahul Ramachandran, Tsengdar Lee, and Raghu Ganti

Foundation Models (FMs) are AI models designed to replace task- or application-specific models. FMs can be applied to many different downstream applications. They are trained using self-supervised techniques and can be built on any type of sequence data. The use of self-supervised learning removes the hurdle of developing a large labeled dataset for training. Most FMs use a transformer architecture, which utilizes the notion of self-attention and allows the network to model the influence of distant data points on each other in both space and time. FMs exhibit emergent properties that are induced from the data.
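
A toy PyTorch sketch of the self-supervised idea behind such models (masked reconstruction on unlabeled sequence data; this is an illustration, not the authors' training setup):

    # Toy self-supervised pretraining: mask part of a patch sequence and train
    # a transformer encoder to reconstruct the missing values, with no labels.
    import torch
    import torch.nn as nn

    dim, n_patches, batch = 64, 32, 16
    encoder = nn.TransformerEncoder(
        nn.TransformerEncoderLayer(d_model=dim, nhead=4, batch_first=True),
        num_layers=2,
    )
    head = nn.Linear(dim, dim)                 # reconstruction head
    mask_token = nn.Parameter(torch.zeros(dim))
    opt = torch.optim.Adam(
        list(encoder.parameters()) + list(head.parameters()) + [mask_token], lr=1e-3
    )

    for step in range(5):                      # a few illustrative steps on random data
        patches = torch.randn(batch, n_patches, dim)   # unlabeled "patch" sequences
        mask = torch.rand(batch, n_patches) < 0.5      # mask half the patches
        corrupted = torch.where(mask.unsqueeze(-1), mask_token, patches)
        recon = head(encoder(corrupted))
        loss = ((recon - patches) ** 2)[mask].mean()   # loss on masked patches only
        opt.zero_grad(); loss.backward(); opt.step()
        print(f"step {step}: masked reconstruction loss {loss.item():.3f}")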

 

FMs can be an important tool for science. The scale of these models results in better performance for different downstream applications, which show better accuracy than models built from scratch. FMs drastically reduce the cost of entry for building downstream applications, both in time and effort. FMs for selected science datasets, such as optical satellite data, can accelerate applications ranging from data quality monitoring to feature detection and prediction. FMs can make it easier to infuse AI into scientific research by removing the training data bottleneck and increasing the use of science data.

How to cite: Maskey, M., Ramachandran, R., Lee, T., and Ganti, R.: Foundation AI Models for Science, EGU General Assembly 2023, Vienna, Austria, 23–28 Apr 2023, EGU23-2944, https://doi.org/10.5194/egusphere-egu23-2944, 2023.

15:05–15:15 | EGU23-10488 | ESSI1.5 | ECS | On-site presentation
Statistical downscaling of precipitation with deep neural networks
Bing Gong, Yan Ji, Michael Langguth, and Martin Schultz

Accurate weather predictions are essential for many aspects of society. Providing a reliable high-resolution precipitation field is essential to capture the finer scales of heavy precipitation events, which are normally poorly represented in numerical models. Statistical downscaling is an appealing tool since it is computationally inexpensive, and it has therefore been widely used over the last three decades. In recent years, super-resolution with deep learning has been successfully applied in computer vision to generate high-resolution images from low-resolution ones, a task that is somewhat analogous to downscaling in the meteorological domain.

Inspired by this, we explore the use of deep neural networks with a super-resolution approach for statistical precipitation downscaling. We apply the Swin transformer architecture (SwinIR) as well as a convolutional neural network (U-Net) with a Generative Adversarial Network (GAN) and a diffusion component for probabilistic downscaling. We use short-range forecasts from the Integrated Forecast System (IFS) on a regular spherical grid with Δx_IFS = 0.1° and map them to the high-resolution radar observation data RADKLIM (Δx_RK = 0.01°). The neural networks are fed with nine static and dynamic predictors, similar to the study by Harris et al., 2022. All models are comprehensively evaluated with grid-point-level errors as well as error metrics for spatial variability and the generated probability distribution. Our results demonstrate that the Swin transformer model can improve accuracy at lower computational cost compared to the U-Net architecture. The GAN and diffusion components both further help the models capture the strong spatial variability of the observed data. Our results encourage further development of DNNs that can potentially be leveraged to downscale other challenging Earth system data, such as cloud cover or wind.
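
A minimal sketch of the downscaling setup, assuming synthetic data and a deliberately tiny CNN rather than the authors' SwinIR/U-Net/GAN models: a coarse multi-predictor field is upsampled by a factor of 10 and corrected towards a high-resolution target.

    # Deterministic baseline sketch of super-resolution downscaling (no GAN or
    # diffusion term); data and shapes are synthetic stand-ins.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class TinyDownscaler(nn.Module):
        def __init__(self, n_predictors: int = 9, scale: int = 10):
            super().__init__()
            self.scale = scale
            self.net = nn.Sequential(
                nn.Conv2d(n_predictors, 32, 3, padding=1), nn.ReLU(),
                nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
                nn.Conv2d(32, 1, 3, padding=1),
            )

        def forward(self, coarse):
            # Bilinear upsampling to the target grid, then a learned correction.
            up = F.interpolate(coarse, scale_factor=self.scale, mode="bilinear",
                               align_corners=False)
            return self.net(up)

    model = TinyDownscaler()
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)

    coarse = torch.rand(4, 9, 24, 24)      # 9 predictors on the coarse grid
    fine_obs = torch.rand(4, 1, 240, 240)  # synthetic high-resolution "radar" field

    for step in range(3):
        loss = F.mse_loss(model(coarse), fine_obs)
        opt.zero_grad(); loss.backward(); opt.step()
        print(f"step {step}: loss {loss.item():.4f}")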

How to cite: Gong, B., Ji, Y., Langguth, M., and Schultz, M.: Statistical downscaling of precipitation with deep neural networks, EGU General Assembly 2023, Vienna, Austria, 23–28 Apr 2023, EGU23-10488, https://doi.org/10.5194/egusphere-egu23-10488, 2023.

15:15–15:25 | EGU23-5746 | ESSI1.5 | On-site presentation
Confidence estimation of DNN predictions for on-board applications
Nicolas Dublé, François De Vieilleville, Adrien Lagrange, and Bertrand Le Saux

Most DNNs are designed to predict a class, a segmentation map or detections, regardless of whether they are interpolating or extrapolating. A confidence score therefore addresses the need for interpretable outputs and can help an AI4EO end-user take a decision.

The first investigated use case was binary classification of small Sentinel-2 tiles as containing ships or not (with two classes, “tile containing ship” and “tile not containing ship”). The database gathered 16,947 small 140x140 tiles extracted from 37 Sentinel-2 products. The ground truth was generated using Danish AIS data and then checked by eye. It was divided into several datasets for training, validation, testing, and active learning.

The second investigated use case was the classification of 10 geophysical phenomena from Sentinel-1 wave mode [Wang et al., 2018]. The database gathered 30,032 images with a fairly balanced distribution across the 10 classes.

Classification networks (VGG16) were trained on the training datasets of both use cases, reaching high performance (>95% accuracy). We added several Out-Of-Distribution (OOD) examples for the ship classification use case, and used the test database provided for the Ocean Features use case. The models reach around 70% accuracy on these two harder datasets, which contain many wrong classifications, so regressing a confidence score is of interest.

The developed solution uses the ConfidNet approach of Corbière et al. Without retraining the classification DNN, we added a second DNN, composed of several dense layers, which takes the latent space of the classification network as input and whose objective is to estimate a confidence score by approximating the True Class Probability. It proved easy to train when enough failure examples are available in the database.
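
A hedged sketch of this idea (synthetic features, not the authors' code): an auxiliary dense network is trained on frozen classifier features to regress the True Class Probability, which then serves as the confidence score.

    # ConfidNet-style sketch: a small dense branch on frozen classifier
    # features regresses the softmax probability of the ground-truth class.
    import torch
    import torch.nn as nn

    feat_dim, n_classes, batch = 128, 10, 32

    classifier_head = nn.Linear(feat_dim, n_classes)   # stands in for the frozen VGG16
    confid_net = nn.Sequential(                        # auxiliary confidence branch
        nn.Linear(feat_dim, 64), nn.ReLU(),
        nn.Linear(64, 64), nn.ReLU(),
        nn.Linear(64, 1), nn.Sigmoid(),
    )
    opt = torch.optim.Adam(confid_net.parameters(), lr=1e-3)

    for step in range(5):
        features = torch.randn(batch, feat_dim)        # latent features (synthetic here)
        labels = torch.randint(0, n_classes, (batch,))
        with torch.no_grad():                          # classifier stays frozen
            probs = classifier_head(features).softmax(dim=1)
            tcp = probs[torch.arange(batch), labels]   # True Class Probability target
        confidence = confid_net(features).squeeze(1)
        loss = nn.functional.mse_loss(confidence, tcp)
        opt.zero_grad(); loss.backward(); opt.step()

    # At inference, thresholding `confidence` flags likely failures / OOD samples.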

The main objective of ConfidNet is to find the “ID”/“OOD” boundary, qualifying which examples the classifier should be able to predict (interpolation) and which it should fail to predict (extrapolation). Substantial work was done to qualify the quality of ConfidNet's predictions (the confidence score) and to ensure that it did not simply learn to map the subset of the dataset where the classifier fails and the one where the classifier is right. It showed interesting generalization properties and turned out to be less “dataset-dependent” than a classical DNN.

Twenty-one different network configurations were tested, varying the architecture size from 4k to 2.5M parameters. Many of these configurations reached similar results, and the number of layers proved more decisive than the number of parameters in the intermediate feature maps.

The main results of this study are the relevance of the ConfidNet approach in AI4EO scenarios, the possibility of shrinking the network for on-board deployment, and a first assurance that ConfidNet learns differently from classification networks, with interesting generalization properties. This study demonstrates that confidence scores can be associated with the predictions of a DNN in a satisfactory way.

How to cite: Dublé, N., De Vieilleville, F., Lagrange, A., and Le Saux, B.: Confidence estimation of DNN predictions for on-board applications, EGU General Assembly 2023, Vienna, Austria, 23–28 Apr 2023, EGU23-5746, https://doi.org/10.5194/egusphere-egu23-5746, 2023.

Posters on site: Thu, 27 Apr, 08:30–10:15 | Hall X4

Chairpersons: Mariana Clare, Matthew Chantry
X4.131 | EGU23-10092 | ESSI1.5
An Information Management Framework for Environmental Digital Twins (IMFe) as a concept and pilot
Justin Buck, Andrew Kingdon, John Siddorn, Gordon Blair, Alexandra Kokkinaki, John Blower, Matt Fry, Ben Marchant, Sam Pepler, John Watkins, and James Byrne

Environmental science is concerned with assessing the impacts of changing environmental conditions upon the state of the natural world. Environmental Digital Twins (EDT) are a new technology that enables environmental change scenarios for real systems to be modelled and their impacts visualised. They will be particularly effective at delivering understanding of these impacts on the natural environment to non-specialist stakeholders.

The UK Natural Environment Research Council (NERC) recently published its first digital strategy, which sets out a vision for digitally enabled environmental science for the next decade. This strategy places data and digital technologies at the heart of UK environmental science.

EDT have been made possible by the emergence of increasingly large, diverse, static data sources, networks of dynamic environmental data from sensor networks, and time-variant process modelling. Once combined with visualisation capabilities, these provide the basis of digital twin technologies that enable the environmental science community to make a step change in understanding of the environment. Components may be developed separately by a network but can be combined to improve understanding, provided development follows agreed standards to facilitate data exchange and integration.

Replicating the behaviours of environmental systems is inevitably a multi-disciplinary activity. To enable this, an information management framework for environmental digital twins (IMFe) is needed that establishes the components for effective information management within and across the EDT ecosystem. This must enable secure, resilient interoperability of data, and provide a reference point that facilitates data use in line with security, legal, commercial, privacy and other relevant concerns. We present recommendations for developing an IMFe, including the application of concepts such as an asset commons and a balanced approach to standards to facilitate minimum interoperability requirements between twins while iteratively implementing an IMFe. Achieving this requires components to be developed that follow agreed standards, so that information can be trusted by users, and that are semantically interoperable so data can be shared. A digital Asset Register will be defined to provide access to and enable linking of such components.
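
Purely as an illustration of what an Asset Register entry might hold (the schema and example values below are assumptions, not the IMFe specification), a minimal record could capture enough standardized metadata to discover, trust and link EDT components:

    # Hypothetical Asset Register record; field names and values are placeholders.
    from dataclasses import dataclass, asdict
    import json

    @dataclass
    class AssetRecord:
        identifier: str      # persistent ID of the asset
        asset_type: str      # e.g. "dataset", "model", "sensor-network"
        title: str
        licence: str         # legal / commercial constraints
        access_url: str
        vocabulary: str      # controlled vocabulary used, for semantic interoperability

    register = [
        AssetRecord(
            identifier="edt:haig-fras:bathymetry:v1",
            asset_type="dataset",
            title="Haig Fras MCZ bathymetry grid",
            licence="OGL-UK-3.0",
            access_url="https://example.org/haig-fras/bathymetry",  # placeholder URL
            vocabulary="NERC Vocabulary Server (NVS)",
        )
    ]

    print(json.dumps([asdict(r) for r in register], indent=2))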

This previously conceptual project has now been extended into the Pilot IMFe project, which aims to define the architectures, technologies, standards and hardware infrastructure to develop a fully functional environmental digital twin. During the project lifespan this will be tested by the construction of a pilot EDT for the Haig Fras Marine Conservation Zone (MCZ), which both enables testing of the proposed IMFe concepts and will provide a clear demonstration of the power of EDT to monitor and scenario-test a complex environmental system for the benefit of stakeholders.

How to cite: Buck, J., Kingdon, A., Siddorn, J., Blair, G., Kokkinaki, A., Blower, J., Fry, M., Marchant, B., Pepler, S., Watkins, J., and Byrne, J.: An Information Management Framework for Environmental Digital Twins (IMFe)  as a concept and pilot, EGU General Assembly 2023, Vienna, Austria, 23–28 Apr 2023, EGU23-10092, https://doi.org/10.5194/egusphere-egu23-10092, 2023.

X4.132 | EGU23-11489 | ESSI1.5
Towards a benchmark dataset for statistical downscaling of meteorological fields
Michael Langguth, Bing Gong, Yan Ji, Martin G. Schultz, and Olaf Stein

The representation of the atmospheric state at high spatial resolution is of particular relevance in various domains of Earth science. While global reanalysis datasets such as ERA5 provide comprehensive repositories of meteorological data, their spatial resolution (∆x≥25 km) is too coarse to capture relevant local features, mainly over complex terrain (e.g. cold pools in valleys, low-level jets, local heavy precipitation events).
Recently, various studies have started to apply deep neural networks adapted from computer vision to increase the spatial resolution of meteorological fields. Although these studies reveal great potential in the domain of statistical downscaling, intercomparison of the approaches is impeded by the large variety of methods and datasets deployed. Comparisons to classical downscaling methods, developed over decades in the meteorological community, are also often underrepresented.

Inspired by the available benchmark datasets for various computer vision tasks and for weather forecasting (e.g. WeatherBench and WeatherBench Probability), our study aims to provide a benchmark dataset for statistical downscaling of meteorological fields. We choose the coarse-grained ERA5 reanalysis (∆xERA5≃30 km) and the fine-scaled COSMO-REA6 (∆xCREA6≃6km) as input and target datasets. Both datasets enable the formulation of a real downscaling task: super-resolve the data and correct for model biases.
The benchmark dataset provides a collection of predictors and predictands for a couple of standard downscaling tasks. These comprise downscaling of the 2m temperature, the surface irradiance, the near-surface wind field and precipitation. Along with the dataset, benchmark deep neural networks, namely variants of U-Nets and GANs, will be provided. Well-chosen sets of evaluation metrics including baseline scores of the benchmarked deep neural networks are presented to enable comparison between different methods.
The envisioned benchmark dataset will provide a comprehensive basis for comparing neural network approaches to statistical downscaling of meteorological fields. This, in turn, is expected to enhance confidence and transparency in the application of deep learning methods to Earth system problems.
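
A small sketch of the kind of evaluation envisaged (the metric choices are illustrative, not the benchmark's final list), comparing a downscaled prediction against a fine-scale reference with a grid-point error and a spatial-variability diagnostic:

    # Synthetic illustration of benchmark-style scoring.
    import numpy as np

    rng = np.random.default_rng(0)
    target = rng.gamma(2.0, 1.5, size=(64, 240, 240))             # fine-scale reference
    prediction = target + rng.normal(0, 0.8, size=target.shape)   # downscaled output

    rmse = np.sqrt(np.mean((prediction - target) ** 2))

    # Ratio of spatial standard deviations (1.0 = variability is well reproduced).
    spatial_var_ratio = prediction.std(axis=(1, 2)).mean() / target.std(axis=(1, 2)).mean()

    print(f"grid-point RMSE: {rmse:.3f}")
    print(f"spatial variability ratio: {spatial_var_ratio:.3f}")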

How to cite: Langguth, M., Gong, B., Ji, Y., Schultz, M. G., and Stein, O.: Towards a benchmark dataset for statistical downscaling of meteorological fields, EGU General Assembly 2023, Vienna, Austria, 23–28 Apr 2023, EGU23-11489, https://doi.org/10.5194/egusphere-egu23-11489, 2023.

X4.133 | EGU23-1263 | ESSI1.5 | Highlight
Earthquake Early Warning: observe, analyse, deduce and act in seconds
(withdrawn)
Benoît Pirenne
X4.134 | EGU23-2028 | ESSI1.5 | ECS
Personalizing sustainable agriculture with causal machine learning
(withdrawn)
Vasileios Sitokonstantinou, Georgios Giannarakis, Roxanne Suzette Lorilla, and Charalampos Kontoes
X4.135 | EGU23-2909 | ESSI1.5
Long-Term Forecasting of Environment variables of MERRA2 based on Transformers
Tsengdar Lee, Sujit Roy, Ankur Kumar, Rahul Ramachandran, and Udaysankar Nair

Transformers in general have shown great promise in sequence modeling. The recently proposed vision transformer (ViT) by Dosovitskiy et al. has shown strong performance in image recognition [1]. A Fourier Neural Operator-based token-mixer transformer with a ViT backbone, proposed by Guibas et al., has been used to predict wind and precipitation on the ERA5 dataset [2,3]. Following this previous work, we trained FourCastNet from scratch on the MERRA2 dataset with 3 vertical levels (z450, z500, z550) and 11 variables (adding u, v, and temperature). We trained on data from 2005 to 2015 and made predictions by providing initial conditions from 2017, with forecasts issued up to 7 days ahead. For the first 24 hours of model prediction, the mean correlation was 0.998. The root mean squared error (RMSE) was 8.779 for the 6-hour prediction and 19.581 for the 24-hour prediction, on a data range of -575.6 to 330.6. The model was further tested on 11 variables on the same training data to evaluate the prediction of major events such as hurricanes. The initial conditions for the category 5 hurricane of Sep 28 – Oct 10, 2016 were given to the model, and the model was able to predict the hurricane for 18 hours. Further work will be done to tune the model and add more environmental variables from MERRA2 to make the predictions more robust and extend the forecast period.
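
An illustrative sketch of the evaluation loop with synthetic data and a stand-in model (not the trained FourCastNet): a 6-hourly step is rolled out autoregressively and each lead time is scored with RMSE and pattern correlation.

    # Synthetic rollout evaluation; the "model" here is damped persistence.
    import numpy as np

    rng = np.random.default_rng(0)
    n_steps, ny, nx = 4, 91, 144                      # 4 x 6 h = 24 h, coarse grid
    truth = rng.normal(0, 100, size=(n_steps + 1, ny, nx))

    def model_step(state):
        # Stand-in for the trained transformer: damped persistence plus noise.
        return 0.95 * state + rng.normal(0, 5, size=state.shape)

    state = truth[0]
    for lead in range(1, n_steps + 1):
        state = model_step(state)                     # autoregressive rollout
        err = state - truth[lead]
        rmse = np.sqrt(np.mean(err ** 2))
        corr = np.corrcoef(state.ravel(), truth[lead].ravel())[0, 1]
        print(f"lead {6 * lead:2d} h: RMSE {rmse:7.2f}, correlation {corr:.3f}")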

References:
1. Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S. and Uszkoreit, J., 2020. An image is worth 16x16 words: Transformers for image recognition at scale. arXiv preprint arXiv:2010.11929.
2. Guibas, J., Mardani, M., Li, Z., Tao, A., Anandkumar, A. and Catanzaro, B., 2021. Efficient Token Mixing for Transformers via Adaptive Fourier Neural Operators. In International Conference on Learning Representations.
3. Pathak, J., Subramanian, S., Harrington, P., Raja, S., Chattopadhyay, A., Mardani, M., Kurth, T., Hall, D., Li, Z., Azizzadenesheli, K. and Hassanzadeh, P., 2022. FourCastNet: A global data-driven high-resolution weather model using adaptive Fourier neural operators. arXiv preprint arXiv:2202.11214.

How to cite: Lee, T., Roy, S., Kumar, A., Ramachandran, R., and Nair, U.: Long-Term Forecasting of Environment variables of MERRA2 based on Transformers, EGU General Assembly 2023, Vienna, Austria, 23–28 Apr 2023, EGU23-2909, https://doi.org/10.5194/egusphere-egu23-2909, 2023.

X4.136 | EGU23-6060 | ESSI1.5 | ECS
A machine learning-powered Digital Twin for extreme weather events analysis
Gabriele Accarino, Donatello Elia, Davide Donno, Francesco Immorlano, and Giovanni Aloisio

In recent years, climate change has been leading to an exacerbation of Extreme Weather Events (EWEs), such as storms and wildfires, raising major concerns about increases in their intensity, frequency and duration. Detecting and predicting EWEs is challenging due to the rare occurrence of these events and, consequently, the lack of related historical data. Additionally, gathering data while an event unfolds is not straightforward, due to the intrinsic difficulty of positioning and using acquisition systems. Advances in Machine Learning (ML) can provide cutting-edge modeling techniques for EWE detection and prediction tasks, offering cost-effective and fast-computing solutions that policy makers strongly require for taking timely and informed actions in the presence of EWEs.

Solutions based on ML could, thus, support studies of such extreme events, providing scientists, policy makers and also the general public with powerful and innovative data-driven tools. However, from an infrastructural point of view, supporting such types of applications requires a wide set of integrated software components including data gathering and harmonisation pipelines, data pre-processing and augmentation modules, computing platforms for model training, results visualization tools, etc.

A Digital Twin for the analysis of extreme weather events, focusing on storms and wildfires, is being developed in the context of the EU-funded InterTwin project. The InterTwin project aims at defining a Digital Twin Engine for supporting scientific applications from different fields. In particular, for the EWEs, neural networks are being adopted as modeling tools capable of learning the underlying mapping between drivers and outcomes from past data and generalizing it to future projection data. This contribution will present the early concept behind the design of this machine learning-powered Digital Twin for EWE studies.

How to cite: Accarino, G., Elia, D., Donno, D., Immorlano, F., and Aloisio, G.: A machine learning-powered Digital Twin for extreme weather events analysis, EGU General Assembly 2023, Vienna, Austria, 23–28 Apr 2023, EGU23-6060, https://doi.org/10.5194/egusphere-egu23-6060, 2023.

X4.137 | EGU23-9987 | ESSI1.5
Geospatial Use Cases Enabling Interoperability of Digital Twins
(withdrawn)
Jill Saligoe-Simmel and Tamrat Belayneh
X4.138 | EGU23-14998 | ESSI1.5
Digital twincubator eWaterCycle
Niels Drost, Peter Kalverla, Bart Schilperoort, Barbara Vreede, Sarah Alidoost, Stefan Verhoeven, Yang Liu, and Rolf Hut

Recently there’s been a lot of enthusiasm for the concepts of digital twins, virtual research environments, serious games, and other inspiring ideas to improve “the way we do science.” With eWaterCycle, we are no strangers to the cause. We’ve worked hard to build a platform that could make the scientific process – specifically, hydrological modelling – more accessible and engaging.

eWaterCycle gives users access to a centralized platform where they can perform hydrological experiments: simulating how water flows through a catchment area of their choice. It comes complete with data, a suite of models, an interactive scripting environment, and a graphical explorer to quickly set up an experiment. It shares many characteristics with what is commonly understood to be a digital twin. But is it, really?

Sadly, the concept of digital twins suffers from linguistic inflation. At a recent event on the topic, the main coffee chatter was along the lines of “but what actually is it?” In an arena filled with resonating buzz, a clear image can help to regain focus and a common frame of reference. Is eWaterCycle, as a platform that supports working with each other’s models and data, a digital twin? Or is it more an incubator of digital twins? Either way, eWaterCycle can help make things concrete and specific, because it already exists.  

With several new projects promising to build digital twins of all sorts, we hope our experience can feed into the discussions on and development of new digital twins. Therefore, at EGU, we would like to reflect on the essence of our platform and our experience in building it. What is it (not)? What’s in it for you? What challenges did we face? And what does that mean for open science and collaborative research? 

How to cite: Drost, N., Kalverla, P., Schilperoort, B., Vreede, B., Alidoost, S., Verhoeven, S., Liu, Y., and Hut, R.: Digital twincubator eWaterCycle, EGU General Assembly 2023, Vienna, Austria, 23–28 Apr 2023, EGU23-14998, https://doi.org/10.5194/egusphere-egu23-14998, 2023.

Posters virtual: Thu, 27 Apr, 08:30–10:15 | vHall ESSI/GI/NP

Chairpersons: Mariana Clare, Matthew Chantry
vEGN.7 | EGU23-15688 | ESSI1.5
Digital Twins of the Ocean – Opportunities to Inform Sustainable Ocean Governance
Joana Kollert, Martin Visbeck, and Ute Brönner

Recent advances in High Performance Computing and Earth System Model resolution have enabled the Earth Science community to envision Digital Twins as an innovative approach to global environmental problems. This is also true of the Ocean Science community.

A Digital Twin of the Ocean (DTO) merges marine system models with observational data and machine learning analytics to produce a digital replica of the real ocean. In addition to natural phenomena, DTOs can include socio-economic factors (e.g. ocean use, pollution). Thus, DTOs can be used to monitor the current ocean state, but also to simulate future ‘what-if’ scenarios for varying human interventions. Another benefit of DTOs is that they can be used by a variety of stakeholders: by scientists to understand the ocean, by policymakers to make well-informed decisions, and by citizens to improve ocean literacy. As such, DTOs are a powerful tool in future-proofing sustainable development. Moreover, they provide strong motivation to improve the marine data landscape and build an interoperable system with agreed-upon formats and standards. DTOs are tailored to a specific ocean area or purpose, so a DTO framework is needed to implement data connectivity and interoperability, ease of access and standards, and to highlight gaps. The UN Ocean Decade Programme DITTO aims to provide such a framework. Specifically, DITTO advances worldwide collaboration between scientists, data and IT experts to develop a common understanding of DTOs, to establish best practices in their development, and to advance a digital framework for DTOs to empower ocean professionals from all sectors around the world to effectively create their own digital twins.

DTOs offer the technology for building a social-ecologically integrated ocean ecosystem with observation- and modelling networks that support sustainable ocean governance. 

How to cite: Kollert, J., Visbeck, M., and Brönner, U.: Digital Twins of the Ocean – Opportunities to Inform Sustainable Ocean Governance, EGU General Assembly 2023, Vienna, Austria, 23–28 Apr 2023, EGU23-15688, https://doi.org/10.5194/egusphere-egu23-15688, 2023.

vEGN.8 | EGU23-13106 | ESSI1.5
An Unsupervised Anomaly Detection Problem in Urban InSAR-PSP Long Time-series
Ridvan Kuzu, Yi Wang, Octavian Dumitru, Leonardo Bagaglini, Giorgio Pasquali, Filippo Santarelli, Francesco Trillo, Sudipan Saha, and Xiao Xiang Zhu

Interferometric Synthetic Aperture Radar (InSAR) satellite measurements are an effective tool for monitoring ground motion with millimetric resolution over long periods of time. The Persistent Scatterer Pair (PSP) method, developed in [1], is particularly useful for detecting differential displacements of buildings at multiple positions with few assumptions about the background environment. As a result, anomalous behaviours in building motion can be detected through PSP time series, which are commonly used to perform risk assessments in hazardous areas and diagnostic analyses after damage or collapse events. However, current autonomous early warning systems based on PSP-InSAR data are limited to detecting changes in linear trends and rely on sinusoidal and polynomial models [2]. This can be problematic if background signals exhibit more complex behaviours, as anomalous displacements may be difficult to identify. To address this issue, we propose an unsupervised anomaly detection method using Artificial Intelligence algorithms to identify potentially anomalous building motions based on PSP long time-series data.

To identify anomalous building motions, we applied two different AI algorithms based on Long Short-Term Memory Autoencoder inspired by [3] and a Graph Neural Network version of it. LSTM Autoencoder is an unsupervised representation learning framework that captures data representations by reconstructing the correct order of shuffled time series. Its encoder part is used to extract feature representations of a time series, while the decoder part is used to reconstruct the time series. By assuming that most stable samples exhibit similar temporal changes, this algorithm can be used for anomaly detection (as the reconstruction loss would be high for anomalous time series).

The data used in this study were provided by the European Ground Motion Service over a rectangular area surrounding the city of Rome and include approximately 500,000 time series aggregated over more than 80,000 buildings. The time period covered is 2015 to 2020.

In our proposed approach, we first extract deep feature representations for each timestamp of a non-anomalous time series. The feature sequence is then shuffled and passed through an LSTM encoder-decoder network. By learning to reconstruct the feature sequence with the correct order, the network is able to recognize high-level representations of the time series. In the second step, the pre-trained network is used to reconstruct another time series. If the time series is non-anomalous, the correct order can be reconstructed with high confidence; otherwise, it is difficult to reconstruct the correct order. By selecting an appropriate threshold, anomalies can be detected with high reconstruction losses.
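
A simplified sketch of the approach (plain sequence reconstruction rather than the shuffled-order variant described above; synthetic displacement series, not the RepreSent code): an LSTM autoencoder is trained on stable series, and a high reconstruction error flags a candidate anomaly.

    # Simplified LSTM-autoencoder anomaly scoring on synthetic displacement series.
    import torch
    import torch.nn as nn

    seq_len, batch = 60, 32

    class LSTMAutoencoder(nn.Module):
        def __init__(self, hidden: int = 32):
            super().__init__()
            self.encoder = nn.LSTM(1, hidden, batch_first=True)
            self.decoder = nn.LSTM(hidden, hidden, batch_first=True)
            self.out = nn.Linear(hidden, 1)

        def forward(self, x):
            _, (h, _) = self.encoder(x)                      # summary of the series
            z = h[-1].unsqueeze(1).repeat(1, x.size(1), 1)   # repeat code per time step
            dec, _ = self.decoder(z)
            return self.out(dec)

    model = LSTMAutoencoder()
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)

    for step in range(5):                                    # train on "stable" series
        t = torch.linspace(0, 1, seq_len)
        series = (torch.sin(8 * t) + 0.05 * torch.randn(batch, seq_len)).unsqueeze(-1)
        loss = nn.functional.mse_loss(model(series), series)
        opt.zero_grad(); loss.backward(); opt.step()

    # Score a new series: a high reconstruction error suggests anomalous motion.
    with torch.no_grad():
        candidate = torch.linspace(0, 3, seq_len).expand(1, seq_len).unsqueeze(-1)
        score = nn.functional.mse_loss(model(candidate), candidate).item()
    print(f"reconstruction error of candidate series: {score:.4f}")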

Overall, our proposed AI-based approach shows promising results for identifying anomalous building motions in PSP long time-series data. The use of unsupervised learning allows for more accurate statistical representations of the data and more reliable detection of anomalous behaviours. This approach has the potential to improve autonomous early warning systems for risk assessments and diagnostic analyses in dangerous areas.

This work is part of the RepreSent project funded by the European Space Agency (NO:4000137253/22/I-DT).

[1] https://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=4779025
[2] https://www.mdpi.com/2072-4292/10/11/1816/pdf
[3] https://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=9307226

How to cite: Kuzu, R., Wang, Y., Dumitru, O., Bagaglini, L., Pasquali, G., Santarelli, F., Trillo, F., Saha, S., and Zhu, X. X.: An Unsupervised Anomaly Detection Problem in Urban InSAR-PSP Long Time-series, EGU General Assembly 2023, Vienna, Austria, 23–28 Apr 2023, EGU23-13106, https://doi.org/10.5194/egusphere-egu23-13106, 2023.

vEGN.9 | EGU23-12333 | ESSI1.5 | ECS
An AI hybrid predictive tool for extreme hurricane forecasting
Javier Martinez Amaya, Cristina Radin, Veronica Nieves, Nicolas Longépé, and Jordi Muñoz-Marí

Hurricanes, and more generally tropical cyclones, are among the most destructive natural hazards, and are arguably changing under climate change influences. Applying the power of AI to predict the extreme behavior of these events could be key to helping minimize hurricane damage. AI tools offer a significant opportunity to: 1) identify non-linear relationships between changing hurricane-related characteristics and tropical storm intensification, and 2) anticipate responses to these changes. Another key part of this AI-based system is uncertainty quantification for decision-making processes. In this context, we present an improved ML hybrid model for predicting the development of extreme hurricane events, which includes effective information on the spatio-temporal evolution of structural parameters extracted from IR satellite images. This approach, which combines Convolutional Neural Networks (CNNs) and a Random Forest (RF) classification framework, has been trained/tested with data from 1995 over the North Atlantic and Northeast Pacific regions. Results from the CNN-RF model show a performance of 80% or better for lead times of up to three days ahead (every 6 hours). With the proposed configuration, the overall precision has increased by at least 8%. This model could be further improved with the inclusion of new variables linked to environmental factors, which will be progressively explored.
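
A hedged sketch of such a CNN + random forest hybrid on synthetic data (not the authors' model): a small CNN turns IR-like image patches into feature vectors, and a random forest classifies them.

    # Toy CNN feature extraction feeding a random-forest classifier.
    import numpy as np
    import torch
    import torch.nn as nn
    from sklearn.ensemble import RandomForestClassifier

    cnn = nn.Sequential(                       # untrained feature-extractor stand-in
        nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(4), nn.Flatten()
    )

    rng = np.random.default_rng(0)
    images = torch.randn(200, 1, 64, 64)       # synthetic IR patches
    labels = rng.integers(0, 2, 200)           # 1 = extreme development (synthetic)

    with torch.no_grad():
        features = cnn(images).numpy()         # shape (200, 8 * 4 * 4)

    clf = RandomForestClassifier(n_estimators=100, random_state=0)
    clf.fit(features[:150], labels[:150])
    print("held-out accuracy:", clf.score(features[150:], labels[150:]))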

How to cite: Martinez Amaya, J., Radin, C., Nieves, V., Longépé, N., and Muñoz-Marí, J.: An AI hybrid predictive tool for extreme hurricane forecasting, EGU General Assembly 2023, Vienna, Austria, 23–28 Apr 2023, EGU23-12333, https://doi.org/10.5194/egusphere-egu23-12333, 2023.