ITS1.3/CL0.1.18 | Interfacing machine learning and numerical modelling - challenges, successes, and lessons learned.
EDI
Convener: Jack Atkinson (ECS) | Co-conveners: Julien Le Sommer, Alessandro Rigazzi, Filippo Gatti (ECS), Will Chapman (ECS), Nishtha Srivastava (ECS), Emily Shuckburgh
Orals | Fri, 19 Apr, 08:30–10:15 (CEST) | Room N2
Posters on site | Attendance Fri, 19 Apr, 10:45–12:30 (CEST) | Display Fri, 19 Apr, 08:30–12:30 | Hall X5
Posters virtual | Attendance Fri, 19 Apr, 14:00–15:45 (CEST) | Display Fri, 19 Apr, 08:30–18:00 | vHall X5
Machine learning (ML) is being used throughout the geophysical sciences with a wide variety of applications.
Advances in big data, deep learning, and other areas of artificial intelligence (AI) have opened up a number of new approaches.

Many fields (climate, ocean, NWP, space weather etc.) make use of large numerical models and are now seeking to enhance these by combining them with scientific ML/AI.
Examples include ML emulation of computationally intensive processes, data-driven parameterisations of sub-grid processes trained on high-resolution models, and Bayesian optimisation of model parameters and ensembles, among others.

Doing so, however, brings a number of unique challenges, including but not limited to:
- enforcing physical compatibility and conservation laws, and incorporating physical intuition into ML models,
- ensuring numerical stability,
- coupling of numerical models to ML frameworks and language interoperation,
- handling computer architectures and data transfer,
- adaptation/generalisation to different models/resolutions/climatologies,
- explaining, understanding, and evaluating model performance and biases.

Addressing these requires knowledge of several areas and builds on advances already made in domain science, numerical simulation, machine learning, high performance computing, data assimilation etc.

We solicit talks that address any topics relating to the above.
Anyone working to combine machine learning techniques with numerical modelling is encouraged to participate in this session.

Orals: Fri, 19 Apr | Room N2

Chairpersons: Jack Atkinson, Filippo Gatti, Julien Le Sommer
08:30–08:35
Understanding and evaluating
08:35–08:45 | EGU24-10087 | ECS | On-site presentation
Ségolène Crossouard, Masa Kageyama, Mathieu Vrac, Thomas Dubos, Soulivanh Thao, and Yann Meurdesoif

Atmospheric general circulation models include two main distinct components: the dynamical one solves the Navier-Stokes equations to provide a mathematical representation of atmospheric motions, while the physical one includes parameterizations representing small-scale phenomena such as turbulence and convection (Balaji et al., 2022). However, the computational demands of the parameterizations limit the numerical efficiency of the models. The burgeoning field of machine learning opens new horizons by producing accurate, robust and fast emulators of parts of a climate model. In particular, such emulators can reliably reproduce physical processes, thus providing an efficient alternative to traditional process representation. Indeed, pioneering studies (Gentine et al., 2018; Rasp et al., 2018) have shown that these emulators can replace one or more computationally expensive parameterizations and thus have the potential to enhance numerical efficiency.

Our research work aligns with these perspectives: it explores the potential of an emulator of the physical parameterizations of the IPSL climate model, and more specifically of the ICOLMDZOR atmospheric model (combining DYNAMICO, the dynamical solver using an icosahedral grid; LMDZ, the atmospheric component; and ORCHIDEE, the surface component). The emulator could improve performance, as almost half of the total computing time is currently spent on the physical part of the model.

We have developed two initial offline emulators of the physical parameterizations of our standard model, in an idealized aquaplanet configuration, to reproduce profiles of tendencies of the key variables - zonal wind, meridional wind, temperature, humidity and water tracers - for each atmospheric column. The results of these emulators, based on a dense neural network or a convolutional neural network, have begun to show their potential, since we easily obtain good performance in terms of the mean of the predicted tendencies. Nevertheless, their variability is not well captured, and the variance is underestimated, posing challenges for our application. A study of physical processes revealed that turbulence was at the root of the problem. Knowing how turbulence is parameterized in the model, we show that incorporating physical knowledge into the learning process, through latent variables used as predictors, leads to a significant improvement in the variability.

Future plans involve an online physics emulator, coupled with the atmospheric model to provide a better assessment of the learning process (Yuval et al., 2021).

How to cite: Crossouard, S., Kageyama, M., Vrac, M., Dubos, T., Thao, S., and Meurdesoif, Y.: Contribution of latent variables to emulate the physics of the IPSL model, EGU General Assembly 2024, Vienna, Austria, 14–19 Apr 2024, EGU24-10087, https://doi.org/10.5194/egusphere-egu24-10087, 2024.

08:45–08:55 | EGU24-2691 | Highlight | On-site presentation
Tamsin Edwards, Fiona Turner, Jonathan Rougier, and Jeremy Rohmer and the EU PROTECT project

In the EU Horizon 2020 project PROTECT, we have performed around 5000 simulations of the Greenland and Antarctic ice sheets and the world’s glaciers to predict the land ice contribution to sea level rise up to 2300. Unlike previous international model intercomparison projects (Edwards et al., 2021; IPCC Sixth Assessment Report, 2021), this is a "grand ensemble" sampling every type of model uncertainty – plausible structures, parameters and initial conditions – and is performed under many possible boundary conditions (climate change projected by multiple global and regional climate models). The simulations also start in the past, unlike the previous projects, to assess the impact of these uncertainties on historical changes.

We use probabilistic machine learning to emulate the relationships between model inputs (climate change; ice sheet and glacier model choices) and outputs (sea level contribution), so we can make predictions for any climate scenario and sample model uncertainties more thoroughly than with the original physical models. We try multiple machine learning methods that have different strengths in terms of speed, smoothness, interpretability, and performance for categorical uncertainties (Gaussian Processes, random forests).

The design of the grand ensemble allows the influence of all these uncertainties to be captured explicitly, rather than treating them as simple noise, and the earlier start date allows formal calibration (Bayesian or history matching) with observed ice sheet and glacier changes, to improve confidence (and typically reduce uncertainties) in the projections. Here we show preliminary projections for global mean sea level rise up to 2300 using these advances, and describe challenges and solutions found along the way.

How to cite: Edwards, T., Turner, F., Rougier, J., and Rohmer, J. and the EU PROTECT project: Grand designs: quantifying many kinds of model uncertainty to improve projections of sea level rise, EGU General Assembly 2024, Vienna, Austria, 14–19 Apr 2024, EGU24-2691, https://doi.org/10.5194/egusphere-egu24-2691, 2024.

Tools and Techniques
08:55–09:05 | EGU24-14744 | ECS | Highlight | On-site presentation
Said Ouala, Bertrand Chapron, Fabrice Collard, Lucile Gaultier, and Ronan Fablet

Artificial intelligence and deep learning are currently reshaping numerical simulation frameworks by introducing new modeling capabilities. These frameworks are extensively investigated in the context of model correction and parameterization, where they demonstrate great potential and often outperform traditional physical models. Most of these efforts in defining hybrid dynamical systems follow offline learning strategies, in which the neural parameterization (called here the sub-model) is trained to output an ideal correction. Yet, these hybrid models can face hard limitations when defining what a relevant sub-model response should be, i.e. one that translates into good forecasting performance. End-to-end learning schemes, also referred to as online learning, could address such a shortcoming by allowing the deep learning sub-models to train on historical data. However, defining end-to-end training schemes for the calibration of neural sub-models in hybrid systems requires working with an optimization problem that involves the solver of the physical equations. Online learning methodologies thus require the numerical model to be differentiable, which is not the case for most modeling systems. To overcome this difficulty and bypass the differentiability challenge of physical models, we present an efficient and practical online learning approach for hybrid systems. The method, called EGA for Euler Gradient Approximation, assumes an additive neural correction to the physical model and an explicit Euler approximation of the gradients. We demonstrate that EGA converges to the exact gradients in the limit of infinitely small time steps. Numerical experiments are performed on various case studies, including prototypical ocean-atmosphere dynamics. Results show significant improvements over offline learning, highlighting the potential of end-to-end online learning for hybrid modeling.
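
The abstract's central trick can be sketched in a few lines of PyTorch. The sketch below is illustrative, not the authors' code: `physical_step`, `correction`, and all sizes are hypothetical; the physics is evaluated as a non-differentiable black box, and gradients reach the weights only through the additive explicit-Euler correction term.

```python
import torch

# Hypothetical stand-ins, for illustration only: `physical_step` advances
# the (non-differentiable) numerical model by dt; `correction` is the
# trainable neural sub-model added on top of it.
def physical_step(x: torch.Tensor, dt: float) -> torch.Tensor:
    return x + dt * (-x)  # placeholder physics: simple linear decay

correction = torch.nn.Sequential(
    torch.nn.Linear(3, 64), torch.nn.Tanh(), torch.nn.Linear(64, 3))
optimizer = torch.optim.Adam(correction.parameters(), lr=1e-3)

def hybrid_step(x: torch.Tensor, dt: float) -> torch.Tensor:
    # Additive hybrid model: the physics is evaluated as a black box
    # (no gradient flows through it), so gradients reach the weights
    # only via the explicit-Euler correction term.
    with torch.no_grad():
        x_phys = physical_step(x, dt)
    return x_phys + dt * correction(x)

def train_on_trajectory(x_obs: torch.Tensor, dt: float) -> float:
    # x_obs: (n_steps + 1, state_dim) observed trajectory.
    x, loss = x_obs[0], torch.zeros(())
    for t in range(x_obs.shape[0] - 1):
        x = hybrid_step(x, dt)
        loss = loss + torch.mean((x - x_obs[t + 1]) ** 2)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return float(loss)
```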

How to cite: Ouala, S., Chapron, B., Collard, F., Gaultier, L., and Fablet, R.: End-to-end Learning in Hybrid Modeling Systems: How to Deal with Backpropagation Through Numerical Solvers, EGU General Assembly 2024, Vienna, Austria, 14–19 Apr 2024, EGU24-14744, https://doi.org/10.5194/egusphere-egu24-14744, 2024.

09:05–09:15 | EGU24-16149 | ECS | On-site presentation
Alistair White, Niki Kilbertus, Maximilian Gelbrecht, and Niklas Boers

Neural differential equations (NDEs) provide a powerful and general framework for interfacing machine learning with numerical modeling. However, constraining NDE solutions to obey known physical priors, such as conservation laws or restrictions on the allowed state of the system, has been a challenging problem in general. We present stabilized NDEs (SNDEs) [1], the first method for imposing arbitrary explicit constraints in NDE models. Alongside robust theoretical guarantees, we demonstrate the effectiveness of SNDEs across a variety of settings and using diverse classes of constraints. In particular, SNDEs exhibit vastly improved generalization and stability compared to unconstrained baselines. Building on this work, we also present constrained NDEs (CNDEs), a novel and complementary method with fewer hyperparameters and stricter constraints. We compare and contrast the two methods, highlighting their relative merits and offering an intuitive guide to choosing the best method for a given application.

[1] Alistair White, Niki Kilbertus, Maximilian Gelbrecht, Niklas Boers. Stabilized neural differential equations for learning dynamics with explicit constraints. In Advances in Neural Information Processing Systems, 2023.
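
A minimal, hedged sketch of the stabilization idea in [1], with an illustrative constraint and architecture rather than the paper's code: the learned dynamics f_theta are augmented with a term that vanishes on the constraint manifold g(x) = 0 and pulls drifting trajectories back toward it.

```python
import torch

# f_theta: the free-form learned dynamics (illustrative architecture).
f_theta = torch.nn.Sequential(
    torch.nn.Linear(2, 32), torch.nn.Tanh(), torch.nn.Linear(32, 2))

def g(x: torch.Tensor) -> torch.Tensor:
    # Example explicit constraint g(x) = 0: states stay on the unit circle.
    return (x ** 2).sum(dim=-1, keepdim=True) - 1.0

def stabilized_rhs(x: torch.Tensor, gamma: float = 10.0) -> torch.Tensor:
    # dx/dt = f_theta(x) - gamma * grad g(x) * g(x): the extra term is zero
    # on the constraint manifold and pulls drifting trajectories back
    # toward it. The gradient of g is taken at a detached copy of x,
    # a simplification that keeps the sketch short.
    with torch.enable_grad():
        xg = x.detach().requires_grad_(True)
        (grad_g,) = torch.autograd.grad(g(xg).sum(), xg)
    return f_theta(x) - gamma * grad_g * g(x)
```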

How to cite: White, A., Kilbertus, N., Gelbrecht, M., and Boers, N.: Two Methods for Constraining Neural Differential Equations, EGU General Assembly 2024, Vienna, Austria, 14–19 Apr 2024, EGU24-16149, https://doi.org/10.5194/egusphere-egu24-16149, 2024.

09:15–09:25 | EGU24-17852 | Highlight | On-site presentation
Dominic Orchard, Elliott Kasoar, Jack Atkinson, Thomas Meltzer, Simon Clifford, and Athena Elafrou

Across geoscience, numerical models are used for understanding, experimentation, and prediction of complex systems. Many of these models are computationally intensive and involve sub-models for certain processes, often known as parameterisations. Such parameterisations may capture unresolved sub-grid processes, such as turbulence, or represent fast-moving dynamics, such as gravity waves, or provide a combination of the two, such as microphysics schemes.

Recently there has been significant interest in incorporating machine learning (ML) methods
into these parameterisations. Two of the main drivers are the emulation of computationally intensive processes, thereby reducing computational resources required, and the development of data-driven parameterisation schemes that could improve accuracy through capturing ‘additional physics’.

Integrating ML sub-models in the context of numerical modelling brings a number of challenges, some scientific, others computational. For example, many numerical models are written in Fortran, whilst the majority of machine learning is conducted in Python-based frameworks such as PyTorch that provide advanced ML modelling capabilities. As such, there is a need to leverage externally developed ML models from Fortran, rather than take the error-prone approach of rewriting neural networks directly in Fortran and losing the benefits of highly developed libraries.

Interoperation of the two languages requires care, and increases the burden on researchers and developers. To reduce these barriers we have developed the open-source FTorch library [1] for coupling PyTorch models to Fortran. The library is designed to streamline the development process, offering a Fortran interface mimicking the style of the Python library whilst abstracting away the complex details of interoperability to provide a computationally efficient interface.
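
For orientation, the Python-side export step that such a coupling typically requires is sketched below: FTorch consumes saved TorchScript modules, so a trained PyTorch model is traced and written to disk before being loaded from the Fortran side. The network and file names here are hypothetical.

```python
import torch

# Hypothetical parameterisation network standing in for a trained model.
class ColumnNet(torch.nn.Module):
    def __init__(self, n_levels: int = 40):
        super().__init__()
        self.net = torch.nn.Sequential(
            torch.nn.Linear(n_levels, 128), torch.nn.ReLU(),
            torch.nn.Linear(128, n_levels))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)

model = ColumnNet().eval()
example_input = torch.randn(1, 40)              # one atmospheric column
traced = torch.jit.trace(model, example_input)  # TorchScript artefact
traced.save("column_net.pt")  # file subsequently loaded from Fortran
```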

A significant benefit of this approach is that it enables inference to be performed on either CPU or GPU, allowing deployment on a variety of architectures with low programmer effort. We will report on the performance characteristics of our approach in both the CPU and GPU settings, and include a comparison with alternative approaches.

This approach has been deployed on two relevant case studies in the geoscience context: a gravity-wave parameterisation in an intermediate complexity atmospheric model (MiMA) based on Espinosa et al. [2], and a convection parameterisation in a GCM (CAM/CESM) based on Yuval et al. [3]. We will report on these applications and lessons learned from their development. 

[1] FTorch https://github.com/Cambridge-ICCS/FTorch
[2] Espinosa et al., Machine Learning Gravity Wave Parameterization Generalizes to Capture the QBO and Response to Increased CO2, GRL 2022 https://doi.org/10.1029/2022GL098174
[3] Yuval et al., Use of Neural Networks for Stable, Accurate and Physically Consistent Parameterization of Subgrid Atmospheric Processes With Good Performance at Reduced Precision, GRL 2021 https://doi.org/10.1029/2020GL091363

How to cite: Orchard, D., Kasoar, E., Atkinson, J., Meltzer, T., Clifford, S., and Elafrou, A.: FTorch - lowering the technical barrier of incorporating ML into Fortran models, EGU General Assembly 2024, Vienna, Austria, 14–19 Apr 2024, EGU24-17852, https://doi.org/10.5194/egusphere-egu24-17852, 2024.

Examples of application
09:25–09:35 | EGU24-3520 | On-site presentation
Steven Hardiman, Adam Scaife, Annelize van Niekerk, Rachel Prudden, Aled Owen, Samantha Adams, Tom Dunstan, Nick Dunstone, and Sam Madge

Use of machine learning algorithms in climate simulations requires such algorithms to replicate certain aspects of the physics in general circulation models.  In this study, a neural network is used to mimic the behavior of one of the subgrid parameterization schemes used in global climate models, the nonorographic gravity wave scheme.  Use of a one-dimensional mechanistic model is advocated, allowing neural network hyperparameters to be chosen based on emergent features of the coupled system with minimal computational cost, and providing a testbed prior to coupling to a climate model. A climate model simulation, using the neural network in place of the existing parameterization scheme, is found to accurately generate a quasi-biennial oscillation of the tropical stratospheric winds, and correctly simulate the nonorographic gravity wave variability associated with the El Niño–Southern Oscillation and stratospheric polar vortex variability. These internal sources of variability are essential for providing seasonal forecast skill, and the gravity wave forcing associated with them is reproduced without explicit training for these patterns.

How to cite: Hardiman, S., Scaife, A., van Niekerk, A., Prudden, R., Owen, A., Adams, S., Dunstan, T., Dunstone, N., and Madge, S.: Machine Learning for Nonorographic Gravity Waves in a Climate Model, EGU General Assembly 2024, Vienna, Austria, 14–19 Apr 2024, EGU24-3520, https://doi.org/10.5194/egusphere-egu24-3520, 2024.

09:35–09:45 | EGU24-7455 | ECS | On-site presentation
Blanka Balogh, David Saint-Martin, Olivier Geoffroy, Mohamed Aziz Bhouri, and Pierre Gentine

Interfacing challenges continue to impede the implementation of neural network-based parameterizations into numerical models of the atmosphere, particularly those written in Fortran. In this study, we leverage a specialized interfacing tool to successfully implement a neural network-based parameterization for both deep and shallow convection within the General Circulation Model, ARPEGE-Climat. Our primary objective is to not only evaluate the performance of this data-driven parameterization but also assess the numerical stability of ARPEGE-Climat when coupled with a convection parameterization trained on data from a different high-resolution model, namely SPCAM 5. 

The performance evaluation encompasses both offline and online assessments of the data-driven parameterization within this framework. The data-driven parameterization for convection is designed using a multi-fidelity approach and is adaptable for use in a stochastic configuration. Challenges associated with this approach include ensuring consistency between variables in ARPEGE-Climat and the parameterization based on data from SPCAM 5, as well as managing disparities in geometry (e.g., horizontal and vertical resolutions), which are crucial factors affecting the intermodel parameterization transferability.

How to cite: Balogh, B., Saint-Martin, D., Geoffroy, O., Bhouri, M. A., and Gentine, P.: Assessment of ARPEGE-Climat using a neural network convection parameterization based upon data from SPCAM 5, EGU General Assembly 2024, Vienna, Austria, 14–19 Apr 2024, EGU24-7455, https://doi.org/10.5194/egusphere-egu24-7455, 2024.

09:45–09:55 | EGU24-16148 | ECS | On-site presentation
Alexis Barge and Julien Le Sommer

The combination of Machine Learning (ML) with geoscientific models is an active area of research with a wide variety of applications. A key practical question for those models is how ML components written in high-level languages can be encoded and maintained within pre-existing legacy solvers written in low-level languages (such as Fortran). We address this question through the strategy of creating pipes between a geoscientific code and ML components executed in their own separate scripts. The main advantage of this approach is that inference models can easily be shared within the community without being bound to one code and its specific numerical methods. Here, we focus on OASIS (https://oasis.cerfacs.fr/en/), a Fortran coupling library that performs field exchanges between coupled executables. It is commonly used in the numerical geoscience community to couple different codes and assemble earth-system models. Recent releases of OASIS provide C and Python APIs, which enable coupling between codes written in different languages. We take advantage of these new features and the presence of OASIS in community codes, and propose a Python library (named Eophis) that facilitates the deployment of inference models for coupled execution. In essence, Eophis allows users to: (i) wrap an OASIS interface to exchange data with a coupled earth-system code, (ii) wrap inference models into a simple in/out interface, and (iii) emulate time evolution to synchronize connections between the earth-system code and the models. We set up a demonstration case with the European numerical code NEMO, in which the pre-existing OASIS interface has been slightly modified. A forced global ocean simulation is performed with regular exchanges of 2D and 3D fields with Eophis. Received data are then sent to inference models that are not implemented in NEMO. The performance of the solution is finally assessed against reference runs.

How to cite: Barge, A. and Le Sommer, J.: Online deployment of pre-trained machine learning components within Earth System models via OASIS, EGU General Assembly 2024, Vienna, Austria, 14–19 Apr 2024, EGU24-16148, https://doi.org/10.5194/egusphere-egu24-16148, 2024.

09:55–10:05 | EGU24-10749 | ECS | On-site presentation
Simon Driscoll, Alberto Carrassi, Julien Brajard, Laurent Bertino, Marc Bocquet, Einar Olason, and Amos Lawless

Sea ice plays an essential role in global ocean circulation and in regulating Earth's climate and weather, and melt ponds that form on the ice have a profound impact on the Arctic's climate by altering the ice albedo. Melt pond evolution is complex, sub-grid-scale and poorly understood, and melt ponds are represented in sea ice models as parametrisations. Parametrisations of these physical processes are based on a number of assumptions and can include many uncertain parameters that have a substantial effect on the simulated evolution of the melt ponds.

We have shown, using Sobol sensitivity analysis and by investigating perturbed parameter ensembles (PPEs), that a state-of-the-art sea ice column model, Icepack, is substantially sensitive to its uncertain melt pond parameters. These PPEs demonstrate that perturbing melt pond parameters (within known ranges of uncertainty) causes predicted sea ice thickness over the Arctic Ocean to differ by many metres after only a decade of simulation. Understanding the sources of uncertainty, improving parametrisations and fine-tuning the parameters is paramount, but usually a very complex and difficult task. Given this uncertainty, we propose to replace the sub-grid-scale melt pond parametrisation (MPP) in Icepack with a machine learning emulator.

Building and replacing the MPP with a machine learning emulator has been done in two broad steps that contain multiple computational challenges. The first is generating a melt pond emulator using 'perfect' or 'model' data. Here we demonstrate a proof of concept and show how we achieve numerically stable simulations of Icepack when embedding an emulator in place of the MPP - with Icepack running stably for the whole length of the simulations (over a decade) across the Arctic. 

Secondly, we develop offline an emulator from observational data that faithfully predicts observed sea ice albedo and melt pond fraction given climatological input variables. Embedding an observational emulator raises different challenges compared with using model data; for example, not all variables needed by the host model are observed or observable for an emulator to predict. We discuss how we achieve online simulations interfacing this emulator with the Icepack model.

Our focus on using column models ensures that our observational emulator of sea ice albedo and melt pond fraction can readily be used in sea ice models around the world, irrespective of grid resolutions and mesh specifications, and offers one approach for creating general emulators that can be used by many climate models. 

How to cite: Driscoll, S., Carrassi, A., Brajard, J., Bertino, L., Bocquet, M., Olason, E., and Lawless, A.: Replacing parametrisations of melt ponds on sea ice with machine learning emulators, EGU General Assembly 2024, Vienna, Austria, 14–19 Apr 2024, EGU24-10749, https://doi.org/10.5194/egusphere-egu24-10749, 2024.

10:05–10:15 | EGU24-5048 | On-site presentation
Steven J. Gibbons, Erlend Briseid Storrøsten, Naveen Ramalingam, Stefano Lorito, Manuela Volpe, Carlos Sánchez-Linares, and Finn Løvholt

Predicting coastal tsunami impact requires the computation of inundation metrics such as maximum inundation height or momentum flux at all locations of interest. The high computational cost of inundation modelling, in both long-term tsunami hazard assessment and urgent tsunami computing, comes from two major factors: (1) the high number of simulations needed to capture the source uncertainty and (2) the need to solve the nonlinear shallow water equations on high-resolution grids. We seek to mitigate the second of these factors using machine learning. The offshore tsunami wave is far cheaper to calculate than the full inundation map, and an emulator able to predict an inundation map with acceptable accuracy from simulated offshore wave-height time series would allow both more rapid hazard estimates and the processing of greater numbers of scenarios. The procedure would necessarily be specific to one stretch of coastline, and a complete numerical simulation is needed for each member of the training set. Success of an inundation emulator would demand an acceptable reduction in time-to-solution, a modest number of training scenarios, acceptable accuracy in inundation predictions, and good performance for high-impact, low-probability scenarios. We have developed a convolutional encoder-decoder based neural network and applied it to a dataset of high-resolution inundation simulations for the Bay of Catania in Sicily, calculated for almost 28000 subduction earthquake scenarios in the Mediterranean Sea. We demonstrate encouraging performance in this case study for relatively small training sets (of the order of several hundred scenarios), provided that appropriate choices are made for the model parameters, the loss function, and the training sets. Scenarios with severe inundation need to be very well represented in the training sets for the ML models to perform sufficiently well for the most tsunamigenic earthquakes. The importance of regularization and model parameter choices increases as the size of the training sets decreases.
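
A hedged sketch of what a convolutional encoder-decoder of this kind could look like is given below; the gauge count, grid size, and layer shapes are illustrative assumptions, not the authors' architecture.

```python
import torch
from torch import nn

# Illustrative emulator: offshore wave-height time series at a few gauges
# are encoded and decoded into a 2D map of maximum inundation height.
class InundationEmulator(nn.Module):
    def __init__(self, n_gauges: int = 4):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv1d(n_gauges, 32, 7, stride=2, padding=3), nn.ReLU(),
            nn.Conv1d(32, 64, 7, stride=2, padding=3), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten())
        self.decoder = nn.Sequential(
            nn.Linear(64, 128 * 8 * 8), nn.ReLU(),
            nn.Unflatten(1, (128, 8, 8)),
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 1, 4, stride=2, padding=1))

    def forward(self, gauges: torch.Tensor) -> torch.Tensor:
        # gauges: (batch, n_gauges, t_len) -> (batch, 1, 64, 64) map
        return self.decoder(self.encoder(gauges))
```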

How to cite: Gibbons, S. J., Briseid Storrøsten, E., Ramalingam, N., Lorito, S., Volpe, M., Sánchez-Linares, C., and Løvholt, F.: Emulators for Predicting Tsunami Inundation Maps at High Resolution, EGU General Assembly 2024, Vienna, Austria, 14–19 Apr 2024, EGU24-5048, https://doi.org/10.5194/egusphere-egu24-5048, 2024.

Posters on site: Fri, 19 Apr, 10:45–12:30 | Hall X5

Display time: Fri, 19 Apr, 08:30–12:30
X5.167 | EGU24-11880 | ECS
Oriol Pomarol Moya, Derek Karssenberg, Walter Immerzeel, Madlene Nussbaum, and Siamak Mehrkanoon

Machine learning (ML) models have become popular in the Earth Sciences for improving predictions based on observations. Beyond pure prediction, though, ML has a large potential to create surrogates that emulate complex numerical simulation models, considerably reducing run time, hence facilitating their analysis.

The behaviour of eco-geomorphological systems is often examined using minimal models, simple equation-based expressions derived from expert knowledge. From them, one can identify complex system characteristics such as equilibria, tipping points, and transients. However, model formulation is largely subjective, thus disputable. Here, we propose an alternative approach where a ML surrogate of a high-fidelity numerical model is used instead, conserving suitability for analysis while incorporating the higher-order physics of its parent model. The complexities of developing such an ML surrogate for understanding the co-evolution of vegetation, hydrology, and geomorphology on a geological time scale are presented, highlighting the potential of this approach to capture novel, data-driven scientific insights.

To obtain the surrogate, the ML models were trained on a data set simulating a coupled hydrological-vegetation-soil system. The rate of change of the two variables describing the system, soil depth and biomass, was used as output, taking their values at the previous time step and the pre-defined grazing pressure as inputs. Two popular ML methods, random forest (RF) and fully connected neural network (NN), were used. As proof of concept and to configure the model setup, we first trained the ML models on the output of the minimal model described in [1], comparing the ML responses at gridded inputs with the derivative values predicted by the minimal model. While RF required less tuning to achieve competitive results (a relative root mean squared error (rRMSE) of 5.8% and 0.04% for biomass and soil depth, respectively), the NN produced a better-behaved outcome, reaching an rRMSE of 2.2% and 0.01%. Using the same setup, the ML surrogates were trained on a high-resolution numerical model describing the same system. The study of the response from this surrogate provided a more accurate description of the dynamics and equilibria of the hillslope ecosystem, depicting, for example, a much more complex process of hillslope desertification than captured by the minimal model.
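
A minimal sketch of the surrogate-training step described above, assuming hypothetical arrays exported from the simulator; the authors' exact tuning and rRMSE normalisation may differ (range normalisation is used here as one common convention).

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Assumed files exported from the simulator: states (soil depth, biomass,
# grazing pressure) and the corresponding rates of change of the first two.
X = np.load("states.npy")        # shape (n_samples, 3)
y = np.load("derivatives.npy")   # shape (n_samples, 2)

surrogate = RandomForestRegressor(n_estimators=500, random_state=0)
surrogate.fit(X, y)

# One common rRMSE convention: RMSE normalised by the range of the target.
pred = surrogate.predict(X)
rrmse = 100 * np.sqrt(((pred - y) ** 2).mean(axis=0)) / (
    y.max(axis=0) - y.min(axis=0))
print(dict(zip(["d_soil_depth", "d_biomass"], np.round(rrmse, 2))))
```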

It is thus concluded that the use of ML models instead of expert-based minimal models may lead to considerably different findings, where ML models have the advantage that they directly rely on system functioning embedded in their parent numerical simulation model.

How to cite: Pomarol Moya, O., Karssenberg, D., Immerzeel, W., Nussbaum, M., and Mehrkanoon, S.: Understanding geoscientific system behaviour from machine learning surrogates, EGU General Assembly 2024, Vienna, Austria, 14–19 Apr 2024, EGU24-11880, https://doi.org/10.5194/egusphere-egu24-11880, 2024.

X5.168 | EGU24-20863 | ECS
Marieke Wesselkamp, Matthew Chantry, Maria Kalweit, Ewan Pinnington, Margarita Choulga, Joschka Boedecker, Carsten Dormann, Florian Pappenberger, and Gianpaolo Balsamo

While forecasting of climate and earth system processes has long been a task for numerical models, the rapid development of deep learning applications has recently brought forth competitive AI systems for weather prediction. Earth system models (ESMs), despite being an integral part of numerical weather prediction, have not yet attracted the same attention. ESMs forecast water, carbon and energy fluxes and, in coupling with an atmospheric model, provide boundary and initial conditions. We set up a comparison of different deep learning approaches for improving short-term forecasts of land surface and ecosystem states on a regional scale. Using simulations from the numerical model and combining them with observations, we will partially emulate an existing land surface scheme, conduct probabilistic forecasts of core ecosystem processes, and determine forecast horizons for all variables.

How to cite: Wesselkamp, M., Chantry, M., Kalweit, M., Pinnington, E., Choulga, M., Boedecker, J., Dormann, C., Pappenberger, F., and Balsamo, G.: Partial land surface emulator forecasts ecosystem states at verified horizons, EGU General Assembly 2024, Vienna, Austria, 14–19 Apr 2024, EGU24-20863, https://doi.org/10.5194/egusphere-egu24-20863, 2024.

X5.169 | EGU24-14957 | ECS
Ayush Prasad, Ioanna Merkouriadi, and Aleksi Nummelin

Snow is a crucial element of the sea ice system, impacting various environmental and climatic processes. SnowModel is a numerical model developed to simulate the evolution of snow depth and density, blowing-snow redistribution and sublimation, snow grain size, and thermal conductivity in a spatially distributed, multi-layer snowpack framework. However, SnowModel faces challenges with slow processing speeds and the need for high computational resources. To address these common issues in high-resolution numerical modeling, data-driven emulators are often used. They aim to replicate the output of complex numerical models like SnowModel but with greater efficiency. However, these emulators often face their own set of problems, primarily a lack of generalizability and inconsistency with physical laws. A significant issue related to this is the phenomenon of concept drift, which may arise when an emulator is used in a region or under conditions that differ from its training environment. For instance, an emulator trained on data from one Arctic region might not yield accurate results if applied in another region with distinct snow properties or climatic conditions. In our study, we address these challenges with a physics-guided approach to developing our emulator. By integrating physical laws that govern changes in snow density due to compaction, we aim to create an emulator that is efficient while also adhering to essential physical principles. We evaluated this approach by comparing four machine learning models: Long Short-Term Memory (LSTM), Physics-Guided LSTM, Gradient Boosting Machines, and Random Forest, across five distinct Arctic regions. Our evaluations indicate that all models achieved high accuracy, with the Physics-Guided LSTM model demonstrating the most promising results in terms of accuracy and generalizability. This approach offers a computationally faster way to emulate SnowModel with high fidelity.
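
One common way to make such a model "physics-guided" is to penalise physically inconsistent predictions during training. The sketch below is an illustrative assumption, not the authors' model: an LSTM predicts snow density, and the loss adds a penalty whenever density decreases between time steps, a crude proxy for the compaction constraint.

```python
import torch

# Illustrative physics-guided LSTM: sizes and the constraint are assumptions.
lstm = torch.nn.LSTM(input_size=8, hidden_size=64, batch_first=True)
head = torch.nn.Linear(64, 1)

def predict_density(x: torch.Tensor) -> torch.Tensor:
    # x: (batch, time, 8) forcing variables -> density per time step
    h, _ = lstm(x)
    return head(h).squeeze(-1)

def loss_fn(x, rho_true, lam: float = 0.1) -> torch.Tensor:
    rho_pred = predict_density(x)
    mse = torch.mean((rho_pred - rho_true) ** 2)
    # Penalise negative density increments, taken here as non-physical
    # under compaction; lam weights the physics term against the data fit.
    d_rho = rho_pred[:, 1:] - rho_pred[:, :-1]
    physics_penalty = torch.mean(torch.relu(-d_rho))
    return mse + lam * physics_penalty
```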

How to cite: Prasad, A., Merkouriadi, I., and Nummelin, A.: Exploring data-driven emulators for snow on sea ice , EGU General Assembly 2024, Vienna, Austria, 14–19 Apr 2024, EGU24-14957, https://doi.org/10.5194/egusphere-egu24-14957, 2024.

X5.170 | EGU24-7581
Yongling Zhao, Zhi Wang, Dominik Strebel, and Jan Carmeliet

Urban warming is increasingly exacerbated by more frequent and severe heat extremes. Effectively mitigating overheating necessitates a comprehensive, whole-system approach that integrates various heat mitigation measures to generate rapid and sustained efficacy. However, there remains a significant gap in quantifying the efficacy of mitigation strategies at the city scale.

We address this research question by leveraging mesoscale Weather Research and Forecasting (WRF) models alongside machine-learning (ML) techniques. As a showcase, ML models have been established for Zurich and Basel, Switzerland, utilizing seven WRF-output-based features: shortwave downward radiation (SWDNB), hour of the day (HOUR), zenith angle (COSZEN), rain mixing ratio (QRAIN), longwave downward radiation (LWDNB), canopy water content (CANWAT), and planetary boundary layer height (PBLH). The resultant median R2 values for T2 (2 m temperature) predictions during heatwave and non-heatwave periods are 0.94 and 0.91, respectively.

Within the whole-system approach, we use the ML models to quantify the impact of reducing shortwave radiation absorption at ground surfaces, a potential result of combining shading and reflective-coating-based mitigation measures. Remarkably, a 5% reduction in the absorption of radiation at ground surfaces in Zurich could reduce T2 by as much as 3.5 °C in the city center. During a heatwave in Basel, the potential for cooling is even more pronounced, with temperature decreases of up to 5 °C. These case studies in Zurich and Basel underscore the efficacy of WRF-feature-trained ML models for quantifying heat mitigation strategies at the city scale.
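
A hedged sketch of this quantification workflow: fit an ML model on the seven WRF-output features to predict T2, then perturb the shortwave feature to emulate the mitigation measure. The model choice (a random forest) and the data files are assumptions, not necessarily the authors' setup.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

features = ["SWDNB", "HOUR", "COSZEN", "QRAIN", "LWDNB", "CANWAT", "PBLH"]
X = np.load("wrf_features.npy")   # (n_samples, 7), assumed file
t2 = np.load("wrf_t2.npy")        # (n_samples,), assumed file

model = RandomForestRegressor(n_estimators=300, random_state=0).fit(X, t2)

# Counterfactual scenario: 5% less shortwave absorbed at the surface.
X_mitigated = X.copy()
X_mitigated[:, features.index("SWDNB")] *= 0.95
delta_t2 = model.predict(X_mitigated) - model.predict(X)
print(f"mean T2 change under mitigation: {delta_t2.mean():.2f} K")
```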

How to cite: Zhao, Y., Wang, Z., Strebel, D., and Carmeliet, J.: Blending machine-learning and mesoscale numerical weather prediction models to quantify city-scale heat mitigation, EGU General Assembly 2024, Vienna, Austria, 14–19 Apr 2024, EGU24-7581, https://doi.org/10.5194/egusphere-egu24-7581, 2024.

X5.171 | EGU24-19352 | ECS
Pascal Nieters, Maximilian Berthold, and Rahel Vortmeyer-Kley

Non-linear, dynamic patterns are the rule rather than the exception in ecosystems. Predicting such patterns would allow an improved understanding of energy and nutrient flows in these systems. The scientific machine learning approach of Universal Differential Equations (UDEs) by Rackauckas et al. (2020) tries to extract the underlying dynamical relations of state variables directly from their time series, in combination with some knowledge of the dynamics of the system. This makes UDEs a promising tool to support classical modeling when precise knowledge of dynamical relationships is lacking but measurement data of the phenomenon to be modeled are available.

We applied the UDE approach to a 22-year data set from the southern Baltic Sea coast, which comprised six different phytoplankton bloom types. The data set contained the state variables chlorophyll and different dissolved and total nutrients. We learned the chlorophyll:nutrient interactions from the data, with external temperature, salinity and light attenuation dynamics as additional forcing drivers. We used a neural network as a universal function approximator that provided time series of the state variables and their derivatives.

Finally, we recovered algebraic relationships between the variables chlorophyll, dissolved and total nutrients and the external drivers temperature, salinity and light attenuation using Sparse Identification of Nonlinear Dynamics (SINDy) by Brunton et al. (2016).

The resulting algebraic relationships differed in the importance of the different state variables and drivers for the six phytoplankton bloom types, in accordance with general mechanisms reported in the literature for the southern Baltic Sea coast. Our approach may be a viable option to guide ecosystem management decisions based on those algebraic relationships.

Rackauckas et al. (2020), arXiv preprint arXiv:2001.04385.

Brunton et al. (2016), PNAS 113.15: 3932-3937.
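
For readers unfamiliar with the SINDy step, a hedged sketch using the open-source pysindy package follows; the variable layout, file names, and sampling interval are assumptions, not the authors' configuration.

```python
import numpy as np
import pysindy as ps

# Assumed layout: state columns = [chlorophyll, dissolved_N, total_N];
# drivers passed as control inputs u = [temperature, salinity, light_atten].
x = np.load("states_smoothed.npy")   # (n_times, 3), assumed file
u = np.load("drivers.npy")           # (n_times, 3), assumed file
dt = 1.0                             # daily sampling, assumed

model = ps.SINDy(
    feature_library=ps.PolynomialLibrary(degree=2),
    optimizer=ps.STLSQ(threshold=0.1),  # sparsity-promoting regression
)
model.fit(x, t=dt, u=u)
model.print()  # prints the learned sparse algebraic/dynamical relations
```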

How to cite: Nieters, P., Berthold, M., and Vortmeyer-Kley, R.: Learning phytoplankton bloom patterns - A long and rocky road from data to equations , EGU General Assembly 2024, Vienna, Austria, 14–19 Apr 2024, EGU24-19352, https://doi.org/10.5194/egusphere-egu24-19352, 2024.

X5.172 | EGU24-21545
Eric Laloy and Vanessa Montoya and the EURAD-DONUT Team

Thanks to recent progress in numerical methods, the application fields of artificial intelligence (AI) and machine learning (ML) are growing at a very fast pace. The EURAD (European Joint Programme on Radioactive Waste Management) community has recently started using ML for (a) acceleration of numerical simulations, (b) improvement of the efficiency of multiscale and multiphysics couplings, and (c) uncertainty quantification and sensitivity analysis. A number of case studies indicate that ML-based approaches accelerate geochemical and reactive transport simulations by one to four orders of magnitude. The achieved speed-up depends on the chemical system, simulation code, problem formulation and the research question to be answered. Within EURAD-DONUT (Development and Improvement Of Numerical methods and Tools for modelling coupled processes), a benchmark is ongoing to coordinate the relevant activities and to test a variety of ML techniques for geochemistry and reactive transport simulations in the framework of radioactive waste disposal. It aims at benchmarking several widely used geochemical codes, at generating high-quality geochemical data for training/validation of existing/new methodologies, and at providing basic guidelines about the benefits, drawbacks, and current limitations of using ML techniques.

A joint effort has resulted in the definition of benchmarks, one of which is presented here. The benchmark system is relevant to the sorption of U in claystone formations (e.g. Callovo-Oxfordian, Opalinus or Boom clay). Regarding the chemical complexity, a system containing Na-Cl-U-H-O is considered as the base case, with a more complex system adding calcium and carbonate (CO2) to change the aqueous speciation of U. Parameters of interest include the resulting concentrations of U sorbed on edges (surface complexes) and of U on ion exchange sites, and the amount of metaschoepite, with the resulting Kd values. The following aspects are discussed: (i) streamlining the production of high-quality, consistent training datasets using the most popular geochemical solvers (PHREEQC, ORCHESTRA and GEMS); (ii) the use of different methods (e.g. deep neural networks, polynomial chaos expansion, Gaussian processes, active learning) to learn from the generated data; (iii) setting up appropriate metrics for the critical evaluation of the accuracy of ML models; (iv) testing the accuracy of predictions for geochemical and reactive transport calculations.

How to cite: Laloy, E. and Montoya, V. and the EURAD-DONUT Team: Machine learning based metamodels for geochemical calculations in reactive transport models: Benchmark within the EURAD Joint Project, EGU General Assembly 2024, Vienna, Austria, 14–19 Apr 2024, EGU24-21545, https://doi.org/10.5194/egusphere-egu24-21545, 2024.

X5.173 | EGU24-5852
Christian Folberth, Artem Baklanov, Nikolay Khabarov, Thomas Oberleitner, Juraj Balkovic, and Rastislav Skalsky

Global gridded crop models (GGCMs) have become state-of-the-art tools in large-scale climate impact and adaptation assessments. Yet these combinations of large-scale spatial data frameworks and plant growth models are limited in the volume of scenarios they can address, owing to computational demand and complex software structures. Emulators mimicking such models have therefore become an attractive option to produce reasonable predictions of GGCMs' crop productivity estimates at much lower computational cost. However, such emulators have thus far typically been limited in, among other things, crop management flexibility and spatial resolution. Here we present a new emulator pipeline, the CROp model Machine learning Emulator Suite (CROMES), which processes climate features from netCDF input files, combines these with site-specific features (soil, topography) and crop management specifications (planting dates, cultivars, irrigation), trains machine learning emulators, and subsequently produces predictions. Presently built around the GGCM EPIC-IIASA and employing a boosting algorithm, CROMES produces predictions of EPIC-IIASA's crop yield estimates with high accuracy and very high computational efficiency. Running in a single thread, predictions require about 45 min for a climate dataset used for the first time and about 10 min for any subsequent scenario based on the same climate forcing, compared to approximately 14 h for a GGCM simulation on the same system.

Prediction accuracy is highest when modeling the case in which crops receive sufficient nutrients and are consequently most sensitive to climate. When training an emulator on crop model simulations for rainfed maize and a single global climate model (GCM), the yield prediction accuracy for out-of-bag GCMs is R2=0.93-0.97, RMSE=0.5-0.7, and rRMSE=8-10% in space and time. Globally, the best agreement between predictions and crop model simulations occurs in (sub-)tropical regions; the poorest is in cold, arid climates where both growing season length and water availability limit crop growth. The performance deteriorates slightly if fertilizer supply is considered, more so at low levels of nutrient inputs than at the higher end.

Importantly, emulators produced by CROMES are virtually scale-free, as all training samples, i.e., pixels, are pooled and hence treated as individual locations based solely on the features provided, without geo-referencing. This allows for applications on increasingly available high-resolution climate datasets or in regional studies for which more granular data may be available than at global scales. Using climate features based on crop growing seasons and cardinal growth stages also enables adaptation studies, including growing season and cultivar shifts. We expect CROMES to facilitate explorations of comprehensive climate projection ensembles, studies of dynamic climate adaptation scenarios, and cross-scale impact and adaptation assessments.

 

How to cite: Folberth, C., Baklanov, A., Khabarov, N., Oberleitner, T., Balkovic, J., and Skalsky, R.: CROMES - A fast and efficient machine learning emulator pipeline for gridded crop models, EGU General Assembly 2024, Vienna, Austria, 14–19 Apr 2024, EGU24-5852, https://doi.org/10.5194/egusphere-egu24-5852, 2024.

X5.174 | EGU24-21069 | ECS
Stochastic LSTM network for flood hazard map forecast
(withdrawn after no-show)
Sreenath Vemula, Filippo Gatti, and Pierre Jehel
X5.175 | EGU24-7681 | ECS
Roberto Bentivoglio, Elvin Isufi, Sebastian Nicolaas Jonkman, and Riccardo Taormina

Deep learning models have emerged as viable alternatives for rapid and accurate flood mapping, overcoming the computational burden of numerical methods. In particular, hydraulic-based graph neural networks present a promising avenue, offering enhanced transferability to domains not used for model training. These models exploit the analogy between finite-volume methods and graph neural networks to describe how water moves in space and time across neighbouring cells. However, existing models face limitations, having been exclusively tested on regular meshes and necessitating initial conditions from numerical solvers. This study extends hydraulic-based graph neural networks to accommodate time-varying boundary conditions and showcases their efficacy on irregular meshes. For this, we employ multi-scale methods that jointly model the flood at different scales. To remove the need for initial conditions, we leverage ghost cells that enforce the solutions at the boundaries. Our approach is validated on a dataset featuring irregular meshes, diverse topographies, and varying input hydrograph discharges. Results highlight the model's capacity to replicate flood dynamics across unseen scenarios, without any input from the numerical model, emphasizing its potential for realistic case studies.

How to cite: Bentivoglio, R., Isufi, E., Jonkman, S. N., and Taormina, R.: Multi-scale hydraulic-based graph neural networks: generalizing spatial flood mapping to irregular meshes and time-varying boundary condition, EGU General Assembly 2024, Vienna, Austria, 14–19 Apr 2024, EGU24-7681, https://doi.org/10.5194/egusphere-egu24-7681, 2024.

X5.176 | EGU24-19502 | ECS
Nishtha Srivastava, Wei Li, Megha Chakraborty, Claudia Quinteros Cartaya, Jonas Köhler, Johannes Faber, and Georg Rümpker

Seismology has witnessed significant advancements in recent years with the application of deep learning methods to address a broad range of problems. These techniques have demonstrated their remarkable ability to effectively extract statistical properties from extensive datasets, surpassing the capabilities of traditional approaches to an extent. In this study, we present SAIPy, an open-source Python package specifically developed for fast data processing by implementing deep learning. SAIPy offers solutions for multiple seismological tasks, including earthquake detection, magnitude estimation, seismic phase picking, and polarity identification. We introduce upgraded versions of previously published models such as CREIME_RT, capable of identifying earthquakes with an accuracy above 99.8% and a root mean squared error of 0.38 units in magnitude estimation. These upgraded models outperform state-of-the-art approaches like the Vision Transformer network. SAIPy provides an API that simplifies the integration of these advanced models, including CREIME_RT, DynaPicker_v2, and PolarCAP, along with benchmark datasets. The package has the potential to be used for real-time earthquake monitoring to enable timely actions to mitigate the impact of seismic events.

How to cite: Srivastava, N., Li, W., Chakraborty, M., Cartaya, C. Q., Köhler, J., Faber, J., and Rümpker, G.: SAIPy: A Python Package for single station Earthquake Monitoring using Deep Learning, EGU General Assembly 2024, Vienna, Austria, 14–19 Apr 2024, EGU24-19502, https://doi.org/10.5194/egusphere-egu24-19502, 2024.

X5.177 | EGU24-2443 | ECS
Filippo Gatti, Fanny Lehmann, Hugo Gabrielidis, Michaël Bertin, Didier Clouteau, and Stéphane Vialle

Estimating the seismic hazard in earthquake-prone regions, in order to assess the risk associated with nuclear facilities, must take into account a large number of uncertainties, in particular our limited knowledge of the geology. And yet, we know that certain geological features can create site effects that considerably amplify earthquake ground motion. In this work, we provide a quantitative assessment of how much earthquake ground motion simulation can benefit from deep learning approaches, quantifying the influence of geological heterogeneities on the spatio-temporal nature of the earthquake-induced site response. Two main frameworks are addressed: conditional generative approaches with diffusion models, and neural operators. On one hand, generative adversarial learning and diffusion models are compared in a time-series super-resolution context [1]. The main task is to improve the outcome of 3D fault-to-site earthquake numerical simulations (accurate up to 5 Hz [2, 3]) at higher frequencies (5-30 Hz), by learning the low-to-high frequency mapping from seismograms recorded worldwide [1]. The generation is conditioned by the synthetic time histories from the numerical simulation, in a one-to-many setup that enables site-specific probabilistic hazard assessment. On the other hand, the successful use of the Factorized Fourier Neural Operator (F-FNO) to entirely replace cumbersome 3D elastodynamic numerical simulations is described [4], showing how this approach can pave the way to real-time large-scale digital twins of earthquake-prone regions. The trained neural operator learns the relationship between 3D heterogeneous geologies and the surface ground motions generated by the propagation of seismic waves through these geologies. The F-FNO is trained on the HEMEW-3D database (https://github.com/lehmannfa/HEMEW3D/releases), comprising 30000 high-fidelity numerical simulations of earthquake ground motion through generic geologies, performed with the high-performance code SEM3D [4]. Next, a smaller database was built specifically for the Teil region (Ardèche, France), where a MW 4.9 moderate shallow earthquake occurred in November 2019 [4]. The F-FNO was then specialized on this database with just 250 examples. Transfer learning improved the prediction error by 22%. According to seismological goodness-of-fit (GoF) metrics, 91% of predictions have an excellent GoF for the phase (and 62% for the envelope). Ground motion intensity measurements are, on average, slightly underestimated.

[1] Gatti, F.; Clouteau, D. Towards Blending Physics-Based Numerical Simulations and Seismic Databases Using Generative Adversarial Network. Computer Methods in Applied Mechanics and Engineering 2020, 372, 113421.
https://doi.org/10.1016/j.cma.2020.113421.

[2] Touhami, S.; Gatti, F.; Lopez-Caballero, F.; Cottereau, R.; de Abreu Corrêa, L.; Aubry, L.; Clouteau, D. SEM3D: A 3D High-Fidelity Numerical Earthquake Simulator for Broadband (0–10 Hz) Seismic Response Prediction at a Regional Scale. Geosciences 2022, 12 (3), 112. https://doi.org/10.3390/geosciences12030112. https://github.com/sem3d/SEM

[3] Gatti, F.; Carvalho Paludo, L. D.; Svay, A.; Lopez-Caballero, F.; Cottereau, R.; Clouteau, D. Investigation of the Earthquake Ground Motion Coherence in Heterogeneous Non-Linear Soil Deposits. Procedia Engineering 2017, 199, 2354–2359. https://doi.org/10.1016/j.proeng.2017.09.232.

[4] Lehmann, F.; Gatti, F.; Bertin, M.; Clouteau, D. Machine Learning Opportunities to Conduct High-Fidelity Earthquake Simulations in Multi-Scale Heterogeneous Geology. Front. Earth Sci. 2022, 10, 1029160. https://doi.org/10.3389/feart.2022.1029160.

[5] Lehmann, F.; Gatti, F.; Bertin, M.; Clouteau, D. Fourier Neural Operator Surrogate Model to Predict 3D Seismic Waves Propagation. arXiv, April 20, 2023. http://arxiv.org/abs/2304.10242 (accessed 2023-04-21).
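
For orientation, a minimal sketch of the spectral convolution at the heart of (F-)FNOs is shown below; the factorized variant of [5] and the full training pipeline are not reproduced, and all sizes are illustrative.

```python
import torch
from torch import nn

class SpectralConv1d(nn.Module):
    # Core FNO building block: multiply the lowest Fourier modes of the
    # input by learned complex weights, then transform back to the grid.
    def __init__(self, channels: int, n_modes: int):
        super().__init__()
        self.n_modes = n_modes  # must not exceed n_grid // 2 + 1
        scale = 1.0 / channels
        self.weight = nn.Parameter(
            scale * torch.randn(channels, channels, n_modes,
                                dtype=torch.cfloat))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels, n_grid)
        x_ft = torch.fft.rfft(x)                      # to Fourier space
        out_ft = torch.zeros_like(x_ft)
        out_ft[:, :, :self.n_modes] = torch.einsum(
            "bim,iom->bom", x_ft[:, :, :self.n_modes], self.weight)
        return torch.fft.irfft(out_ft, n=x.size(-1))  # back to the grid
```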

How to cite: Gatti, F., Lehmann, F., Gabrielidis, H., Bertin, M., Clouteau, D., and Vialle, S.: Deep learning generative strategies to enhance 3D physics-based seismic wave propagation: from diffusive super-resolution to 3D Fourier Neural Operators., EGU General Assembly 2024, Vienna, Austria, 14–19 Apr 2024, EGU24-2443, https://doi.org/10.5194/egusphere-egu24-2443, 2024.

X5.178 | EGU24-15914
Marisol Monterrubio-Velasco, Rut Blanco, Scott Callaghan, Cedric Bhihe, Marta Pienkowska, Jorge Ejarque, and Josep de la Puente

The Machine Learning Estimator for Ground Shaking Maps (MLESmaps) harnesses the ground shaking inference capability of Machine Learning (ML) models trained on physics-informed earthquake simulations. It infers intensity measures, such as RotD50, seconds after a significant earthquake has occurred given its magnitude and location. 

Our methodology incorporates both offline and online phases in a comprehensive workflow. It begins with the generation of a synthetic training data set, progresses through the extraction of predictor characteristics, proceeds to the validation and learning stages, and yields a learned inference model. 

MLESmap results can complement empirical Ground Motion Models (GMMs), in particular in data-poor areas, to assess post-earthquake hazards rapidly and accurately, potentially improving disaster response in earthquake-prone regions. Learned models incorporate physical features such as directivity, topography, or resonance at a speed comparable to that of the empirical GMMs. 

In this work, we present an overview of the MLESmap methodology and its application to two distinct study areas: southern California and southern Iceland.

 

How to cite: Monterrubio-Velasco, M., Blanco, R., Callaghan, S., Bhihe, C., Pienkowska, M., Ejarque, J., and de la Puente, J.: Machine Learning Estimator for Ground-Shaking maps, EGU General Assembly 2024, Vienna, Austria, 14–19 Apr 2024, EGU24-15914, https://doi.org/10.5194/egusphere-egu24-15914, 2024.

X5.179 | EGU24-18444 | ECS
Fatme Ramadan, Bill Fry, and Tarje Nissen-Meyer

Physics-based simulations of earthquake ground motions prove invaluable, particularly in regions where strong ground motion recordings remain scarce. However, the computational demands associated with these simulations limit their applicability in tasks that necessitate large-scale computations of a wide range of possible earthquake scenarios, such as those required in physics-based probabilistic seismic hazard analyses. To address this challenge, we propose a neural-network approach that enables the rapid computation of earthquake ground motions in the spectral domain, alleviating a significant portion of the computational burden. To illustrate our approach, we generate a database of ground motion simulations in the San Francisco Bay Area using AxiSEM3D, a 3D seismic wave simulator. The database includes 30 double-couple sources with varying depths and horizontal locations. Our simulations explicitly incorporate the effects of topography and viscoelastic attenuation and are accurate up to frequencies of 0.5 Hz. Preliminary results demonstrate that the trained neural network almost instantaneously produces estimates of peak ground displacements as well as displacement waveforms in the spectral domain that align closely with those obtained from the wave propagation simulations. Our approach also extends to predicting ground motions for ‘unsimulated’ source locations, ultimately providing a comprehensive resolution of the source space in our chosen physical domain. This advancement paves the way for a cost-effective simulation of numerous seismic sources, and enhances the feasibility of physics-based probabilistic seismic hazard analyses. 

How to cite: Ramadan, F., Fry, B., and Nissen-Meyer, T.: Rapid Computation of Physics-Based Ground Motions in the Spectral Domain using Neural Networks, EGU General Assembly 2024, Vienna, Austria, 14–19 Apr 2024, EGU24-18444, https://doi.org/10.5194/egusphere-egu24-18444, 2024.

X5.180 | EGU24-19255
Chiara P Montagna, Deepak Garg, Martina Allegra, Flavio Cannavò, Gilda Currenti, Rebecca Bruni, and Paolo Papale

At active volcanoes, surface deformation is often a reflection of subsurface magma activity that is associated with pressure variations in magma sources. Magma dynamics cause a change of stress in the surrounding rocks. Consequently, the deformation signals propagate through the rocks and arrive at the surface where the monitoring network records them.

It is invaluable to have an automated tool that can instantly analyze the surface signals and give information about the evolution of the location and magnitude of pressure variations in case of volcanic unrest. Inverse methods employed for this often suffer from ill-posedness of the problem and non-uniqueness of solutions.

To this end, we are developing a digital twin to use on Mount Etna volcano, combining the capability of numerical simulations and AI. Our digital twin is composed of two AI models: the first AI model (AI1) will be trained on multi-parametric data to recognize unrest situations, and the second AI model (AI2) will be trained on a large number (order 10^5 - 10^6) of 3D elastostatic numerical simulations for dike intrusions with the real topography and best available heterogeneous elastic rock properties of Mount Etna Volcano using a forward modeling approach. Numerical simulations will be performed on Fenix HPC resources using the advanced open-source multi-physics finite element software Gales.

Both AI modules will be developed and trained independently and then put into use together. After activation, AI1 will analyze the streaming of monitoring data and activate AI2 in case of a volcanic crisis. AI2 will provide information about the acting volcanic source.

The software will be provided as an open-source package to allow replication on other volcanoes. The tool will serve as an unprecedented prototype for civil protection authorities to manage volcanic crises.

How to cite: Montagna, C. P., Garg, D., Allegra, M., Cannavò, F., Currenti, G., Bruni, R., and Papale, P.: A digital twin for volcanic deformation merging 3D numerical simulations and AI, EGU General Assembly 2024, Vienna, Austria, 14–19 Apr 2024, EGU24-19255, https://doi.org/10.5194/egusphere-egu24-19255, 2024.

Posters virtual: Fri, 19 Apr, 14:00–15:45 | vHall X5

Display time: Fri, 19 Apr, 08:30–18:00
vX5.20 | EGU24-6622 | ECS
Jiye Lee, Dongho Kim, Seokmin Hong, Daeun Yun, Dohyuck Kwon, Robert Hill, Yakov Pachepsky, Feng Gao, Xuesong Zhang, Sangchul Lee, and KyungHwa Cho

Simulating nitrate fate and transport in freshwater is an essential part of water quality management. Both numerical and data-driven models have been used for this purpose. The numerical model SWAT simulates daily nitrate loads using the simulated flow rate. Data-driven models are more flexible than SWAT, as they can simulate nitrate load and flow rate independently. The objective of this work was to evaluate the performance of SWAT and a deep learning model in terms of nutrient loads when the deep learning model is used to (a) simulate flow rate and nitrate concentration independently and (b) simulate both flow rate and nitrate concentration. The deep learning model was built using long short-term memory and three-dimensional convolutional networks. The input data (weather data and image data, including leaf area index and land use) were acquired at the Tuckahoe Creek watershed in Maryland, United States. The SWAT model was calibrated with data over the training period (2014-2017) and validated with data over the testing period (2019) to simulate flow rate and nitrate load. The Nash-Sutcliffe efficiency (NSE) was 0.31 and 0.40 for flow rate and -0.26 and -0.18 for nitrate load over the training and testing periods, respectively. Three data-driven modeling scenarios were generated for nitrate load. Scenario 1 used observed flow rate and simulated nitrate concentration, scenario 2 used simulated flow rate and observed nitrate concentration, and scenario 3 used simulated flow rate and simulated nitrate concentration. The deep learning model outperformed SWAT in all three scenarios, with NSE from 0.49 to 0.58 over the training period and from 0.28 to 0.80 over the testing period. Scenario 1 showed the best results for nitrate load. The performance difference between SWAT and the deep learning model was most noticeable in the fall and winter seasons. Deep learning can be an efficient alternative to numerical watershed-scale models when regular high-frequency data collection is available.
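
Since the comparison hinges on the Nash-Sutcliffe efficiency, a minimal reference implementation is given below (the standard formula, not code from the study); it makes clear why the negative SWAT scores indicate performance worse than simply predicting the observed mean.

```python
import numpy as np

def nse(obs: np.ndarray, sim: np.ndarray) -> float:
    """Nash-Sutcliffe efficiency:
    NSE = 1 - sum((obs - sim)^2) / sum((obs - mean(obs))^2).
    1 is a perfect fit, 0 matches the mean of the observations,
    and negative values are worse than predicting the mean."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

# e.g. nse(observed_nitrate_load, simulated_nitrate_load)
```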

How to cite: Lee, J., Kim, D., Hong, S., Yun, D., Kwon, D., Hill, R., Pachepsky, Y., Gao, F., Zhang, X., Lee, S., and Cho, K.: Comparison of SWAT and a deep learning model in nitrate load simulation at the Tuckahoe creek watershed in the United States, EGU General Assembly 2024, Vienna, Austria, 14–19 Apr 2024, EGU24-6622, https://doi.org/10.5194/egusphere-egu24-6622, 2024.