ITS1.13/AS5.2 | Machine learning for Earth System modeling (EDI)
Co-organized by CR2/ESSI1/NP4/SM8
Convener: Julien Brajard | Co-conveners: Alejandro Coca-Castro, Redouane Lguensat, Francine Schevenhoven, Maike Sonnewald
Orals | Mon, 24 Apr, 08:30–12:30 (CEST), 14:00–15:45 (CEST) | Room N1
Posters on site | Attendance Mon, 24 Apr, 16:15–18:00 (CEST) | Hall X5
Posters virtual | Attendance Mon, 24 Apr, 16:15–18:00 (CEST) | vHall AS
Unsupervised, supervised, semi-supervised, and reinforcement learning are now increasingly used to address Earth system challenges in the atmosphere, the ocean, the land surface, and the sea ice.
Machine learning can help extract information from the wealth of Earth system data, such as in-situ and satellite observations, and improve model predictions through novel parameterizations or speed-ups. This session invites submissions spanning modeling and observational approaches, towards providing an overview of state-of-the-art applications of these novel methods for predicting and monitoring the Earth system from short to decadal time scales. This includes (but is not restricted to):
- Using machine learning to reduce or estimate model uncertainty
- Generating significant speed-ups
- Designing new parameterization schemes
- Emulating numerical models
- Improving fundamental process understanding

Please consider submitting abstracts focused on ML applied to observations and modeling of the climate and its constituent processes to the companion "ML for Climate Science" session.

Orals: Mon, 24 Apr | Room N1

Chairpersons: Alejandro Coca-Castro, Julien Brajard
08:30–08:35
Parametrization / hybrid
08:35–08:45 | EGU23-3256 | On-site presentation
Matthew Chantry, Peter Ukkonen, Robin Hogan, and Peter Dueben

Machine learning, and particularly neural networks, has been touted as a valuable accelerator for physical processes. By training on data generated from an existing algorithm, a network may theoretically learn a more efficient representation and accelerate the computations via emulation. For many parameterized physical processes in weather and climate models this is being actively pursued. Here, we examine the value of this approach for radiative transfer within the IFS, an operational numerical weather prediction model where both accuracy and speed are vital. By designing custom, physics-informed neural networks we achieve outstanding offline accuracy for both longwave and shortwave processes. In coupled testing we find minimal changes to forecast scores at near-operational resolutions. We carry out coupled inference on GPUs to maximise the speed benefits from the emulator approach.

How to cite: Chantry, M., Ukkonen, P., Hogan, R., and Dueben, P.: Emulating radiative transfer in a numerical weather prediction model, EGU General Assembly 2023, Vienna, Austria, 23–28 Apr 2023, EGU23-3256, https://doi.org/10.5194/egusphere-egu23-3256, 2023.

08:45–08:55 | EGU23-13771 | On-site presentation
Jui-Yuan Christine Chiu, Chen-Kuang Kevin Yang, Jake J. Gristey, Graham Feingold, and William I. Gustafson

Clouds play an important role in determining the Earth’s radiation budget. Despite their complex and three-dimensional (3D) structures, their interactions with radiation in models are often simplified to one dimension (1D), given the time required to compute radiative transfer. Such a simplification ignores cloud inhomogeneity and horizontal photon transport in radiative processes, which may be an acceptable approximation for low-resolution models but can lead to significant errors and impact cloud evolution predictions in high-resolution simulations. Since model development and operations are heading toward higher resolutions that are more susceptible to radiation errors, a fast and accurate 3D radiative transfer scheme becomes important and necessary. To address this need, we develop machine-learning-based 3D radiative transfer emulators that provide surface radiation, shortwave fluxes at all layers, and heating rate profiles. The emulators are trained for highly heterogeneous shallow cumulus under different solar positions. We will discuss the accuracy and efficiency of the emulators, as well as their potential applications.

How to cite: Chiu, J.-Y. C., Yang, C.-K. K., Gristey, J. J., Feingold, G., and Gustafson, W. I.: Machine Learning Emulation of 3D Shortwave Radiative Transfer for Shallow Cumulus Cloud Fields, EGU General Assembly 2023, Vienna, Austria, 23–28 Apr 2023, EGU23-13771, https://doi.org/10.5194/egusphere-egu23-13771, 2023.

08:55–09:00
09:00–09:10 | EGU23-10959 | On-site presentation
Donifan Barahona, Katherine Breen, and Heike Kalesse-Los

Small-scale fluctuations in vertical wind velocity, unresolved by climate and weather forecast models, play a particularly important role in determining vapor and tracer fluxes, turbulence and cloud formation. Fluctuations in vertical wind velocity are challenging to represent since they depend on orography, large-scale circulation features, convection and wind shear. Parameterizations developed using data retrieved at specific locations typically lack generalization and may introduce errors when applied over a wide range of different conditions. Retrievals of vertical wind velocity are also difficult and subject to large uncertainty. This work develops a new data-driven, neural network representation of subgrid-scale variability in vertical wind velocity. Using a novel deep learning technique, the new parameterization merges data from high-resolution global cloud-resolving model simulations with high-frequency radar and lidar retrievals. Our method aims to reproduce observed statistics rather than fitting individual measurements; hence it is resilient to experimental uncertainty and generalizes robustly. The neural network parameterization can be driven by weather forecast and reanalysis products to make real-time estimations. It is shown that the new parameterization generalizes well outside of the training data and reproduces the statistics of vertical wind velocity much better than purely data-driven models.

How to cite: Barahona, D., Breen, K., and Kalesse-Los, H.: Deep learning parameterization of small-scale vertical velocity variability for atmospheric models, EGU General Assembly 2023, Vienna, Austria, 23–28 Apr 2023, EGU23-10959, https://doi.org/10.5194/egusphere-egu23-10959, 2023.

09:10–09:20 | EGU23-5523 | ECS | On-site presentation
Yvonne Ruckstuhl, Raphael Kriegmair, Stephan Rasp, and George Craig

Machine learning represents a potential method to cope with the gray-zone problem of representing motions in dynamical systems on scales comparable to the model resolution. Here we explore the possibility of using a neural network to directly learn the error caused by unresolved scales. We use a modified shallow water model which includes highly nonlinear processes mimicking atmospheric convection. To create the training dataset, we run the model in a high- and a low-resolution setup and compare the difference after one low-resolution time step, starting from the same initial conditions, thereby obtaining an exact target. The neural network is able to learn a large portion of the difference when evaluated on single time step predictions on a validation dataset. When coupled to the low-resolution model, we find large forecast improvements for up to one day on average. After this, the accumulated error due to the mass conservation violation of the neural network starts to dominate and deteriorates the forecast. This deterioration can effectively be delayed by adding a penalty term to the loss function used to train the network so that it conserves mass in a weak sense. This study reinforces the need to include physical constraints in neural network parameterizations.
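A minimal sketch of the kind of weak mass-conservation penalty described above, assuming a PyTorch setup in which channel 0 of the predicted tendency is the conserved mass/height field; the variable names and weighting are illustrative, not the authors' code.

```python
import torch

def weakly_constrained_loss(pred_tendency, target_tendency, alpha=0.1):
    """MSE against the exact coarse-graining error target, plus a penalty on the
    net mass introduced by the network (assumed to be channel 0 of the tendency)."""
    mse = torch.mean((pred_tendency - target_tendency) ** 2)
    net_mass_change = pred_tendency[:, 0].sum(dim=(-2, -1))  # per-sample domain sum
    return mse + alpha * torch.mean(net_mass_change ** 2)    # conserve mass "weakly"
```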

How to cite: Ruckstuhl, Y., Kriegmair, R., Rasp, S., and Craig, G.: Using weak constrained neural networks to improve simulations in the gray zone, EGU General Assembly 2023, Vienna, Austria, 23–28 Apr 2023, EGU23-5523, https://doi.org/10.5194/egusphere-egu23-5523, 2023.

09:20–09:25
09:25–09:35 | EGU23-3321 | ECS | On-site presentation
Zikang He, Julien Brajard, Yiguo Wang, Xidong Wang, and Zheqi Shen

Dynamical models used in climate prediction often have systematic errors that can bias the predictions. In this study, we utilized machine learning to address this issue. Machine learning was applied to learn the error corrected by data assimilation and thus build a data-driven model to emulate the dynamical model error. A hybrid model was constructed by combining the dynamical and data-driven models. We tested the hybrid model using synthetic observations generated by a simplified high-resolution coupled ocean-atmosphere model (MAOOAM, De Cruz et al., 2016) and compared its performance to that of a low-resolution version of the same model used as a standalone dynamical model.

To evaluate the forecast skill of the hybrid model, we produced ensemble predictions based on initial conditions determined through data assimilation. The results show that the hybrid model significantly improves the forecast skill for both atmospheric and oceanic variables compared to the dynamical model alone. To explore what affects short-term and long-term forecast skill, we built two other hybrid models that correct errors in either only the atmospheric or only the oceanic variables. For short-term atmospheric forecasts, the results show that correcting only oceanic errors has no effect on forecasts of atmospheric variables, whereas correcting only atmospheric errors yields forecast skill similar to correcting both atmospheric and oceanic errors. For long-term forecasts of oceanic variables, correcting the oceanic error alone improves the forecast skill, but correcting both atmospheric and oceanic errors gives the best forecast skill. The results indicate that for long-term forecasts of oceanic variables, bias correction of both the oceanic and atmospheric components can have a significant effect.

How to cite: He, Z., Brajard, J., Wang, Y., Wang, X., and Shen, Z.: Using machine learning to improve dynamical predictions in a coupled model, EGU General Assembly 2023, Vienna, Austria, 23–28 Apr 2023, EGU23-3321, https://doi.org/10.5194/egusphere-egu23-3321, 2023.

09:35–09:45 | EGU23-10351 | ECS | On-site presentation
William Gregory, Mitchell Bushuk, Alistair Adcroft, and Yongfei Zhang

Data assimilation is often viewed as a framework for correcting short-term error growth in dynamical climate model forecasts. When viewed on the time scales of climate, however, these short-term corrections, or analysis increments, closely mirror the systematic bias patterns of the dynamical model. In this work, we show that Convolutional Neural Networks (CNNs) can be used to learn a mapping from model state variables to analysis increments, thus promoting the feasibility of a data-driven model parameterization which predicts state-dependent model errors. We showcase this problem using an ice-ocean data assimilation system within the fully coupled Seamless system for Prediction and EArth system Research (SPEAR) model at the Geophysical Fluid Dynamics Laboratory (GFDL), which assimilates satellite observations of sea ice concentration. The CNN then takes inputs of data assimilation forecast states and tendencies, and makes predictions of the corresponding sea ice concentration increments. Specifically, the inputs are sea ice concentration, sea-surface temperature, ice velocities, ice thickness, net shortwave radiation, ice-surface skin temperature, and sea-surface salinity. We show that the CNN is able to make skilful predictions of the increments, particularly between December and February in both the Arctic and Antarctic, with average daily spatial pattern correlations of 0.72 and 0.79, respectively. An initial investigation of implementing the CNN in the fully coupled SPEAR model shows that the CNN can reduce biases in retrospective seasonal sea ice forecasts by emulating a data assimilation system, further suggesting that systematic sea ice biases could be reduced in a free-running climate simulation.
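As a rough illustration of the mapping described above (model states and tendencies in, sea ice concentration increments out), a small convolutional network could look like the following; the layer sizes and channel count are assumptions, not the published architecture.

```python
import torch.nn as nn

class IncrementCNN(nn.Module):
    """Predicts a sea ice concentration increment field from stacked input channels
    (e.g. SIC, SST, ice velocities, thickness, shortwave, skin temperature, SSS)."""
    def __init__(self, n_channels=8):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(n_channels, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, kernel_size=3, padding=1),  # one output map: the increment
        )

    def forward(self, x):  # x: (batch, n_channels, ny, nx)
        return self.net(x)
```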

How to cite: Gregory, W., Bushuk, M., Adcroft, A., and Zhang, Y.: Deep learning of systematic sea ice model errors from data assimilation increments, EGU General Assembly 2023, Vienna, Austria, 23–28 Apr 2023, EGU23-10351, https://doi.org/10.5194/egusphere-egu23-10351, 2023.

09:45–09:50
09:50–10:00 | EGU23-9285 | ECS | On-site presentation
Alistair White and Niklas Boers

Neural Differential Equations (NDEs) provide a powerful framework for hybrid modeling. Unfortunately, the flexibility of the neural network component of the model comes at the expense of potentially violating known physical invariants, such as conservation laws, during inference. This shortcoming is especially critical for applications requiring long simulations, such as climate modeling, where significant deviations from the physical invariants can develop over time. It is hoped that enforcing physical invariants will help address two of the main barriers to adoption for hybrid models in climate modeling: (1) long-term numerical stability, and (2) generalization to out-of-sample conditions unseen during training, such as climate change scenarios. We introduce Stabilized Neural Differential Equations, which augment an NDE model with compensating terms that ensure physical invariants remain approximately satisfied during numerical simulations. We apply Stabilized NDEs to the double pendulum and Hénon–Heiles systems, both of which are conservative, chaotic dynamical systems possessing a time-independent Hamiltonian. We evaluate Stabilized NDEs using both short-term and long-term prediction tasks, analogous to weather and climate prediction, respectively. Stabilized NDEs perform at least as well as unstabilized models at the “weather prediction” task, that is, predicting the exact near-term state of the system given initial conditions. On the other hand, Stabilized NDEs significantly outperform unstabilized models at the “climate prediction” task, that is, predicting long-term statistical properties of the system. In particular, Stabilized NDEs conserve energy during long simulations and consequently reproduce the long-term dynamics of the target system with far higher accuracy than non-energy conserving models. Stabilized NDEs also remain numerically stable for significantly longer than unstabilized models. As well as providing a new and lightweight method for combining physical invariants with NDEs, our results highlight the relevance of enforcing conservation laws for the long-term numerical stability and physical accuracy of hybrid models.
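One way to picture the compensating term is a correction that pushes the state back toward the level set of a conserved quantity, as in the hedged sketch below; the specific form (projection along the gradient of the invariant, the relaxation rate gamma) is an assumption made for illustration, not the authors' formulation.

```python
import torch

def stabilized_rhs(f_neural, invariant, gamma=1.0):
    """Wrap a neural right-hand side f_neural(t, u) with a term that damps drift in
    a scalar invariant H(u) (e.g. total energy) toward its initial value H0."""
    def rhs(t, u, H0):
        u = u.detach().requires_grad_(True)
        du = f_neural(t, u)
        H = invariant(u)
        (gradH,) = torch.autograd.grad(H, u)
        # Relax the violation (H - H0) to zero along the gradient direction of H.
        correction = -gamma * (H - H0) * gradH / (gradH.pow(2).sum() + 1e-12)
        return du + correction
    return rhs
```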

How to cite: White, A. and Boers, N.: Stabilized Neural Differential Equations for Hybrid Modeling with Conservation Laws, EGU General Assembly 2023, Vienna, Austria, 23–28 Apr 2023, EGU23-9285, https://doi.org/10.5194/egusphere-egu23-9285, 2023.

10:00–10:10 | EGU23-12403 | ECS | On-site presentation
Maximilian Gelbrecht and Niklas Boers

When predicting complex systems such as parts of the Earth system, one typically relies on differential equations, which can be incomplete, miss unknown influences, or include errors through their discretization. To remedy those effects, we present PseudoSpectralNet (PSN): a hybrid model that incorporates both a knowledge-based part of an atmosphere model and a data-driven part, an artificial neural network (ANN). PSN is a neural differential equation (NDE): it defines the right-hand side of a differential equation, combining a physical model with ANNs, and its parameters are trained inside this NDE. Similar to the approach of many atmosphere models, part of the model is computed in the spherical harmonics domain and other parts in the grid domain. The model consists of ANN layers in each domain, information about derivatives, and parameters such as the orography. We demonstrate the capabilities of PSN on the well-studied Marshall–Molteni quasigeostrophic model.

How to cite: Gelbrecht, M. and Boers, N.: PseudoSpectralNet: A hybrid neural differential equation for atmosphere models, EGU General Assembly 2023, Vienna, Austria, 23–28 Apr 2023, EGU23-12403, https://doi.org/10.5194/egusphere-egu23-12403, 2023.

10:10–10:15
Coffee break
Chairpersons: Francine Schevenhoven, Sophie Giffard-Roisin
Applications
10:45–10:50
10:50–11:00 | EGU23-4337 | Highlight | On-site presentation
Pratik Rao, Richard Dwight, Deepali Singh, Jin Maruhashi, Irene Dedoussi, Volker Grewe, and Christine Frömming

While efforts have been made to curb CO2 emissions from aviation, the more uncertain non-CO2 effects, which contribute about two-thirds of the warming in terms of radiative forcing (RF), still require attention. The most important non-CO2 effects include persistent line-shaped contrails, contrail-induced cirrus clouds, nitrogen oxide (NOx) emissions that alter the ozone (O3) and methane (CH4) concentrations, both of which are greenhouse gases, and the emission of water vapour (H2O). The climate impact of these non-CO2 effects depends on emission location and the prevailing weather situation; thus, it can potentially be reduced by advantageous re-routing of flights using Climate Change Functions (CCFs), which are a measure of the climate effect of a locally confined aviation emission. CCFs are calculated using a modelling chain starting from the instantaneous RF (iRF) measured at the tropopause that results from aviation emissions. However, the iRF is a product of computationally intensive chemistry-climate model (EMAC) simulations and is currently restricted to a limited number of days and only to the North Atlantic Flight Corridor. This makes it impossible to run EMAC on an operational basis for global flight planning. A step in this direction led to a surrogate model called algorithmic Climate Change Functions (aCCFs), derived by regressing CCFs (training data) against 2 or 3 local atmospheric variables at the time of emission (features) with simple regression techniques; the aCCFs are applicable only in parts of the Northern hemisphere. It was found that the O3 aCCFs, which provide a reasonable first estimate of the short-term impact of aviation NOx on O3 warming using temperature and geopotential as features, can be vastly improved [1]. There is aleatoric uncertainty in the full-order model (EMAC), stemming from unknown sources (missing features) and randomness in the known features, which can introduce heteroscedasticity in the data. Deterministic surrogates (e.g. aCCFs) only predict point estimates of the conditional average, thereby providing an incomplete picture of the stochastic response. Thus, the goal of this research is to build a new surrogate model for iRF, which is achieved by:

1. Expanding the geographical coverage of iRF (training data) by running EMAC simulations in more regions (North & South America, Eurasia, Africa and Australasia) at multiple cruise flight altitudes,

2. Following an objective approach to selecting atmospheric variables (feature selection) and considering the importance of local as well as non-local effects,

3. Regressing the iRF against selected atmospheric variables using supervised machine learning techniques such as homoscedastic and heteroscedastic Gaussian process regression.

We present a new surrogate model that predicts iRF of aviation NOx-O3 effects on a regular basis with confidence levels, which not only improves our scientific understanding of NOx-O3 effects, but also increases the potential of global climate-optimised flight planning.
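A hedged sketch of step 3 above: fitting a Gaussian process regressor that maps local atmospheric features to iRF with predictive uncertainty. The feature set and kernel are placeholders and cover the homoscedastic case (via a white-noise kernel); a heteroscedastic GP would require an input-dependent noise model beyond this snippet.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel, ConstantKernel

# X: local atmospheric features at emission time (placeholder columns, e.g.
# temperature, geopotential, humidity); y: iRF from the EMAC training simulations.
X = np.random.rand(500, 3)
y = np.random.rand(500)

kernel = ConstantKernel() * RBF(length_scale=np.ones(X.shape[1])) + WhiteKernel()
gpr = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X, y)

irf_mean, irf_std = gpr.predict(X[:5], return_std=True)  # point estimates + confidence
```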

References

[1] Rao, P., et al.: Case Study for Testing the Validity of NOx-Ozone Algorithmic Climate Change Functions for Optimising Flight Trajectories. Aerospace 2022, 9, 231. https://doi.org/10.3390/aerospace9050231

How to cite: Rao, P., Dwight, R., Singh, D., Maruhashi, J., Dedoussi, I., Grewe, V., and Frömming, C.: Towards a new surrogate model for predicting short-term NOx-O3 effects from aviation using Gaussian processes, EGU General Assembly 2023, Vienna, Austria, 23–28 Apr 2023, EGU23-4337, https://doi.org/10.5194/egusphere-egu23-4337, 2023.

11:00–11:10 | EGU23-12355 | ECS | On-site presentation
Ajit Pillai, Ian Ashton, Jiaxin Chen, and Edward Steele

Machine learning is increasingly being applied to ocean wave modelling. Surrogate modelling has the potential to reduce or bypass the large computational requirements, creating a low computational-cost model that offers a high level of accuracy. One approach integrates in-situ measurements and historical model runs to achieve the spatial coverage of the model and the accuracy of the in-situ measurements. Once operational, such a system requires very little computational power, meaning that it could be deployed to a mobile phone, operational vessel, or autonomous vessel to give continuous data. As such, it makes a significant change to the availability of met-ocean data with potential to revolutionise data provision and use in marine and coastal settings.

This presentation explores the impact that an underlying physics-based model can have in such a machine-learning-driven framework, comparing training the system on a bespoke regional SWAN wave model developed for wave energy developments in the South West of the UK against training on the larger North-West European Shelf long-term hindcast wave model run by the UK Met Office. The presentation discusses the differences in the underlying NWP models and the impacts that these have on the surrogate wave models’ accuracy in both nowcasting and forecasting wave conditions at areas of interest for renewable energy developments. The results identify the importance of having a high-quality, validated NWP model for training such a system, and the way in which the machine learning methods can propagate and exaggerate the underlying model uncertainties.

How to cite: Pillai, A., Ashton, I., Chen, J., and Steele, E.: Comparison of NWP Models Used in Training Surrogate Wave Models, EGU General Assembly 2023, Vienna, Austria, 23–28 Apr 2023, EGU23-12355, https://doi.org/10.5194/egusphere-egu23-12355, 2023.

11:10–11:15
11:15–11:25 | EGU23-10135 | ECS | On-site presentation
Brian Groenke, Moritz Langer, Guillermo Gallego, and Julia Boike

Permafrost, i.e. ground material that remains perennially frozen, plays a key role in Arctic ecosystems. Monitoring the response of permafrost to rapid climate change remains difficult due to the sparse availability of long-term, high quality measurements of the subsurface. Numerical models are therefore an indispensable tool for understanding the evolution of Arctic permafrost. However, large scale simulation of the hydrothermal processes affecting permafrost is challenging due to the highly nonlinear effects of phase change in porous media. The resulting computational cost of such simulations is especially prohibitive for sensitivity analysis and parameter estimation tasks where a large number of simulations may be necessary for robust inference of quantities such as temperature, water fluxes, and soil properties. In this work, we explore the applicability of recently developed physics-informed machine learning (PIML) methods for accelerating numerical models of permafrost hydrothermal dynamics. We present a preliminary assessment of two possible applications of PIML in this context: (1) linearization of the nonlinear PDE system according to Koopman operator theory in order to reduce the computational burden of large scale simulations, and (2) efficient parameterization of the surface energy balance and snow dynamics on the subsurface hydrothermal regime. By combining the predictive power of machine learning with the underlying conservation laws, PIML can potentially enable researchers and practitioners interested in permafrost to explore complex process interactions at larger spatiotemporal scales.

How to cite: Groenke, B., Langer, M., Gallego, G., and Boike, J.: Exploring physics-informed machine learning for accelerated simulation of permafrost processes, EGU General Assembly 2023, Vienna, Austria, 23–28 Apr 2023, EGU23-10135, https://doi.org/10.5194/egusphere-egu23-10135, 2023.

11:25–11:35 | EGU23-11906 | ECS | On-site presentation
Basil Kraft, Gregory Duveiller, Markus Reichstein, and Martin Jung

Ecosystems worldwide are affected by extreme climate conditions such as droughts, but we still lack understanding of the dynamics involved. Which factors render an ecosystem more resilient, and on which temporal scales do weather patterns affect vegetation state and physiology? Traditional approaches to tackle such questions involve assumption-based land surface modeling or inversions. Machine learning (ML) methods can provide a complementary perspective on how ecosystems respond to climate in a more data-driven and assumption-free manner. However, ML depends heavily on data, and commonly used observations of vegetation contain at best one observation per day, while most products are provided at 16-daily to monthly temporal resolution. This masks important processes at sub-monthly time scales. In addition, ML models are inherently difficult to interpret, which still limits their applicability for process understanding.

In the present study, we combine modern deep learning models in the time domain with observations from the geostationary Meteosat Second Generation (MSG) satellite, centered over Africa. We model fractional vegetation cover (representing vegetation state) and land surface temperature (as a proxy for water stress) from MSG as a function of meteorology and static geofactors. MSG collects observations at sub-daily frequency, rendering it into an excellent tool to study short- to mid-term land surface processes. Furthermore, we use methods from explainable ML for post-hoc model interpretation to identify meteorological drivers of vegetation dynamics and their interaction with key geofactors.

From the analysis, we expect to gather novel insights into ecosystem response to droughts with high temporal fidelity. Drought response of vegetation can be highly diverse and complex especially in arid to semi-arid regions prevalent in Africa. Also, we assess the potential of explainable machine learning to discover new linkages and knowledge and discuss potential pitfalls of the approach. Explainable machine learning, combined with potent deep learning approaches and modern Earth observation products offers the opportunity to complement assumption-based modeling to predict and understand ecosystem response to extreme climate.

How to cite: Kraft, B., Duveiller, G., Reichstein, M., and Jung, M.: Untapping the potential of geostationary EO data to understand drought impacts with XAI, EGU General Assembly 2023, Vienna, Austria, 23–28 Apr 2023, EGU23-11906, https://doi.org/10.5194/egusphere-egu23-11906, 2023.

11:35–11:40
11:40–11:50 | EGU23-15183 | ECS | On-site presentation
Margrethe Kvale Loe and John Bjørnar Bremnes

The Norwegian Meteorological Institute has for many years applied a CFD model to downscale operational NWP forecasts to 100-200 m spatial resolution for wind and turbulence forecasting at about 20 Norwegian airports. Due to high computational costs, however, the CFD model can only be run twice per day, each time producing a 12-hour forecast. An approximate approach using deep learning, requiring far fewer compute resources, has therefore been developed. In this, the relation between relevant NWP forecast variables on grids of 2.5 km spatial resolution and wind and turbulence from the CFD model has been approximated using neural networks with basic convolutional and dense layers. The deep learning models have been trained on approximately two years of data separately for each airport. The results show that the models are to a large extent able to capture the characteristics of their corresponding CFD simulations, and the method is intended in due course to fully replace the current operational solution.

How to cite: Loe, M. K. and Bremnes, J. B.: Deep learning approximations of a CFD model for operational wind and turbulence forecasting, EGU General Assembly 2023, Vienna, Austria, 23–28 Apr 2023, EGU23-15183, https://doi.org/10.5194/egusphere-egu23-15183, 2023.

11:50–12:00 | EGU23-6287 | On-site presentation
Elnaz Azmi, Jörg Meyer, Marcus Strobl, Michael Weimer, and Achim Streit

Accurate forecasts of the atmosphere demand large-scale simulations with high spatio-temporal resolution. Atmospheric chemistry modeling, for example, usually requires solving a system of hundreds of coupled ordinary differential equations. Due to the computational complexity, large high-performance computing resources are required, which is a challenge as the spatio-temporal resolution increases. Machine learning methods, and especially deep learning, can offer an approximation of the simulations with some factor of speed-up while using fewer compute resources. The goal of this study is to investigate the feasibility and opportunities, but also the challenges and pitfalls, of replacing the compute-intensive chemistry of a state-of-the-art atmospheric chemistry model with a trained neural network model to forecast the concentration of trace gases at each grid cell and to reduce the computational complexity of the simulation. In this work, we introduce a neural network model (ICONET) to forecast trace gas concentrations without executing the traditional compute-intensive atmospheric simulations. ICONET is equipped with a multifeature Long Short-Term Memory (LSTM) model to forecast atmospheric chemicals iteratively in time. We generated the training and test datasets, our ground truth for ICONET, by executing an atmospheric chemistry simulation in ICON-ART. Applying the trained ICONET model to the test dataset yields forecast values that fit our ground truth dataset well. We discuss appropriate metrics to evaluate the quality of such models and present the quality of the ICONET forecasts in terms of the RMSE and KGE metrics. The varied nature of the trace gases limits the model's learning and forecast skill, depending on the variable. In addition to the quality of the ICONET forecasts, we describe the computational efficiency of ICONET as its run-time speed-up in comparison to the run time of the ICON-ART simulation. The ICONET forecast showed a speed-up factor of 3.1 over the run time of the atmospheric chemistry simulation of ICON-ART, which is a significant achievement, especially when considering the importance of ensemble simulations.
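A minimal sketch of the iterative LSTM forecasting idea, assuming one grid cell with a fixed set of trace-gas channels; the dimensions and rollout scheme are illustrative and not the ICONET configuration.

```python
import torch
import torch.nn as nn

class TraceGasLSTM(nn.Module):
    """Maps a window of past trace-gas concentrations to the next time step."""
    def __init__(self, n_species=10, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(input_size=n_species, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_species)

    def forward(self, x):              # x: (batch, time, n_species)
        out, _ = self.lstm(x)
        return self.head(out[:, -1])   # concentrations at the next step

def roll_forward(model, window, n_steps):
    """Forecast iteratively in time by feeding each prediction back as input."""
    preds = []
    for _ in range(n_steps):
        nxt = model(window)
        preds.append(nxt)
        window = torch.cat([window[:, 1:], nxt.unsqueeze(1)], dim=1)
    return torch.stack(preds, dim=1)   # (batch, n_steps, n_species)
```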

How to cite: Azmi, E., Meyer, J., Strobl, M., Weimer, M., and Streit, A.: Approximation and Optimization of Atmospheric Simulations in High Spatio-Temporal Resolution with Neural Networks, EGU General Assembly 2023, Vienna, Austria, 23–28 Apr 2023, EGU23-6287, https://doi.org/10.5194/egusphere-egu23-6287, 2023.

12:00–12:05
12:05–12:15 | EGU23-8288 | On-site presentation
Quentin Febvre, Ronan Fablet, Julien Le Sommer, Clément Ubelmann, and Simon Benaïchouche

In oceanography, altimetry products are used to measure the height of the ocean surface, and ocean modeling is used to understand and predict the behavior of the ocean. There are two main types of gridded altimetry products: operational sea level products, such as DUACS, which are used for forecasting and reconstruction, and ocean model reanalyses, such as Glorys 12, which are used to forecast seasonal trends and assess physical characteristics. However, advances in ocean modeling do not always directly benefit operational forecast or reconstruction products.

In this study, we investigate the potential for deep learning methods, which have been successfully applied in simulated setups, to leverage ocean modeling efforts for improving operational altimetry products. Specifically, we ask under what conditions the knowledge learned from ocean simulations can be applied to real-world operational altimetry mapping. We consider the impact of simulation grid resolution, observation data reanalysis, and physical processes modeled on the performance of a deep learning model.

Our results show that the deep learning model outperforms current operational methods on a regional domain around the Gulfstream, with a 50km improvement in resolved scale. This improvement has the potential to enhance the accuracy of operational altimetry products, which are used for a range of important applications, such as climate monitoring and understanding mesoscale ocean dynamics.

How to cite: Febvre, Q., Fablet, R., Le Sommer, J., Ubelmann, C., and Benaïchouche, S.: Learning operational altimetry mapping from ocean models, EGU General Assembly 2023, Vienna, Austria, 23–28 Apr 2023, EGU23-8288, https://doi.org/10.5194/egusphere-egu23-8288, 2023.

12:15–12:25 | EGU23-11687 | ECS | On-site presentation
Carola Trahms, Yannick Wölker, and Arne Biastoch

Determining the number of existing water masses and defining their boundaries is subject to ongoing discussion in physical oceanography. Traditionally, water masses are defined manually by experts setting constraints based on experience and previous knowledge about the hydrographic properties describing them. In recent years, clustering, an unsupervised machine learning approach, has been introduced as a tool to determine clusters, i.e., volumes, with similar hydrographic properties without explicitly defining their hydrographic constraints. However, the exact number of clusters to be looked for is set manually by an expert up until now. 

We propose a method that determines a suitable number of hydrographic clusters in a data-driven way. In a first step, the method averages the data in different-sized slices along the time or depth axis, since the structure of the hydrographic space changes strongly in either time or depth. The method then applies clustering algorithms to the averaged data and calculates off-the-shelf evaluation scores (Davies-Bouldin, Calinski-Harabasz, Silhouette Coefficient) for several predefined numbers of clusters. In the last step, the optimal number of clusters is determined by analyzing the cluster evaluation scores across different numbers of clusters for optima or relevant changes in trend.

For validation we applied this method to the output for the subpolar North Atlantic between 1993 and 1997 of the high-resolution Atlantic Ocean model VIKING20X, in direct exchange with domain experts to discuss the resulting clusters. Due to the change from strong to weak deep convection in these years, the hydrographic properties vary strongly in the time and depth dimension, providing a specific challenge to our methodology. 

Our findings suggest that it is possible to identify an optimal number of clusters using off-the-shelf cluster evaluation scores that capture the underlying structure of the hydrographic space. The optimal number of clusters identified by our data-driven method agrees with the optimal number of clusters found through expert interviews. These findings help to support and objectify water mass definitions that otherwise rest on individual expert decisions, and demonstrate the benefit of introducing data science methods to analyses in physical oceanography.
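A hedged sketch of the score-based selection step with scikit-learn; the choice of k-means and the data layout (averaged hydrographic samples as rows) are assumptions made for illustration.

```python
from sklearn.cluster import KMeans
from sklearn.metrics import (calinski_harabasz_score, davies_bouldin_score,
                             silhouette_score)

def score_cluster_numbers(X, k_values=range(2, 11)):
    """Cluster averaged hydrographic data X (n_samples, n_features) for each k and
    return the three off-the-shelf evaluation scores used to pick the optimum."""
    scores = {}
    for k in k_values:
        labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X)
        scores[k] = {
            "davies_bouldin": davies_bouldin_score(X, labels),        # lower is better
            "calinski_harabasz": calinski_harabasz_score(X, labels),  # higher is better
            "silhouette": silhouette_score(X, labels),                # higher is better
        }
    return scores
```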

How to cite: Trahms, C., Wölker, Y., and Biastoch, A.: Objectively Determining the Number of Similar Hydrographic Clusters with Unsupervised Machine Learning, EGU General Assembly 2023, Vienna, Austria, 23–28 Apr 2023, EGU23-11687, https://doi.org/10.5194/egusphere-egu23-11687, 2023.

12:25–12:30
Lunch break
Chairpersons: Alejandro Coca-Castro, Francine Schevenhoven
Emulation and representation
14:00–14:05
14:05–14:15 | EGU23-3128 | ECS | Highlight | On-site presentation
Philipp Hess, Stefan Lange, and Niklas Boers

Numerical Earth system models (ESMs) are our primary tool for projecting future climate scenarios. Their simulation output is used by impact models that assess the effect of anthropogenic global warming, e.g., on flood events, vegetation changes or crop yields. Precipitation, an atmospheric variable with arguably one of the largest socio-economic impacts, involves various processes on a wide range of spatial-temporal scales. However, these cannot be completely resolved in ESMs due to the limited discretization of the numerical model. 
This can lead to biases in the ESM output that need to be corrected in a post-processing step prior to feeding ESM output into impact models, which are calibrated with observations [1]. While established post-processing methods successfully improve the modelled temporal statistics for each grid cell individually, unrealistic spatial features that require a larger spatial context are not addressed.
Here, we apply a physically constrained, cycle-consistent generative adversarial network (CycleGAN) [2] to the precipitation output of Coupled Model Intercomparison Project phase 6 (CMIP6) ESMs to correct both temporal distributions and spatial patterns. The CycleGAN can be naturally trained on daily ESM and reanalysis fields that are unpaired due to the deviating trajectories of the ESM and the observation-based ground truth.
We evaluate our method against a state-of-the-art bias adjustment framework (ISIMIP3BASD) [3] and find that it outperforms this framework in correcting spatial patterns while achieving comparable results for temporal distributions. We further discuss the representation of extreme events and suitable metrics for quantifying the realism of unpaired precipitation fields.
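For readers unfamiliar with CycleGANs, the core unpaired-training ingredient is the cycle-consistency loss sketched below; the generator names are placeholders, and the adversarial terms and the physical constraint used in this study are omitted.

```python
import torch

def cycle_consistency_loss(G_esm2obs, G_obs2esm, esm_precip, obs_precip, lam=10.0):
    """L1 cycle terms for unpaired ESM <-> observation-like precipitation fields."""
    rec_esm = G_obs2esm(G_esm2obs(esm_precip))   # ESM -> obs-like -> back to ESM
    rec_obs = G_esm2obs(G_obs2esm(obs_precip))   # obs -> ESM-like -> back to obs
    return lam * (torch.mean(torch.abs(rec_esm - esm_precip)) +
                  torch.mean(torch.abs(rec_obs - obs_precip)))
```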

 [1] Cannon, A.J., et al. "Bias correction of GCM precipitation by quantile mapping: How well do methods preserve changes in quantiles and extremes?." Journal of Climate 28.17 (2015): 6938-6959.

[2] Zhu, J.-Y., et al. "Unpaired image-to-image translation using cycle-consistent adversarial networks." Proceedings of the IEEE international conference on computer vision. 2017.

[3] Lange, S. "Trend-preserving bias adjustment and statistical downscaling with ISIMIP3BASD (v1.0)." Geoscientific Model Development 12.7 (2019): 3055-3070.

How to cite: Hess, P., Lange, S., and Boers, N.: Improving global CMIP6 Earth system model precipitation output with generative adversarial networks for unpaired image-to-image translation, EGU General Assembly 2023, Vienna, Austria, 23–28 Apr 2023, EGU23-3128, https://doi.org/10.5194/egusphere-egu23-3128, 2023.

14:15–14:25 | EGU23-16281 | On-site presentation
Seasonal Forecasting using Machine Learning Algorithms for the continental Europe
(withdrawn)
Alper Ünal, Mehmet Ilıcak, Busra Asan, and Gozde Unal
14:25–14:30
14:30–14:40 | EGU23-3117 | On-site presentation
Christian Lessig, Ilaria Luise, and Martin Schultz

The AtmoRep project asks whether one can train a single neural network that represents and describes all atmospheric dynamics. AtmoRep’s ambition is hence to demonstrate that the concept of large-scale representation learning, whose feasibility and potential were in principle established by large language models such as GPT-3, is also applicable to scientific data and in particular to atmospheric dynamics. The project is enabled by the large amounts of atmospheric observations that have been made in the past as well as advances in neural network architectures and self-supervised learning that allow for effective training on petabytes of data. Eventually, we aim to train on all of the ERA5 reanalysis and, furthermore, fine-tune on observational data such as satellite measurements to move beyond the limits of reanalyses.

We will present the theoretical formulation of AtmoRep as an approximate representation of the atmosphere as a stochastic dynamical system. We will also detail our transformer-based network architecture and the training protocol for self-supervised learning, so that unlabelled data such as reanalyses, simulation outputs and observations can be employed for training and refining the network. Results will be presented for the performance of AtmoRep for downscaling, precipitation forecasting, the prediction of tropical convection initialization, and model correction. Furthermore, we also demonstrate that AtmoRep has substantial zero-shot skill, i.e., it is capable of performing well on tasks it was not trained for. Zero- and few-shot performance (or in-context learning) is one of the hallmarks of large-scale representation learning and to our knowledge has never been demonstrated in the geosciences.

How to cite: Lessig, C., Luise, I., and Schultz, M.: AtmoRep: Large Scale Representation Learning for Atmospheric Data, EGU General Assembly 2023, Vienna, Austria, 23–28 Apr 2023, EGU23-3117, https://doi.org/10.5194/egusphere-egu23-3117, 2023.

14:40–14:50 | EGU23-1825 | ECS | On-site presentation
Jieyu Chen, Kevin Höhlein, and Sebastian Lerch

Weather forecasts today are typically issued in the form of ensemble simulations based on multiple runs of numerical weather prediction models with different perturbations in the initial states and the model physics. In light of the continuously increasing spatial resolutions of operational weather models, this results in large, high-dimensional datasets that nonetheless contain relevant spatial and temporal structure, as well as information about the predictive uncertainty. We propose invariant variational autoencoder (iVAE) models based on convolutional neural network architectures to learn low-dimensional representations of the spatial forecast fields. We specifically aim to account for the ensemble character of the input data and discuss methodological questions about the optimal design of suitable dimensionality reduction methods in this setting. Thereby, our iVAE models extend previous work where low-dimensional representations of single, deterministic forecast fields were learned and utilized for incorporating spatial information into localized ensemble post-processing methods based on neural networks [1], which were able to improve upon models utilizing location-specific inputs only [2]. By additionally incorporating the ensemble dimension and learning representations of probability distributions of spatial fields, we aim to enable a more flexible modeling of relevant predictive information contained in the full forecast ensemble. Additional potential applications include data compression and the generation of forecast ensembles of arbitrary size.

We illustrate our methodological developments based on a 10-year dataset of gridded ensemble forecasts from the European Centre for Medium-Range Weather Forecasts of several meteorological variables over Europe. Specifically, we investigate alternative model architectures and highlight the importance of tailoring the loss function to the specific problem at hand.

References:

[1] Lerch, S. & Polsterer, K.L. (2022). Convolutional autoencoders for spatially-informed ensemble post-processing. ICLR 2022 AI for Earth and Space Science Workshop, https://arxiv.org/abs/2204.05102.

[2] Rasp, S. & Lerch, S. (2018). Neural networks for post-processing ensemble weather forecasts. Monthly Weather Review, 146, 3885-3900.

How to cite: Chen, J., Höhlein, K., and Lerch, S.: Spatial representation learning for ensemble weather simulations using invariant variational autoencoders, EGU General Assembly 2023, Vienna, Austria, 23–28 Apr 2023, EGU23-1825, https://doi.org/10.5194/egusphere-egu23-1825, 2023.

14:50–14:55
14:55–15:05 | EGU23-10810 | ECS | On-site presentation
Suyash Bire, Björn Lütjens, Dava Newman, and Chris Hill

Adjoints have become a staple of the oceanic and atmospheric numerical modeling community over the past couple of decades, as they are useful for tuning dynamical models, sensitivity analyses, and data assimilation. One such application is the generation of reanalysis datasets, which provide an optimal record of our past weather, climate, and ocean. For example, the state-of-the-art ocean-ice reanalysis dataset, ECCO, is created by optimally combining a numerical ocean model with heterogeneous observations through a technique called data assimilation. Data assimilation in ECCO minimizes the distance between model and observations by calculating adjoints, i.e., gradients of the loss w.r.t. simulation forcing fields (wind and surface heat fluxes). The forcing fields are iteratively updated and the model is rerun until the loss is minimized, ensuring that the numerical model does not drastically deviate from the observations. Calculating adjoints, however, either requires disproportionately high computational resources or requires rewriting the dynamical model code to be autodifferentiable.

Therefore, we ask if deep learning-based emulators can provide fast and accurate adjoints. Ocean data is smooth, high-dimensional, and has complex spatiotemporal correlations. Therefore, as an initial foray into ocean emulators, we leverage a combination of neural operators and transformers. Specifically, we have adapted the FourCastNet architecture, which has successfully emulated ERA5 weather data in seconds rather than hours, to emulate an idealized ocean simulation.

We generated a ground-truth dataset by simulating a double-gyre, an idealized representation of the North Atlantic Ocean, using MITgcm, a state-of-the-art dynamical model. The model was forced by zonal wind at the surface and relaxation to a meridional profile of temperature — warm/cold temperatures at low/high latitudes. This simulation produced turbulent western boundary currents embedded in the large-scale gyre circulation. We performed 4 additional simulations by modifying the magnitude of SST relaxation and wind forcing to introduce diversity in the dataset. From these simulations, we used 4 state variables (meridional and zonal surface velocities, pressure, and temperature) as well as the forcing fields (zonal wind velocity and relaxation SST profile) sampled in 10-day steps. The dataset was split into training, validation, and test datasets such that validation and test datasets were unseen during training. These datasets provide an ideal testbed for evaluating and comparing the performance of data-driven ocean emulators.

We used these data to train and evaluate Oceanfourcast. Our initial results show that Oceanfourcast can successfully predict the streamfunction and pressure for a lead time of 1 month.

We are currently working on generating adjoints from Oceanfourcast.  We expect the adjoint calculation to require significantly less compute time than that from a full-scale dynamical model like MITgcm.  Our work shows a promising path towards deep-learning augmented data assimilation and uncertainty quantification.
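The appeal of a differentiable emulator is that the adjoint falls out of automatic differentiation, roughly as in the sketch below; the emulator signature, forcing fields and cost function are assumptions for illustration only.

```python
import torch

def emulator_adjoint(emulator, state0, wind, sst_relax, observations):
    """Gradient of a model-observation misfit with respect to the forcing fields,
    obtained by backpropagating through a (hypothetical) differentiable emulator."""
    wind = wind.clone().requires_grad_(True)
    sst_relax = sst_relax.clone().requires_grad_(True)
    predicted = emulator(state0, wind, sst_relax)          # emulated ocean state
    misfit = torch.mean((predicted - observations) ** 2)   # data assimilation cost
    misfit.backward()
    return wind.grad, sst_relax.grad                       # adjoint sensitivities
```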

How to cite: Bire, S., Lütjens, B., Newman, D., and Hill, C.: Oceanfourcast: Emulating Ocean Models with Transformers for Adjoint-based Data Assimilation, EGU General Assembly 2023, Vienna, Austria, 23–28 Apr 2023, EGU23-10810, https://doi.org/10.5194/egusphere-egu23-10810, 2023.

15:05–15:15 | EGU23-3340 | ECS | On-site presentation
Rachel Furner, Peter Haynes, Dan(i) Jones, Dave Munday, Brooks Paige, and Emily Shuckburgh

Data-driven models are becoming increasingly competent at tasks fundamental to weather and climate prediction. Relative to machine learning (ML) based atmospheric models, which have shown promise in short-term forecasting, ML-based ocean forecasting remains somewhat unexplored. In this work, we present a data-driven emulator of an ocean GCM and show that performance over a single predictive step is skilful across all variables under consideration. Iterating such data-driven models poses additional challenges, with many models suffering from over-smoothing of fields or instabilities in the predictions. We compare a variety of methods for iterating our data-driven emulator and assess them by looking at how well they agree with the underlying GCM in the very short term and how realistic the fields remain for longer-term forecasts. Due to the chaotic nature of the system being forecast, we would not expect any model to agree with the GCM accurately over long time periods, but instead we expect fields to continue to exhibit physically realistic behaviour at ever increasing lead times. Specifically, we expect well-represented fields to remain stable whilst also maintaining the presence and sharpness of features seen in both reality and in GCM predictions, with reduced emphasis on accurately representing the location and timing of these features. This nuanced and temporally changing definition of what constitutes a ‘good’ forecast at increasing lead times generates questions over both (1) how one defines suitable metrics for assessing data-driven models, and perhaps more importantly, (2) identifying the most promising loss functions to use to optimise these models.

How to cite: Furner, R., Haynes, P., Jones, D., Munday, D., Paige, B., and Shuckburgh, E.: An iterative data-driven emulator of an ocean general circulation model, EGU General Assembly 2023, Vienna, Austria, 23–28 Apr 2023, EGU23-3340, https://doi.org/10.5194/egusphere-egu23-3340, 2023.

15:15–15:20
15:20–15:30 | EGU23-16936 | ECS | Virtual presentation
Said Ouala, Bertrand Chapron, Fabrice Collard, Lucile Gaultier, and Ronan Fablet

Sea surface temperature (SST) is a critical parameter in the global climate system and plays a vital role in many marine processes, including ocean circulation, evaporation, and the exchange of heat and moisture between the ocean and atmosphere. As such, understanding the variability of SST is important for a range of applications, including weather and climate prediction, ocean circulation modeling, and marine resource management.

The dynamics of SST arise from multiple degrees of freedom that interact across a continuum of spatio-temporal scales. A first-order approximation of such a system was initially introduced by Hasselmann. In his pioneering work, Hasselmann (1976) discussed the interest of using a two-scale stochastic model to represent the interactions between slow and fast variables of the global ocean, climate, and atmosphere system. In this paper, we examine the potential of machine learning techniques to derive relevant dynamical models of Sea Surface Temperature Anomaly (SSTA) data in the Mediterranean Sea. We focus on the seasonal modulation of the SSTA and aim to understand the factors that influence the temporal variability of SSTA extremes. Our analysis shows that the variability of the SSTA can indeed be decomposed into slow and fast components. The dynamics of the slow variables are associated with the seasonal cycle, while the dynamics of the fast variables are linked to the SSTA response to rapid underlying processes such as the local wind variability. Based on these observations, we approximate the probability density function of the SSTA data using a stochastic differential equation parameterized by a neural network. In this model, the drift function represents the seasonal cycle and the diffusion function represents the envelope of the fast SSTA response.
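A minimal sketch of such a neural SDE and one Euler-Maruyama step, with small networks for the drift (seasonal cycle) and the positive diffusion (envelope of the fast response); the architecture and state dimension are assumptions, not the authors' model.

```python
import torch
import torch.nn as nn

class NeuralSDE(nn.Module):
    """dX = f(X, t) dt + g(X, t) dW with neural drift f and positive diffusion g."""
    def __init__(self, dim=1, hidden=32):
        super().__init__()
        self.drift = nn.Sequential(nn.Linear(dim + 1, hidden), nn.Tanh(),
                                   nn.Linear(hidden, dim))
        self.diffusion = nn.Sequential(nn.Linear(dim + 1, hidden), nn.Tanh(),
                                       nn.Linear(hidden, dim), nn.Softplus())

    def step(self, x, t, dt):
        """One Euler-Maruyama step for the SSTA state x at (scalar) time t."""
        xt = torch.cat([x, torch.full((x.shape[0], 1), t)], dim=-1)
        noise = torch.randn_like(x) * (dt ** 0.5)
        return x + self.drift(xt) * dt + self.diffusion(xt) * noise
```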

 

How to cite: Ouala, S., Chapron, B., Collard, F., Gaultier, L., and Fablet, R.: Analysis of marine heat waves using machine learning, EGU General Assembly 2023, Vienna, Austria, 23–28 Apr 2023, EGU23-16936, https://doi.org/10.5194/egusphere-egu23-16936, 2023.

15:30–15:40 | EGU23-5766 | ECS | On-site presentation
Caroline Arnold, Shivani Sharma, Tobias Weigel, and David Greenberg

In recent years, machine learning (ML) based parameterizations have become increasingly common in Earth System Models (ESM). Sub-grid scale physical processes that would be computationally too expensive, e.g., atmospheric chemistry and cloud microphysics, can be emulated by ML algorithms such as neural networks.

Neural networks are trained first on simulations of the sub-grid scale process that is to be emulated. They are then used in so-called inference mode to make predictions during the ESM run, replacing the original parameterization. Training usually requires GPUs, while inference may be done on CPU architectures.

At first, neural networks are evaluated offline, i.e., independently of the ESM on appropriate datasets. However, their performance can ultimately only be evaluated in an online setting, where the ML algorithm is coupled to the ESM, including nonlinear interactions.

We want to shorten the time spent in neural network development and offline testing and move quickly to online evaluation of ML components in our ESM of choice, ICON (Icosahedral Nonhydrostatic Weather and Climate Model). Since ICON is written in Fortran, and modern ML algorithms are developed in the Python ecosystem, this requires efficient bridges between the two programming languages. The Fortran-Python bridge must be flexible to allow for iterative development of the neural network. Changes to the ESM codebase should be as few as possible, and the runtime overhead should not limit development.

In our contribution we explore three strategies to call the neural network inference from within Fortran using (i) embedded Python code compiled in a dynamic library, (ii) pipes, and (iii) MPI using the ICON coupler YAC. We provide quantitative benchmarks for the proposed Fortran-Python bridges and assess their overall suitability in a qualitative way to derive best practices. The Fortran-Python bridge enables scientists and developers to evaluate ML components in an online setting, and can be extended to other parameterizations and ESMs.
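As a hedged illustration of option (ii), the Python side of a named-pipe bridge could be as simple as the loop below; the FIFO paths, array lengths and double-precision layout are assumptions that would have to match the Fortran writer, and the placeholder model stands in for the trained network.

```python
import numpy as np

N_IN, N_OUT = 64, 32   # assumed vector lengths agreed with the Fortran side

def serve(fifo_in="/tmp/icon_to_py", fifo_out="/tmp/py_to_icon", model=None):
    """Blocking inference loop: the ESM writes raw float64 inputs to fifo_in,
    this process answers with float64 outputs on fifo_out."""
    model = model or (lambda x: x[:N_OUT])   # placeholder for the neural network
    while True:
        with open(fifo_in, "rb") as f:       # blocks until the Fortran side writes
            buf = f.read(N_IN * 8)
        if len(buf) < N_IN * 8:              # writer closed the pipe: stop serving
            break
        x = np.frombuffer(buf, dtype=np.float64)
        y = np.asarray(model(x), dtype=np.float64)
        with open(fifo_out, "wb") as f:
            f.write(y.tobytes())
```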

How to cite: Arnold, C., Sharma, S., Weigel, T., and Greenberg, D.: Best Practices for Fortran-Python Bridges to Integrate Neural Networks in Earth System Models, EGU General Assembly 2023, Vienna, Austria, 23–28 Apr 2023, EGU23-5766, https://doi.org/10.5194/egusphere-egu23-5766, 2023.

15:40–15:45

Posters on site: Mon, 24 Apr, 16:15–18:00 | Hall X5

Chairperson: Julien Brajard
X5.74 | EGU23-10256 | ECS
Tobias Milz, Marte Hofsteenge, Marwan Katurji, and Varvara Vetrova

Foehn winds are accelerated, warm and dry winds that can have significant environmental impacts as they descend into the lee of a mountain range. For example, in the McMurdo Dry Valleys in Antarctica, foehn events can cause ice and glacial melt and destabilise ice shelves which, if lost, would result in a rise in sea level. Consequently, there is a strong interest in a deeper understanding of foehn winds and their meteorological signatures. Most current automatic detection methods rely on rule-based methodologies that require static thresholds of meteorological parameters. However, the patterns of foehn winds are hard to define and differ between alpine valleys around the world. Consequently, data-driven solutions might help create more accurate detection and prediction methodologies.

State-of-the-art machine learning approaches to this problem have shown promising results but follow a supervised learning paradigm. As such, these approaches require accurate labels, which, for the most part, are created by imprecise static rule-based algorithms. Consequently, the resulting machine-learning models are trained to recognise the same static definitions of the foehn wind signatures.

In this paper, we introduce and compare the first unsupervised machine-learning approaches for detecting foehn wind events. We focus on data from the McMurdo Dry Valleys as an example; however, due to the unsupervised nature of these approaches, our solutions can recognise a more dynamic definition of foehn wind events and are therefore independent of location. The first approach is based on multivariate time-series clustering, while the second utilises a deep autoencoder-based anomaly detection method to identify foehn wind events. Our best model achieves an F1-score of 88%, matching or surpassing previous machine-learning methods while providing a more flexible and inclusive definition of foehn events.
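A minimal sketch of the autoencoder-based variant: windows of station meteorology are reconstructed, and unusually large reconstruction errors are flagged as candidate foehn events. The architecture, window size and threshold are assumptions, not the authors' model.

```python
import torch
import torch.nn as nn

class WindowAutoencoder(nn.Module):
    """Autoencoder over flattened multivariate time-series windows."""
    def __init__(self, n_features=6, window=24, latent=8):
        super().__init__()
        d = n_features * window
        self.enc = nn.Sequential(nn.Linear(d, 64), nn.ReLU(), nn.Linear(64, latent))
        self.dec = nn.Sequential(nn.Linear(latent, 64), nn.ReLU(), nn.Linear(64, d))

    def forward(self, x):
        return self.dec(self.enc(x))

def flag_events(model, windows, threshold):
    """windows: (n_windows, n_features*window); returns a boolean anomaly mask."""
    with torch.no_grad():
        err = ((model(windows) - windows) ** 2).mean(dim=1)
    return err > threshold
```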

How to cite: Milz, T., Hofsteenge, M., Katurji, M., and Vetrova, V.: Foehn Wind Analysis using Unsupervised Deep Anomaly Detection, EGU General Assembly 2023, Vienna, Austria, 23–28 Apr 2023, EGU23-10256, https://doi.org/10.5194/egusphere-egu23-10256, 2023.

X5.75 | EGU23-13013 | ECS
Markus Rosenberger, Manfred Dorninger, and Martin Weißmann

Clouds of all kinds play a large role in many atmospheric processes, including, e.g., radiation and moisture transport, and their type allows an insight into the dynamics going on in the atmosphere. Hence, the observation of clouds from Earth's surface has always been important for analysing the current weather and its evolution during the day. However, cloud observations by human observers are labour-intensive and hence also costly. In addition to this, cloud classifications done by human observers are always subjective to some extent. Finding an efficient method for automated observations would solve both problems. Although clouds have already been operationally observed using satellites for decades, observations from the surface shed light on a different set of characteristics. Moreover, the WMO also defined its cloud classification standards according to visual cloud properties as observed at the Earth’s surface. Thus, in this work the use of machine learning methods to classify clouds from RGB pictures taken at the surface is proposed. Explicitly, a conditional Generative Adversarial Network (cGAN) is trained to discriminate between 30 different categories, 10 for each cloud level - low, medium and high. Besides showing robust results in different image classification problems, an additional advantage of using a GAN instead of a classical convolutional neural network is that its output can also artificially enhance the size of the training data set. This is especially useful if the number of available pictures is unevenly distributed among the different classes. Additional background observations like cloud cover and cloud base height can also be used to further improve the performance of the cGAN. Together with a cloud camera, a properly trained cGAN can observe and classify clouds with a high temporal resolution of the order of seconds, which can be used, e.g., for model verification or to efficiently monitor the current status of the weather as well as its short-term evolution. First results will also be presented.

How to cite: Rosenberger, M., Dorninger, M., and Weißmann, M.: Using cGAN for cloud classification from RGB pictures, EGU General Assembly 2023, Vienna, Austria, 23–28 Apr 2023, EGU23-13013, https://doi.org/10.5194/egusphere-egu23-13013, 2023.

X5.76
|
EGU23-11293
|
ECS
|
Ting Li, Oliver López Valencia, Kasper Johansen, and Matthew McCabe

Driven in large part by policy initiatives designed to increase food security and realized via the construction of thousands of center-pivot irrigation fields since the 1970s, agricultural development in Saudi Arabia has undergone tremendous changes. However, little is known about the precise number, acreage, and changing dynamics of these fields. To bridge the knowledge gap between the political drivers and the in-field response, we leveraged a hybrid machine learning framework, implementing Density-Based Spatial Clustering of Applications with Noise, Convolutional Neural Networks, and Spectral Clustering in a stepwise manner to delineate center-pivot fields at the national scale in Saudi Arabia using historical Landsat imagery since 1990. The framework achieved producer's and user's accuracies larger than 83.7% and 90.2%, respectively, when assessed against 28,000 manually delineated fields collected from different regions and periods. We explored the multi-decadal dynamics of agricultural development in Saudi Arabia by quantifying the number, acreage, and size distribution of center-pivot fields, along with the first and last detection year of the fields since 1990. Agricultural development in Saudi Arabia experienced four stages: an initialization stage before 1990, a contraction stage from 1990 to 2010, an expansion stage from 2010 to 2016, and an ongoing contraction stage since 2016. Most of the fields predated 1990, representing over 8,800 km2 in that year, as a result of the policy initiatives to stimulate wheat production, which promoted Saudi Arabia to the sixth largest exporter of wheat in the 1980s. A decreasing trend was observed from 1990 to 2010, with an average of 8,011 km2 of fields detected during those two decades, in response to the policy initiative implemented to phase out wheat after 1990. As a consequence of planting fodder crops to promote the dairy industry, the number and extent of fields increased rapidly from 2010 to 2015 and reached a peak in 2016, with 33,961 fields representing 9,400 km2. Agricultural extent has seen a continuous decline since 2016, falling below 1990 values in 2020. This decline has been related to sustainable policy initiatives implemented for the Saudi Vision 2030. There is some evidence of an uptick in 2021 (also observed in an ongoing analysis for 2022), which might be a response to global influences such as the COVID-19 pandemic and the more recent conflict in Ukraine, which has disrupted the international supply of agricultural products. The results provide a historical account of agricultural activity throughout the Kingdom and a basis for informed decision-making on sustainable irrigation and agricultural practices, helping to better protect and manage the nation's threatened groundwater resources, and providing insights into the resilience and elasticity of the Saudi Arabian food system to global perturbations.

How to cite: Li, T., López Valencia, O., Johansen, K., and McCabe, M.: National scale agricultural development dynamics under socio-political drivers in Saudi Arabia since 1990, EGU General Assembly 2023, Vienna, Austria, 23–28 Apr 2023, EGU23-11293, https://doi.org/10.5194/egusphere-egu23-11293, 2023.

X5.77
|
EGU23-15756
|
ECS
|
Thomas Rieutord, Geoffrey Bessardon, and Emily Gleeson

The next generation of numerical weather prediction models (so-called digital twin engines) will reach hectometric scales, for which the existing physiography databases are insufficient. Our work leverages machine learning and open-access data to produce a more accurate and higher-resolution physiography database. One component to improve is the land cover map. The reference data combine multiple high-resolution thematic maps through an agreement-based decision tree. The input data are taken from the Sentinel-2 satellite. The land cover map is then generated by image segmentation. This work implements and compares several algorithms from different families to study their suitability for the land cover classification problem. The sensitivity to data quality will also be studied. Compared to existing work, this study is innovative in the construction of the reference map (both leveraging existing maps and fit for the end-user purpose) and in the diversity of algorithms compared to produce our land cover map.

How to cite: Rieutord, T., Bessardon, G., and Gleeson, E.: Physiography improvements in numerical weather prediction digital twin engines, EGU General Assembly 2023, Vienna, Austria, 23–28 Apr 2023, EGU23-15756, https://doi.org/10.5194/egusphere-egu23-15756, 2023.

X5.78
|
EGU23-13367
|
ECS
Eike Bolmer, Adili Abulaitijiang, Luciana Fenoglio-Marc, Jürgen Kusche, and Ribana Roscher

Mesoscale eddies are gyrating currents in the ocean with horizontal scales from 10 km up to 100 km and above. They transport water mass, heat, and nutrients and are therefore of interest to marine biologists, oceanographers, and geodesists, among others. Usually, gridded sea level anomaly (SLA) maps, processed from several radar altimetry missions, are used to detect eddies. However, operational processors create multi-mission (processing level 4) SLA grid maps with an effective spatiotemporal resolution far lower than their grid spacing and temporal resolution.

This drawback leads to erroneous eddy detections. We therefore investigate whether the higher-resolution along-track data can be used instead to classify the SLA observations into cyclonic, anticyclonic, or no eddies more accurately than with processed SLA grid map products. With our framework, we aim to infer a daily two-dimensional segmentation map of classified eddies. With repeat cycles between 10 and 35 days and cross-track spacings of a few tens to a few hundreds of kilometres, ocean eddies are clearly visible in altimeter observations but are typically covered only by a few ground tracks, so the spatiotemporal context within the input data varies strongly from day to day. Conventional convolutional neural networks (CNNs), however, rely on data without varying gaps or jumps in time and space in order to exploit the intrinsic spatial or temporal context of the observations. This challenge therefore needs to be addressed with a deep neural network that, on the one hand, utilises the spatiotemporal context within the along-track data and, on the other hand, can output a two-dimensional segmentation map from data of varying sparsity. Our architecture, Teddy, uses a transformer module to encode and process the spatiotemporal information along the ground tracks' sea level anomaly data, producing a sparse feature map. This map is then fed into a sparsity-invariant convolutional neural network to infer a two-dimensional segmentation map of classified eddies. The reference data used to train Teddy are produced by an open-source geometry-based approach (py-eddy-tracker [1]).
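As a rough illustration of the sparsity-invariant convolution used in the second stage, the sketch below normalises each convolution window by the number of valid (observed) pixels beneath it, so that gaps between ground tracks do not bias the output; the shapes, kernel size and mask density are assumptions, and this is not the Teddy implementation.

# Generic sketch of a mask-normalised ("sparsity-invariant") 2-D convolution.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SparseConv2d(nn.Module):
    def __init__(self, c_in, c_out, k=3):
        super().__init__()
        self.conv = nn.Conv2d(c_in, c_out, k, padding=k // 2, bias=False)
        self.register_buffer("ones", torch.ones(1, 1, k, k))
        self.k = k

    def forward(self, x, mask):
        # x:    (B, C, H, W) feature map, zero where unobserved
        # mask: (B, 1, H, W) equal to 1 where a valid along-track observation exists
        out = self.conv(x * mask)
        # number of valid pixels under each kernel window, used for normalisation
        count = F.conv2d(mask, self.ones, padding=self.k // 2).clamp(min=1e-6)
        out = out / count
        new_mask = (count > 1e-6).float()  # propagate the observed footprint
        return out, new_mask

x = torch.randn(2, 8, 64, 64)
mask = (torch.rand(2, 1, 64, 64) > 0.9).float()  # roughly 10% of pixels observed
y, m = SparseConv2d(8, 16)(x, mask)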

The focus of this presentation is on how we implemented this approach to derive two-dimensional segmentation maps of classified eddies from along-track altimetry with our deep neural network architecture Teddy. We show results and limitations for the classification of eddies using only along-track SLA data from the multi-mission level 3 product of the Copernicus Marine Environment Monitoring Service (CMEMS) for the Gulf Stream region over the 2017–2019 period. We find that, using our methodology, we can create two-dimensional maps of classified eddies from along-track data without using preprocessed SLA grid maps.

[1] Evan Mason, Ananda Pascual, and James C. McWilliams, “A new sea surface height–based code for oceanic mesoscale eddy tracking,” Journal of Atmospheric and Oceanic Technology, vol. 31, no. 5, pp. 1181–1188, 2014.

How to cite: Bolmer, E., Abulaitijiang, A., Fenoglio-Marc, L., Kusche, J., and Roscher, R.: Framework for creating daily semantic segmentation maps of classified eddies using SLA along-track altimetry data, EGU General Assembly 2023, Vienna, Austria, 23–28 Apr 2023, EGU23-13367, https://doi.org/10.5194/egusphere-egu23-13367, 2023.

X5.79
|
EGU23-7561
|
ECS
|
Thea Quistgaard, Peter L. Langen, Tanja Denager, Raphael Schneider, and Simon Stisen

A course of action to combat greenhouse gas (GHG) emissions in a Danish context is to re-wet previously drained peatlands and thereby return them to their natural hydrological state, in which they act as GHG sinks. GHG emissions from peatlands are known to be closely coupled to the hydrological dynamics through the groundwater table depth (WTD). To understand the effect of a changing and variable climate on the spatio-temporal dynamics of hydrological processes and the associated uncertainties, we aim to produce a high-resolution, local-scale climate projection ensemble from the global-scale CMIP6 projections.

With a focus on hydrological impacts, uncertainties and possible extreme endmembers, this study aims to span the full ensemble of local-scale climate projections for the Danish geographical area corresponding to the CMIP6 ensemble of Global Climate Models (GCMs). Deep-learning-based statistical downscaling methods are applied to bridge the gap from GCMs to local-scale climate change and variability, which in turn will be used in field-scale hydrological modelling. The approach is developed specifically to accommodate the resolutions, event types and conditions relevant for assessing the impacts on peatland GHG emissions through their relationship with WTD dynamics, by applying stacked conditional generative adversarial networks (CGANs) to downscale precipitation, temperature, and evaporation. In the future, the approach is anticipated to be extended to directly assess the impacts of climate change and ensemble uncertainty on peatland hydrology variability and extremes.

How to cite: Quistgaard, T., Langen, P. L., Denager, T., Schneider, R., and Stisen, S.: Deep Learning guided statistical downscaling of climate projections for use in hydrological impact modeling in Danish peatlands, EGU General Assembly 2023, Vienna, Austria, 23–28 Apr 2023, EGU23-7561, https://doi.org/10.5194/egusphere-egu23-7561, 2023.

X5.80
|
EGU23-15892
|
ECS
|
Elena Fillola, Raul Santos-Rodriguez, and Matt Rigby

Lagrangian particle dispersion models (LPDMs) have been used extensively to calculate source-receptor relationships ("footprints") for use in greenhouse gas (GHG) flux inversions. However, because a backward-running model simulation is required for each data point, LPDMs do not scale well to very large datasets, which makes them unsuitable for GHG inversions using high-resolution satellite instruments such as TROPOMI. In this work, we demonstrate how machine learning (ML) can be used to accelerate footprint production, first presenting a proof-of-concept emulator for ground-based site observations and then discussing work in progress to create an emulator suitable for satellite observations. In Fillola et al. (2023), we presented an ML emulator for NAME, the Met Office's LPDM, which outputs footprints for a small region around an observation point using purely meteorological variables as inputs. The footprint magnitude at each grid cell in the domain is modelled independently using gradient-boosted regression trees. The model is evaluated for seven sites, producing a footprint in 10 ms, compared to around 10 minutes for the 3D simulator, and achieving R2 values between 0.6 and 0.8 for CH4 concentrations simulated at the sites when compared to the time series generated by NAME. Following on from this work, we demonstrate how this same emulator can be applied to satellite data to reproduce footprints immediately around any measurement point in the domain, evaluating this application with data for Brazil and North Africa and obtaining R2 values of around 0.5 for simulated CH4 concentrations. Furthermore, we propose new emulator architectures for LPDMs applied to satellite observations. These new architectures should tackle some of the weaknesses of the existing approach, for example by propagating information more flexibly in space and time, potentially improving the accuracy of the derived footprints and extending the prediction capabilities to bigger domains.
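The per-grid-cell regression idea can be sketched as follows, with one gradient-boosted regressor per footprint cell; the predictors, grid size and hyperparameters are placeholders rather than the configuration of Fillola et al. (2023).

# Sketch of "one gradient-boosted regressor per footprint grid cell".
import numpy as np
from sklearn.ensemble import HistGradientBoostingRegressor

n_samples, n_met_features = 2000, 40      # meteorological predictors per observation time
n_side = 8                                # small region around the measurement point
n_cells = n_side * n_side

X = np.random.rand(n_samples, n_met_features)   # placeholder meteorological inputs
Y = np.random.rand(n_samples, n_cells)          # placeholder LPDM footprints, flattened

# Fit an independent model for each grid cell of the footprint
models = [HistGradientBoostingRegressor(max_depth=6).fit(X, Y[:, j]) for j in range(n_cells)]

# Emulated footprint for a new meteorological state, reshaped back onto the grid
x_new = np.random.rand(1, n_met_features)
footprint = np.array([m.predict(x_new)[0] for m in models]).reshape(n_side, n_side)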

How to cite: Fillola, E., Santos-Rodriguez, R., and Rigby, M.: Towards emulated Lagrangian particle dispersion model footprints for satellite observations, EGU General Assembly 2023, Vienna, Austria, 23–28 Apr 2023, EGU23-15892, https://doi.org/10.5194/egusphere-egu23-15892, 2023.

X5.81
|
EGU23-7368
|
ECS
|
Christof Schötz

Some results from the DEEB (Differential Equation Estimation Benchmark) are presented. In DEEB, we compare different machine learning approaches and statistical methods for estimating nonlinear dynamics from data. Such methods constitute an important building block for purely data-driven earth system models as well as hybrid models which combine physical knowledge with past observations.

Specifically, we examine approaches for solving the following problem: given time-state observations of a deterministic ordinary differential equation (ODE) with measurement noise in the state, predict the future evolution of the system. Of particular interest are systems with chaotic behavior, like Lorenz 63, and nonparametric settings in which the functional form of the ODE is completely unknown (in particular, not restricted to a low-order polynomial). To enable a fair comparison of methods, a benchmark database was created that includes datasets of simulated observations from dynamical systems of different complexity and with varying noise levels. The methods we compare include echo state networks, Gaussian processes, Neural ODEs, SINDy, thin plate splines, and more.
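For readers unfamiliar with this problem class, the sketch below shows one simple baseline of the kind such a benchmark can include: smooth the noisy Lorenz-63 trajectory, estimate time derivatives by finite differences, regress the derivative on the state, and integrate the learned vector field forward. It is purely illustrative and not one of the benchmarked implementations.

# Illustrative two-step baseline: derivative estimation + nonparametric regression.
import numpy as np
from scipy.ndimage import gaussian_filter1d
from scipy.integrate import solve_ivp
from sklearn.ensemble import RandomForestRegressor

def lorenz(t, s, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = s
    return [sigma * (y - x), x * (rho - z) - y, x * y - beta * z]

# Synthetic noisy time-state observations of Lorenz 63
t = np.linspace(0.0, 20.0, 4000)
sol = solve_ivp(lorenz, (0.0, 20.0), [1.0, 1.0, 1.0], t_eval=t)
obs = sol.y.T + 0.5 * np.random.randn(*sol.y.T.shape)

# Smooth, then estimate dx/dt by finite differences
smoothed = gaussian_filter1d(obs, sigma=5, axis=0)
deriv = np.gradient(smoothed, t[1] - t[0], axis=0)

# Learn the vector field f(x) ~ dx/dt and integrate it forward from the last state
model = RandomForestRegressor(n_estimators=100).fit(smoothed, deriv)
forecast = solve_ivp(lambda _, s: model.predict(s[None, :])[0],
                     (20.0, 25.0), smoothed[-1], t_eval=np.linspace(20.0, 25.0, 500))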

Although some methods consistently perform better than others throughout different datasets, there seems to be no silver bullet.

How to cite: Schötz, C.: Comparison of Methods for Learning Differential Equations from Data, EGU General Assembly 2023, Vienna, Austria, 23–28 Apr 2023, EGU23-7368, https://doi.org/10.5194/egusphere-egu23-7368, 2023.

X5.82
|
EGU23-16287
|
ECS
Carolina Natel de Moura, David Martin Belda, Peter Antoni, and Almut Arneth

Forests are a significant sink for the carbon dioxide (CO2) emitted by humans. Climate change is expected to impact forest systems and their role in the terrestrial carbon cycle in several ways: for example, the fertilization effect of increased atmospheric CO2 and the lengthening of the growing season in northern temperate and boreal areas may increase forest productivity, while more frequent extreme climate events such as storms, windthrows, drought spells and wildfires might shorten the disturbance return period, increasing forest loss and reducing the carbon stored in vegetation and soils. In addition, forest management in response to an increased demand for wood products and fuel can affect the carbon storage in ecosystems and wood products. State-of-the-art Dynamic Global Vegetation Models (DGVMs) simulate the forest responses to environmental and human processes; however, running these models globally for many climate and management scenarios is challenging due to computational constraints. Integrating process-based models and machine learning methods through emulation allows us to speed up computationally expensive simulations. In this work, we explore the use of machine learning to surrogate the LPJ-GUESS DGVM. The emulator is spatially aware, representing forests across the globe at a flexible spatial resolution, and considers past climate and forest management practices to account for legacy effects. The training data for the emulator are derived from dedicated runs of the DGVM sampled across four dimensions relevant to forest carbon and yield: atmospheric CO2 concentration, air Temperature, Precipitation, and forest Management (CTPM). The emulator captures the relevant forest responses to climate and management in a lightweight form and will support the development of the coupled socio-economic/ecological model of the land system LandSyMM (landsymm.earth). Other relevant scientific applications include the analysis of optimal forestry protocols under climate change and of the forest potential for climate change mitigation.


How to cite: Natel de Moura, C., Belda, D. M., Antoni, P., and Arneth, A.: A machine learning emulator for forest carbon stocks and fluxes, EGU General Assembly 2023, Vienna, Austria, 23–28 Apr 2023, EGU23-16287, https://doi.org/10.5194/egusphere-egu23-16287, 2023.

X5.83
|
EGU23-582
|
ECS
Swarnalee Mazumder and Ayush Prasad

The terrestrial carbon cycle is one of the largest sources of uncertainty in climate projections. The terrestrial carbon sink, which removes a quarter of anthropogenic CO2 emissions, is highly variable in time and space depending on climate. Previous studies have found that data-driven models such as random forests, artificial neural networks and long short-term memory networks can accurately model Net Ecosystem Exchange (NEE) and Gross Primary Productivity (GPP), two important metrics for quantifying the direction and magnitude of CO2 transfer between the land surface and the atmosphere. Recently, a new class of machine learning models called transformers has gained widespread attention in natural language processing tasks due to their ability to learn from large volumes of sequential data. In this work, we use transformers to model NEE and GPP from 1996 to 2022 at 39 flux stations in the ICOS Europe network using ERA5 reanalysis data. We compare our results with traditional machine learning approaches to evaluate the generalisability and predictive performance of transformers for carbon flux modelling.

How to cite: Mazumder, S. and Prasad, A.: Modeling the Variability of Terrestrial Carbon Fluxes using Transformers, EGU General Assembly 2023, Vienna, Austria, 23–28 Apr 2023, EGU23-582, https://doi.org/10.5194/egusphere-egu23-582, 2023.

X5.84
|
EGU23-12458
|
ECS
Vadim Zinchenko and David Greenberg

Data Assimilation (DA) is a challenging and expensive computational problem targeting hidden variables in high-dimensional spaces. 4DVar methods are widely used in weather forecasting to fit simulations to sparse observations by optimization over numerical model input. The complexity of this inverse problem and the sequential nature of common 4DVar approaches lead to long computation times with limited opportunity for parallelization. Here we propose using machine learning (ML) algorithms to replace the entire 4DVar optimization problem with a single forward pass through a neural network that maps noisy and incomplete observations at multiple time points to a complete system state estimate at a single time point. We train the neural network using a loss function derived from the weak-constraint 4DVar objective, including terms incorporating errors in both model and data. In contrast to standard 4DVar approaches, our method amortizes the computational investment of training to avoid solving optimization problems for each assimilation window, and its non-sequential nature allows for easy parallelization along the time axis for both training and inference. In contrast to most previous ML-based data assimilation methods, our approach does not require access to complete, noise-free simulations for supervised learning or gradient-free approximations such as Ensemble Kalman filtering. To demonstrate the potential of our approach, we show a proof-of-concept on the chaotic Lorenz'96 system, using a novel "1.5D Unet" architecture combining 1D and 2D convolutions.
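A minimal sketch of a weak-constraint 4DVar-style training loss is given below, in a simplified variant where the network returns state estimates over the whole window and the observation operator is a simple mask; the model operator, error weights and shapes are illustrative assumptions, not the authors' configuration.

# Sketch of a weak-constraint 4DVar-style loss for a network mapping a window of
# noisy, incomplete observations to state estimates. All operators are placeholders.
import torch
import torch.nn as nn

def weak_constraint_loss(net, obs, obs_mask, model_step, sigma_obs=1.0, sigma_model=1.0):
    # obs, obs_mask: (B, T, D); net returns x_hat of shape (B, T, D);
    # model_step is a differentiable one-step forecast x_{t+1} = M(x_t).
    x_hat = net(obs * obs_mask)
    # observation term: misfit to the available observations only
    j_obs = (((x_hat - obs) * obs_mask) ** 2).sum() / sigma_obs ** 2
    # model term: residual of the (imperfect) model equations along the window
    j_model = ((x_hat[:, 1:] - model_step(x_hat[:, :-1])) ** 2).sum() / sigma_model ** 2
    return j_obs + j_model

# Toy wiring: a flatten/linear/unflatten network and a damped linear "model"
net = nn.Sequential(nn.Flatten(), nn.Linear(10 * 3, 10 * 3), nn.Unflatten(1, (10, 3)))
loss = weak_constraint_loss(net,
                            obs=torch.randn(4, 10, 3),
                            obs_mask=(torch.rand(4, 10, 3) > 0.5).float(),
                            model_step=lambda x: 0.95 * x)
loss.backward()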

How to cite: Zinchenko, V. and Greenberg, D.: Training Deep Data Assimilation Networks on Sparse and Noisy Observations, EGU General Assembly 2023, Vienna, Austria, 23–28 Apr 2023, EGU23-12458, https://doi.org/10.5194/egusphere-egu23-12458, 2023.

X5.85
|
EGU23-15994
|
ECS
Nicolas Lafon, Philippe Naveau, and Ronan Fablet

The spatio-temporal reconstruction of a dynamical process from observational data is at the core of a wide range of applications in geosciences. This is particularly true for weather forecasting, operational oceanography and climate studies. However, the reconstruction of a given dynamic and the prediction of future states must take into account the uncertainties that affect the system. The available observational measurements are only provided with limited accuracy. Besides, the encoded physical equations that model the evolution of the system do not capture the full complexity of the real system. Finally, the numerical approximation generates a non-negligible error. For these reasons, it seems relevant to calculate a probability distribution of the system state rather than the most probable state. Using recent advances in machine learning techniques for inverse problems, we propose an algorithm that jointly learns a parametric distribution of the state, the dynamics governing the evolution of the parameters, and a solver. Experiments conducted on synthetic reference datasets, as well as on datasets describing environmental systems, validate our approach.

How to cite: Lafon, N., Naveau, P., and Fablet, R.: Uncertainty quantification in variational data assimilation with deep learning, EGU General Assembly 2023, Vienna, Austria, 23–28 Apr 2023, EGU23-15994, https://doi.org/10.5194/egusphere-egu23-15994, 2023.

X5.86
|
EGU23-7391
|
ECS
|
Martin Brolly
Many practical problems in fluid dynamics demand an empirical approach, where statistics estimated from data inform understanding and modelling. In this context data-driven probabilistic modelling offers an elegant alternative to ad hoc estimation procedures. Probabilistic models are useful as emulators, but also offer an attractive means of estimating particular statistics of interest. In this paradigm one can rely on proper scoring rules for model comparison and validation, and invoke Bayesian statistics to obtain rigorous uncertainty quantification. Stochastic neural networks provide a particularly rich class of probabilistic models, which, when paired with modern optimisation algorithms and GPUs, can be remarkably efficient. We demonstrate this approach by learning the single particle transition density of ocean surface drifters from decades of Global Drifter Program observations using a Bayesian mixture density network. From this we derive maps of various displacement statistics and corresponding uncertainty maps. Our model also offers a means of simulating drifter trajectories as a discrete-time Markov process, which could be used to study the transport of plankton or plastic in the upper ocean.
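A minimal, non-Bayesian sketch of a mixture density network for a two-dimensional displacement (transition) density, trained by negative log-likelihood, is given below; the component count, architecture and data shapes are illustrative assumptions rather than the configuration used in this work.

# Sketch of a Gaussian mixture density network with diagonal covariances.
import math
import torch
import torch.nn as nn

class MDN(nn.Module):
    def __init__(self, n_in=2, n_components=8, dim=2, hidden=64):
        super().__init__()
        self.k, self.dim = n_components, dim
        self.body = nn.Sequential(nn.Linear(n_in, hidden), nn.Tanh())
        self.logits = nn.Linear(hidden, n_components)           # mixture weights
        self.mu = nn.Linear(hidden, n_components * dim)         # component means
        self.log_sigma = nn.Linear(hidden, n_components * dim)  # log std devs

    def forward(self, x):
        h = self.body(x)
        return (self.logits(h),
                self.mu(h).view(-1, self.k, self.dim),
                self.log_sigma(h).view(-1, self.k, self.dim))

def nll(model, x, y):
    logits, mu, log_sigma = model(x)
    # per-component diagonal Gaussian log-density of the displacement y
    comp = (-0.5 * ((y.unsqueeze(1) - mu) / log_sigma.exp()) ** 2
            - log_sigma - 0.5 * math.log(2 * math.pi)).sum(-1)
    return -torch.logsumexp(torch.log_softmax(logits, -1) + comp, dim=-1).mean()

# x: drifter position (e.g. lon, lat); y: displacement over a fixed time step
x, y = torch.randn(256, 2), torch.randn(256, 2)
nll(MDN(), x, y).backward()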

How to cite: Brolly, M.: Learning fluid dynamical statistics using stochastic neural networks, EGU General Assembly 2023, Vienna, Austria, 23–28 Apr 2023, EGU23-7391, https://doi.org/10.5194/egusphere-egu23-7391, 2023.

X5.87
|
EGU23-11958
|
ECS
Maiken Baumberger, Linda Adorf, Bettina Haas, Nele Meyer, and Hanna Meyer

Soil temperature and soil moisture variations have large effects on ecological processes in the soil. To investigate and understand these processes, high-resolution data on soil temperature and soil moisture are required. Here, we present an approach to generate soil temperature and soil moisture data continuously in space, depth, and time for a 400 km² study area in the Fichtel Mountains (Germany). As reference data, measurements with 1 m long soil probes were taken. To cover many different locations, the 15 available soil probes were relocated regularly over the course of one year. With this approach, around 250 different locations in forests, meadows and agricultural fields were captured under a variety of meteorological conditions. These measurements are combined with readily available meteorological data, satellite data and soil maps in a machine learning approach to learn the complex relations between these variables. We aim for a model that can predict soil temperature and soil moisture continuously for our study area in the Fichtel Mountains, at a spatial resolution of 10 m x 10 m, down to 1 m depth in segments of 10 cm, and at an hourly resolution in time. Here, we present the results of our pilot study, in which we focus on the temperature and moisture change down to 1 m depth at a single location. To take temporal lags into account, we construct a Long Short-Term Memory network with meteorological data as predictors to make temperature and moisture predictions in time and depth. The results indicate a high ability of the model to reproduce the time series at the single location and highlight the potential of the approach for space-time-depth mapping of soil temperature and soil moisture.

How to cite: Baumberger, M., Adorf, L., Haas, B., Meyer, N., and Meyer, H.: Modelling Soil Temperature and Soil Moisture in Space, Depth, and Time with Machine Learning Techniques, EGU General Assembly 2023, Vienna, Austria, 23–28 Apr 2023, EGU23-11958, https://doi.org/10.5194/egusphere-egu23-11958, 2023.

X5.88
|
EGU23-4454
|
ECS
Oriol Pomarol Moya and Derek Karssenberg

Time transfer functions describe the change of state variables over time in geoscientific numerical simulation models. The identification of these functions is an essential but challenging step in model building. While traditional methods rely on qualitative understanding or first-order principles, the availability of large spatio-temporal data sets from direct measurements or extremely detailed physics-based system modelling has enabled the use of machine learning methods to discover the time transfer function directly from data. In this study we explore the feasibility of this data-driven approach for the numerical simulation of the co-evolution of soil, hydrology, vegetation, and grazing at landscape scale and geological timescales. From empirical observation and hyper-resolution (1 m, 1 week) modelling (Karssenberg et al., 2017) it has been shown that a hillslope system exhibits complex behaviour with two stable states: high biomass on deep soils (a healthy state) and low biomass on thin soils (a degraded or desert state). A catastrophic shift from the healthy to the degraded state occurs under changes in external forcing (climate, grazing pressure), with a transient between states that is rapid or slow depending on system characteristics. To identify and use the time transfer functions of this system at hillslope scale we follow four procedural steps. First, an extremely large data set of hillslope-average soil and vegetation state is generated with a mechanistic hyper-resolution (1 m, 1 week) system model, forcing it with different variations in grazing pressure over time. Second, a machine learning model predicting the rate of change in soil and vegetation as a function of soil, vegetation, and grazing pressure is trained on this data set. Third, we explore the ability of the trained machine learning model to predict the rate of system change (soil and vegetation) on untrained data. Finally, in the fourth step, we use the trained machine learning model as the time transfer function in a forward numerical simulation of a hillslope to determine whether it can represent the known complex behaviour of the system. Our findings are that the approach is in principle feasible. We compared the use of a deep neural network and a random forest. Both can achieve great fitting precision, although the latter runs much faster and requires less training data. Even though the machine-learning-based time transfer function shows differences in the rates of change of system state from those calculated using expert knowledge in Karssenberg et al. (2017), forward simulation appeared to be possible, with system behaviour generally in line with that observed in the data from the hyper-resolution model. Our findings indicate that the discovery of time transfer functions from data is possible. Next steps will involve the use of observational data (e.g., from remote sensing) to test the approach on real-world systems.
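A minimal sketch of the fourth step, in which a learned time transfer function is stepped forward with explicit Euler, is shown below; the state variables, forcing scenario and random-forest settings are illustrative, and the random training data here merely stand in for the hyper-resolution model output.

# Sketch: regress rates of change on (state, forcing), then integrate forward.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

n = 5000
state = np.random.rand(n, 2)          # [soil depth, biomass], scaled; placeholder data
grazing = np.random.rand(n, 1)        # grazing-pressure forcing
rates = 0.01 * np.random.randn(n, 2)  # d(state)/dt, as produced by the detailed model

transfer_fn = RandomForestRegressor(n_estimators=200).fit(np.hstack([state, grazing]), rates)

# Forward simulation with the learned transfer function (explicit Euler)
dt, n_steps = 1.0, 500
x = np.array([0.8, 0.9])              # healthy initial state
trajectory = [x.copy()]
for step in range(n_steps):
    g = np.array([step / n_steps])    # slowly increasing grazing pressure
    dxdt = transfer_fn.predict(np.hstack([x, g])[None, :])[0]
    x = x + dt * dxdt
    trajectory.append(x.copy())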


Karssenberg, D., Bierkens, M. F. P., and Rietkerk, M.: Catastrophic Shifts in Semiarid Vegetation-Soil Systems May Unfold Rapidly or Slowly, The American Naturalist, 190, E145–E155, 2017.

How to cite: Pomarol Moya, O. and Karssenberg, D.: Machine learning for data driven discovery of time transfer functions in numerical modelling: simulating catastrophic shifts in vegetation-soil systems, EGU General Assembly 2023, Vienna, Austria, 23–28 Apr 2023, EGU23-4454, https://doi.org/10.5194/egusphere-egu23-4454, 2023.

X5.89
|
EGU23-13143
|
ECS
Petrina Papazek, Irene Schicker, and Pascal Gfähler

With fast parallel computing hardware, particularly GPUs, becoming more accessible in the geosciences, efficient deep learning techniques can now handle large amounts of recorded observations and satellite-derived data and are able to learn complex structures across time series. A suitable deep learning setup can thus generate highly resolved weather forecasts in real time and on demand. Forecasts of irradiance and radiation are challenging for machine learning, as these quantities show a high degree of diurnal and seasonal variation.

Continuously expanding PV/solar power production is becoming one of our most important fossil-fuel-free energy sources. Unlike the only recently emerging PV power observations, solar irradiance offers long time series from automated weather station networks. Being directly linked to PV output, highly resolved solar irradiance forecasts from nowcasting to the short range play a crucial role in decision support and in managing PV.

In this study, we investigate the suitability of several deep learning techniques, adapted and developed for a set of heterogeneous data sources, at selected locations. We compare the forecast results to traditional, but computationally expensive, numerical weather prediction (NWP) models and rapid-update-cycle models. Relevant input features include 3D fields from NWP models (e.g. AROME), satellite data and products (e.g. CAMS), radiation time series from remote sensing, and observation time series (site observations and nearby sites). The amount of time series data can be extended by a synthetic data generator, which is part of our deep learning framework. The main models investigated include a sequence-to-sequence LSTM (long short-term memory) model using a climatological background model or NWP for post-processing, a graph neural network model, and an analogs-based deep learning method. Furthermore, a novel neural network model based on two other ideas, IrradianceNet and PhyDNet, was developed. IrradPhyDNet combines the skills of IrradianceNet and PhyDNet and showed improved performance compared to the original models.

The developed methods yield, in general, high forecast skill. For selected case studies of extreme events (e.g. Saharan dust), all novel methods could outperform the traditional methods. Different combinations of inputs and processing steps are part of the analysis.

How to cite: Papazek, P., Schicker, I., and Gfähler, P.: Comparison of LSTM, GraphNN, and IrradPhyDNet based Approaches for High-resolution Solar Irradiance Nowcasting, EGU General Assembly 2023, Vienna, Austria, 23–28 Apr 2023, EGU23-13143, https://doi.org/10.5194/egusphere-egu23-13143, 2023.

X5.90
|
EGU23-14051
|
ECS
Simon Benaïchouche, Clément Le Goff, Brahim Boussidi, François Rousseau, and Ronan Fablet

Over the last decades, space oceanography missions, particularly altimeter missions, have greatly advanced our ability to observe sea surface dynamics. However, they still struggle to resolve spatial scales below ~100 km. On a global scale, sea surface currents are derived from sea surface height under a geostrophic assumption. While future altimeter missions should improve the observation of sea surface height, the observation of sea surface currents using altimetry techniques would remain indirect. On the other hand, recent works have considered the use of AIS (automatic identification system) data as a new means to reconstruct sea surface currents: AIS data streams provide an indirect observational model of total currents, including ageostrophic phenomena. In this work we use the supervised learning framework 4DVARNet, a data-driven approach that allows us to perform multi-modal experiments. We focus on an Observing System Simulation Experiment (OSSE) in a region of the Gulf Stream and show that the joint use of AIS and sea surface height (SSH) measurements can improve the reconstruction of sea surface currents, in terms of the physical and temporal scales resolved, compared with products derived solely from AIS or SSH observations.

How to cite: Benaïchouche, S., Le Goff, C., Boussidi, B., Rousseau, F., and Fablet, R.: Multi-modal data assimilation of sea surface currents from AIS data streams and satellite altimetry using 4DVARNet, EGU General Assembly 2023, Vienna, Austria, 23–28 Apr 2023, EGU23-14051, https://doi.org/10.5194/egusphere-egu23-14051, 2023.

X5.91
|
EGU23-15684
|
ECS
Naveenkumar Parameswaran, Everardo Gonzalez, Ewa Bur­wicz-Ga­ler­ne, David Greenberg, Klaus Wallmann, and Malte Braack

Mass accumulation rates of sediments [g/cm2/yr] and sedimentation rates [cm/yr] on the seafloor are important for understanding various benthic properties, such as the rate of carbon sequestration in the seafloor and seafloor geomechanical stability. Several machine learning models, such as random forests and k-Nearest Neighbours, have been proposed for the prediction of geospatial data in marine geosciences, but they face significant challenges such as the limited number of labels for training, skewed data distributions, and a large number of features. Previous model predictions show deviations in the global sediment budget, a parameter used to assess a model's predictive validity, revealing that state-of-the-art models do not accurately represent sedimentation rates.

Here we present a semi-supervised deep learning methodology to improve the prediction of sedimentation rates, making use of around 9×10⁶ unlabelled data points. The semi-supervised neural network implementation has two parts: first, unsupervised pretraining using an encoder-decoder network; then, the encoder with the weights optimised during the unsupervised training is extended with layers that map to the target dimension. This network is fine-tuned with 2782 labelled data points, which are observed sedimentation rates from peer-reviewed sources. The fine-tuned model then predicts the rate and quantity of sediment accumulating on the ocean floor globally.

The developed semi-supervised neural network provides better predictions than supervised models trained only on labelled data. The predictions from the semi-supervised neural network are compared with those of supervised neural networks with and without dimensionality reduction (using Principal Component Analysis).
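The two-stage training described above can be sketched roughly as follows; feature counts, layer sizes and epoch numbers are illustrative assumptions rather than the configuration used in the study.

# Stage 1: autoencoder pretraining on unlabelled seafloor feature vectors.
# Stage 2: fine-tune the pretrained encoder with a regression head on labels.
import torch
import torch.nn as nn

N_FEATURES, N_LATENT = 30, 8
encoder = nn.Sequential(nn.Linear(N_FEATURES, 64), nn.ReLU(), nn.Linear(64, N_LATENT))
decoder = nn.Sequential(nn.Linear(N_LATENT, 64), nn.ReLU(), nn.Linear(64, N_FEATURES))

x_unlabelled = torch.randn(20000, N_FEATURES)   # placeholder unlabelled predictors
opt = torch.optim.Adam(list(encoder.parameters()) + list(decoder.parameters()), lr=1e-3)
for _ in range(50):
    opt.zero_grad()
    nn.functional.mse_loss(decoder(encoder(x_unlabelled)), x_unlabelled).backward()
    opt.step()

head = nn.Sequential(nn.Linear(N_LATENT, 32), nn.ReLU(), nn.Linear(32, 1))
x_labelled, y = torch.randn(2782, N_FEATURES), torch.randn(2782, 1)  # observed rates
opt = torch.optim.Adam(list(encoder.parameters()) + list(head.parameters()), lr=1e-4)
for _ in range(200):
    opt.zero_grad()
    nn.functional.mse_loss(head(encoder(x_labelled)), y).backward()
    opt.step()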

How to cite: Parameswaran, N., Gonzalez, E., Bur­wicz-Ga­ler­ne, E., Greenberg, D., Wallmann, K., and Braack, M.: Semi-supervised feature-based learning for prediction of Mass Accumulation Rate of sediments, EGU General Assembly 2023, Vienna, Austria, 23–28 Apr 2023, EGU23-15684, https://doi.org/10.5194/egusphere-egu23-15684, 2023.

X5.92
|
EGU23-16597
|
Highlight
Nils Lehmann, Jonathan Bamber, and Xiaoxiang Zhu

One of the many ways in which anthropogenic climate change impacts our planet is rising sea levels. The rate of sea level rise (SLR) across the oceans is, however, not uniform in space or time and is influenced by a complex interplay of ocean dynamics, heat uptake, and surface forcing. As a consequence, short-term (years to a decade) regional SLR patterns are difficult to model using conventional deterministic approaches. For example, the latest climate model projections (CMIP6) show some agreement in the globally integrated rate of SLR but poor agreement when it comes to spatially resolved patterns. However, such forecasts are valuable for adaptation planning in coastal areas and for protecting low-lying assets. Rather than a deterministic modelling approach, here we explore the possibility of exploiting the high-quality satellite-altimeter-derived record of sea surface height variations, which covers the global oceans outside of ice-infested waters over a period of 30 years. Alongside this rich and unique satellite record, several data-driven models have shown tremendous potential for various applications in Earth System science. We explore several data-driven deep learning approaches for sea surface height forecasts over multi-annual to decadal time frames. A limitation of some machine learning approaches is the lack of any kind of uncertainty quantification, which is problematic for applications where actionable evidence is sought. As a consequence, we equip our models with a rigorous measure of uncertainty, namely conformal prediction, a model- and dataset-agnostic method that provides calibrated predictive uncertainty with proven coverage guarantees. Based on the 30-year satellite altimetry record and auxiliary climate forcing data from reanalyses such as ERA5, we demonstrate that our methodology is a viable and attractive alternative for decadal sea surface height forecasts.
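A minimal sketch of split conformal prediction around a point forecaster is shown below; the regressor, predictors and miscoverage level are placeholders, not the forecasting models used in this work.

# Split conformal prediction: residuals on a calibration split define an interval
# half-width with a finite-sample coverage guarantee.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(3000, 10))               # predictors (e.g. past SSH and forcing)
y = X[:, 0] + 0.1 * rng.normal(size=3000)     # target (e.g. future SSH anomaly)

X_train, y_train = X[:2000], y[:2000]
X_cal, y_cal = X[2000:2500], y[2000:2500]     # calibration split
X_test = X[2500:]

model = RandomForestRegressor(n_estimators=200).fit(X_train, y_train)

alpha = 0.1                                   # target 90% coverage
scores = np.abs(y_cal - model.predict(X_cal))
q = np.quantile(scores, np.ceil((len(scores) + 1) * (1 - alpha)) / len(scores))

pred = model.predict(X_test)
lower, upper = pred - q, pred + q             # calibrated prediction intervals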

How to cite: Lehmann, N., Bamber, J., and Zhu, X.: Global Decadal Sea Surface Height Forecast with Conformal Prediction, EGU General Assembly 2023, Vienna, Austria, 23–28 Apr 2023, EGU23-16597, https://doi.org/10.5194/egusphere-egu23-16597, 2023.

X5.93
|
EGU23-6836
|
ECS
|
Clara Burgard, Nicolas C. Jourdain, Pierre Mathiot, and Robin Smith

One of the largest sources of uncertainty when projecting the Antarctic contribution to sea-level rise is the ocean-induced melt at the base of Antarctic ice shelves. This is because resolving the ocean circulation and the ice-ocean interactions occurring in the cavity below the ice shelves is computationally expensive.

Instead, for large ensembles and long-term projections of the ice-sheet evolution, ice-sheet models currently rely on parameterisations to link the ocean temperature and salinity in front of ice shelves to the melt at their base. However, current physics-based parameterisations struggle to accurately simulate basal melt patterns.

As an alternative approach, we explore the potential use of a deep feedforward neural network as a basal melt parameterisation. To do so, we train a neural network to emulate basal melt rates simulated by highly-resolved circum-Antarctic ocean simulations. We explore the influence of different input variables and show that the neural network struggles to generalise to ice-shelf geometries unseen during training, while it generalises better on timesteps unseen during training. We also test the parameterisation on separate coupled ocean-ice simulations to assess the neural network’s performance on independent data.  

How to cite: Burgard, C., Jourdain, N. C., Mathiot, P., and Smith, R.: Parameterising melt at the base of Antarctic ice shelves with a feedforward neural network, EGU General Assembly 2023, Vienna, Austria, 23–28 Apr 2023, EGU23-6836, https://doi.org/10.5194/egusphere-egu23-6836, 2023.

X5.94
|
EGU23-7492
Flavio Cannavo', Vittorio Minio, Susanna Saitta, Salvatore Alparone, Alfio Marco Borzì, Andrea Cannata, Giuseppe Ciraolo, Danilo Contrafatto, Sebastiano D’Amico, Giuseppe Di Grazia, and Graziano Larocca

Monitoring the state of the sea is a fundamental task for economic activities in the coastal zone, such as transport, tourism and infrastructure design. In recent years, regular wave height monitoring for marine risk assessment and mitigation has become unavoidable, as global warming results in more intense and frequent swells.
In particular, the Mediterranean Sea has been considered one of the regions most responsive to global warming, which may promote the intensification of hazardous natural phenomena such as strong winds, heavy precipitation and high sea waves. Because of the high population density along the Mediterranean coastlines, heavy swells could have major socio-economic consequences. To reduce the impacts of such scenarios, the development of more advanced monitoring systems for the sea state becomes necessary.
In the last decade, it has been demonstrated that seismometers can be used to measure sea conditions by exploiting the characteristics of a part of the seismic signal called microseism. Microseism is the continuous seismic signal recorded in the frequency band between 0.05 and 0.4 Hz that is likely generated by the interaction of sea waves with each other and with the seafloor or shorelines.
In this work, in the framework of the i-WaveNET INTERREG project, we performed a regression analysis to develop a model capable of predicting the sea state in the Sicily Channel (Italy) using microseism acquired by onshore instruments installed in Sicily and Malta. Considering the complexity of the relationship between spatial sea wave height data and seismic data measured at individual stations, we used supervised machine learning (ML) techniques to develop the prediction model. As input data we used the hourly root mean squared (RMS) amplitude of the seismic signal recorded by 14 broadband stations, along the three components and in different frequency bands, during 2018–2021. These stations, belonging to the permanent seismic networks managed by the National Institute of Geophysics and Volcanology (INGV) and the Department of Geosciences of the University of Malta, consist of three-component broadband seismometers recording at a sampling frequency of 100 Hz.
As the target, the significant sea wave height data from the Copernicus Marine Environment Monitoring Service (CMEMS) for the same period were used. These data are the hindcast product of the Mediterranean Sea Waves forecasting system, with hourly temporal resolution and 1/24° spatial resolution. After a feature selection step, we compared three different kinds of ML algorithms for regression: K-Nearest Neighbours (KNN), Random Forest (RF) and Light Gradient Boosting (LGB). The hyperparameters were tuned using a grid-search algorithm, and the best models were selected by cross-validation. Different metrics, such as MAE, R2 and RMSE, were considered to evaluate the generalisation capabilities of the models, and special attention was paid to the predictive ability of the models for extreme wave height values.
Results show that the models' predictive capabilities are good enough to develop a sea monitoring system that complements the systems currently in use.
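The grid-search and cross-validation workflow can be sketched as follows; the feature layout and parameter grids are illustrative, and scikit-learn's HistGradientBoostingRegressor is used only as a stand-in for LightGBM.

# Compare KNN, RF and a gradient-boosting regressor with grid-searched
# hyperparameters, mapping station RMS amplitudes to wave height at one point.
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.neighbors import KNeighborsRegressor
from sklearn.ensemble import RandomForestRegressor, HistGradientBoostingRegressor

n_hours, n_features = 5000, 14 * 3 * 4     # stations x components x frequency bands
X = np.random.rand(n_hours, n_features)    # placeholder hourly RMS amplitudes
y = np.random.rand(n_hours)                # placeholder significant wave height

candidates = {
    "knn": (KNeighborsRegressor(), {"n_neighbors": [5, 10, 20]}),
    "rf": (RandomForestRegressor(), {"n_estimators": [100], "max_depth": [10, None]}),
    "gb": (HistGradientBoostingRegressor(), {"learning_rate": [0.05, 0.1]}),
}

results = {}
for name, (estimator, grid) in candidates.items():
    search = GridSearchCV(estimator, grid, cv=5, scoring="neg_root_mean_squared_error")
    search.fit(X, y)
    results[name] = {"best_params": search.best_params_, "cv_rmse": -search.best_score_}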

How to cite: Cannavo', F., Minio, V., Saitta, S., Alparone, S., Borzì, A. M., Cannata, A., Ciraolo, G., Contrafatto, D., D’Amico, S., Di Grazia, G., and Larocca, G.: Machine Learning and Microseism as a Tool for Sea Wave Monitoring, EGU General Assembly 2023, Vienna, Austria, 23–28 Apr 2023, EGU23-7492, https://doi.org/10.5194/egusphere-egu23-7492, 2023.

X5.95
|
EGU23-13322
|
ECS
Douglas Stumpp, Elliot Amir Jiwani-Brown, Célia Barat, Matteo Lupi, Francisco Muñoz, Thomas Planes, and Geneviève Savard

The ambient noise tomography (ANT) method is widely adopted to reconstruct shear-wave velocity anomalies and generate high-resolution images of the crust and upper mantle. A critical step in this process is the extraction of surface-wave dispersion curves from cross-correlation functions of continuous ambient noise recordings, which is traditionally performed manually on the dispersion spectrograms through human-machine interfaces. Picking of dispersion curves is sometimes prone to bias due to human interpretation. Furthermore, it is a laborious and time-consuming task that needs to be automated, especially when dealing with dense seismic networks of nodal geophones, where the large amount of generated data severely hinders manual picking approaches. In the last decade, several studies have successfully employed machine learning methods in Earth Sciences and across many seismological applications. Early studies have shown versatile and reliable solutions by treating dispersion curve extraction as a visual recognition problem.

We review and adapt a specific machine learning approach, deep convolutional neural networks, for use on dispersion spectrograms generated with the usual frequency-time analysis (FTAN) processing of ambient noise cross-correlations. To train and calibrate the algorithm we use several available datasets acquired in previous experiments across different geological settings. The main dataset consists of records acquired with a dense local geophone network (150 short-period stations sampling at 250 Hz) deployed for one month in October 2021. The dataset was acquired during the volcanic unrest of the Vulcano-Lipari complex, Italy. The network also includes an additional 17 permanent broadband stations (sampling at 100 Hz) maintained by the National Institute of Geophysics and Volcanology (INGV) in Italy. We evaluate the performance of the dispersion curve extraction algorithm. The automatically picked dispersion curves will be used to construct a shear-wave velocity model of the Vulcano-Lipari magmatic plumbing system and the surrounding area of the Aeolian archipelago.


How to cite: Stumpp, D., Amir Jiwani-Brown, E., Barat, C., Lupi, M., Muñoz, F., Planes, T., and Savard, G.: Nodal Ambient Noise Tomography and automatic picking of dispersion curves with convolutional neural network: case study at Vulcano-Lipari, Italy, EGU General Assembly 2023, Vienna, Austria, 23–28 Apr 2023, EGU23-13322, https://doi.org/10.5194/egusphere-egu23-13322, 2023.

X5.96
|
EGU23-12566
Kyung-Hui Wang, Chae-Yeon Lee, Ju-Yong Lee, Min-Woo Jung, Dong-Geon Kim, Seung-Hee Han, Dae-Ryun Choi, and Hui-young Yun

Since PM2.5 (particulate matter with an aerodynamic diameter of less than 2.5 µm) directly threatens public health, and in order to allow appropriate preventive measures to be taken in advance, the Korea Ministry of Environment (MOE) has issued nationwide PM10 forecasts since February 2014 and nationwide PM2.5 forecasts since January 2015. The current PM forecast by the MOE subdivides the country into 19 regions and forecasts the level of PM in four categories: “Good”, “Moderate”, “Unhealthy”, and “Very unhealthy”.

The PM air quality forecasting system currently operated by the MOE is based on a numerical forecast model along with weather and emission models. Numerical forecasting models have fundamental limitations, such as the uncertainty of input data (e.g. emissions and meteorological data) and of the numerical model itself. Recently, many studies on predicting PM using artificial intelligence, such as DNN, RNN, LSTM, and CNN, have been conducted to overcome the limitations of numerical models.

In this study, in order to improve the prediction performance of the numerical model, past observational data (air quality and meteorological data) and numerical forecasting model data (chemical transport model) are used as input data. The machine learning model consists of a DNN and a Seq2Seq model, and predicts three days (D+0, D+1, D+2) using 6-hour and 1-hour average input data, respectively. The PM2.5 concentrations predicted by the machine learning model and the numerical model were compared with PM2.5 measurements.

The machine learning models were trained on input data from 2015 to 2020, and their PM forecasting performance was tested for 2021. Compared to the numerical model, the machine learning model tended to increase ACC while showing similar or lower FAR and POD.

The time series show that the machine learning forecasts follow the PM measurements more closely than the numerical model. In particular, the machine learning model can appropriately predict low and high PM concentrations that the numerical model tends to overestimate.

The machine learning forecasting model with DNN and Seq2Seq thus improves PM forecasting performance compared with the numerical forecasting model. However, the machine learning model has the limitation that it cannot account for external inflow effects.

To overcome this drawback, the models should be updated and extended with other machine learning modules, such as a CNN capturing the spatial features of PM concentrations.


Acknowledgements

This study was supported in part by the ‘Experts Training Graduate Program for Particulate Matter Management’ from the Ministry of Environment, Korea and by a grant from the National Institute of Environmental Research (NIER), funded by the Ministry of Environment (ME) of the Republic of Korea (NIER-2022-04-02-068).


How to cite: Wang, K.-H., Lee, C.-Y., Lee, J.-Y., Jung, M.-W., Kim, D.-G., Han, S.-H., Choi, D.-R., and Yun, H.: Comparison of PM2.5 concentrations prediction model performance using Artificial Intelligence, EGU General Assembly 2023, Vienna, Austria, 23–28 Apr 2023, EGU23-12566, https://doi.org/10.5194/egusphere-egu23-12566, 2023.

X5.97
|
EGU23-2174
Development of PM2.5 forecasting system in Seoul, South Korea using chemical transport modeling and ConvLSTM-DNN
(withdrawn)
Youn-Seo Koo, Ji-Seok Koo, Hui-Young Yun, and Kyung-Hui Wang
X5.98
|
EGU23-10394
|
ECS
Jin Feng

Current numerical weather prediction models contain significant systematic errors, due in part to indeterminate ground forcing (GF). This study considers an optimal virtual GF (GFo) derived by training on observed and simulated datasets of 10-m wind speed (WS10) for summer and winter. The GFo is added to an offline surface multilayer model (SMM) to revise predictions of WS10 over China by the Weather Research and Forecasting model (WRF). This revision is a data-based optimization under physical constraints; it reduces WS10 errors and offers wide applicability. The resulting model outperforms two purely physical forecasts (the original WRF forecast and the SMM with physical GF parameterized using urban, vegetation, and subgrid topography effects) and two purely data-based revisions (multilinear regression and a multilayer perceptron). Compared with the original WRF forecast, the GFo scheme reduces the root mean square error (RMSE) of WS10 across China by 25% in summer and 32% in winter. The frontal area index of GFo indicates that it includes both the effects of indeterminate GF and other possible complex physical processes associated with WS10.

How to cite: Feng, J.: Mitigate forecast error in surface wind speed using an offline single-column model with optimal ground forcing, EGU General Assembly 2023, Vienna, Austria, 23–28 Apr 2023, EGU23-10394, https://doi.org/10.5194/egusphere-egu23-10394, 2023.

X5.99
|
EGU23-12218
Hui-nae Kwon, Hyeon-ju Jeon, Jeon-ho Kang, In-hyuk Kwon, and Seon Ki Park

Aircraft-based observations are among the important anchor data used in numerical weather prediction (NWP) models. Nevertheless, a bias in the temperature observations has been noted in several previous studies. As the performance of the hybrid four-dimensional ensemble variational (hybrid-4DEnVar) data assimilation (DA) system of the Korean Integrated Model (KIM), the operational model of the Korea Meteorological Administration (KMA), has advanced, the need for an aircraft temperature bias correction (BC) has been confirmed. Accordingly, as a preliminary study on the BC, a static BC method based on linear regression was applied to the KIM Package for Observation Processing (KPOP) system. However, the results showed limitations: a spatial discontinuity and a dependency on the period over which the BC coefficients are calculated.

In this study, we develop a machine-learning-based bias estimation model to overcome these limitations. MultiLayer Perceptron (MLP) based learning was performed to consider the vertical, spatial and temporal characteristics of each observation by flight ID and phase, and at the same time to account for the correlations among observation variables. After removing the bias predicted by the bias estimation model, the mean of the background innovation (O-B) decreases from 0.2217 K to 0.0136 K in the test period. Next, in order to verify the impact of the BC on the analysis field, the bias estimation model will be grafted onto the KPOP system and several DA cycle experiments will be conducted with KIM.

How to cite: Kwon, H., Jeon, H., Kang, J., Kwon, I., and Park, S. K.: Bias correction of aircraft temperature observations in the Korean Integrated Model based on a deep learning approach, EGU General Assembly 2023, Vienna, Austria, 23–28 Apr 2023, EGU23-12218, https://doi.org/10.5194/egusphere-egu23-12218, 2023.

X5.100
|
EGU23-4817
|
ECS
Lauri Tuppi, Madeleine Ekblom, Pirkka Ollinaho, and Heikki Järvinen

Numerical weather prediction models contain parameters that are inherently uncertain and cannot be determined exactly. Traditionally, parameter tuning has been done manually, which can be an extremely laborious task. Tuning the entire model usually requires adjusting a relatively large number of parameters. In manual tuning, the need to balance a number of requirements at the same time can turn the tuning process into a maze of subjective choices. It is therefore desirable to have reliable objective approaches for estimating the optimal values and uncertainties of these parameters. In this presentation we show how to optimise 20 key physical parameters having a strong impact on forecast quality. These parameters belong to the Stochastically Perturbed Parameters scheme in the atmospheric model Open Integrated Forecasting System (OpenIFS).

The results show that simultaneous optimisation of O(20) parameters is possible with O(100) algorithm steps using an ensemble of O(20) members, and that the optimised parameters lead to a substantial enhancement of predictive skill. The enhanced skill can be attributed to reduced biases in low-level winds and upper-tropospheric humidity in the optimised model. We find that the optimisation process depends on the starting values of the parameters being optimised (starting from better-suited values results in a better model). The results also show that the applicability of the tuned parameter values across different model resolutions is somewhat questionable, since the model biases appear to be resolution-specific. Moreover, our optimisation algorithm tends to treat the parameter covariances poorly, limiting its ability to converge to the global optimum.

How to cite: Tuppi, L., Ekblom, M., Ollinaho, P., and Järvinen, H.: Algorithmic optimisation of key parameters of OpenIFS, EGU General Assembly 2023, Vienna, Austria, 23–28 Apr 2023, EGU23-4817, https://doi.org/10.5194/egusphere-egu23-4817, 2023.

X5.101
|
EGU23-3448
|
ECS
Detection and attribution of climate change using a neural network.
(withdrawn)
Constantin Bône
X5.102
|
EGU23-5149
|
ECS
Shivani Sharma and David Greenberg

Machine learning approaches have been widely used to improve the representation of subgrid-scale parameterizations in Earth System Models. In our study we target the cloud microphysics parameterization, in particular the two-moment bulk scheme of the ICON (Icosahedral Nonhydrostatic) model.


Cloud microphysics parameterization schemes suffer from an accuracy/speed tradeoff. The simplest schemes, often heavy with assumptions (such as the bulk moment schemes), are most common in operational weather prediction models. Conversely, the more complex schemes with fewer assumptions, e.g. Lagrangian schemes such as the super-droplet method (SDM), are computationally expensive and used only within research and development. SDM allows easy representation of complex scenarios with multiple hydrometeors and can also be used for simulating cloud-aerosol interactions. To bridge this gap and make the use of more complex microphysical schemes feasible within operational models, we use a data-driven approach.


Here we train a neural network to mimic the behavior of SDM simulations in a warm-rain scenario in a dimensionless control volume. The network behaves like a dynamical system that converts cloud droplets to rain droplets, represented as bulk moments, with only the current system state as input. We use a multi-step training loss to stabilize the network over long integration periods, especially in cases starting with extremely low cloud water. We find that the network is stable across various initial conditions and, in many cases, emulates the SDM simulations better than traditional bulk moment schemes. Our network also performs better than previous ML-based attempts to learn from SDM. This opens the possibility of using the trained network as a proxy for the computationally expensive SDM within operational weather prediction models with minimal computational overhead.
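A minimal sketch of a multi-step (rollout) training loss of the kind described above is given below; the moment dimension, rollout horizon and toy emulator are illustrative assumptions rather than the authors' configuration.

# The emulator is unrolled autoregressively and penalised against the reference
# (e.g. SDM-derived) trajectory at every step, which stabilises long integrations.
import torch
import torch.nn as nn

def multi_step_loss(emulator, x0, reference_traj, n_steps=8):
    # x0: (B, D) initial bulk moments; reference_traj: (B, n_steps, D) targets
    x, loss = x0, 0.0
    for t in range(n_steps):
        x = emulator(x)                                   # autoregressive rollout
        loss = loss + torch.mean((x - reference_traj[:, t]) ** 2)
    return loss / n_steps

emulator = nn.Sequential(nn.Linear(4, 64), nn.ReLU(), nn.Linear(64, 4))
loss = multi_step_loss(emulator, torch.rand(32, 4), torch.rand(32, 8, 4))
loss.backward()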

How to cite: Sharma, S. and Greenberg, D.: Machine Learning Parameterization for Super-droplet Cloud Microphysics Scheme, EGU General Assembly 2023, Vienna, Austria, 23–28 Apr 2023, EGU23-5149, https://doi.org/10.5194/egusphere-egu23-5149, 2023.

X5.103
|
EGU23-7281
|
ECS
Michael Himes, Natalya Kramarova, Tong Zhu, Jungbin Mok, Matthew Bandel, Zachary Fasnacht, and Robert Loughman

Retrieving ozone from limb measurements necessitates modeling the scattering of light through the atmosphere. However, accurately modeling multiple scattering (MS) during retrieval requires excessive computational resources; consequently, operational retrieval models employ approximations in lieu of the full MS calculation. Here we consider an alternative MS approximation method, in which we use radiative transfer (RT) simulations to train neural network models to predict the MS radiances. We present our findings regarding the best-performing network hyperparameters, normalization schemes, and input/output data structures. Using RT calculations based on measurements by the Ozone Mapping and Profiling Suite's Limb Profiler (OMPS/LP), we compare the accuracy of these neural-network models with both the full MS calculation and the current MS approximation methods used in OMPS/LP retrievals.
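A hedged sketch of the surrogate idea, with random arrays standing in for the RT-generated training set and illustrative network sizes (not the configuration used in the study):

    import numpy as np
    from sklearn.neural_network import MLPRegressor
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(0)
    X = rng.normal(size=(5000, 12))    # stand-in for geometry and single-scatter inputs from RT runs
    y = rng.normal(size=(5000, 30))    # stand-in for the corresponding full-MS radiance profiles

    x_scaler, y_scaler = StandardScaler().fit(X), StandardScaler().fit(y)
    surrogate = MLPRegressor(hidden_layer_sizes=(256, 256), activation="relu", max_iter=300)
    surrogate.fit(x_scaler.transform(X), y_scaler.transform(y))

    # MS radiances predicted for new (here: the first ten) input states
    predicted = y_scaler.inverse_transform(surrogate.predict(x_scaler.transform(X[:10])))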

How to cite: Himes, M., Kramarova, N., Zhu, T., Mok, J., Bandel, M., Fasnacht, Z., and Loughman, R.: Neural network surrogate models for multiple scattering: Application to OMPS LP simulations, EGU General Assembly 2023, Vienna, Austria, 23–28 Apr 2023, EGU23-7281, https://doi.org/10.5194/egusphere-egu23-7281, 2023.

X5.104
|
EGU23-4695
Jiyeon Jang, Tae-Jin Oh, Sojung An, Wooyeon Park, Inchae Na, and Junghan Kim

Physical parameterization is one of the major components of a Numerical Weather Prediction system. In the Korean Integrated Model (KIM), physical parameterizations account for about 30% of the total computation time. Many studies have developed neural-network-based emulators to replace and accelerate physics-based parameterizations. In this study, we develop a planetary boundary layer (PBL) emulator based on the Shin-Hong scheme (Hong et al., 2006, 2010; Shin and Hong, 2013, 2015), which computes the parameterized effects of vertical turbulent eddy diffusion of momentum, water vapor, and sensible heat fluxes. We compare the emulator performance across Multi-Layer Perceptron (MLP) based architectures: a simple MLP, an MLP application version, and an MLP-Mixer (Tolstikhin et al., 2021). The MLP application version divides the data into several vertical groups to better approximate each group of layers. The MLP-Mixer is an MLP-based architecture that performs well in computer vision without using convolution or self-attention. We evaluate the resulting emulators and find that the MLP application version and the MLP-Mixer show significant performance improvement over the simple MLP.
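As an illustration of the MLP-Mixer idea applied to a vertical column of levels and variables (layer sizes are illustrative, and this is not the KIM emulator itself), one mixing block could look like:

    import torch
    import torch.nn as nn

    class MixerBlock(nn.Module):
        def __init__(self, n_levels, n_channels, hidden=64):
            super().__init__()
            self.norm1 = nn.LayerNorm(n_channels)
            self.level_mlp = nn.Sequential(nn.Linear(n_levels, hidden), nn.GELU(),
                                           nn.Linear(hidden, n_levels))      # mixes across vertical levels
            self.norm2 = nn.LayerNorm(n_channels)
            self.channel_mlp = nn.Sequential(nn.Linear(n_channels, hidden), nn.GELU(),
                                             nn.Linear(hidden, n_channels))  # mixes across variables

        def forward(self, x):                     # x: (batch, n_levels, n_channels)
            x = x + self.level_mlp(self.norm1(x).transpose(1, 2)).transpose(1, 2)
            return x + self.channel_mlp(self.norm2(x))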

How to cite: Jang, J., Oh, T.-J., An, S., Park, W., Na, I., and Kim, J.: Development of PBL Parameterization Emulator using Neural Networks, EGU General Assembly 2023, Vienna, Austria, 23–28 Apr 2023, EGU23-4695, https://doi.org/10.5194/egusphere-egu23-4695, 2023.

X5.105
|
EGU23-10904
|
ECS
|
Feier Yan, Julian Mak, and Yan Wang

Recent works have demonstrated the viability of employing data-driven / machine learning methods to learn more about ocean turbulence, with applications to turbulence parameterisations in ocean general circulation models. Focusing on mesoscale geostrophic turbulence in the ocean context, works thus far have mostly concentrated on the choice of algorithms and the testing of trained models. Here we focus instead on the choice of eddy flux data to learn from. We argue that, for mesoscale geostrophic turbulence, it may be beneficial, from both a theoretical and a practical point of view, to learn from eddy fluxes with the dynamically inert rotational fluxes removed (ideally in a gauge-invariant fashion), rather than from the divergence of the eddy fluxes as has been considered thus far. Outlooks for physically constrained and interpretable machine learning will be given in light of the results.
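A hedged sketch of what removing the rotational component can look like in an idealised, doubly periodic setting (the grid, data and decomposition choices here are assumptions, not the authors' setup):

    import numpy as np

    def divergent_part(fx, fy, dx=1.0):
        """Return the divergent component of the eddy flux (fx, fy) via a spectral
        Helmholtz projection; the dynamically inert rotational residual is discarded."""
        ny, nx = fx.shape
        kx = 2j * np.pi * np.fft.fftfreq(nx, d=dx)
        ky = 2j * np.pi * np.fft.fftfreq(ny, d=dx)
        KX, KY = np.meshgrid(kx, ky)
        k2 = KX * KX + KY * KY
        k2[0, 0] = 1.0                                          # avoid division by zero for the mean mode
        div_hat = KX * np.fft.fft2(fx) + KY * np.fft.fft2(fy)   # divergence in Fourier space
        phi_hat = div_hat / k2                                  # potential whose gradient is the divergent flux
        return (np.real(np.fft.ifft2(KX * phi_hat)),
                np.real(np.fft.ifft2(KY * phi_hat)))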

How to cite: Yan, F., Mak, J., and Wang, Y.: On the choice of turbulence eddy fluxes to learn from in data-driven methods, EGU General Assembly 2023, Vienna, Austria, 23–28 Apr 2023, EGU23-10904, https://doi.org/10.5194/egusphere-egu23-10904, 2023.

X5.106
|
EGU23-5003
|
ECS
Hannah Marie Eichholz, Jan Kretzschmar, Duncan Watson-Parris, Josefine Umlauft, and Johannes Quaas

In preparation for the global kilometre-resolution coupled ICON climate model, it is necessary to calibrate cloud microphysical parameters. Here we explore the avenue towards optimally calibrating such parameters using machine learning. The emulator developed by Watson-Parris et al. (2021) is employed in combination with a perturbed-parameter ensemble of limited-area, atmosphere-only ICON simulations over the North Atlantic Ocean, in which different cloud microphysical parameters are varied in order to evaluate their influence on the simulated radiation fluxes. In a first step, the autoconversion scaling parameter is calibrated against satellite-retrieved top-of-atmosphere and bottom-of-atmosphere radiation fluxes.
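A minimal sketch of emulator-based calibration of a single parameter, here with a Gaussian-process surrogate and made-up ensemble and observation values (this is not the Watson-Parris et al. (2021) implementation):

    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF

    theta = np.array([[0.5], [1.0], [2.0], [4.0], [8.0]])    # perturbed autoconversion scalings (made up)
    toa_flux = np.array([102.3, 100.1, 98.7, 96.2, 95.0])    # ensemble-mean TOA SW flux per member (made up)

    emulator = GaussianProcessRegressor(kernel=RBF(length_scale=1.0), normalize_y=True).fit(theta, toa_flux)

    candidates = np.linspace(0.5, 8.0, 500).reshape(-1, 1)
    observed = 99.0                                           # placeholder satellite-retrieved flux
    best = candidates[np.argmin(np.abs(emulator.predict(candidates) - observed))]
    print("calibrated autoconversion scaling ~", float(best[0]))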

How to cite: Eichholz, H. M., Kretzschmar, J., Watson-Parris, D., Umlauft, J., and Quaas, J.: Towards machine-learning calibration of cloud parameters in the kilometre-resolution ICON atmosphere model, EGU General Assembly 2023, Vienna, Austria, 23–28 Apr 2023, EGU23-5003, https://doi.org/10.5194/egusphere-egu23-5003, 2023.

Posters virtual: Mon, 24 Apr, 16:15–18:00 | vHall AS

Chairperson: Maike Sonnewald
vAS.8
|
EGU23-10726
|
Jaakko Putkonen, M. Aymane Ahajjam, Timothy Pasch, and Robert Chance

The lack of ground-level observation stations outside of settlements makes monitoring and forecasting local weather and permafrost challenging in the Arctic. Such predictive information is essential for preparing for potentially hazardous weather conditions, especially during winter. In this study, we aim to enhance predictive analytics of permafrost and temperature in Alaska using a hybrid forecasting technique. In particular, we propose a VMD-WT-InceptionTime model for short-term air temperature forecasting.

The proposed technique combines data preprocessing and deep learning to enhance the accuracy of air temperature forecasts for the next seven days. First, the Spearman correlation coefficient is used to examine the relationship between the different inputs and the forecast target temperature. Variational Mode Decomposition (VMD) is then used to decompose the most output-correlated input variables (temperature and relative humidity) and extract intrinsic, non-stationary time-frequency features from the original sequences. The Wavelet Transform (WT) is then employed to further extract intrinsic multi-resolution patterns from these decomposed variables. Finally, a deep InceptionTime model produces the multi-step air temperature forecasts from these processed sequences. The technique was developed using an open dataset holding more than 20 years of data from three locations: the North Slope, Alaska; the Arctic National Wildlife Refuge, Alaska; and the Diomede Island region, Bering Strait. Model performance has been rigorously evaluated using metrics including RMSE, MAPE, and error.
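The preprocessing chain can be sketched as follows; the VMD step is replaced by a crude FFT band split as a stand-in (a real application would use a proper VMD implementation), and the InceptionTime network that would consume the stacked features is not shown:

    import numpy as np
    import pywt
    from scipy.stats import spearmanr

    def select_inputs(candidates, target, threshold=0.5):
        """Keep only input series whose |Spearman correlation| with the target exceeds the threshold."""
        return {name: x for name, x in candidates.items()
                if abs(spearmanr(x, target).correlation) > threshold}

    def crude_mode_split(series, n_modes=4):
        """Stand-in for VMD: split the series into n_modes frequency bands with an FFT mask."""
        f = np.fft.rfft(series)
        bands = np.array_split(np.arange(f.size), n_modes)
        return [np.fft.irfft(np.where(np.isin(np.arange(f.size), b), f, 0.0), n=series.size)
                for b in bands]

    def multiresolution_features(series, n_modes=4, wavelet="db4", level=3):
        """Decompose each mode further with a wavelet transform and stack the coefficient arrays."""
        return [c for m in crude_mode_split(series, n_modes)
                for c in pywt.wavedec(m, wavelet, level=level)]

    # e.g. features for a synthetic daily temperature series:
    feats = multiresolution_features(np.sin(np.linspace(0.0, 20.0, 730)))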

The results highlight the effectiveness of the proposed hybrid model in providing more accurate short-term forecasts than several baselines (GBDT, SVR, ExtraTrees, RF, ARIMA, LSTM, GRU, and Transformer). Specifically, the technique reported average RMSE and MAPE improvement rates of 11.21% and 16.13% for the North Slope, 30.01% and 34.97% for the Arctic National Wildlife Refuge, and 16.39% and 23.46% for the Diomede Island region. In addition, the proposed technique produces forecasts over all seven horizons with a maximum error below 1.5 K, a minimum error above -1.2 K, and an average error below 0.18 K for the North Slope; a maximum error below 1 K, a minimum error above -0.9 K, and an average below 0.1 K for the Arctic National Wildlife Refuge; and a maximum error below 0.9 K, a minimum error above -0.8 K, and an average below 0.13 K for the Diomede Island region. The worst performances were errors of around 6 K at the third horizon (i.e., day 3) for the North Slope and the Arctic National Wildlife Refuge, and at the last horizon (i.e., day 7) for the Diomede Island region. Most of these worst cases, across all three locations, can be attributed to forecasting periods with larger variations and wider temperature ranges than average.

Overall, this research highlights the potential of decomposition techniques and deep learning to 1) reveal and effectively learn the underlying cyclicity of air temperatures at varying resolutions, allowing accurate predictions without any knowledge of the governing physics, and 2) produce accurate multi-step temperature forecasts in Arctic climates.

How to cite: Putkonen, J., Ahajjam, M. A., Pasch, T., and Chance, R.: A hybrid VMD-WT-InceptionTime model for multi-horizon short-term air temperature forecasting in Alaska, EGU General Assembly 2023, Vienna, Austria, 23–28 Apr 2023, EGU23-10726, https://doi.org/10.5194/egusphere-egu23-10726, 2023.

vAS.9
|
EGU23-17486
Efficient Bayesian ensemble geophysical problem inversion using sample-wise updates
(withdrawn)
Dan MacKinlay, Dan Pagendam, Petra Kuhnert, Sreekanth Janardhanan, and Russell Tsuchida