ITS1.1/CL0.1.17 | Machine Learning for Climate Science
Convener: Duncan Watson-Parris | Co-conveners: Marlene Kretschmer, Gustau Camps-Valls, Peer Nowack, Sebastian Sippel
Orals
| Tue, 16 Apr, 08:30–12:25 (CEST), 14:00–15:40 (CEST)
 
Room C
Posters on site
| Attendance Wed, 17 Apr, 10:45–12:30 (CEST) | Display Wed, 17 Apr, 08:30–12:30
 
Hall X5
Posters virtual
| Attendance Wed, 17 Apr, 14:00–15:45 (CEST) | Display Wed, 17 Apr, 08:30–18:00
 
vHall X5
Machine learning (ML) is transforming data analysis and modelling of the Earth system. While statistical and data-driven models have been used for a long time, recent advances in ML and deep learning now allow non-linear, spatio-temporal relationships to be encoded robustly without sacrificing interpretability. This has the potential to accelerate climate science through new approaches for modelling and understanding the climate system. For example, ML is now used in the detection and attribution of climate signals, to merge theory and Earth observations in innovative ways, and to directly learn predictive models from observations. The limitations of machine learning methods, such as their generally large training-data requirements, data leakage, and poor generalisation abilities, also need to be considered so that methods are applied where they are fit for purpose and add value.

This session aims to provide a venue to present the latest progress in the use of ML applied to all aspects of climate science, and we welcome abstracts focussed on, but not limited to:

More accurate, robust and accountable ML models:
- Hybrid models (physically informed ML, parameterizations, emulation, data-model integration)
- Novel detection and attribution approaches
- Probabilistic modelling, uncertainty quantification and propagation
- Distributional robustness, transfer learning and/or out-of-distribution generalisation tasks in climate science
- Green AI

Improved understanding through data-driven approaches:
- Causal discovery and inference: causal impact assessment, interventions, counterfactual analysis
- Learning (causal) process and feature representations in observations or across models and observations
- Explainable AI applications
- Discovering governing equations from climate data with symbolic regression approaches

Enhanced interaction:
- The human in the loop - active learning & reinforcement learning for improved emulation and simulations
- Large language models and AI agents - exploration and decision making, modeling regional decision-making
- Human interaction within digital twins

Orals: Tue, 16 Apr | Room C

08:30–08:35
Land
08:35–08:45
|
EGU24-3272
|
ECS
|
Virtual presentation
Jielong Wang, Yunzhong Shen, Joseph Awange, Ling Yang, and Qiujie Chen

Understanding long-term total water storage (TWS) changes in the Yangtze River Basin (YRB) is essential for optimizing water resource management and mitigating hydrological extremes. While the Gravity Recovery and Climate Experiment (GRACE) and its follow-on mission (GRACE-FO) have provided valuable observations for investigating global and regional TWS changes, the approximately one-year data gap between these missions and their relatively short 20-year data record limits our ability to study the continuous and long-term variability of the YRB's TWS. In this study, two deep learning models are employed: one to bridge the data gap and one to reconstruct the historical TWS changes within the YRB. For the data gap filling task, a noise-augmented u-shaped network (NA-UNet) is presented to address UNet's overfitting issues associated with training on limited GRACE observations. Results show that NA-UNet can accurately bridge the data gap, exhibiting favourable and stable performance at both the basin and grid scales. Subsequently, we introduce another deep learning model, named RecNet, specifically designed to reconstruct the climate-driven TWS changes in the YRB from 1923 to 2022. RecNet is trained on precipitation, temperature, and GRACE observations using a weighted mean square error (WMSE) loss function. We show that RecNet can successfully reconstruct the historical TWS changes, achieving strong correlations with GRACE, water budget estimates, hydrological models, drought indices, and existing reconstruction datasets. We also observe superior performance in RecNet when trained with WMSE compared to its non-weighted counterpart. In addition, the reconstructed datasets reveal a recurring occurrence of diverse hydrological extremes over the past century within the YRB, influenced by major climate patterns.
Together, NA-UNet and RecNet provide valuable observations for studying long-term climate variability and projecting future hydrological extremes in YRB, which can inform effective water resource management and contribute to the development of adaptive strategies for climate change.
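[Editor's note] The abstract does not spell out RecNet's weighting scheme; as a purely illustrative sketch, a weighted mean square error that up-weights large anomalies (the weight function below is an assumption, not the authors' formulation) can be written as:

```python
import numpy as np

def wmse(y_true, y_pred, eps=1.0):
    """Weighted MSE: residuals at large |anomaly| are up-weighted.

    The weight w_i = eps + |y_true_i| is a hypothetical choice that
    emphasises hydrological extremes; RecNet's exact weights may differ.
    """
    w = eps + np.abs(y_true)
    return float(np.sum(w * (y_true - y_pred) ** 2) / np.sum(w))

# A miss on an extreme anomaly is penalised more than the same-sized
# miss on a near-zero anomaly.
y = np.array([0.0, 5.0])
assert wmse(y, np.array([1.0, 5.0])) < wmse(y, np.array([0.0, 6.0]))
```

Relative to a plain MSE, such a loss shifts optimisation effort towards the wet and dry extremes that matter most for the hydrological applications discussed above.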

How to cite: Wang, J., Shen, Y., Awange, J., Yang, L., and Chen, Q.: Reconstructing total water storage changes in the Yangtze River Basin based on deep learning models, EGU General Assembly 2024, Vienna, Austria, 14–19 Apr 2024, EGU24-3272, https://doi.org/10.5194/egusphere-egu24-3272, 2024.

08:45–08:55
|
EGU24-1101
|
ECS
|
Virtual presentation
Swarnalee Mazumder, Sebastian Hahn, and Wolfgang Wagner

This study introduces an approach for land heatwave forecasting, using spatiotemporal machine learning models trained with ERA5 reanalysis data. We focused on key environmental variables like soil moisture, vegetation, and meteorological factors for modelling. The study utilized linear regression as a base model, augmented by more complex algorithms such as Random Forest (RF), XGBoost, and Graph Neural Networks (GNN). We defined heatwaves using temperature data from 1970-2000, and the training phase involved data from 2000 to 2020, focusing on predictive accuracy for 2021-2023. This methodology enabled a detailed exploration of heatwave trends and dynamics over an extended period. Finally, we used explainable AI methods to further deepen our understanding of the complex interplay between environmental variables and heatwave occurrences.
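[Editor's note] The baseline-period heatwave definition mentioned here is commonly implemented along these lines; the 90th-percentile threshold and 3-day persistence rule below are conventional choices assumed for illustration, not necessarily those of the study:

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic daily maximum temperatures (°C) as a 1970-2000 stand-in
baseline = rng.normal(20, 5, size=31 * 365)
threshold = np.percentile(baseline, 90)

def heatwave_days(tmax, thr, min_run=3):
    """Flag days belonging to runs of >= min_run consecutive hot days."""
    hot = tmax > thr
    flags = np.zeros_like(hot)
    run = 0
    for i, h in enumerate(hot):
        run = run + 1 if h else 0
        if run >= min_run:
            flags[i - min_run + 1 : i + 1] = True
    return flags

tmax = np.array([18.0, 30.0, 31.0, 32.0, 19.0])
print(heatwave_days(tmax, threshold))  # only the 3-day hot run is flagged
```

Fixing the threshold to a historical baseline, as in the study, keeps the event definition stable while the models are trained on 2000-2020 and evaluated on 2021-2023.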

How to cite: Mazumder, S., Hahn, S., and Wagner, W.: Monitoring The Development Of Land Heatwaves Using Spatiotemporal Models, EGU General Assembly 2024, Vienna, Austria, 14–19 Apr 2024, EGU24-1101, https://doi.org/10.5194/egusphere-egu24-1101, 2024.

08:55–09:05
|
EGU24-18615
|
On-site presentation
Evaluating the trade-offs between precision, prediction lead time, transferability, and generalisation in data-driven models for wheat prediction in Morocco
(withdrawn)
Bader Oulaid, Alice Milne, Toby Waine, Rafiq El Alami, and Ron Corstanje
09:05–09:15
|
EGU24-17389
|
ECS
|
On-site presentation
Claire Robin, Vitus Benson, Christan Requena-Mesa, Lazaro Alonso, Jeran Poehls, Marc Russwurm, Nuno Carvalhais, and Markus Reichstein

The biogeoscience community has increasingly embraced the application of machine learning models across various domains, from fire prediction to vegetation forecasting. Yet, as these models become more widely used, there is sometimes a gap between what we assume a model learns and what it actually learns. For example, Long Short-Term Memory (LSTM) models are applied to long time series in the hope that they benefit from access to more information, despite their tendency to rapidly forget information. This can lead to erroneous conclusions, misinterpretation of results, and an overestimation of the models' capabilities, ultimately eroding trust in their reliability.

To address this issue, we employ an explainable artificial intelligence (XAI) post hoc perturbation technique that is task-agnostic and model-agnostic. We aim to examine the extent to which the model leverages information for its predictions, both in time and in space. In other words, we want to observe the actual receptive field utilized by the model. We introduce a methodology designed to quantify both the spatial impact of neighboring pixels on predicting a specific pixel and the temporal periods contributing to predictions in time series models. The experiments take place after training the model, during inference. In the spatial domain, we define ground-truth pixels to predict, then examine the increase in prediction error caused by shuffling their neighboring pixels at various distances from the selection. In the temporal domain, we investigate how shuffling a sequence of frames within the context period at different intervals relative to the target period affects the increase in prediction loss. This method can be applied across a broad spectrum of spatio-temporal tasks. Importantly, the method is easy to implement, as it relies only on the inference of predictions at test time and the shuffling of the perturbation area.

For our experiments, we focus on the vegetation forecasting task, i.e., forecasting the evolution of the Vegetation Index (VI) based on Sentinel-2 imagery, using previous Sentinel-2 sequences and weather information to guide the prediction. This task involves both spatial non-linear dependencies arising from the spatial context (e.g., the surrounding area, such as a river or a slope, directly influencing the VI) and non-linear temporal dependencies such as the gradual onset of drought conditions and the rapid influence of precipitation events. We compare several models for spatio-temporal tasks, including ConvLSTM and transformer-based architectures, on their usage of neighboring pixels in space and of the context period in time. We demonstrate that the ConvLSTM relies on a restricted spatial area in its predictions, indicating that it utilizes spatial context only up to 50 m (5 pixels). Furthermore, it utilizes the global order of the time series sequence to capture the seasonal cycle but loses sensitivity to the local order after 15 days (3 frames). The introduced XAI method allows us to quantify the spatial and temporal behavior exhibited by machine learning methods.
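[Editor's note] The spatial shuffling scheme described above can be sketched in a few lines; the toy one-pixel "forecaster" below is a hypothetical stand-in for the ConvLSTM and transformer models, meant only to show how shuffling beyond the receptive field leaves the prediction unchanged:

```python
import numpy as np

def ring_shuffle_sensitivity(model, x, center, radius, rng):
    """Squared change of the prediction at `center` after shuffling the
    ring of pixels at Chebyshev distance `radius` around it (a toy
    version of the perturbation scheme described in the abstract)."""
    i, j = center
    yy, xx = np.indices(x.shape)
    ring = np.maximum(np.abs(yy - i), np.abs(xx - j)) == radius
    x_pert = x.copy()
    x_pert[ring] = rng.permutation(x_pert[ring])
    base, pert = model(x), model(x_pert)
    return float((pert[i, j] - base[i, j]) ** 2)

def toy_model(x):
    # prediction at (i, j) is the pixel just north of it: a receptive
    # field of exactly one pixel at distance 1
    return np.roll(x, 1, axis=0)

rng = np.random.default_rng(0)
x = rng.normal(size=(9, 9))
s1 = ring_shuffle_sensitivity(toy_model, x, (4, 4), 1, rng)
s2 = ring_shuffle_sensitivity(toy_model, x, (4, 4), 2, rng)
print(s1, s2)  # shuffling beyond the receptive field has no effect: s2 == 0
```

For a real spatio-temporal model, averaging this quantity over many target pixels and radii traces out the effective spatial receptive field, as done in the study.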

How to cite: Robin, C., Benson, V., Requena-Mesa, C., Alonso, L., Poehls, J., Russwurm, M., Carvalhais, N., and Reichstein, M.: Analyzing Spatio-Temporal Machine Learning Models through Input Perturbation, EGU General Assembly 2024, Vienna, Austria, 14–19 Apr 2024, EGU24-17389, https://doi.org/10.5194/egusphere-egu24-17389, 2024.

09:15–09:25
|
EGU24-17694
|
ECS
|
On-site presentation
Deborah Bassotto, Emiliano Diaz, and Gustau Camps-Valls

In recent years, the intersection of machine learning (ML) and climate science has yielded profound insights into understanding and predicting extreme climate events, particularly heatwaves and droughts. Various approaches have been suggested to define and model extreme events, including extreme value theory (Sura, 2011), random forests (e.g., Weirich-Benet et al., 2023) and, more recently, deep learning (e.g., Jacques-Dumas et al., 2022). Within this context, quantile regression (QR) is valuable for modelling the relationship between variables by estimating the conditional quantiles of the response variable. This not only provides insights into the entire distribution rather than just the mean, but also aids in unravelling the complex relationships among climate variables (Barbosa et al., 2011; Franzke, 2015). QR has been extended in many ways to address critical issues such as nonlinear relations, nonstationary processes, compound events, and the complexities of handling spatio-temporal data.

This study presents a novel approach for predicting and better understanding heatwaves. We introduce an interpretable, nonlinear, non-parametric, and structured Spatio-Temporal Quantile Regression (STQR) method that incorporates the QR check function, commonly known as pinball loss, into machine learning models. We focus on analysing how the importance of predictors changes as the quantile being modelled increases. This allows us to circumvent arbitrary definitions of what constitutes a heatwave and instead observe if a natural definition of a heatwave emerges in predictor space. By analysing European heatwaves over recent decades using reanalysis and weather data, we demonstrate the advantages of our methodology over traditional extreme event modelling methods.
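[Editor's note] The QR check (pinball) loss underpinning the STQR method has a compact form; a minimal NumPy sketch:

```python
import numpy as np

def pinball_loss(y_true, y_pred, tau):
    """Quantile (pinball) check loss: the asymmetric penalty whose
    minimiser is the tau-th conditional quantile of y_true."""
    r = y_true - y_pred
    return float(np.mean(np.maximum(tau * r, (tau - 1) * r)))

# Under-prediction is penalised far more heavily at high tau (e.g. the
# upper quantiles relevant for heatwave-like extremes).
y = np.array([10.0])
assert pinball_loss(y, np.array([9.0]), 0.95) > pinball_loss(y, np.array([11.0]), 0.95)
```

Plugging this loss into an otherwise standard ML model, as the STQR method does, turns it into a conditional-quantile estimator, so predictor importance can be tracked as tau increases towards the extremes.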

References

Barbosa, S.M., Scotto, M.G., Alonso, A.M., 2011. Summarising changes in air temperature over Central Europe by quantile regression and clustering. Nat. Hazards Earth Syst. Sci. 11, 3227–3233. https://doi.org/10.5194/nhess-11-3227-2011

Franzke, C.L.E., 2015. Local trend disparities of European minimum and maximum temperature extremes. Geophys. Res. Lett. 42, 6479–6484. https://doi.org/10.1002/2015GL065011

Jacques-Dumas, V., Ragone, F., Borgnat, P., Abry, P., Bouchet, F., 2022. Deep Learning-based Extreme Heatwave Forecast. Front. Clim. 4, 789641. https://doi.org/10.3389/fclim.2022.789641

Sura, P., 2011. A general perspective of extreme events in weather and climate. Atmospheric Res. 101, 1–21. https://doi.org/10.1016/j.atmosres.2011.01.012

Weirich-Benet, E., Pyrina, M., Jiménez-Esteve, B., Fraenkel, E., Cohen, J., Domeisen, D.I.V., 2023. Subseasonal Prediction of Central European Summer Heatwaves with Linear and Random Forest Machine Learning Models. Artif. Intell. Earth Syst. 2. https://doi.org/10.1175/AIES-D-22-0038.1

How to cite: Bassotto, D., Diaz, E., and Camps-Valls, G.: Spatio-temporal Nonlinear Quantile Regression for Heatwave Prediction and Understanding, EGU General Assembly 2024, Vienna, Austria, 14–19 Apr 2024, EGU24-17694, https://doi.org/10.5194/egusphere-egu24-17694, 2024.

09:25–09:35
|
EGU24-19460
|
ECS
|
Highlight
|
Virtual presentation
Carlos Gomes and Thomas Brunschwiler

Earth observation (EO) repositories comprise Petabytes of data. Due to their widespread use, these repositories experience extremely large volumes of data transfers. For example, users of the Sentinel Data Access System downloaded 78.6 PiB of data in 2022 alone. The transfer of such data volumes between data producers and consumers causes substantial latency and requires significant amounts of energy and vast storage capacities. This work introduces Neural Embedding Compression (NEC), a method that transmits compressed embeddings to users instead of raw data, greatly reducing transfer and storage costs. The approach combines general-purpose embeddings from Foundation Models (FMs), which can serve multiple downstream tasks, with neural compression, which balances the compression rate against the utility of the embeddings. We implemented the method by updating a minor portion of the FM's parameters (approximately 10%) for a short training period of about 1% of the original pre-training iterations. NEC's effectiveness is assessed through two EO tasks: scene classification and semantic segmentation. When compared to traditional compression methods applied to raw data, NEC maintains similar accuracy levels while reducing data by 75% to 90%. Notably, even at a compression rate of 99.7%, there is only a 5% decrease in accuracy for scene classification. In summary, NEC offers a resource-efficient yet effective solution for multi-task EO modeling with minimal transfer of data volumes.

How to cite: Gomes, C. and Brunschwiler, T.: Earth Observation Applications through Neural Embedding Compression from Foundation Models, EGU General Assembly 2024, Vienna, Austria, 14–19 Apr 2024, EGU24-19460, https://doi.org/10.5194/egusphere-egu24-19460, 2024.

09:35–09:45
|
EGU24-16513
|
ECS
|
On-site presentation
Lazaro Alonso, Sujan Koirala, Nuno Carvalhais, Fabian Gans, Bernhard Ahrens, Felix Cremer, Thomas Wutzler, Mohammed Ayoub Chettouh, and Markus Reichstein

The application of automatic differentiation and deep learning approaches to tackle current challenges is now a widespread practice. The biogeosciences community is no stranger to this trend; however, quite often, previously known physical model abstractions are discarded.

In this study, we model the ecosystem dynamics of vegetation, water, and carbon cycles adopting a hybrid approach. This methodology involves preserving the physical model representations for simulating the targeted processes while utilizing neural networks to learn the spatial variability of their parameters. These models have historically posed challenges due to their complex process representations, varied spatial scales, and parametrizations.

We show that a hybrid approach effectively predicts model parameters with a single neural network, compared with the site-level optimized set of parameters. This approach demonstrates its capability to generate predictions consistent with in-situ parameter calibrations across various spatial locations, showcasing its versatility and reliability in modelling coupled systems.
Here, the physics-based process models undergo evaluation across several FLUXNET sites. Various observations—such as gross primary productivity, net ecosystem exchange, evapotranspiration, transpiration, the normalized difference vegetation index, above-ground biomass, and ecosystem respiration—are utilized as targets to assess the model's performance. Simultaneously, a neural network (NN) is trained to predict the model parameters, using input features (to the NN) such as plant functional types, climate types, bioclimatic variables, atmospheric nitrogen and phosphorus deposition, and soil properties. The model simulation is executed within our internal framework Sindbad.jl (to be open-sourced), designed to ensure compatibility with gradient-based optimization methods.

This work serves as a stepping stone, demonstrating that incorporating neural networks into a broad collection of physics-based models holds significant promise and has the potential to leverage the abundance of current Earth observations, enabling the application of these methods on a larger scale.
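[Editor's note] The core idea above, a network mapping site features to process-model parameters and trained end-to-end through the physics, can be illustrated with a deliberately tiny stand-in (a linear "network" and a one-parameter linear "process model"; all names are synthetic, not Sindbad.jl internals):

```python
import numpy as np

# Hybrid-modelling toy: weights w map site features f to a physical
# parameter p, and a fixed process model turns p and forcing x into the
# observable y = p * x. The loss is computed on the observable, so the
# gradient flows "through the physics" back to the parameter predictor.
rng = np.random.default_rng(0)
f = rng.normal(size=(200, 3))            # site features
x = rng.uniform(1, 2, size=200)          # forcing (e.g. radiation)
w_true = np.array([0.5, -1.0, 2.0])
y = (f @ w_true) * x                     # synthetic noise-free observations

w = np.zeros(3)
for _ in range(500):                     # gradient descent through the physics
    p = f @ w                            # predicted parameter per site
    resid = p * x - y                    # misfit on the observable
    grad = f.T @ (resid * x) / len(y)    # chain rule through y = p * x
    w -= 0.1 * grad
print(w.round(2))  # approaches w_true
```

Replacing the linear maps by a neural network and a differentiable process model gives the scheme described above: one parameter predictor shared across sites, constrained by flux observations.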

How to cite: Alonso, L., Koirala, S., Carvalhais, N., Gans, F., Ahrens, B., Cremer, F., Wutzler, T., Ayoub Chettouh, M., and Reichstein, M.: Hybrid Modelling: Bridging Neural Networks and Physics-Based Approaches in Terrestrial Biogeochemical Ecosystems, EGU General Assembly 2024, Vienna, Austria, 14–19 Apr 2024, EGU24-16513, https://doi.org/10.5194/egusphere-egu24-16513, 2024.

09:45–09:55
|
EGU24-2819
|
ECS
|
On-site presentation
Reda ElGhawi, Christian Reimers, Reiner Schnur, Markus Reichstein, Marco Körner, Nuno Carvalhais, and Alexander J. Winkler

The exchange of water and carbon between the land-surface and the atmosphere is regulated by meteorological conditions as well as plant physiological processes. Accurate modeling of the coupled system is not only crucial for understanding local feedback loops but also for global-scale carbon and water cycle interactions. Traditional mechanistic modeling approaches, e.g., the Earth system model ICON-ESM with the land component JSBACH4, have long been used to study the land-atmosphere coupling. However, these models are hampered by relatively rigid functional representations of terrestrial biospheric processes, e.g., semi-empirical parametrizations for stomatal conductance.

Here, we develop data-driven, flexible parametrizations controlling terrestrial carbon-water coupling based on eddy-covariance flux measurements using machine learning (ML). Specifically, we introduce a hybrid modeling approach (integration of data-driven and mechanistic modeling) that aims to replace specific empirical parametrizations of the coupled photosynthesis (GPP) and transpiration (Etr) modules with ML models pre-trained on observations. First, as a proof-of-concept, we train parametrizations based on original JSBACH4 output to showcase that our approach succeeds in reconstructing the original parametrizations, namely latent dynamic features for stomatal (gs) and aerodynamic (ga) conductance, the carboxylation rate of RuBisCO (Vcmax), and the photosynthetic electron transport rate for RuBisCO regeneration (Jmax). Second, we replace JSBACH4's original parametrizations by dynamically calling the emulator parameterizations trained on the original JSBACH4 output using a Python-FORTRAN bridge. This allows us to assess the impact of data-driven parametrizations on the output in the coupled land-surface model. In the last step, we adopt the approach to infer these parametrizations from FLUXNET observations to construct an observation-informed model of water and carbon fluxes in JSBACH4.

Preliminary results in emulating JSBACH4 parametrizations reveal R2 ranging between 0.91-0.99 and 0.92-0.97 for GPP, Etr, and the sensible heat flux QH at half-hourly scale for forest and grassland sites, respectively. JSBACH4 with the plugged-in ML-emulator parametrizations provides very similar, but not identical, predictions to the original JSBACH4. For example, R2 for Etr (gs) amounts to 0.91 (0.84) and 0.93 (0.86) at grassland and forest sites, respectively. These differences in the transpiration flux between the original predictions and JSBACH4 with emulating parametrizations result in only minor changes in the system, e.g., the soil-water budget in the two models is almost the same (R2 of ~0.99). Based on these promising results of our proof-of-concept, we are now preparing the hybrid JSBACH4 model with parametrizations trained on FLUXNET observations.

This modeling framework will then serve as the foundation for coupled land-atmosphere simulations using ICON-ESM, where key biospheric processes are represented by our hybrid observation-informed land-surface model.

How to cite: ElGhawi, R., Reimers, C., Schnur, R., Reichstein, M., Körner, M., Carvalhais, N., and Winkler, A. J.: Hybrid-Modeling of Land-Atmosphere Fluxes Using Integrated Machine Learning in the ICON-ESM Modeling Framework, EGU General Assembly 2024, Vienna, Austria, 14–19 Apr 2024, EGU24-2819, https://doi.org/10.5194/egusphere-egu24-2819, 2024.

Atmosphere
09:55–10:05
|
EGU24-6655
|
ECS
|
On-site presentation
Tamara Happe, Jasper Wijnands, Miguel Ángel Fernández-Torres, Paolo Scussolini, Laura Muntjewerf, and Dim Coumou

Heatwaves over western Europe are increasing faster than elsewhere, which recent studies have attributed at least partly to changes in atmospheric dynamics. To increase our understanding of the dynamical drivers of western European heatwaves, we developed a heatwave classification method taking into account the spatio-temporal atmospheric dynamics. Our deep learning approach consists of several steps: 1) heatwave detection using the Generalized Density-based Spatial Clustering of Applications with Noise (GDBSCAN) algorithm; 2) dimensionality reduction of the spatio-temporal heatwave samples using a 3D Variational Autoencoder (VAE); and 3) a clustering of heatwaves using K-means, a Gaussian Mixture Model, and opt-SNE. We show that a VAE can extract meaningful features from high-dimensional climate data. Furthermore, we find four physically distinct clusters of heatwaves that are interpretable with known circulation patterns, i.e. UK High, Scandinavian High, Atlantic High, and Atlantic Low. Our results indicate that the heatwave phase space, as found with opt-SNE, is continuous with soft boundaries between these circulation regimes, indicating that heatwaves are best categorized in a probabilistic way.
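[Editor's note] The final clustering step operates on the low-dimensional latent vectors produced by the VAE; the sketch below (plain K-means on synthetic two-regime latents, standing in for the VAE encodings and for the K-means/GMM/opt-SNE comparison of the study) shows the idea:

```python
import numpy as np

def kmeans(z, k, iters=50, seed=0):
    """Plain K-means on latent vectors z of shape (n_samples, latent_dim);
    in the study these would be the 3D-VAE encodings of heatwave samples."""
    rng = np.random.default_rng(seed)
    centers = z[rng.choice(len(z), k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((z[:, None] - centers) ** 2).sum(-1), axis=1)
        centers = np.array([z[labels == c].mean(0) if np.any(labels == c)
                            else centers[c] for c in range(k)])
    return labels, centers

# Synthetic two-regime latent space stand-in
rng = np.random.default_rng(1)
z = np.vstack([rng.normal(-3, 0.5, (40, 2)), rng.normal(3, 0.5, (40, 2))])
labels, _ = kmeans(z, k=2)
# Each synthetic regime ends up in a single cluster
assert len(set(labels[:40])) == 1 and len(set(labels[40:])) == 1
```

The study's observation of soft boundaries between regimes is precisely where hard assignments like K-means give way to probabilistic alternatives such as a Gaussian Mixture Model.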

How to cite: Happe, T., Wijnands, J., Fernández-Torres, M. Á., Scussolini, P., Muntjewerf, L., and Coumou, D.: Detecting spatio-temporal dynamics of western European heatwaves using deep learning, EGU General Assembly 2024, Vienna, Austria, 14–19 Apr 2024, EGU24-6655, https://doi.org/10.5194/egusphere-egu24-6655, 2024.

10:05–10:15
|
EGU24-10922
|
ECS
|
On-site presentation
Vitus Benson, Ana Bastos, Christian Reimers, Alexander J. Winkler, Fanny Yang, and Markus Reichstein

Large deep neural network emulators are poised to revolutionize numerical weather prediction (NWP). Recent models like GraphCast or NeuralGCM can now compete with, and sometimes outperform, traditional NWP systems, all at much lower computational cost. Yet to be explored is the applicability of large deep neural network emulators to other dense prediction tasks, such as the modeling of 3D atmospheric composition. For instance, the inverse modeling of carbon fluxes essential for estimating carbon budgets relies on fast CO2 transport models.

Here, we present a novel approach to atmospheric transport modeling of CO2 and other inert trace gases. Existing Eulerian transport modeling approaches rely on numerical solvers applied to the continuity equation, which are expensive: short time steps are required for numerical stability at the poles, and the loading of driving meteorological fields is IO-intensive. We learn high-fidelity transport in latent space by training graph neural networks, analogous to approaches used in weather forecasting, including an approach that conserves the CO2 mass. For this, we prepare the CarbonBench dataset, a deep learning ready dataset based on Jena Carboscope CO2 inversion data and NCEP NCAR meteorological reanalysis data together with ObsPack station observations for model evaluation.

Qualitative and quantitative experiments demonstrate the superior performance of our approach over a baseline U-Net for short-term (<40 days) atmospheric transport modeling of carbon dioxide. While the original GraphCast architecture achieves a similar speed to the TM3 transport model used to generate the training data, we show how various architectural changes introduced by us contribute to a reduced IO load (>4x) of our model, thereby speeding up forward runs. This is especially useful when applied multiple times with the same driving wind fields, e.g. in an inverse modeling framework. Thus, we pave the way towards integrating not only atmospheric observations (as is done in current CO2 inversions), but also ecosystem surface fluxes (not yet done) into carbon cycle inversions. The latter requires backpropagating through a transport operator to optimize a flux model with many more parameters (e.g. a deep neural network) than those currently used in CO2 inversions – which becomes feasible if the transport operator is fast enough. To the best of our knowledge, this work presents the first emulator of global Eulerian atmospheric transport, thereby providing an initial step towards next-gen inverse modeling of the carbon cycle with deep learning.
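[Editor's note] A simple way to see what conserving CO2 mass can mean for an emulator is a multiplicative post-hoc fixer that rescales the predicted mixing-ratio field to the previous step's total mass; the abstract's mass-conserving architecture is more sophisticated, so treat this only as an illustration:

```python
import numpy as np

def conserve_mass(pred, prev, air_mass):
    """Rescale a predicted CO2 mixing-ratio field so that total CO2 mass
    equals that of the previous step. Transport of an inert tracer moves
    mass around but never creates or destroys it, which a free-running
    emulator does not guarantee on its own."""
    total_prev = np.sum(prev * air_mass)
    total_pred = np.sum(pred * air_mass)
    return pred * (total_prev / total_pred)

rng = np.random.default_rng(0)
air = rng.uniform(0.5, 1.5, size=(8, 16))     # per-cell air mass (arbitrary units)
prev = rng.uniform(390, 420, size=(8, 16))    # ppm, before the transport step
pred = prev + rng.normal(0, 1, size=(8, 16))  # emulator output, slightly off
fixed = conserve_mass(pred, prev, air)
```

Without such a constraint, tiny per-step mass errors compound over the many autoregressive steps of a 40-day rollout, which is why conservation matters for inverse-modeling applications.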

 

How to cite: Benson, V., Bastos, A., Reimers, C., Winkler, A. J., Yang, F., and Reichstein, M.: Graph Neural Networks for Atmospheric Transport Modeling of CO2, EGU General Assembly 2024, Vienna, Austria, 14–19 Apr 2024, EGU24-10922, https://doi.org/10.5194/egusphere-egu24-10922, 2024.

Coffee break
10:45–10:55
|
EGU24-4460
|
Highlight
|
On-site presentation
William Collins, Michael Pritchard, Noah Brenowitz, Yair Cohen, Peter Harrington, Karthik Kashinath, Ankur Mahesh, and Shashank Subramanian

Studying low-likelihood, high-impact extreme weather and climate events in a warming world requires massive ensembles to capture the long tails of multi-variate distributions. It is simply impossible to generate such massive ensembles, of say 10,000 members, using traditional numerical simulations of climate models at high resolution. We describe how to bring the power of machine learning (ML) to replace traditional numerical simulations for short, week-long hindcasts of massive ensembles, where ML has proven to be successful in terms of accuracy and fidelity, at five orders of magnitude lower computational cost than numerical methods. Because the ensembles are reproducible to machine precision, ML also provides a data compression mechanism that avoids storing the data produced from massive ensembles. The machine learning algorithm FourCastNet (FCN) is based on Fourier Neural Operators and Transformers, proven to be efficient and powerful in modeling a wide range of chaotic dynamical systems, including turbulent flows and atmospheric dynamics. FCN has already been proven to be highly scalable on GPU-based HPC systems.

We discuss our progress using statistical metrics for extremes adopted from operational NWP centers to show that FCN is sufficiently accurate as an emulator of these phenomena. We also show how to construct huge ensembles through a combination of perturbed-parameter techniques and a variant of bred vectors that generates a large suite of initial conditions maximizing the growth rates of ensemble spread. We demonstrate that these ensembles exhibit a ratio of ensemble spread to RMSE that is nearly identical to one, a key metric of successful near-term NWP systems. We conclude by applying FCN to severe heat waves in the recent climate record.
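[Editor's note] The bred-vector variant mentioned above builds on the classic breeding cycle (perturb, integrate, rescale); a toy sketch with a chaotic map standing in for the forecast model:

```python
import numpy as np

def breed(step, x0, delta0, cycles=20, amp=1e-3):
    """Bred-vector cycle: integrate control and perturbed states, rescale
    the difference back to amplitude `amp` each cycle. `step` is one
    model timestep; here a chaotic logistic map stands in for FourCastNet."""
    x = x0
    dx = amp * delta0 / np.linalg.norm(delta0)
    for _ in range(cycles):
        x_next = step(x)
        diff = step(x + dx) - x_next   # how the perturbation grew
        dx = amp * diff / np.linalg.norm(diff)
        x = x_next
    return dx  # aligned with the fastest-growing local directions

# Toy dynamics: logistic map applied component-wise (stays in (0, 1))
step = lambda x: 3.9 * x * (1 - x)
rng = np.random.default_rng(0)
bv = breed(step, rng.uniform(0.2, 0.8, 5), rng.normal(size=5))
print(np.linalg.norm(bv))  # equals amp by construction
```

Adding such vectors (with varied amplitudes and signs) to an analysis state yields a large suite of initial conditions whose spread grows quickly, which is the property the abstract exploits to build huge ensembles.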

How to cite: Collins, W., Pritchard, M., Brenowitz, N., Cohen, Y., Harrington, P., Kashinath, K., Mahesh, A., and Subramanian, S.: Huge Ensembles of Weather Extremes using the Fourier Forecasting Neural Network, EGU General Assembly 2024, Vienna, Austria, 14–19 Apr 2024, EGU24-4460, https://doi.org/10.5194/egusphere-egu24-4460, 2024.

10:55–11:05
|
EGU24-5611
|
ECS
|
Highlight
|
On-site presentation
Leonardo Olivetti and Gabriele Messori

In recent years, deep learning models have rapidly emerged as a standalone alternative to physics-based numerical models for medium-range weather forecasting. Several independent research groups claim to have developed deep learning weather forecasts which outperform those from state-of-the-art physics-based models, and operational implementation of data-driven forecasts appears to be drawing near. Yet, questions remain about the capabilities of deep learning models to provide robust forecasts of extreme weather.

Our current work aims to provide an overview of recent developments in the field of deep learning weather forecasting, and to highlight the challenges that extreme weather events pose to leading deep learning models. Specifically, we problematise the fact that predictions generated by many deep learning models appear to be over-smoothed, tending to underestimate the magnitude of wind and temperature extremes. To address these challenges, we argue for the need to tailor data-driven models to forecast extreme events, and to develop models aiming to maximise skill in the tails rather than in the mean of the distribution. Lastly, we propose a foundational workflow to develop robust models for extreme weather, which may function as a blueprint for future research on the topic.

How to cite: Olivetti, L. and Messori, G.: Advances and Prospects of Deep Learning for Medium-Range Extreme Weather Forecasting, EGU General Assembly 2024, Vienna, Austria, 14–19 Apr 2024, EGU24-5611, https://doi.org/10.5194/egusphere-egu24-5611, 2024.

11:05–11:15
|
EGU24-8321
|
ECS
|
On-site presentation
Habit Classification of PHIPS Stereo-Microscopic Ice Crystal Images
(withdrawn)
Franziska Nehlert, Lucas Grulich, Martin Schnaiter, Peter Spichtinger, Ralf Weigel, and Emma Järvinen
11:15–11:25
|
EGU24-10129
|
ECS
|
On-site presentation
Jannik Thümmel, Jakob Schlör, Felix Strnad, and Bedartha Goswami

Subseasonal to seasonal (S2S) weather forecasts play an important role as a decision-making tool in several sectors of modern society. However, the time scale on which these forecasts are skillful is strongly dependent on atmospheric and oceanic background conditions. While deep learning-based weather prediction models have shown impressive results in the short to medium range, S2S forecasts from such models are currently limited, partly due to less available training data and larger fluctuations in predictability. In order to develop more reliable S2S predictions, we leverage Masked Autoencoders, a state-of-the-art deep learning framework, to extract large-scale representations of tropical precipitation and sea-surface temperature data. We show that the learned representations are highly predictive for the El Niño Southern Oscillation and the Madden-Julian Oscillation, and can thus serve as a foundation for identifying windows of opportunity and generating skillful S2S forecasts.

How to cite: Thümmel, J., Schlör, J., Strnad, F., and Goswami, B.: Subseasonal to seasonal forecasts using Masked Autoencoders, EGU General Assembly 2024, Vienna, Austria, 14–19 Apr 2024, EGU24-10129, https://doi.org/10.5194/egusphere-egu24-10129, 2024.

11:25–11:35
|
EGU24-10325
|
ECS
|
Highlight
|
On-site presentation
Helge Heuer, Mierk Schwabe, Pierre Gentine, Marco A. Giorgetta, and Veronika Eyring

To improve climate projections, machine learning (ML)-based parameterizations have been developed for Earth System Models (ESMs), with the goal of better representing subgrid-scale processes or accelerating computations by emulating existing parameterizations. These data-driven models have shown success in approximating subgrid-scale processes based on high-resolution storm-resolving simulations. However, most studies have used a particular machine learning method, such as simple Multilayer Perceptrons (MLPs) or Random Forests (RFs), to parameterize the subgrid tendencies or fluxes originating from the compound effect of various small-scale processes (e.g., turbulence, radiation, convection, gravity waves). Here, we use a filtering technique to explicitly separate convection from these processes in data produced by the Icosahedral Non-hydrostatic modelling framework (ICON) in a realistic setting. We improve the computation of the subgrid fluxes by incorporating density fluctuations and compare a variety of machine learning algorithms on their ability to predict these fluxes. We further examine the predictions of the best-performing non-deep learning model (Gradient Boosted Tree regression) and of the U-Net. We discover that the U-Net can learn non-causal relations between convective precipitation and convective subgrid fluxes, and we develop an ablated model excluding precipitating tracer species. In contrast to the non-deep learning algorithms, we can connect the relations learned by the U-Net to physical processes. Our results suggest that architectures such as the U-Net are particularly well suited to parameterize multiscale problems like convection, provided attention is paid to the plausibility of the learned relations, thus providing a significant advance on existing ML subgrid representations in ESMs.

How to cite: Heuer, H., Schwabe, M., Gentine, P., Giorgetta, M. A., and Eyring, V.: Interpretable multiscale Machine Learning-Based Parameterizations of Convection for ICON, EGU General Assembly 2024, Vienna, Austria, 14–19 Apr 2024, EGU24-10325, https://doi.org/10.5194/egusphere-egu24-10325, 2024.

11:35–11:45
|
EGU24-12495
|
On-site presentation
Maximilian Gelbrecht and Niklas Boers

Combining process-based models in Earth system science with data-driven machine learning methods holds tremendous promise. Can we harness the best of both approaches? In our study, we integrate components of atmospheric models into artificial neural networks (ANNs). The resulting hybrid atmospheric model can learn atmospheric dynamics from short trajectories while ensuring robust generalization and stability. We achieve this using the neural differential equations framework, combining ANNs with a differentiable, GPU-enabled version of the well-studied Marshall-Molteni Quasigeostrophic Model (QG3). As in many atmospheric models, part of the model is computed in the spherical harmonics domain and other parts in the grid domain. In our model, ANNs are used as parametrizations in both domains and, together with the components of the QG3 model, form the right-hand side of our hybrid model. We showcase the capabilities of our model by demonstrating how it generalizes from the QG3 model to the significantly more complex primitive equation model of SpeedyWeather.jl.

How to cite: Gelbrecht, M. and Boers, N.: Hybrid neural differential equation models for atmospheric dynamics, EGU General Assembly 2024, Vienna, Austria, 14–19 Apr 2024, EGU24-12495, https://doi.org/10.5194/egusphere-egu24-12495, 2024.

11:45–11:55
|
EGU24-12826
|
ECS
|
On-site presentation
Christof Schötz, Alistair White, and Niklas Boers

We explore the task of learning the dynamics of a system from observed data without prior knowledge of the laws governing the system. Our extensive simulation study focuses on ordinary differential equation (ODE) problems that are specifically designed to reflect key aspects of various machine learning tasks for dynamical systems - namely, chaos, complexity, measurement uncertainty, and variability in measurement intervals. The study evaluates a variety of methods, including neural ODEs, transformers, Gaussian processes, echo state networks, and spline-based estimators. Our results show that the relative performance of the methods tested varies widely depending on the specific task, highlighting that no single method is universally superior. Although our research is predominantly in low-dimensional settings, in contrast to the high-dimensional nature of many climate science challenges, it provides insightful comparisons and understanding of how different approaches perform in learning the dynamics of complex systems.

How to cite: Schötz, C., White, A., and Boers, N.: Comparing Machine Learning Methods for Dynamical Systems, EGU General Assembly 2024, Vienna, Austria, 14–19 Apr 2024, EGU24-12826, https://doi.org/10.5194/egusphere-egu24-12826, 2024.

11:55–12:05
|
EGU24-15144
|
ECS
|
Highlight
|
On-site presentation
Elena Fillola, Raul Santos-Rodriguez, and Matt Rigby

Inverse modelling systems relying on Lagrangian Particle Dispersion Models (LPDMs) are a popular way to quantify greenhouse gas emissions using atmospheric observations, providing independent evaluation of countries' self-reported emissions. For each GHG measurement, the LPDM performs backward-running simulations of particle transport in the atmosphere, calculating source-receptor relationships ("footprints"). These reflect the upwind areas where emissions would contribute to the measurement. However, the increased volume of satellite measurements from high-resolution instruments like TROPOMI causes computational bottlenecks, limiting the amount of data that can be processed for inference. Previous approaches to speeding up footprint generation revolve around interpolation, and therefore still require expensive new runs. In this work, we present the first machine learning-driven LPDM emulator that, once trained, can approximate satellite footprints using only meteorology and topography. The emulator uses Graph Neural Networks (GNNs) in an Encode-Process-Decode structure, similar to Google's GraphCast [1], representing latitude-longitude coordinates as nodes in a graph. We apply the model to GOSAT measurements over Brazil to emulate footprints produced by the UK Met Office's NAME LPDM, training on data for 2014 and 2015 on a domain of approximately 1600 x 1200 km at a resolution of 0.352 x 0.234 degrees. Once trained, the emulator can produce footprints for a domain of up to approximately 6500 x 5000 km, leveraging the flexibility of GNNs. We evaluate the emulator on footprints produced across 2016 on the 6500 x 5000 km domain, achieving intersection-over-union scores of over 40% and normalised mean absolute errors of under 30% for simulated CH4 concentrations. As well as demonstrating the emulator as a standalone AI application, we show how to integrate it into the full GHG emissions pipeline to quantify Brazil's emissions.
This method demonstrates the potential of GNNs for atmospheric dispersion applications and paves the way for large-scale near-real time emissions emulation.
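The intersection-over-union score used to evaluate the emulated footprints can be computed for binarised fields as in this minimal sketch (the threshold and toy fields are illustrative, not those used in the study):

```python
def iou(pred, truth, threshold=0.0):
    """Intersection-over-union of two fields after binarising at a threshold."""
    cells = [(p > threshold, t > threshold)
             for pr, tr in zip(pred, truth) for p, t in zip(pr, tr)]
    inter = sum(p and t for p, t in cells)  # cells active in both fields
    union = sum(p or t for p, t in cells)   # cells active in either field
    return inter / union if union else 1.0

# Toy 3x3 "footprints": emulated vs. LPDM-simulated sensitivities
pred  = [[0.5, 0.2, 0.0], [0.1, 0.0, 0.0], [0.0, 0.0, 0.0]]
truth = [[0.4, 0.0, 0.0], [0.3, 0.2, 0.0], [0.0, 0.0, 0.0]]
print(iou(pred, truth))  # 2 overlapping cells, 4 active in total -> 0.5
```

IoU rewards getting the spatial extent of the footprint right rather than its exact magnitudes, which suits a source-receptor sensitivity map.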

[1] Remi Lam et al., Learning skillful medium-range global weather forecasting. Science 382, 1416–1421 (2023). DOI: 10.1126/science.adi2336

How to cite: Fillola, E., Santos-Rodriguez, R., and Rigby, M.: A Graph Neural Network emulator for greenhouse gas emissions inference, EGU General Assembly 2024, Vienna, Austria, 14–19 Apr 2024, EGU24-15144, https://doi.org/10.5194/egusphere-egu24-15144, 2024.

12:05–12:15
|
EGU24-21760
|
On-site presentation
Waed Abed and Erika Coppola

Leveraging Machine Learning (ML) models, particularly Convolutional Neural Networks (CNNs), Recurrent Neural Networks (RNNs) such as Long Short-Term Memory (LSTM) networks, and Artificial Neural Networks (ANNs), has become pivotal in addressing the escalating frequency and severity of extreme events such as heatwaves, hurricanes, floods, and droughts. In climate modeling, ML proves invaluable for analyzing diverse datasets, including climate data and satellite imagery, outperforming traditional methods by adeptly handling vast amounts of information and identifying intricate patterns. This study focuses on extreme precipitation events, whose escalating impacts under climate change demand more accurate and timely methods of prediction and management.

In this study, we carried out two main experiments to understand whether ML algorithms can detect extreme events. In both experiments, the predictors are the eastward and northward wind components (u, v), geopotential height (z), specific humidity (q), and temperature (t) at four pressure levels: 1000 hPa, 850 hPa, 700 hPa, and 500 hPa. The predictors have a frequency of 3 hours, and the predictand is the precipitation accumulated over 3 hours. The data used in this study are the fifth-generation reanalysis (ERA5) produced by the European Centre for Medium-Range Weather Forecasts (ECMWF), which provides global hourly estimates of a large number of atmospheric, land, and oceanic climate variables at a resolution of 25 km, at different pressure levels and at the surface (precipitation in our case).

In this study, two main architectures have been applied. The first emulator, ERA-Emulator, contains 14 layers divided into four blocks (input, convolutional, dense, output). The convolutional block has six convolutional layers: one ConvLSTM2D layer, which combines a 2D convolutional layer with an LSTM layer, and five plain 2D convolutional layers, two of which are followed by a MaxPooling layer. The dense block contains three fully connected Dense layers followed by one Flatten layer and one Dropout layer. Finally, the output layer is also a Dense layer. We used the same architecture for the second emulator, GRIPHO-Emulator, with one extra MaxPooling layer in the convolutional block, for a total of 15 layers. The first emulator uses variables from ERA5 both as input and as output at 25 km resolution, while the second uses variables from ERA5 as input and the Gridded Italian Precipitation Hourly Observations dataset (GRIPHO) as output at 3 km resolution.

The ERA-Emulator is designed to approximate the downscaling function by utilizing low-resolution simulations to generate equivalent low-resolution precipitation fields, and it proved a viable approach to this challenge: the emulator demonstrates the capability to derive precipitation fields that align with the ERA5 low-resolution simulations. The GRIPHO-Emulator aims to downscale high-resolution precipitation from low-resolution large-scale predictors, i.e., to estimate the downscaling function itself. It is able to create realistic high-resolution precipitation fields that represent well the observed precipitation distribution of the high-resolution GRIPHO dataset.

How to cite: Abed, W. and Coppola, E.: Detection of High Convective Precipitation Events Using Machine Learning Methods, EGU General Assembly 2024, Vienna, Austria, 14–19 Apr 2024, EGU24-21760, https://doi.org/10.5194/egusphere-egu24-21760, 2024.

12:15–12:25
|
EGU24-15174
|
ECS
|
Highlight
|
On-site presentation
Philine L. Bommer, Marlene Kretschmer, Paul Boehnke, and Marina M.-C. Hoehne née Vidovic

Decision making and efficient early warning systems for extreme weather rely on subseasonal-to-seasonal (S2S) forecasts. However, the chaotic nature of the atmosphere impedes predictions by dynamical forecast systems on the S2S time scale. Improved predictability may arise from remote drivers and corresponding teleconnections in so-called windows of opportunity, but using knowledge of such drivers to boost S2S forecast skill is challenging. Here, we present a spatio-temporal deep neural network (DNN) that predicts a time series of weekly North Atlantic European (NAE) weather regimes at lead times of one to six weeks during boreal winter. The spatio-temporal architecture combines a convolutional long short-term memory (ConvLSTM) encoder with a long short-term memory (LSTM) decoder, and was built to exploit both short- and medium-range variability as information. As predictors, it uses 2D (image) time series of expected drivers of European winter weather, including the stratospheric polar vortex and tropical sea surface temperatures, alongside the 1D time series of NAE regimes. Our results indicate that the additional information provided by the image time series yields a skill-score improvement at longer lead times. In addition, by analysing periods of enhanced or decreased predictability of the DNN, we can infer further information about the prevalent teleconnections.

How to cite: Bommer, P. L., Kretschmer, M., Boehnke, P., and Hoehne née Vidovic, M. M.-C.: Using spatio-temporal neural networks to investigate teleconnections and enhance S2S forecasts of European extreme weather, EGU General Assembly 2024, Vienna, Austria, 14–19 Apr 2024, EGU24-15174, https://doi.org/10.5194/egusphere-egu24-15174, 2024.

Lunch break
Climate
14:00–14:10
|
EGU24-11831
|
ECS
|
On-site presentation
Nathan Mankovich, Shahine Bouabid, and Gustau Camps-Valls

Analyzing climate scenarios is crucial for quantifying uncertainties, identifying trends, and validating models. Objective statistical methods provide decision support for policymakers, optimize resource allocation, and enhance our understanding of complex climate dynamics. These tools offer a systematic and quantitative framework for effective decision-making and policy formulation amid climate change, including accurate projections of extreme events—a fundamental requirement for Earth system modeling and actionable future predictions. 

This study applies dynamic mode decomposition with control (DMDc) to assess temperature and precipitation variability in climate model projections under various future shared socioeconomic pathways (SSPs). We leverage global greenhouse gas emissions and local aerosol emissions as control parameters to unveil nuanced insights into climate dynamics. Our approach involves fitting distinct DMDc models to a high-ambition/low-forcing scenario (SSP126), a medium-forcing scenario (SSP245), and a high-forcing scenario (SSP585). By scrutinizing the eigenvalues and dynamic modes of each DMDc model, we uncover patterns and trends that extend beyond traditional climate analysis methods. Preliminary findings reveal that the temporal modes effectively highlight variations in global warming trends under different emissions scenarios. Moreover, the spatial modes generated by DMDc offer a refined understanding of temperature disparities across latitudes, effectively capturing large-scale oscillations such as the El Niño Southern Oscillation.

The proposed data-driven analytical framework not only enriches our comprehension of climate dynamics but also enhances our ability to anticipate and adapt to the multifaceted impacts of climate change. Integrating DMDc into climate scenario analysis may help formulate more effective strategies for mitigation and adaptation.
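The DMDc fit at the heart of this analysis (Proctor et al., 2016) is the least-squares solution of x_{k+1} ≈ A x_k + B u_k for the pair (A, B). A minimal sketch on synthetic two-state data follows; the matrices and dimensions are illustrative, whereas the study works with high-dimensional climate fields and emissions as controls.

```python
import random
random.seed(0)

def solve(M, b):
    """Solve M x = b by Gauss-Jordan elimination with partial pivoting."""
    n = len(M)
    aug = [row[:] + [b[i]] for i, row in enumerate(M)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(aug[r][col]))
        aug[col], aug[piv] = aug[piv], aug[col]
        for r in range(n):
            if r != col:
                f = aug[r][col] / aug[col][col]
                aug[r] = [a - f * c for a, c in zip(aug[r], aug[col])]
    return [aug[i][n] / aug[i][i] for i in range(n)]

def dmdc(X, Xp, U):
    """Least-squares fit of x_{k+1} ~ A x_k + B u_k; returns rows of [A | B]."""
    omega = [x + u for x, u in zip(X, U)]  # each column stacks state and control
    m, dim = len(omega), len(omega[0])
    M = [[sum(omega[k][i] * omega[k][j] for k in range(m)) for j in range(dim)]
         for i in range(dim)]              # normal-equation matrix Omega Omega^T
    return [solve(M, [sum(omega[k][j] * Xp[k][i] for k in range(m))
                      for j in range(dim)])
            for i in range(len(Xp[0]))]

# Synthetic snapshots generated from a known linear system with control
A_true, B_true = [[0.9, 0.1], [0.0, 0.8]], [[0.5], [1.0]]
X, Xp, U, x = [], [], [], [1.0, -1.0]
for _ in range(20):
    u = [random.uniform(-1, 1)]
    xn = [sum(a * xi for a, xi in zip(row, x)) + bi[0] * u[0]
          for row, bi in zip(A_true, B_true)]
    X.append(x); U.append(u); Xp.append(xn)
    x = xn
AB = dmdc(X, Xp, U)  # recovers [A | B] up to numerical precision
```

In the study's setting, the eigenvalues of the recovered A characterise the dynamic modes, while B captures how the emission controls force them.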

References

Allen, Myles R., et al. "Warming caused by cumulative carbon emissions towards the trillionth tonne." Nature 458.7242 (2009): 1163-1166.

Zelinka, Mark D., et al. "Causes of higher climate sensitivity in CMIP6 models." Geophysical Research Letters 47.1 (2020): e2019GL085782.

Proctor, Joshua L., Steven L. Brunton, and J. Nathan Kutz. "Dynamic mode decomposition with control." SIAM Journal on Applied Dynamical Systems 15.1 (2016): 142-161.

How to cite: Mankovich, N., Bouabid, S., and Camps-Valls, G.: Analyzing Climate Scenarios Using Dynamic Mode Decomposition With Control, EGU General Assembly 2024, Vienna, Austria, 14–19 Apr 2024, EGU24-11831, https://doi.org/10.5194/egusphere-egu24-11831, 2024.

14:10–14:20
|
EGU24-9110
|
ECS
|
Highlight
|
On-site presentation
Zachary Labe, Thomas Delworth, Nathaniel Johnson, and William Cooke

To account for uncertainties in future projections associated with the level of greenhouse gas emissions, most climate models are run using different forcing scenarios, like the Shared Socioeconomic Pathways (SSPs). Although it is possible to compare real-world greenhouse gas concentrations with these hypothetical scenarios, it is less clear how to determine whether observed patterns of weather and climate anomalies align with individual scenarios, especially at the interannual timescale. As a result, this study designs a data-driven approach utilizing artificial neural networks (ANNs) that learn to classify global maps of annual-mean temperature or precipitation with a matching emission scenario using a high-resolution, single model initial-condition large ensemble. Here we construct our ANN framework to consider whether a climate map is from SSP1-1.9, SSP2-4.5, SSP5-8.5, a historical forcing scenario, or a natural forcing scenario using the Seamless System for Prediction and EArth System Research (SPEAR) by the NOAA Geophysical Fluid Dynamics Laboratory. A local attribution technique from explainable AI is then applied to identify the most relevant temperature and precipitation patterns used for each ANN prediction. The explainability results reveal that some of the most important geographic regions for distinguishing each climate scenario include anomalies over the subpolar North Atlantic, Central Africa, and East Asia. Lastly, we evaluate data from two overshoot simulations that begin in either 2031 or 2040, which are a set of future simulations that were excluded from the ANN training process. For the rapid mitigation experiment that starts a decade earlier, we find that the ANN links its climate maps to the lowest emission scenario by the end of the 21st century (SSP1-1.9) in comparison to the more moderate scenario (SSP2-4.5) that is selected for the later mitigation experiment. 
Overall, this framework suggests that explainable machine learning could provide one possible strategy for assessing observations with future climate change pathways.
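A simple instance of local attribution is gradient × input, which for a linear classifier reduces to weight × input. The sketch below illustrates the idea on a toy logistic classifier rather than the study's ANN and SPEAR data; all names and values are made up.

```python
import math
import random
random.seed(0)

# Toy "climate maps" with 4 grid cells: the high-emission scenario warms
# cells 0 and 1; the natural-forcing scenario stays near zero everywhere.
def sample_map(scenario):
    cells = [random.gauss(0.0, 0.3) for _ in range(4)]
    if scenario == 1:
        cells[0] += 1.0
        cells[1] += 0.8
    return cells

data = [(sample_map(s), s) for s in [0, 1] * 200]

# Logistic-regression "scenario classifier" trained by gradient descent
w, b = [0.0] * 4, 0.0
for _ in range(300):
    for x, y in data:
        p = 1.0 / (1.0 + math.exp(-(sum(wi * xi for wi, xi in zip(w, x)) + b)))
        g = p - y                          # gradient of the cross-entropy loss
        w = [wi - 0.05 * g * xi for wi, xi in zip(w, x)]
        b -= 0.05 * g

# Gradient x input attribution: which cells drove this map's classification?
x = sample_map(1)
attribution = [wi * xi for wi, xi in zip(w, x)]
```

The attribution concentrates on the cells that carry the scenario signal, analogous to how the study's explainability maps highlight the subpolar North Atlantic, Central Africa, and East Asia.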

How to cite: Labe, Z., Delworth, T., Johnson, N., and Cooke, W.: Explainable AI for distinguishing future climate change scenarios, EGU General Assembly 2024, Vienna, Austria, 14–19 Apr 2024, EGU24-9110, https://doi.org/10.5194/egusphere-egu24-9110, 2024.

14:20–14:30
|
EGU24-10298
|
ECS
|
Highlight
|
On-site presentation
Philipp Hess, Maximilian Gelbrecht, Michael Aich, Baoxiang Pan, Sebastian Bathiany, and Niklas Boers

Accurately assessing precipitation impacts due to anthropogenic global warming relies on numerical Earth system model (ESM) simulations. However, the discretized formulation of ESMs, where unresolved small-scale processes are included as semi-empirical parameterizations, can introduce systematic errors in the simulations. These can, for example, lead to an underestimation of spatial intermittency and extreme events.
Generative deep learning has recently been shown to skillfully bias-correct and downscale precipitation fields from numerical simulations [1,2]. Using spatial context, these methods can jointly correct spatial patterns and summary statistics, outperforming established statistical approaches.
However, these approaches require separate training for each Earth system model individually, making corrections of large ESM ensembles computationally costly. Moreover, they only allow for limited control over the spatial scale at which biases are corrected and may suffer from training instabilities.
Here, we follow a novel diffusion-based generative approach [3, 4] by training an unconditional foundation model on the high-resolution target ERA5 dataset only. Using fully coupled ESM simulations of precipitation, we investigate the controllability of the generative process during inference to preserve spatial patterns of a given ESM field on different spatial scales.

[1] Hess, P., Drüke, M., Petri, S., Strnad, F. M., & Boers, N. (2022). Physically constrained generative adversarial networks for improving precipitation fields from Earth system models. Nature Machine Intelligence, 4(10), 828-839.

[2] Harris, L., McRae, A. T., Chantry, M., Dueben, P. D., & Palmer, T. N. (2022). A generative deep learning approach to stochastic downscaling of precipitation forecasts. Journal of Advances in Modeling Earth Systems, 14(10), e2022MS003120.

[3] Meng, C., He, Y., Song, Y., Song, J., Wu, J., Zhu, J. Y., & Ermon, S. (2021).  Sdedit: Guided image synthesis and editing with stochastic differential equations. arXiv preprint arXiv:2108.01073.

[4] Bischoff, T., & Deck, K. (2023). Unpaired Downscaling of Fluid Flows with Diffusion Bridges. arXiv preprint arXiv:2305.01822.

How to cite: Hess, P., Gelbrecht, M., Aich, M., Pan, B., Bathiany, S., and Boers, N.: Downscaling precipitation simulations from Earth system models with generative deep learning, EGU General Assembly 2024, Vienna, Austria, 14–19 Apr 2024, EGU24-10298, https://doi.org/10.5194/egusphere-egu24-10298, 2024.

14:30–14:40
|
EGU24-10759
|
ECS
|
Highlight
|
On-site presentation
Björn Lütjens, Noelle Selin, Andre Souza, Gosha Geogdzhayev, Dava Newman, Paolo Giani, Claudia Tebaldi, Duncan Watson-Parris, and Raffaele Ferrari

Motivation. Climate models are computationally so expensive that each model is run for only a very limited set of assumptions. In policy making, this computational complexity makes it difficult to rapidly explore the comparative impact of climate policies, such as quantifying the projected difference in local climate impacts between a 30 and a 45€ price on carbon (Lütjens et al., 2023). Recently, however, machine learning (ML) models have been used to emulate climate models, rapidly interpolating within existing climate datasets.

Related Works. Several deep learning models have been developed to emulate the impact of greenhouse gas emissions onto climate variables such as temperature and precipitation. Currently, the foundation model ClimaX with O(100M-1B) parameters is considered the best performer according to the benchmark datasets, ClimateSet and ClimateBenchv1.0 (Kaltenborn et al., 2023; Nguyen et al., 2023; Watson-Parris et al., 2022).

Results. We show that linear pattern scaling, a simple method with O(10K) parameters, is at least on par with the best models for some climate variables, as shown in Fig. 1. In particular, the ClimateBenchv1.0 annually-averaged and locally-resolved surface temperature, precipitation, and 90th-percentile precipitation can be well estimated with linear pattern scaling. Our research resurfaces the finding that temperature-dependent climate variables have a mostly linear relationship to cumulative CO2 emissions.

As a next step, we will identify the complex climate emulation tasks that are not addressed by linear models and might benefit from deep learning research. To do so, we will plot the data complexity per climate variable and discuss the difficulties ML faces with multiple spatiotemporal scales, irreversible dynamics, and internal variability. We will conclude with a list of tasks that demand more advanced ML models.

Conclusion. Most of the ML-based climate emulation efforts have focused on variables that can be well approximated by linear regression models. Our study reveals the solved and unsolved problems in climate emulation and provides guidance for future research directions.

Data and Methods. We use the ClimateBenchv1.0 dataset and will show additional results on ClimateSet and a CMIP climate model that contains many ensemble members. Our model fits one linear regression to map cumulative CO2 emissions, co2(t), to globally- and annually-averaged surface temperature, tas(t). Our model then fits one linear regression model per grid cell to map tas(t) onto 2.5° local surface temperature. Our model is time-independent and uses only co2(t) as input. Our analysis will be available at github.com/blutjens/climate-emulator-tutorial
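The two-stage pattern-scaling model described above can be sketched in a few lines; the emissions and temperature numbers below are made up for illustration, and the real model fits one stage-2 regression per 2.5° grid cell rather than per named region.

```python
def ols(x, y):
    """Closed-form simple linear regression y ~ a + b * x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx) ** 2 for xi in x))
    return my - b * mx, b

# Stage 1: cumulative CO2 emissions -> global-mean temperature anomaly
cum_co2    = [100.0, 200.0, 300.0, 400.0, 500.0]   # GtC, made up
tas_global = [0.20, 0.45, 0.61, 0.78, 1.02]        # K, made up
a0, b0 = ols(cum_co2, tas_global)

# Stage 2: one regression per grid cell, global tas -> local tas
cells = {"arctic":  [0.5, 1.0, 1.4, 1.7, 2.3],     # polar amplification
         "tropics": [0.1, 0.3, 0.4, 0.5, 0.7]}
local_fit = {name: ols(tas_global, series) for name, series in cells.items()}

def project(emissions):
    """Local warming anomaly for a given cumulative emission total."""
    tas = a0 + b0 * emissions
    return {name: a + b * tas for name, (a, b) in local_fit.items()}

proj = project(600.0)  # extrapolated local anomalies at 600 GtC
```

The entire emulator is two chained linear maps, which is exactly why its parameter count stays at O(10K) even at global grid resolution.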

References.

Kaltenborn, J. et al., (2023). ClimateSet: A Large-Scale Climate Model Dataset for Machine Learning, in NeurIPS Datasets and Benchmarks

Lütjens, B. (2023). Deep Learning Emulators for Accessible Climate Projections, Thesis, Massachusetts Institute of Technology.

Nguyen, T. et al., (2023). ClimaX: A foundation model for weather and climate, in ICML

Watson-Parris, D. et al. (2022). ClimateBenchv1.0: A Benchmark for Data-Driven Climate Projections, in JAMES

How to cite: Lütjens, B., Selin, N., Souza, A., Geogdzhayev, G., Newman, D., Giani, P., Tebaldi, C., Watson-Parris, D., and Ferrari, R.: Is linear regression all you need? Clarifying use-cases for deep learning in climate emulation, EGU General Assembly 2024, Vienna, Austria, 14–19 Apr 2024, EGU24-10759, https://doi.org/10.5194/egusphere-egu24-10759, 2024.

14:40–14:50
|
EGU24-3499
|
ECS
|
Highlight
|
Virtual presentation
James Briant, Dan Giles, Cyril Morcrette, and Serge Guillas

Underrepresentation of cloud formation is a known failing in current climate simulations. The coarse grid resolution required by the computational constraint of integrating over long time scales does not permit the inclusion of the underlying cloud-generating physical processes. This work employs a multi-output Gaussian Process (MOGP) trained on high-resolution Unified Model (UM) simulation data to predict the variability of the temperature and specific humidity fields within the climate model. A proof-of-concept study has been carried out in which a trained MOGP model is coupled in situ with a simplified Atmospheric General Circulation Model (AGCM) named SPEEDY. The temperature and specific humidity profiles of the SPEEDY outputs are perturbed at each timestep according to the predicted high-resolution-informed variability. Ten-year forecasts are generated for both the default SPEEDY and the ML-hybrid SPEEDY model, and the output fields are compared to ensure that the hybrid model predictions remain representative of Earth's atmosphere. Some changes in the precipitation, outgoing longwave, and shortwave radiation patterns are observed, indicating modelling improvements in the complex region surrounding India and the Indian Ocean.

How to cite: Briant, J., Giles, D., Morcrette, C., and Guillas, S.: A Hybrid Machine Learning Climate Simulation Using High Resolution Convection Modelling, EGU General Assembly 2024, Vienna, Austria, 14–19 Apr 2024, EGU24-3499, https://doi.org/10.5194/egusphere-egu24-3499, 2024.

14:50–15:00
|
EGU24-5103
|
ECS
|
On-site presentation
Nils Bochow, Anna Poltronieri, Martin Rypdal, and Niklas Boers

Historical records of climate fields are often sparse due to missing measurements, especially before the introduction of large-scale satellite missions. Several statistical and model-based methods have been introduced to fill gaps and reconstruct historical records. Here, we employ a recently introduced deep-learning approach based on Fourier convolutions, trained on numerical climate model output, to reconstruct historical climate fields. Using this approach, we are able to realistically reconstruct large and irregular areas of missing data, and to reconstruct known historical events such as strong El Niño and La Niña events from very little given information. Our method outperforms the widely used statistical kriging method, as well as other recent machine learning approaches. The model generalizes to higher resolutions than those it was trained on and can be used on a variety of climate fields. Moreover, it allows inpainting of masks never seen during model training.

How to cite: Bochow, N., Poltronieri, A., Rypdal, M., and Boers, N.: Reconstructing Historical Climate Fields With Deep Learning, EGU General Assembly 2024, Vienna, Austria, 14–19 Apr 2024, EGU24-5103, https://doi.org/10.5194/egusphere-egu24-5103, 2024.

15:00–15:10
|
EGU24-3614
|
On-site presentation
Martin Wegmann and Fernando Jaume-Santero

Understanding atmospheric variability is essential for adapting to future climate extremes. Key ways to do this are analysing climate field reconstructions and reanalyses. However, producing such reconstructions can be limited by high production costs, unrealistic linearity assumptions, or the uneven distribution of local climate records.

Here, we present a machine learning-based non-linear climate variability reconstruction method using a Recurrent Neural Network that is able to learn from existing model outputs and reanalysis data. As a proof-of-concept, we reconstructed more than 400 years of global, monthly temperature anomalies based on sparse, realistically distributed pseudo-station data.

Our reconstructions show realistic temperature patterns and magnitudes, at a cost of about one hour of computation on a mid-range laptop. We highlight the method's capability in terms of mean statistics compared to more established methods and find that it is also suited to reconstructing specific climate events. This approach can easily be adapted to a wide range of regions, periods, and variables. As additional work in progress, we show output of this approach for reconstructing European weather in 1807, including the extreme summer heatwave of that year.

How to cite: Wegmann, M. and Jaume-Santero, F.: From climate to weather reconstruction with inexpensive neural networks, EGU General Assembly 2024, Vienna, Austria, 14–19 Apr 2024, EGU24-3614, https://doi.org/10.5194/egusphere-egu24-3614, 2024.

15:10–15:20
|
EGU24-12141
|
Highlight
|
On-site presentation
Balasubramanya Nadiga and Kaushik Srinivasan

This study focuses on the application of machine learning techniques to better characterize the predictability of the spatiotemporal variability of sea surface temperature (SST) on the basin scale. Both sub-seasonal variability, including extreme events (cf. marine heatwaves), and interannual variability are considered.

We rely on dimensionality reduction techniques (linear principal component analysis (PCA), nonlinear autoencoders, and their variants) and then perform the actual prediction tasks in the corresponding latent space using disparate methodologies, ranging from linear inverse modeling (LIM) to reservoir computing (RC) and attention-based transformers.

After comparing performance, we examine various issues, including the role of generalized synchronization in RC and the implicit memory of RC versus the explicit long-term memory of transformers, with the broad aim of shedding light on the effectiveness of these techniques in the context of data-driven climate prediction.

How to cite: Nadiga, B. and Srinivasan, K.: Climate Prediction in Reduced Dimensions: A Comparative Analysis of Linear Inverse Modeling, Reservoir Computing and Attention-based Transformers, EGU General Assembly 2024, Vienna, Austria, 14–19 Apr 2024, EGU24-12141, https://doi.org/10.5194/egusphere-egu24-12141, 2024.

15:20–15:30
|
EGU24-17601
|
ECS
|
On-site presentation
Maura Dewey, Annica Ekman, Duncan Watson-Parris, and Hans Christen Hansson

Here we develop a machine learning emulator based on the Norwegian Earth System Model (NorESM) to predict regional climate responses to aerosol emissions and use it to study the sensitivity of surface temperature to anthropogenic emission changes in key policy regions. Aerosol emissions have both an immediate local effect on air quality, and regional effects on climate in terms of changes to temperature and precipitation distributions via direct radiative impacts and indirect cloud-aerosol interactions. Regional climate change depends on a balance between aerosol and greenhouse gas forcing, and in particular extreme events are very sensitive to changes in aerosol emissions. Our goal is to provide a tool which can be used to test the impacts of policy-driven emission changes efficiently and accurately, while retaining the spatio-temporal complexity of the larger physics-based Earth System Model.

How to cite: Dewey, M., Ekman, A., Watson-Parris, D., and Hansson, H. C.: Machine learning aerosol impacts on regional climate change., EGU General Assembly 2024, Vienna, Austria, 14–19 Apr 2024, EGU24-17601, https://doi.org/10.5194/egusphere-egu24-17601, 2024.

15:30–15:40
|
EGU24-10262
|
ECS
|
Highlight
|
Virtual presentation
Henry Addison, Elizabeth Kendon, Suman Ravuri, Laurence Aitchison, and Peter Watson

High resolution projections are useful for planning climate change adaptation [1] but are expensive to produce using physical simulations. We make use of a state-of-the-art generative machine learning (ML) method, a diffusion model [2], to predict variables from a km-scale model over England and Wales. This is trained to emulate daily mean output from the Met Office 2.2km UK convection-permitting model (CPM) [3], averaged to 8.8km scale for initial testing, given coarse-scale (60km) weather states from the Met Office HadGEM3 general circulation model. This achieves downscaling at much lower computational cost than running the CPM, and when trained to predict precipitation the emulator produces samples with realistic spatial structure [4, 5]. We show the emulator learns to represent climate change over the 21st century. We present some diagnostics indicating that there is skill for extreme events with ~100 year return periods, as is necessary to inform decision-making. This is made possible by training the model on ~500 years of CPM data (48 years from each of 12 ensemble members). We also show the method can be useful in scenarios with limited high-resolution data. The method is stochastic and we find that it produces a well-calibrated spread of high resolution precipitation samples for given large-scale conditions, which is highly important for correctly representing extreme events.

Furthermore, we are extending this method to generate coherent multivariate samples including other impact-relevant variables (e.g. 2m temperature, 2m humidity and 10m wind). We will show the model’s performance at producing samples with coherent structure across all the different variables and its ability to represent extremes in multivariate climate impact indices.

References

[1] Kendon, E. J. et al. (2021). Update to the UKCP Local (2.2km) projections. Science report, Met Office Hadley Centre, Exeter, UK. [Online]. Available: https://www.metoffice.gov.uk/pub/data/weather/uk/ukcp18/science-reports/ukcp18_local_update_report_2021.pdf

[2] Song, Y. et al. (2021). Score-Based Generative Modeling through Stochastic Differential Equations. ICLR.

[3] Kendon, E. J., Fischer, E., and Short, C. J. (2023). Variability conceals emerging trend in 100yr projections of UK local hourly rainfall extremes. Nature Communications, doi: 10.1038/s41467-023-36499-9

[4] Addison, H., Kendon, E., Ravuri, S., Aitchison, L., and Watson, P. A. G. (2022). Machine learning emulation of a local-scale UK climate model. arXiv preprint arXiv:2211.16116.

[5] Addison, H., Kendon, E., Ravuri, S., Aitchison, L., and Watson, P. (2023). Downscaling with a machine learning-based emulator of a local-scale UK climate model, EGU General Assembly 2023, Vienna, Austria, 24–28 Apr 2023, EGU23-14253, https://doi.org/10.5194/egusphere-egu23-14253

How to cite: Addison, H., Kendon, E., Ravuri, S., Aitchison, L., and Watson, P.: Machine learning-based emulation of a km-scale UK climate model, EGU General Assembly 2024, Vienna, Austria, 14–19 Apr 2024, EGU24-10262, https://doi.org/10.5194/egusphere-egu24-10262, 2024.

Posters on site: Wed, 17 Apr, 10:45–12:30 | Hall X5

Display time: Wed, 17 Apr, 08:30–Wed, 17 Apr, 12:30
X5.137
|
EGU24-1245
Yuan Sun, Zhihao Feng, Wei Zhong, Hongrang He, Shilin Wang, Yao Yao, Yalan Zhang, and Zhongbao Bai

Tropical cyclones (TCs) seriously threaten the safety of human life and property, especially when approaching the coast or making landfall. Robust, long-lead predictions are valuable for managing policy responses. However, despite decades of effort, seasonal prediction of TCs remains a challenge. Here, we introduce a deep-learning prediction model that makes skillful seasonal predictions of TC track density in the Western North Pacific (WNP) during the typhoon season, with a lead time of up to four months. To overcome the limited availability of observational data, we use TC tracks from CMIP5 and CMIP6 climate models as the training data, followed by a transfer-learning method to train a fully convolutional neural network named SeaUnet. Through the deep-learning process (i.e., heat map analysis), SeaUnet identifies physically based precursors. We show that SeaUnet performs well for typhoon track density, outperforming state-of-the-art dynamic systems. The success of SeaUnet indicates its potential for operational use.

How to cite: Sun, Y., Feng, Z., Zhong, W., He, H., Wang, S., Yao, Y., Zhang, Y., and Bai, Z.: Seasonal prediction of typhoon track density using deep learning based on the CMIP datasets, EGU General Assembly 2024, Vienna, Austria, 14–19 Apr 2024, EGU24-1245, https://doi.org/10.5194/egusphere-egu24-1245, 2024.

X5.138
|
EGU24-2552
Midhun Murukesh and Pankaj Kumar

Deep learning methods have emerged as a potential alternative for the complex problem of climate data downscaling. Precipitation downscaling is challenging due to its stochasticity, skewness, and sparse extreme values. Extreme values are also essential to preserve when downscaling and extrapolating future climate projections, as they serve as critical signals for impact assessments. This research examines the usefulness of a deep learning method designed for gridded precipitation downscaling, focusing on how well it generalizes and transfers what it learns. The study configures and evaluates a deep learning-based super-resolution neural network called the Super-Resolution Deep Residual Network (SRDRN). Several synthetic experiments are designed to assess its performance over four geographically and climatologically distinct domain boxes over the Indian subcontinent: Central India (CI), the Southern Peninsula (SP), the Northwest (NW), and the Northeast (NE). Following training on a set of samples from CI, SP, and NW, the performance of the models is evaluated against Bias Correction and Spatial Disaggregation (BCSD), a well-established statistical downscaling method. NE is a transfer domain where the trained SRDRN models are applied directly without additional training or fine-tuning. Several objective evaluation metrics, such as the Kling-Gupta Efficiency (KGE) score, root mean squared error, mean absolute relative error, and percentage bias, are chosen for the evaluation of SRDRN. The systematic assessment of SRDRN models (KGE ~0.9) across these distinct regions reveals a substantial superiority of SRDRN over the BCSD method (KGE ~0.7) in downscaling and reconstructing precipitation rates during the test period, while preserving extreme values with high precision.
In conclusion, SRDRN proves to be a promising alternative for the statistical downscaling of gridded precipitation.

Keywords: Precipitation, Statistical downscaling, Deep learning, Transfer learning, SRDRN
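For readers unfamiliar with the headline metric, the KGE scores quoted above (~0.9 vs ~0.7) combine correlation, variability ratio, and bias ratio. A minimal sketch on synthetic rain-like series (illustrative data, not the study's):

```python
import numpy as np

def kling_gupta_efficiency(sim, obs):
    """KGE = 1 - sqrt((r-1)^2 + (alpha-1)^2 + (beta-1)^2), where r is the
    correlation, alpha the ratio of standard deviations, and beta the
    ratio of means between simulation and observation."""
    r = np.corrcoef(sim, obs)[0, 1]
    alpha = np.std(sim) / np.std(obs)
    beta = np.mean(sim) / np.mean(obs)
    return 1.0 - np.sqrt((r - 1) ** 2 + (alpha - 1) ** 2 + (beta - 1) ** 2)

rng = np.random.default_rng(1)
obs = rng.gamma(2.0, 5.0, size=1000)           # skewed, rain-like series
perfect = obs.copy()
biased = 0.7 * obs + rng.normal(0, 2, 1000)    # degraded "downscaled" series

print(kling_gupta_efficiency(perfect, obs))    # 1.0 for a perfect match
print(kling_gupta_efficiency(biased, obs))     # noticeably below 1.0
```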

How to cite: Murukesh, M. and Kumar, P.: Downscaling and reconstruction of high-resolution precipitation fields using a deep residual neural network: An assessment over Indian subcontinent, EGU General Assembly 2024, Vienna, Austria, 14–19 Apr 2024, EGU24-2552, https://doi.org/10.5194/egusphere-egu24-2552, 2024.

X5.139
|
EGU24-3640
|
ECS
Victor Carreira, Milena Silva, Igor Venancio, André Belem, Igor Viegas, André Spigolon, Ana Luiza Albuquerque, and Pedro Vitor

Shales are important rocks that store significant amounts of organic content. In this work, we present realistic synthetic simulations based on real-scaled geological sections. The case study is the Santos Sedimentary Basin, a well-known and well-studied geologic basin. The synthetic data improve the performance of our AI-based TOC estimators and reduce the costs and resources required for data acquisition. The work consists of reconstructing a pseudo-well located in a fracture zone modelled through an accurate 2D geological section. To simulate the effects of a fracture zone on geophysical logging data, we apply a law of mixtures based on well-drilling concepts, which imposes geometric conditions on the set of subsurface rock packages. We generated four rock packs belonging to two mixed classes. Tests with noisy synthetic data produced from the geological section were developed and classified using the proposed method (Carreira et al., 2024). First, we address a more controlled problem and simulate well-log data directly from an interpreted geologic cross-section. We then define two training data sets: one composed of density (RHOB), sonic (DT), spontaneous potential (SP) and gamma-ray (GR) logs, and the other of total organic carbon (TOC), spontaneous potential (SP), density (RHOB) and photoelectric effect (PE) logs, all simulated through a Gaussian distribution function per lithology. The sonic profile is essential not only for estimating rock porosity but also for in-depth simulations of total organic carbon (TOC) in the geological units cut by the synthetic wells. Since most exploration wells lack this log and new acquisitions are not economically viable, nonlinear regression models were used to estimate the sonic profile, which proved to be an important feature.
We estimate the observed total organic carbon (TOC) using Passey and Wang's (2016) methodology to provide input data for the k-means classification model. The proposed synthetic model showed promising results, indicating that linear dependency may underlie the k-means shale classification.
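The final classification step, k-means on per-lithology Gaussian-simulated logs, can be sketched as follows. All log values below are illustrative placeholders, not calibrated to the Santos Basin, and a hand-rolled Lloyd's iteration stands in for a library implementation.

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic log samples for two lithology classes, each log drawn from a
# Gaussian per lithology (values are illustrative only).
# Columns: RHOB (g/cc), GR (API), SP (mV), TOC (wt%).
shale = rng.normal([2.45, 120.0, -40.0, 4.0], [0.05, 15.0, 10.0, 1.0], (200, 4))
sand  = rng.normal([2.20,  40.0, -80.0, 0.5], [0.05, 10.0, 10.0, 0.3], (200, 4))
X = np.vstack([shale, sand])

# Standardize so no single log dominates the Euclidean distance.
X = (X - X.mean(axis=0)) / X.std(axis=0)

# Minimal Lloyd's k-means with k=2.
centers = X[rng.choice(len(X), 2, replace=False)]
for _ in range(50):
    labels = np.argmin(((X[:, None] - centers) ** 2).sum(-1), axis=1)
    centers = np.array([X[labels == c].mean(axis=0) for c in (0, 1)])

# The two well-separated synthetic lithologies should split almost perfectly.
purity = max(labels[:200].mean(), 1 - labels[:200].mean())
print(purity)
```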

How to cite: Carreira, V., Silva, M., Venancio, I., Belem, A., Viegas, I., Spigolon, A., Albuquerque, A. L., and Vitor, P.: Exploiting Pseudo Wells in a Synthetic Sedimentary Basin: a simulation in the Santos Off-Shore Basin in the Southeast Atlantic portion of Brazil, using synthetic TOC for k-means classification., EGU General Assembly 2024, Vienna, Austria, 14–19 Apr 2024, EGU24-3640, https://doi.org/10.5194/egusphere-egu24-3640, 2024.

X5.140
|
EGU24-4507
|
ECS
George Kontsevich and Ludvig Löwemark

As communities observe recurring regional weather patterns, they often ascribe colloquial names to them, such as the Meiyu in East Asia or the Santa Ana winds of California. However, attaching quantitative characterizations to these names often proves challenging. Classically, heuristics have been developed for particular locations and climate phenomena, but their inherent subjectivity undermines the robustness of any subsequent quantitative analysis. To develop a neutral, universal mesoscale metric, we start by observing that the spatial distribution of rain in a given region is controlled by the interplay between meteorological parameters (humidity, wind, pressure, etc.) and the Earth's topography. As a result, each recurring climatic phenomenon exhibits a unique regional signature/distribution. Unlike at the synoptic scale, mesoscale climate patterns are largely stationary, and the accumulation of two decades of high-resolution satellite observations means that these patterns can now be reliably extracted numerically. The key additional observation is that at the mesoscale, climate phenomena typically have either one or two non-co-occurring stationary states. This allows us to isolate patterns by a simple bifurcation of the subspace spanned by the first two singular vectors. The end result behaves like a trivial Empirical Orthogonal Function (EOF) rotation with a clear interpretation: it isolates the climate patterns as basis vectors and allows us to subsequently estimate the presence of the climate phenomena at arbitrary timescales. As a case study, we use gridded precipitation data from NASA's Global Precipitation Measurement (GPM) mission (compiled into the IMERG dataset) over several regions and timescales of particular interest.
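The pattern-isolation idea (project onto the first two singular vectors, then bifurcate the subspace) can be sketched on a toy two-regime precipitation record. The patterns, regime rule, and sign-based split below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(7)

# Toy precipitation record: 300 days over 50 grid cells, with two
# non-co-occurring regional patterns (e.g., "monsoon" vs "frontal" rain).
p1 = np.exp(-((np.arange(50) - 15) ** 2) / 50.0)   # pattern A
p2 = np.exp(-((np.arange(50) - 35) ** 2) / 50.0)   # pattern B
state = rng.random(300) < 0.5                      # which regime is active
amp = rng.gamma(2.0, 1.0, 300)                     # daily rain amplitude
P = np.where(state[:, None], amp[:, None] * p1, amp[:, None] * p2)
P += 0.05 * rng.random((300, 50))                  # small background noise

# Project onto the subspace of the first two singular vectors.
Pa = P - P.mean(axis=0)
U, s, Vt = np.linalg.svd(Pa, full_matrices=False)
scores = Pa @ Vt[:2].T

# Bifurcate the 2-D subspace: here the sign of the leading score separates
# the two stationary states (SVD sign is arbitrary, so check both).
regime = scores[:, 0] > 0
agreement = max((regime == state).mean(), (regime != state).mean())
print(agreement)   # days sort cleanly into the two patterns
```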

How to cite: Kontsevich, G. and Löwemark, L.: Using IMERG precipitation patterns to index climate at the mesoscale: A basis rotation method based on climate bistability - an update, EGU General Assembly 2024, Vienna, Austria, 14–19 Apr 2024, EGU24-4507, https://doi.org/10.5194/egusphere-egu24-4507, 2024.

X5.141
|
EGU24-5033
|
ECS
Samantha Biegel, Konrad Schindler, and Benjamin Stocker

Land ecosystems play an important role in the carbon cycle, and hence the climate system. The engine of this cycle is Gross Primary Production (GPP), the assimilation of CO2 via photosynthesis at the ecosystem scale. Photosynthesis is directly affected by rising CO2 levels which, in turn, is expected to increase GPP and alter the dynamics of the carbon cycle. However, there is substantial uncertainty about the magnitude and geographical variability of the CO2 fertilisation effect (CFE) on GPP.

We use a large collection of eddy covariance measurements (317 sites, 2226 site-years), paired with remotely sensed information of vegetation greenness to estimate the effect of rising CO2 levels on GPP. We propose a hybrid modelling architecture, combining a physically-grounded process model based on eco-evolutionary optimality theory and a deep learning model. The intuition is that the process model represents the current understanding of the CFE, whereas the deep learning model does not implement explicit physical relations but has a higher capacity to learn effects of large and fast variations in the light, temperature, and moisture environment. The hybrid model is set up to learn a correction on the theoretically expected CFE. This makes it more effective in distilling the relatively small and gradual CFE. 
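The residual-correction idea of this hybrid architecture can be sketched as follows. Here a simple saturating CO2 response stands in for the optimality-based process model, and ridge regression on temperature features stands in for the deep learning component; both are illustrative substitutions, not the authors' models.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy "observed" GPP: a saturating CO2 response plus a fast temperature
# effect that the simple process model below does not capture.
co2 = np.linspace(340, 420, 400)                   # ppm
temp = 15 + 5 * np.sin(np.linspace(0, 20, 400))    # degC, fast variation
gpp_obs = (10 * co2 / (co2 + 300)
           + 0.3 * (temp - 15)
           + 0.2 * rng.standard_normal(400))

# Process-model term: the theoretically expected CO2 response only.
gpp_proc = 10 * co2 / (co2 + 300)

# Data-driven correction trained on the residual (ridge regression on
# simple temperature features stands in for the deep-learning component).
features = np.column_stack([temp - 15, (temp - 15) ** 2, np.ones_like(temp)])
resid = gpp_obs - gpp_proc
w = np.linalg.solve(features.T @ features + 1e-3 * np.eye(3), features.T @ resid)
gpp_hybrid = gpp_proc + features @ w

print(np.mean((gpp_obs - gpp_proc) ** 2))     # process model alone
print(np.mean((gpp_obs - gpp_hybrid) ** 2))   # hybrid: residual error shrinks
```

Because the correction only has to explain what the process model misses, the gradual CO2 signal stays in the interpretable physical term.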

Our study investigates inherent limitations of different models when it comes to drawing conclusions about the CO2 fertilisation effect. Often, these limitations are due to the presence of latent confounders that give rise to spurious correlations. A promising avenue to address them is therefore the use of causal inference techniques. We show that one way to investigate causality is to test whether the trained hybrid model and its estimate of the CFE is stable across different ecosystems, as expected for a causal physical relation. 

In summary, we study how causal inference, based on a combination of physics-informed and statistical modelling, can contribute to more reliable estimates of the CO2 fertilisation effect, derived from ecosystem flux measurements.

How to cite: Biegel, S., Schindler, K., and Stocker, B.: Causal inference of the CO2 fertilisation effect from ecosystem flux measurements, EGU General Assembly 2024, Vienna, Austria, 14–19 Apr 2024, EGU24-5033, https://doi.org/10.5194/egusphere-egu24-5033, 2024.

X5.142
|
EGU24-5616
|
ECS
Filippo Dainelli, Guido Ascenso, Enrico Scoccimarro, Matteo Giuliani, and Andrea Castelletti

Tropical Cyclones (TCs) are synoptic-scale, rapidly rotating storm systems primarily driven by air-sea heat and moisture exchanges. They are among the deadliest geophysical hazards, causing substantial economic losses and numerous fatalities due to their associated strong winds, heavy precipitation, and storm surges, leading to coastal and inland flooding. Because of the severe consequences of their impacts, accurately predicting the occurrence, intensity, and trajectory of TCs is of crucial socio-economic importance. Over the past few decades, advancements in Numerical Weather Prediction models, coupled with the availability of high-quality observational data from past events, have increased the accuracy of short-term forecasts of TC tracks and intensities. However, this level of improvement has not yet been mirrored in long-term climate predictions and projections. This can be attributed to the substantial computational resources required for running high-resolution climate models with numerous ensemble members over long periods. Additionally, the physical processes underlying TC formation are still poorly understood. To overcome these challenges, the future occurrence of TCs can instead be studied using indices, known as Genesis Potential Indices (GPIs), which correlate the likelihood of Tropical Cyclone Genesis (TCG) with large-scale environmental factors instrumental in their formation. GPIs are generally constructed as a product of atmospheric and oceanic variables accounting both for dynamic and thermodynamic processes. The variables are combined with coefficients and exponents numerically determined from past TC observations. Despite reproducing the spatial pattern and the seasonal cycle of observed TCs, GPIs fail to capture the inter-annual variability and exhibit inconsistent long-term trends.

In this work, we propose a new way to formulate these indices by using Machine Learning. Specifically, we forego all previously empirically determined coefficients and exponents and consider all the dynamic and thermodynamic factors incorporated into various indices documented in the literature. Then, using feature selection algorithms, we identify the most significant variables to explain TCG. Our analysis incorporates atmospheric variables as candidate factors to discern whether they inherently possess predictive signals for TCG. Furthermore, we also consider several climate indices that have been demonstrated to be related to TCG at the ocean basin scale. Recognizing that each factor and teleconnection has a distinct impact on TCG, we tailored our analysis to individual ocean basins. Consequently, our final model comprises a series of sub-models, each corresponding to a different tropical region. These sub-models estimate the distribution of TCG using distinct inputs, which are determined based on the outcomes of the basin-specific feature selection process. Preliminary findings indicate that the feature selection process yields distinct inputs for each ocean basin.

How to cite: Dainelli, F., Ascenso, G., Scoccimarro, E., Giuliani, M., and Castelletti, A.: Rethinking Tropical Cyclone Genesis Potential Indices via Feature Selection, EGU General Assembly 2024, Vienna, Austria, 14–19 Apr 2024, EGU24-5616, https://doi.org/10.5194/egusphere-egu24-5616, 2024.

X5.143
|
EGU24-5845
|
ECS
Advanced soil moisture forecasting using Cosmic Ray Neutron Sensor and Artificial Intelligence (AI)
(withdrawn after no-show)
Sanaraiz Ullah Khan, Hami Said Ahmed, Modou Mbaye, and Arsenio Toloza
X5.144
|
EGU24-6282
|
ECS
Yi Wang

In the context of global warming, changes in extreme weather events may pose a growing threat to society. It is therefore particularly important to improve our climatological understanding of high-impact precipitation types (PTs), and how their frequency may change under warming. In this study, we use MIDAS (the Met Office Integrated Data Archive System) observational data to provide our best estimate of historical PTs (e.g. liquid rain, freezing rain, snow, etc.) over China. We apply machine learning (ML) techniques and meteorological analysis methods to the ERA5 historical climate reanalysis to find the best variables for diagnosing PTs, and form training and testing sets for ML training. We evaluate the diagnostic ability of the Random Forest Classifier (RFC) for different PTs. The results show that, when using meteorological variables such as temperature, relative humidity, and winds to determine different PTs, ERA5 grid data and MIDAS station data match well. Comparing the feature selection results with Kernel Density Estimation, we find that the two methods give consistent assessments of the ability of variables to distinguish different PTs. The RFC shows strong robustness in predicting different PTs by learning the differences in meteorological variables between 1990 and 2014. It captures the frequency and spatial distribution of different PTs well, but this ability is sensitive to how the algorithm is trained. In addition, the algorithm struggles to identify events such as hail that are very rare in observations. According to tests for different regions and seasons in China, models trained on seasonal data samples perform relatively well, especially in winter.
These results show the potential for combining an RFC with state-of-the-art climate models to effectively project the possible response of different PT frequencies to future climate warming. However, the training method of the ML algorithm should be selected with caution.
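An RFC-based diagnosis of this kind can be sketched with scikit-learn on synthetic station samples. The variables, the snow/rain threshold, and the two-class setup below are simplifications for illustration, not the study's configuration:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Toy synthetic "station" samples: diagnose precipitation type from
# near-surface temperature (degC) and relative humidity (%).
n = 2000
t2m = rng.normal(2.0, 6.0, n)
rh = rng.uniform(60, 100, n)
# Simple synthetic truth: snow below ~0.5 degC, rain above (with noise).
ptype = ((t2m + rng.normal(0, 0.5, n)) > 0.5).astype(int)  # 0=snow, 1=rain

X = np.column_stack([t2m, rh])
rf = RandomForestClassifier(n_estimators=100, random_state=0)
rf.fit(X[:1500], ptype[:1500])
acc = rf.score(X[1500:], ptype[1500:])
print(acc)   # high skill where the diagnostic signal is clear
```

Rare classes such as hail would contribute few samples to `fit`, which is one way to see the class-imbalance problem the abstract reports.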

How to cite: Wang, Y.: Identifying precipitation types over China using a machine learning algorithm, EGU General Assembly 2024, Vienna, Austria, 14–19 Apr 2024, EGU24-6282, https://doi.org/10.5194/egusphere-egu24-6282, 2024.

X5.145
|
EGU24-6924
Ronghua Zhang

The tropical Pacific experienced triple La Nina conditions during 2020-22, and the future evolution of the climate condition in the region has received extensive attention. Recent observations and studies indicate that an El Nino condition is developing, with its peak stage in late 2023, but large uncertainties still exist. Here, a transformer-based deep learning model is adopted to make predictions of the 2023-24 climate condition in the tropical Pacific. This purely data-driven model is configured so that upper-ocean temperature at seven depths and zonal and meridional wind stress fields are used as input predictors and output predictands, representing the ocean-atmosphere interactions that take the form of the Bjerknes feedback and providing a physical basis for predictability. As in dynamical models, the prediction procedure is executed in a rolling manner: multi-month 3D temperature fields as well as surface winds are simultaneously preconditioned as input predictors in the prediction. This transformer model has been demonstrated to outperform other state-of-the-art dynamical models in retrospective prediction cases. Real-time predictions indicate that El Nino conditions in the tropical Pacific peak in late 2023. The underlying processes are further analyzed by conducting sensitivity experiments with the transformer model, in which initial fields of surface winds and upper-ocean temperature are purposely adjusted to illustrate the changes in prediction skill. A comparison with other dynamical coupled models is also made.
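The rolling prediction procedure can be sketched with a linear one-step model standing in for the transformer; the index, window length, and lead time below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy ENSO-like index: a noisy oscillation, used to illustrate rolling
# (autoregressive) prediction with a fitted one-step model.
t = np.arange(600)
x = np.sin(2 * np.pi * t / 48) + 0.1 * rng.standard_normal(600)

# Fit a one-step predictor on windows of the previous 12 "months".
L = 12
W = np.array([x[i:i + L] for i in range(len(x) - L)])
y = x[L:]
w = np.linalg.lstsq(W[:500], y[:500], rcond=None)[0]

# Rolling prediction: each forecast is fed back in as the newest input,
# exactly as multi-month fields are recycled in the rolling procedure.
window = list(x[500:500 + L])
preds = []
for _ in range(12):                 # 12-month lead
    p = np.dot(w, window)
    preds.append(p)
    window = window[1:] + [p]

err = np.abs(np.array(preds) - x[500 + L:500 + L + 12]).mean()
print(err)
```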

How to cite: Zhang, R.: A purely data-driven transformer model for real-time predictions of the 2023-24 climate condition in the tropical Pacific, EGU General Assembly 2024, Vienna, Austria, 14–19 Apr 2024, EGU24-6924, https://doi.org/10.5194/egusphere-egu24-6924, 2024.

X5.146
|
EGU24-8010
|
ECS
Julia Garcia Cristobal, Jean Wurtz, and Valéry Masson

Predicting the weather in urban environments is a complex task because of the highly heterogeneous nature of the urban structure. However, many issues are inherent in urban meteorology, such as thermal comfort and buildings' energy consumption. These stakes are linked to highly heterogeneous meteorological variables within the city, such as temperature, humidity, wind and net radiative flux, and to city characteristics such as building uses and properties. State-of-the-art meteorological models with hectometric resolution, such as the Meso-NH research model (Lac et al. 2018), can provide accurate forecasts of urban meteorology. However, they require too much computing power to be deployed operationally. Statistical downscaling techniques are machine learning methods that estimate a fine-resolution field from one or several lower-resolution fields. ARPEGE, the operational global model of Météo-France, runs at a resolution of 5 km over France. Using Meso-NH simulations covering Paris and the Île-de-France region, a statistical downscaling has been carried out to obtain a temperature field at 300 m resolution from outputs of the ARPEGE model at 5 km. The deduced temperature reproduces the urban heat island and the temperature heterogeneity simulated by Meso-NH. The estimated temperature field represents the links between temperature and topography as well as the sharp gradients between the city and urban parks.

 

Lac et al. 2018 : https://doi.org/10.5194/gmd-11-1929-2018

How to cite: Garcia Cristobal, J., Wurtz, J., and Masson, V.: Statistical Downscaling for urban meteorology at hectometric scale, EGU General Assembly 2024, Vienna, Austria, 14–19 Apr 2024, EGU24-8010, https://doi.org/10.5194/egusphere-egu24-8010, 2024.

X5.147
|
EGU24-8955
|
ECS
Guido Ascenso, Giulio Palcic, Enrico Scoccimarro, Matteo Giuliani, and Andrea Castelletti

Tropical cyclones (TCs) are among the costliest and deadliest natural disasters worldwide. The destructive potential of a TC is usually modelled as a power of its maximum sustained wind speed, making TC intensity estimation (TCIE) an active area of research. Indeed, TCIE has improved steadily in recent years, especially as researchers moved from subjective methods based on hand-crafted features to methods based on deep learning, which are now solidly established as the state of the art.

However, the datasets used for TCIE, which are typically collections of satellite images of TCs, often have two major issues: they are relatively small (usually ≤ 40,000 samples), and they are highly imbalanced, with orders of magnitude more samples for weak TCs than for intense ones. Together, these issues make it hard for deep learning models to estimate the intensity of the strongest TCs. To mitigate these issues, researchers often use a family of Computer Vision techniques known as “data augmentation”—transformations (e.g., rotations) applied to the images in the dataset that create similar, synthetic samples. The way these techniques have been used in TCIE studies has been largely unexamined and potentially problematic. For instance, some authors flip images horizontally to generate new samples, while others avoid doing so because it would cause images from the Northern Hemisphere to look like images from the Southern Hemisphere, which they argue would confuse the model. The effectiveness or potentially detrimental effects of this and other data augmentation techniques for TCIE have never been examined, as authors typically borrow their data augmentation strategies from established fields of Computer Vision. However, data augmentation techniques are highly sensitive to the task for which they are used and should be optimized accordingly. Furthermore, it remains unclear how to properly use data augmentation for TCIE to alleviate the imbalance of the datasets.

In our work, we explore how best to perform data augmentation for TCIE using an off-the-shelf deep learning model, focusing on two objectives:

  • Determining how much augmentation is needed and how to distribute it across the various classes of TC intensity. To do so, we use a modified Gini coefficient to guide the amount of augmentation to be done. Specifically, we aim to augment the dataset more for more intense (and therefore less represented) TCs. Our goal is to obtain a dataset that, when binned according to the Saffir-Simpson scale, is as close to a uniform distribution as possible (i.e., all classes of intensity are equally represented).
  • Evaluating which augmentation techniques are best for deep learning-based TCIE. To achieve this, we use a simple feature selection algorithm called backwards elimination, which leads us to find an optimal set of data augmentations to be used. Furthermore, we explore the optimal parameter space for each augmentation technique (e.g., by what angles images should be rotated).
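The class-balancing objective in the first point can be sketched as follows. The class counts, patch contents, and target of exact uniformity are illustrative, and 90-degree rotations plus horizontal flips stand in for the full augmentation set under study:

```python
import numpy as np

rng = np.random.default_rng(5)

# Toy imbalanced dataset: 32x32 "satellite" patches binned into three
# intensity classes with orders-of-magnitude imbalance.
counts = {0: 900, 1: 90, 2: 10}
images = {c: rng.random((n, 32, 32)) for c, n in counts.items()}

# Augment each class up to the size of the largest one using 90-degree
# rotations and horizontal flips, so the binned class counts become uniform.
target = max(counts.values())
balanced = {}
for c, imgs in images.items():
    out = [imgs]
    while sum(len(a) for a in out) < target:
        k = rng.integers(1, 4)                     # 1-3 quarter turns
        aug = np.rot90(imgs, k=k, axes=(1, 2))
        if rng.random() < 0.5:
            aug = aug[:, :, ::-1]                  # horizontal flip
        out.append(aug)
    balanced[c] = np.concatenate(out)[:target]

print([len(balanced[c]) for c in sorted(balanced)])  # [900, 900, 900]
```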

Overall, our work provides the first in-depth analysis of the effects of data augmentation for deep learning-based TCIE, establishing a framework to use these techniques in a way that directly addresses highly imbalanced datasets.

How to cite: Ascenso, G., Palcic, G., Scoccimarro, E., Giuliani, M., and Castelletti, A.: A Systematic Framework for Data Augmentation for Tropical Cyclone Intensity Estimation Using Deep Learning, EGU General Assembly 2024, Vienna, Austria, 14–19 Apr 2024, EGU24-8955, https://doi.org/10.5194/egusphere-egu24-8955, 2024.

X5.148
|
EGU24-10156
Daniela Flocco, Ester Piegari, and Nicola Scafetta

Maps of land surface temperature of the area of Naples (Southern Italy) show large spatial variation of temperature anomalies. In particular, the metropolitan area of Naples is generally characterized by higher temperatures than the rest of the area considered.

Since heat waves have become more frequent in the last decade, the creation of heat maps helps to identify where a town's population may be most affected by them. Ideally, such maps would provide residents with accurate information about the health problems they may face.

Large variations in temperature anomalies are caused by multiple, competing factors, leaving uncertainty in identifying vulnerable areas at this time.

To overcome this limitation and identify areas more vulnerable to the effects of heat waves, not only in the city of Naples but also in its suburbs, we combine the use of Landsat data with unsupervised machine learning algorithms to provide detailed heat wave vulnerability maps. In particular, we develop a procedure based on the combined use of hierarchical and partitional cluster analyses, which allows us to identify areas characterized by temperature anomalies that are more similar to each other than to any others throughout the year. This has important implications, allowing discrimination between locations potentially impacted by higher or lower energy consumption.
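A two-stage clustering of this kind can be sketched with scipy. The anomaly values and three-cluster structure below are invented for illustration, not derived from the Landsat data:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.cluster.vq import kmeans2

rng = np.random.default_rng(11)

# Toy pixels: 12 monthly LST anomalies (degC) per pixel for three
# hypothetical surface types (dense urban, suburban, vegetated park).
urban = rng.normal(3.0, 0.4, (150, 12))
suburb = rng.normal(1.0, 0.4, (150, 12))
park = rng.normal(-1.5, 0.4, (150, 12))
X = np.vstack([urban, suburb, park])

# Step 1: hierarchical (Ward) clustering to choose a plausible partition.
Z = linkage(X, method="ward")
labels_h = fcluster(Z, t=3, criterion="maxclust")
n_clusters = len(np.unique(labels_h))

# Step 2: partitional clustering (k-means) refines the same partition.
centers, labels_k = kmeans2(X, k=n_clusters, seed=11, minit="++")
print(n_clusters, len(np.unique(labels_k)))
```

In practice the dendrogram from step 1 would guide the choice of the number of clusters passed to the partitional step.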

How to cite: Flocco, D., Piegari, E., and Scafetta, N.: Heat wave vulnerability maps of Naples (Italy) from Landsat images and machine learning, EGU General Assembly 2024, Vienna, Austria, 14–19 Apr 2024, EGU24-10156, https://doi.org/10.5194/egusphere-egu24-10156, 2024.

X5.149
|
EGU24-10328
|
ECS
|
Highlight
Graham Clyne

Recent advances in climate model emulation have been shown to accurately represent atmospheric variables from large general circulation models, but little investigation has been done into emulating land-related variables. The land carbon sink absorbs around a third of anthropogenic fossil fuel emissions every year, yet there is significant uncertainty around this estimate. We aim to reduce this uncertainty by first investigating the predictability of several land-related variables that drive land-atmosphere carbon exchange. We use data from the IPSL-CM6A-LR submission to the Decadal Climate Prediction Project (DCPP). The DCPP is initialized from observed data and explores decadal trends in the relationships between various climatic variables. The land component of IPSL-CM6A-LR, ORCHIDEE, represents various land-carbon interactions, and we target these processes for emulation. As a first step, we attempt to predict the target land variables from ORCHIDEE using a vision transformer. We then investigate the impact of different feature selections on the target variables: by including atmospheric and oceanic variables, how do the short- and medium-term predictions of land-related processes improve? In a second step, we apply generative modelling (with diffusion models) to emulate land processes. The diffusion model can be used to generate unseen scenarios based on the DCPP and provides a tool to investigate a wider range of climatic scenarios that would otherwise be computationally expensive.

How to cite: Clyne, G.: Emulating Land-Processes in Climate Models Using Generative Machine Learning, EGU General Assembly 2024, Vienna, Austria, 14–19 Apr 2024, EGU24-10328, https://doi.org/10.5194/egusphere-egu24-10328, 2024.

X5.150
|
EGU24-10692
|
ECS
Michael Aich, Baoxiang Pan, Philipp Hess, Sebastian Bathiany, Yu Huang, and Niklas Boers

Earth system models (ESMs) are crucial for understanding and predicting the behaviour of the Earth’s climate system. Understanding and accurately simulating precipitation is particularly important for assessing the impacts of climate change, predicting extreme weather events, and developing sustainable strategies to manage water resources and mitigate associated risks. However, Earth system models are prone to large precipitation biases because the relevant processes occur on a large range of scales and involve substantial uncertainties. In this work, we aim to correct such model biases by training generative machine learning models that map between model data and observational data. We address the challenge that the datasets are not paired, meaning that, due to the chaotic nature of geophysical flows, there is no sample-wise ground truth to compare the model output to. This challenge renders many machine learning approaches unsuitable and also implies a lack of suitable performance metrics.

Our main contribution is the construction of a proxy variable that overcomes this problem and allows for supervised training and evaluation of a bias correction model. We show that a generative model is then able to correct spatial patterns and remove statistical biases in the South American domain. The approach successfully preserves large-scale structures in the climate model fields while correcting small-scale biases in the model data’s spatio-temporal structure and frequency distribution.
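
The distributional side of such bias correction can be illustrated with classical quantile mapping, a far simpler baseline than the authors' generative proxy-variable approach (all data below are synthetic, and the variable names are illustrative only):

```python
import numpy as np

def quantile_map(model, obs, new_model):
    """Map new model samples onto the observed distribution by
    matching empirical quantiles (a classical baseline for the
    distributional correction a bias-correction model must learn)."""
    # Empirical quantile of each new sample within the model climatology
    ranks = np.searchsorted(np.sort(model), new_model) / len(model)
    ranks = np.clip(ranks, 0.0, 1.0)
    # Read off the same quantile from the observations
    return np.quantile(obs, ranks)

rng = np.random.default_rng(0)
model_precip = rng.gamma(2.0, 3.0, 10_000)  # biased simulated rainfall
obs_precip = rng.gamma(2.0, 2.0, 10_000)    # "observations"
corrected = quantile_map(model_precip, obs_precip, model_precip)
print(np.mean(model_precip), np.mean(obs_precip), np.mean(corrected))
```

Like the unpaired setting described above, quantile mapping needs no sample-wise correspondence; however, it corrects only the marginal distribution and cannot repair spatio-temporal patterns, which is precisely the gap the generative approach targets.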

How to cite: Aich, M., Pan, B., Hess, P., Bathiany, S., Huang, Y., and Boers, N.: Down-scaling and bias correction of precipitation with generative machine learning models, EGU General Assembly 2024, Vienna, Austria, 14–19 Apr 2024, EGU24-10692, https://doi.org/10.5194/egusphere-egu24-10692, 2024.

X5.151
|
EGU24-10876
|
ECS
|
Highlight
Viola Steidl, Jonathan Bamber, and Xiao Xiang Zhu

Glacier ice thickness is a fundamental variable required for modelling flow and mass balance. However, direct measurements of ice thickness are scarce. Physics-based and data-driven approaches aim to reconstruct glacier ice thicknesses from the limited in-situ data. Farinotti et al. compared 17 models and found that their ice thickness estimates differ considerably on test glaciers.[1] Following these results, Farinotti et al. created an ensemble of models to develop the so-called consensus estimate of the ice thickness for the world’s glaciers in 2019.[2] Later, Millan et al. derived ice thickness estimates for the world’s glaciers using ice motion as the primary constraint. However, these results differ considerably from existing estimates and the 2019 consensus estimates.[3] It is evident, therefore, that significant uncertainty remains in ice thickness estimates.

Deep learning approaches are flexible and adapt well to complex structures and non-linear behaviour. However, they do not guarantee the physical correctness of the predicted quantities. Therefore, we employ a physics-informed neural network (PINN), which integrates physical laws into its training process and is not purely data-driven. We include, for example, the conservation of mass in the loss function and estimate the depth-averaged flow velocity. Teisberg et al. also employed a mass-conserving PINN to interpolate the ice thickness of the well-studied Byrd Glacier in Antarctica.[4] In this work, we extend the methodology by integrating the ratio between slope and surface flow velocities in estimating the depth-averaged flow velocity and by mapping the coordinate variables to higher-dimensional Fourier features.[5] This allows the model to encompass glaciers in western Svalbard, addressing challenges posed by basal sliding, surface melting, and complex glacier geometries. Using surface velocity data from Millan et al. and topographical data from Copernicus DEM GLO-90[6] gathered through OGGM[7], the model predicts ice thickness on glaciers with limited measurements. We are extending it to serve as a predictor of thickness for glaciers with no observations. Here, we present the machine learning pipeline, including the physical constraints employed and preliminary results for glaciers in western Svalbard.
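
The Fourier-feature mapping of ref. [5] can be sketched in a few lines of NumPy; the random projection matrix B would be fixed before training, and the PINN itself (mass-conservation loss, velocity estimation) is omitted here:

```python
import numpy as np

def fourier_features(coords, n_features=16, scale=1.0, seed=0):
    """Project low-dimensional coordinates onto random frequencies and
    return sine/cosine embeddings, which help networks represent
    high-frequency spatial variation (Tancik et al., 2020)."""
    rng = np.random.default_rng(seed)
    B = rng.normal(0.0, scale, size=(coords.shape[-1], n_features))
    proj = 2.0 * np.pi * coords @ B
    return np.concatenate([np.sin(proj), np.cos(proj)], axis=-1)

xy = np.random.default_rng(1).random((100, 2))  # normalized map coordinates
emb = fourier_features(xy)                      # shape (100, 32)
print(emb.shape)
```

The scale of the random frequencies controls how fine the spatial detail is that the downstream network can fit, so it is typically tuned per application.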


[1] Daniel Farinotti et al., ‘How Accurate Are Estimates of Glacier Ice Thickness? Results from ITMIX, the Ice Thickness Models Intercomparison eXperiment’, The Cryosphere 11, no. 2 (April 2017): 949–70, https://doi.org/10.5194/tc-11-949-2017.

[2] Daniel Farinotti et al., ‘A Consensus Estimate for the Ice Thickness Distribution of All Glaciers on Earth’, Nature Geoscience 12, no. 3 (March 2019): 168–73, https://doi.org/10.1038/s41561-019-0300-3.

[3] Romain Millan et al., ‘Ice Velocity and Thickness of the World’s Glaciers’, Nature Geoscience 15, no. 2 (February 2022): 124–29, https://doi.org/10.1038/s41561-021-00885-z.

[4] Thomas O. Teisberg, Dustin M. Schroeder, and Emma J. MacKie, ‘A Machine Learning Approach to Mass-Conserving Ice Thickness Interpolation’, in 2021 IEEE International Geoscience and Remote Sensing Symposium IGARSS, 2021, 8664–67, https://doi.org/10.1109/IGARSS47720.2021.9555002.

[5] Matthew Tancik et al., ‘Fourier Features Let Networks Learn High Frequency Functions in Low Dimensional Domains’, (arXiv, 18 June 2020), https://doi.org/10.48550/arXiv.2006.10739.

[6] Copernicus DEM GLO-90, https://doi.org/10.5270/ESA-c5d3d65.

[7] Fabien Maussion et al., ‘The Open Global Glacier Model (OGGM) v1.1’, Geoscientific Model Development 12, no. 3 (March 2019): 909–31, https://doi.org/10.5194/gmd-12-909-2019.

How to cite: Steidl, V., Bamber, J., and Zhu, X. X.: Physics-aware Machine Learning to Estimate Ice Thickness of Glaciers in West Svalbard, EGU General Assembly 2024, Vienna, Austria, 14–19 Apr 2024, EGU24-10876, https://doi.org/10.5194/egusphere-egu24-10876, 2024.

X5.152
|
EGU24-12600
Jorge Pérez-Aracil, Cosmin M. Marina, Pedro Gutiérrez, David Barriopedro, Ricardo García-Herrera, Matteo Giuliani, Ronan McAdam, Enrico Scoccimarro, Eduardo Zorita, Andrea Castelletti, and Sancho Salcedo-Sanz

The Analogue Method (AM) is a classical statistical downscaling technique applied to field reconstruction. It is widely used for prediction and attribution tasks. The method is based on the principle that two similar atmospheric states cause similar local effects. The core of the AM is a K-nearest-neighbour methodology: the similarity between two states is measured according to an analogy criterion. The method has remained largely unchanged since its definition, although some attempts have been made to improve its performance. Machine learning (ML) techniques have recently been used to improve AM performance; however, the results remain very similar. Here we present an ML-based hybrid approach for heatwave (HW) analysis based on the AM. It follows a two-step procedure: first, in an unsupervised step, an autoencoder (AE) is trained to reconstruct the predictor variable, i.e. the pressure field. Second, an HW event is selected and the AM is applied in the latent space of the trained AE. The analogy between fields is thus searched for in the encoded representation of the input variable instead of in the original field. Experiments show that the meaningful features extracted by the AE lead to a better reconstruction of the target field when pressure variables are used as input. In addition, analysis of the latent space allows the results to be interpreted, since HW occurrence can be easily distinguished. Further research could include multiple input variables.
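
The core of the approach, replacing the K-nearest-neighbour search on raw pressure fields by a search in the autoencoder's latent space, can be sketched as follows (synthetic arrays stand in for the encoded fields; in practice the latent vectors would come from the trained encoder):

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(1)
# Stand-ins: encoded pressure fields (days x latent dims) and the
# corresponding target fields to be reconstructed (days x grid points)
latent_archive = rng.normal(size=(500, 8))
target_fields = rng.normal(size=(500, 100))

def analogue_reconstruct(query_latent, k=5):
    """Analogue step in latent space: find the K nearest encoded
    atmospheric states and average their associated target fields."""
    nn = NearestNeighbors(n_neighbors=k).fit(latent_archive)
    _, idx = nn.kneighbors(query_latent.reshape(1, -1))
    return target_fields[idx[0]].mean(axis=0)

recon = analogue_reconstruct(latent_archive[0])
print(recon.shape)  # (100,)
```

Searching in an 8-dimensional latent space rather than on full pressure fields both speeds up the neighbour search and, as the abstract argues, lets the analogy act on meaningful extracted features.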

How to cite: Pérez-Aracil, J., Marina, C. M., Gutiérrez, P., Barriopedro, D., García-Herrera, R., Giuliani, M., McAdam, R., Scoccimarro, E., Zorita, E., Castelletti, A., and Salcedo-Sanz, S.: Autoencoder-based model for improving reconstruction of heat waves using the analogue method, EGU General Assembly 2024, Vienna, Austria, 14–19 Apr 2024, EGU24-12600, https://doi.org/10.5194/egusphere-egu24-12600, 2024.

X5.153
|
EGU24-13138
|
ECS
Ruhhee Tabbussum, Bidroha Basu, and Laurence Gill

Enhancing flood prediction is imperative given the profound socio-economic impacts of flooding and the projected increase in its frequency due to climate change. In this context, artificial intelligence (AI) models have emerged as valuable tools, offering enhanced accuracy and cost-effective solutions for simulating physical flood processes. This study addresses the development of an early warning system for groundwater flooding in the lowland karst area of south Galway, Ireland, employing neural network models with Bayesian regularization and scaled conjugate gradient training algorithms. The lowland karst area is characterised by several groundwater-fed, intermittent lakes, known as turloughs, that fill when the underlying karst system becomes surcharged during periods of high rainfall. The training datasets incorporate several years of field data from the study area and outputs from a highly calibrated semi-distributed hydraulic/hydrological model of the karst network. Inputs for training the models include flood volume data from the past 5 days, rainfall data, and tidal amplitude data over the preceding 4 days. Both daily and hourly models were developed to facilitate real-time flood predictions. Results indicate strong performance by both the Bayesian and scaled conjugate gradient models in real-time flood forecasting. The Bayesian model shows forecasting capabilities extending up to 45 days into the future, with a Nash-Sutcliffe Efficiency (NSE) of 1.00 up to 7 days ahead and 0.95 for predictions up to 45 days ahead. The scaled conjugate gradient model offers the best performance up to 60 days into the future, with an NSE of 0.98 up to 20 days ahead and 0.95 for predictions up to 60 days ahead, coupled with the advantage of significantly reduced training time compared to the Bayesian model. Additionally, both models exhibit a coefficient of correlation (r) of 0.98 up to 60 days ahead.
Evaluation measures such as the Kling-Gupta Efficiency also reveal high performance, with values of 0.96 up to 15 days ahead for both models and 0.90 up to 45 days ahead. The integration of diverse data sources and consideration of both daily and hourly models enhance the resilience and reliability of such an early warning system. In particular, the scaled conjugate gradient model emerges as a versatile tool: it balances predictive accuracy with reduced computational demands, offering practical insights for real-time flood prediction and aiding proactive flood management and response efforts.
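
The two skill scores quoted above are standard hydrological metrics; a minimal implementation makes their meaning concrete (an NSE of 1 is a perfect fit, while 0 means the simulation is no better than the observed mean):

```python
import numpy as np

def nse(obs, sim):
    """Nash-Sutcliffe Efficiency: fraction of observed variance
    explained by the simulation relative to the observed mean."""
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - np.mean(obs)) ** 2)

def kge(obs, sim):
    """Kling-Gupta Efficiency, combining correlation (r), variability
    ratio (alpha) and bias ratio (beta); 1 is a perfect fit."""
    r = np.corrcoef(obs, sim)[0, 1]
    alpha = np.std(sim) / np.std(obs)
    beta = np.mean(sim) / np.mean(obs)
    return 1.0 - np.sqrt((r - 1) ** 2 + (alpha - 1) ** 2 + (beta - 1) ** 2)

obs = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
print(nse(obs, obs), kge(obs, obs))  # 1.0 1.0 for a perfect forecast
```

Because KGE decomposes skill into correlation, variability, and bias terms, it can expose compensating errors that a single NSE value hides.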

How to cite: Tabbussum, R., Basu, B., and Gill, L.: Neural Network Driven Early Warning System for Groundwater Flooding: A Comprehensive Approach in Lowland Karst Areas, EGU General Assembly 2024, Vienna, Austria, 14–19 Apr 2024, EGU24-13138, https://doi.org/10.5194/egusphere-egu24-13138, 2024.

X5.154
|
EGU24-15586
|
ECS
|
Highlight
Daniel Banciu, Jannik Thuemmel, and Bedartha Goswami

Deep learning-based weather prediction models have gained popularity in recent years and are effective at forecasting weather on short to medium time scales, with models such as FourCastNet being competitive with Numerical Weather Prediction models. However, on longer timescales, the complexity and interplay of different weather and climate variables lead to increasingly inaccurate predictions.

Large-scale climate phenomena, such as the active periods of the Madden-Julian Oscillation (MJO), are known to provide higher predictability for longer forecast times.
These so-called Windows of Opportunity thus hold promise as strategic tools for enhancing subseasonal-to-seasonal (S2S) forecasts.

In this work, we evaluate the capability of FourCastNet to represent and utilize the presence of (active) MJO phases.
First, we analyze the correlation between the feature space of FourCastNet and different MJO indices.
We further conduct a comparative analysis of prediction accuracy within the South East Asia region during active and inactive MJO phases.

How to cite: Banciu, D., Thuemmel, J., and Goswami, B.: Identifying Windows of Opportunity in Deep Learning Weather Models, EGU General Assembly 2024, Vienna, Austria, 14–19 Apr 2024, EGU24-15586, https://doi.org/10.5194/egusphere-egu24-15586, 2024.

X5.155
|
EGU24-17165
|
ECS
Sebastian Hoffmann, Jannik Thümmel, and Bedartha Goswami

Deep learning weather prediction (DLWP) models have recently proven to be a viable alternative to classical numerical integration. Often, the skill of these models can be improved further by providing additional exogenous fields such as time of day, orography, or sea surface temperatures stemming from an independent ocean model. These merely serve as information sources and are not predicted by the model.

In this study, we explore how such exogenous fields can best be utilized by DLWP models and find that the de facto standard approach of concatenating them to the input is suboptimal. Instead, we suggest leveraging existing conditioning techniques from the broader deep learning community that modulate the mean and variance of normalized feature vectors in latent space. These so-called style-based techniques lead to consistently smaller forecast errors and, at the same time, can be integrated with relative ease into existing forecasting architectures. This makes them an attractive avenue for improving deep learning weather prediction in the future.
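
A minimal NumPy sketch of such style-based (FiLM-like) conditioning; in an actual DLWP model the projection weights would be learned and the normalization would typically be a layer norm inside the network:

```python
import numpy as np

def style_condition(features, exog, W_scale, W_shift):
    """Style-based conditioning: an exogenous vector is projected to
    per-channel scale and shift parameters that modulate normalized
    latent features, instead of being concatenated to the input."""
    # Normalize features per channel (zero mean, unit variance)
    mu = features.mean(axis=0, keepdims=True)
    sigma = features.std(axis=0, keepdims=True) + 1e-6
    normed = (features - mu) / sigma
    gamma = exog @ W_scale   # learned in practice; random here
    beta = exog @ W_shift
    return normed * (1.0 + gamma) + beta

rng = np.random.default_rng(0)
feats = rng.normal(size=(32, 64))   # batch of latent feature vectors
exog = rng.normal(size=(32, 4))     # e.g. encoded time of day
out = style_condition(feats, exog,
                      rng.normal(size=(4, 64)) * 0.1,
                      rng.normal(size=(4, 64)) * 0.1)
print(out.shape)  # (32, 64)
```

Modulating mean and variance this way lets the exogenous information rescale every channel of the latent representation, rather than occupying input channels of its own.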

How to cite: Hoffmann, S., Thümmel, J., and Goswami, B.: Conditioning Deep Learning Weather Prediction Models On Exogenous Fields, EGU General Assembly 2024, Vienna, Austria, 14–19 Apr 2024, EGU24-17165, https://doi.org/10.5194/egusphere-egu24-17165, 2024.

X5.156
|
EGU24-17372
|
ECS
Enhancing the evaluation of Deep Learning Downscaling methods using Explainable Artificial Intelligence (XAI) techniques
(withdrawn)
José González-Abad and José Manuel Gutiérrez
X5.157
|
EGU24-17554
|
ECS
Thea Quistgaard, Peter L. Langen, Tanja Denager, Raphael Schneider, and Simon Stisen

Central to understanding climate change impacts and mitigation strategies is the generation of high-resolution, local-scale projections from global climate models. This study focuses on Danish hydrology, developing models finely tuned to generate essential climate fields such as temperature, precipitation, evaporation, and water vapor flux.

Employing advancements in computer science and deep learning, we introduce a pioneering Cascaded Diffusion Model for high-resolution image generation. The model draws on our understanding of climate dynamics in a hydrological context by integrating multiple climate variable fields across an expanded North Atlantic domain to achieve stable and realistic generation. In our approach, 30 years of low-resolution daily conditioning data (ERA5) are re-gridded to match the 2.5x2.5 km 'ground truth' data (30 years of DANRA) and preprocessed by shifting a 128x128 image within a larger 180x180 pixel area, ensuring varied geographic coverage. These data, along with land-sea masks and topography, are fed as channels into the model. A novel aspect of our model is its specialized loss function, weighted by a signed distance function to reduce the emphasis on errors over sea areas, in line with our focus on land-based hydrological modeling.
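
The idea of down-weighting errors over the sea can be sketched as follows; here a simple binary land-sea weighting stands in for the signed-distance weighting described above:

```python
import numpy as np

def sea_downweighted_mse(pred, target, land_sea_mask, sea_weight=0.1):
    """MSE in which errors over sea cells count less than errors over
    land cells, reflecting a land-focused hydrological objective."""
    weights = np.where(land_sea_mask, 1.0, sea_weight)
    return np.mean(weights * (pred - target) ** 2)

mask = np.zeros((4, 4), dtype=bool)
mask[:, :2] = True                 # left half of the domain is "land"
pred = np.ones((4, 4))
target = np.zeros((4, 4))
loss = sea_downweighted_mse(pred, target, mask)
print(round(loss, 6))  # 0.55: land errors at full weight, sea at 0.1
```

A true signed-distance weighting would replace the binary mask with a smooth function of the distance to the coastline, so errors taper off gradually offshore instead of dropping abruptly.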

This research is part of a larger project aimed at bridging the gap between CMIP models and the ERA5 and DANRA analyses. It represents the first phase of a three-step process, with future stages focusing on downscaling from CMIP6 to CORDEX-EUROPE models and ultimately integrating the modelling and analysis work into a complete pipeline from global projections to localized daily climate statistics.

How to cite: Quistgaard, T., Langen, P. L., Denager, T., Schneider, R., and Stisen, S.: Using Cascaded Diffusion Models and Multi-Channel Data Integration for High-Resolution Statistical Downscaling of ERA5 over Denmark, EGU General Assembly 2024, Vienna, Austria, 14–19 Apr 2024, EGU24-17554, https://doi.org/10.5194/egusphere-egu24-17554, 2024.

X5.158
|
EGU24-20342
|
ECS
Federica Bortolussi, Hilda Sandström, Fariba Partovi, Joona Mikkilä, Patrick Rinke, and Matti Rissanen

Pests significantly impact crop yields, leading to food insecurity. Pesticides are substances, or mixtures of substances, designed to eliminate or control pests or to regulate the growth of crops.
Currently, more than 1000 pesticides are available on the market. However, their long-lasting environmental impact necessitates strict regulation, especially regarding their presence in food (FAO, 2022). Pesticides also play a role in the atmosphere: once volatilized, they can form oxidized products through photolysis or OH reactions and be transported over large distances.
The fundamental properties and behaviours of these compounds are still not well understood. Because of their complex structure, even low-level DFT computations can be extremely expensive.
This project applies machine learning (ML) tools to chemical ionization mass spectra with the ultimate aim of developing a technique capable of predicting spectral peak intensities and the chemical ionization mass spectrometry (CIMS) sensitivity to pesticides. The primary challenge is to develop an ML model that comprehensively describes ion-molecule interactions while minimizing computational costs.

Our data set comprises different standard mixtures containing, in total, 716 pesticides measured with an orbitrap atmospheric-pressure CIMS equipped with a multi-scheme chemical ionization inlet (MION) at five different concentrations (Rissanen et al., 2019; Partovi et al., 2023). The reagents of the ionization methods are CH2Br2, H2O, O2 and (CH3)2CO, generating Br-, H3O+, O2- and [(CH3)2CO + H]+ ions, respectively.

The project follows a general ML workflow: after an exploratory analysis, the data are preprocessed and fed to the ML algorithm, which classifies which ionization method is able to detect each molecule and then predicts the peak intensity of each pesticide; the accuracy of the prediction is assessed by measuring the performance of the model.
A random forest classifier was chosen for the classification of the ionization methods, i.e. to predict which one is able to detect each pesticide. The regression was performed with a kernel ridge regressor. Each algorithm was run with different types of molecular descriptors (topological fingerprint, MACCS keys and the many-body tensor representation) to test which one represents the molecular structure most accurately.
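
The two-step workflow (classify which ionization scheme detects a molecule, then regress its log peak intensity) can be sketched with scikit-learn; random binary vectors stand in for real molecular fingerprints, and the labels and targets are entirely synthetic:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.kernel_ridge import KernelRidge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# Stand-ins for binary molecular fingerprints (e.g. MACCS keys)
X = rng.integers(0, 2, size=(400, 64)).astype(float)
detected = (X[:, :8].sum(axis=1) > 4).astype(int)  # synthetic label
log_intensity = (X @ rng.normal(size=64)) * 0.1    # synthetic target

X_tr, X_te, y_tr, y_te, t_tr, t_te = train_test_split(
    X, detected, log_intensity, random_state=0)

# Step 1: does a given ionization scheme detect the molecule?
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
# Step 2: regress the log peak intensity with kernel ridge regression
reg = KernelRidge(kernel="rbf", alpha=1.0).fit(X_tr, t_tr)
print(clf.score(X_te, y_te))
```

Evaluating both steps on a held-out split, as here, is what allows the per-descriptor comparison of fingerprints, MACCS keys, and the many-body tensor representation described in the abstract.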

The results of the exploratory analysis highlight different trends between the positive and negative ionization methods, suggesting that different ion-molecule mechanisms are involved (Figure 1). The classification reaches around 80% accuracy for each ionization method with all four molecular descriptors tested, while the regression predicts the logarithm of the intensities for each ionization method fairly well, reaching an error of 0.5 with MACCS keys for the (CH3)2CO reagent (Figure 2).

Figure 1: Distribution of pesticide peak intensities for each reagent ion at five different concentrations.

Figure 2: Comparison of the KRR performance on (CH3)2CO reagent data with four different molecular descriptors.

How to cite: Bortolussi, F., Sandström, H., Partovi, F., Mikkilä, J., Rinke, P., and Rissanen, M.: Building A Machine Learning Model To Predict Sample Pesticide Content Utilizing Thermal Desorption MION-CIMS Analysis, EGU General Assembly 2024, Vienna, Austria, 14–19 Apr 2024, EGU24-20342, https://doi.org/10.5194/egusphere-egu24-20342, 2024.

Posters virtual: Wed, 17 Apr, 14:00–15:45 | vHall X5

Display time: Wed, 17 Apr, 08:30–Wed, 17 Apr, 18:00
vX5.12
|
EGU24-887
|
ECS
A data driven modeling framework to correlate the flash-floods potential with the Modified Land cover Characteristics in a changing climate: a study over Krishna River basin, India
(withdrawn)
Sumana Sarkar and Kalidhasan Ramesh
vX5.13
|
EGU24-3307
|
ECS
Ratih Prasetya, Adhi Harmoko Saputro, Donaldi Sukma Permana, and Nelly Florida Riama

This study explores the transformative potential of supervised machine learning algorithms in improving rainfall prediction models for Indonesia. Leveraging the high-resolution, global, bias-corrected NEX-GDDP-CMIP6 dataset, we compare various machine learning regression algorithms. Focusing on the EC-Earth3 model, our approach involves an in-depth analysis of five weather variables closely tied to daily rainfall. We employed a diverse set of algorithms, including linear regression, K-nearest neighbour regression (KNN), random forest regression, decision tree regression, AdaBoost, extra trees regression, extreme gradient boosting regression (XGBoost), support vector regression (SVR), gradient boosting decision tree regression (GBDT), and a multi-layer perceptron. Performance evaluation highlights the superior predictive capabilities of GBDT and KNN, achieving an RMSE of 0.04 and an accuracy score of 0.99. In contrast, XGBoost exhibits lower performance, with an RMSE of 5.1 and an accuracy score of 0.49, indicating poor rainfall prediction. This study contributes to advancing rainfall prediction models and highlights the importance of methodological choices in harnessing machine learning for climate research.
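
A minimal version of such an algorithm comparison, here with only two of the listed regressors and synthetic data in place of the NEX-GDDP-CMIP6 variables:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.neighbors import KNeighborsRegressor
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# Five synthetic predictors standing in for the weather variables
X = rng.normal(size=(1000, 5))
y = 2.0 * X[:, 0] - X[:, 1] + 0.1 * rng.normal(size=1000)  # "rainfall"

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
models = {
    "GBDT": GradientBoostingRegressor(random_state=0),
    "KNN": KNeighborsRegressor(n_neighbors=5),
}
results = {}
for name, m in models.items():
    m.fit(X_tr, y_tr)
    results[name] = mean_squared_error(y_te, m.predict(X_te)) ** 0.5
    print(f"{name}: RMSE = {results[name]:.3f}")
```

Comparing RMSE on a held-out split, as here, is the minimum safeguard against the optimistic scores that arise when models are evaluated on the data they were trained on.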

How to cite: Prasetya, R., Harmoko Saputro, A., Sukma Permana, D., and Florida Riama, N.: Comparative Study of Supervised Learning Algorithms on Rainfall Prediction using NEX-GDDP-CMIP6 Data, EGU General Assembly 2024, Vienna, Austria, 14–19 Apr 2024, EGU24-3307, https://doi.org/10.5194/egusphere-egu24-3307, 2024.