ITS1.14/CL5.8 | Machine Learning for Climate Science
Co-organized by AS5/ESSI1/NP4
Convener: Duncan Watson-Parris | Co-conveners: Katarzyna (Kasia) Tokarska, Marlene Kretschmer, Sebastian Sippel, Gustau Camps-Valls
Orals | Fri, 28 Apr, 08:30–12:25 (CEST), 14:00–15:40 (CEST) | Room N1
Posters on site | Attendance Fri, 28 Apr, 16:15–18:00 (CEST) | Hall X5
Posters virtual | Attendance Fri, 28 Apr, 16:15–18:00 (CEST) | vHall CL
Machine learning (ML) is currently transforming data analysis and modelling of the Earth system. While statistical and data-driven models have long been used, recent advances in machine learning allow non-linear, spatio-temporal relationships to be encoded robustly without sacrificing interpretability. This has the potential to accelerate climate science by providing new physics-based modelling approaches; improving our understanding of the underlying processes; reducing and better quantifying climate signals, variability, and uncertainty; and even making predictions directly from observations across different spatio-temporal scales. The limitations of machine learning methods also need to be considered, such as the generally large training datasets required, susceptibility to data leakage, and potentially poor generalisation, so that methods are applied where they are fit for purpose and add value.

This session aims to provide a venue to present the latest progress in the use of ML applied to all aspects of climate science, and we welcome abstracts focussed on, but not limited to:
- Causal discovery and inference: causal impact assessment, interventions, counterfactual analysis
- Learning (causal) process and feature representations in observations or across models and observations
- Hybrid models (physically informed ML, emulation, data-model integration)
- Novel detection and attribution approaches
- Probabilistic modelling and uncertainty quantification
- Explainable AI applications to climate data science and climate modelling
- Distributional robustness, transfer learning and/or out-of-distribution generalisation tasks in climate science

Please note that a companion session “ML for Earth System modelling” focuses specifically on ML for model improvement, particularly for forecasting on near-term time-scales (including seasonal and decadal); related abstracts should be submitted there.

Orals: Fri, 28 Apr | Room N1

Chairpersons: Katarzyna (Kasia) Tokarska, Duncan Watson-Parris
Explainable and Interpretable Machine Learning for Climate
08:30–08:40 | EGU23-15000 | ECS | Highlight | On-site presentation
Jordi Cerdà-Bautista, José María Tárraga, Gherardo Varando, Alberto Arribas, Ted Shepherd, and Gustau Camps-Valls

The current food insecurity situation in Africa, and in the Horn of Africa in particular, has reached an unprecedented risk level, triggered by recurrent drought events; complex interactions between food prices, crop yield, energy inflation, and lack of humanitarian aid; and disruptive conflicts and migration flows. Food security is a complex, multivariate, multiscale, and non-linear problem that is difficult to understand with canonical data science methodologies. We propose an alternative approach to the food insecurity problem from a causal inference standpoint, to discover the causal relations and to evaluate the likelihood and potential consequences of specific interventions. In particular, we demonstrate the use of causal inference for understanding the impact of humanitarian interventions on food insecurity in Somalia. In the first stage, we apply different data transformations to the main drivers to achieve the highest degree of correlation with the variable of interest. In the second stage, we infer causation between the main drivers and the variables of interest by applying causal methods such as PCMCI or Granger causality. We analyze and harmonize different time series, per district of Somalia, of the global acute malnutrition (GAM) index, food market prices, crop production, conflict levels, and drought- and flood-driven internal displacements, as well as climate indicators such as NDVI, precipitation, and land surface temperature. Then, assuming a causal graph between the main drivers of food insecurity, we estimate the effect of increasing humanitarian interventions on the GAM index, accounting for the effects of a changing climate, migration flows, and conflict events. We show that causal estimation with modern methodologies allows us to quantify the impact of humanitarian aid on food insecurity.
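The two-stage pipeline described in the abstract can be illustrated with a minimal Granger-style check: does adding the lagged driver reduce the prediction error of the target? This is a hypothetical sketch on synthetic series, using plain least squares rather than the authors' PCMCI/Granger tooling or the Somalia data:

```python
import numpy as np

rng = np.random.default_rng(0)

def rss(X, y):
    # Residual sum of squares of a least-squares fit with intercept.
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    r = y - X1 @ beta
    return float(r @ r)

def granger_gain(y, x, lags=2):
    """Relative drop in residual variance when lagged x is added to an
    autoregression of y -- the quantity a Granger causality test formalizes."""
    t = np.arange(lags, len(y))
    ylags = np.array([y[i - lags:i] for i in t])
    xlags = np.array([x[i - lags:i] for i in t])
    return 1.0 - rss(np.hstack([ylags, xlags]), y[t]) / rss(ylags, y[t])

# Synthetic driver/response pair: x influences y with a one-step lag.
n = 500
x = rng.normal(size=n)
y = np.zeros(n)
for i in range(1, n):
    y[i] = 0.5 * y[i - 1] + 0.8 * x[i - 1] + 0.1 * rng.normal()

print(granger_gain(y, x))   # large: past x helps predict y
print(granger_gain(x, y))   # near zero: past y does not help predict x
```

A full analysis would add significance testing (e.g. an F-test on the residual ratio) and condition on confounders, which is where PCMCI-style conditional-independence testing goes beyond pairwise Granger checks.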

 

References

 

[1] Runge, J., Bathiany, S., Bollt, E. et al. Inferring causation from time series in Earth system sciences. Nat Commun 10, 2553 (2019). https://doi.org/10.1038/s41467-019-10105-3

[2] Sazib, N., Mladenova, I. E., and Bolten, J. D.: Assessing the Impact of ENSO on Agriculture Over Africa Using Earth Observation Data, Frontiers in Sustainable Food Systems, 2020. https://doi.org/10.3389/fsufs.2020.509914

[3] Checchi, F., Frison, S., Warsame, A. et al. Can we predict the burden of acute malnutrition in crisis-affected countries? Findings from Somalia and South Sudan. BMC Nutr 8, 92 (2022). https://doi.org/10.1186/s40795-022-00563-2

How to cite: Cerdà-Bautista, J., Tárraga, J. M., Varando, G., Arribas, A., Shepherd, T., and Camps-Valls, G.: Causal inference to study food insecurity in Africa, EGU General Assembly 2023, Vienna, Austria, 23–28 Apr 2023, EGU23-15000, https://doi.org/10.5194/egusphere-egu23-15000, 2023.

08:40–08:50 | EGU23-13462 | ECS | On-site presentation
Kai-Hendrik Cohrs, Gherardo Varando, Markus Reichstein, and Gustau Camps-Valls

Hybrid modeling describes the synergy between parametric models and machine learning [1]. Parts of a parametric equation are substituted by non-parametric machine learning models, which can represent complex functions and are inferred together with the parameters of the equation from the data. Hybrid modeling promises to describe complex relationships while remaining scientifically interpretable. These promises, however, need to be taken with a grain of salt. With overly flexible models, such as deep neural networks, the problem of equifinality arises: there is no identifiable optimal solution. Instead, many outcomes describe the data equally well, and we will obtain one of them by chance. Interpreting the result may then lead to erroneous conclusions. Moreover, studies have shown that regularization techniques can introduce a bias in jointly estimated physical parameters [1].

We propose double machine learning (DML) to solve these problems [2]. DML is a theoretically well-founded technique for fitting semi-parametric models, i.e., models consisting of a parametric and a non-parametric component. DML is widely used for debiased treatment effect estimation in economics. We showcase its use for geosciences on two problems related to carbon dioxide fluxes: 

  • Flux partitioning, which aims at separating the net carbon flux (NEE) into its main contributing gross fluxes, namely ecosystem respiration (RECO) and gross primary production (GPP).
  • Estimation of Q10, the temperature sensitivity parameter of ecosystem respiration.

First, we show that in the case of synthetic data for Q10 estimation, we can consistently retrieve the true value of Q10 where the naive neural network approach fails. We further apply DML to the carbon flux partitioning problem and find that it 1) retrieves the true fluxes of synthetic data, even in the presence of strong (and more realistic) heteroscedastic noise, 2) retrieves main gross carbon fluxes on real data consistent with established methods, and 3) allows us to causally interpret the retrieved GPP as the direct effect of photosynthetically active radiation on NEE. In this way, the DML approach can be seen as a causally interpretable, semi-parametric version of the established daytime methods. We also investigate the functional relationships inferred with DML and the drivers modulating the obtained light-use efficiency function. In conclusion, DML offers a solid framework for hybrid and semi-parametric modeling and can be of widespread use in geosciences.
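The core DML trick the abstract relies on, cross-fitted partialling-out, can be sketched in a few lines of numpy. The data-generating process, the crude binned-mean nuisance regressor, and all names below are illustrative stand-ins (not the Q10 or flux-partitioning setup); the point is that the coefficient of a confounded linear term is recovered from a residual-on-residual regression:

```python
import numpy as np

rng = np.random.default_rng(1)

def binned_mean(x, y, x_new, bins=30):
    # Crude nonparametric regression: mean of y in equal-width bins of x.
    edges = np.linspace(x.min(), x.max(), bins + 1)
    idx = np.clip(np.digitize(x, edges) - 1, 0, bins - 1)
    mean_all = y.mean()
    means = np.array([y[idx == b].mean() if (idx == b).any() else mean_all
                      for b in range(bins)])
    return means[np.clip(np.digitize(x_new, edges) - 1, 0, bins - 1)]

def dml_theta(y, t, x, n_splits=2):
    """Double ML: cross-fit the nuisances E[y|x] and E[t|x] on held-out folds,
    then regress y-residuals on t-residuals to estimate theta."""
    n = len(y)
    perm = rng.permutation(n)
    ry, rt = np.empty(n), np.empty(n)
    for fold in np.array_split(perm, n_splits):
        train = np.setdiff1d(perm, fold)
        ry[fold] = y[fold] - binned_mean(x[train], y[train], x[fold])
        rt[fold] = t[fold] - binned_mean(x[train], t[train], x[fold])
    return float(rt @ ry / (rt @ rt))

# Synthetic data: y = theta*t + g(x) + noise, with t confounded by x.
n = 4000
x = rng.uniform(-2, 2, n)
t = np.sin(x) + 0.5 * rng.normal(size=n)        # "treatment" driven by x
y = 1.5 * t + np.cos(2 * x) + 0.2 * rng.normal(size=n)

print(dml_theta(y, t, x))   # close to the true theta = 1.5
```

Because the nuisances are estimated on held-out folds, their estimation error enters the final regression only at second order, which is what removes the bias that regularized joint fitting can introduce.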

 

[1] Reichstein, Markus, et al. “Combining system modeling and machine learning into hybrid ecosystem modeling.” Knowledge-Guided Machine Learning (2022). https://doi.org/10.1201/9781003143376-14

[2] Chernozhukov, Victor, et al. “Double/debiased machine learning for treatment and structural parameters.” The Econometrics Journal, Volume 21, Issue 1, 1 (2018): C1–C68. https://doi.org/10.1111/ectj.12097

How to cite: Cohrs, K.-H., Varando, G., Reichstein, M., and Camps-Valls, G.: Double machine learning for geosciences, EGU General Assembly 2023, Vienna, Austria, 23–28 Apr 2023, EGU23-13462, https://doi.org/10.5194/egusphere-egu23-13462, 2023.

08:50–09:00 | EGU23-6061 | ECS | Highlight | On-site presentation
Sebastian Hickman, Paul Griffiths, Peer Nowack, and Alex Archibald

Air pollution contributes to millions of deaths worldwide every year. The concentration of a particular air pollutant, such as ozone, is controlled by physical and chemical processes which act on varying temporal and spatial scales. Quantifying the strength of causal drivers of air pollution (e.g. temperature) from observational data, particularly at extremes, is challenging due to the difficulty of disentangling correlation and causation, as many drivers are correlated. Furthermore, because air pollution is controlled in part by large-scale atmospheric phenomena, using local (e.g. individual grid cell level) covariates for analysis is insufficient to fully capture the effect of these phenomena on air pollution.

 

Access to large spatiotemporal datasets of air pollutant concentrations and atmospheric variables, coupled with recent advances in self-supervised learning, allow us to learn reduced representations of spatiotemporal atmospheric fields, and therefore account for non-local and non-instantaneous processes in downstream tasks.

 

We show that these learned reduced representations can be useful for tasks such as air pollution forecasting, and crucially to quantify the causal effect of varying atmospheric fields on air pollution. We make use of recent advances in bounding causal effects in the presence of unobserved confounding to estimate, with uncertainty, the causal effect of changing atmospheric fields on air pollution. Finally, we compare our quantification of the causal drivers of air pollution to results from other approaches, and explore implications for our methods and for the wider goal of improving the process-level treatment of air pollutants in chemistry-climate models.

How to cite: Hickman, S., Griffiths, P., Nowack, P., and Archibald, A.: Using reduced representations of atmospheric fields to quantify the causal drivers of air pollution, EGU General Assembly 2023, Vienna, Austria, 23–28 Apr 2023, EGU23-6061, https://doi.org/10.5194/egusphere-egu23-6061, 2023.

09:00–09:10 | EGU23-6450 | ECS | On-site presentation
Fernando Iglesias-Suarez, Veronika Eyring, Pierre Gentine, Tom Beucler, Michael Pritchard, Jakob Runge, and Breixo Solino-Fernandez

Earth system models are fundamental to understanding and projecting climate change, although there are considerable biases and uncertainties in their projections. A large contribution to this uncertainty stems from differences in the representation of clouds and convection occurring at scales smaller than the resolved model grid. These long-standing deficiencies in cloud parameterizations have motivated the development of computationally costly global high-resolution cloud-resolving models that can explicitly resolve clouds and convection. Deep learning can learn such explicitly resolved processes from cloud-resolving models. While unconstrained neural networks often learn non-physical relationships that can lead to instabilities in climate simulations, causally-informed deep learning can mitigate this problem by identifying direct physical drivers of subgrid-scale processes. Both unconstrained and causally-informed neural networks are developed using a superparameterized climate model in which deep convection is explicitly resolved, and are coupled to the climate model. Prognostic climate simulations with the causally-informed neural network parameterization are stable, accurately represent the mean climate and variability of the original climate model, and clearly outperform their non-causal counterpart. Combining causal discovery and deep learning is a promising approach to improve data-driven parameterizations (informed by causally-consistent physical fields) for both their design and trustworthiness.

How to cite: Iglesias-Suarez, F., Eyring, V., Gentine, P., Beucler, T., Pritchard, M., Runge, J., and Solino-Fernandez, B.: The key role of causal discovery to improve data-driven parameterizations in climate models, EGU General Assembly 2023, Vienna, Austria, 23–28 Apr 2023, EGU23-6450, https://doi.org/10.5194/egusphere-egu23-6450, 2023.

09:10–09:20 | EGU23-16846 | ECS | On-site presentation
Emiliano Díaz, Gherardo Varando, Fernando Iglesias-Suarez, Gustau Camps-Valls, Kenza Tazi, Kara Lamb, and Duncan Watson-Parris

Discovering causal relationships from purely observational data is often not possible. In this case, combining observational and experimental data can allow for the identifiability of the underlying causal structure. In the Earth system sciences, carrying out interventional experiments is often impossible for ethical and practical reasons. However, “natural interventions” are often present in the data; these represent regime changes caused by changes to exogenous drivers. In [4,5], the Invariant Causal Prediction (ICP) methodology was presented to identify the causes of a target variable of interest from a set of candidate causes. This methodology takes advantage of natural interventions, which result in different cause-variable distributions across different environments. In [2] this methodology is applied to a geoscience problem, namely identifying the causes of pyrocumulonimbus (pyroCb), storm clouds resulting from extreme wildfires. Although a set of plausible causes is produced, certain heuristic adaptations to the original ICP methodology were needed to overcome some of its practical limitations: the large number of hypothesis tests required and a failure to identify causes when these are highly interdependent. In this work, we try to circumvent these difficulties by taking a different approach. We use a learning paradigm similar to that presented in [3] to learn causal representations that are invariant across different environments. Since we often do not know exactly how best to define the different environments, we also propose to learn functions that describe their spatiotemporal extent. We apply the resulting algorithm to the pyroCb database in [1] and other Earth system science datasets to verify the plausibility of the causal representations found and of the environments that describe the so-called natural interventions.

 

[1] Tazi et al., 2022. https://arxiv.org/abs/2211.13052

[2] Díaz et al., 2022. https://arxiv.org/abs/2211.08883

[3] Arjovsky et al., 2019. https://arxiv.org/abs/1907.02893

[4] Peters et al., 2016. https://www.jstor.org/stable/4482904

[5] Heinze-Deml et al., 2018. https://doi.org/10.1515/jci-2017-0016

How to cite: Díaz, E., Varando, G., Iglesias-Suarez, F., Camps-Valls, G., Tazi, K., Lamb, K., and Watson-Parris, D.: Learning causal drivers of PyroCb, EGU General Assembly 2023, Vienna, Austria, 23–28 Apr 2023, EGU23-16846, https://doi.org/10.5194/egusphere-egu23-16846, 2023.

09:20–09:30 | EGU23-10568 | ECS | On-site presentation
Zheng Wu, Tom Beucler, and Daniela Domeisen

Extreme stratospheric events such as extremely weak vortex events and strong vortex events can influence weather in the troposphere from weeks to months and are thus important sources of predictability of tropospheric weather on subseasonal to seasonal (S2S) timescales. However, the predictability of weak vortex events is limited to 1–2 weeks in state-of-the-art forecasting systems, while strong vortex events are more predictable than weak vortex events. Longer predictability timescales for stratospheric extreme events would benefit long-range surface weather prediction. Recent studies have shown promising results in the use of machine learning for improving weather prediction. The goal of this study is to explore the potential of a machine learning approach for extending the predictability of stratospheric extreme events on S2S timescales. We use neural networks (NNs) to predict the monthly stratospheric polar vortex strength with lead times of up to five months, using as precursors the first five principal components (PCs) of sea surface temperature (SST), mean sea level pressure (MSLP), Barents–Kara sea-ice concentration (BK-SIC), poleward heat flux at 100 hPa, and zonal wind at 50, 30, and 2 hPa. These physical variables are chosen because previous studies indicate them as potential precursors of stratospheric extremes. The results show that the accuracy and Brier skill score decrease with longer lead times and that performance is similar between weak and strong vortex events. We then employ two different NN attribution methods to uncover feature importance (heat maps) in the inputs, indicating the relevance of each input for the NNs' predictions.
The heat maps suggest that precursors from the lower stratosphere are important for predicting the stratospheric polar vortex strength at a lead time of one month, while precursors at the surface and in the upper stratosphere become more important at lead times longer than one month. This result is overall consistent with previous studies suggesting that subseasonal precursors of stratospheric extreme events may come from the lower troposphere. Our study sheds light on the potential of explainable NNs for identifying opportunities for skillful prediction of stratospheric extreme events and, by extension, surface weather on S2S timescales.
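The predictor design described above (leading PCs of climate fields feeding a lagged statistical model) reduces to a few linear-algebra steps. Everything below is a toy stand-in: a random field with one slow shared mode replaces the SST/MSLP/heat-flux predictors, and ordinary least squares replaces the NNs:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy stand-ins: a gridded predictor field and a scalar "vortex strength" index.
n_time, n_grid, n_pc, lead = 400, 100, 5, 1
field = rng.normal(size=(n_time, n_grid))
field += np.sin(np.arange(n_time) / 20)[:, None]   # one shared slow mode
index = field.mean(axis=1) + 0.3 * rng.normal(size=n_time)

# Leading principal components of the anomaly field via SVD.
anom = field - field.mean(axis=0)
u, s, vt = np.linalg.svd(anom, full_matrices=False)
pcs = u[:, :n_pc] * s[:n_pc]

# Lagged linear model: PCs at time t predict the index at t + lead.
X = np.column_stack([np.ones(n_time - lead), pcs[:-lead]])
coef, *_ = np.linalg.lstsq(X, index[lead:], rcond=None)
skill = np.corrcoef(X @ coef, index[lead:])[0, 1]
print(round(skill, 2))   # correlation skill of the lagged prediction
```

In the study itself the lead times extend to five months and the mapping is nonlinear; the attribution step then asks which of the input PCs the fitted model actually relies on.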

How to cite: Wu, Z., Beucler, T., and Domeisen, D.: Extended-range predictability of stratospheric extreme events using explainable neural networks, EGU General Assembly 2023, Vienna, Austria, 23–28 Apr 2023, EGU23-10568, https://doi.org/10.5194/egusphere-egu23-10568, 2023.

09:30–09:40 | EGU23-12528 | ECS | On-site presentation
Philine Bommer, Marlene Kretschmer, Anna Hedstroem, Dilyara Bareeva, and Marina M.-C. Hoehne

Explainable artificial intelligence (XAI) methods help researchers shed light on the reasons behind the predictions made by deep neural networks (DNNs). XAI methods have already been successfully applied in climate science, revealing underlying physical mechanisms inherent in the studied data. However, the evaluation and validation of XAI performance is challenging, as explanation methods often lack ground truth. As the number of XAI methods grows, a comprehensive evaluation is necessary to enable well-founded XAI application in climate science.

In this work we introduce explanation evaluation in the context of climate research. We apply XAI evaluation to compare multiple explanation methods for a multi-layer perceptron (MLP) and a convolutional neural network (CNN). Both the MLP and the CNN assign temperature maps to classes based on their decade. We assess the respective explanation methods using evaluation metrics measuring robustness, faithfulness, randomization, complexity and localization. Based on the results of a random baseline test, we establish an explanation evaluation guideline for the climate community. We use this guideline to rank the performance in each property of similar sets of explanation methods for the MLP and CNN. Independent of the network type, we find that Integrated Gradients, Layer-wise relevance propagation and InputGradients exhibit higher robustness, faithfulness and complexity than purely gradient-based methods, while sacrificing reactivity to network parameters, i.e. showing low randomization scores. The contrary holds for Gradient, SmoothGrad, NoiseGrad and FusionGrad. Another key observation is that explanations using input perturbations, such as SmoothGrad and Integrated Gradients, do not improve robustness and faithfulness, in contrast to theoretical claims. Our experiments highlight that XAI evaluation can be applied to different network tasks and offers more detailed information about different properties of explanation methods than previous research. We demonstrate that using XAI evaluation helps to tackle the challenge of choosing an explanation method.
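Of the evaluation properties listed above, robustness is the easiest to sketch: a max-sensitivity-style metric records the worst observed change of an explanation under small input perturbations. The quadratic toy model and its analytic gradient explanation below are hypothetical, far simpler than the MLP/CNN setting of the study:

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical differentiable "model" with an analytic gradient explanation.
W = rng.normal(size=(10, 10))

def model(x):
    return float(x @ W @ x)        # scalar prediction score

def explain(x):
    return (W + W.T) @ x           # exact input gradient of the score

def max_sensitivity(explain_fn, x, radius=0.1, n_samples=50):
    """Robustness metric: worst observed change of the explanation under
    small random input perturbations (smaller = more robust)."""
    e0 = explain_fn(x)
    worst = 0.0
    for _ in range(n_samples):
        d = rng.normal(size=x.shape)
        d *= radius / np.linalg.norm(d)
        worst = max(worst, float(np.linalg.norm(explain_fn(x + d) - e0)))
    return worst

x = rng.normal(size=10)
print(max_sensitivity(explain, x))
```

For a linear explanation function this quantity is bounded by the spectral norm of the explanation's Jacobian times the perturbation radius; for DNN explanations it has to be estimated by sampling, as here.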

How to cite: Bommer, P., Kretschmer, M., Hedstroem, A., Bareeva, D., and Hoehne, M. M.-C.: Evaluation of explainable AI solutions in climate science, EGU General Assembly 2023, Vienna, Austria, 23–28 Apr 2023, EGU23-12528, https://doi.org/10.5194/egusphere-egu23-12528, 2023.

09:40–09:50 | EGU23-102 | ECS | On-site presentation
Shan He, Song Yang, and Dake Chen

Large-scale climate variability is analysed, modelled, and predicted mainly with general circulation models and low-dimensional association analysis. The models' equational basis makes it difficult to produce mathematical analysis results and clear interpretations, whereas association analysis cannot establish causation sufficiently to make invariant predictions. However, the macroscale causal structures of the climate system may accomplish the tasks of analysis, modelling, and prediction according to the concepts of causal emergence and the invariance of causal predictions.

Under the assumptions of no unobserved confounders and linear Gaussian models, we examine whether the macroscale causal structures of the climate system can be inferred not only to model but also to predict large-scale climate variability. Specifically, first, we obtain the causal structures of the macroscale air-sea interactions of El Niño–Southern Oscillation (ENSO), which are interpretable in terms of physics. The structural causal models constructed accordingly can model the ENSO diversity realistically and predict the ENSO variability. Second, this study identifies the joint effect of ENSO and three other winter climate phenomena on the interannual variability of the East Asian summer monsoon. Using regression, these causal precursors can predict the monsoon one season ahead, outperforming association-based empirical models and several climate models. Third, we introduce a framework that infers ENSO's air-sea interactions from high-dimensional data sets. The framework is based on aggregating the causal discovery results of bootstrap samples to improve high-dimensional variable selection, and on spatial-dimension reduction to allow for clear interpretations at the macroscale.

While further integration with nonlinear non-Gaussian models will be necessary to establish the full benefits of inferring causal structures as a standard practice in research and operational predictions, our study may offer a route to providing concise explanations of the climate system and reaching accurate invariant predictions.

How to cite: He, S., Yang, S., and Chen, D.: Inferring Causal Structures to Model and Predict ENSO and Its Effect on Asian Summer Monsoon, EGU General Assembly 2023, Vienna, Austria, 23–28 Apr 2023, EGU23-102, https://doi.org/10.5194/egusphere-egu23-102, 2023.

09:50–10:00 | EGU23-3457 | Virtual presentation
Ranjini Swaminathan, Tristan Quaife, and Richard Allan

The presence and amount of vegetation in any given region controls Gross Primary Production (GPP), the flux of carbon into the land driven by photosynthesis. Earth System Models (ESMs) give us the ability to simulate GPP by modelling the various interactions between the atmosphere and biosphere, including under different future climate change scenarios. GPP is the largest flux of the global carbon cycle and plays an important role in, for example, carbon budget calculations. However, GPP estimates from ESMs not only vary widely but also carry much uncertainty about the underlying drivers of this variability.

We use data from pre-industrial control (piControl) simulations, both to take advantage of the longer time period available for sampling and to exclude the influence of anthropogenic forcing on GPP estimation, leaving GPP largely attributable to two factors: (a) the input atmospheric forcings and (b) the processes using those input climate variables to diagnose GPP.

We explore the processes determining GPP with a physically-guided Machine Learning framework applied to a set of Earth System Models (ESMs) from the Sixth Coupled Model Intercomparison Project (CMIP6). We use this framework to examine whether differences in GPP across models are caused by differences in atmospheric state or process representations. 

Results from our analysis show that models with similar regional atmospheric forcing do not always have similar GPP distributions. While there are regions where climate models largely agree on what atmospheric variables are most relevant for GPP, there are regions such as the tropics where there is more uncertainty.  Our analysis highlights the potential of ML to identify differences in atmospheric forcing and carbon cycle process modelling across current state-of-the-art ESMs. It also allows us to extend the analysis with observational estimates of forcings as well as GPP for model improvement. 

How to cite: Swaminathan, R., Quaife, T., and Allan, R.: Evaluating Vegetation Modelling in Earth System Models with Machine Learning Approaches, EGU General Assembly 2023, Vienna, Austria, 23–28 Apr 2023, EGU23-3457, https://doi.org/10.5194/egusphere-egu23-3457, 2023.

10:00–10:10 | EGU23-6306 | ECS | Highlight | On-site presentation
Arthur Grundner, Tom Beucler, Pierre Gentine, Marco A. Giorgetta, Fernando Iglesias-Suarez, and Veronika Eyring

A promising approach to improve cloud parameterizations within climate models, and thus climate projections, is to train machine learning algorithms on storm-resolving model (SRM) output. The ICOsahedral Non-hydrostatic (ICON) modeling framework permits simulations ranging from numerical weather prediction to climate projections, making it an ideal target to develop data-driven parameterizations for sub-grid scale processes. Here, we systematically derive and evaluate the first data-driven cloud cover parameterizations with coarse-grained data based on ICON SRM simulations. These parameterizations range from simple analytic models and symbolic regression fits to neural networks (NNs), populating a performance × complexity plane. In most models, we enforce sparsity and discourage correlated features by sequentially selecting features based on the models' performance gains. Guided by a set of physical constraints, we use symbolic regression to find a novel equation to parameterize cloud cover. The equation represents a good compromise between performance and complexity, achieving the highest performance (R^2 > 0.9) for its complexity (13 trainable parameters). To model sub-grid scale cloud cover in its full complexity, we also develop three different types of NNs that differ in the degree of vertical locality they assume for diagnosing cloud cover from coarse-grained atmospheric state variables. Using the game-theory-based interpretability library SHapley Additive exPlanations, we analyze our most non-local NN and identify an overemphasis on specific humidity and cloud ice as the reason why it cannot perfectly generalize from the global to the regional coarse-grained SRM data. The interpretability tool also helps visualize similarities and differences in feature importance between regionally and globally trained NNs, and reveals a local relationship between their cloud cover predictions and the thermodynamic environment.
Our results show the potential of deep learning and symbolic regression to derive accurate yet interpretable cloud cover parameterizations from SRMs.
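The sequential feature selection mentioned above (adding features one at a time based on performance gain, which enforces sparsity and discourages correlated inputs) can be sketched as a greedy forward selector. The linear R^2 criterion and the synthetic features are illustrative stand-ins, not the cloud-cover models:

```python
import numpy as np

rng = np.random.default_rng(4)

def r2(X, y):
    # Coefficient of determination of a least-squares fit with intercept.
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    res = y - X1 @ beta
    return 1.0 - float(res @ res) / float((y - y.mean()) @ (y - y.mean()))

def forward_select(X, y, min_gain=0.01):
    """Greedy forward selection: repeatedly add the feature with the largest
    R^2 gain; stop when the gain drops below min_gain. This yields sparse
    models and skips near-duplicate (correlated) features."""
    chosen, best = [], 0.0
    while len(chosen) < X.shape[1]:
        gains = {j: r2(X[:, chosen + [j]], y) - best
                 for j in range(X.shape[1]) if j not in chosen}
        j = max(gains, key=gains.get)
        if gains[j] < min_gain:
            break
        chosen.append(j)
        best += gains[j]
    return chosen, best

# Toy data: features 0 and 3 matter; feature 1 nearly duplicates feature 0.
n = 1000
X = rng.normal(size=(n, 6))
X[:, 1] = X[:, 0] + 0.01 * rng.normal(size=n)
y = 2.0 * X[:, 0] - X[:, 3] + 0.1 * rng.normal(size=n)

chosen, score = forward_select(X, y)
print(chosen, round(score, 3))   # one of {0, 1} plus feature 3
```

Once the first of two near-duplicate features is in the model, the second offers almost no gain and is never selected, which is the decorrelation effect the abstract describes.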

How to cite: Grundner, A., Beucler, T., Gentine, P., Giorgetta, M. A., Iglesias-Suarez, F., and Eyring, V.: Data-Driven Cloud Cover Parameterizations, EGU General Assembly 2023, Vienna, Austria, 23–28 Apr 2023, EGU23-6306, https://doi.org/10.5194/egusphere-egu23-6306, 2023.

Coffee break
Chairpersons: Marlene Kretschmer, Gustau Camps-Valls
General Session
10:45–10:55 | EGU23-4350 | ECS | On-site presentation
Paula Harder, Venkatesh Ramesh, Alex Hernandez-Garcia, Qidong Yang, Prasanna Sattigeri, Daniela Szwarcman, Campbell Watson, and David Rolnick

The availability of reliable, high-resolution climate and weather data is important to inform long-term decisions on climate adaptation and mitigation and to guide rapid responses to extreme events. Forecasting models are limited by computational costs and, therefore, often generate coarse-resolution predictions. Statistical downscaling can provide an efficient method of upsampling low-resolution data. In this field, deep learning has been applied successfully, often using image super-resolution methods from computer vision. However, despite achieving visually compelling results in some cases, such models frequently violate conservation laws when predicting physical variables. In order to conserve physical quantities, we develop methods that guarantee physical constraints are satisfied by a deep learning downscaling model while also improving their performance according to traditional metrics. We compare different constraining approaches and demonstrate their applicability across different neural architectures as well as a variety of climate and weather data sets, including ERA5 and WRF data sets.
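One simple way to make a downscaling model conservation-aware, in the spirit of the constraints described above, is a final correction layer that rescales each high-resolution patch so that its mean matches the corresponding coarse cell. This numpy sketch is an assumed multiplicative variant, not necessarily one of the constraining approaches compared in the study:

```python
import numpy as np

rng = np.random.default_rng(5)

def enforce_conservation(hi, lo, k):
    """Rescale each k x k patch of the super-resolved field so that its mean
    equals the corresponding coarse cell (multiplicative correction, suited
    to non-negative fields such as precipitation)."""
    H, W = hi.shape
    patches = hi.reshape(H // k, k, W // k, k)
    means = patches.mean(axis=(1, 3))
    # Guard against all-zero patches; they stay zero.
    scale = np.where(means > 0, lo / np.maximum(means, 1e-12), 0.0)
    return (patches * scale[:, None, :, None]).reshape(H, W)

# Hypothetical stand-ins: a coarse input and a network's raw high-res output.
k = 4
lo = rng.uniform(0.1, 5.0, size=(8, 8))
hi_raw = np.repeat(np.repeat(lo, k, axis=0), k, axis=1)
hi_raw = hi_raw * rng.uniform(0.5, 1.5, size=hi_raw.shape)   # imperfect detail

hi = enforce_conservation(hi_raw, lo, k)
print(np.allclose(hi.reshape(8, k, 8, k).mean(axis=(1, 3)), lo))  # True
```

An additive correction (adding the coarse-cell deficit to every pixel) enforces the same constraint exactly but can produce negative values, which is one reason a multiplicative form is natural for precipitation-like fields.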

How to cite: Harder, P., Ramesh, V., Hernandez-Garcia, A., Yang, Q., Sattigeri, P., Szwarcman, D., Watson, C., and Rolnick, D.: Physics-Constrained Deep Learning for Downscaling, EGU General Assembly 2023, Vienna, Austria, 23–28 Apr 2023, EGU23-4350, https://doi.org/10.5194/egusphere-egu23-4350, 2023.

10:55–11:05 | EGU23-15286 | ECS | On-site presentation
Luca Glawion, Julius Polz, Benjamin Fersch, Harald Kunstmann, and Christian Chwala

Natural disasters caused by cyclones, hail, landslides or floods are directly related to precipitation. Global climate models are an important tool for adapting to these hazards in a future climate. However, they operate on spatial and temporal discretizations that limit their ability to adequately reflect these fast-evolving, highly localized phenomena, which has led to the development of various downscaling approaches.

Conditional generative adversarial networks (cGANs) have recently been applied as a promising downscaling technique to improve the spatial resolution of climate data. The ability of GANs to generate ensembles of solutions from random perturbations can be used to account for the stochasticity of climate data and to quantify uncertainties.

We present a cGAN that downscales not only the spatial but simultaneously also the temporal dimension of precipitation data, a so-called video super-resolution approach. 3D convolutional layers are exploited for extracting and generating temporally consistent rain events with realistic fine-scale structure. We downscale coarsened, gauge-adjusted and climatology-corrected precipitation data for Germany from a spatial resolution of 32 km to 2 km and a temporal resolution of 1 hr to 10 min, applying a novel training routine using partly normalized and logarithmized data, which allows for improved extreme value statistics of the generated fields.

Exploiting the fully convolutional nature of our model, we can generate downscaled maps for the whole of Germany in a single downscaling step at low latency. The evaluation of these maps using a spatial and temporal power spectrum analysis shows that the generated temporal and spatial structures are in high agreement with the reference. Visually, the generated temporally evolving and advecting rain events are hardly classifiable as artificially generated. The model also shows high skill regarding pixel-wise error and the localization of high precipitation intensities, as measured by the fractions skill score (FSS), the continuous ranked probability score (CRPS), the Kolmogorov–Smirnov statistic (KS) and the RMSE. Because the downscaling problem is underdetermined, a probabilistic cGAN approach yields additional information relative to the deterministic models we use for comparison. The method is also capable of preserving the climatology, e.g., expressed as the annual precipitation sum. Investigations of temporal aggregations of the downscaled fields revealed an interesting effect: structures generated by networks with convolutional layers are not placed completely at random but can recur, and such recurrent structures can also be discovered in other prominent DL downscaling models. Although they can be mitigated by adequate model selection, their occurrence remains an open research question.

We conclude that our proposed approach extends the application of cGANs for downscaling to the time dimension and, owing to its high performance and computational efficiency, is a promising complement to conventional downscaling methods.

How to cite: Glawion, L., Polz, J., Fersch, B., Kunstmann, H., and Chwala, C.: Spatio-temporal downscaling of precipitation data using a conditional generative adversarial network, EGU General Assembly 2023, Vienna, Austria, 23–28 Apr 2023, EGU23-15286, https://doi.org/10.5194/egusphere-egu23-15286, 2023.

11:05–11:15
|
EGU23-4044
|
ECS
|
On-site presentation
Björn Lütjens, Patrick Alexander, Raf Antwerpen, Guido Cervone, Matthew Kearney, Bingkun Luo, Dava Newman, and Marco Tedesco

Motivation. Ice melting in Greenland and Antarctica has increasingly contributed to rising sea levels. Yet the exact speed of melting, the existence of abrupt tipping points, and the detailed links to climate change remain uncertain. Ice shelves essentially prevent the ice sheet from slipping into the ocean, and better prediction of their collapse is needed. Meltwater at the surface of ice shelves signals potential collapse, destabilizing shelves via fracturing and flexural processes (Banwell et al., 2013), and is likely amplified by a warming climate (Kingslake et al., 2017). Maps of meltwater have been created from in-situ and remote observations, but their low and irregular spatiotemporal resolution severely limits studies (Kingslake et al., 2019).

Research Gap. In particular, no daily high-resolution (< 500 m) maps of surface meltwater exist. We propose the first daily high-resolution surface meltwater maps by developing a deep learning-based downscaling method, called DailyMelt, that fuses observations and simulations of varying spatiotemporal resolution, as illustrated in Fig. 1. The created maps will improve understanding of the origin, transport, and controlling physical processes of surface meltwater. Moreover, they will act as a unified source to improve sea level rise and meltwater predictions in climate models. 

Data. To synthesize surface meltwater maps, we leverage observations from satellites (MODIS, Sen-1 SAR) which are high-resolution (500m, 10m), but have substantial temporal gaps due to repeat time and cloud coverage. We fuse them with simulations (MAR) and passive microwave observations (MEaSURE) that are daily, but low-resolution (6km, 3.125km). In a significant remote sensing effort, we have downloaded, reprojected, and regridded all products into daily observations for our study area over Greenland’s Helheim glacier. 

Approach and expected results. Within deep generative vision models, diffusion-based models promise sharp and probabilistic predictions. We have implemented SRDiff (Li H. et al., 2022) and tested it on spatially downscaling external data. As a baseline model, we have implemented a statistical downscaling model that is a local hybrid physics-linear regression model (Noel et al., 2016). In our planned benchmark, we expect a baseline UNet architecture that minimizes RMSE to create blurry maps and a generative adversarial network that minimizes adversarial loss to create sharp but deterministic maps. We have started with spatial downscaling and will include temporal downscaling. 
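
To make the diffusion-based approach concrete: SRDiff-style models learn to denoise the residual between the upsampled low-resolution field and the high-resolution target. The numpy sketch below shows the forward noising process and the resulting training pair; the noise schedule, field size, and timestep are illustrative assumptions, not values from the abstract:

```python
import numpy as np

# Forward process of a DDPM-style model applied to the super-resolution
# residual r0 = high_res - upsample(low_res). The denoiser is trained to
# recover the noise eps from (r_t, t, low_res conditioning).
rng = np.random.default_rng(4)
T = 1000
betas = np.linspace(1e-4, 0.02, T)          # common linear schedule (assumed)
alpha_bar = np.cumprod(1.0 - betas)         # cumulative signal retention

def noisy_residual(r0, t):
    """Sample r_t ~ q(r_t | r_0) and return the (input, target) training pair."""
    eps = rng.normal(size=r0.shape)
    r_t = np.sqrt(alpha_bar[t]) * r0 + np.sqrt(1.0 - alpha_bar[t]) * eps
    return r_t, eps

r0 = rng.normal(size=(16, 16))              # residual field placeholder
r_t, eps = noisy_residual(r0, t=500)
print(r_t.shape)
```

Because the model predicts a residual rather than the full field, the upsampled low-resolution input already carries the large-scale structure, and the diffusion model only has to add plausible fine-scale detail.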

In summary, we will create the first daily high-resolution (500m) surface meltwater maps, have introduced the first diffusion-based model for downscaling Earth sciences data, and have created the first benchmark dataset for downscaling surface meltwater maps.

 

References.

Banwell, A. F., et al. (2013), Breakup of the Larsen B Ice Shelf triggered by chain reaction drainage of supraglacial lakes, Geophys. Res. Lett., 40 

Kingslake J, et al. (2017), Widespread movement of meltwater onto and across Antarctic ice shelves, Nature, 544(7650)

Kingslake J., et al. (2019), Antarctic Surface Hydrology and Ice Shelf Stability Workshop report, US Antarctic Program Data Center

Li H., et al. (2022), SRDiff: Single image super-resolution with diffusion probabilistic models, Neurocomputing, 479

Noël, B., et al. (2016), A daily, 1 km resolution data set of downscaled Greenland ice sheet surface mass balance (1958–2015), The Cryosphere, 10

How to cite: Lütjens, B., Alexander, P., Antwerpen, R., Cervone, G., Kearney, M., Luo, B., Newman, D., and Tedesco, M.: DailyMelt: Diffusion-based Models for Spatiotemporal Downscaling of (Ant-)arctic Surface Meltwater Maps, EGU General Assembly 2023, Vienna, Austria, 23–28 Apr 2023, EGU23-4044, https://doi.org/10.5194/egusphere-egu23-4044, 2023.

11:15–11:25
|
EGU23-1502
|
ECS
|
On-site presentation
Naomi Simumba and Michiaki Tatsubori

Transfer learning is a technique wherein information learned by previously trained models is applied to new learning tasks. Typically, weights learned by a network pretrained on other datasets are copied or transferred to new networks. These new networks, or downstream models, are then used for assorted tasks. Foundation models extend this concept by training models on large datasets. Such models gain a contextual understanding which can then be used to improve the performance of downstream tasks in different domains. Common examples include GPT-3 in the field of natural language processing and ImageNet-trained models in the field of computer vision.

Beyond its high rate of data collection, satellite data also has a wide range of meaningful applications including climate impact modelling and sustainable energy. This makes foundation models trained on satellite data very beneficial as they would reduce the time, data, and computational resources required to obtain useful downstream models for these applications.

However, satellite data models differ from typical computer vision models in a crucial way. Because several types of satellite data exist, each with its own benefits, a typical use case for satellite data involves combining multiple data inputs in configurations that are not readily apparent during pretraining of the foundation model. Essentially, this means that the downstream application may have a different number of input channels from the pretrained model, which raises the question of how to successfully transfer information learned by the pretrained model to the downstream application.

This research proposes and examines several architectures for the downstream model that allow pretrained weights to be incorporated when a different number of input channels is required. For evaluation, models pretrained with self-supervised learning on precipitation data are applied to a downstream model that conducts temporal interpolation of precipitation data and requires two inputs. The effect of including a perceptual loss to enhance model performance is also evaluated. These findings can guide adaptation for applications ranging from flood modeling to land use detection and beyond.
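
One widely used strategy for reusing pretrained first-layer weights when the downstream input has a different channel count is channel inflation: average the pretrained input channels and replicate the mean, rescaled to preserve the expected activation magnitude. The numpy sketch below illustrates that single strategy only; it is not a reconstruction of the architectures proposed in the abstract:

```python
import numpy as np

def inflate_first_layer(pretrained_w, n_new_channels):
    """Adapt first-layer conv weights of shape (out, in, kh, kw) to a new
    number of input channels by replicating the channel-mean kernel and
    rescaling so the summed response to identical inputs is unchanged."""
    out_c, in_c, kh, kw = pretrained_w.shape
    mean_w = pretrained_w.mean(axis=1, keepdims=True)   # (out, 1, kh, kw)
    new_w = np.repeat(mean_w, n_new_channels, axis=1)   # (out, new, kh, kw)
    return new_w * (in_c / n_new_channels)

w = np.random.default_rng(0).normal(size=(8, 3, 3, 3))  # e.g. 3-channel pretrained
w2 = inflate_first_layer(w, 2)                          # two-input downstream model
print(w2.shape)  # → (8, 2, 3, 3)
```

With this rescaling, the summed kernel weight per output filter is preserved, so a downstream model fed duplicated inputs initially behaves like the pretrained one.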

How to cite: Simumba, N. and Tatsubori, M.: Adapting Transfer Learning for Multiple Channels in Satellite Data Applications, EGU General Assembly 2023, Vienna, Austria, 23–28 Apr 2023, EGU23-1502, https://doi.org/10.5194/egusphere-egu23-1502, 2023.

11:25–11:35
|
EGU23-14856
|
ECS
|
Virtual presentation
Andrii Antonenko, Viacheslav Boretskij, and Oleksandr Zagaria

Air pollution has become an integral part of modern life. The main source of air pollution can be considered combustion processes associated with energy-intensive corporate activities. Energy companies consume about one-third of the fuel produced and are a significant source of air pollution [1]. State and public air quality monitoring networks were created to monitor the situation. Public monitoring networks are cheaper and have wider coverage than governmental ones. Although the state monitoring system provides more accurate data, an inexpensive network is sufficient to inform the public about the presence or absence of pollution (air quality). With this in mind, the idea arose to test the possibility of detecting types of pollution using data from low-cost air quality monitoring sensors. In general, to use a low-cost sensor for measurements, it must first be calibrated (corrected) by comparing its readings with a reference device. Various mathematical methods can be used for this. One such method is neural network training, which has proven well suited for correcting PM particle readings for the impact of relative humidity [2].

The idea of using a neural network to improve data quality is not new, but it is quite promising, as the authors showed in [3]. The main difficulty in implementing this method is obtaining a reliable dataset for training the network. For this, it is necessary to record sensor readings for relatively clean air and for artificially generated or known sources of pollution. A neural network trained on the collected data can then be used to determine (classify) types of air: with pollution (pollutant) or without. To this end, an experiment was set up in the "ReLab" co-working space at the Taras Shevchenko National University of Kyiv. The sensors were placed in a closed box with airflow ventilation. The ZPHS01B [4] sensor module was used for in-box measurements, along with calibrated PMS7003 [5] and BME280 [6] sensors. Additionally, an IPS 7100 [7] and an SPS30 [8] were added to enrich the database for ML training. A platform based on the HiLink 7688 was used for data collection, processing, and transmission.

Data was measured every two seconds, independently from each sensor. Before each experiment, the room was ventilated to avoid influence on the next series of experiments.

References

1. Zaporozhets A. Analysis of means for monitoring air pollution in the environment. Science-based technologies. 2017, Vol. 35, no. 3, 242–252. DOI: 10.18372/2310-5461.35.11844

2. Antonenko A, (2021) Correction of fine particle concentration readings depending on relative humidity, [Master's thesis, Taras Shevchenko National University of Kyiv], 35 pp.

3. Lee, J. Kang, S. Kim, Y. Im, S. Yoo, D. Lee, “Long-Term Evaluation and Calibration of Low-Cost Particulate Matter (PM) Sensor”, Sensors 2020, vol. 20, 3617, 24 pp., 2020.

4. ZPHS01B Datasheet URL: https://pdf1.alldatasheet.com/datasheet-pdf/view/1303697/WINSEN/ZPHS01B.html

5. Plantower PMS7003 Datasheet URL: https://www.espruino.com/datasheets/PMS7003.pdf

6. Bosch 280 Datasheet URL: https://www.mouser.com/datasheet/2/783/BST-BME280-DS002-1509607.pdf

7. https://pierasystems.com/intelligent-particle-sensors/

8. https://sensirion.com/products/catalog/SPS30/

How to cite: Antonenko, A., Boretskij, V., and Zagaria, O.: Classification of Indoor Air Pollution Using Low-cost Sensors by Machine Learning, EGU General Assembly 2023, Vienna, Austria, 23–28 Apr 2023, EGU23-14856, https://doi.org/10.5194/egusphere-egu23-14856, 2023.

11:35–11:45
|
EGU23-10431
|
On-site presentation
Suyeon Choi and Yeonjoo Kim

Physical process-based numerical weather prediction models (NWPs) and radar-based probabilistic methods have traditionally been used for short-term precipitation prediction. Recently, radar-based precipitation nowcasting models using advanced machine learning (ML) have been actively developed. Although ML-based models show outstanding performance in short-term rainfall prediction, their performance decreases significantly with increasing lead time, and they have the limitation of being black-box models that do not consider the physical processes of the atmosphere. To address these limitations, we aimed to develop a hybrid precipitation nowcasting model that combines NWP and an advanced ML-based model via an ML-based ensemble method. The Weather Research and Forecasting (WRF) model was used as the NWP to generate physics-based rainfall forecasts. We developed the ML-based precipitation nowcasting model with a conditional generative adversarial network (cGAN), which shows high performance in image generation tasks. Radar reflectivity data, WRF hindcast meteorological outputs (e.g., temperature and wind speed), and static information on the target basin (e.g., DEM, land cover) were used as input data for the cGAN-based model to generate physics-informed rainfall predictions at lead times of up to 6 hours. The cGAN-based model was trained with data for the summer seasons of 2014-2017. In addition, we proposed an ML-based blending method, i.e., XGBoost, that combines the cGAN-based model results and the WRF forecast results. To evaluate the hybrid model's performance, we analyzed precipitation predictions for three heavy rain events in South Korea. The results confirmed that using the blending method to develop a hybrid model can provide an improved precipitation nowcasting approach.
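
The blending step can be illustrated with a toy stand-in: the abstract uses XGBoost, but a least-squares linear blend already shows how learning to combine the two forecasts against observations can outperform either input. All data below are synthetic placeholders:

```python
import numpy as np

# Toy stand-in for the ML-based blending step: learn how to weight the
# cGAN nowcast against the WRF forecast given observed rainfall. The
# abstract uses XGBoost; a linear least-squares blend illustrates the idea.
rng = np.random.default_rng(1)
truth = rng.gamma(2.0, 1.5, size=500)            # "observed" rainfall
cgan = truth + rng.normal(0.0, 0.5, 500)         # skilful nowcast (synthetic)
wrf = truth + rng.normal(0.0, 1.5, 500)          # noisier physics forecast

X = np.column_stack([cgan, wrf, np.ones_like(truth)])
coef, *_ = np.linalg.lstsq(X, truth, rcond=None)  # blend weights + bias
blended = X @ coef

rmse = lambda a: np.sqrt(np.mean((a - truth) ** 2))
print(rmse(blended) <= min(rmse(cgan), rmse(wrf)))  # → True (in sample)
```

In practice the blend weights would vary with lead time and predictors such as rain intensity, which is where a tree-based learner like XGBoost adds value over a fixed linear combination.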

 

Acknowledgements

 This work was supported by a grant from the National Research Foundation of Korea funded by the Ministry of Science, ICT & Future Planning (2020R1A2C2007670).

How to cite: Choi, S. and Kim, Y.: Developing hybrid precipitation nowcasting model with WRF and conditional GAN-based model, EGU General Assembly 2023, Vienna, Austria, 23–28 Apr 2023, EGU23-10431, https://doi.org/10.5194/egusphere-egu23-10431, 2023.

11:45–11:55
|
EGU23-5431
|
ECS
|
On-site presentation
|
Shanning Bao, Nuno Carvalhais, Lazaro Alonso, Siyuan Wang, Johannes Gensheimer, Ranit De, and Jiancheng Shi

Photosynthesis model parameters represent vegetation properties or sensitivities of photosynthetic processes. As one of the sources of model uncertainty, parameters affect the accuracy and generalizability of a model. Ideally, parameters of ecosystem-level photosynthesis models, i.e., gross primary productivity (GPP) models, can be measured or inverted from observations at the local scale. To extrapolate parameters to larger spatial scales, current photosynthesis models typically adopt fixed values or plant-functional-type (PFT)-specific values. However, fixed and PFT-based parameterization approaches cannot sufficiently capture the spatial variability of parameters and lead to significant estimation errors. Here, we propose a Simultaneous Parameter Inversion and Extrapolation approach (SPIE) to overcome these issues. 

SPIE refers to predicting model parameters using an artificial neural network (NN) constrained by both the model loss and ecosystem features, including PFT, climate type, bioclimatic variables, vegetation features, atmospheric nitrogen and phosphorus deposition, and soil properties. Taking a light use efficiency (LUE) model as an example, we evaluated SPIE at 196 FLUXNET eddy covariance flux sites. The LUE model accounts for the effects of air temperature, vapor pressure deficit, soil water availability (SW), light saturation, diffuse radiation fraction, and CO2 on GPP using five independent sensitivity functions. SW was represented using the water availability index and can be optimized based on evapotranspiration. We therefore optimized the NN by minimizing a model loss consisting of GPP errors, evapotranspiration errors, and constraints on the sensitivity functions. Furthermore, we compared SPIE with 11 typical parameter extrapolation approaches, including PFT- and climate-specific parameterizations, global and PFT-based parameter optimization, site similarity, and regression methods, using the Nash-Sutcliffe model efficiency (NSE), coefficient of determination (R2), and normalized root mean squared error (NRMSE).
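
A composite loss of the kind described (GPP error, evapotranspiration error, and a constraint on the sensitivity functions) can be sketched in numpy. The error norms, weights, and constraint form below are illustrative assumptions, not the authors' formulation:

```python
import numpy as np

def spie_like_loss(gpp_pred, gpp_obs, et_pred, et_obs, sens, w=(1.0, 1.0, 1.0)):
    """Hypothetical composite loss: normalized GPP and ET errors plus a
    penalty keeping the sensitivity functions inside [0, 1]."""
    nrmse = lambda p, o: np.sqrt(np.mean((p - o) ** 2)) / (np.std(o) + 1e-12)
    penalty = np.mean(np.clip(sens - 1.0, 0.0, None) ** 2
                      + np.clip(-sens, 0.0, None) ** 2)
    return (w[0] * nrmse(gpp_pred, gpp_obs)
            + w[1] * nrmse(et_pred, et_obs)
            + w[2] * penalty)

rng = np.random.default_rng(0)
gpp_obs = rng.gamma(2.0, 2.0, 365)              # synthetic daily GPP
et_obs = rng.gamma(2.0, 1.0, 365)               # synthetic daily ET
sens = rng.uniform(0.0, 1.0, (5, 365))          # five sensitivity functions
loss = spie_like_loss(gpp_obs * 1.1, gpp_obs, et_obs * 0.9, et_obs, sens)
print(loss > 0)  # → True: biased predictions incur a positive loss
```

Because the NN maps ecosystem features to parameters while this loss ties predictions to flux observations, inversion and extrapolation are indeed learned simultaneously.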

The results of ten-fold cross-validation showed that SPIE had the best performance across various temporal and spatial scales and across assessment metrics. None of the parameter extrapolation approaches reached the performance of on-site calibrated parameters (NSE=0.95), but SPIE was the only approach showing a positive NSE (=0.68) in cross-validation across sites. Moreover, the site-level NSE, R2, and NRMSE of SPIE all significantly outperformed those of the per-biome and per-climate-type parameterizations. Parameter ranges were more strongly constrained by SPIE than by site calibrations.

Overall, SPIE is a robust parameter extrapolation approach that overcomes strong limitations observed in many standard model parameterization approaches. Our approach suggests that model parameterizations can be determined from observations of vegetation, climate, and soil properties, and it expands on customary clustering methods (e.g., PFT-specific parameterization). We argue that extending SPIE to other models overcomes current limits in parameterization and serves as an entry point to investigate the robustness and generalization of different models.

How to cite: Bao, S., Carvalhais, N., Alonso, L., Wang, S., Gensheimer, J., De, R., and Shi, J.: Towards Robust Parameterizations in Ecosystem-level Photosynthesis Models, EGU General Assembly 2023, Vienna, Austria, 23–28 Apr 2023, EGU23-5431, https://doi.org/10.5194/egusphere-egu23-5431, 2023.

11:55–12:05
|
EGU23-12889
|
On-site presentation
Robert A. Rohde and Zeke Hausfather

Berkeley Earth is premiering a new high resolution analysis of historical instrumental temperatures.

This builds on our existing work on climate reconstruction by adding a simple machine learning layer to our analysis.  This new approach extracts weather patterns from model, satellite, and reanalysis data, and then layers these weather patterns on top of instrumental observations and our existing interpolation methods to produce new high resolution historical temperature fields.  This has quadrupled our output resolution from the previous 1° x 1° lat-long to a new global 0.25° x 0.25° lat-long resolution.  However, this is not simply a downscaling effort.  Firstly, the use of weather patterns derived from physical models and observations increases the spatial realism of the reconstructed fields.  Secondly, observations from regions with high density measurement networks have been directly incorporated into the high resolution field, allowing dense observations to be more fully utilized.  

This new data product uses significantly more observational weather station data and produces higher resolution historical temperature fields than any comparable product, allowing for unprecedented insights into historical local and regional climate change.  In particular, the effect of geographic features such as mountains, coastlines, and ecosystem variations are resolved with a level of detail that was not previously possible.  At the same time, previously established techniques for bias corrections, noise reduction, and error analysis continued to be utilized.  The resulting global field initially spans 1850 to present and will be updated on an ongoing basis.  This project does not significantly change the global understanding of climate change, but helps to provide local detail that was often unresolved previously.  The initial data product focuses on monthly temperatures, though a proposal exists to also create a high resolution daily temperature data set using similar methods.

This talk will describe the construction of the new data set and its characteristics.  The techniques used in this project are accessible enough that they are likely to be useful for other types of instrumental analyses wishing to improve resolution or leverage basic information about weather patterns derived from models or other sources.

How to cite: Rohde, R. A. and Hausfather, Z.: New Berkeley Earth High Resolution Temperature Data Set, EGU General Assembly 2023, Vienna, Austria, 23–28 Apr 2023, EGU23-12889, https://doi.org/10.5194/egusphere-egu23-12889, 2023.

12:05–12:15
|
EGU23-849
|
ECS
|
On-site presentation
Linn Carlstedt, Lea Poropat, and Céline Heuzé

Understanding the forcing of regional sea level variability is crucial, as many people all over the world live along the coasts and are endangered by sea level rise. The addition of fresh water to the oceans from the melting of Earth's land ice, together with thermosteric changes, caused global mean sea level (GMSL) to rise at an accelerating rate during the twentieth century; according to the IPCC's latest report, the rate has now reached 3.7 mm per year. However, this change varies spatially, and the dynamics forcing sea level variability on regional to local scales are still less well known, making it hard for decision makers to mitigate and adapt with appropriate strategies.

Here we present a novel approach using machine learning (ML) to identify the dynamics and determine the most prominent drivers forcing coastal sea level variability. We use a recurrent neural network called Long Short-Term Memory (LSTM), which learns data in sequences and can thus store some memory of previous timesteps, a benefit when dealing with time series. To train the model we use hourly ERA5 10-m wind, mean sea level pressure (MSLP), sea surface temperature (SST), evaporation, and precipitation data for 2009-2017 in the North Sea region. To reduce the dimensionality of the data while preserving maximal information, we conduct principal component analysis (PCA) after removing the climatology, which is calculated as hourly means over the years. Depending on the explained variance of the PCs for each driver, 2-4 PCs are chosen and cross-correlated to eliminate collinearity, which could affect the model results. Before being used in the ML model, the final preprocessed data are normalized by min-max scaling to optimize the learning. The target data are hourly in-situ sea level observations from West-Terschelling in the Netherlands. Using in-situ observations rather than altimeter data improves predictions in coastal zones, as altimeter data tend to degrade along the coasts. The sea level time series is preprocessed by removing the tides and de-seasoned by subtracting the hourly means. To determine which drivers are most prominent for sea level variability at our location, we mute one driver at a time during the training of the network and evaluate the resulting improvement or deterioration of the predictions.
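
The mute-one-driver evaluation can be sketched with synthetic data and a linear model standing in for the LSTM. The driver names, PC counts, and coefficients below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)
n, names = 2000, ["u10", "v10", "mslp", "sst"]
drivers = {k: rng.normal(size=(n, 3)) for k in names}   # 3 PCs per driver
# synthetic sea level: zonal wind dominates, as found in the study region
sea_level = (1.5 * drivers["u10"][:, 0] + 0.6 * drivers["v10"][:, 0]
             + 0.3 * drivers["mslp"][:, 0] + rng.normal(0.0, 0.2, n))

def fit_rmse(active):
    """Refit a linear stand-in model using only the active drivers' PCs."""
    X = np.column_stack([drivers[k] for k in active])
    coef, *_ = np.linalg.lstsq(X, sea_level, rcond=None)
    return np.sqrt(np.mean((X @ coef - sea_level) ** 2))

full = fit_rmse(names)
for muted in names:
    deterioration = fit_rmse([k for k in names if k != muted]) - full
    print(muted, round(deterioration, 3))   # larger = more important driver
```

With the coefficients chosen above, muting the zonal wind degrades the fit most, mirroring the ranking reported in the abstract.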

Our results show that the zonal wind is the most prominent driver of sea level variability at our location, followed by the meridional wind and MSLP. While SST greatly affects the GMSL, it seems to have little to no effect on local sea level variability compared to the other drivers. This approach shows great potential, can easily be applied to any coastal zone, and is thus useful for a broad body of decision makers all over the world. Identifying the causes of local sea level variability will also enable better models for future predictions, which is of great importance and interest.

How to cite: Carlstedt, L., Poropat, L., and Heuzé, C.: Drivers of sea level variability using neural networks, EGU General Assembly 2023, Vienna, Austria, 23–28 Apr 2023, EGU23-849, https://doi.org/10.5194/egusphere-egu23-849, 2023.

12:15–12:25
|
EGU23-753
|
ECS
|
Highlight
|
On-site presentation
Lea Poropat, Céline Heuzé, and Heather Reese

In climate research we often want to focus on a specific region and the most prominent processes affecting it, but how exactly do we select the borders of that region? We also often need to use long-term in situ observations to represent a larger area, but which area exactly are they representative of? In ocean sciences we usually treat basins as separate regions or, even simpler, just select a rectangle of the ocean, but that does not always correspond to the real, physically relevant borders. As an alternative, we use an unsupervised classification model, a Gaussian Mixture Model (GMM), to separate the northwestern European seas into regions based on the sea level variability observed by altimetry satellites.

After performing a principal component (PC) analysis on 24 years of monthly sea level data, we use the stacked PC maps as input for the GMM. Because a GMM requires the number of classes to be selected a priori, we used the Bayesian Information Criterion to determine into how many regions our area should be split. Depending on the number of PCs used, the optimal number of classes was between 12 and 18, with more PCs typically allowing separation into more regions. Due to the complexity of the data and the dependence of the results on the randomly chosen initial weights, the classification can differ somewhat with every new run of the model, even with exactly the same data and parameters. To tackle this, instead of using one model we use an ensemble of models and determine which class each grid point belongs to by soft voting, i.e., each model provides a probability that the point belongs to a particular class, and the class with the maximal sum of probabilities wins. As a result, we obtain both the classification and the likelihood of each grid point belonging to its assigned class.
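
The soft-voting step can be sketched in a few lines of numpy. The probabilities below are random placeholders, and matching class labels across ensemble members (necessary in practice before summing, since GMM class indices are arbitrary) is assumed to have already been done:

```python
import numpy as np

# Soft voting over an ensemble of GMMs: each member returns, for every
# grid point, a probability per (label-matched) class; the class with
# the largest summed probability wins.
rng = np.random.default_rng(3)
n_members, n_points, n_classes = 10, 6, 4
probs = rng.dirichlet(np.ones(n_classes), size=(n_members, n_points))

summed = probs.sum(axis=0)                    # (n_points, n_classes)
labels = summed.argmax(axis=1)                # winning class per grid point
confidence = summed.max(axis=1) / n_members   # ensemble-mean class probability
print(labels.shape, confidence.round(2))
```

The `confidence` array is the second output mentioned in the abstract: a per-point measure of how certain the ensemble is about the assigned class.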

Despite not using the coordinates of the data points in the model at all, the obtained classes are clearly location dependent, with grid points belonging to the same class always being close to each other. While many classes are defined by bathymetry changes, e.g., the continental shelf break and slope, sometimes other factors come into play, such as for the split of the Norwegian coast into two classes or for the division in the Barents Sea, which is probably based on the circulation. The North Sea is also split into three distinct regions, possibly based on sea level changes caused by dominant wind patterns.

This method can be applied to almost any atmospheric or oceanic variable and used for larger or smaller areas. It is quick and practical, allowing us to delimit the area based on the information we cannot always clearly see from the data, which can facilitate better selection of the regions that need further research.

How to cite: Poropat, L., Heuzé, C., and Reese, H.: Finding regions of similar sea level variability with the help of a Gaussian Mixture Model, EGU General Assembly 2023, Vienna, Austria, 23–28 Apr 2023, EGU23-753, https://doi.org/10.5194/egusphere-egu23-753, 2023.

Lunch break
Chairpersons: Sebastian Sippel, Marlene Kretschmer
Extreme Events and Climate Hazards
14:00–14:10
|
EGU23-8615
|
Highlight
|
On-site presentation
Peter Watson
Multiple studies have now demonstrated that machine learning (ML) can give improved skill for simulating fairly typical weather events in climate simulations, for tasks such as downscaling to higher resolution and emulating and speeding up expensive model parameterisations. Many of these used ML methods with very high numbers of parameters, such as neural networks, which are the focus of the discussion here. Not much attention has been given to the performance of these methods for extreme event severities of relevance for many critical weather and climate prediction applications, with return periods of more than a few years. This leaves a lot of uncertainty about the usefulness of these methods, particularly for general purpose models that must perform reliably in extreme situations. ML models may be expected to struggle to predict extremes due to there usually being few samples of such events. 
 
This presentation will review the small number of studies that have examined the skill of machine learning methods in extreme weather situations. It will be shown using recent results that machine learning methods that perform reasonably for typical weather events can have very large errors in extreme situations, highlighting the necessity of testing the performance for these cases. Extrapolation to extremes is found to work well in some studies, however. 
 
It will be argued that more attention needs to be given to performance for extremes in work applying ML in climate science. Research gaps that seem particularly important are identified. These include investigating the behaviour of ML systems in events that are multiple standard deviations beyond observed records, which have occurred in the past, and evaluating performance of complex generative models in extreme events. Approaches to address these problems will be discussed.

How to cite: Watson, P.: Machine learning applications for weather and climate need greater focus on extremes, EGU General Assembly 2023, Vienna, Austria, 23–28 Apr 2023, EGU23-8615, https://doi.org/10.5194/egusphere-egu23-8615, 2023.

14:10–14:20
|
EGU23-12657
|
Highlight
|
On-site presentation
Karin Mora, Gunnar Brandt, Vitus Benson, Carsten Brockmann, Gustau Camps-Valls, Miguel-Ángel Fernández-Torres, Tonio Fincke, Norman Fomferra, Fabian Gans, Maria Gonzalez, Chaonan Ji, Guido Kraemer, Eva Sevillano Marco, David Montero, Markus Reichstein, Christian Requena-Mesa, Oscar José Pellicer Valero, Mélanie Weynants, Sebastian Wieneke, and Miguel D. Mahecha

Compound heat waves and drought events draw our particular attention as they become more frequent. Co-occurring extreme events often exacerbate impacts on ecosystems and can induce a cascade of detrimental consequences. However, the research to understand these events is still in its infancy. DeepExtremes is a project funded by the European Space Agency (https://rsc4earth.de/project/deepextremes/) aiming at using deep learning to gain insight into Earth surface under extreme climate conditions. Specifically, the goal is to forecast and explain extreme, multi-hazard, and compound events. To this end, the project leverages the existing Earth observation archive to help us better understand and represent different types of hazards and their effects on society and vegetation. The project implementation involves a multi-stage process consisting of 1) global event detection; 2) intelligent subsampling and creation of mini-data-cubes; 3) forecasting methods development, interpretation, and testing; and 4) cloud deployment and upscaling. The data products will be made available to the community following the reproducibility and FAIR data principles. By effectively combining Earth system science with explainable AI, the project contributes knowledge to advancing the sustainable management of consequences of extreme events. This presentation will show the progress made so far and specifically introduce how to participate in the challenges about spatio-temporal extreme event prediction in DeepExtremes.

How to cite: Mora, K., Brandt, G., Benson, V., Brockmann, C., Camps-Valls, G., Fernández-Torres, M.-Á., Fincke, T., Fomferra, N., Gans, F., Gonzalez, M., Ji, C., Kraemer, G., Marco, E. S., Montero, D., Reichstein, M., Requena-Mesa, C., Valero, O. J. P., Weynants, M., Wieneke, S., and Mahecha, M. D.: DeepExtremes: Explainable Earth Surface Forecasting Under Extreme Climate Conditions, EGU General Assembly 2023, Vienna, Austria, 23–28 Apr 2023, EGU23-12657, https://doi.org/10.5194/egusphere-egu23-12657, 2023.

14:20–14:30
|
EGU23-16443
|
ECS
|
On-site presentation
Zavud Baghirov, Basil Kraft, Martin Jung, Marco Körner, and Markus Reichstein

There is evidence for a strong coupling between the terrestrial carbon and water cycles and that these cycles should be studied as an interconnected system (Humphrey et al. 2018). One of the key methods to numerically represent the Earth system is process based modelling, which is, however, still subject to large uncertainties, e.g., due to wrong or incomplete process knowledge (Bonan and Doney 2018). Such models are often rigid and only marginally informed by Earth observations. This is where machine learning (ML) approaches can be advantageous, due to their ability to learn from data in a flexible way. These methods have their own shortcomings, such as their “black-box” nature and lack of physical consistency.

Recently, it has been suggested by Reichstein et al. (2019) to combine process knowledge with ML algorithms to model environmental processes. This so-called hybrid modelling approach has already been used to model different components of terrestrial water storage (TWS) in a global hydrological model (Kraft et al. 2022). This study follows up on that work with the objective of improving the parameterization of some processes (e.g., soil moisture) and coupling the model with the carbon cycle. The coupling could potentially reduce model uncertainties and help to better understand water-carbon interactions.

The proposed hybrid model of the coupled water and carbon cycles is forced with reanalysis data from ERA5, such as air temperature and net radiation, and CO2 concentration from CAMS. Water-carbon cycle processes are constrained using observational data products of the water and carbon cycles. The hybrid model uses a long short-term memory (LSTM) network, a member of the recurrent neural network family, at its core for processing the time-series Earth observation data. The LSTM produces a number of coefficients which are used as parameters in the conceptual model of the water and carbon cycles. Some of the key processes represented in the conceptual model are evapotranspiration, snow, soil moisture, runoff, groundwater, water use efficiency (WUE), ecosystem respiration, and net ecosystem exchange. The model partitions TWS into different components, and it can be used to assess the impact of different TWS components on the CO2 growth rate. Moreover, we can assess the learned system behaviour of water and carbon cycle interactions for different ecosystems.
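
This kind of hybrid parameterization can be caricatured in a few lines. The numpy sketch below is not the authors' model: it shows one step of a toy single-bucket water balance in which the evapotranspiration and runoff coefficients, which the LSTM would predict in the real model, are supplied as stand-in constants. All names, units, and constants are illustrative assumptions.

```python
import numpy as np

def water_balance_step(soil_water, precip, net_rad, alpha_et, beta_runoff,
                       capacity=100.0):
    """One daily step of a toy bucket model.

    alpha_et and beta_runoff in [0, 1] stand in for the coefficients the
    LSTM would produce in the hybrid model described above.
    """
    et = alpha_et * net_rad * (soil_water / capacity)          # supply-limited ET
    runoff = beta_runoff * max(soil_water + precip - capacity, 0.0)
    soil_water = np.clip(soil_water + precip - et - runoff, 0.0, capacity)
    return soil_water, et, runoff

# Example: a wet day on a half-full bucket with illustrative coefficients.
sw, et, q = water_balance_step(soil_water=50.0, precip=20.0, net_rad=5.0,
                               alpha_et=0.6, beta_runoff=0.3)
```

In the actual hybrid model the coefficients vary in time and space and are trained end-to-end against the observational constraints, which is what makes the approach data-adaptive while keeping a process-based backbone.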

References:

Bonan, Gordon B, and Scott C Doney. 2018. “Climate, Ecosystems, and Planetary Futures: The Challenge to Predict Life in Earth System Models.” Science 359 (6375): eaam8328.

Humphrey, Vincent, Jakob Zscheischler, Philippe Ciais, Lukas Gudmundsson, Stephen Sitch, and Sonia I Seneviratne. 2018. “Sensitivity of Atmospheric CO2 Growth Rate to Observed Changes in Terrestrial Water Storage.” Nature 560 (7720): 628–31.

Kraft, Basil, Martin Jung, Marco Körner, Sujan Koirala, and Markus Reichstein. 2022. “Towards Hybrid Modeling of the Global Hydrological Cycle.” Hydrology and Earth System Sciences 26 (6): 1579–1614.

Reichstein, Markus, Gustau Camps-Valls, Bjorn Stevens, Martin Jung, Joachim Denzler, Nuno Carvalhais, et al. 2019. “Deep Learning and Process Understanding for Data-Driven Earth System Science.” Nature 566 (7743): 195–204.

How to cite: Baghirov, Z., Kraft, B., Jung, M., Körner, M., and Reichstein, M.: Hybrid machine learning model of coupled carbon and water cycles, EGU General Assembly 2023, Vienna, Austria, 23–28 Apr 2023, EGU23-16443, https://doi.org/10.5194/egusphere-egu23-16443, 2023.

14:30–14:40
|
EGU23-984
|
ECS
|
On-site presentation
Marco Landt-Hayen, Willi Rath, Sebastian Wahl, Nils Niebaum, Martin Claus, and Peer Kröger

Machine learning (ML) and in particular artificial neural networks (ANNs) push state-of-the-art solutions for many hard problems, e.g., image classification, speech recognition, or time series forecasting. In the domain of climate science, ANNs have good prospects for identifying causally linked modes of climate variability, which is key to understanding the climate system and to improving the predictive skill of forecast systems. To attribute climate events in a data-driven way with ANNs, we need sufficient training data, which is often limited for real-world measurements. The data science community provides standard data sets for many applications. As a new data set, we introduce a collection of climate indices typically used to describe Earth system dynamics. This collection is consistent and comprehensive, as we derive the climate indices from 1,000-year control simulations of Earth System Models (ESMs). The data set is provided as an open-source framework that can be extended and customized to individual needs. It allows new ML methodologies to be developed and their results to be compared against existing methods and models as a benchmark. As examples, we use the data set to predict rainfall in the African Sahel region and the El Niño-Southern Oscillation with various ML models. We argue that this new data set allows a thorough exploration of techniques from the domain of explainable artificial intelligence, towards trustworthy models that are accepted by domain scientists. Our aim is to build a bridge between the data science community and researchers and practitioners from the domain of climate science to jointly improve our understanding of the climate system.
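To illustrate the kind of quantity such a collection contains, here is a hedged numpy sketch of deriving a Niño3.4-style index from gridded model SST: a latitude-weighted box mean with the mean seasonal cycle removed. The data are synthetic, and the box limits and variable layout are illustrative assumptions, not the definitions used in CICMoD.

```python
import numpy as np

rng = np.random.default_rng(0)
lat = np.linspace(-89.5, 89.5, 180)            # grid-cell centre latitudes
lon = np.linspace(0.5, 359.5, 360)
sst = rng.normal(20.0, 2.0, (24, 180, 360))    # 24 monthly fields (toy data)

def box_index(field, lat, lon, lat_bounds, lon_bounds):
    """Latitude-weighted spatial mean over a lat/lon box, per time step."""
    la = (lat >= lat_bounds[0]) & (lat <= lat_bounds[1])
    lo = (lon >= lon_bounds[0]) & (lon <= lon_bounds[1])
    w = np.cos(np.deg2rad(lat[la]))[:, None] * np.ones(lo.sum())
    sub = field[:, la][:, :, lo]
    return (sub * w).sum(axis=(1, 2)) / w.sum()

raw = box_index(sst, lat, lon, (-5.0, 5.0), (190.0, 240.0))  # Nino3.4-like box
clim = raw.reshape(2, 12).mean(axis=0)                       # mean seasonal cycle
anom = raw - np.tile(clim, 2)                                # index as anomaly series
```

With 1,000 years of control-run output, the same recipe yields long, stationary index series of the kind the benchmark is built from.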

How to cite: Landt-Hayen, M., Rath, W., Wahl, S., Niebaum, N., Claus, M., and Kröger, P.: Data-driven Attributing of Climate Events with Climate Index Collection based on Model Data (CICMoD), EGU General Assembly 2023, Vienna, Austria, 23–28 Apr 2023, EGU23-984, https://doi.org/10.5194/egusphere-egu23-984, 2023.

14:40–14:50
|
EGU23-12948
|
ECS
|
On-site presentation
Peter Miersch, Shijie Jiang, Oldrich Rakovec, and Jakob Zscheischler

River floods are among the most devastating natural hazards, causing thousands of deaths and billions of euros in damages every year. Floods can result from a combination of compounding drivers such as heavy precipitation, snowmelt, and high antecedent soil moisture. These drivers and the processes they govern vary widely both between catchments and between flood events within a catchment, making a causal understanding of the underlying hydrological processes difficult.

Modern causal inference methods, such as the PCMCI framework, are able to identify drivers from complex time series through causal discovery and to build causally aware statistical models. However, causal inference tailored to extreme events remains a challenge due to data length limitations. To overcome these limitations, here we bridge the gap between synthetic and real-world data using 1,000 years of simulated weather to drive a state-of-the-art hydrological model (the mesoscale Hydrological Model, mHM) over a wide range of European catchments. From the simulated time series, we extract high-runoff events, on which we evaluate the causal inference approach. We identify the minimum data necessary for obtaining robust causal models, assess metrics for model evaluation and comparison, and compare causal flood drivers across catchments. Ultimately, this work will help establish best practices in causal inference for flood research to identify meteorological and catchment-specific flood drivers in a changing climate.
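The PCMCI framework (implemented, e.g., in the tigramite package) builds on conditional-independence tests between lagged series. The numpy sketch below is a stripped-down illustration of one such building block, partial correlation of two series given a conditioning set, and not the full algorithm; the variable names are invented to echo the flood-driver setting.

```python
import numpy as np

def partial_corr(x, y, z):
    """Correlation of x and y after regressing out the columns of z."""
    z = np.column_stack([np.ones(len(x)), z])   # include an intercept
    rx = x - z @ np.linalg.lstsq(z, x, rcond=None)[0]
    ry = y - z @ np.linalg.lstsq(z, y, rcond=None)[0]
    return float(rx @ ry / np.sqrt((rx @ rx) * (ry @ ry)))

rng = np.random.default_rng(1)
n = 2000
z = rng.normal(size=n)               # common driver (think: soil moisture)
x = z + 0.1 * rng.normal(size=n)     # e.g. a precipitation-related proxy
y = z + 0.1 * rng.normal(size=n)     # e.g. a runoff-related proxy

r_marginal = partial_corr(x, y, np.empty((n, 0)))  # strong spurious link
r_given_z = partial_corr(x, y, z[:, None])         # vanishes given the driver
```

Conditioning on the true driver collapses the apparent x-y link, which is exactly the mechanism causal discovery uses to prune spurious connections; the challenge noted above is that extreme-event subsamples make such tests data-hungry.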

How to cite: Miersch, P., Jiang, S., Rakovec, O., and Zscheischler, J.: Identifying drivers of river floods using causal inference, EGU General Assembly 2023, Vienna, Austria, 23–28 Apr 2023, EGU23-12948, https://doi.org/10.5194/egusphere-egu23-12948, 2023.

14:50–15:00
|
EGU23-239
|
ECS
|
On-site presentation
Roberto Ingrosso and Mathieu Boudreault

The future evolution of tropical cyclones (TCs) in a warming world is an important issue, considering their potential socio-economic impacts on the areas hit by these phenomena. Previous studies provide robust responses about the future increase in intensity and in the global proportion of major TCs (Category 4-5). On the other hand, high uncertainty is associated with a projected future decrease in global TC frequency and with potential changes in TC tracks and translation speed.

Risk management and regulatory actions require a more robust quantification of how climate change affects TC dynamics. A probabilistic hybrid TC model based upon statistical and climate models, physically coherent with TC dynamics, is being built to investigate the potential impacts of climate change. Here, we provide preliminary results, in terms of present-climate reconstruction (1980-2021) and future projections (2022-2060) of cyclogenesis locations and TC tracks, based on different statistical models, such as logistic regression, multiple linear regression, and random forests. Physical predictors associated with TC formation and motion, produced by reanalysis (ERA5) and the Community Earth System Model (CESM) ensemble, are considered in this study.
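A minimal sketch of the statistical core of such a model: logistic regression mapping standardized environmental predictors (the data here are synthetic stand-ins, not ERA5 or CESM fields) to a cyclogenesis probability, fitted by plain gradient descent in numpy.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 5000
X = rng.normal(size=(n, 2))                    # two standardized predictors (toy)
true_w, true_b = np.array([1.5, -1.0]), -0.5
p_true = 1.0 / (1.0 + np.exp(-(X @ true_w + true_b)))
y = (rng.random(n) < p_true).astype(float)     # genesis yes/no labels

# Fit by batch gradient descent on the logistic log-loss.
w, b = np.zeros(2), 0.0
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
    w -= 0.1 * X.T @ (p - y) / n
    b -= 0.1 * float(np.mean(p - y))
```

The fitted weights recover the generating coefficients, which is the sense in which such a regression is "physically coherent": the sign and size of each weight can be checked against the known influence of each predictor on genesis.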

How to cite: Ingrosso, R. and Boudreault, M.: Toward a hybrid tropical cyclone global model, EGU General Assembly 2023, Vienna, Austria, 23–28 Apr 2023, EGU23-239, https://doi.org/10.5194/egusphere-egu23-239, 2023.

15:00–15:10
|
EGU23-16449
|
On-site presentation
Stefano Materia, Martin Jung, Markus G. Donat, and Carlos Gomez-Gonzalez

Seasonal Forecasts are critical tools for early-warning decision support systems that can help reduce the risks associated with hot or cold weather and other events that can strongly affect a multitude of socio-economic sectors. Recent advances in both statistical approaches and numerical modeling have improved the skill of Seasonal Forecasts. However, especially in mid-latitudes, they are still affected by large uncertainties that can limit their usefulness.

The MSCA-H2020 project ARTIST aims to improve our knowledge of climate predictability at the seasonal time scale, focusing on the role of unexplored drivers, and ultimately to enhance the performance of current prediction systems. This effort is meant to reduce uncertainties and make forecasts efficiently usable by regional meteorological services and private bodies. This study focuses on the seasonal prediction of heat extremes in Europe, and here we present a first attempt to predict heat wave accumulated activity across different target seasons. An empirical seasonal forecast is designed based on machine learning techniques. A feature selection approach is used to detect the best subset of predictors among a variety of candidates, and the relative importance of each predictor is then assessed for different European regions and the four main seasons.
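One common form of such a feature selection approach (a sketch, not necessarily the ARTIST implementation) is a greedy forward loop: at each step, add the candidate predictor that most improves a simple least-squares fit of the target, scored here by in-sample R². Data and predictor names are synthetic.

```python
import numpy as np

def r2_score(X, y):
    """R^2 of an intercept-plus-linear least-squares fit."""
    A = np.column_stack([np.ones(len(y)), X])
    resid = y - A @ np.linalg.lstsq(A, y, rcond=None)[0]
    return 1.0 - resid @ resid / ((y - y.mean()) @ (y - y.mean()))

def forward_select(X, y, n_keep):
    """Greedily pick the n_keep columns of X that best explain y."""
    chosen = []
    for _ in range(n_keep):
        rest = [j for j in range(X.shape[1]) if j not in chosen]
        best = max(rest, key=lambda j: r2_score(X[:, chosen + [j]], y))
        chosen.append(best)
    return chosen

rng = np.random.default_rng(3)
X = rng.normal(size=(500, 6))                            # 6 candidate predictors
y = 2 * X[:, 1] + X[:, 4] + 0.5 * rng.normal(size=500)   # only two matter

selected = forward_select(X, y, n_keep=2)
```

In practice the score would be cross-validated skill rather than in-sample R², to avoid the selection itself overfitting, which matters for the teleconnection interpretation discussed below.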

Results show that many observed teleconnections are captured by the data-driven approach, while a few features that appear linked to the heat wave propensity of a season call for a deeper understanding of the underpinning physical processes.

How to cite: Materia, S., Jung, M., Donat, M. G., and Gomez-Gonzalez, C.: Data-driven seasonal forecasts of European heat waves, EGU General Assembly 2023, Vienna, Austria, 23–28 Apr 2023, EGU23-16449, https://doi.org/10.5194/egusphere-egu23-16449, 2023.

15:10–15:20
|
EGU23-14493
|
ECS
|
On-site presentation
Alessandro Lovo, Corentin Herbert, and Freddy Bouchet
Understanding and predicting extreme events is one of the major challenges for the study of climate change impacts, risk assessment, adaptation, and the protection of living beings. Extreme heatwaves are, and will likely remain, among the deadliest weather events. They also increase strain on water resources, food security and energy supply. Developing the ability to forecast their probability of occurrence a few days, weeks, or even months in advance would greatly reduce our vulnerability to these events. Beyond the practical benefits of forecasting heat waves, building interpretable statistical models for extreme events is also highly beneficial from a fundamental point of view. Indeed, such models enable proper studies of the processes underlying extreme events such as heat waves, improve dataset or model validation, and contribute to attribution studies. Machine learning provides tools to reach both of these goals.
We will first demonstrate that deep neural networks can predict the probability of occurrence of long-lasting 14-day heatwaves over France, up to 15 days ahead of time for fast dynamical drivers (the 500 hPa geopotential height field), and at much longer lead times for slow physical drivers (soil moisture). These results represent remarkable forecasting skill. However, such machine learning models tend to be very complex and are often treated as black boxes. This limits our ability to use them for investigating the dynamics of extreme heat waves.
To gain physical understanding, we have then designed a network architecture which is intrinsically interpretable. The main idea of this architecture is that the network first computes an optimal index, which is an optimal projection of the physical fields in a low-dimensional space. In a second step, it uses a fully non-linear representation of the probability of occurrence of the event as a function of the optimal index. This optimal index can be visualized and compared with classical heuristic understanding of the physical process, for instance in terms of geopotential height and soil moisture. This fully interpretable network is slightly less efficient than the off-the-shelf deep neural network. We fully quantify the performance loss incurred when requiring interpretability and make the connection with the mathematical notion of committor functions.
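The two-stage architecture described above can be caricatured in a few lines of numpy: a linear projection of the flattened physical fields onto a scalar "optimal index", followed by a nonlinear map from index to occurrence probability. The weights here are random placeholders (in the real model both stages are learned jointly), and the sigmoid merely stands in for the fully nonlinear second stage.

```python
import numpy as np

def sigmoid(s):
    return 1.0 / (1.0 + np.exp(-s))

def two_stage_model(fields, projection):
    """Stage 1: project fields to a scalar index (interpretable, can be
    mapped back onto the grid); stage 2: nonlinear index-to-probability map."""
    index = fields @ projection
    return index, sigmoid(index)

rng = np.random.default_rng(4)
fields = rng.normal(size=(8, 100))         # toy flattened geopotential/soil-moisture maps
projection = rng.normal(size=100) / 10.0   # placeholder for the learned projection

index, p = two_stage_model(fields, projection)
```

Because the projection is a single vector over the input grid, it can be plotted as a map and compared with heuristic precursors of heat waves, which is the source of the interpretability, while the probability p plays the role of a committor-function estimate.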
This new machine learning tool opens the way for understanding optimal predictors of weather and climate extremes. This has potential for the study of slow drivers, and the effect of climate change on the drivers of extreme events.

How to cite: Lovo, A., Herbert, C., and Bouchet, F.: Interpretable probabilistic forecast of extreme heat waves, EGU General Assembly 2023, Vienna, Austria, 23–28 Apr 2023, EGU23-14493, https://doi.org/10.5194/egusphere-egu23-14493, 2023.

15:20–15:30
|
EGU23-9810
|
ECS
|
On-site presentation
Rackhun Son, Nuno Carvalhais, Lazaro Silva, Christian Requena-Mesa, Ulrich Weber, Veronika Gayler, Tobias Stacke, Reiner Schnur, Julia Nabel, Alexander Winkler, and Sönke Zaehle

Fire is a ubiquitous process within the Earth system that has significant impacts on terrestrial ecosystems. Process-based fire models quantify fire disturbance effects in stand-alone dynamic global vegetation models (DGVMs) and within coupled Earth system models (ESMs), and their advances have incorporated both descriptions of natural processes and anthropogenic drivers. However, these models still show limited skill in modeling and predicting fire at the global scale, mostly due to the stochastic nature of fire, but also due to the limits of the empirical parameterizations in process-based models. As an alternative, statistical approaches have shown the advantages of machine learning in providing robust diagnostics of fire damage, though with limited value for process-based modeling frameworks. Here, we develop a deep-learning-based fire model (DL-fire) to estimate the gridded burned area fraction at the global scale and couple it within JSBACH4, the land surface model used in the ICON ESM. We compare the resulting hybrid model integrating DL-fire into JSBACH4 (JDL-fire) against the standard fire model within JSBACH4 and the stand-alone DL-fire results. The stand-alone DL-fire model forced with observations performs well in simulating the global burnt fraction, with a monthly correlation (Rm) with the Global Fire Emissions Database (GFED4) of 0.78 during the training period (2004-2010) and 0.80 during the validation period (2011-2015) at the global scale. The performance remains nearly the same when evaluating the hybrid modeling approach JDL-fire (Rm = 0.76 and 0.86 in the training and evaluation periods, respectively). This far outperforms the standard fire model currently used in JSBACH4 (Rm = -0.16 and 0.22 in the training and evaluation periods, respectively). We further evaluate the modeling results across specific fire regions and apply layer-wise relevance propagation (LRP) to quantify the importance of each predictor.
Overall, land properties, such as fuel amount and soil water content, stand out as the major factors determining burnt fraction in DL-fire, paralleled by meteorological conditions over tropical and high-latitude regions. Our study demonstrates the potential of hybrid modeling in advancing the predictability of Earth system processes by integrating statistical learning approaches into physics-based dynamical systems.

How to cite: Son, R., Carvalhais, N., Silva, L., Requena-Mesa, C., Weber, U., Gayler, V., Stacke, T., Schnur, R., Nabel, J., Winkler, A., and Zaehle, S.: Integration of a deep-learning-based fire model into a global land surface model, EGU General Assembly 2023, Vienna, Austria, 23–28 Apr 2023, EGU23-9810, https://doi.org/10.5194/egusphere-egu23-9810, 2023.

15:30–15:40
|
EGU23-11238
|
ECS
|
Highlight
|
Virtual presentation
Jatan Buch, A. Park Williams, and Pierre Gentine

One of the main challenges for forecasting fire activity is the tradeoff between accuracy at finer spatial scales relevant to local decision making and predictability over seasonal (next 2-4 months) and subseasonal-to-seasonal (next 2 weeks to 2 months) timescales. To achieve predictability at long lead times and high spatial resolution, several analyses in the literature have constructed statistical models of fire activity using only antecedent climate predictors. However, in this talk, I will present preliminary seasonal forecasts of wildfire frequency and burned area for the western United States using SMLFire1.0, a stochastic machine learning (SML) fire model, that relies on both observed antecedent climate and vegetation predictors and seasonal forecasts of fire month climate. In particular, I will discuss results obtained by forcing the SMLFire1.0 model with seasonal forecasts from: a) downscaled and bias-corrected North American Multi-Model Ensemble (NMME) outputs, and b) skill-weighted climate analogs constructed using an autoregressive ML model. I will also comment upon the relative contribution of uncertainties, from climate forecasts and fire model simulations respectively, in projections of wildfire frequency and burned area across several spatial scales and lead times. 

How to cite: Buch, J., Williams, A. P., and Gentine, P.: Seasonal forecasts of wildfire frequency and burned area in the western United States using a stochastic machine learning fire model, EGU General Assembly 2023, Vienna, Austria, 23–28 Apr 2023, EGU23-11238, https://doi.org/10.5194/egusphere-egu23-11238, 2023.

Posters on site: Fri, 28 Apr, 16:15–18:00 | Hall X5

Chairpersons: Marlene Kretschmer, Sebastian Sippel
X5.225
|
EGU23-8661
Robert von Tils and Sven Wiemers

Microscale RANS (Reynolds-Averaged Navier-Stokes) models are able to simulate the urban climate of entire large cities at a high spatial resolution of up to 5 m horizontally. They do this using data from geographic information systems (GIS) that must be specially processed to provide the models with information about terrain, buildings, land use, and resolved vegetation. If high-performance computers, for example from research institutions, are not available for the simulations or are beyond the financial scope, the calculation on commercially available servers can take several weeks. The calculation of a reference initial state for a city is often followed by questions regarding adaptation measures due to climate change or the influence of smaller and larger future building developments on the urban climate. These changes locally alter the urban climate but are also influenced by the urban climate itself.

In order to save computational time and to provide a fast initial quantitative assessment, we trained a neural network that predicts the simulation results of a RANS model (for example: air temperature at night and during the day, wind speed, cold-air flow) and implemented this network in a GIS. The tool calculates the impact of development projects on the urban climate in a fraction of the time required by a RANS simulation and comes close to the RANS model in terms of accuracy. It can also be used by people without in-depth knowledge of urban climate modeling and is therefore particularly suitable for use, for example, in specialized offices of administrative departments or by project developers.

How to cite: von Tils, R. and Wiemers, S.: An urban climate neural network screening tool, EGU General Assembly 2023, Vienna, Austria, 23–28 Apr 2023, EGU23-8661, https://doi.org/10.5194/egusphere-egu23-8661, 2023.

X5.226
|
EGU23-1855
|
ECS
Venkatesh Budamala, Abhinav Wadhwa, and Rajarshi Das Bhowmik

Unprecedented flash floods (FF) in urban regions are increasing due to heavier rainfall intensity and magnitude resulting from human-induced climate and land-use changes. The changes in weather patterns and various anthropogenic activities increase the complexity of modelling FF at different spatiotemporal scales, which indicates the importance of multi-resolution forcing information. Consequently, developing new methods for processing coarser-resolution spatio-temporal datasets is essential for the efficient modelling of FF. While a wide range of methods is available for spatial and temporal downscaling of climate data, the multi-temporal downscaling strategy has not been investigated for streamflow at ungauged stations. The current study proposes a multi-temporal downscaling (MTD) methodology for gauged and ungauged stations using adaptive emulator modelling concepts for daily to sub-daily streamflows. The proposed MTD framework for ungauged stations comprises a hybrid framework with conceptual and machine-learning-based approaches to analyze the catchment behavior and downscale the model outputs from daily to sub-daily scales. The study area, the Peachtree Creek watershed (USA), frequently experiences flash floods and was hence selected to validate the proposed framework. Further, the study addresses the critical issues of model development, seasonality, and diurnal variation of MTD data. The study obtained MTD data with minimal uncertainty in capturing the hydrological signatures and nearly 95% accuracy in predicting the flow attributes over ungauged stations. The proposed framework can be highly useful for short- and long-range planning, management, and mitigation measures where the absence of fine-resolution data prohibits flash flood modeling.

How to cite: Budamala, V., Wadhwa, A., and Bhowmik, R. D.: Multi-Temporal Downscaling of Streamflow for Ungauged Stations/ Sub-Basins from Daily to Sub-Daily Interval Using Hybrid Framework – A Case Study on Flash Flood Watershed, EGU General Assembly 2023, Vienna, Austria, 23–28 Apr 2023, EGU23-1855, https://doi.org/10.5194/egusphere-egu23-1855, 2023.

X5.227
|
EGU23-2289
|
ECS
|
Meryem Bouchahmoud, Tommi Bergman, and Christina Williamson

Aerosols in the climate system have a direct link to the Earth's energy balance. Aerosols interact directly with solar radiation through scattering and absorption, and indirectly by changing cloud properties. The effect aerosols have on climate is one of the major causes of radiative forcing (RF) uncertainty in global climate model simulations. Thus, reducing aerosol RF uncertainty is key to improving climate prediction. The objective of this work is to understand the magnitude and causes of aerosol uncertainty in the chemical transport model TM5.

Perturbed Parameter Ensembles (PPEs) are a set of model runs created by perturbing an ensemble of parameters. Parameters are model inputs; in this study we focus on parameters describing aerosol emissions, properties, and processes, such as dry deposition, aging rate, and aerosol microphysics. The PPE varies these parameters over their uncertainty ranges all at once to study their combined effect on TM5.

Varying these parameters across their value ranges is reflected in the TM5 outputs. The TM5 output parameters we use in our sensitivity study are the cloud droplet number concentration and the ambient aerosol absorption optical thickness at 550 nm.

Here we discuss the design of the PPE and the one-at-a-time sensitivity studies used in this process. The PPE samples the parameter space in a way that enables us to use emulation. Emulation is a machine learning technique that uses a statistical surrogate model to replace the chemical transport model. The aim is to provide output data with denser sampling throughout the parameter space. We will be using a Gaussian process emulator, which has been shown to be an efficient technique for quantifying parameter sensitivity in complex global atmospheric models.
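The emulation idea can be sketched in a few lines of numpy: fit an RBF-kernel Gaussian process to a handful of (parameter, model-output) design points and predict densely across the parameter space. This is the generic technique on a 1-D toy problem, not the TM5/PPE configuration; the kernel length scale and noise level are illustrative.

```python
import numpy as np

def rbf(a, b, length=0.1):
    """Squared-exponential kernel between two 1-D point sets."""
    d2 = (a[:, None] - b[None, :]) ** 2
    return np.exp(-0.5 * d2 / length**2)

def gp_fit_predict(x_train, y_train, x_new, noise=1e-6):
    """Posterior mean of a zero-mean GP regression."""
    K = rbf(x_train, x_train) + noise * np.eye(len(x_train))
    alpha = np.linalg.solve(K, y_train)
    return rbf(x_new, x_train) @ alpha

x = np.linspace(0.0, 1.0, 9)              # sparse design points (the "PPE runs")
y = np.sin(2 * np.pi * x)                 # stand-in for an expensive model output
x_dense = np.linspace(0.0, 1.0, 201)      # dense sweep through parameter space
y_emulated = gp_fit_predict(x, y, x_dense)
```

Once fitted, the surrogate is essentially free to evaluate, which is what makes dense sampling of parameter sensitivities tractable; a full treatment would also use the GP posterior variance as an uncertainty estimate.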

We also describe plans to extend this work to emulate an aerosol PPE for EC-Earth. The PPE for EC-Earth will also contain cloud parameters that will vary over their uncertainty range together with the aerosol parameters to examine the influence of aerosol parametric uncertainty on RF.

How to cite: Bouchahmoud, M., Bergman, T., and Williamson, C.: Towards understanding the effect of parametric aerosol uncertainty on climate using a chemical transport model perturbed parameter ensemble., EGU General Assembly 2023, Vienna, Austria, 23–28 Apr 2023, EGU23-2289, https://doi.org/10.5194/egusphere-egu23-2289, 2023.

X5.228
|
EGU23-2986
Measuring atmospheric turbulence using Background Oriented Schlieren and machine learning
(withdrawn)
Zin Hu and Cheng Li
X5.229
|
EGU23-3404
|
ECS
|
Highlight
|
Sarah Brüning, Stefan Niebler, and Holger Tost

Clouds and their interdependent feedback mechanisms remain a source of uncertainty in climate science. Overcoming the related obstacles, especially in the context of a changing climate, makes the need for a reliable database more pressing than ever. While passive remote sensing sensors provide continuous observations of the cloud top, they lack vital information on the levels beneath. Here, active instruments can deliver valuable insights to fill this gap in knowledge.

This study sets out to combine the benefits of both instrument types. It aims (1) to reconstruct the vertical distribution of volumetric radar data along the cloud column and (2) to interpolate the resulting 3D cloud structure to the satellite's full disk by applying a contemporary deep learning approach. Input data were derived by automated spatio-temporal matching between high-resolution satellite channels and radar overpasses. These samples provide the physical predictors that were fed into the network to reconstruct the cloud vertical distribution on each of the radar's height levels across the whole domain. Data from the entire year 2017 were used to integrate seasonal variations into the modeling routine.

The results demonstrate not only the network's ability to reconstruct the cloud column along the radar track but also to interpolate coherent structures into a large-scale perspective. While the model performs equally well over land and water bodies, it is limited to daytime predictions. Finally, the generated data can be leveraged to build a comprehensive database of 3D cloud structures to be exploited in subsequent applications.

How to cite: Brüning, S., Niebler, S., and Tost, H.: Deep learning-based generation of 3D cloud structures from geostationary satellite data, EGU General Assembly 2023, Vienna, Austria, 23–28 Apr 2023, EGU23-3404, https://doi.org/10.5194/egusphere-egu23-3404, 2023.

X5.230
|
EGU23-3418
|
ECS
Guillaume Bertoli, Sebastian Schemm, Firat Ozdemir, Fernando Perez Cruz, and Eniko Szekely

Modelling the transfer of radiation through the atmosphere is a key component of weather and climate models. The operational radiation scheme in the Icosahedral Nonhydrostatic Weather and Climate Model (ICON) is ecRad. The radiation scheme ecRad is accurate but computationally expensive. It is operationally run in ICON on a grid coarser than the dynamical grid, and the time interval between two calls is significantly larger than the dynamical time step. This is known to reduce the quality of the climate prediction. A possible approach to accelerate the computation of the radiation fluxes is to use machine learning methods. Machine learning methods can significantly speed up the computation of radiation, but they may cause climate drifts if they do not respect essential physical laws. In this work, we study random forest and neural network emulations of ecRad, along with different strategies for comparing the stability of the emulations. For the neural network, we compare loss functions with an additional energy penalty term and observe that modifying the loss function is essential for accurately predicting the heating rates. The random forest emulator, which is significantly faster to train than the neural network, is used as a reference model that the neural network must outperform. The random forest emulator can become extremely accurate, but its memory requirements quickly become prohibitive. Various numerical experiments are performed to illustrate the properties of the machine learning emulators.
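One plausible reading of the "additional energy penalty term" (an illustrative sketch, not the authors' loss) is a combined objective: mean-squared error on the predicted fluxes plus a term penalizing inconsistency between the predicted column heating rates and the heating implied by the vertical divergence of the predicted fluxes. Constants, variable layout, and the g/cp scaling below are assumptions.

```python
import numpy as np

def physics_loss(pred_flux, true_flux, pred_heating, level_pressure,
                 weight=1.0, g_over_cp=9.81 / 1004.0):
    """MSE on fluxes plus an energy-consistency penalty.

    The heating rate implied by pred_flux is its vertical divergence in
    pressure, scaled by g/cp; the penalty is its mismatch with the
    directly predicted heating rates.
    """
    mse = np.mean((pred_flux - true_flux) ** 2)
    implied = -g_over_cp * np.diff(pred_flux) / np.diff(level_pressure)
    penalty = np.mean((pred_heating - implied) ** 2)
    return mse + weight * penalty

p = np.linspace(100e2, 1000e2, 11)           # 11 pressure levels (Pa), toy column
flux = np.linspace(300.0, 250.0, 11)         # toy net flux profile (W m-2)
heating = -9.81 / 1004.0 * np.diff(flux) / np.diff(p)  # perfectly consistent rates

loss_consistent = physics_loss(flux, flux, heating, p)        # zero by construction
loss_violating = physics_loss(flux, flux, heating + 1e-4, p)  # penalized
```

Gradients of such a loss push the network toward energetically self-consistent outputs, which is the kind of constraint that suppresses climate drift in the coupled system.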

How to cite: Bertoli, G., Schemm, S., Ozdemir, F., Perez Cruz, F., and Szekely, E.: Building a physics-constrained, fast and stable machine learning-based radiation emulator, EGU General Assembly 2023, Vienna, Austria, 23–28 Apr 2023, EGU23-3418, https://doi.org/10.5194/egusphere-egu23-3418, 2023.

X5.231
|
EGU23-3619
|
ECS
Daniel Galea, Julian Kunkel, and Bryan Lawrence

Tropical cyclones are high-impact weather events which have large human and economic effects, so it is important to be able to understand how their location, frequency and structure might change in a future climate.

Here, a lightweight deep learning model is presented which is intended for detecting the presence of tropical cyclones during the execution of numerical simulations for use in an online data reduction method. This will help to avoid saving vast amounts of data for analysis after the simulation is complete. With run-time detection, it might be possible to reduce the need for some of the high-frequency high-resolution output which would otherwise be required.

The model was trained on ERA-Interim reanalysis data from 1979 to 2017 and the training concentrated on delivering the highest possible recall rate (successful detection of cyclones) while rejecting enough data to make a difference in outputs.

When tested on data from the two subsequent years, the recall, or probability of detection, was 92%, and the precision, or success ratio, was 36%. For the desired data reduction application, if the target included all tropical cyclone events, even those which did not reach hurricane strength, the effective precision was 85%.
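The skill scores quoted above follow directly from confusion-matrix counts; a tiny helper makes the definitions explicit. The counts below are invented so that the two quoted rates fall out, and are not the study's actual event numbers.

```python
def detection_scores(tp, fp, fn):
    """Recall (probability of detection) and precision (success ratio)."""
    recall = tp / (tp + fn)
    precision = tp / (tp + fp)
    return recall, precision

# e.g. 92 of 100 true cyclones detected, with 164 false alarms:
recall, precision = detection_scores(tp=92, fp=164, fn=8)   # 0.92, ~0.36
```

For the data-reduction use case, false alarms that are still tropical-cyclone-like events get reclassified as hits, which is how the effective precision rises to 85% without changing the detector.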

The recall rate and the area under the precision-recall curve (AUC-PR) compare favourably with other methods of cyclone identification while using the smallest number of parameters for both training and inference.

Work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344. LLNL-ABS-843612

How to cite: Galea, D., Kunkel, J., and Lawrence, B.: TCDetect: A new method of Detecting the Presence of Tropical Cyclones using Deep Learning, EGU General Assembly 2023, Vienna, Austria, 23–28 Apr 2023, EGU23-3619, https://doi.org/10.5194/egusphere-egu23-3619, 2023.

X5.232
|
EGU23-3875
|
ECS
|
Paul Heubel, Lydia Keppler, and Tatiana Iliyna

The Southern Ocean acts as one of Earth's major carbon sinks, taking up anthropogenic carbon from the atmosphere. Earth System Models (ESMs) are used to project its future evolution. However, the ESMs in the Coupled Model Intercomparison Project Phase 6 (CMIP6) disagree on the biogeochemical representation of the Southern Ocean carbon cycle, both with respect to the phasing and the magnitude of the seasonal cycle of dissolved inorganic carbon (DIC), and they compare poorly with observations.

We develop a framework to investigate model biases in the historical runs of 10 CMIP6 ESMs, incorporating explainable artificial intelligence (xAI) methodologies. Using approaches ranging from a random forest feature relevance analysis to a nonlinear self-organizing map feed-forward neural network (SOM-FFN) framework, we relate five drivers to the seasonal cycle of DIC in the Southern Ocean in the different CMIP6 models. We investigate temperature, salinity, silicate, nitrate, and dissolved oxygen as potential drivers. This analysis allows us to determine the dominant statistical drivers of the seasonal cycle of DIC in the different models, and how they compare to the observations. Our findings inform future model development to better constrain the seasonal cycle of DIC.

How to cite: Heubel, P., Keppler, L., and Iliyna, T.: Explainable AI for oceanic carbon cycle analysis of CMIP6, EGU General Assembly 2023, Vienna, Austria, 23–28 Apr 2023, EGU23-3875, https://doi.org/10.5194/egusphere-egu23-3875, 2023.

X5.233
|
EGU23-4432
Climatic micro-zonation of Naples (Italy) through Landsat and machine learning
(withdrawn)
Daniela Flocco, Ester Piegari, and Nicola Scafetta
X5.234
|
EGU23-5487
|
ECS
|
Highlight
Lucas Kugler, Christopher Marrs, Eric Kosczor, and Matthias Forkel

Remote sensing has played a fundamental role in land cover mapping and change detection at least since the launch of the Landsat satellite program in 1972. In 1995, the Central Intelligence Agency of the United States of America released previously classified spy imagery taken from 1960 onwards with near-global coverage from the Keyhole programme, which includes the CORONA satellite mission. CORONA imagery is a treasure because it contains information about land cover 10 years before the beginning of civilian Earth observation and has a high spatial resolution of less than 2 m. However, this imagery is only panchromatic and usually not georeferenced, which has so far prevented large-scale application for land cover mapping or other geophysical and environmental applications.

Here, we aim to harvest the valuable information about past land cover from CORONA imagery for a state-wide mapping of past land cover changes between 1965 and 1978 by training, testing and validating various deep learning models.

To the best of our knowledge, this is the first work to analyse land cover from CORONA data on a large scale, dividing land cover into six classes based on the CORINE classification scheme. The particular focus of the work is to test the transferability of the deep learning approaches to unknown CORONA data.

To investigate the transferability, we selected 27 spatially and temporally distributed study areas (each 23 km²) in the Free State of Saxony (Germany) and created semantic masks to train and test 10 different U-shaped neural network architectures to extract land cover from CORONA data. As input, we use either the original panchromatic pixel values or different texture measures. From these input data, ten different training and test datasets were derived for cross-validation.

The training results show that a semantic segmentation of land cover from CORONA data with the tested architectures is possible. Strong differences in model performance (based on cross-validation and the intersection over union metric, IOU) were detected among the classes. Classes with many sample data achieve significantly better IOU values than underrepresented classes. In general, a U-shaped architecture with a Transformer encoder (Transformer U-Net) achieved the best results. The best segmentation performance (IOU 83.29%) was obtained for forests, followed by agriculture (74.21%). For artificial surfaces, a mean IOU of 68.83% was achieved, and water surfaces achieved a mean IOU of 66.49%. For the shrub vegetation and open area classes, only IOU values mostly below 25% were achieved. The deep learning models were successfully transferable in space (between test areas) and time (between CORONA imagery from different years), especially for classes with many sample data. Transferability was difficult for the mapping of water bodies and, despite the generally good model performance for most classes, was limited for imagery of very poor quality. Our approach enabled the state-wide mapping of land cover in Saxony between 1965 and 1978 with a spatial resolution of 2 m. We identify an increase in urban cover and a decrease in cropland cover.
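The per-class intersection-over-union metric reported above can be computed directly from segmentation masks; a minimal sketch on toy masks (not the authors' data):

```python
import numpy as np

def iou(pred, target, cls):
    """Intersection over union for one class in a segmentation mask."""
    p = (pred == cls)
    t = (target == cls)
    inter = np.logical_and(p, t).sum()
    union = np.logical_or(p, t).sum()
    return inter / union if union else float("nan")

# Toy 4x4 masks with two classes (0 = background, 1 = forest).
target = np.array([[1, 1, 0, 0],
                   [1, 1, 0, 0],
                   [0, 0, 0, 0],
                   [0, 0, 0, 0]])
pred = np.array([[1, 1, 1, 0],
                 [1, 0, 0, 0],
                 [0, 0, 0, 0],
                 [0, 0, 0, 0]])
print(iou(pred, target, 1))  # 3 overlapping pixels / 5 in the union = 0.6
```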

How to cite: Kugler, L., Marrs, C., Kosczor, E., and Forkel, M.: Harvesting historical spy imagery by evaluating deep learning models for state-wide mapping of land cover changes between 1965-1978, EGU General Assembly 2023, Vienna, Austria, 23–28 Apr 2023, EGU23-5487, https://doi.org/10.5194/egusphere-egu23-5487, 2023.

X5.235
|
EGU23-7862
Pascal Horton and Noelia Otero

The rapid development of deep learning approaches has conquered many fields, and precipitation prediction is one of them. Precipitation modeling remains a challenge for numerical weather prediction or climate models, and parameterization is required for low spatial resolution models, such as those used in climate change impact studies. Machine learning models have been shown to be capable of learning the relationships between other meteorological variables and precipitation. Such models are much less computationally intensive than explicit modeling of precipitation processes and are becoming more accurate than parametrization schemes.

Most existing applications focus either on precipitation extremes aggregated over a domain of interest or on average precipitation fields. Here, we are interested in spatial extremes and focus on the prediction of heavy precipitation events (>95th percentile) and extreme events (>99th percentile) over the European domain. Meteorological variables from ERA5 are used as input, and E-OBS data as target. Different architectures from the literature are compared in terms of predictive skill for average precipitation fields as well as for the occurrence of heavy or extreme precipitation events (threshold exceedance). U-Net architectures show higher skills than other variants of convolutional neural networks (CNN). We also show that a shallower U-Net architecture performs as well as the original network for this application, thus reducing the model complexity and, consequently, the computational resources. In addition, we analyze the number of inputs based on the importance of the predictors provided by a layer-wise relevance propagation procedure.
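The heavy and extreme event targets described above (exceedance of the 95th and 99th percentiles) can be derived in a few lines; a sketch with a synthetic precipitation series standing in for the E-OBS target:

```python
import numpy as np

rng = np.random.default_rng(1)
# Synthetic daily precipitation at one grid point (mm/day), rain-like skew,
# standing in for the E-OBS target used in the study.
precip = rng.gamma(shape=0.8, scale=4.0, size=10_000)

# Event definitions from the abstract: heavy (>95th) and extreme (>99th).
p95 = np.percentile(precip, 95)
p99 = np.percentile(precip, 99)
heavy = precip > p95     # binary targets for threshold-exceedance prediction
extreme = precip > p99

print(round(float(heavy.mean()), 2), round(float(extreme.mean()), 2))
```

By construction, roughly 5% and 1% of days exceed the respective thresholds, which is what makes class imbalance a central difficulty in this kind of prediction task.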

How to cite: Horton, P. and Otero, N.: Predicting spatial precipitation extremes with deep learning models. A comparison of existing model architectures., EGU General Assembly 2023, Vienna, Austria, 23–28 Apr 2023, EGU23-7862, https://doi.org/10.5194/egusphere-egu23-7862, 2023.

X5.236
|
EGU23-8496
|
ECS
Eshaan Agrawal and Christian Schroder de Witt

With no end to anthropogenic greenhouse gas emissions in sight, policymakers are increasingly debating artificial mechanisms to cool the Earth's climate. One such solution is stratospheric aerosol injection (SAI), a method of solar geoengineering where particles are injected into the stratosphere in order to reflect the sun’s rays and lower global temperatures. Past volcanic events suggest that SAI can lead to fast, substantial surface temperature reductions, and it is projected to be economically feasible. Research in simulation, however, suggests that SAI can lead to catastrophic side effects. It is also controversial among politicians and environmentalists because of the numerous challenges it poses geopolitically, environmentally, and for human health. Nevertheless, SAI is increasingly receiving attention from policymakers. In this research project, we use deep reinforcement learning to study if, and by how much, carefully engineered temporally and spatially varying injection strategies can mitigate catastrophic side effects of SAI. To do this, we are using the HadCM3 global circulation model to collect climate system data in response to artificial longitudinal aerosol injections. We then train a neural network emulator on this data, and use it to learn optimal injection strategies under a variety of objectives by alternating model updates with reinforcement learning. We release our dataset and code as a benchmark dataset to improve emulator creation for solar aerosol engineering modeling.

How to cite: Agrawal, E. and Schroder de Witt, C.: Utilizing AI emulators to Model Stratospheric Aerosol Injections and their Effect on Climate, EGU General Assembly 2023, Vienna, Austria, 23–28 Apr 2023, EGU23-8496, https://doi.org/10.5194/egusphere-egu23-8496, 2023.

X5.237
|
EGU23-8666
|
ECS
Rohith Teja Mittakola, Philippe Ciais, Jochen Schubert, David Makowski, Chuanlong Zhou, Hassan Bazzi, Taochun Sun, Zhu Liu, and Steven Davis

Natural gas is the primary fuel used in U.S. residences, especially during winter, when cold temperatures drive the heating demand. In this study, we use daily county-level gas consumption data to assess the spatial patterns in the relationship and sensitivity of gas consumption by U.S. households to outdoor temperature. Linear-plus-plateau functions are found to be the best fit for gas consumption and are applied to derive two key coefficients for each county: the heating temperature threshold (Tcrit) below which residential heating starts and the rate of increase in gas consumption when the outdoor temperature drops by one degree (Slope). We then use interpretable machine learning models to evaluate the key building properties and socioeconomic factors related to the spatial patterns of Tcrit and Slope based on a large database of individual household properties and population census data. We find that building age, employment rates, and household size are the main predictors of Tcrit, whereas the share of gas as a heating fuel and household income are the main predictors of Slope. The latter result suggests inequalities across the U.S. with respect to gas consumption, with wealthy people living in well-insulated houses associated with low Tcrit and Slope values. Finally, we estimate potential reductions in gas use in U.S. residences due to improvements in household insulation or a hypothetical behavioral change toward reduced consumption by adopting a 1°C lower Tcrit than the current value and a reduced slope. These two scenarios would result in 25% lower gas consumption at the national scale, avoiding 1.24 million MtCO2 of emissions per year. Most of these reductions occur in the Midwest and East Coast regions. The results from this study provide new quantitative information for targeting efforts to reduce household gas use and related CO2 emissions in the U.S.
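The linear-plus-plateau fit yielding Tcrit and Slope can be sketched as a simple least-squares problem (synthetic data; the parameter values here are illustrative, not the study's county-level estimates):

```python
import numpy as np
from scipy.optimize import curve_fit

def linear_plus_plateau(t, tcrit, slope, base):
    """Daily gas use: flat at `base` above tcrit, rising by `slope` per °C below."""
    return base + slope * np.maximum(tcrit - t, 0.0)

rng = np.random.default_rng(2)
temp = rng.uniform(-10, 30, size=500)  # synthetic daily outdoor temperatures (°C)
gas = linear_plus_plateau(temp, 18.0, 2.5, 1.0) + 0.1 * rng.normal(size=500)

# Fit recovers the two county-level coefficients of interest.
popt, _ = curve_fit(linear_plus_plateau, temp, gas, p0=(15.0, 1.0, 0.5))
tcrit, slope, base = popt
print(round(tcrit, 1), round(slope, 2))
```

Fitting this per county gives the (Tcrit, Slope) pairs that the interpretable ML models then relate to building and socioeconomic predictors.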

How to cite: Mittakola, R. T., Ciais, P., Schubert, J., Makowski, D., Zhou, C., Bazzi, H., Sun, T., Liu, Z., and Davis, S.: Drivers of Natural Gas Use in United States Buildings, EGU General Assembly 2023, Vienna, Austria, 23–28 Apr 2023, EGU23-8666, https://doi.org/10.5194/egusphere-egu23-8666, 2023.

X5.238
|
EGU23-9434
|
ECS
Elżbieta Lasota, Julius Polz, Christian Chwala, Lennart Schmidt, Peter Lünenschloß, David Schäfer, and Jan Bumberger

The rapidly growing number of low-cost environmental sensors and data from opportunistic sensors constantly advances the quality as well as the spatial and temporal resolution of weather and climate models. However, it also leads to the need for effective tools to ensure the quality of collected data.

Time series quality control (QC) for multiple irregularly distributed sensors is a challenging task, as it requires the simultaneous integration and analysis of observations from sparse neighboring sensors and consecutive time steps. Manual QC is very often time- and labour-intensive and requires expert knowledge, which introduces subjectivity and limits reproducibility. Therefore, automatic, accurate, and robust QC solutions are in high demand, among which machine learning techniques stand out.

In this study, we present a novel approach for the quality control of time series data from multiple irregularly distributed sensors using graph neural networks (GNNs). Although we applied our method to commercial microwave link attenuation data collected from a network in Germany between April and October 2021, our solution aims to be generic with respect to the number and type of sensors. The proposed approach uses an autoencoder architecture in which the GNN models the spatial relationships between the sensors, allowing contextual information to be incorporated into the quality control process.

While our model shows promising results in initial tests, further research is needed to fully evaluate its effectiveness and to demonstrate its potential in a wider range of environmental applications. Eventually, our solution will allow us to further foster the observational basis of our understanding of the natural environment.
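As a loose illustration of spatially informed QC, the idea of checking each sensor against its graph neighbours can be sketched with a simple k-nearest-neighbour consistency score (this is a hand-rolled baseline, not the authors' GNN autoencoder; all data are synthetic):

```python
import numpy as np

def neighbour_zscore(values, coords, k=3):
    """Score each sensor by how far its reading sits from its k nearest neighbours."""
    scores = np.empty(len(values))
    for i in range(len(values)):
        d = np.linalg.norm(coords - coords[i], axis=1)
        nn = np.argsort(d)[1:k + 1]              # skip the sensor itself
        mu, sd = values[nn].mean(), values[nn].std()
        scores[i] = abs(values[i] - mu) / (sd + 1e-6)
    return scores

rng = np.random.default_rng(3)
coords = rng.uniform(0, 10, size=(30, 2))                   # irregular sensor locations
values = np.sin(coords[:, 0]) + 0.05 * rng.normal(size=30)  # smooth field + noise
values[7] += 8.0                                            # inject one faulty reading
flags = neighbour_zscore(values, coords) > 5.0
print(np.flatnonzero(flags))
```

A GNN autoencoder generalizes this idea: instead of a fixed z-score, it learns the spatial dependence structure and flags observations with large reconstruction error.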

How to cite: Lasota, E., Polz, J., Chwala, C., Schmidt, L., Lünenschloß, P., Schäfer, D., and Bumberger, J.: Enhancing environmental sensor data quality control with graph neural networks, EGU General Assembly 2023, Vienna, Austria, 23–28 Apr 2023, EGU23-9434, https://doi.org/10.5194/egusphere-egu23-9434, 2023.

X5.239
|
EGU23-13250
|
ECS
|
Julien Lenhardt, Johannes Quaas, and Dino Sejdinovic

Clouds are classified into types, classes, or regimes. The World Meteorological Organization distinguishes stratus and cumulus clouds and three altitude layers. Cloud types exhibit very different radiative properties and interact in numerous ways with aerosol particles in the atmosphere. However, it has proven difficult to define cloud regimes objectively and from remote sensing data, hindering the understanding we have of the processes and adjustments involved.

Building on the method we previously developed, we combine synoptic observations and passive satellite remote-sensing retrievals to constitute a database of cloud types and cloud properties with which to train a cloud classification algorithm. The cloud type labels come from the global marine meteorological observations dataset (UK Met Office, 2006), which comprises near-global synoptic observations. This data record reports information about cloud type and other meteorological quantities at the surface. The cloud classification model is built on different cloud-top and cloud optical properties (Level 2 products MOD06/MYD06 from the MODIS sensor) extracted temporally close to the observation time and on a 128 km x 128 km grid around the synoptic observation location. To make full use of the large quantity of remote sensing data available and to investigate the variety of cloud settings, a convolutional variational auto-encoder (VAE) is applied as a dimensionality reduction tool in a first step. Furthermore, such a model architecture accounts for spatial relationships while describing non-linear patterns in the input data. The cloud classification task is subsequently performed drawing on the constructed latent representation of the VAE. Associating information from underneath and above the cloud enables building a robust model to classify cloud types. For training, we specify a study domain in the Atlantic Ocean around the equator and evaluate the method globally. Further experiments and evaluation are done on simulation data produced by the ICON model.

How to cite: Lenhardt, J., Quaas, J., and Sejdinovic, D.: From MODIS cloud properties to cloud types using semi-supervised learning, EGU General Assembly 2023, Vienna, Austria, 23–28 Apr 2023, EGU23-13250, https://doi.org/10.5194/egusphere-egu23-13250, 2023.

X5.240
|
EGU23-13622
|
ECS
Ayush Prasad and Swarnalee Mazumder

In recent years, both the intensity and extent of marine heatwaves have increased across the world. Anomalies in sea surface temperature have an effect on the health of marine ecosystems, which are crucial to the Earth's climate system. Marine Heatwaves' devastating impacts on aquatic life have been increasing steadily in recent years, harming aquatic ecosystems and causing a tremendous loss of marine life. Early warning systems and operational forecasting that can foresee such events can aid in designing effective and better mitigation techniques. Recent studies have shown that machine learning and deep learning-based approaches can be used for forecasting the occurrence of marine heatwaves up to a year in advance. However, these models are black box in nature and do not provide an understanding of the factors influencing MHWs. In this study, we used machine learning methods to forecast marine heatwaves. The developed models were tested across four historical Marine Heatwave events around the world. Explainable AI methods were then used to understand and analyze the relationships between the drivers of these events.
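Marine heatwave events of the kind forecast here are commonly defined by runs of days above a percentile threshold with a minimum duration; a minimal run-length detection sketch (the threshold and duration below are illustrative, not necessarily the authors' definition):

```python
import numpy as np

def detect_mhw(sst, threshold, min_days=5):
    """Return (start, end) index pairs of runs of at least min_days above threshold."""
    above = sst > threshold
    events, start = [], None
    for i, a in enumerate(above):
        if a and start is None:
            start = i
        elif not a and start is not None:
            if i - start >= min_days:
                events.append((start, i))
            start = None
    if start is not None and len(above) - start >= min_days:
        events.append((start, len(above)))
    return events

sst = np.full(100, 15.0)   # synthetic SST series (°C)
sst[20:28] += 3.0          # an 8-day warm spell -> qualifies as an event
sst[50:53] += 3.0          # a 3-day spell -> too short to count
print(detect_mhw(sst, threshold=16.0))  # [(20, 28)]
```

In practice the threshold is a day-of-year climatological percentile rather than a constant, but the event bookkeeping is the same.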

How to cite: Prasad, A. and Mazumder, S.: Towards explainable marine heatwaves forecasts, EGU General Assembly 2023, Vienna, Austria, 23–28 Apr 2023, EGU23-13622, https://doi.org/10.5194/egusphere-egu23-13622, 2023.

X5.241
|
EGU23-15540
|
ECS
|
Highlight
Adrian Höhl, Stella Ofori-Ampofo, Ivica Obadic, Miguel-Ángel Fernández-Torres, Ridvan Salih Kuzu, and Xiaoxiang Zhu

Climate variability and extremes are known to represent major causes for crop yield anomalies. They can lead to the reduction of crop productivity, which results in disruptions in food availability and nutritional quality, as well as in rising food prices. Extreme climates will become even more severe as global warming proceeds, challenging the achievement of food security. These extreme events, especially droughts and heat waves, are already evident in major food-production regions like the United States. Crops cultivated in this country such as corn and soybean are critical for both domestic use and international supply. Considering the sensitivity of crops to climate, here we present a dataset that couples remote sensing surface reflectances with climate variables (e.g. minimum and maximum temperature, precipitation, and vapor pressure) and extreme indicators. The dataset contains the crop yields of various commodities over the USA for nearly two decades. Given the advances and proven success of machine learning in numerous remote sensing tasks, our dataset constitutes a benchmark to advance the development of novel models for crop yield prediction, and to analyze the relationship between climate and crop yields for gaining scientific insights. Other potential use cases include extreme event detection and climate forecasting from satellite imagery. As a starting point, we evaluate the performance of several state-of-the-art machine and deep learning models to form a baseline for our benchmark dataset.

How to cite: Höhl, A., Ofori-Ampofo, S., Obadic, I., Fernández-Torres, M.-Á., Salih Kuzu, R., and Zhu, X.: USCC: A Benchmark Dataset for Crop Yield Prediction under Climate Extremes, EGU General Assembly 2023, Vienna, Austria, 23–28 Apr 2023, EGU23-15540, https://doi.org/10.5194/egusphere-egu23-15540, 2023.

X5.242
|
EGU23-15817
|
ECS
|
Gregor Ehrensperger, Tobias Hell, Georg Johann Mayr, and Thorsten Simon

Atmospheric conditions that are typical for lightning are commonly represented by proxies such as cloud top height, cloud ice flux, CAPE times precipitation, or the lightning potential index. While these proxies generally deliver reasonable results, they often need to be adapted for local conditions in order to perform well. This suggests that there is a need for more complex and holistic proxies. Recent research confirms that the use of machine learning (ML) approaches for describing lightning is promising.

In a previous study, a deep learning model was trained on single spatiotemporal (30 km x 30 km x 1 h) cells for the summer periods of the years 2010–2018 and showed good results for the unseen test year 2019 within Austria. We now improve this model by using multiple neighboring vertical atmospheric columns to also account for horizontal moisture advection. Furthermore, data from successive hours are used as input to enable the model to capture the temporal development of atmospheric conditions such as the build-up and breakdown of convection.

In this work we focus on the summer months June to August and use data from parts of Central Europe. This spatial domain is thought to be representative of Continental Europe since it covers mountainous as well as coastal regions. We take raw ERA5 parameters up to beyond the tropopause, enriched with a small amount of metadata such as the day of the year and the hour of the day, for training. The quality of the resulting parameterized model is then evaluated on Continental Europe to examine its generalization ability.

Using parts of Central Europe to train the model, we evaluate its ability to generalize on unseen parts of Continental Europe using EUCLID data. Having a model that generalizes well is a building block for a retrospective analysis back into years where the structured recording of accurate lightning observations in a unified way was not established yet.

How to cite: Ehrensperger, G., Hell, T., Mayr, G. J., and Simon, T.: Evaluating the generalization ability of a deep learning model trained to detect cloud-to-ground lightning on raw ERA5 data, EGU General Assembly 2023, Vienna, Austria, 23–28 Apr 2023, EGU23-15817, https://doi.org/10.5194/egusphere-egu23-15817, 2023.

X5.243
|
EGU23-17333
|
ECS
|
Maura Dewey, Hans Christen Hansson, and Annica M. L. Ekman

Here we develop a statistical model emulating the surface temperature response to changes in emissions of short-lived climate forcers as simulated by an Earth system model. Short-lived climate forcers (SLCFs) are chemical components in the atmosphere that interact with radiation and have both an immediate effect on local air quality, and regional and global effects on the climate in terms of changes in temperature and precipitation distributions. The short atmospheric residence times of SLCFs lead to high atmospheric concentrations in emission regions and a highly variable radiative forcing pattern. Regional Temperature Potentials (RTPs) are metrics which quantify the impact of emission changes in a given region on the temperature or forcing response of another, accounting for spatial inhomogeneities in both forcing and the temperature response, while being easy to compare across models and to use in integrated assessment studies or policy briefs. We have developed a Gaussian-process emulator using output from the Norwegian Earth System Model (NorESM) to predict the temperature responses to regional emission changes in SLCFs (specifically black carbon, organic carbon, sulfur dioxide, and methane) and use this model to calculate regional RTPs and study the sensitivity of surface temperature in a certain region, e.g. the Arctic, to anthropogenic emission changes in key policy regions. The main challenge in developing the emulator was creating the training data set such that we included maximal SLCF variability in a realistic and policy relevant range compared to future emission scenarios, while also getting a significant temperature response. We also had to account for the confounding influence of greenhouse gases (GHG), which may not follow the same future emission trajectories as SLCFs and can overwhelm the more subtle temperature response that comes from the direct and indirect effects of SLCF emissions.
The emulator can potentially provide policy makers with accurate and customizable predictions of the temperature response to proposed emission changes, helping to identify options with minimized climate impact.
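The core of such an emulator can be sketched with a Gaussian-process regressor on a toy emission-response pair (the variables, magnitudes, and kernel below are assumptions for illustration, not NorESM output):

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(4)

# Toy design: regional SO2 emission perturbation (x, arbitrary units) vs.
# Arctic temperature response (y, K), mimicking ESM training runs whose
# output is perturbed by internal variability. All values are invented.
X = rng.uniform(-1, 1, size=(40, 1))
y = -0.8 * X[:, 0] + 0.05 * rng.normal(size=40)  # cooling for added SO2

# WhiteKernel absorbs the internal-variability "noise" in the training runs.
gp = GaussianProcessRegressor(
    kernel=RBF(length_scale=1.0) + WhiteKernel(noise_level=0.01),
    normalize_y=True,
).fit(X, y)

mean, std = gp.predict(np.array([[0.5]]), return_std=True)
print(round(float(mean[0]), 2))
```

The predictive standard deviation returned alongside the mean is what makes GP emulators attractive here: the temperature response to an untried emission change comes with a calibrated uncertainty.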

How to cite: Dewey, M., Hansson, H. C., and Ekman, A. M. L.: Emulating the regional temperature responses (RTPs) of short-lived climate forcers, EGU General Assembly 2023, Vienna, Austria, 23–28 Apr 2023, EGU23-17333, https://doi.org/10.5194/egusphere-egu23-17333, 2023.

X5.244
|
EGU23-492
|
ECS
|
Lukas Brunner, Sebastian Sippel, and Aiko Voigt

Climate models are primary tools to investigate processes in the climate system, to project future changes, and to inform decision makers. The latest generation of models provides increasingly complex and realistic representations of the real climate system while there is also growing awareness that not all models produce equally plausible or independent simulations. Therefore, many recent studies have investigated how models differ from observed climate and how model dependence affects model output similarity, typically drawing on climatological averages over several decades.

Here, we show that temperature maps from individual days from climate models in the CMIP6 archive can be robustly identified as “observation” or “model”, even after removing the global mean. An important exception is a prototype high-resolution simulation from the ICON model family that cannot be so unambiguously classified into either category. These results highlight that persistent differences between observed and simulated climate emerge already at very short time scales, but very high resolution modelling efforts may be able to overcome some of these shortcomings.

We use two different machine learning classifiers: (1) logistic regression, which allows easy insight into the learned coefficients but has the limitation of being a linear method, and (2) a convolutional neural network (CNN), which represents the other end of the complexity spectrum, able to learn nonlinear spatial relations between features but lacking the easy interpretability of logistic regression. For CMIP6 both methods perform comparably, while the CNN is also able to recognize about 75% of samples from ICON as coming from a model, a case in which logistic regression has no skill.

Overall, we demonstrate that the use of machine learning classifiers, once trained, can overcome the need for multiple decades of data to investigate a given model. This opens up novel avenues to test model performance on much shorter times scales.
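The logistic-regression variant of this setup can be sketched on synthetic "maps" (the bias pattern, field size, and sample counts below are invented; only the remove-the-global-mean step mirrors the described method):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(5)

# Toy "daily temperature maps" (flattened 8x8 fields).
# Class 0 = "observation"; class 1 = "model", which carries a fixed,
# invented spatial bias pattern on top of the same variability.
n, npix = 400, 64
bias = 0.4 * rng.normal(size=npix)
X = np.vstack([rng.normal(size=(n, npix)),          # observations
               rng.normal(size=(n, npix)) + bias])  # model samples
y = np.r_[np.zeros(n), np.ones(n)]

# Remove each map's global mean, as in the abstract, so the classifier
# cannot rely on a simple mean offset.
X = X - X.mean(axis=1, keepdims=True)

clf = LogisticRegression(max_iter=2000).fit(X[::2], y[::2])  # even rows: train
acc = clf.score(X[1::2], y[1::2])                            # odd rows: test
print(acc > 0.8)
```

The learned coefficient map (`clf.coef_`) is the interpretability payoff: it shows which regions carry the persistent model-observation differences.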

How to cite: Brunner, L., Sippel, S., and Voigt, A.: Separation of climate models and observations based on daily output using two machine learning classifiers, EGU General Assembly 2023, Vienna, Austria, 23–28 Apr 2023, EGU23-492, https://doi.org/10.5194/egusphere-egu23-492, 2023.

X5.245
|
EGU23-1135
|
ECS
|
Vitus Besel, Milica Todorović, Theo Kurtén, Patrick Rinke, and Hanna Vehkamäki

As cloud and aerosol interactions remain large uncertainties in current climate models (IPCC), they are of special interest for atmospheric science. It is estimated that more than 70% of all cloud condensation nuclei originate from so-called New Particle Formation, the process of gaseous precursors clustering together in the atmosphere and subsequently growing into particles and aerosols. After initial clustering, this growth is driven strongly by the condensation of low-volatility organic compounds (LVOC), that is, molecules with saturation vapor pressures (pSat) below 10⁻⁶ mbar [1]. These originate from organic molecules emitted by vegetation that are subsequently rapidly oxidized in the air, so-called biogenic LVOC (BLVOC).

We have created a large dataset of BLVOC using high-throughput computing and Density Functional Theory (DFT), and use it to train machine learning models to predict the pSat of previously unseen BLVOC. Figure 1 illustrates some sample molecules from the data.

Figure 1: Sample molecules of small, medium, and large sizes. Figure 2: Histogram of the calculated saturation vapor pressures.

Initially, the chemical mechanism GECKO-A provides possible BLVOC molecules in the form of SMILES strings. In a first step, the COSMOconf program finds and optimizes the structures of possible conformers and provides their liquid-phase energies at the DFT level of theory. After an additional calculation of the gas-phase energies with Turbomole, COSMOtherm calculates thermodynamic properties, such as pSat, using the COSMO-RS [2] model. We combined all these computations into a highly parallelised high-throughput workflow to calculate 32,000 BLVOC, comprising over 7 million molecular conformers. A histogram of the calculated pSat is shown in Figure 2.

We use the calculated pSat to train a Gaussian Process Regression (GPR) machine learning model with the topological fingerprint as the descriptor for molecular structures. The GPR incorporates noise and outputs uncertainties for its pSat predictions. These uncertainties, together with data clustering techniques, allow molecules to be actively chosen for inclusion in the training data, so-called active learning. Further, we explore SLISEMAP [3] explainable AI methods to correlate machine learning predictions, the high-dimensional descriptors, and human-readable properties such as functional groups.
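An uncertainty-driven active-learning loop of the kind described can be sketched with a GPR (1-D toy descriptor and an invented target standing in for the expensive COSMO-RS label; the real workflow uses topological fingerprints):

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(6)

def true_log_psat(x):
    """Toy stand-in for the expensive COSMO-RS label (entirely invented)."""
    return np.sin(3 * x) - 2 * x

pool = np.linspace(0, 2, 200)[:, None]     # candidate "molecules" (1-D descriptor)
train_idx = list(rng.choice(200, size=5, replace=False))

for _ in range(10):                        # ten active-learning queries
    gp = GaussianProcessRegressor(kernel=RBF(0.5) + WhiteKernel(1e-4),
                                  normalize_y=True)
    gp.fit(pool[train_idx], true_log_psat(pool[train_idx, 0]))
    _, std = gp.predict(pool, return_std=True)
    std[train_idx] = -np.inf               # never re-query labelled points
    train_idx.append(int(np.argmax(std)))  # label the most uncertain candidate

print(len(train_idx))  # 5 seed points + 10 queries
```

Each query spends the expensive DFT/COSMO-RS budget where the model is least certain, which is the essential economy of active learning.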

[1] Metzger, A. et al. Evidence for the role of organics in aerosol particle formation under atmospheric conditions. Proc. Natl. Acad. Sci. 107, 6646–6651, 10.1073/pnas.0911330107 (2010)
[2] Klamt, A. & Schüürmann, G. COSMO: a new approach to dielectric screening in solvents with explicit expressions for the screening energy and its gradient. J. Chem. Soc., Perkin Trans. 2, 799–805, 10.1039/P29930000799 (1993).
[3] Björklund, A., Mäkelä, J. & Puolamäki, K. SLISEMAP: supervised dimensionality reduction through local explanations. Mach Learn (2022). https://doi.org/10.1007/s10994-022-06261-1

How to cite: Besel, V., Todorović, M., Kurtén, T., Rinke, P., and Vehkamäki, H.: Curation of High-level Molecular Atmospheric Data for Machine Learning Purposes, EGU General Assembly 2023, Vienna, Austria, 23–28 Apr 2023, EGU23-1135, https://doi.org/10.5194/egusphere-egu23-1135, 2023.

X5.246
|
EGU23-1244
Steven Hardiman, Adam Scaife, Annelize van Niekerk, Rachel Prudden, Aled Owen, Samantha Adams, Tom Dunstan, Nick Dunstone, and Melissa Seabrook

There is growing use of machine learning algorithms to replicate sub-grid parametrisation schemes in global climate models. Parametrisations rely on approximations, thus there is potential for machine learning to aid improvements. In this study, a neural network is used to mimic the behaviour of the non-orographic gravity wave scheme used in the Met Office climate model, important for stratospheric climate and variability. The neural network is found to require only two of the six inputs used by the parametrisation scheme, suggesting the potential for greater efficiency in this scheme. Use of a one-dimensional mechanistic model is advocated, allowing neural network hyperparameters to be trained based on emergent features of the coupled system with minimal computational cost, and providing a test bed prior to coupling to a climate model. A climate model simulation, using the neural network in place of the existing parametrisation scheme, is found to accurately generate a quasi-biennial oscillation of the tropical stratospheric winds, and correctly simulate the non-orographic gravity wave variability associated with the El Niño Southern Oscillation and stratospheric polar vortex variability. These internal sources of variability are essential for providing seasonal forecast skill, and the gravity wave forcing associated with them is reproduced without explicit training for these patterns.

How to cite: Hardiman, S., Scaife, A., van Niekerk, A., Prudden, R., Owen, A., Adams, S., Dunstan, T., Dunstone, N., and Seabrook, M.: Machine learning for non-orographic gravity waves in a climate model, EGU General Assembly 2023, Vienna, Austria, 23–28 Apr 2023, EGU23-1244, https://doi.org/10.5194/egusphere-egu23-1244, 2023.

X5.247
|
EGU23-2541
|
ECS
Pauline Bonnet, Fernando Iglesias-Suarez, Pierre Gentine, Marco Giorgetta, and Veronika Eyring

Global climate models use parameterizations to represent the effects of subgrid-scale processes on the resolved state. Parameterizations in the atmosphere component usually include radiation, convection, cloud microphysics, cloud cover, gravity wave drag, vertical turbulence in the boundary layer, and other processes. Parameterizations are semi-empirical functions that include a number of tunable parameters. Because these parameters are only loosely constrained by experimental data, a range of values is typically explored by evaluating model runs against observations and/or high-resolution runs. Fine-tuning a climate model is a complex inverse problem due to the number of tunable parameters and observed climate properties to fit. Moreover, parameterizations are sources of uncertainty for climate projections, so fine-tuning is a crucial step in model development.

Traditionally, tuning is a time-consuming task done manually, by iteratively updating the values of the parameters in order to investigate the parameter space with user-experience-driven choices. To overcome this limitation and search efficiently through the parameter space, one can implement automatic techniques. Typical steps in automatic tuning are: (i) constraining the scope of the study (model, simulation setup, parameters, metrics to fit and corresponding reference values); (ii) conducting a sensitivity analysis to reduce the parameter space and/or building an emulator for the climate model; and (iii) conducting a sophisticated grid search to define the optimal parameter set or its distribution (e.g., rejection sampling and history matching). The ICOsahedral Non-hydrostatic (ICON) model is a modelling framework for numerical weather prediction and climate projections. We implement an ML-based automatic tuning technique to tune a recent version of ICON-A with a spatial resolution typically used for climate projections. We evaluate the tuned ICON-A model against satellite observations using the Earth System Model Evaluation Tool (ESMValTool). Although automatic tuning techniques make it possible to reach optimal parameter values in fewer steps than manual tuning, they still require some experience-driven choices throughout the tuning process. Moreover, the performance of the tuned model is limited by the structural errors of the model, inherent to the mathematical description of the parameterizations included in the model.
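Step (iii) can be illustrated with the standard history-matching implausibility measure over an emulated metric (toy emulator output and reference value; the I < 3 cutoff is a common convention, not a value taken from this study):

```python
import numpy as np

def implausibility(emulated_mean, emulated_var, obs, obs_var):
    """History-matching implausibility: |E[f(x)] - z| / total standard deviation."""
    return np.abs(emulated_mean - obs) / np.sqrt(emulated_var + obs_var)

# Toy 1-D tuning parameter: the emulator predicts a single target metric.
theta = np.linspace(0, 1, 101)
em_mean = 2.0 * theta                 # emulated metric across parameter space
em_var = np.full_like(theta, 0.01)    # emulator uncertainty
obs, obs_var = 1.0, 0.01              # reference value and its uncertainty

I = implausibility(em_mean, em_var, obs, obs_var)
not_ruled_out = theta[I < 3.0]        # parameter values not ruled out yet
print(round(float(not_ruled_out.min()), 2), round(float(not_ruled_out.max()), 2))
```

Iterating this over waves of simulations, with the not-ruled-out space shrinking each time, is what distinguishes history matching from a one-shot grid search.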

How to cite: Bonnet, P., Iglesias-Suarez, F., Gentine, P., Giorgetta, M., and Eyring, V.: Machine learning based automated parameter tuning of ICON-A using satellite data, EGU General Assembly 2023, Vienna, Austria, 23–28 Apr 2023, EGU23-2541, https://doi.org/10.5194/egusphere-egu23-2541, 2023.

X5.248
|
EGU23-4296
|
ECS
Towards Super-Resolution SAR Tomography of Forest Areas using Deep Learning-Assisted Compressive Sensing
(withdrawn)
Cédric Léonard, Qi Zhang, Kun Qian, Yuanyuan Wang, and Xiao Xiang Zhu
X5.249
|
EGU23-5583
|
ECS
Identifying and Locating Volcanic Eruptions using Convolutional Neural Networks and Interpretability Techniques
Johannes Meuer, Claudia Timmreck, Shih-Wei Fang, and Christopher Kadow

Accurately interpreting past climate variability can be a challenging task, particularly when it comes to distinguishing between forced and unforced changes. In the case of large volcanic eruptions, ice core records are a very valuable tool but are often not sufficient to link reconstructed anomaly patterns to a volcanic eruption at all, or to its geographical location. In this study, we developed a convolutional neural network (CNN) that classifies whether a volcanic eruption occurred and its location (northern hemisphere extratropical, southern hemisphere extratropical, or tropics) with an accuracy of 92%.

To train the CNN, we used 100-member ensembles of the MPI-ESM-LR global climate model, generated using the Easy Volcanic Aerosol (EVA) model, which provides the radiative forcing of idealized volcanic eruptions of different strengths and locations. The model considered global sea surface temperature and precipitation patterns over a 3-month period, 12 months after the eruption.

In addition to demonstrating the high accuracy of the CNN, we also applied layer-wise relevance propagation (LRP) to the model to understand its decision-making process and identify the input data that influenced its predictions. Our study demonstrates the potential of using CNNs and interpretability techniques for identifying and locating past volcanic eruptions as well as improving the accuracy and understanding of volcanic climate signals.
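The core of layer-wise relevance propagation can be illustrated on a tiny dense ReLU network, a stand-in for the trained CNN; the weights, layer sizes, and the basic LRP-0 rule below are illustrative assumptions, not the study's setup. Relevance starts at the predicted class score and is redistributed layer by layer in proportion to each unit's contribution, so it is approximately conserved down to the input pixels.

```python
import numpy as np

rng = np.random.default_rng(1)

# Tiny dense ReLU network with random weights, standing in for a trained CNN.
W1 = rng.normal(size=(8, 4))
W2 = rng.normal(size=(3, 8))

x = rng.normal(size=4)                 # one input "image", flattened
a1 = np.maximum(0.0, W1 @ x)           # hidden ReLU activations
logits = W2 @ a1
k = int(np.argmax(logits))             # explain the predicted class

eps = 1e-9                             # numerical stabiliser

# LRP-0 rule: redistribute the class score in proportion to each
# unit's contribution z_j = w_kj * a_j, layer by layer.
z2 = W2[k] * a1                        # contributions of hidden units to logit k
R1 = z2 / (z2.sum() + eps) * logits[k]

z1 = W1 * x                            # (8, 4): contributions of inputs to each hidden unit
R0 = (z1 / (z1.sum(axis=1, keepdims=True) + eps) * R1[:, None]).sum(axis=0)

print("input relevances:", R0)         # sums approximately to logits[k]
```

The conservation property (input relevances summing to the explained class score) is what makes the resulting heatmaps interpretable as a decomposition of the prediction.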

How to cite: Meuer, J., Timmreck, C., Fang, S.-W., and Kadow, C.: Identifying and Locating Volcanic Eruptions using Convolutional Neural Networks and Interpretability Techniques, EGU General Assembly 2023, Vienna, Austria, 23–28 Apr 2023, EGU23-5583, https://doi.org/10.5194/egusphere-egu23-5583, 2023.

X5.250
|
EGU23-7457
|
ECS
Towards the effective autoencoder architecture to detect weather anomalies
Dusan Fister, Jorge Pérez-Aracil, César Peláez-Rodríguez, Marie Drouard, Pablo G. Zaninelli, David Barriopedro Cepero, Ricardo García-Herrera, and Sancho Salcedo-Sanz

Weather data can be organised as images: pixels represent coordinates, and pixel magnitudes represent the state of the observed variable at a given time. Observed variables, such as air temperature, mean sea level pressure, and wind components, may be collected into higher-dimensional images or even into a motion structure. Encoding the former as spatial and the latter as spatio-temporal structures allows them to be processed with deep learning methods, for instance autoencoders and autoencoder-like architectures. The objective of the original autoencoder is to reproduce the input image as closely as possible, effectively equalising input and output during training. This property can be exploited to calculate the deviations between (1) the true states (effectively the inputs), which are derived from nature, and (2) the expected states, which are derived by means of statistical learning. The calculated deviations can then be interpreted to identify extreme events, such as heatwaves, hot days or other rare events (so-called anomalies). Additionally, by modelling the deviations with statistical distributions, geographical areas with higher probabilities of anomalies can be deduced from the tails of the distribution. The capability to reproduce the original input images is hence crucial in order to avoid flagging arbitrary noise as anomalies. We run experiments to find an effective architecture that gives reasonable solutions, verify the benefits of implementing the variational autoencoder, assess the effect of selecting various statistical loss functions, and determine an effective architecture for the decoder part of the autoencoder.
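The deviation-based detection described above can be sketched with a linear autoencoder fitted in closed form via PCA, as a stand-in for the trained deep autoencoder; the synthetic data and the 3-sigma tail threshold are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "weather images": 500 flattened fields living near a
# 2-dimensional latent structure, plus observation noise.
n, d, k = 500, 16, 2
X = rng.normal(size=(n, k)) @ rng.normal(size=(k, d)) + 0.1 * rng.normal(size=(n, d))

# Linear autoencoder fitted in closed form (PCA); a trained neural
# autoencoder would replace this reconstruction step.
mu = X.mean(axis=0)
_, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
decoder = Vt[:k]                                  # top-k principal directions

def reconstruct(A):
    """Expected state: projection onto the learned subspace."""
    return (A - mu) @ decoder.T @ decoder + mu

# Deviation between true and expected states, per sample.
errors = np.linalg.norm(X - reconstruct(X), axis=1)

# Flag anomalies in the tail of the deviation distribution.
threshold = errors.mean() + 3.0 * errors.std()
anomaly = X[0] + 5.0 * rng.normal(size=d)         # a perturbed "extreme" sample
err_anom = np.linalg.norm(anomaly - reconstruct(anomaly))
print("flagged as anomaly:", err_anom > threshold)
```

A (variational) neural autoencoder replaces only the `reconstruct` step; the deviation scoring and tail thresholding stay the same.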

How to cite: Fister, D., Pérez-Aracil, J., Peláez-Rodríguez, C., Drouard, M., G. Zaninelli, P., Barriopedro Cepero, D., García-Herrera, R., and Salcedo-Sanz, S.: Towards the effective autoencoder architecture to detect weather anomalies, EGU General Assembly 2023, Vienna, Austria, 23–28 Apr 2023, EGU23-7457, https://doi.org/10.5194/egusphere-egu23-7457, 2023.

X5.251
|
EGU23-7465
|
ECS
Invertible neural networks for satellite retrievals of aerosol optical depth
Paolo Pelucchi, Jorge Vicent, J. Emmanuel Johnson, Philip Stier, and Gustau Camps-Valls

The retrieval of atmospheric aerosol properties from satellite remote sensing is a complex and under-determined inverse problem. Traditional retrieval algorithms, based on radiative transfer models, must make approximations and assumptions to reach a unique solution or repeatedly use the expensive forward models to be able to quantify uncertainty. The recently introduced Invertible Neural Networks (INNs), a machine learning method based on Normalizing Flows, appear particularly suited for tackling inverse problems. They simultaneously model both the forward and the inverse branches of the problem, and their generative aspect allows them to efficiently provide non-parametric posterior distributions for the retrieved parameters, which can be used to quantify the retrieval uncertainty. So far INNs have successfully been applied to low-dimensional idealised inverse problems and even to some simpler scientific retrieval problems. Still, satellite aerosol retrievals present particular challenges, such as the high variability of the surface reflectance signal and the often comparatively small aerosol signal in the top-of-the-atmosphere (TOA) measurements.

In this study, we investigate the use of INNs for retrieving aerosol optical depth (AOD) and its uncertainty estimates at the pixel level from MODIS TOA reflectance measurements. The models are trained with custom synthetic datasets of TOA reflectance-AOD pairs made by combining the MODIS Dark Target algorithm’s atmospheric look-up tables and a MODIS surface reflectance product. The INNs are found to perform emulation and inversion of the look-up tables successfully. We initially train models adapted to different surface types by focusing our application on limited regional and seasonal contexts. The models are applied to real measurements from the MODIS sensor, and the generated AOD retrievals and posterior distributions are compared to the corresponding Dark Target and AERONET retrievals for evaluation and discussion.
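The generative inversion idea behind INNs can be written generically (this is the standard normalizing-flow formulation from the INN literature, not necessarily the authors' exact configuration): the invertible network \(f_\theta\) maps the parameters \(x\) to the observation \(y\) together with a latent vector \(z\) that absorbs the information lost in the forward process,

```latex
[y, z] = f_\theta(x), \qquad z \sim \mathcal{N}(0, I),
\qquad x = f_\theta^{-1}(y, z) \sim p(x \mid y).
```

Sampling many \(z\) for a fixed measurement \(y\) and inverting thus yields a non-parametric posterior over the retrieved parameters, which is the source of the pixel-level uncertainty estimates described above.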

How to cite: Pelucchi, P., Vicent, J., Johnson, J. E., Stier, P., and Camps-Valls, G.: Invertible neural networks for satellite retrievals of aerosol optical depth, EGU General Assembly 2023, Vienna, Austria, 23–28 Apr 2023, EGU23-7465, https://doi.org/10.5194/egusphere-egu23-7465, 2023.

X5.252
|
EGU23-8085
|
ECS
Improving the spatial accuracy of extreme tropical cyclone rainfall in ERA5 using deep learning
Guido Ascenso, Andrea Ficchì, Leone Cavicchia, Enrico Scoccimarro, Matteo Giuliani, and Andrea Castelletti

Tropical cyclones (TCs) are among the costliest and deadliest natural disasters due to the combination of their strong winds, induced storm surges, and heavy precipitation, which can cause devastating floods. Unfortunately, due to its high spatio-temporal variability, complex underlying physical processes, and a lack of high-quality observations, precipitation is still one of the most challenging aspects of a TC to model. However, as precipitation is a key forcing variable for hydrological processes acting across multiple space-time scales, accurate precipitation input is crucial for reliable hydrological simulations and forecasts.

A popular source of precipitation data is the ERA5 reanalysis dataset, frequently used as input to hydrological models when studying floods. However, ERA5 systematically underestimates TC-induced precipitation compared to MSWEP, a multi-source observational dataset fusing gauge, satellite, and reanalysis-based data that is currently one of the most accurate precipitation datasets. Moreover, the spatial distribution of TC rainfall in ERA5 leaves large room for improvement.

Here, we present a precipitation correction scheme based on U-Net, a popular deep-learning architecture. Rather than only adjusting the per-pixel precipitation values at each timestep of a given TC, we explicitly design our model to also adjust the spatial distribution of the precipitation; to the best of our knowledge, we are the first to do so. The key novelty of our model is a custom-made loss function, based on the combination of the fractions skill score (FSS) and mean absolute error (MAE) metrics. We train and validate the model on 100k time steps (with an 80:20 train:test split) from global TC precipitation events. We show how a U-Net trained with our loss function can reduce the per-pixel MAE of ERA5 precipitation by nearly as much as other state-of-the-art methods, while surpassing them significantly in terms of improved spatial patterns of precipitation. Finally, we discuss how the outputs of our model can be used for future research.
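The two ingredients of such a loss can be sketched in numpy. The neighborhood size, rain threshold, and the weighting between the two terms below are illustrative assumptions, not the authors' exact formulation.

```python
import numpy as np

def neighborhood_fractions(binary, win):
    """Fraction of rainy pixels in a win x win moving window (reflect-padded)."""
    pad = win // 2
    b = np.pad(binary.astype(float), pad, mode="reflect")
    c = np.cumsum(np.cumsum(b, axis=0), axis=1)   # 2D summed-area table
    c = np.pad(c, ((1, 0), (1, 0)))               # prepend a zero row/column
    h, w = binary.shape
    s = (c[win:win + h, win:win + w] - c[:h, win:win + w]
         - c[win:win + h, :w] + c[:h, :w])
    return s / win**2

def fss(obs, fcst, thresh=1.0, win=5):
    """Fractions skill score: 1 = perfect spatial match, 0 = no skill."""
    fo = neighborhood_fractions(obs >= thresh, win)
    ff = neighborhood_fractions(fcst >= thresh, win)
    num = np.mean((fo - ff) ** 2)
    den = np.mean(fo**2) + np.mean(ff**2)
    return 1.0 - num / den if den > 0 else 1.0

def combined_loss(obs, fcst, alpha=0.5):
    """Penalise both per-pixel error (MAE) and spatial mismatch (1 - FSS)."""
    return alpha * (1.0 - fss(obs, fcst)) + (1.0 - alpha) * np.mean(np.abs(obs - fcst))
```

In a deep learning framework the same two terms would be expressed with differentiable tensor operations so the loss can be backpropagated through the U-Net.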

How to cite: Ascenso, G., Ficchì, A., Cavicchia, L., Scoccimarro, E., Giuliani, M., and Castelletti, A.: Improving the spatial accuracy of extreme tropical cyclone rainfall in ERA5 using deep learning, EGU General Assembly 2023, Vienna, Austria, 23–28 Apr 2023, EGU23-8085, https://doi.org/10.5194/egusphere-egu23-8085, 2023.

X5.253
|
EGU23-8921
|
ECS
|
Identification of sensitive regions to climate change and anticipation of climate events in Brazil
Angelica Caseri and Francisco A. Rodrigues

In Brazil, the water system is essential for the electrical system and agribusiness. Understanding climate change and predicting long-term hydrometeorological phenomena is vital for developing and maintaining these sectors in the country. This work aims to use data from the SIN (National Interconnected System) in Brazil, from the main hydrological basins, as well as historical rainfall data, with complex networks and deep learning algorithms, to identify possible climate changes in Brazil and predict future hydrometeorological phenomena. The predictions generated with the methodology developed in this work showed satisfactory results, which allows identifying regions more sensitive to climate change and anticipating climate events. This work is expected to help the energy generation system and the agronomy sector in Brazil, the main sectors that drive the country's economy.

How to cite: Caseri, A. and A. Rodrigues, F.: Identification of sensitive regions to climate change and anticipation of climate events in Brazil, EGU General Assembly 2023, Vienna, Austria, 23–28 Apr 2023, EGU23-8921, https://doi.org/10.5194/egusphere-egu23-8921, 2023.

X5.254
|
EGU23-9337
|
ECS
|
Modeling landscape-scale vegetation response to climate: Synthesis of the EarthNet challenge
Vitus Benson, Christian Requena-Mesa, Claire Robin, Lazaro Alonso, Nuno Carvalhais, and Markus Reichstein

The biosphere displays high heterogeneity at the landscape scale. Vegetation modelers struggle to represent this variability in process-based models because global observations of micrometeorology and plant traits are not available at such fine granularity. However, remote sensing data are available: the Sentinel-2 satellites capture aspects of localized vegetation dynamics at 10 m resolution. The EarthNet challenge (EarthNet2021, [1]) aims at predicting satellite imagery conditioned on coarse-scale weather data. Multiple research groups approached this challenge with deep learning [2,3,4]. Here, we evaluate how well these satellite image models simulate the vegetation response to climate, where the vegetation status is approximated by the NDVI vegetation index.

Achieving the new vegetation-centric evaluation requires three steps. First, we update the original EarthNet2021 dataset to be suitable for vegetation modeling: EarthNet2021x includes improved georeferencing, a land cover map, and a more effective cloud mask. Second, we introduce the interpretable evaluation metric VegetationScore: the Nash–Sutcliffe model efficiency (NSE) of NDVI predictions over clear-sky observations per vegetated pixel, aggregated to dataset level through normalization. The ground-truth NDVI time series achieves a VegetationScore of 1; the target-period mean NDVI a VegetationScore of 0. Third, we assess the skill of two deep neural networks with the VegetationScore: ConvLSTM [2,3], which combines convolutions and recurrency, and EarthFormer [4], a Transformer adaptation for Earth science problems.

Both models significantly outperform the persistence baseline. They do not display systematic biases and generally capture spatial patterns. Yet both neural networks achieve a negative VegetationScore: only in about 20% of vegetated pixels do the deep learning models beat a hypothetical model predicting the true target-period mean NDVI. This is partly because the models largely underestimate the temporal variability; however, the target variability may partially be inflated by the noisy nature of the observed NDVI. Additionally, increasing uncertainty at longer lead times decreases scores: the mean RMSE in the first 25 days is 50% lower than between 75 and 100 days lead time. In general, consistent with the EarthNet2021 leaderboard, EarthFormer outperforms the ConvLSTM. With EarthNet2021x, a narrower perspective on the EarthNet challenge is introduced. Modeling localized vegetation response is a task that requires careful adjustment of off-the-shelf computer vision architectures for them to excel. The resulting specialized approaches can then be used to advance our understanding of the complex interactions between vegetation and climate.
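The two anchor points of the VegetationScore follow directly from the Nash–Sutcliffe efficiency. A minimal per-pixel sketch (the clear-sky masking and the dataset-level normalization used above are omitted):

```python
import numpy as np

def nse(observed, predicted):
    """Nash-Sutcliffe model efficiency of an NDVI prediction for one pixel."""
    observed = np.asarray(observed, float)
    predicted = np.asarray(predicted, float)
    return 1.0 - np.sum((observed - predicted) ** 2) / np.sum((observed - observed.mean()) ** 2)

ndvi = np.array([0.30, 0.35, 0.45, 0.50, 0.40])     # one pixel's clear-sky NDVI series

print(nse(ndvi, ndvi))                      # ground truth itself -> 1.0
print(nse(ndvi, np.full(5, ndvi.mean())))   # target-period mean -> 0.0
```

Any model with NSE below zero is therefore worse, for that pixel, than simply predicting the target-period mean.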



 [1] Requena-Mesa, Benson, Reichstein, Runge and Denzler. EarthNet2021: A large-scale dataset and challenge for Earth surface forecasting as a guided video prediction task. CVPR Workshops, 2021.

 [2] Diaconu, Saha, Günnemann and Zhu. Understanding the Role of Weather Data for Earth Surface Forecasting Using a ConvLSTM-Based Model. CVPR Workshops, 2022.

 [3] Kladny, Milanta, Mraz, Hufkens and Stocker. Deep learning for satellite image forecasting of vegetation greenness. bioRxiv, 2022.

 [4] Gao, Shi, Wang, Zhu, Wang, Li and Yeung. Earthformer: Exploring Space-Time Transformers for Earth System Forecasting. NeurIPS, 2022.

How to cite: Benson, V., Requena-Mesa, C., Robin, C., Alonso, L., Carvalhais, N., and Reichstein, M.: Modeling landscape-scale vegetation response to climate: Synthesis of the EarthNet challenge, EGU General Assembly 2023, Vienna, Austria, 23–28 Apr 2023, EGU23-9337, https://doi.org/10.5194/egusphere-egu23-9337, 2023.

X5.255
|
EGU23-10219
|
ECS
Identifying compound weather prototypes of forest mortality with β-VAE
Mohit Anand, Friedrich Bohn, Lily-belle Sweet, Gustau Camps-Valls, and Jakob Zscheischler

Forest health is affected by many interacting and correlated weather variables over multiple temporal scales. Climate change affects weather conditions and their dependencies. To better understand future forest health and status, an improved scientific understanding of the complex relationships between weather conditions and forest mortality is required. Explainable AI (XAI) methods are increasingly used to understand and simulate physical processes in complex environments, given enough data. In this work, an hourly weather generator (AWE-GEN) is used to simulate 200,000 years of daily weather conditions representative of central Germany. It is capable of simulating low- and high-frequency characteristics of weather variables and also captures the inter-annual variability of precipitation. These data are then used to drive an individual-based forest model (FORMIND) to simulate the dynamics of beech, pine, and spruce forests. A variational autoencoder (β-VAE) is used to learn representations of the generated weather conditions, which include radiation, precipitation and temperature. We learn shared and variable-specific latent representations using a decoder network that remains the same for all weather variables; the representation learning is completely unsupervised. Using the output of the forest model, we identify single and compounding weather prototypes that are associated with extreme forest mortality. We find that the prototypes associated with extreme mortality are similar for pine and spruce forests and slightly different for beech forests. Furthermore, although the compounding weather prototypes represent a larger sample size (2.4%-3.5%) than the single prototypes (1.7%-2.2%), they are associated with higher levels of mortality on average. Overall, our research illustrates how deep learning frameworks can be used to identify weather patterns that are associated with extreme impacts.
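For reference, the β-VAE maximises the standard VAE evidence lower bound with the KL term up-weighted by a factor β, which encourages more disentangled latent representations (generic objective; the shared/variable-specific decoder design described above is a separate modelling choice):

```latex
\mathcal{L}_{\beta\text{-VAE}}(\theta, \phi; x) =
\mathbb{E}_{q_\phi(z \mid x)}\!\left[\log p_\theta(x \mid z)\right]
- \beta \, D_{\mathrm{KL}}\!\left(q_\phi(z \mid x) \,\|\, p(z)\right),
\qquad \beta > 1 .
```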

 

How to cite: Anand, M., Bohn, F., Sweet, L., Camps-Valls, G., and Zscheischler, J.: Identifying compound weather prototypes of forest mortality with β-VAE, EGU General Assembly 2023, Vienna, Austria, 23–28 Apr 2023, EGU23-10219, https://doi.org/10.5194/egusphere-egu23-10219, 2023.

X5.256
|
EGU23-11355
Estimation of Fine Dust Concentration from BGR Images in Surveillance Cameras
Hoyoung Cha, Jongyun Byun, Jongjin Baik, and Changhyun Jun

  This study proposes a novel approach for estimating fine dust concentration from raw video data recorded by surveillance cameras. First, several regions of interest are defined from images extracted from videos of surveillance cameras installed at Chung-Ang University. Among them, sky fields are mainly considered in order to track changes in the characteristics of each color. After converting RGB images into BGR images, the number of discrete pixels with high brightness intensities in the blue channel is analyzed by investigating its relationship with the fine dust concentration measured at automatic monitoring stations near the campus. Here, threshold values from 125 to 200 are considered to find optimal conditions from changes in the values of each pixel in the blue channel. This study uses the Pearson correlation coefficient to quantify the correlation between the number of pixels with values over the selected threshold and the observed fine dust concentration. As an example, on one specific date, the coefficients indicate positive correlations ranging from 0.57 to 0.89 across thresholds. This study is a novel attempt to suggest a new, simple, and efficient method for estimating fine dust concentration from surveillance cameras, which are common in many areas around the world.
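The threshold-count-and-correlate step can be sketched directly. The arrays below are random stand-ins for the real inputs (blue-channel sky crops and station readings), so the resulting correlation is meaningless; the point is the pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins: hourly blue-channel sky crops (8-bit) and
# co-located fine dust readings from a nearby monitoring station.
n_frames = 24
sky_blue = rng.integers(0, 256, size=(n_frames, 40, 60))
pm_obs = rng.uniform(10, 80, size=n_frames)

for t in range(125, 201, 25):
    counts = (sky_blue >= t).sum(axis=(1, 2))      # bright blue pixels per frame
    r = np.corrcoef(counts, pm_obs)[0, 1]
    print(f"threshold {t}: Pearson r = {r:+.2f}")
```

The threshold with the strongest correlation against the station data would then be selected as the operating point.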

 

Keywords: Fine Dust Concentration, BGR Image, Surveillance Camera, Threshold, Correlation Analysis

 

Acknowledgment

  This work was supported by the National Research Foundation of Korea (NRF) grants funded by the Korea government (MSIT) (No. NRF-2022R1A4A3032838 and No. 2020R1G1A1013624) and by the Korea Meteorological Administration Research and Development Program under Grant KMI2022-01910.

How to cite: Cha, H., Byun, J., Baik, J., and Jun, C.: Estimation of Fine Dust Concentration from BGR Images in Surveillance Cameras, EGU General Assembly 2023, Vienna, Austria, 23–28 Apr 2023, EGU23-11355, https://doi.org/10.5194/egusphere-egu23-11355, 2023.

X5.257
|
EGU23-12137
|
ECS
|
Identifying mechanisms of low-level jets near coast of Kurzeme using Principal Component Analysis
Maksims Pogumirskis, Tija Sīle, and Uldis Bethers

Low-level jets are maxima in the vertical wind speed profile in the lowest levels of the atmosphere. When present, low-level jets can have a significant impact on wind energy. Wind conditions in low-level jets depart from traditional assumptions about the wind profile, and low-level jets can also influence the stability and turbulence that are important for wind energy applications.

In the literature, a detection algorithm is commonly used to estimate the frequency of low-level jets. The algorithm searches for a wind speed maximum in the lowest levels of the atmosphere with a temperature inversion above the jet maximum. The algorithm is useful for identifying the presence of low-level jets and estimating their frequency. However, low-level jets can be caused by a number of different mechanisms, which leads to differences in their characteristics. Therefore, additional analysis is necessary to distinguish between different types of jets and characterize their properties. We aim to automate this process using Principal Component Analysis (PCA) to identify the main patterns of wind speed and temperature. By analyzing the diurnal and seasonal cycles of these patterns, a better understanding of the climatology of low-level jets in the region can be gained.
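The detection step can be sketched for a single profile. The falloff threshold and the inversion check below are one common variant of such criteria (the study's exact thresholds are not given above), and the profile is synthetic.

```python
import numpy as np

def detect_llj(z, wind, temp, z_max=500.0, falloff=2.0):
    """Flag a low-level jet: a wind maximum below z_max whose speed drops
    by at least `falloff` m/s above it, with a temperature inversion aloft.
    Thresholds are illustrative, not the study's settings."""
    low = z <= z_max
    i = int(np.argmax(np.where(low, wind, -np.inf)))   # wind max below z_max
    above = z > z[i]
    if not above.any():
        return False, None
    drops = wind[i] - wind[above].min() >= falloff      # wind falls off aloft
    inversion = temp[above].max() > temp[i]             # warmer air above the max
    return bool(drops and inversion), z[i]

# Synthetic nocturnal profile with a jet nose near 120 m
z = np.array([10, 40, 80, 120, 200, 300, 500, 700], float)
wind = np.array([3, 6, 9, 11, 8, 7, 6, 6], float)
temp = np.array([281, 282, 283, 283.5, 284.5, 285, 284, 283], float)

is_jet, height = detect_llj(z, wind, temp)
print(is_jet, height)   # -> True 120.0 for this profile
```

Applying such a function to every grid cell and time step yields the frequency maps described above, and the flagged profiles are the input to the subsequent PCA.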

This study focuses on the central part of the Baltic Sea. Several recent studies have identified the presence of low-level jets near the coast of Kurzeme. Typically, low-level jet maxima are located several hundred meters above the surface, while near the coast of Kurzeme they are usually within the lowest 100 meters of the atmosphere.

Data from the UERRA reanalysis with 11 km horizontal resolution on 12 height levels in the lowest 500 meters of the atmosphere were used. The jet-detection algorithm was applied to the data to estimate the frequency of low-level jets in each grid cell of the model. Jet events were grouped by wind direction to identify the main trajectories of low-level jets in the region. Several atmospheric cross-sections that low-level jets frequently flow through were chosen for further analysis.

Model data were interpolated to the chosen cross-sections, and PCA was applied to the cross-section data of wind speed, geostrophic wind speed and temperature. The main patterns of these meteorological parameters, such as the wind speed maximum, the temperature inversion above the sea surface and the temperature difference between sea and land, were identified by the PCA. Differences in principal components between cross-sections, and their diurnal and seasonal patterns, helped to gain a better understanding of the climatology, extent and mechanisms of low-level jets in the region.

How to cite: Pogumirskis, M., Sīle, T., and Bethers, U.: Identifying mechanisms of low-level jets near coast of Kurzeme using Principal Component Analysis, EGU General Assembly 2023, Vienna, Austria, 23–28 Apr 2023, EGU23-12137, https://doi.org/10.5194/egusphere-egu23-12137, 2023.

X5.258
|
EGU23-15185
|
ECS
Deep learning to support ocean data quality control
Mohamed Chouai, Felix Simon Reimers, and Sebastian Mieruch-Schnülle

In this study, which is part of the M-VRE project [https://mosaic-vre.org/about], we aim to improve a quality control (QC) system for Arctic Ocean temperature profile data using deep learning. For the training, validation, and evaluation of our algorithms, we use the UDASH dataset [https://essd.copernicus.org/articles/10/1119/2018/]. In the classical QC setting, the ocean expert, or "operator", applies a series of thresholding (classical) algorithms to identify, i.e. flag, erroneous data. In the next step, the operator visually inspects every data profile where suspicious samples have been identified. The goal of this time-consuming visual QC is to find "false positives", i.e. flagged data that are actually good, because every sample/profile has not only a scientific value but also a monetary one. Finally, the operator turns all "false positive" data back to good. The crucial point here is that although these samples/profiles exceed certain thresholds, they are considered good by the ocean expert. These human expert decisions are extremely difficult, if not impossible, to map with classical algorithms. However, deep-learning neural networks have the potential to learn complex human behavior. Therefore, we have trained a deep learning system to "learn" exactly this expert behavior of finding "false positives" (identified by the classic thresholds), which can be turned back to good accordingly. The first results are promising: in a fully automated setting, deep learning improves the results and fewer data are flagged. In a subsequent visual QC setting, deep learning relieves the expert with a distinct workload reduction and offers the option to clearly increase the quality of the data.
Our long-term goal is to develop an Arctic quality control system as a series of web services and Jupyter notebooks to apply automated and visual QC online in an efficient, consistent, reproducible, and interactive way.

How to cite: Chouai, M., Simon Reimers, F., and Mieruch-Schnülle, S.: Deep learning to support ocean data quality control, EGU General Assembly 2023, Vienna, Austria, 23–28 Apr 2023, EGU23-15185, https://doi.org/10.5194/egusphere-egu23-15185, 2023.

X5.259
|
EGU23-16098
Identifying Lightning Processes in ERA5 Soundings with Deep Learning
Tobias Hell, Gregor Ehrensperger, Georg J. Mayr, and Thorsten Simon

Atmospheric environments favorable for lightning and convection are commonly represented by proxies or parameterizations based on expert knowledge, such as CAPE, wind shear, charge separation, or combinations thereof. Recent developments in machine learning, high-resolution reanalyses, and accurate lightning observations open possibilities for identifying tailored proxies without prior expert knowledge. To identify vertical profiles favorable for lightning, a deep neural network links ERA5 vertical profiles of cloud physics, mass-field variables and wind to lightning location data from the Austrian Lightning Detection & Information System (ALDIS), transformed into a binary target variable labelling the ERA5 cells as lightning or no-lightning cells. The ERA5 parameters are taken on model levels beyond the tropopause, forming an input layer of approx. 670 features. Data from 2010–2018 serve for training and validation. On independent test data from 2019, the deep network outperforms a reference with features based on meteorological expertise. Shapley values highlight the atmospheric processes learned by the network, which identifies cloud ice and snow content in the upper and mid-troposphere as relevant features. As these patterns correspond to the separation of charge in thunderstorm clouds, the deep learning model can serve as a physically meaningful description of lightning.

How to cite: Hell, T., Ehrensperger, G., Mayr, G. J., and Simon, T.: Identifying Lightning Processes in ERA5 Soundings with Deep Learning, EGU General Assembly 2023, Vienna, Austria, 23–28 Apr 2023, EGU23-16098, https://doi.org/10.5194/egusphere-egu23-16098, 2023.

X5.260
|
EGU23-16163
|
ECS
|
A comparison of methods for determining the number of classes in unsupervised classification of climate models
Emma Boland, Dani Jones, and Erin Atkinson

Unsupervised classification is becoming an increasingly common method to objectively identify coherent structures within both observed and modelled climate data. However, the user must choose the number of classes to fit in advance. Typically, a combination of statistical methods and expertise is used to choose the appropriate number of classes for a given study; however, it may not be possible to identify a single 'optimal' number of classes. In this work we present a heuristic method for determining the number of classes unambiguously for modelled data where more than one ensemble member is available. This method requires robustness of the class definitions between simulated ensembles of the system of interest. For demonstration, we apply this to the clustering of Southern Ocean potential temperatures in a CMIP6 climate model, and compare with other common criteria such as the Bayesian Information Criterion (BIC) and the silhouette score.
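The two standard criteria mentioned above can be compared on synthetic data. This sklearn-based sketch uses illustrative blob data and an illustrative range of class numbers; the ensemble-robustness heuristic itself is not shown.

```python
import numpy as np
from sklearn.datasets import make_blobs
from sklearn.metrics import silhouette_score
from sklearn.mixture import GaussianMixture

# Synthetic stand-in for one ensemble member's data: three well-separated
# clusters in a 2-D feature space.
X, _ = make_blobs(n_samples=300, centers=3, cluster_std=0.5, random_state=0)

# Fit a Gaussian mixture for each candidate number of classes and record
# the two common criteria: BIC (lower is better), silhouette (higher is better).
for k in range(2, 7):
    gmm = GaussianMixture(n_components=k, random_state=0).fit(X)
    labels = gmm.predict(X)
    print(k, round(gmm.bic(X), 1), round(silhouette_score(X, labels), 3))
```

On real climate data the criteria often disagree or plateau, which is precisely the ambiguity the ensemble-based heuristic is designed to resolve.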

How to cite: Boland, E., Jones, D., and Atkinson, E.: A comparison of methods for determining the number of classes in unsupervised classification of climate models, EGU General Assembly 2023, Vienna, Austria, 23–28 Apr 2023, EGU23-16163, https://doi.org/10.5194/egusphere-egu23-16163, 2023.

X5.261
|
EGU23-16186
|
ECS
A review of deep learning for weather prediction
Jannik Thümmel, Martin Butz, and Bedartha Goswami

Recent years have seen substantial performance improvements of deep-learning-based weather prediction models (DLWPs). These models cover a large range of temporal and spatial resolutions, from nowcasting to seasonal forecasting, and on scales ranging from single to hundreds of kilometers. DLWPs also exhibit a wide variety of neural architectures and training schemes, with no clear consensus on best practices. Focusing on the short-to-mid-term forecasting ranges, we review several recent, best-performing models with respect to critical design choices. We emphasize the importance of self-organizing latent representations and inductive biases in DLWPs: while NWPs are designed to simulate resolvable physical processes and integrate unresolvable subgrid-scale processes by approximate parameterizations, DLWPs allow the latent representation of both kinds of dynamics. The purpose of this review is to facilitate targeted research developments and understanding of how design choices influence the performance of DLWPs. While there is no single best model, we highlight promising avenues towards accurate spatio-temporal modeling, probabilistic forecasts, and computationally efficient training and inference.

How to cite: Thümmel, J., Butz, M., and Goswami, B.: A review of deep learning for weather prediction, EGU General Assembly 2023, Vienna, Austria, 23–28 Apr 2023, EGU23-16186, https://doi.org/10.5194/egusphere-egu23-16186, 2023.

X5.262
|
EGU23-17082
|
ECS
A statistical approach on rapid estimations of climate change indices by monthly instead of daily data
Kristofer Hasel, Marianne Bügelmayer-Blaschek, and Herbert Formayer

Climate change indices (CCI) defined by the Expert Team on Climate Change Detection and Indices (ETCCDI) contribute profoundly to understanding climate and its change. They are used to present climate change in an easy-to-understand and tangible way, thus facilitating climate communication. Many of the indices are peak-over-threshold indices that must be calculated from daily and, if necessary, bias-corrected data. We present a method to rapidly estimate specific CCI from monthly instead of daily data, while also performing a simple bias correction as well as a localisation (downscaling). To this end, we used ERA5-Land data with a spatial resolution of 0.1°, supplemented by a CMIP6 ssp5-8.5 climate projection, to derive regression functions that allow a rapid estimation from monthly data. Using a climate projection as a supplement in training the regression functions allows application not only to historical periods but also to future periods such as those provided by climate projections. The presented method can be adapted to any data set, allowing an even higher spatial resolution.
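The regression idea can be sketched with synthetic data: a daily-based peak-over-threshold index (here a "summer days" count with an illustrative 25 °C threshold) is regressed on the monthly mean, after which only monthly data are needed. The cubic fit and all data below are assumptions for illustration, not the study's actual regression functions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic daily Tmax for 240 months (30-day months for simplicity),
# standing in for the ERA5-Land training data.
monthly_mean = rng.uniform(5.0, 25.0, 240)
daily = monthly_mean[:, None] + rng.normal(0.0, 3.0, (240, 30))

# Peak-over-threshold CCI: number of "summer days" (Tmax > 25 degC) per month.
summer_days = (daily > 25.0).sum(axis=1)

# Regression linking the monthly mean to the daily-based index;
# a cubic polynomial is an illustrative choice.
coeffs = np.polynomial.polynomial.polyfit(monthly_mean, summer_days, 3)

def estimate_summer_days(tmean):
    """Rapid CCI estimate from monthly data alone."""
    return np.clip(np.polynomial.polynomial.polyval(tmean, coeffs), 0, 31)

print(estimate_summer_days(np.array([10.0, 20.0, 24.0])))
```

Once such a function is fitted per index (and, if needed, per location), the expensive daily data are no longer required at estimation time.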

How to cite: Hasel, K., Bügelmayer-Blaschek, M., and Formayer, H.: A statistical approach on rapid estimations of climate change indices by monthly instead of daily data, EGU General Assembly 2023, Vienna, Austria, 23–28 Apr 2023, EGU23-17082, https://doi.org/10.5194/egusphere-egu23-17082, 2023.

X5.263
|
EGU23-17197
Machine learning workflow for deriving regional geoclimatic clusters from high-dimensional data
Sebastian Lehner, Katharina Enigl, and Matthias Schlögl

Geoclimatic regions represent climatic forcing zones, which constitute important spatial entities that serve as a basis for a broad range of analyses in the earth system sciences. The plethora of geospatial variables relevant for obtaining consistent clusters results in high dimensionality, especially when working with high-resolution gridded data, which may render the derivation of such regions complex. This is worsened by typical characteristics of geoclimatic data such as multicollinearity, nonlinear effects and potentially complex interactions between features. We therefore present a nonparametric machine learning workflow, consisting of dimensionality reduction and clustering, for deriving geospatial clusters with similar geoclimatic characteristics. We demonstrate the applicability of the proposed procedure using a comprehensive dataset featuring climatological and geomorphometric data from Austria, aggregated to the recent climatological normal from 1992 to 2021.
 
The modelling workflow consists of three major sequential steps: (1) linear dimensionality reduction using Principal Component Analysis, yielding a reduced, orthogonal sub-space; (2) nonlinear dimensionality reduction applied to the reduced sub-space using Uniform Manifold Approximation and Projection; and (3) clustering the learned manifold projection via Hierarchical Density-Based Spatial Clustering of Applications with Noise. The contribution of the input features to the clustering result is then assessed by means of the permutation feature importance of random forest models, which are trained by treating the clustering result as a supervised classification problem. The results show the flexibility of the defined workflow and exhibit good agreement with both quantitatively derived and synoptically informed characterizations of geoclimatic regions from other studies. However, this flexibility does entail certain challenges with respect to hyperparameter settings, which require careful exploration and tuning. The proposed workflow may serve as a blueprint for deriving consistent geospatial clusters exhibiting similar geoclimatic attributes.

How to cite: Lehner, S., Enigl, K., and Schlögl, M.: Machine learning workflow for deriving regional geoclimatic clusters from high-dimensional data, EGU General Assembly 2023, Vienna, Austria, 23–28 Apr 2023, EGU23-17197, https://doi.org/10.5194/egusphere-egu23-17197, 2023.

X5.264
|
EGU23-5967
|
ECS
Sebastian Scher, Andreas Trügler, and Jakob Abermann

Machine Learning (ML) and AI techniques, especially methods based on Deep Learning, have long been considered black boxes that may be good at making predictions but not at explaining them. This has changed recently, with more techniques becoming available that explain the predictions of ML models, known as Explainable AI (XAI). These have also been adopted in climate science because of their potential to help us understand the physics behind phenomena in geoscience. It is, however, unclear how large that potential really is and how these methods can be incorporated into the scientific process. In our study, we use the exemplary research question of which aspects of the large-scale atmospheric circulation affect specific local conditions. We compare the different answers to this question obtained with a range of methods, from the traditional approach of targeted data analysis based on physical knowledge (such as dimensionality reduction based on physical reasoning) to purely data-driven and physics-unaware methods using Deep Learning with XAI techniques. Based on these insights, we discuss the usefulness and potential pitfalls of XAI for understanding and explaining phenomena in the geosciences.
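The kind of question posed here, which parts of a large-scale field matter for a local target, can be illustrated with a simple model-agnostic XAI technique. The sketch below is not the authors' method: it uses a synthetic 8x8 "circulation" field, a linear least-squares surrogate in place of a deep network, and occlusion-style attribution (zeroing one grid cell at a time and recording the change in prediction).

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy setup: a "large-scale circulation" field (8x8 grid) drives a local
# scalar through weights concentrated in one corner; names are illustrative.
true_w = np.zeros((8, 8))
true_w[:3, :3] = 1.0  # only the upper-left region matters
fields = rng.normal(size=(200, 8, 8))
target = fields.reshape(200, -1) @ true_w.ravel()

# Fit a linear surrogate model by least squares (stand-in for a deep network).
w_hat, *_ = np.linalg.lstsq(fields.reshape(200, -1), target, rcond=None)

# Occlusion-style attribution: zero out each grid cell of one sample and
# record how much the prediction changes.
sample = fields[0].ravel()
base = sample @ w_hat
attribution = np.empty(64)
for i in range(64):
    occluded = sample.copy()
    occluded[i] = 0.0
    attribution[i] = abs(base - occluded @ w_hat)
attribution = attribution.reshape(8, 8)

# The attribution map should highlight the upper-left region.
print(attribution[:3, :3].sum() > attribution[3:, 3:].sum())
```

Here the attribution map recovers the planted region; the open question the abstract raises is whether such maps, applied to real networks and real circulation data, yield physical insight rather than artefacts.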

How to cite: Scher, S., Trügler, A., and Abermann, J.: Potentials and challenges of using Explainable AI for understanding atmospheric circulation, EGU General Assembly 2023, Vienna, Austria, 23–28 Apr 2023, EGU23-5967, https://doi.org/10.5194/egusphere-egu23-5967, 2023.

Posters virtual: Fri, 28 Apr, 16:15–18:00 | vHall CL

Chairperson: Duncan Watson-Parris
vCL.6
|
EGU23-10391
|
ECS
|
Jiawen Zhang and Jianyu Liu

Hydrological models and machine learning models are widely used in streamflow simulation and data reconstruction. However, a global assessment of these models is still lacking, and no synthesized catchment-scale streamflow product derived from multiple models is available globally. In this study, we comprehensively evaluated four conceptual hydrological models (GR2M, XAJ, SAC, Alpine) and four machine learning models (RF, GBDT, DNN, CNN) at 16,218 selected gauging stations worldwide, and then applied a multi-model weighting ensemble (MWE) method to merge the streamflow simulated by these models. Generally, the average performance of the machine learning models across all stations is better than that of the hydrological models, with more stations reaching a “qualified” simulation accuracy (KGE>0.2); however, the hydrological models achieve a higher percentage of stations with a “good” simulation accuracy (KGE>0.6). Specifically, for the average accuracy during the validation period, 67% (27%) and 74% (21%) of stations reached a “qualified” (“good”) level for the hydrological models and machine learning models, respectively. XAJ is the best-performing of the four hydrological models, particularly in tropical and temperate zones. Among the machine learning models, GBDT performs best on the global scale. The MWE effectively improves the simulation accuracy and performs much better than the traditional multi-model arithmetic ensemble (MAE), especially the constrained least squares prediction combination method (CLS), with 82% (28%) of the stations reaching a “qualified” (“good”) accuracy. Furthermore, by exploring the factors influencing the streamflow simulation, we found that both machine learning models and hydrological models perform better in wetter areas.
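The “qualified” (KGE>0.2) and “good” (KGE>0.6) thresholds above are defined in terms of the Kling-Gupta efficiency. As a reference, here is a minimal numpy implementation of the standard KGE (Gupta et al., 2009); the abstract does not specify which KGE variant was used, and the runoff series below is synthetic toy data.

```python
import numpy as np

def kge(sim, obs):
    """Kling-Gupta efficiency: 1 indicates a perfect match."""
    sim, obs = np.asarray(sim, float), np.asarray(obs, float)
    r = np.corrcoef(sim, obs)[0, 1]    # linear correlation component
    alpha = sim.std() / obs.std()      # variability ratio
    beta = sim.mean() / obs.mean()     # bias (mean) ratio
    return 1.0 - np.sqrt((r - 1) ** 2 + (alpha - 1) ** 2 + (beta - 1) ** 2)

rng = np.random.default_rng(2)
obs = rng.gamma(2.0, 10.0, size=120)           # toy monthly runoff series
good_sim = obs + rng.normal(0, 2.0, size=120)  # small random error
poor_sim = rng.permutation(obs)                # right distribution, no timing

print(round(kge(obs, obs), 3))   # 1.0 by construction
print(kge(good_sim, obs) > 0.6)  # clears the "good" threshold
print(kge(poor_sim, obs) < 0.6)  # correct distribution alone is not enough
```

The decomposition into correlation, variability, and bias terms is what makes the two thresholds interpretable: a shuffled series keeps alpha and beta near 1 but loses the correlation term, so its KGE drops well below the “good” level.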

How to cite: Zhang, J. and Liu, J.: Simulation and reconstruction of global monthly runoff based on hydrological models and machine learning models, EGU General Assembly 2023, Vienna, Austria, 23–28 Apr 2023, EGU23-10391, https://doi.org/10.5194/egusphere-egu23-10391, 2023.