Displays

ITS4.3/AS5.2

There are many ways in which machine learning promises to provide insight into the Earth System, and this area of research is developing at a breathtaking pace. Whether unsupervised, supervised, and reinforcement learning can deliver on this promise remains an open question, particularly for prediction. Machine learning could help extract information from the wealth of Earth System data, such as satellite observations, and improve model fidelity through novel parameterisations or speed-ups. This session invites submissions spanning modelling and observational approaches, with the aim of providing an overview of the state of the art in the application of these novel methods.

Co-organized by BG2/CL5/ESSI2/NP4
Convener: Julien Brajard | Co-conveners: Peter Düben, Redouane Lguensat, Francine Schevenhoven, Maike Sonnewald
Displays | Attendance Wed, 06 May, 14:00–18:00 (CEST)


Chat time: Wednesday, 6 May 2020, 14:00–15:45

Chairperson: Julien Brajard
D2327 |
EGU2020-19339
Rachel Furner, Peter Haynes, Dan Jones, Dave Munday, Brooks Paige, and Emily Shuckburgh

The recent boom in machine learning and data science has led to a number of new opportunities in the environmental sciences. In particular, climate models represent the best tools we have to predict, understand and potentially mitigate climate change; however, these process-based models are incredibly complex and require huge amounts of high-performance computing resources. Machine learning offers opportunities to greatly improve the computational efficiency of these models.

Here we discuss our recent efforts to reduce the computational cost associated with running a process-based model of the physical ocean by developing an analogous data-driven model. We train statistical and machine learning algorithms using the outputs from a highly idealised sector configuration of a general circulation model (MITgcm). Our aim is to develop an algorithm which is able to predict the future state of the general circulation model to a similar level of accuracy in a more computationally efficient manner.

We first develop a linear regression model to investigate the sensitivity of data-driven approaches to various inputs, e.g. temperature on different spatial and temporal scales, and meta-variables such as location information. Following this, we develop a neural network model to replicate the general circulation model, as in the work of Dueben and Bauer (2018) and Scher (2018).
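
As an illustration of the first of these steps, the sketch below fits a linear regression that maps the model state (plus a location meta-variable) at one time step to the temperature field at the next step. The grid size, variables and feature choices are hypothetical placeholders, not the configuration used by the authors.

```python
# Minimal sketch of a linear-regression emulator of a GCM temperature field
# (grid size, variables and feature choices are illustrative stand-ins).
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n_times, n_points = 1000, 500                                   # snapshots x grid points
base = 15 + 10 * np.sin(np.linspace(0, np.pi, n_points))        # stand-in mean state
temp = base + 0.05 * np.cumsum(rng.standard_normal((n_times, n_points)), axis=0)
lat = np.tile(np.linspace(-60, 60, n_points), (n_times, 1))     # location meta-variable

# Predict the temperature field at t+1 from the field (and latitude) at t.
X = np.concatenate([temp[:-1], lat[:-1]], axis=1)
y = temp[1:]

model = LinearRegression().fit(X[:800], y[:800])
print("held-out R^2:", model.score(X[800:], y[800:]))
```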

We present a discussion on the sensitivity of data-driven models and preliminary results from the neural network based model.

 

Dueben, P. D., & Bauer, P. (2018). Challenges and design choices for global weather and climate models based on machine learning. Geoscientific Model Development, 11(10), 3999-4009.

Scher, S. (2018). Toward Data‐Driven Weather and Climate Forecasting: Approximating a Simple General Circulation Model With Deep Learning. Geophysical Research Letters, 45(22), 12,616–12,622.

How to cite: Furner, R., Haynes, P., Jones, D., Munday, D., Paige, B., and Shuckburgh, E.: Developing a data-driven ocean model, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-19339, https://doi.org/10.5194/egusphere-egu2020-19339, 2020.

D2328 |
EGU2020-21754
Dario Lucente, Freddy Bouchet, and Corentin Herbert

There is a growing interest in the climate community in improving the prediction of high-impact climate events, for instance ENSO (El Niño–Southern Oscillation) or extreme events, using a combination of model and observation data. In this talk we present a machine learning approach for predicting the committor function, which we argue is the relevant concept for this problem.

Because the dynamics of the climate system is chaotic, one usually distinguishes between time scales much shorter than a Lyapunov time, for which a deterministic weather forecast is relevant, and time scales much longer than a mixing time, beyond which any deterministic forecast is irrelevant and only climate-averaged or probabilistic quantities can be predicted. However, for most applications, the largest interest is in intermediate time scales for which some information, more precise than the climate averages, might be predicted, but for which a deterministic forecast is not relevant. We call this range of time scales the predictability margin. We stress in this talk that the prediction problem at the predictability margin is of a probabilistic nature. Indeed, such time scales might typically be of the order of the Lyapunov time scale or larger, where errors in the initial condition and model errors limit our ability to deterministically compute the evolution. In this talk we explain that, in a dynamical context, the relevant quantity for predicting a future event at the predictability margin is a committor function. A committor function is the probability that an event will occur in the future, as a function of the current state of the system.

We compute and discuss the committor function from data, either through a direct approach or through a machine learning approach using neural networks. We discuss two examples: a) the computation of the committor function for the Jin and Timmermann model, a low-dimensional model proposed to explain the decadal amplitude changes of El Niño, and b) the computation of the committor function for extreme heat waves. We compare several machine learning approaches, using neural networks or kernel-based analogue methods.
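
In practice, a data-driven committor estimate can be obtained by labelling each state of a long trajectory according to whether the event occurs within a chosen horizon and training a probabilistic classifier on those labels; the predicted class probability is then the committor. The sketch below illustrates this on a synthetic index (the threshold, horizon and network size are illustrative assumptions, not the settings used by the authors).

```python
# Hedged sketch: estimating a committor function with a neural-network classifier.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
t = np.arange(20000)
x = np.sin(2 * np.pi * t / 500) + 0.3 * rng.standard_normal(len(t))   # stand-in climate index
states = np.column_stack([x, np.gradient(x)])                         # current state of the system

horizon, threshold = 50, 1.2                                          # illustrative choices
# Label state at time i with 1 if the event (index exceeding the threshold)
# occurs within the next `horizon` steps, else 0.
future_max = np.array([x[i + 1:i + 1 + horizon].max() for i in range(len(x) - horizon)])
labels = (future_max > threshold).astype(int)

clf = MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=500)
clf.fit(states[:len(labels)], labels)
committor = clf.predict_proba(states[:len(labels)])[:, 1]   # P(event within horizon | state)
```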

From the point of view of climate extremes, our main conclusion is that one should generically distinguish between states with either intrinsic predictability or intrinsic unpredictability. This predictability concept is markedly different from the deterministic unpredictability arising from chaotic dynamics and exponential sensitivity to initial conditions.

How to cite: Lucente, D., Bouchet, F., and Herbert, C.: Machine Learning of committor functions for predicting high impact climate events, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-21754, https://doi.org/10.5194/egusphere-egu2020-21754, 2020.

D2329 |
EGU2020-20207
Gregory Duane and Mao-Lin Shen

In a supermodel, different models of the same objective process exchange information at run time, effecting a form of inter-model data assimilation, with learned connections, that brings the models into partial synchrony and resolves differences. It has been shown [Chaos Focus Issue, Dec. ‘17] that supermodels can avoid errors of the separate models, even when all the models err qualitatively in the same way. They can thus surpass results obtained from any ex post facto averaging of model outputs.

Since climate models differ largely in their schemes for parametrization of sub-grid-scale processes, one would expect supermodeling to be most useful when the small-scale processes have the largest effect on the dynamics of the entire model. According to the self-organized criticality conjecture of Bak [‘87], inter-scale interactions are greatest near critical points of the system, characterized by a power-law form in the amplitude spectrum, and real-world systems naturally tend toward such critical points. Supermodels are therefore expected to be particularly useful near such states.

We validate this hypothesis first in a toy supermodel consisting of two quasigeostrophic channel models of the blocked/zonal flow vacillation, each model forced by relaxation to a jet flow pattern, but with different forcing strengths.  One model, with low forcing, remains in a state of low-amplitude turbulence with no blocking. The other model, with high forcing, remains in the state defined by the forcing jet, again with no blocking.  Yet a model with realistic forcing, and the supermodel formed from the two extreme models by training the connections, exhibit blocking with the desired vacillation. The amplitude or energy spectrum of the supermodel exhibits the power-law dependence on wavenumber, characteristic of critical states, over a larger range of scales than does either of the individual models.
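
As a drastically simplified illustration of learning supermodel weights, the sketch below combines the tendencies of two imperfect Lorenz-63 models as a weighted sum and fits the weights by least squares against a 'truth' trajectory. This is only a toy analogue under stated assumptions; it is not the quasigeostrophic configuration or the synchronization-based training described above.

```python
# Toy sketch of training supermodel weights on Lorenz-63 tendencies
# (parameters and the least-squares fit are illustrative assumptions).
import numpy as np

def lorenz(state, sigma, rho, beta):
    x, y, z = state
    return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

def integrate(params, x0, dt=0.005, n=4000):
    traj = [x0]
    for _ in range(n):
        traj.append(traj[-1] + dt * lorenz(traj[-1], *params))
    return np.array(traj)

truth  = (10.0, 28.0, 8.0 / 3.0)          # the "real" system
model1 = (10.0, 24.0, 8.0 / 3.0)          # two imperfect constituent models
model2 = (10.0, 32.0, 8.0 / 3.0)

traj = integrate(truth, np.array([1.0, 1.0, 1.0]))
f_true = np.array([lorenz(s, *truth)  for s in traj])
f1     = np.array([lorenz(s, *model1) for s in traj])
f2     = np.array([lorenz(s, *model2) for s in traj])

# Fit weights so that w1*f1 + w2*f2 matches the truth tendencies along the trajectory.
A = np.stack([f1.ravel(), f2.ravel()], axis=1)
w, *_ = np.linalg.lstsq(A, f_true.ravel(), rcond=None)
print("learned supermodel weights:", w)   # close to (0.5, 0.5) for this symmetric toy case
```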

Then we turn to the more realistic case of a supermodel formed by coupling different ECHAM atmospheres to a common MPI ocean model. The atmosphere models differ only in their schemes for parametrizing small-scale convection.  The weights on the energy and momentum fluxes from the two atmospheres, as they affect the ocean, are trained to form a supermodel.  The separate models both exhibit the error of a double inter-tropical convergence zone (ITCZ), i.e. an extended cold tongue. But the trained supermodel (with positive weights) has the single ITCZ found in reality. The double ITCZ error in one model arises from a weak Bjerknes ocean-atmosphere feedback in the 2D tropical circulation. The double ITCZ in the other model arises from a more complex mechanism involving the 3D circulation pattern extending into the sub-tropics. The more correct supermodel behavior, and associated ENSO cycle, are reflected in an energy spectrum with power-law form with a dynamic range and an exponent that are more like those of reality than are the corresponding quantities for the separate models, which are similar to each other. It thus appears that supermodels, in avoiding similar errors made by different constituent models for different reasons, are particularly useful both for emulating critical behavior, and for capturing the correct properties of critical states.

How to cite: Duane, G. and Shen, M.-L.: Learned Criticality in “Supermodels” That Combine Competing Models of the Earth System With Adaptable Inter-Model Connections, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-20207, https://doi.org/10.5194/egusphere-egu2020-20207, 2020.

D2330 |
EGU2020-20845
Said Ouala, Lucas Drumetz, Bertrand Chapron, Ananda Pascual, Fabrice Collard, Lucile Gaultier, and Ronan Fablet

Within the geosciences community, data-driven techniques have enjoyed great success in the last few years, principally owing to the success of machine learning techniques in several image and signal processing domains. However, when considering the data-driven simulation of ocean and atmospheric fields, the application of these methods remains extremely challenging because the underlying dynamics usually depend on several complex hidden variables, which makes learning and simulation considerably harder.

In this work, we aim to extract Ordinary Differential Equations (ODEs) from partial observations of a system. We propose a novel neural network architecture guided by physical and mathematical considerations of the underlying dynamics. Specifically, our architecture is able to simulate the dynamics of the system from a single initial condition, even if the initial condition does not lie on the attractor spanned by the training data. We show in different case studies the effectiveness of the proposed framework, both in capturing long-term asymptotic patterns of the dynamics of the system and in addressing data assimilation issues, which relate to the short-term forecasting performance of our model.
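
One generic way to realise such an approach is to augment the observed variable with latent dimensions and to parameterise the tendency of the augmented state with a neural network trained on forecast error. The PyTorch sketch below (RK4 integration, a toy observed signal, an assumed latent dimension) only illustrates that general idea; it is not the authors' architecture.

```python
# Hedged sketch: learning an ODE with augmented latent states from partial
# observations (architecture and data are illustrative, not the authors' model).
import torch
import torch.nn as nn

obs_dim, latent_dim, dt = 1, 3, 0.1
f = nn.Sequential(nn.Linear(obs_dim + latent_dim, 64), nn.Tanh(),
                  nn.Linear(64, obs_dim + latent_dim))          # learned tendency dz/dt

def rk4_step(z):
    k1 = f(z); k2 = f(z + 0.5 * dt * k1)
    k3 = f(z + 0.5 * dt * k2); k4 = f(z + dt * k3)
    return z + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

t = torch.linspace(0, 10, 100)
obs = torch.sin(t).unsqueeze(-1)            # stand-in partially observed signal

opt = torch.optim.Adam(f.parameters(), lr=1e-3)
for epoch in range(200):
    z = torch.cat([obs[0], torch.zeros(latent_dim)])   # augmented initial state
    preds = []
    for _ in range(len(obs) - 1):
        z = rk4_step(z)
        preds.append(z[:obs_dim])
    loss = torch.mean((torch.stack(preds) - obs[1:]) ** 2)
    opt.zero_grad(); loss.backward(); opt.step()
```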

How to cite: Ouala, S., Drumetz, L., Chapron, B., Pascual, A., Collard, F., Gaultier, L., and Fablet, R.: Learning Lyapunov stable Dynamical Embeddings of Geophysical Dynamics, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-20845, https://doi.org/10.5194/egusphere-egu2020-20845, 2020.

D2331 |
EGU2020-7569
Davide Faranda, Mathieu Vrac, Pascal Yiou, Flavio Maria Emanuele Pons, Adnane Hamid, Giulia Carella, Cedric Gacial Ngoungue Langue, Soulivanh Thao, and Valerie Gautard

Recent advances in statistical learning have opened the possibility of forecasting the behavior of chaotic systems using recurrent neural networks. Here we investigate the applicability of this framework to geophysical flows, known to be intermittent and turbulent. We show that both turbulence and intermittency introduce severe limitations on the applicability of recurrent neural networks, both for short-term forecasts and for the reconstruction of the underlying attractor. We test these ideas on global sea-level pressure data for the past 40 years, taken from the NCEP reanalysis dataset, as a proxy of the atmospheric circulation dynamics. The performance of recurrent neural networks in predicting both short- and long-term behavior drops rapidly when the systems are perturbed with noise. However, we find that good predictability is partially recovered when scale separation is performed via a moving-average filter. We suggest that possible strategies to overcome these limitations should be based on separating the smooth large-scale dynamics from the intermittent/turbulent features.
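
The scale-separation step can be as simple as a moving-average filter that splits the signal into a smooth large-scale component, which is handed to the recurrent network, and an intermittent residual. A minimal sketch, assuming an illustrative window length and a stand-in signal:

```python
# Minimal sketch of scale separation via a moving-average filter before
# training a recurrent model (window length and signal are stand-ins).
import numpy as np

rng = np.random.default_rng(1)
t = np.arange(4000)
signal = np.sin(2 * np.pi * t / 365.0) + 0.5 * rng.standard_normal(len(t))  # smooth + "turbulent"

window = 30                                               # illustrative window length
kernel = np.ones(window) / window
large_scale = np.convolve(signal, kernel, mode="same")    # smooth component, fed to the RNN
residual = signal - large_scale                           # intermittent/turbulent part

# `large_scale` is comparatively predictable with a recurrent network, while
# `residual` is better treated statistically than predicted deterministically.
```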

How to cite: Faranda, D., Vrac, M., Yiou, P., Pons, F. M. E., Hamid, A., Carella, G., Ngoungue Langue, C. G., Thao, S., and Gautard, V.: Boosting performance in Machine Learning of Turbulent and Geophysical Flows via scale separation, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-7569, https://doi.org/10.5194/egusphere-egu2020-7569, 2020.

D2332 |
EGU2020-13982
Deep Learning based cloud parametrization for the Community Atmosphere Model
(withdrawn)
Gunnar Behrens, Veronika Eyring, Pierre Gentine, Mike S. Pritchard, Tom Beucler, and Stephan Rasp
D2333 |
EGU2020-14055
Robin Stoffer, Caspar van Leeuwen, Damian Podareanu, Valeriu Codreanu, Menno Veerman, and Chiel van Heerwaarden

Large-eddy simulation (LES) is an often used technique in the geosciences to simulate turbulent oceanic and atmospheric flows. In LES, the effects of the unresolved turbulence scales on the resolved scales (via the Reynolds stress tensor) have to be parameterized with subgrid models. These subgrid models usually require strong assumptions about the relationship between the resolved flow fields and the Reynolds stress tensor, which are often violated in reality and potentially hamper their accuracy.

In this study, using the finite-difference computational fluid dynamics code MicroHH (v2.0) and turbulent channel flow as a test case (friction Reynolds number Reτ = 590), we incorporated and tested a newly emerging subgrid modelling approach that does not require those assumptions. Instead, it relies on neural networks that are highly non-linear and flexible. Similar to currently used subgrid models, we designed our neural networks such that they can be applied locally in the grid domain: at each grid point the neural networks receive as input the locally resolved flow fields (u,v,w), rather than the full flow fields. As an output, the neural networks give the Reynolds stress tensor at the considered grid point. This local application integrates well with our simulation code, and is necessary to run our code in parallel on distributed-memory systems.

To allow our neural networks to learn the relationship between the specified input and output, we created a training dataset that contains ~10,000,000 samples of corresponding inputs and outputs. We derived those samples directly from high-resolution 3D direct numerical simulation (DNS) snapshots of turbulent flow fields. Since the DNS explicitly resolves all the relevant turbulence scales, by downsampling the DNS we were able to derive both the Reynolds stress tensor and the corresponding lower-resolution flow fields typical for LES. In this calculation, we took into account both the discretization and interpolation errors introduced by the finite staggered LES grid. Subsequently, using these samples we optimized the parameters of the neural networks to minimize the difference between the predicted and the ‘true’ output derived from DNS.
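
A heavily simplified sketch of such a locally applied subgrid model is shown below: a small multilayer perceptron that maps the resolved velocities on a local stencil to the six independent components of the Reynolds stress tensor at one grid point. The stencil size, network width and the synthetic stand-in data are assumptions for illustration only.

```python
# Hedged sketch of a pointwise neural-network subgrid model for LES
# (stencil size, network width and data are illustrative stand-ins).
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
n_samples = 10000                         # the actual training set holds ~10,000,000 samples
n_inputs = 3 * 3 * 3 * 3                  # u, v, w on an assumed local 3x3x3 stencil
X = rng.standard_normal((n_samples, n_inputs))                     # resolved velocities (stand-in)
y = np.tanh(X[:, :6]) + 0.1 * rng.standard_normal((n_samples, 6))  # 6 Reynolds stress components

subgrid_net = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=300)
subgrid_net.fit(X[:9000], y[:9000])
print("a priori (offline) R^2 on withheld data:", subgrid_net.score(X[9000:], y[9000:]))
```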

After that, we tested the performance of our neural networks in two different ways:

  1. A priori or offline testing, where we used a withheld part of the training dataset (10%) to test the capability of the neural networks to correctly predict the Reynolds stress tensor for data not used to optimize their parameters. We found that the neural networks were, in general, well able to predict the correct values.
  2. A posteriori or online testing, where we incorporated our neural networks directly into our LES. To keep the total computational effort feasible, we strongly enhanced the prediction speed of the neural networks by relying on highly optimized matrix-vector libraries. The full integration of the neural networks within the LES nevertheless remains challenging, mainly because the neural networks tend to introduce numerical instability into the LES. We are currently investigating ways to minimize this instability, while maintaining the high accuracy of the a priori test and the high prediction speed.

How to cite: Stoffer, R., van Leeuwen, C., Podareanu, D., Codreanu, V., Veerman, M., and van Heerwaarden, C.: Large-eddy simulation subgrid modelling using neural networks, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-14055, https://doi.org/10.5194/egusphere-egu2020-14055, 2020.

D2334 |
EGU2020-9820
| Highlight
William Collins, Travis O'Brien, Mr Prabhat, and Karthik Kashinath

Machine learning (ML) has proven to be a very powerful body of techniques for identifying rare but highly impactful weather events in huge volumes of climate model output and satellite data. When these events, and the changes in them, are studied in the context of global warming, these phenomena are known as climate extremes. This talk concerns the challenges in applying ML to identify climate extremes, which often center on how to provide suitable training data to these algorithms. The challenges are:
  1. In many cases, the official definitions of the weather events in the current climate are ad hoc and/or subjective, leading to considerable variance in the statistics of these events even in the literature concerning the historical record; 
  2. Operational methods for identifying these events are also typically quite ad hoc with very limited quantification of their structural and parametric uncertainties; and
  3. Both the generative mechanisms and the physical properties of these events are predicted to evolve due to well-understood physics, and hence the training data set should, but typically does not, reflect these secular trends in the formation and statistical properties of climate extremes.  
We describe several approaches to addressing these issues, including:
  1. The recent creation of the first labeled data set specifically designed for algorithm training on atmospheric extremes, known as ClimateNet;
  2. Probabilistic ML algorithms that identify events based on the level of agreement across an ensemble of operational methods;
  3. Bayesian methods that identify events based on the level of agreement across an ensemble of human expert-generated labels; and 
  4. The prospects for physics-based detection using fundamental properties of the fluid dynamics (i.e., conserved variables and Lyapunov exponents) and/or information-theoretic concepts.

How to cite: Collins, W., O'Brien, T., Prabhat, M., and Kashinath, K.: Machine learning for detection of climate extremes: New approaches to uncertainty quantification, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-9820, https://doi.org/10.5194/egusphere-egu2020-9820, 2020.

D2335 |
EGU2020-10883
Benedikt Knüsel, Christoph Baumberger, Marius Zumwald, David N. Bresch, and Reto Knutti

Due to ever larger volumes of environmental data, environmental scientists can increasingly use machine learning to construct data-driven models of phenomena. Data-driven environmental models can provide useful information to society, but this requires that their uncertainties be understood. However, new conceptual tools are needed for this because existing approaches to assess the uncertainty of environmental models do so in terms of specific locations, such as model structure and parameter values. These locations are not informative for an assessment of the predictive uncertainty of data-driven models. Rather than the model structure or model parameters, we argue that it is the behavior of a data-driven model that should be subject to an assessment of uncertainty.

In this paper, we present a novel framework that can be used to assess the uncertainty of data-driven environmental models. The framework uses argument analysis and focuses on epistemic uncertainty, i.e., uncertainty that is related to a lack of knowledge. It proceeds in three steps. The first step consists in reconstructing the justification of the assumption that the model used is fit for the predictive task at hand. Arguments for this justification may, for example, refer to sensitivity analyses and model performance on a validation dataset. In a second step, this justification is evaluated to identify how conclusively the fitness-for-purpose assumption is justified. In a third step, the epistemic uncertainty is assessed based on the evaluation of the arguments. Epistemic uncertainty emerges due to insufficient justification of the fitness-for-purpose assumption, i.e., if the model is less-than-maximally fit-for-purpose. This lack of justification translates to predictive uncertainty, or first-order uncertainty. Uncertainty also emerges if it is unclear how well the fitness-for-purpose assumption is justified. We refer to this uncertainty as “second-order uncertainty”. In other words, second-order uncertainty is uncertainty that researchers face when assessing first-order uncertainty.

We illustrate how the framework is applied by discussing a case study from environmental science in which data-driven models are used to make long-term projections of soil selenium concentrations. We highlight that in many applications, the lack of system understanding and the lack of transparency of machine learning can introduce a substantial level of second-order uncertainty. We close by sketching how the framework can inform uncertainty quantification.

How to cite: Knüsel, B., Baumberger, C., Zumwald, M., Bresch, D. N., and Knutti, R.: Assessment of Predictive Uncertainty of Data-Driven Environmental Models, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-10883, https://doi.org/10.5194/egusphere-egu2020-10883, 2020.

D2336 |
EGU2020-9038
Fernando Jaume-Santero, David Barriopedro, Ricardo García-Herrera, Sancho Salcedo-Sanz, and Natalia Calvo

Decades of scientific fieldwork have provided extensive sets of paleoclimate records to reconstruct the climate of the past at local, regional, and global scales. Within this context, the paleoclimate community is continuously undertaking new measuring campaigns to obtain long and reliable proxies. However, as most paleoclimate archives are restricted to land regions of the Northern Hemisphere, increasing the number of proxy records to improve the skill of climate field reconstructions might not always be the best strategy.

 

By generating pseudo-proxies from several model ensembles at the locations matching the records of the PAGES-2k network, we show how biologically-inspired artificial intelligence can be coupled with reconstruction methods to find the set of representative locations that minimizes the bias in global temperature field reconstructions induced by the non-homogeneous distribution of proxy records.
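
The biologically-inspired search mentioned here is typically an evolutionary algorithm over candidate proxy networks. Below is a heavily simplified genetic-algorithm sketch that selects a fixed-size subset of proxy locations so as to maximise a reconstruction-skill score; the skill function, population size and mutation scheme are placeholders, not the authors' setup.

```python
# Simplified genetic-algorithm sketch for selecting proxy locations
# (skill function and GA settings are illustrative placeholders).
import numpy as np

rng = np.random.default_rng(0)
n_locations, n_select, pop_size, n_generations = 200, 30, 40, 100

def reconstruction_skill(subset):
    # Placeholder: the real application runs a pseudo-proxy temperature-field
    # reconstruction with this proxy network and returns its skill.
    return -np.var(np.diff(np.sort(subset)))            # here: reward evenly spread locations

population = [rng.choice(n_locations, n_select, replace=False) for _ in range(pop_size)]
for _ in range(n_generations):
    scores = np.array([reconstruction_skill(ind) for ind in population])
    parents = [population[i] for i in np.argsort(scores)[-pop_size // 2:]]  # keep fittest half
    children = []
    for p in parents:
        child = set(int(v) for v in p)
        child.discard(int(rng.choice(sorted(child))))    # mutation: drop one location...
        while len(child) < n_select:                     # ...and add new random ones
            child.add(int(rng.integers(n_locations)))
        children.append(np.array(sorted(child)))
    population = parents + children

best = max(population, key=reconstruction_skill)         # best proxy network found
```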

 

Our results indicate that small sets of perfect pseudo-proxies situated over key locations of the PAGES-2k network can outperform the reconstruction skill obtained with all available records. They highlight the importance of high latitudes and major teleconnection areas to reconstruct temperature fields at annual timescales. However, long-term temperature variations are better reconstructed by records situated at lower latitudes. According to our experiments, a careful selection of proxy locations should be performed depending on the targeted time scale of the reconstructed field.

How to cite: Jaume-Santero, F., Barriopedro, D., García-Herrera, R., Salcedo-Sanz, S., and Calvo, N.: How many proxies are necessary to reconstruct the temperature of the last millennium?, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-9038, https://doi.org/10.5194/egusphere-egu2020-9038, 2020.

D2337 |
EGU2020-21555
Christopher Kadow, David Hall, and Uwe Ulbrich

Nowadays, climate change research relies on climate information of the past. Historical records of temperature observations form global gridded datasets like HadCRUT4, which is investigated e.g. in the IPCC reports. However, datasets combining such records are sparse in the past, and even today they contain missing values. Here we show that machine learning technology can be applied to fill in these missing climate values in observational datasets. We found that the technology of image inpainting using partial convolutions in a CUDA-accelerated deep neural network can be trained on large Earth system model experiments from the NOAA reanalysis (20CR) and the Coupled Model Intercomparison Project phase 5 (CMIP5). The derived deep neural networks are capable of independently filling in artificially added missing values in these experiments. The analysis shows a very high reconstruction skill, even when a network trained on one dataset is used to reconstruct the other (cross-reconstruction). The network reconstruction evaluates better than other methods typically used in climate science. Finally, we show the newly reconstructed HadCRUT4 observational dataset and discuss further investigations.
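
The key operation in this kind of inpainting is the partial convolution: the convolution is evaluated only over valid (observed) pixels, the result is re-normalised by the fraction of valid pixels in each window, and the validity mask itself is updated. A minimal single-layer sketch with NumPy/SciPy follows; the kernel weights and the gridded field are stand-ins, not the trained network.

```python
# Minimal sketch of one partial-convolution step for inpainting a gridded
# temperature field with missing values (weights and field are stand-ins).
import numpy as np
from scipy.signal import convolve2d

rng = np.random.default_rng(0)
field = rng.standard_normal((36, 72))                  # stand-in gridded anomalies
mask = (rng.random((36, 72)) > 0.4).astype(float)      # 1 = observed, 0 = missing

kernel = rng.standard_normal((3, 3))                   # stand-in for learned weights
ones = np.ones_like(kernel)

num = convolve2d(field * mask, kernel, mode="same", boundary="symm")
valid = convolve2d(mask, ones, mode="same", boundary="symm")     # valid pixels per window

out = np.where(valid > 0, num * ones.sum() / np.maximum(valid, 1e-6), 0.0)
new_mask = (valid > 0).astype(float)                   # the mask fills in as layers stack
```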

How to cite: Kadow, C., Hall, D., and Ulbrich, U.: Image Inpainting for Missing Values in Observational Climate Datasets Using Partial Convolutions in a cuDNN, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-21555, https://doi.org/10.5194/egusphere-egu2020-21555, 2020.

D2338 |
EGU2020-5532
| Highlight
Timothy Hewson

This presentation will provide a vision, based around current initiatives, of how post-processing and machine learning could work in tandem to downscale the ensemble output of current-generation global models, to deliver probabilistic analyses and forecasts of multiple surface weather parameters, at point scale, worldwide. Skill gains would be achieved by adjusting for gridscale and sub-grid biases. One particularly attractive feature of the vision is that observational data are not required for a site that we forecast for, although the more ‘big data’ we use, worldwide, the better the forecasts will be overall.

The vision is based on four building blocks - or steps - for each parameter. The first step is a simple proof-of-concept, the second is supervised training, the third is hindcast activation and verification, and the fourth is real-time operational implementation. Here we provide 3 examples, for 3 fundamental surface weather parameters - rainfall, 2m temperature and 100m wind - although the concepts also apply to other parameters. We stress that different approaches are needed for different parameters, primarily because what determines model bias depends on the parameter. For some, biases depend primarily on local weather type; for others, they depend mainly on local topography.

For rainfall downscaling, work at ECMWF has already passed stage 4, with real-time worldwide probabilistic point rainfall forecasts up to day 10 introduced operationally in April 2019, using a decision-tree-based software suite called “ecPoint”, that uses non-local gridbox weather-type analogues. Further work to improve algorithms is underway within the EU-funded MISTRAL project. For 2m temperature we have reached stage 2, and ecPoint-based downscaling will be used to progress this within the EU-funded HIGHLANDER project. The task of 100m wind downscaling requires a different approach, because local topographic forcing is very strong, and this is being addressed under the umbrella of the German Waves-to-Weather programme, using U-net-type convolutional neural networks for which short-period high-resolution simulations provide the training data. This work has also reached stage 2.

For each parameter discussed we see the potential for substantial gains, for point locations, in forecast accuracy and reliability, relative to the raw output of an operational global model. As such we envisage a bright future where probabilistic forecasts for individual sites (and re-analyses) are much better than hitherto, and where the degree of improvement also greatly exceeds what we can reasonably expect in the next two decades or so from advances in global NWP.

This presentation will give a brief overview of downscaling for the 3 parameters, highlight why we believe heavily supervised approaches offer the greatest potential, illustrate also how they provide invaluable feedback for model developers, illustrate areas where more work is needed (such as cross-parameter consistency), and show what form output could take (e.g. point-relevant EPSgrams, as an adaptation of ECMWF’s most popular product).

Contributors to the above initiatives include: Fatima Pillosu (ECMWF, ecPoint); Estibaliz Gascon and Andrea Montani (ECMWF, MISTRAL); Michael Kern and Kevin Höhlein (Technische Universität München, Waves-to-Weather).

How to cite: Hewson, T.: A Vision for providing Global Weather Forecasts at Point-scale, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-5532, https://doi.org/10.5194/egusphere-egu2020-5532, 2020.

D2339 |
EGU2020-3485
Yang Liu, Laurens Bogaardt, Jisk Attema, and Wilco Hazeleger

Operational Arctic sea ice forecasts are of crucial importance to commercial and scientific activities in the Arctic region. Currently, numerical climate models, including General Circulation Models (GCMs) and regional climate models, are widely used to generate Arctic sea ice predictions at weather time scales. However, these numerical climate models require near real-time input of weather conditions to assure the quality of the predictions; such input is hard to obtain, and the simulations are computationally expensive. In this study, we propose a deep learning approach to forecast sea ice in the Barents Sea at weather time scales. Convolutional Long Short-Term Memory networks (ConvLSTM) are well suited to such spatio-temporal sequence problems: they are LSTM (Long Short-Term Memory) networks with convolutional operations embedded in the LSTM cells. This approach requires no labelled data beyond the historical records themselves and can make use of enormous amounts of historical weather and climate records. With input fields from atmospheric (ERA-Interim) and oceanic (ORAS4) reanalysis datasets, we demonstrate that the ConvLSTM is able to learn the variability of Arctic sea ice within the historical records and effectively predict regional sea ice concentration patterns at weekly to monthly time scales. Based on the known sources of predictability, sensitivity tests with different climate fields were also performed, and the influence of different predictors on the quality of the predictions is evaluated. This method outperforms predictions with climatology and persistence and shows promise as a fast and cost-efficient operational sea ice forecast system in the future.
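
A minimal Keras sketch of such a ConvLSTM, mapping a short sequence of monthly predictor maps to the sea ice concentration map of the following step, is given below; the grid size, sequence length, channel count and hyperparameters are illustrative, not those used in the study.

```python
# Hedged sketch of a ConvLSTM sea-ice-concentration predictor
# (grid size, channels and hyperparameters are illustrative).
import numpy as np
import tensorflow as tf

seq_len, ny, nx, n_channels = 12, 64, 64, 4     # e.g. SIC plus atmospheric/oceanic fields
X = np.random.rand(100, seq_len, ny, nx, n_channels).astype("float32")
y = np.random.rand(100, ny, nx, 1).astype("float32")        # next-step SIC map (stand-in)

model = tf.keras.Sequential([
    tf.keras.layers.ConvLSTM2D(32, kernel_size=3, padding="same",
                               return_sequences=False,
                               input_shape=(seq_len, ny, nx, n_channels)),
    tf.keras.layers.Conv2D(1, kernel_size=1, activation="sigmoid"),  # SIC in [0, 1]
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=2, batch_size=8)
```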

How to cite: Liu, Y., Bogaardt, L., Attema, J., and Hazeleger, W.: Extended Range Arctic Sea Ice Forecast with Convolutional Long-Short Term Memory Networks, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-3485, https://doi.org/10.5194/egusphere-egu2020-3485, 2020.

D2340 |
EGU2020-17748
Bing Gong, Severin Hußmann, Amirpasha Mozaffari, Jan Vogelsang, and Martin Schultz

This study explores the adaptation of state-of-the-art deep learning architectures for video frame prediction to weather and climate applications. A proof-of-concept case study was performed to predict surface temperature fields over Europe up to 20 hours ahead, based on ERA5 reanalysis data. Initial results have been achieved with a PredNet and a GAN-based architecture, using various combinations of temperature, surface pressure, and 500 hPa geopotential as inputs. The results show that the GAN-based architecture outperforms the PredNet. To facilitate the massive data processing and the testing of various deep learning architectures, we have developed a containerized parallel workflow for the full life-cycle of the application, which consists of data extraction, data pre-processing, training, post-processing and visualisation of results. The training of the PredNet was parallelized on the JUWELS supercomputer at JSC, and its scaling performance was evaluated.

How to cite: Gong, B., Hußmann, S., Mozaffari, A., Vogelsang, J., and Schultz, M.: Deep learning for short-term temperature forecasts with video prediction methods, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-17748, https://doi.org/10.5194/egusphere-egu2020-17748, 2020.

D2341 |
EGU2020-1635
Andrey Vlasenko, Volker Mattias, and Ulrich Callies

Chemical substances of anthropogenic and natural origin released into the atmosphere affect air quality and, as a consequence, the health of the population. As a result, there is a demand for reliable air quality simulations and future scenarios investigating the effects of emission reduction measures. Due to high computational costs, the prediction of concentrations of chemical substances with discretized atmospheric chemistry transport models (CTMs) remains a great challenge. An alternative to these cumbersome numerical estimates is a computationally efficient neural network (NN). The design of a NN is much simpler than that of a CTM, and it allows approximating any bounded continuous function (e.g., a concentration time series) with the desired accuracy. In particular, a NN trained on a set of CTM estimates can produce estimates similar to those of the CTM, up to the approximation error. We test the ability of a NN to produce CTM concentration estimates with the example of daily mean summer NO2 and SO2 concentrations. The measures of success in these tests are the difference in the consumption of computational resources and the difference between NN and CTM concentration estimates. Relying on the fact that, after spin-up, CTM estimates are independent of the initial concentrations, we show that a recurrent NN can also spin up and predict the atmospheric chemical state without input concentration data. Moreover, we show that if the emission scenario does not change significantly from year to year, the NN can predict daily mean concentrations from meteorological data only.

How to cite: Vlasenko, A., Mattias, V., and Callies, U.: Estimation of NO2 and SO2 concentration changes in Europe from meteorological data with Neural Network, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-1635, https://doi.org/10.5194/egusphere-egu2020-1635, 2020.

D2342 |
EGU2020-5574
Menno Veerman, Robert Pincus, Caspar van Leeuwen, Damian Podareanu, Robin Stoffer, and Chiel van Heerwaarden

A fast and accurate treatment of radiation in meteorological models is essential for high-quality simulations of the atmosphere. Despite our good understanding of the processes governing the transfer of radiation, full radiative transfer solvers are computationally extremely expensive. In this study, we use machine learning to accelerate the optical properties calculations of the Rapid Radiative Transfer Model for General circulation model applications - Parallel (RRTMGP). These optical properties control the absorption, scattering and emission of radiation within each grid cell. We train multiple neural networks that take as input the pressure, temperature and concentrations of water vapour and ozone of each grid cell and together predict all 224 or 256 quadrature points of each optical property. All networks are multilayer perceptrons, and we test various network sizes to assess the trade-off between the accuracy of a neural network and its computational cost. We train two different sets of neural networks. The first set (generic) is trained for a wide range of atmospheric conditions, based on the profiles chosen by the Radiative Forcing Model Intercomparison Project (RFMIP). The second set (case-specific) is trained only for the range in temperature, pressure and moisture found in one large-eddy simulation based on a case with shallow convection over a vegetated surface. This case-specific set is used to explore the possible performance gains of case-specific tuning.

Most neural networks are able to predict the optical properties with high accuracy. Using a network with 2 hidden layers of 64 neurons, predicted optical depths in the longwave spectrum are highly accurate (R2 > 0.99). Similar accuracies are achieved for the other optical properties. Subsequently, we take a set of 100 atmospheric profiles and calculate profiles of longwave and shortwave radiative fluxes based on the optical properties predicted by the neural networks. Compared to fluxes based on the optical properties computed by RRTMGP, the downwelling longwave fluxes have errors within 0.5 W m-2 (<1%) and an average error of -0.011 W m-2 at the surface. The downwelling shortwave fluxes have an average error of -0.0013 W m-2 at the surface. Using Intel’s Math Kernel Library (MKL) BLAS routines to accelerate the matrix multiplications, our implementation of the neural networks in RRTMGP is about 4 times faster than the original optical properties calculations. It can thus be concluded that neural networks are able to emulate the calculation of optical properties with high accuracy and computational speed.
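
The network design described above (four scalar inputs per grid cell, two hidden layers of 64 neurons, one output per quadrature point) can be sketched as below; the input scaling, activation choice and stand-in data are assumptions for illustration.

```python
# Hedged sketch of an MLP emulating RRTMGP optical properties
# (2 x 64 hidden neurons as in the abstract; everything else is illustrative).
import numpy as np
import tensorflow as tf

n_gpt = 256                                          # quadrature (g-) points per property
X = np.random.rand(50000, 4).astype("float32")       # p, T, q_H2O, q_O3 per grid cell (stand-in)
y = np.random.rand(50000, n_gpt).astype("float32")   # e.g. optical depth per g-point (stand-in)

model = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation="relu", input_shape=(4,)),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(n_gpt),                    # one output per quadrature point
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=2, batch_size=256, validation_split=0.1)
```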

How to cite: Veerman, M., Pincus, R., van Leeuwen, C., Podareanu, D., Stoffer, R., and van Heerwaarden, C.: Predicting atmospheric optical properties for radiative transfer computations using neural networks, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-5574, https://doi.org/10.5194/egusphere-egu2020-5574, 2020.

D2343 |
EGU2020-13215
Yinxue Liu, Paul Bates, Jeffery Neal, and Dai Yamazaki

Precise representation of global terrain is of great significance for estimating global flood risk. As the areas most vulnerable to flooding, urban areas need GDEMs of high quality. However, current Global Digital Elevation Models (GDEMs) are all Digital Surface Models (DSMs) in urban areas, which causes substantial blockages of flow pathways within flood inundation models. Taking GPS and LIDAR data as terrain observations, the errors of popular GDEMs (including the SRTM 1” void-filled DEM - SRTM, the Multi-Error-Removed Improved-Terrain DEM - MERIT, and the TanDEM-X 3” resolution DEM - TDM3) were analysed in seven cities of varied type. It was found that the RMSE of the GDEM errors is in the range of 2.3 m – 7.9 m, and that MERIT and TDM3 both outperform SRTM. The error comparison between MERIT and TDM3 showed that the most accurate model varies among the studied cities. Generally, the error of TDM3 is slightly lower than that of MERIT, but TDM3 has more extreme errors (absolute value exceeding 15 m). For cities which have experienced rapid development in the past decade, the RMSE of MERIT is lower than that of TDM3, which is mainly caused by the difference in acquisition time between these two models. A machine learning method was adopted to estimate the MERIT error. Night-time light, world population density data, OpenStreetMap building data, slope, elevation and neighbourhood elevation values from widely available datasets, comprising 14 factors in total, were used in the regression. Models were trained on single cities and on combinations of cities, respectively, and then used to estimate the error in a target city. With this approach, the RMSE of the corrected MERIT declines by up to 75% with a model trained on the target city, though a less significant reduction of 35%–68% was obtained with the combined model when the target city was excluded from the training data. Further validation via flood simulation showed improvements in terms of both flood extent and inundation depth for the corrected MERIT over the original MERIT, in a validation for a small-sized city. However, the corrected MERIT was not as good as TDM3 in this case. This method has the potential to generate a better bare-earth global DEM in urban areas, but the sensitivity of the model to extrapolation needs to be investigated at more study sites.

How to cite: Liu, Y., Bates, P., Neal, J., and Yamazaki, D.: Bare-earth DEM Generation in Urban Areas Based on a Machine Learning Method, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-13215, https://doi.org/10.5194/egusphere-egu2020-13215, 2020.

D2344 |
EGU2020-2135
Wenjin Wu

To generate FluxNet-consistent annual forest GPP and NEE, we have developed a deep neural network that can retrieve estimates globally. Seven parameters covering different aspects of forest ecological and climatic features were selected as input: the Normalized Difference Vegetation Index (NDVI), the Enhanced Vegetation Index (EVI), Evapotranspiration (ET), Land Surface Temperature during Daytime (LSTD), Land Surface Temperature at Night (LSTN), precipitation, and forest type. All these datasets can be acquired from the Google Earth Engine platform to ensure rapid large-scale analysis. The model has three favorable traits: (1) Based on a multidimensional convolutional block, the model arranges all temporal variables into a two-dimensional feature map to account for phenology and inter-parameter relationships. The model can thus obtain its estimates from encoded meaningful patterns instead of raw input variables. (2) In contrast to filling data gaps with historical values or smoothing methods, the new model is developed and trained to catch signals with certain levels of occlusion; therefore, it can tolerate a relatively large portion of missing data. (3) The model is data-driven and interpretable. Therefore, it can potentially discover unknown mechanisms of forest carbon absorption by showing us how these mechanisms work to make correct estimates. The model was compared to three traditional machine learning models and showed superior performance. With this new model, global forest GPP and NEE in 2003 and 2018 were obtained, and variations of the carbon flux during the 16 years in between were analyzed.

How to cite: Wu, W.: GPP and NEE estimation for global forests based on a deep convolutional neural network, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-2135, https://doi.org/10.5194/egusphere-egu2020-2135, 2020.

Chat time: Wednesday, 6 May 2020, 16:15–18:00

Chairperson: Redouane Lguensat
D2345 |
EGU2020-5440
Manuel Schlund, Veronika Eyring, Gustau Camps-Valls, Pierre Friedlingstein, Pierre Gentine, and Markus Reichstein

By absorbing about one quarter of the total anthropogenic CO2 emissions, the terrestrial biosphere is an important carbon sink in Earth’s carbon cycle. A key metric of this process is the terrestrial gross primary production (GPP), which describes the biogeochemical production of energy by photosynthesis. Elevated atmospheric CO2 concentrations will increase GPP in the future (the CO2 fertilization effect). However, projections from different Earth system models participating in the Coupled Model Intercomparison Project Phase 5 (CMIP5) show a large spread in carbon-cycle-related quantities. In this study, we present a new supervised machine learning approach to constrain multi-model climate projections using observation-driven data. Our method, based on Gradient Boosted Regression Trees, handles multiple predictor variables of the present-day climate and accounts for non-linear dependencies. Applied to GPP in the representative concentration pathway RCP 8.5 at the end of the 21st century (2081–2100), the new approach reduces the “likely” range (as defined by the Intergovernmental Panel on Climate Change) of the CMIP5 multi-model projection of GPP to 161–203 GtC yr-1. Compared to the unweighted multi-model mean (148–224 GtC yr-1), this is an uncertainty reduction of 45%. Our new method is not limited to projections of the future carbon cycle, but can be applied to any target variable for which suitable gridded data are available.
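
The core of such an approach is a regression from present-day, observable model diagnostics to the projected quantity, evaluated in cross-validation across the model ensemble and finally applied to observation-driven predictors. A schematic sketch with Gradient Boosted Regression Trees follows; the model count, predictors and numbers are stand-ins, not the study's data.

```python
# Schematic sketch of constraining a multi-model projection with
# Gradient Boosted Regression Trees (all data here are stand-ins).
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import LeaveOneOut

rng = np.random.default_rng(0)
n_models, n_predictors = 30, 5
X = rng.standard_normal((n_models, n_predictors))      # present-day diagnostics per ESM
y = 180 + 10 * X[:, 0] + rng.normal(0, 5, n_models)    # projected GPP (GtC/yr) per ESM
x_obs = rng.standard_normal((1, n_predictors))         # observation-driven predictors

# Leave-one-model-out errors give an estimate of the constrained uncertainty.
errors = []
for train, test in LeaveOneOut().split(X):
    gbrt = GradientBoostingRegressor().fit(X[train], y[train])
    errors.append(y[test][0] - gbrt.predict(X[test])[0])

gbrt = GradientBoostingRegressor().fit(X, y)
constrained = gbrt.predict(x_obs)[0]
print(f"constrained projection: {constrained:.0f} +/- {np.std(errors):.0f} GtC/yr")
```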

How to cite: Schlund, M., Eyring, V., Camps-Valls, G., Friedlingstein, P., Gentine, P., and Reichstein, M.: Constraining uncertainty in projected gross primary production with machine learning, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-5440, https://doi.org/10.5194/egusphere-egu2020-5440, 2020.

D2346 |
EGU2020-2222
| Highlight
Giancarlo Restreppo, Warren Wood, and Benjamin Phrampus

Observed vertical sediment accumulation rates (SARs; n = 1166) were gathered from ~55 years of peer-reviewed literature. Original methods of rate calculation include long-term isotope geochronology (14C, 210Pb, and 137Cs), pollen analysis, horizon markers, and box coring. These observations are used to create a database of contemporary vertical SARs. Rates were converted to cm yr-1, paired with each observation’s longitude and latitude, and placed into a machine-learning-based Geospatial Predictive Seafloor Model (GPSM). GPSM finds correlations between the data and established global “predictors” (quantities known or estimable everywhere, e.g. distance from the coastline, river mouths, etc.). The result, using a k-nearest neighbor (k-NN) algorithm, is a 5-arc-minute global map of predicted vertical SARs. The map provides a global reference for vertical sedimentation from coastal to abyssal depths. Areas of highest sedimentation, ~3-8 cm yr-1, are generally river-mouth-proximal coastal zones and continental shelves on passive tectonic margins (e.g. the Gulf of Mexico, the eastern United States, eastern continental Asia, the Pacific Islands north of Australia), with rates falling exponentially towards the deepest parts of the oceans. Coastal zones on active tectonic margins display vertical sedimentation of ~1 cm yr-1, which is limited to near shore when compared to passive margins. Abyssal-depth rates are functionally zero at the time scale examined (~10-4 cm yr-1), and increase by one order of magnitude near the Mid-Atlantic Ridge and at the junction of the Pacific, Nazca, and Cocos tectonic plates. Predicted sedimentation patterns are then compared to established quantities of fluvial sediment discharge to the oceans, calculated by Milliman and Farnsworth in River Discharge to the Coastal Ocean: A Global Synthesis (2011).
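
A minimal sketch of the k-NN prediction step: fit on the observed SARs with their predictor values, then predict on every cell of a global predictor grid. The predictor names, the value of k and the data are placeholders, not the GPSM configuration.

```python
# Hedged sketch of k-NN geospatial prediction of sediment accumulation rates
# (predictors, k and data are placeholders, not the actual GPSM setup).
import numpy as np
from sklearn.neighbors import KNeighborsRegressor

rng = np.random.default_rng(0)
n_obs = 1166
# Predictors at observation sites, e.g. distance to coast, distance to river mouth, depth.
X_obs = rng.random((n_obs, 3))
sar = 10 ** rng.uniform(-4, 1, n_obs)             # cm/yr, spanning abyssal to coastal rates

knn = KNeighborsRegressor(n_neighbors=10, weights="distance")
knn.fit(X_obs, np.log10(sar))                     # regress in log space given the huge range

X_grid = rng.random((90000, 3))                   # stand-in global 5-arc-minute predictor grid
sar_map = 10 ** knn.predict(X_grid)               # predicted SAR (cm/yr) for each grid cell
```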

How to cite: Restreppo, G., Wood, W., and Phrampus, B.: Coastal to Abyssal Vertical Sediment Accumulation Rates Predicted via Machine-Learning: Towards Sediment Characterization on a Global Scale, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-2222, https://doi.org/10.5194/egusphere-egu2020-2222, 2020.

D2347 |
EGU2020-12634
Sooyoul Kim, Keishiro Chiyonobu, Hajime Mase, and Masahide Takeda

The present study addresses how nearshore wave heights and periods one week ahead can be predicted using a machine learning technique and global wave forecast data. For the machine learning technique, the Group Method of Data Handling (GMDH) is used. The GMDH uses computer-based mathematical modeling of multi-parametric regression characterized by fully automatic structural and parametric optimization, first introduced by Ivakhnenko (1971). The GMDH algorithm can be described as a self-selecting procedure that derives a multi-order polynomial to predict an accurate output. Since this procedure is similar to a feed-forward transformation, the algorithm is also called a Polynomial Neural Network (Onwubolu, 2016).
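
In a GMDH layer, every pair of inputs is fitted with a small quadratic polynomial by least squares, and only the best-performing polynomial units on held-out data are passed to the next layer. A bare-bones sketch of one such layer is given below; the data and the number of retained units are illustrative.

```python
# Bare-bones sketch of one GMDH (polynomial neural network) layer:
# quadratic units for each input pair, ranked by validation error.
import itertools
import numpy as np

rng = np.random.default_rng(0)
X = rng.random((500, 6))                                  # e.g. global wave forecast predictors
y = X[:, 0] * X[:, 1] + 0.1 * rng.standard_normal(500)    # stand-in nearshore wave height
X_tr, X_va, y_tr, y_va = X[:400], X[400:], y[:400], y[400:]

def quad_features(a, b):
    return np.column_stack([np.ones_like(a), a, b, a * b, a**2, b**2])

units = []
for i, j in itertools.combinations(range(X.shape[1]), 2):
    coef, *_ = np.linalg.lstsq(quad_features(X_tr[:, i], X_tr[:, j]), y_tr, rcond=None)
    err = np.mean((quad_features(X_va[:, i], X_va[:, j]) @ coef - y_va) ** 2)
    units.append((err, i, j, coef))

best = sorted(units, key=lambda u: u[0])[:3]              # keep the 3 best units
next_layer_inputs = np.column_stack(
    [quad_features(X[:, i], X[:, j]) @ coef for _, i, j, coef in best])
```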

For the global wave forecast data, we use the datasets released by the Japan Meteorological Agency (JMA), the National Oceanic and Atmospheric Administration (NOAA), and the European Centre for Medium-Range Weather Forecasts (ECMWF). The global wave forecasts are generally available every 6 hours, with forecasts out to 180 hours into the future. However, since the timely available forecasts are produced on synoptic-scale computational domains, a consistent level of predictive accuracy at specific locations along the Japanese coast cannot be expected from the viewpoint of spatial resolution.

The present study aims to aid harbor and marine construction by establishing a nearshore wave prediction model for 14 stations around Japan that forecasts up to one week into the future.

When the GMDH-based wave model uses global wave data from NOAA and ECMWF as input, the estimated significant wave heights agree well with observations. On the other hand, a combination of JMA and ECMWF wave data gives a good performance for significant wave periods. Since the present method transforms global wave prediction data into local nearshore waves by GMDH, it can be applied at any location of interest where nearshore wave observations are available for the training of the GMDH.

How to cite: Kim, S., Chiyonobu, K., Mase, H., and Takeda, M.: Real-time Japanese nearshore wave prediction for one-week later using GMDH and global wave forecast data, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-12634, https://doi.org/10.5194/egusphere-egu2020-12634, 2020.

D2348 |
EGU2020-19772
Christoph Jörges, Cordula Berkenbrink, and Britta Stumpe

Sea level rise, a possible increase in the frequency and intensity of storms, and other effects of global warming exert pressure on the coastal regions of the North Sea. Storm surges also threaten the livelihoods of many people in the affected areas. For building coastal protection or offshore structures, detailed knowledge of wave data, especially the wave height, is of particular interest. Therefore, the nearshore wave climate at the island of Norderney has been measured by buoys since the early 1990s. These buoys can be damaged by crossing ships or weather impacts, which leads to a huge amount of missing data in the wave time series that form the basis for numerical modelling, statistical analysis and the development of coastal protection.
Artificial neural networks are nowadays a common method for reconstructing and forecasting wave heights. This study presents a new technique to reconstruct and forecast the significant wave height measured by buoys in the nearshore area of the Norderney coastline. Buoy data for the period 2004 to 2017 from the NLWKN – Coastal Research Station at Norderney were used to train three different statistical and machine learning models, namely linear regression, a feed-forward neural network and a long short-term memory (LSTM) network. An energy density spectrum was tested against calculated sea state parameters as input. The LSTM – a recurrent neural network – is the proposed algorithm to reconstruct wave height data. It is especially designed for sequential data, but was applied to wave spectral data in this study for the first time. Depending on the input parameters of the respective model, the LSTM can reconstruct and forecast time series of arbitrary length.
Using information about wind speed and direction and water depth, as well as the wave height of two neighboring buoy stations, the LSTM reconstructs the wave height with a correlation coefficient of 0.98 between measured and reconstructed data.
Unfortunately, extreme events are strongly underestimated in both forecasting and reconstruction, even though these events are of great interest for climate and ocean science. Work is currently under way to reduce this error specifically. Compared to numerical modeling, the machine learning approach requires less computational effort. The results of this study can be used to complete spatial and temporal wave height datasets, providing a better basis for trend analyses in relation to climate change and for validating numerical models used for decision making in coastal protection and management.
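
A minimal Keras sketch of the LSTM setup described here: sequences of wind, water depth and neighbouring-buoy wave heights as input, and the significant wave height at the target buoy as output. The feature list, sequence length and network size are illustrative assumptions.

```python
# Hedged sketch of an LSTM reconstructing significant wave height at one buoy
# from wind, water depth and neighbouring buoys (illustrative setup).
import numpy as np
import tensorflow as tf

seq_len, n_features = 24, 5     # e.g. wind speed/direction, water depth, two neighbour buoys
X = np.random.rand(2000, seq_len, n_features).astype("float32")
y = np.random.rand(2000, 1).astype("float32")        # target significant wave height (m)

model = tf.keras.Sequential([
    tf.keras.layers.LSTM(64, input_shape=(seq_len, n_features)),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=2, batch_size=32, validation_split=0.2)
```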

How to cite: Jörges, C., Berkenbrink, C., and Stumpe, B.: Wave data prediction and reconstruction by recurrent neural networks at the nearshore area of Norderney, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-19772, https://doi.org/10.5194/egusphere-egu2020-19772, 2020.

D2349 |
EGU2020-3447
Maria-del-Mar Vich and Romualdo Romero

This work explores the applicability of neural networks (NNs) for forecasting atmospherically-driven tsunamis affecting Ciutadella harbour in Menorca (Balearic Islands). These meteotsunamis can lead to wave heights of around 1 m, and several episodes in modern history have reached 2-4 m, with catastrophic consequences. A timely and skilful prediction of these phenomena could significantly help to mitigate the damage inflicted on the port facilities and moored vessels. We examine the relevant physical mechanisms that promote meteotsunamis in Ciutadella harbour and choose the input variables of the NN accordingly. Two different NNs are devised and tested: a dry and a wet scheme. The difference between the schemes resides in the input layer; while the first scheme focuses exclusively on the triggering role of atmospheric gravity waves (governed by temperature and wind profiles across the tropospheric column), the second scheme also incorporates humidity as input, with the purpose of accounting for the occasional influence of moist convection. We train both NNs using resilient backpropagation with the weight backtracking method. Their performance is tested by means of classical deterministic verification indexes. We also compare both NN results against the performance of a substantially different prognostic method that relies on a sequence of atmospheric and oceanic numerical simulations. Both NN schemes show a skill comparable to that of computationally expensive approaches based on direct numerical simulation of the physical mechanisms. The expected greater versatility of the wet scheme over the dry scheme cannot be clearly proved owing to the limited size of the training database. The results emphasize the potential of the NN approach and open a clear path to operational implementation, including probabilistic forecasting strategies.

How to cite: Vich, M.-M. and Romero, R.: Design of a neural network aimed at predicting meteotsunamis in Ciutadella harbour (Balearic Islands, Spain), EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-3447, https://doi.org/10.5194/egusphere-egu2020-3447, 2020.

D2350 |
EGU2020-15481
Tom Andersson, Fruzsina Agocs, Scott Hosking, María Pérez-Ortiz, Brooks Paige, Chris Russell, Andrew Elliott, Stephen Law, Jeremy Wilkinson, Yevgeny Askenov, David Schroeder, Will Tebbutt, Anita Faul, and Emily Shuckburgh

Over recent decades, the Arctic has warmed faster than any other region on Earth. The rapid decline in Arctic sea ice extent (SIE) is often highlighted as a key indicator of anthropogenic climate change. Changes in sea ice disrupt Arctic wildlife and indigenous communities, and influence weather patterns as far as the mid-latitudes. Furthermore, melting sea ice attenuates the albedo effect by replacing the white, reflective ice with dark, heat-absorbing melt ponds and open sea, increasing the Sun’s radiative heat input to the Arctic and amplifying global warming through a positive feedback loop. Thus, the reliable prediction of sea ice under a changing climate is of both regional and global importance. However, Arctic sea ice presents severe modelling challenges due to its complex coupled interactions with the ocean and atmosphere, leading to high levels of uncertainty in numerical sea ice forecasts.

Deep learning (a subset of machine learning) is a family of algorithms that use multiple nonlinear processing layers to extract increasingly high-level features from raw input data. Recent advances in deep learning techniques have enabled widespread success in diverse areas where significant volumes of data are available, such as image recognition, genetics, and online recommendation systems. Despite this success, and the presence of large climate datasets, applications of deep learning in climate science have been scarce until recent years. For example, few studies have posed the prediction of Arctic sea ice in a deep learning framework. We investigate the potential of a fully data-driven, neural network sea ice prediction system based on satellite observations of the Arctic. In particular, we use inputs of monthly-averaged sea ice concentration (SIC) maps since 1979 from the National Snow and Ice Data Center, as well as climatological variables (such as surface pressure and temperature) from the European Centre for Medium-Range Weather Forecasts reanalysis (ERA5) dataset. Past deep learning-based Arctic sea ice prediction systems tend to overestimate sea ice in recent years - we investigate the potential to learn the non-stationarity induced by climate change with the inclusion of multi-decade global warming indicators (such as average Arctic air temperature). We train the networks to predict SIC maps one month into the future, evaluating network prediction uncertainty by ensembling independent networks with different random weight initialisations. Our model accounts for seasonal variations in the drivers of sea ice by controlling for the month of the year being predicted. We benchmark our prediction system against persistence, linear extrapolation and autoregressive models, as well as September minimum SIE predictions from submissions to the Sea Ice Prediction Network's Sea Ice Outlook. Performance is evaluated quantitatively using the root mean square error and qualitatively by analysing maps of prediction error and uncertainty.

How to cite: Andersson, T., Agocs, F., Hosking, S., Pérez-Ortiz, M., Paige, B., Russell, C., Elliott, A., Law, S., Wilkinson, J., Askenov, Y., Schroeder, D., Tebbutt, W., Faul, A., and Shuckburgh, E.: Deep learning for monthly Arctic sea ice concentration prediction, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-15481, https://doi.org/10.5194/egusphere-egu2020-15481, 2020.

D2351 |
EGU2020-10366
Arthur Moraux, Steven Dewitte, Bruno Cornelis, and Adrian Munteanu

In the coming years, Artificial Intelligence (AI), of which Deep Learning (DL) is an essential component, is expected to transform society in a way comparable to the introduction of electricity or of the internet. These high expectations are founded on the many impressive results of recent DL studies on AI tasks (e.g. computer vision, text translation, image or text generation). A large potential for AI applications also exists for weather and climate observations.

We present the results of the recent paper [Moraux et al, 2019], one of the first demonstrations of the application of cutting-edge deep learning techniques to a practical weather observation problem. We developed a multiscale encoder-decoder convolutional neural network using, as input, the three most relevant SEVIRI/MSG spectral images at 8.7, 10.8 and 12.0 micron together with in situ rain gauge measurements. The network is trained to reproduce precipitation measured by rain gauges in Belgium, the Netherlands and Germany. Precipitating pixels are detected with a probability of detection (POD) of 0.75 and a false alarm ratio (FAR) of 0.3. The instantaneous precipitation rate is estimated with an RMSE of 1.6 mm/h.
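[Editor's sketch] For reference, a minimal illustration of how verification scores of this kind could be computed from paired gauge observations and network estimates. The rain/no-rain threshold and the restriction of the RMSE to precipitating pixels are assumptions, not details taken from the paper.

import numpy as np

def verification_scores(obs, pred, threshold=0.1):
    # obs, pred: precipitation rates in mm/h; threshold defines "precipitating".
    obs_rain, pred_rain = obs >= threshold, pred >= threshold
    hits = np.sum(obs_rain & pred_rain)
    misses = np.sum(obs_rain & ~pred_rain)
    false_alarms = np.sum(~obs_rain & pred_rain)
    pod = hits / (hits + misses)                 # probability of detection
    far = false_alarms / (hits + false_alarms)   # false alarm ratio
    # RMSE of the rate, here over precipitating pixels only (an assumption).
    rmse = np.sqrt(np.mean((pred[obs_rain] - obs[obs_rain]) ** 2))
    return pod, far, rmse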

 

Reference:

[Moraux et al, 2019] Moraux, A.; Dewitte, S.; Cornelis, B.; Munteanu, A. Deep Learning for Precipitation Estimation from Satellite and Rain Gauges Measurements. Remote Sens. 2019, 11, 2463.

How to cite: Moraux, A., Dewitte, S., Cornelis, B., and Munteanu, A.: Deep Learning for Precipitation Estimation from Satellite and Rain Gauges Measurements, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-10366, https://doi.org/10.5194/egusphere-egu2020-10366, 2020.

D2352 |
EGU2020-4824
Lauri Tuppi, Pirkka Ollinaho, Madeleine Ekblom, Vladimir Shemyakin, and Heikki Järvinen

Algorithmic model tuning is a promising approach to yield the best possible performance of multiscale, multi-phase atmospheric models once the model structure is fixed. We ask to what degree one can trust the algorithmic tuning process, and approach the problem by studying the convergence of this process in a semi-realistic case. Let M(x0; θd) denote the default model, where x0 and θd are the initial state and default model parameter vectors, respectively. A necessary condition for an algorithmic tuning process to converge in a fully realistic case is that the default model is recovered when the tuning process is initialised with perturbed model parameters θ and the default model forecasts are used as pseudo-observations. Here we study the circumstances under which this condition holds by carrying out a large set of convergence tests using two different tuning methods and the OpenIFS model. These tests are interpreted as guidelines for algorithmic model tuning applications.

The results of this study can be used as a recipe for maximising the efficiency of algorithmic tuning. In the convergence tests, maximum efficiency was reached using ensemble initial conditions, a cost function covering the entire model domain, short forecast lengths and medium-sized ensembles.
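[Editor's sketch] A toy version of the convergence test described above: the default model's forecasts serve as pseudo-observations and an optimiser is started from perturbed parameters. The "model", cost function and optimiser are placeholders, not OpenIFS or the authors' tuning methods.

import numpy as np
from scipy.optimize import minimize

def model(x0, theta, n_steps=20):
    # Toy "forecast model" standing in for M(x0; theta): a damped, driven map.
    a, b = theta
    x = [x0]
    for _ in range(n_steps):
        x.append(a * x[-1] + b * np.sin(x[-1]))
    return np.array(x)

theta_default = np.array([0.9, 0.1])
x0 = 1.0
pseudo_obs = model(x0, theta_default)   # default-model forecasts as pseudo-observations

def cost(theta):
    # Squared forecast error accumulated over the whole trajectory
    # (a stand-in for a cost function covering the entire model domain).
    return np.sum((model(x0, theta) - pseudo_obs) ** 2)

theta_perturbed = theta_default * (1 + 0.3 * np.random.randn(2))
result = minimize(cost, theta_perturbed, method="Nelder-Mead")
converged = np.allclose(result.x, theta_default, atol=1e-3)   # default model recovered?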

How to cite: Tuppi, L., Ollinaho, P., Ekblom, M., Shemyakin, V., and Järvinen, H.: Necessary conditions for algorithmic tuning of weather prediction models using OpenIFS as an example, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-4824, https://doi.org/10.5194/egusphere-egu2020-4824, 2020.

D2353 |
EGU2020-12601
Wim Wiegerinck

Deep learning is a modeling approach that has shown impressive results in image processing and is arguably a promising tool for dealing with spatially extended complex systems such as the Earth's atmosphere, with its visually interpretable patterns. A disadvantage of the neural network approach is that it typically requires an enormous amount of training data.

 

Another recently proposed modeling approach is supermodeling. In supermodeling it is assumed that a dynamical system – the truth – is modelled by a set of good but imperfect models. The idea is to improve model performance by dynamically combining imperfect models during the simulation. The resulting combination of models is called the supermodel. The combination strength has to be learned from data. However, since supermodels do not start from scratch, but make use of existing domain knowledge, they may learn from less data.

 

One way to combine models is to define the tendencies of the supermodel as linear (weighted) combinations of the imperfect-model tendencies. Several methods, including linear regression, have been proposed to optimize the weights. However, the combination method might also be nonlinear. In this work we propose and explore a novel combination of deep learning and supermodeling, in which convolutional neural networks are used as a tool to combine the predictions of the imperfect models. The different supermodeling strategies are applied in simulations in a controlled environment, with a three-level quasi-geostrophic spectral model serving as ground truth and perturbed versions of it serving as the imperfect models.
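[Editor's sketch] A minimal illustration of the linear (weighted) tendency combination that the CNN-based, nonlinear combination generalises. The two "imperfect model" tendencies and the fixed weights are placeholders; in practice the weights would be learned from data.

import numpy as np

def f1(x): return -0.9 * x            # imperfect model 1 tendency (placeholder)
def f2(x): return -1.1 * x + 0.05     # imperfect model 2 tendency (placeholder)

def supermodel_tendency(x, w):
    # Linear supermodel: weighted combination of the imperfect-model tendencies.
    return w[0] * f1(x) + w[1] * f2(x)

def integrate(x0, w, dt=0.01, n_steps=1000):
    x = x0
    for _ in range(n_steps):
        x = x + dt * supermodel_tendency(x, w)   # simple Euler step
    return x

# The weights would be optimised against observations (e.g. by linear regression
# on observed tendencies); here they are simply fixed for illustration.
w = np.array([0.5, 0.5])
x_final = integrate(1.0, w)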

How to cite: Wiegerinck, W.: Neural Supermodeling, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-12601, https://doi.org/10.5194/egusphere-egu2020-12601, 2020.

D2354 |
EGU2020-19085
Johan Sjöberg, Sam Jackson, Karel Adamek, Wesley Armour, and Jeyarajan Thiyagalingam

As the importance of satellite readings grows in fields as varied as meteorology, urban planning and climate change science, so too does the importance of their accuracy. This has in turn led to an increased need for accurate cloud masking algorithms, given the large impact that clouds have on the accuracy of these readings. At the moment there are several automatic cloud masking algorithms, including one based on Bayesian statistics. However, they all suffer from precision issues as well as from misclassifying normal natural phenomena such as ocean sun glint, sea ice and dust plumes as clouds. Given that these natural phenomena tend to be concentrated in certain regions, the precision of most algorithms also tends to vary from region to region.

This has led to eyes increasingly turning towards machine learning and image segmentation techniques to perform cloud masking. This presentation describes how, and with what results, these techniques can be applied to Sentinel-3 SLSTR data, with the main focus on variations of the so-called fully convolutional networks (FCNs) originally proposed by Long and Shelhamer in 2015. Given that FCNs have performed well in areas such as medical imaging, facial detection and navigation systems for self-driving cars, there should be large potential for them within cloud detection.

The presentation will also look into the regional variability of these machine learning techniques and whether the overall cloud masking accuracy can be improved by developing models specifically for a region. Furthermore, it will aim to demonstrate how simple perturbation techniques can increase the interpretability of the model predictions, something that is a salient issue given the somewhat black-box nature of many machine learning models.
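[Editor's sketch] One simple perturbation technique of the kind mentioned is occlusion sensitivity: patches of the input image are masked and the resulting change in the predicted cloud fraction is recorded. The model callable, patch size and fill value below are assumptions, not the authors' implementation.

import numpy as np

def occlusion_sensitivity(image, model, patch=16, fill=0.0):
    # image: (H, W, C) array of SLSTR channels; model: callable returning the
    # mean predicted cloud probability for an image. Both are placeholders.
    base = model(image)
    H, W, _ = image.shape
    sensitivity = np.zeros((H // patch, W // patch))
    for i in range(0, H - patch + 1, patch):
        for j in range(0, W - patch + 1, patch):
            perturbed = image.copy()
            perturbed[i:i + patch, j:j + patch, :] = fill   # occlude one patch
            sensitivity[i // patch, j // patch] = base - model(perturbed)
    return sensitivity   # large values mark regions the prediction relies on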

How to cite: Sjöberg, J., Jackson, S., Adamek, K., Armour, W., and Thiyagalingam, J.: Machine Learning for Cloud Masking, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-19085, https://doi.org/10.5194/egusphere-egu2020-19085, 2020.

D2355 |
EGU2020-8492
Hanna Meyer and Edzer Pebesma

Spatial mapping is an important task in environmental science to reveal spatial patterns and changes of the environment. In this context, predictive modelling using flexible machine learning algorithms has become very popular. However, looking at the diversity of modelled (global) maps of environmental variables, one might increasingly get the impression that machine learning is a magic tool to map everything. Recently, the reliability of such maps has been increasingly questioned, calling for a reliable quantification of uncertainties.

Though spatial (cross-)validation gives a general error estimate for the predictions, models are usually applied to make predictions for a much larger area, or might even be transferred to an area they were not trained on. When making predictions over heterogeneous landscapes, there will be areas featuring environmental properties that have not been observed in the training data and hence have not been learned by the algorithm. This is problematic, as most machine learning algorithms are weak at extrapolation and can only make reliable predictions for environments whose conditions the model has knowledge about. Hence, predictions for environmental conditions that differ significantly from the training data have to be considered uncertain.

To approach this problem, we suggest a measure of uncertainty that allows identifying locations where predictions should be regarded with care. The proposed uncertainty measure is based on distances to the training data in the multidimensional predictor variable space. However, distances are not equally relevant within the feature space: some variables are more important than others in the machine learning model and hence are mainly responsible for prediction patterns. Therefore, we weight the distances by the model-derived importance of the predictors.
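[Editor's sketch] A simplified illustration of such a measure: the distance of each prediction location to its nearest training sample in predictor space, with each predictor standardised and weighted by its model-derived importance. The exact scaling in the published method may differ; this is only the basic idea.

import numpy as np

def weighted_min_distance(X_train, X_new, importance):
    # Standardise predictors, then weight each dimension by its importance.
    mean, std = X_train.mean(axis=0), X_train.std(axis=0)
    w = importance / importance.sum()
    A = (X_train - mean) / std * w
    B = (X_new - mean) / std * w
    # Distance of every prediction location to its nearest training sample.
    d = np.sqrt(((B[:, None, :] - A[None, :, :]) ** 2).sum(axis=-1))
    return d.min(axis=1)   # large values flag predictions to regard with care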

As a case study, we use a simulated area-wide response variable for Europe, bio-climatic variables as predictors, and simulated field samples. Random Forest is applied as the algorithm to predict the simulated response. The model is then used to make predictions for the whole of Europe. We then calculate the corresponding uncertainty and compare it to the area-wide true prediction error. The results show that the uncertainty map reflects the patterns in the true error very well and considerably outperforms ensemble-based standard deviations of predictions as an indicator of uncertainty.

The resulting map of uncertainty gives valuable insight into spatial patterns of prediction uncertainty, which is important when the predictions are used as a baseline for decision making or for subsequent environmental modelling. Hence, we suggest that a map of distance-based uncertainty should be provided in addition to prediction maps.

How to cite: Meyer, H. and Pebesma, E.: Mapping (un)certainty of machine learning-based spatial prediction models based on predictor space distances, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-8492, https://doi.org/10.5194/egusphere-egu2020-8492, 2020.

D2356 |
EGU2020-7559
| Highlight
Leroy Bird, Greg Bodeker, and Jordis Tradowsky

Frequency-based climate change attribution of extreme weather events requires thousands of years' worth of model output in order to obtain a statistically sound result. Additionally, extreme precipitation events in particular require a high-resolution model, as they can occur over a relatively small area. Unfortunately, due to storage and computational restrictions it is not feasible to run traditional models at a sufficiently high spatial resolution for the complete duration of these simulations. Instead, we suggest that deep learning could be used to emulate a proportion of a high-resolution model at a fraction of the computational cost. More specifically, we use a U-Net, a type of convolutional neural network. The U-Net takes as input several fields from coarse-resolution model output and is trained to predict the corresponding high-resolution precipitation fields. Because many potential high-resolution precipitation fields are consistent with a given coarse-resolution model output, stochasticity is added to the U-Net and a generative adversarial network is employed in order to help create a realistic distribution of events. By sampling the U-Net many times, an estimate of the probability of a heavy precipitation event occurring on the sub-grid scale can be derived.
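[Editor's sketch] A schematic of the sampling step only, with the trained stochastic generator treated as a placeholder callable: draw many high-resolution precipitation fields conditioned on one coarse-resolution field and estimate, per fine-scale pixel, the probability of exceeding a heavy-rain threshold. The threshold, sample count and noise shape are assumptions.

import numpy as np

def exceedance_probability(coarse_fields, generator, threshold=50.0, n_samples=200):
    # generator(coarse_fields, noise) -> one high-resolution precipitation field
    # (e.g. mm/day). Both the generator and the threshold are placeholders.
    exceed = 0
    for _ in range(n_samples):
        noise = np.random.randn(*coarse_fields.shape[:2])   # stochastic input
        sample = generator(coarse_fields, noise)
        exceed = exceed + (sample > threshold)
    return exceed / n_samples   # per-pixel probability of a heavy event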

How to cite: Bird, L., Bodeker, G., and Tradowsky, J.: A deep learning based approach for inferring the distribution of potential extreme events from coarse resolution climate model output , EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-7559, https://doi.org/10.5194/egusphere-egu2020-7559, 2020.

D2357 |
EGU2020-20177
Marie Déchelle-Marquet, Marina Levy, Patrick Gallinari, Michel Crepon, and Sylvie Thiria

Ocean currents have a major impact on climate variability, for instance through the heat transport they induce. Ocean climate models have a rather low resolution of about 50 km, whereas several dynamical processes, such as instabilities and filaments with scales of about 1 km, have a strong influence on the ocean state. We propose to observe and model these fine-scale effects by combining high-resolution satellite SST observations (1 km resolution, daily) and mesoscale-resolution altimetry observations (10 km resolution, weekly) with deep neural networks. Whereas the downscaling of climate models has commonly been addressed with assimilation approaches, in recent years neural networks have emerged as a powerful multi-scale analysis method. Besides, the large amount of available oceanic data makes deep learning attractive for bridging the gap between scales of variability.

This study aims at reconstructing the multi-scale variability of oceanic fields based on the high-resolution NATL60 model, using ocean observations at different spatial resolutions: low-resolution sea surface height (SSH) and high-resolution SST. As the link between residual neural networks and dynamical systems has recently been established, such a network is trained in a supervised way to reconstruct the high variability of SSH and ocean currents at the submesoscale (a few kilometres). To ensure that physical aspects are conserved in the model outputs, physical knowledge is incorporated into the training of the deep learning models. Different validation methods are investigated and the model outputs are tested with regard to their physical plausibility. The performance of the method is discussed and compared to other baselines (namely a convolutional neural network). The generalisation of the proposed method to other ocean variables, such as sea surface chlorophyll or sea surface salinity, is also examined.
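[Editor's sketch] A common way to incorporate physical knowledge into training is to add a physics-based penalty to the data term of the loss. The abstract does not specify the constraint used, so the residual below is a generic placeholder and the weighting is an assumption.

import torch

def physics_informed_loss(pred_ssh, true_ssh, physics_residual, alpha=0.1):
    # Data term: misfit between the reconstructed and reference high-resolution SSH.
    data_term = torch.mean((pred_ssh - true_ssh) ** 2)
    # Physics term: residual of a balance or conservation relation evaluated on
    # the network output (placeholder; the actual constraint is model-specific).
    physics_term = torch.mean(physics_residual(pred_ssh) ** 2)
    return data_term + alpha * physics_term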

How to cite: Déchelle-Marquet, M., Levy, M., Gallinari, P., Crepon, M., and Thiria, S.: Deep neural networks to downscale ocean climate models, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-20177, https://doi.org/10.5194/egusphere-egu2020-20177, 2020.

D2358 |
EGU2020-17919
Kevin Debeire, Veronika Eyring, Peer Nowack, and Jakob Runge

Causal discovery algorithms are machine learning methods that estimate the dependencies between different variables. One of these algorithms, the recently developed PCMCI algorithm (Runge et al., 2019), estimates the time-lagged causal dependency structures from multiple time series and is adapted to common properties of Earth System time series data. The PCMCI algorithm has already been successfully applied in climate science to reveal known interaction pathways between Earth regions, commonly referred to as teleconnections, and to explore new teleconnections (Kretschmer et al., 2017). One recent study used this method to evaluate models participating in the Coupled Model Intercomparison Project Phase 5 (CMIP5) (Nowack et al., 2019).

Here, we build on the Nowack et al. study and apply PCMCI to dimension-reduced meteorological reanalysis data and to the CMIP6 ensemble data. The resulting causal networks represent teleconnections (causal links) in each of the CMIP6 climate models. The models’ performance in representing realistic teleconnections is then assessed by comparing the causal networks of the individual CMIP6 models to the network obtained from meteorological reanalysis. We show that causal discovery is a promising and novel approach that complements existing model evaluation approaches.
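[Editor's sketch] For orientation, a minimal example of running PCMCI with the tigramite package that accompanies Runge et al. (2019), applied to dimension-reduced time series. The import paths follow tigramite 4.x (they have since moved in later versions), and the array shapes, variable names and parameters are assumptions.

import numpy as np
from tigramite import data_processing as pp
from tigramite.pcmci import PCMCI
from tigramite.independence_tests import ParCorr

# data: (time, regions) array of dimension-reduced indices, e.g. from reanalysis.
data = np.random.randn(1000, 10)
dataframe = pp.DataFrame(data, var_names=[f"region_{i}" for i in range(10)])

pcmci = PCMCI(dataframe=dataframe, cond_ind_test=ParCorr())
results = pcmci.run_pcmci(tau_max=5, pc_alpha=None)

# results["val_matrix"] and results["p_matrix"] define the time-lagged causal
# network (teleconnections) to be compared between reanalysis and each model.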

 

References:

Runge, J., P. Nowack, M. Kretschmer, S. Flaxman, D. Sejdinovic, Detecting and quantifying causal associations in large nonlinear time series datasets. Sci. Adv. 5, eaau4996, 2019.

Kretschmer, M., J. Runge, and D. Coumou, Early prediction of extreme stratospheric polar vortex states based on causal precursors, Geophysical Research Letters, doi:10.1002/2017GL074696, 2017.

Nowack, P. J., J. Runge, V. Eyring, and J. D. Haigh, Causal networks for climate model evaluation and constrained projections, in review, 2019.

How to cite: Debeire, K., Eyring, V., Nowack, P., and Runge, J.: Causal Discovery as a novel approach for CMIP6 climate model evaluation, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-17919, https://doi.org/10.5194/egusphere-egu2020-17919, 2020.

D2359 |
EGU2020-20132
David Meyer

The use of real data for training machine learning (ML) models is often a cause of major limitations. For example, real data may be (a) representative of only a subset of situations and domains, (b) expensive to produce, and (c) limited to specific individuals due to licensing restrictions. Although the use of synthetic data is becoming increasingly popular in computer vision, ML models used in weather and climate models still rely on large datasets of real data. Here we present some recent work towards the generation of synthetic data for weather and climate applications and outline some of the major challenges and limitations encountered.

How to cite: Meyer, D.: Towards synthetic data generation for machine learning models in weather and climate, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-20132, https://doi.org/10.5194/egusphere-egu2020-20132, 2020.

D2360 |
EGU2020-21329
Laura Martínez Ferrer, Maria Piles, and Gustau Camps-Valls

Providing accurate and spatially resolved predictions of crop yield is of utmost importance due to the rapid increase in the demand for biofuels and food in the foreseeable future. Satellite-based remote sensing over agricultural areas allows monitoring of crop development through key bio-geophysical variables such as the Enhanced Vegetation Index (EVI), sensitive to canopy greenness, the Vegetation Optical Depth (VOD), sensitive to biomass water-uptake dynamics, and Soil Moisture (SM), which provides direct information on plant-available water. The aim of this work is to implement an automatic system for county-based crop yield estimation using time series from multi-source satellite observations, meteorological data and available in situ surveys as supporting information. The spatio-temporal resolutions of the satellite and meteorological observations are fully exploited and synergistically combined for crop yield prediction using machine learning models. Linear and non-linear regression methods are used: least squares, LASSO, random forests, kernel machines and Gaussian processes. Here we are interested not only in the prediction skill, but also in understanding the relative relevance of the covariates. For this, we first study the importance of each feature separately and then propose a global model for operational monitoring of crop status using the most relevant agro-ecological drivers.

 

We selected the Continental U.S. and a four-year time series dataset to perform the study. Results reveal that the three satellite variables are complementary and that their combination with maximum temperature and precipitation from meteorological stations provides the best estimations. Interestingly, adding information about the crop planted area also improved the predictions. A non-linear regression model based on Gaussian processes led to the best results for all considered crops (soybean, corn and wheat), with high accuracy (low bias and correlation coefficients ranging from 0.75 to 0.92). The feature ranking allowed us to understand the main drivers of crop monitoring and the underlying factors behind a prediction loss or gain.
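[Editor's sketch] An illustrative pipeline of the kind described, using scikit-learn: a Gaussian process regressor fitted on county-level features and a feature ranking obtained with permutation importance. The feature list, data and hyperparameters are placeholders, not the authors' setup.

import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

features = ["EVI", "VOD", "SM", "Tmax", "Precip", "PlantedArea"]  # illustrative
X = np.random.rand(500, len(features))   # placeholder county-level predictors
y = np.random.rand(500)                  # placeholder county-level yields

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
gp = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True)
gp.fit(X_tr, y_tr)

# Rank the covariates by how much shuffling each one degrades the test score.
imp = permutation_importance(gp, X_te, y_te, n_repeats=20, random_state=0)
ranking = sorted(zip(features, imp.importances_mean), key=lambda t: -t[1])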

How to cite: Martínez Ferrer, L., Piles, M., and Camps-Valls, G.: Multisensor crop yield estimation with machine learning, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-21329, https://doi.org/10.5194/egusphere-egu2020-21329, 2020.

D2361 |
EGU2020-11263
Felix Strnad, Wolfram Barfuss, Jonathan Donges, and Jobst Heitzig

The identification of pathways leading to robust mitigation of dangerous anthropogenic climate change is nowadays of particular interest not only to the scientific community but also to policy makers and the wider public.

Increasingly complex, non-linear World-Earth system models are used to describe the dynamics of the biophysical Earth system and of the socio-economic and socio-cultural World of human societies, as well as their interactions. Identifying pathways towards a sustainable future in these models is a challenging and widely investigated task in the field of climate research and broader Earth system science. This problem is especially difficult when both environmental limits and social foundations need to be taken into account.

In this work, we propose to combine recently developed machine learning techniques, namely deep reinforcement learning (DRL), with classical analysis of trajectories in the World-Earth system, as an approach to extend the field of Earth system analysis with a new method. Based on the concept of the agent-environment interface, we develop a method for using a DRL agent that is able to act and learn in variable, manageable environment models of the Earth system in order to discover management strategies for sustainable development.

We demonstrate the potential of our framework by applying DRL algorithms to stylized World-Earth system models. The agent can apply management options to an environment, an Earth system model, and learns from rewards provided by the environment. We train our agent with a deep Q-neural network extended by current state-of-the-art algorithms. Conceptually, we thereby explore the feasibility of finding novel global governance policies leading into a safe and just operating space constrained by certain planetary and socio-economic boundaries.

We find that the agent is able to learn novel, previously undiscovered policies that navigate the system into sustainable regions of the underlying conceptual models of the World-Earth system. In particular, the artificially intelligent agent learns that the timing of a specific mix of taxing carbon emissions and subsidies on renewables is of crucial relevance for finding World-Earth system trajectories that are sustainable in the long term. Overall, we show in this work how concepts and tools from artificial intelligence can help to address the current challenges on the way towards sustainable development.
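[Editor's sketch] A compact illustration of the agent-environment loop with a deep Q-network, assuming a small discrete action space (e.g. carbon tax and renewables subsidy switched on or off, which is purely illustrative). The environment step function is a placeholder for the stylised World-Earth model, and the loop omits the replay buffer and target network of a full DQN.

import random
import torch
import torch.nn as nn

n_state, n_actions = 4, 4   # illustrative: a few aggregated state variables/actions

q_net = nn.Sequential(nn.Linear(n_state, 64), nn.ReLU(), nn.Linear(64, n_actions))
optimiser = torch.optim.Adam(q_net.parameters(), lr=1e-3)
gamma, epsilon = 0.99, 0.1

def step(state, action):
    # Placeholder for the stylised World-Earth environment: returns the next
    # state, a sustainability-based reward and a "boundary crossed" flag.
    next_state = state + 0.01 * torch.randn(n_state)
    reward = float(-next_state.abs().sum())
    done = bool(next_state.abs().max() > 3.0)
    return next_state, reward, done

state = torch.zeros(n_state)
for t in range(1000):
    # Epsilon-greedy action selection on the current Q-estimates.
    if random.random() < epsilon:
        action = random.randrange(n_actions)
    else:
        action = int(q_net(state).argmax())
    next_state, reward, done = step(state, action)
    # One-step temporal-difference target for the chosen action.
    with torch.no_grad():
        target = reward + gamma * q_net(next_state).max() * (0.0 if done else 1.0)
    loss = (q_net(state)[action] - target) ** 2
    optimiser.zero_grad()
    loss.backward()
    optimiser.step()
    state = torch.zeros(n_state) if done else next_state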

Underlying publication

[1] Strnad, F. M., Barfuss, W., Donges, J. F., and Heitzig, J.: Deep reinforcement learning in World-Earth system models to discover sustainable management strategies, Chaos: An Interdisciplinary Journal of Nonlinear Science, 29, 123122, 2019.

How to cite: Strnad, F., Barfuss, W., Donges, J., and Heitzig, J.: Deep reinforcement learning in World-Earth system models to discover sustainable management strategies, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-11263, https://doi.org/10.5194/egusphere-egu2020-11263, 2020.