ITS2.6/AS5.1 | EDI
Machine learning for Earth System modelling
Co-organized by CL5.3/ESSI1/NP4/OS4
Convener: Julien Brajard | Co-conveners: Alejandro Coca-Castro, Peter Düben, Redouane Lguensat, Emily Lines, Francine Schevenhoven, Maike Sonnewald
Presentations: Mon, 23 May, 17:00–18:30 (CEST), Room N1 | Tue, 24 May, 08:30–11:50 and 13:20–14:50 (CEST), Room N1

Presentations: Mon, 23 May | Room N1

Chairpersons: Julien Brajard, Redouane Lguensat
17:00–17:06 | EGU22-124 | ECS | On-site presentation
Malcolm Aranha and Alok Porwal

Traditional mineral prospectivity modelling for mineral exploration and targeting relies heavily on manual data filtering and processing to extract desirable geologic features based on expert knowledge. It involves the integration of geological predictor maps that are manually derived by time-consuming and labour-intensive pre-processing of primary geoscientific data to serve as spatial proxies of mineralisation processes. Moreover, the selection of these spatial proxies is guided by conceptual genetic modelling of the targeted deposit type, which may be biased by the subjective preferences of an expert geologist. This study applies Self-Organising Maps (SOM), a neural-network-based unsupervised machine learning clustering algorithm, to gridded geophysical and topographical datasets in order to identify and delineate regional-scale exploration targets for carbonatite-alkaline-complex-related REE deposits in northeast India. The study did not use interpreted, processed, or manually generated data, such as surface or bedrock geological maps and fault traces, and relied on the algorithm to identify crucial features and delineate prospective areas. The results were comparable to those obtained from a previous knowledge-driven prospectivity analysis. Unsupervised machine learning algorithms are therefore reliable tools for automating the manual process of mineral prospectivity modelling and are robust, time-saving alternatives to knowledge-driven or supervised data-driven prospectivity modelling. These methods would be instrumental in unexplored terrains for which little or no geological knowledge is available.
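
As a rough illustration of the clustering step described above (not the authors' code), the sketch below applies a Self-Organising Map to stacked gridded layers using the MiniSom library; the synthetic data, map size and training settings are placeholder assumptions.

```python
# Illustrative SOM clustering of gridded geophysical layers with MiniSom.
import numpy as np
from minisom import MiniSom

rng = np.random.default_rng(42)
# Stand-in for (n_cells, n_layers): rows = grid cells, columns = stacked
# geophysical/topographic layers (e.g. gravity, magnetics, elevation).
data = rng.normal(size=(5000, 6))
data = (data - data.mean(axis=0)) / data.std(axis=0)   # standardise layers

som = MiniSom(10, 10, data.shape[1], sigma=1.5, learning_rate=0.5,
              random_seed=42)
som.pca_weights_init(data)
som.train_random(data, num_iteration=10_000)

# Assign each grid cell to its best-matching unit; cells mapping to the
# same SOM node form a cluster that can be inspected for prospectivity.
labels = np.array([som.winner(x) for x in data])
```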

How to cite: Aranha, M. and Porwal, A.: Unsupervised machine learning driven Prospectivity analysis of REEs in NE India, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-124, https://doi.org/10.5194/egusphere-egu22-124, 2022.

17:06–17:12 | EGU22-9833 | ECS | On-site presentation
Laura Laurenti, Elisa Tinti, Fabio Galasso, Luca Franco, and Chris Marone

Earthquake forecasting and prediction have long, and in some cases sordid, histories, but recent work has rekindled interest in this area based on advances in short-term early warning, hazard assessment for human-induced seismicity, and the successful prediction of laboratory earthquakes.

In the lab, frictional stick-slip events provide an analog for the full seismic cycle, and such experiments have played a central role in understanding the onset of failure and the dynamics of earthquake rupture. Lab earthquakes are also ideal targets for machine learning (ML) techniques because they can be produced in long sequences under a wide range of controlled conditions. Indeed, recent work shows that labquakes can be predicted from fault zone acoustic emissions (AE). Here, we generalize these results and explore additional ML and deep learning (DL) methods for labquake prediction. Key questions include whether improved ML/DL methods can outperform existing models, including prediction based on limited training, and whether such methods can successfully forecast beyond a single seismic cycle for aperiodic failure. We describe significant improvements to existing methods of labquake prediction using simple AE statistics (variance) and DL models such as Long Short-Term Memory (LSTM) and Convolutional Neural Network (CNN) models. We demonstrate: 1) that LSTMs and CNNs predict labquakes under a variety of conditions, including pre-seismic creep, aperiodic events and alternating slow and fast events, and 2) that fault zone stress can be predicted with fidelity (accuracy in terms of R2 > 0.92), confirming that acoustic energy is a fingerprint of fault zone stress. We also predict the time to start of failure (TTsF) and time to end of failure (TTeF). Interestingly, TTeF is successfully predicted in all seismic cycles, while the TTsF prediction varies with the amount of fault creep before an event. We also report on a novel autoregressive forecasting method to predict future fault zone states, focusing on shear stress. This forecasting model is distinct from existing predictive models, which predict only the current state. We compare three modern approaches in a sequence-modelling framework: LSTM, Temporal Convolutional Network (TCN) and Transformer Network (TF). Results are encouraging for autoregressive forecasting of shear stress at long-term horizons. Our ML/DL prediction models outperform the state of the art, and our autoregressive model represents a novel forecasting framework that could enhance current methods of earthquake forecasting.
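
The following is a minimal, illustrative sketch of the autoregressive idea described above (an LSTM whose predictions are fed back in as inputs), not the authors' implementation; the window handling and network sizes are assumptions.

```python
# Sketch: autoregressive forecasting of a 1-D fault-zone state (e.g. shear
# stress) with an LSTM trained on (window -> next value) pairs.
import torch
import torch.nn as nn

class StressLSTM(nn.Module):
    def __init__(self, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(input_size=1, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):                 # x: (batch, time, 1)
        out, _ = self.lstm(x)
        return self.head(out[:, -1])      # predict the next value

model = StressLSTM()

@torch.no_grad()
def rollout(model, window, n_steps):
    # Forecast autoregressively: feed each prediction back as the newest input.
    window = window.clone()               # (1, T, 1)
    preds = []
    for _ in range(n_steps):
        nxt = model(window)               # (1, 1)
        preds.append(nxt.item())
        window = torch.cat([window[:, 1:], nxt[:, None]], dim=1)
    return preds
```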

How to cite: Laurenti, L., Tinti, E., Galasso, F., Franco, L., and Marone, C.: Deep learning for laboratory earthquake prediction and autoregressive forecasting of fault zone stress, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-9833, https://doi.org/10.5194/egusphere-egu22-9833, 2022.

17:12–17:18 | EGU22-10711 | ECS | Virtual presentation
Mahtab Rashidifard, Jeremie Giraud, Mark Jessell, and Mark Lindsay

Reflection seismic data, although sparsely distributed due to the high cost of acquisition, are the only type of data that can provide high-resolution images of the crust to reveal deep subsurface structures and the architectural complexity that may vector attention to minerally prospective regions. However, these datasets are not commonly considered in integrated geophysical inversion approaches because their forward modelling and inversion are computationally expensive. Common inversion techniques for reflection seismic images were mostly developed for basin studies and have very limited application to hard-rock studies. Post-stack acoustic impedance inversions, for example, rely heavily on petrophysical information extracted along boreholes for depth correction, which is not necessarily available. Furthermore, the available techniques do not allow simple, automatic integration of seismic inversion with other geophysical datasets.

We introduce a new methodology that allows the utilization of seismic images within the gravity inversion technique for the purpose of 3D boundary parametrization of the subsurface. The proposed workflow is a novel approach for incorporating seismic images into integrated inversion techniques and relies on the image-ray method for time-to-depth conversion of seismic datasets. The algorithm uses a convolutional neural network to iterate over seismic images in the time and depth domains; this iterative process helps compensate for the low depth resolution of the gravity datasets. We use a generalized level-set technique for gravity inversion to link the interfaces of the units with the depth-converted seismic images. The algorithm has been tested on realistic synthetic datasets generated from scenarios corresponding to different deformation histories. The preliminary results of this study suggest that post-stack seismic images can be utilized in integrated geophysical inversion algorithms without the need to run computationally expensive full-waveform inversions.

How to cite: Rashidifard, M., Giraud, J., Jessell, M., and Lindsay, M.: A new approach toward integrated inversion of reflection seismic and gravity datasets using deep learning, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-10711, https://doi.org/10.5194/egusphere-egu22-10711, 2022.

17:18–17:24 | EGU22-7044 | Virtual presentation
Yuri Bregman, Yochai Ben Horin, Yael Radzyner, Itay Niv, Maayan Kahlon, and Neta Rabin

Manifold learning is a branch of machine learning that focuses on compactly representing complex datasets based on their fundamental intrinsic parameters. One such method is diffusion maps, which reduces the dimension of the data while preserving its geometric structure. In this work, diffusion maps are applied to several seismic event characterization tasks. The first task is automatic earthquake-explosion discrimination, an essential component of nuclear test monitoring. We also use this technique to automatically identify mine explosions and aftershocks following large earthquakes. Identification of such events helps to lighten the analysts' burden and allows timely production of reviewed seismic bulletins.

The proposed methods begin with a pre-processing stage in which a time–frequency representation is extracted from each seismogram while capturing common properties of seismic events and overcoming magnitude differences. Then, diffusion maps are used in order to construct a low-dimensional model of the original data. In this new low-dimensional space, classification analysis is carried out.
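
A compact sketch of the diffusion-maps construction outlined above, assuming each seismogram's time-frequency representation has been flattened into one feature vector; the kernel scale and embedding dimension are illustrative choices, not the authors' settings.

```python
# Diffusion-maps embedding: Gaussian kernel, density normalisation,
# eigendecomposition of the resulting diffusion operator.
import numpy as np
from scipy.spatial.distance import pdist, squareform

def diffusion_map(features, eps, n_components=3):
    d2 = squareform(pdist(features, "sqeuclidean"))
    K = np.exp(-d2 / eps)                  # Gaussian affinity kernel
    q = K.sum(axis=1)
    K = K / np.outer(q, q)                 # normalise away sampling density
    P = K / K.sum(axis=1)[:, None]         # row-stochastic diffusion operator
    vals, vecs = np.linalg.eig(P)
    order = np.argsort(-vals.real)
    # Skip the trivial leading eigenvector; the next coordinates give the
    # low-dimensional representation used for classification.
    keep = order[1:n_components + 1]
    return vecs.real[:, keep] * vals.real[keep]
```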

The algorithm’s discrimination performance is demonstrated on several seismic data sets. For instance, using the seismograms from EIL station, we identify arrivals that were caused by explosions at the nearby Eshidiya mine in Jordan. The model provides a visualization of the data, organized by its intrinsic factors. Thus, along with the discrimination results, we provide a compact organization of the data that characterizes the activity patterns in the mine.

Our results demonstrate the potential and strength of the manifold learning based approach, which may also be suitable in other geophysics domains.

How to cite: Bregman, Y., Ben Horin, Y., Radzyner, Y., Niv, I., Kahlon, M., and Rabin, N.: Seismic Event Characterization using Manifold Learning Methods, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-7044, https://doi.org/10.5194/egusphere-egu22-7044, 2022.

17:24–17:30 | EGU22-12489 | ECS | Virtual presentation
Lukács Kuslits, Lili Czirok, and István Bozsó

Stress fields are responsible for earthquake formation. In order to analyse the stress relations in a study area using focal mechanism solution (FMS) inversions, three fundamental criteria must be considered:

(1) The investigated area is characterized by a homogeneous stress field.

(2) The earthquakes occur with variable directions on pre-existing faults.

(3) The deviation of the fault slip vector from the shear stress vector is minimal (Wallace-Bott hypothesis).

The authors have attempted to develop a "fully-automated" algorithm to carry out the classification of the earthquakes as a prerequisite of stress estimations. This algorithm does not require the setting of hyper-parameters, so subjectivity can be reduced significantly and the running time can also decrease. Nevertheless, there is an optional hyper-parameter that can be used to filter outliers, i.e., isolated points (earthquakes) in the input dataset.

In this presentation, the authors show the operation of this algorithm on synthetic datasets consisting of different groups of FMS and on a real seismic dataset. The latter comes from a survey area in the earthquake-prone Vrancea zone (Romania), a relatively small region (around 30×70 km) in the external part of the SE Carpathians where the distribution of seismic events is quite dense and heterogeneous.

Although the initial results are promising, further development is still necessary. The source code will soon be uploaded to a public GitHub repository available to the whole scientific community.

How to cite: Kuslits, L., Czirok, L., and Bozsó, I.: “Fully-automated” clustering method for stress inversions (CluStress), EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-12489, https://doi.org/10.5194/egusphere-egu22-12489, 2022.

17:30–17:36 | EGU22-1255 | ECS | Virtual presentation
Antonio Pérez, Mario Santa Cruz, Johannes Flemming, and Miha Razinger

The degradation of air quality is a challenge that policy-makers face all over the world. According to the World Health Organisation, air pollution causes an estimated 7 million premature deaths every year. In this context, air quality forecasts are crucial tools for decision- and policy-makers to achieve data-informed decisions.

Global forecasts, such as those of the Copernicus Atmosphere Monitoring Service (CAMS) model, usually exhibit biases: systematic deviations from observations. Adjusting these biases is typically the first step towards obtaining actionable air quality forecasts. It is especially relevant for health-related decisions, where the metrics of interest depend on specific thresholds.

AQ (Air Quality) Bias Correction was a project funded by the ECMWF Summer of Weather Code (ESOWC) 2021 whose aim was to improve CAMS model forecasts for air quality variables (NO2, O3, PM2.5), using as a reference the in-situ observations provided by OpenAQ. The adjustment, based on machine learning methods, was performed over a set of locations of interest provided by the ECMWF, for the period June 2019 to March 2021.

The machine learning approach uses three different deep-learning-based models, plus an extra neural network that gathers the outputs of the three previous models. Two of the three DL-based models are independent and follow the same structure, built upon the InceptionTime module: they use both meteorological and air quality variables to exploit the temporal variability and to extract the most meaningful features of the past [t-24h, t-23h, …, t-1h] and future [t, t+1h, …, t+23h] CAMS predictions. The third model uses the static station attributes (longitude, latitude and elevation), processed by a multilayer perceptron. The features extracted by these three models are fed into another multilayer perceptron to predict the upcoming errors with hourly resolution [t, t+1h, …, t+23h]. As a final step, 5 different initializations are considered and ensembled with equal weights to obtain a more stable regressor.
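
A schematic of the described three-branch design (not the project's code), with the InceptionTime modules simplified to plain 1-D convolutions; all layer sizes are assumptions.

```python
# Two temporal branches (past/future series) plus one static branch,
# fused by a final MLP that predicts 24 hourly forecast errors.
import torch
import torch.nn as nn

class TemporalBranch(nn.Module):
    def __init__(self, n_vars, hidden=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(n_vars, hidden, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv1d(hidden, hidden, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten())

    def forward(self, x):                  # x: (batch, n_vars, 24)
        return self.net(x)

class BiasCorrector(nn.Module):
    def __init__(self, n_vars, hidden=32):
        super().__init__()
        self.past = TemporalBranch(n_vars, hidden)    # [t-24h ... t-1h]
        self.future = TemporalBranch(n_vars, hidden)  # [t ... t+23h]
        self.static = nn.Sequential(nn.Linear(3, hidden), nn.ReLU())  # lon/lat/elev
        self.head = nn.Sequential(
            nn.Linear(3 * hidden, 64), nn.ReLU(), nn.Linear(64, 24))

    def forward(self, x_past, x_future, x_static):
        z = torch.cat([self.past(x_past), self.future(x_future),
                       self.static(x_static)], dim=1)
        return self.head(z)                # 24 hourly error predictions
```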

Prior to this modelling, CAMS forecasts of air quality variables were biased regardless of the location of interest and the variable (on average: biasNO2 = -22.76, biasO3 = 44.30, biasPM2.5 = 12.70). In addition, the skill of the model, measured by the Pearson correlation, did not reach 0.5 for any of the variables, with remarkably low values for NO2 and O3 (on average: pearsonNO2 = 0.10, pearsonO3 = 0.14).

The AQ-BiasCorrection models properly correct these biases. Overall, the number of stations whose biases improve in both the train and test sets is 52 out of 61 (85%) for NO2, 62 out of 67 (92%) for O3, and 80 out of 102 (78%) for PM2.5. The bias declines by -1.1%, -9.7% and -13.9% for NO2, O3 and PM2.5 respectively. In addition, the model skill measured through the Pearson correlation increases, with overall improvements in the range of 100-400%.

How to cite: Pérez, A., Santa Cruz, M., Flemming, J., and Razinger, M.: A Deep Learning approach to de-bias Air Quality forecasts, using heterogeneous Open Data sources as reference, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-1255, https://doi.org/10.5194/egusphere-egu22-1255, 2022.

17:36–17:42 | EGU22-12574 | ECS | Highlight | Virtual presentation
Helge Mohn, Daniel Kreyling, Ingo Wohltmann, Ralph Lehmann, Peter Maass, and Markus Rex

The stratospheric ozone layer is commonly represented in climate models only in a very simplified way. Neglecting the mutual interactions of ozone with atmospheric temperature and dynamics makes climate projections less accurate. Although more elaborate and interactive models of the stratospheric ozone layer are available, they require far too much computation time to be coupled with climate models. Our aim with this project was to break new ground and pursue an interdisciplinary strategy that spans the fields of machine learning, atmospheric physics and climate modelling.

In this work, we present an implicit neural representation of extrapolar stratospheric ozone chemistry (SWIFT-AI). An implicitly defined hyperspace of the stratospheric ozone chemistry offers a continuous and even differentiable representation that can be parameterized by artificial neural networks. We analysed different parameter-efficient variants of multilayer perceptrons, followed by an intensive and, as far as possible, energy-efficient hyperparameter search involving Bayesian optimisation and early-stopping techniques.
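
A hedged sketch of that kind of Bayesian hyperparameter search, using Optuna with median pruning for early stopping; the toy objective below stands in for "train an MLP and return its validation loss" and is not part of SWIFT-AI.

```python
# Bayesian hyperparameter optimisation with Optuna (illustrative only).
import optuna

def objective(trial):
    width = trial.suggest_int("width", 32, 512, log=True)
    depth = trial.suggest_int("depth", 2, 8)
    lr = trial.suggest_float("lr", 1e-5, 1e-2, log=True)
    # Stand-in for training the ozone-tendency regressor and reporting
    # its validation error; a real objective would fit the MLP here.
    return (width - 128) ** 2 * 1e-6 + (depth - 4) ** 2 * 0.01 + abs(lr - 1e-3)

study = optuna.create_study(direction="minimize",
                            pruner=optuna.pruners.MedianPruner())
study.optimize(objective, n_trials=50)
print(study.best_params)
```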

Our data source is the Lagrangian chemistry and transport model ATLAS. Using its full model of stratospheric ozone chemistry, we focused on simulating the wide range of stratospheric variability that will occur in a future climate (e.g. temperature and meridional circulation changes). We conducted a simulation spanning several years and created a dataset with over 2×10⁸ input-output pairs, each output being the 24-h ozone tendency of a trajectory. We reduced the dimensionality of the input parameters by using the concept of chemical families and by performing a sensitivity analysis to choose a set of robust input parameters.

We coupled the resulting machine learning models with the Lagrangian chemistry and transport model ATLAS, substituting the full stratospheric chemistry model. We validated a two-year simulation run by comparing accuracy and computation time against both the full stratospheric chemistry model and the previous polynomial approach of extrapolar SWIFT. SWIFT-AI consistently outperforms the previous polynomial approach of SWIFT, both on test data and in simulation results. SWIFT-AI is more than twice as fast as the previous polynomial approach and 700 times faster than the full stratospheric chemistry scheme of ATLAS, resulting in minutes instead of weeks of computation time per model year, a speed-up of several orders of magnitude.

To ensure reproducibility and transparency, we developed a machine learning pipeline, published a benchmark dataset and made our repository open to the public.

In summary, we have shown that the application of state-of-the-art machine learning methods to atmospheric physics holds great potential. The achieved speed-up of an interactive and very precise ozone layer model enables a novel way of representing the ozone layer in climate models. This in turn will increase the quality of climate projections, which are crucial for policy makers and of great importance for our planet.

How to cite: Mohn, H., Kreyling, D., Wohltmann, I., Lehmann, R., Maass, P., and Rex, M.: SWIFT-AI: Significant Speed-up in Modelling the Stratospheric Ozone Layer, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-12574, https://doi.org/10.5194/egusphere-egu22-12574, 2022.

17:42–17:48 | EGU22-5746 | Virtual presentation
Hervé Petetin, Dene Bowdalo, Pierre-Antoine Bretonnière, Marc Guevara, Oriol Jorba, Jan Mateu armengol, Margarida Samso Cabre, Kim Serradell, Albert Soret, and Carlos Pérez García-Pando

Air quality (AQ) forecasting systems are usually built upon physics-based numerical models that are affected by a number of uncertainty sources. In order to reduce forecast errors, first and foremost the bias, they are often coupled with Model Output Statistics (MOS) modules. MOS methods are statistical techniques used to correct raw forecasts at surface monitoring station locations, where AQ observations are available. In this study, we investigate to what extent AQ forecasts can be improved using a variety of MOS methods, including persistence (PERS), moving average (MA), quantile mapping (QM), Kalman Filter (KF), analogs (AN), and gradient boosting machine (GBM). We apply our analysis to the Copernicus Atmospheric Monitoring Service (CAMS) regional ensemble median O3 forecasts over the Iberian Peninsula during 2018–2019. A key aspect of our study is the evaluation, which is performed using a very comprehensive set of continuous and categorical metrics at various time scales (hourly to daily), along different lead times (1 to 4 days), and using different meteorological input data (forecast vs reanalyzed).

Our results show that O3 forecasts can be substantially improved using such MOS corrections and that this improvement goes well beyond the correction of the systematic bias. Although the improvement typically extends to all lead times, some MOS methods are more adversely impacted by lead time than others. When considering MOS methods relying on meteorological information and comparing the results obtained with IFS forecasts and the ERA5 reanalysis, the relative deterioration brought by the use of IFS is minor, which paves the way for their use in operational MOS applications. Importantly, our results also clearly show the trade-offs between continuous and categorical skills and their dependence on the MOS method. The most sophisticated MOS methods reproduce O3 mixing ratios best overall, with the lowest errors and highest correlations. However, they are not necessarily the best at predicting the highest O3 episodes, for which simpler MOS methods can give better results. Although the complex impact of MOS methods on the distribution and variability of raw forecasts can only be comprehended through an extended set of complementary statistical metrics, our study shows that optimally implementing MOS in AQ forecast systems crucially requires selecting the appropriate skill score to be optimized for the forecast application of interest.
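
For concreteness, a minimal empirical quantile-mapping (QM) correction of the kind listed above, assuming 1-D arrays of historical forecasts, matching observations and new forecasts to be corrected; the number of quantiles is an arbitrary choice.

```python
# Empirical quantile mapping: map each new forecast value to its quantile
# in the forecast climatology, then read off the observed value there.
import numpy as np

def quantile_mapping(fc_hist, obs_hist, fc_new, n_quantiles=100):
    q = np.linspace(0.0, 1.0, n_quantiles)
    fc_q = np.quantile(fc_hist, q)        # forecast climatological quantiles
    obs_q = np.quantile(obs_hist, q)      # observed climatological quantiles
    return np.interp(fc_new, fc_q, obs_q)
```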

Petetin, H., Bowdalo, D., Bretonnière, P.-A., Guevara, M., Jorba, O., Armengol, J. M., Samso Cabre, M., Serradell, K., Soret, A., and Pérez Garcia-Pando, C.: Model Output Statistics (MOS) applied to CAMS O3 forecasts: trade-offs between continuous and categorical skill scores, Atmos. Chem. Phys. Discuss. [preprint], https://doi.org/10.5194/acp-2021-864, in review, 2021.

How to cite: Petetin, H., Bowdalo, D., Bretonnière, P.-A., Guevara, M., Jorba, O., Mateu armengol, J., Samso Cabre, M., Serradell, K., Soret, A., and Pérez García-Pando, C.: Model Output Statistics (MOS) and Machine Learning applied to CAMS O3 forecasts: trade-offs between continuous and categorical skill scores, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-5746, https://doi.org/10.5194/egusphere-egu22-5746, 2022.

17:48–17:54 | EGU22-6553 | ECS | On-site presentation
Sebastian Hickman, Paul Griffiths, James Weber, and Alex Archibald

Concentrations of the hydroxyl radical, OH, control the lifetime of methane, carbon monoxide and other atmospheric constituents. The short lifetime of OH, coupled with the spatial and temporal variability of its sources and sinks, makes accurate simulation of its concentration particularly challenging. To date, machine learning (ML) methods have only infrequently been applied to global studies of atmospheric chemistry.

We present an assessment of ML methods for the challenging case of simulating the hydroxyl radical at the global scale, and show that several approaches are indeed viable. We use observational data from the recent NASA Atmospheric Tomography Mission to show that machine learning methods are comparable in skill to state-of-the-art forward chemical models and are capable, if appropriately applied, of simulating OH to within observational uncertainty.

We show that a simple ridge regression model is a better predictor of OH concentrations in the remote atmosphere than a state-of-the-art chemical mechanism implemented in a forward box model. Our work shows that machine learning can be an accurate emulator of chemical concentrations in atmospheric chemistry, which would allow a significant speed-up in climate model runtime given the speed and efficiency of simple machine learning methods. Furthermore, we show that relatively few predictors are required to simulate OH concentrations, suggesting that the variability in OH can be quantitatively accounted for by a few observables, with the potential to simplify the numerical simulation of atmospheric levels of key species such as methane.
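
A minimal sketch of such a ridge-regression predictor with scikit-learn; the synthetic data stand in for observed predictors (e.g. O3, H2O, NO, photolysis rates, temperature) and OH, and are not the study's feature set.

```python
# Ridge regression from a few observed predictors to OH concentration.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 5))                    # stand-in predictor columns
y = X @ rng.normal(size=5) + 0.1 * rng.normal(size=2000)   # stand-in OH

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
model = make_pipeline(StandardScaler(), Ridge(alpha=1.0))
model.fit(X_tr, y_tr)
print("R^2 on held-out data:", model.score(X_te, y_te))
```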

How to cite: Hickman, S., Griffiths, P., Weber, J., and Archibald, A.: Can simple machine learning methods predict concentrations of OH better than state of the art chemical mechanisms?, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-6553, https://doi.org/10.5194/egusphere-egu22-6553, 2022.

17:54–18:00 | EGU22-7113 | ECS | Highlight | On-site presentation
Charlotte Neubacher, Philipp Franke, Alexander Heinlein, Axel Klawonn, Astrid Kiendler-Scharr, and Anne-Caroline Lange

State-of-the-art regional atmospheric chemistry transport models such as EURAD-IM (EURopean Air pollution Dispersion-Inverse Model) simulate physical and chemical processes in the atmosphere to predict the dispersion of air pollutants. With EURAD-IM's 4D-var data assimilation application, detailed analyses of air quality can be conducted. These analyses allow improvements of atmospheric chemistry forecasts as well as assessments of emission source strengths. EURAD-IM simulations can be nested to a spatial resolution of 1 km, which still does not correspond to the urban scale. Thus, inner-city street canyon observations cannot be exploited, since anthropogenic pollution there varies vastly over scales of 100 m or less.

We address this issue by implementing a machine learning (ML) module in EURAD-IM, forming a hybrid model that enables bridging the representativeness gap between the model resolution and inner-city observations. The data assimilation of EURAD-IM is thus strengthened by additional observations in urban regions. Our ML module is based on a neural network (NN) whose input features are relevant environmental information on street architecture, traffic density, meteorology, and atmospheric pollutant concentrations from EURAD-IM, as well as street canyon observations of pollutants. The NN then maps the observed concentration from the street canyon scale to larger spatial scales.

We are currently working with a fully controllable test environment created from EURAD-IM forecasts of the years 2020 and 2021 at different spatial resolutions. Here, the ML model maps the high-resolution hourly NO2 concentration to the concentration on the low-resolution model grid. It turns out to be very difficult for NNs to learn the hourly concentrations with equal accuracy across the diurnal cycle of pollutant concentrations. We therefore develop a model that uses an independent NN for each hour to support time-of-day learning. This reduces the training error by a factor of 10². As a proof of concept, we trained the ML model in an overfitting regime, where the mean squared training error reduces to 0.001% for each hour. Furthermore, by optimizing the hyperparameters and introducing regularization terms to reduce the overfitting, we achieved a validation error of 9–12% during night and 9–16% during day.
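
A sketch of the "one network per hour" design described above, using scikit-learn MLPs for brevity; feature contents and layer sizes are assumptions. Rows of X are feature vectors, `hours` gives each sample's hour of day, and y is the target concentration.

```python
# Train 24 independent regressors, one per hour of day, and route each
# sample to the model matching its hour.
import numpy as np
from sklearn.neural_network import MLPRegressor

def fit_hourly_models(X, y, hours):
    models = {}
    for h in range(24):                      # assumes every hour has samples
        mask = hours == h
        m = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000,
                         random_state=0)
        models[h] = m.fit(X[mask], y[mask])
    return models

def predict_hourly(models, X, hours):
    y_hat = np.empty(len(X))
    for h in range(24):
        mask = hours == h
        if mask.any():
            y_hat[mask] = models[h].predict(X[mask])
    return y_hat
```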

How to cite: Neubacher, C., Franke, P., Heinlein, A., Klawonn, A., Kiendler-Scharr, A., and Lange, A.-C.: Coupling regional air quality simulations of EURAD-IM with street canyon observations - a machine learning approach, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-7113, https://doi.org/10.5194/egusphere-egu22-7113, 2022.

18:00–18:06 | EGU22-7755 | Presentation form not yet defined
Up- and Downscaling of Carbon dioxide (CO2) concentrations in an Urban Environment
(withdrawn)
Alain Retière and H. Gijs van den Dool
18:06–18:12 | EGU22-5631 | ECS | On-site presentation
Carola Trahms, Patricia Handmann, Willi Rath, Matthias Renz, and Martin Visbeck

Lagrangian experiments for particle tracing in atmosphere or ocean models, and their analysis, are a cornerstone of Earth-system studies. They cover diverse study objectives such as the identification of pathways or source regions. Data for Lagrangian studies are generated by releasing virtual particles at one or multiple locations of interest and simulating their advective-diffusive behavior backwards or forwards in time. Identifying the main pathways connecting two regions of interest is often done by counting the trajectories that reach both regions. Here, the exact source and target regions must be defined manually by a researcher. Manually defining the importance and exact location of these regions introduces a highly subjective perspective into the analysis. Additionally, to investigate all major target regions, each of them must be defined manually and the data analyzed accordingly. This human element slows down and complicates large-scale analyses with many different sections and possible source areas.

We propose to significantly reduce the manual aspect by automating this process. To this end, we combine methods from different areas of machine learning and pattern mining into a sequence of steps. First, unsupervised methods, i.e., clustering, identify possible source areas on a randomized subset of the data. In a second step, supervised learning, i.e., classification, labels the positions along the trajectories according to their most probable source area, using the previously identified clusters as labels. The results of this approach can then be compared quantitatively with the results of analyses based on manual definition of source areas and border-hitting-based labeling of the trajectories. Preliminary findings suggest that this approach could indeed greatly help to objectify and accelerate the analysis process for Lagrangian particle release experiments.
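
A hedged sketch of the two-step pipeline (cluster on a subset, then classify all positions); the choice of KMeans, a random forest, and all counts are illustrative, not the authors' configuration.

```python
# Step 1: unsupervised definition of candidate source areas on a subset.
# Step 2: supervised labelling of all trajectory positions with those areas.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
positions = rng.normal(size=(50_000, 2))        # stand-in lon/lat samples

subset = positions[rng.choice(len(positions), 10_000)]
kmeans = KMeans(n_clusters=8, random_state=0).fit(subset)

labels = kmeans.predict(subset)                 # cluster IDs as training labels
clf = RandomForestClassifier(random_state=0).fit(subset, labels)
area_of_point = clf.predict(positions)          # most probable source area
```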

How to cite: Trahms, C., Handmann, P., Rath, W., Renz, M., and Visbeck, M.: Autonomous Assessment of Source Area Distributions for Sections in Lagrangian Particle Release Experiments, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-5631, https://doi.org/10.5194/egusphere-egu22-5631, 2022.

18:12–18:30

Presentations: Tue, 24 May | Room N1

Chairpersons: Redouane Lguensat, Julien Brajard
08:30–08:36 | EGU22-4493 | ECS | Highlight | Virtual presentation
Maëlle Coulon--Decorzens, Frédérique Cheruy, and Frédéric Hourdin

The tuning or calibration of General Circulation Models (GCMs) is an essential stage for their proper behaviour. Since we need the best climate projections for the regions where we live, the models must be tuned with particular attention to the land surface, bearing in mind that the interactions between the atmosphere and the land surface remain a key source of uncertainty in regional-scale climate projections [1].

For a long time, this tuning has been done by hand, based on scientific expertise, and has not been sufficiently documented [2]. Recent tuning tools offer the possibility to accelerate climate model development, providing a real tuning formalism as well as a new way to understand climate models. High-Tune Explorer is one of these statistical tuning tools, involving machine learning and based on uncertainty quantification. It aims to reduce the range of free parameters that allow realistic model behaviour [3]. A new automatic tuning experiment was developed with this tool for the atmospheric component of the IPSL GCM, LMDZ. It was first tuned at the process level, using several single-column test cases compared to large-eddy simulations, and then at the global level by targeting radiative metrics at the top of the atmosphere [4].

We propose to add a new step to this semi-automatic tuning procedure, targeting atmosphere and land-surface interactions. The first aspect of the proposal is to compare coupled atmosphere-continent simulations (here running LMDZ-ORCHIDEE) with in situ observations from the SIRTA observatory located southwest of Paris. In situ observations provide hourly, jointly collocated data with strong potential for understanding the processes at stake and their representation in the model. These data are also subject to much lower uncertainties than satellite inversions as far as surface observations are concerned. In order to fully benefit from the site observations, the model winds are nudged toward reanalysis. This forces the simulations to follow the actual meteorological sequence, thus allowing comparison between simulations and observations at the process time scale. Removing the errors arising from the representation of large-scale dynamics makes the tuning focus on the representation of physical processes for a given meteorological situation. Finally, the model grid is zoomed in on the SIRTA observatory in order to reduce the computational cost of the simulations while preserving a fine mesh around the observatory.

We show the results of this new tuning step, which succeeds in reducing the domain of acceptable free parameters as well as the dispersion of the simulations. This method, which is less computationally costly than global tuning, is therefore a good way to precondition the latter. It allows the joint tuning of atmospheric and land surface models, traditionally tuned separately [5], and has the advantage of remaining close to the processes and thus improving their understanding.

References:

[1] Cheruy et al., 2014, https://doi.org/10.1002/2014GL061145

[2] Hourdin et al., 2017, https://doi.org/10.1175/BAMS-D-15-00135.1

[3] Couvreux et al., 2021, https://doi.org/10.1029/2020MS002217

[4] Hourdin et al., 2021, https://doi.org/10.1029/2020MS002225

[5] Cheruy et al., 2020, https://doi.org/10.1029/2019MS002005

How to cite: Coulon--Decorzens, M., Cheruy, F., and Hourdin, F.: Semi-automatic tuning procedure for a GCM targeting continental surfaces: a first experiment using in situ observations, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-4493, https://doi.org/10.5194/egusphere-egu22-4493, 2022.

08:36–08:42 | EGU22-9348 | ECS | On-site presentation
Doran Khamis, Matt Fry, Hollie Cooper, Ross Morrison, and Eleanor Blyth

Improving our understanding of soil moisture and hydraulics is crucial for flood prediction, smart agriculture, modelling nutrient and pollutant spread, and evaluating the role of land as a sink or source of carbon and other greenhouse gases. State-of-the-art land surface models rely on poorly resolved soil textural information to parametrise arbitrarily layered soil models; soils rich in organic matter, key to understanding the role of the land in achieving net zero carbon, are not well modelled. Here, we build a predictive data-driven model of soil moisture using a neural network composed of transformer layers to process time series from point sensors (precipitation gauges and sensor-derived estimates of potential evaporation) and convolutional layers to process spatial atmospheric driving data and contextual information (topography, land cover and use, location and catchment behaviour of water bodies). We train the model using data from the COSMOS-UK sensor network and soil moisture satellite products and compare the outputs with JULES to investigate where and why the models diverge. Finally, we predict regions of high peat content and propose a way to combine theory with our data-driven approach to move beyond the sand-silt-clay modelling framework.
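
A rough sketch of a hybrid of this kind (transformer layers for point-sensor time series, convolutions for gridded spatial context), in PyTorch; all shapes and sizes are assumptions rather than the study's configuration.

```python
# Fuse a temporal transformer encoding with a CNN encoding of spatial
# context to predict soil moisture at a sensor site.
import torch
import torch.nn as nn

class SoilMoistureNet(nn.Module):
    def __init__(self, n_series=2, d_model=32, n_ctx_channels=4):
        super().__init__()
        self.embed = nn.Linear(n_series, d_model)
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.temporal = nn.TransformerEncoder(layer, num_layers=2)
        self.spatial = nn.Sequential(
            nn.Conv2d(n_ctx_channels, 16, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.head = nn.Linear(d_model + 16, 1)

    def forward(self, series, context):
        # series: (batch, T, n_series), e.g. precipitation + potential evap.
        # context: (batch, n_ctx_channels, H, W), e.g. topography, land cover
        z_t = self.temporal(self.embed(series)).mean(dim=1)
        z_s = self.spatial(context)
        return self.head(torch.cat([z_t, z_s], dim=1))   # soil moisture
```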

How to cite: Khamis, D., Fry, M., Cooper, H., Morrison, R., and Blyth, E.: Data-driven modelling of soil moisture: mapping organic soils, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-9348, https://doi.org/10.5194/egusphere-egu22-9348, 2022.

08:42–08:48 | EGU22-7093 | ECS | Virtual presentation
Junjiang Liu and Xing Yuan

Accurate streamflow forecasts can provide guidance for reservoir management, which regulates river flows, manages water resources and mitigates flood damage. One popular way to forecast streamflow is to use bias-corrected meteorological forecasts to drive a calibrated hydrological model. For cascade reservoirs, however, such approaches suffer significant deficiencies because of the difficulty of simulating reservoir operations with a physical approach and the uncertainty of meteorological forecasts over small catchments. Another popular way is to forecast streamflow with machine learning methods, which can fit a statistical model without inputs like reservoir operating rules. We therefore integrate meteorological forecasts, a land surface hydrological model and machine learning to forecast hourly streamflow over the Yantan catchment, one of the cascade reservoirs on the Hongshui River, where streamflow is influenced both by upstream reservoir water release and by the rainfall-runoff process within the catchment.

Before evaluating the streamflow forecast system, it is necessary to investigate the skill by means of a series of specific hindcasts that isolate potential sources of predictability, such as meteorological forcing and the initial condition (IC). Here, we use the ensemble streamflow prediction (ESP)/reverse ESP (revESP) method to explore the impact of the IC on hourly streamflow prediction. Results show that the effect of the IC on runoff prediction persists for 16 hours. In the next step, we evaluate the hourly streamflow hindcasts performed by the forecast system during the rainy seasons of 2013-2017. We use European Centre for Medium-Range Weather Forecasts perturbed forecast forcing from the THORPEX Interactive Grand Global Ensemble (TIGGE-ECMWF) as meteorological input. Compared with the ESP, the hydrometeorological ensemble forecast approach reduces probabilistic and deterministic forecast errors by 6% during the first 7 days. After integrating the long short-term memory (LSTM) deep learning method into the system, the deterministic forecast error is further reduced by 6% in the first 72 hours. We also use historically observed streamflow to drive another LSTM model to perform an LSTM-only streamflow forecast. Its skill drops sharply after the first 24 hours, which indicates that the meteorology-hydrology modeling approach improves the streamflow forecast.

How to cite: Liu, J. and Yuan, X.: Reservoir inflow forecast by combining meteorological ensemble forecast, physical hydrological simulation and machine learning, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-7093, https://doi.org/10.5194/egusphere-egu22-7093, 2022.

08:48–08:54 | EGU22-2095 | Virtual presentation
Leiming Ma

Numerical weather prediction (NWP) models are widely used for operational weather forecasting in meteorological centers. NWP models describe the flow of fluids through a set of governing equations, physical parameterization schemes, and initial and boundary conditions; they therefore often face prediction biases due to insufficient data assimilation and the assumptions or approximations of dynamical and physical processes. To make gridded forecasts of rainfall with high confidence, we present a data-driven deep learning model for correcting NWP rainfall, which mainly includes a confidence network and a combinatorial network. A focal loss is introduced to deal with the long-tailed distribution of rainfall; it is expected to alleviate the impact of the large span of rainfall magnitudes by transforming the regression problem into several binary classification problems. The deep learning model is used to correct the gridded rainfall forecasts of the European Centre for Medium-Range Weather Forecasts Integrated Forecasting System global model (ECMWF-IFS), with forecast lead times of 24 h to 240 h, over Eastern China. First, the rainfall forecast correction problem is treated as an image-to-image translation problem. Second, ECMWF-IFS forecasts and rainfall observations from recent years are used as training, validation, and testing datasets. Finally, the correction performance of the new model is evaluated and compared with several classical machine learning algorithms. A set of rainfall forecast error correction experiments shows that the new model can effectively forecast rainfall over the East China region during the flood season of 2020. The experiments also demonstrate that the proposed approach generally performs better at bias correction of rainfall prediction than most classical machine learning approaches.
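
The focal loss itself is standard (Lin et al., 2017); a binary version of the kind the abstract applies to "rainfall exceeds threshold k" classification problems might look like the sketch below, with typical default values for alpha and gamma rather than the paper's settings.

```python
# Binary focal loss: down-weights easy, well-classified samples so the
# rare heavy-rain cases on the tail of the distribution dominate training.
import torch

def binary_focal_loss(logits, targets, alpha=0.25, gamma=2.0):
    # targets: float tensor of 0s and 1s, same shape as logits
    p = torch.sigmoid(logits)
    pt = targets * p + (1 - targets) * (1 - p)          # prob. of true class
    alpha_t = targets * alpha + (1 - targets) * (1 - alpha)
    return (-alpha_t * (1 - pt) ** gamma * torch.log(pt.clamp(min=1e-8))).mean()
```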

How to cite: Ma, L.: A Deep Learning Bias Correction Approach for Rainfall Numerical Prediction, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-2095, https://doi.org/10.5194/egusphere-egu22-2095, 2022.

08:54–09:00 | EGU22-4923 | ECS | Virtual presentation
Philipp Hess, Markus Drüke, Stefan Petri, Felix Strnad, and Niklas Boers

The simulation of precipitation in numerical Earth system models (ESMs) involves various processes on a wide range of scales, requiring high temporal and spatial resolution for realistic simulations. This can lead to biases in computationally efficient ESMs that have a coarse resolution and limited model complexity. Traditionally, these biases are corrected by relating the distributions of historical simulations with observations [1]. While these methods successfully improve the modelled statistics, unrealistic spatial features that require a larger spatial context are not addressed.

Here we apply generative adversarial networks (GANs) [2] to transform precipitation of the CM2Mc-LPJmL ESM [3] into a bias-corrected and more realistic output. Feature attribution shows that the GAN has correctly learned to identify spatial regions with the largest bias during training. Our method presents a general bias correction framework that can be extended to a wider range of ESM variables to create highly realistic but computationally inexpensive simulations of future climates. We also discuss the generalizability of our approach to projections from CMIP6, given that the GAN is only trained on historical data.
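
A highly simplified sketch of adversarial bias correction (a generator corrects ESM precipitation fields, a discriminator compares them with observations); the architectures, sizes and plain BCE objective are placeholder assumptions and omit the constraints of the actual study.

```python
# Minimal GAN training step for field-to-field bias correction.
import torch
import torch.nn as nn

G = nn.Sequential(nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
                  nn.Conv2d(32, 1, 3, padding=1))           # corrects fields
D = nn.Sequential(nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.LeakyReLU(0.2),
                  nn.Flatten(), nn.LazyLinear(1))            # real/fake critic
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

def train_step(esm_precip, obs_precip):                      # (batch, 1, H, W)
    fake = G(esm_precip)
    # Discriminator: push observations toward 1, corrected fields toward 0.
    loss_d = (bce(D(obs_precip), torch.ones(len(obs_precip), 1)) +
              bce(D(fake.detach()), torch.zeros(len(fake), 1)))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()
    # Generator: make corrected fields indistinguishable from observations.
    loss_g = bce(D(fake), torch.ones(len(fake), 1))
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
    return loss_d.item(), loss_g.item()
```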

[1] A.J. Cannon et al. "Bias correction of GCM precipitation by quantile mapping: How well do methods preserve changes in quantiles and extremes?." Journal of Climate 28.17 (2015): 6938-6959.

[2] I. Goodfellow et al. "Generative adversarial nets." Advances in neural information processing systems 27 (2014).

[3] M. Drüke et al. "CM2Mc-LPJmL v1.0: Biophysical coupling of a process-based dynamic vegetation model with managed land to a general circulation model." Geoscientific Model Development 14.6 (2021): 4117–4141.

How to cite: Hess, P., Drüke, M., Petri, S., Strnad, F., and Boers, N.: Constrained Generative Adversarial Networks for Improving Earth System Model Precipitation, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-4923, https://doi.org/10.5194/egusphere-egu22-4923, 2022.

09:00–09:06 | EGU22-8719 | ECS | On-site presentation
Sandy Chkeir, Aikaterini Anesiadou, and Riccardo Biondi

Extreme weather nowcasting has always been a challenging task in meteorology. Many research studies have been conducted to accurately forecast extreme weather events, related to rain rates and/or wind speed thresholds, at spatio-temporal scales. Over the decades, this field has gained attention in the artificial intelligence community, which aims to create more accurate models using the latest algorithms and methods.

In this work, within the H2020 SESAR ALARM project, we aim to nowcast rain and wind speed as target features using different input configurations of the available sources, such as weather stations, lightning detectors, radar, GNSS receivers, radiosondes and radio occultation data. This nowcasting task was first conducted as short-term temporal multi-step forecasting at 14 local stations around Milano Malpensa Airport. In a second step, all stations are combined, so the forecasting becomes a spatio-temporal problem. Concretely, we want to investigate the predicted rain and wind speed values using the different inputs in two scenarios: each station separately, and all stations joined together.

The chaotic nature of the atmosphere, e.g. the non-stationarity of the driving series of each weather feature, makes predictions unreliable and inaccurate, so dealing with these data is a very delicate task. For this reason, we have devoted substantial work to cleaning, feature engineering and preparing the raw data before feeding them into the model architectures. We have preprocessed large amounts of data from the local stations around the airport and studied the feasibility of nowcasting the rain and wind speed targets using the different data sources together. The temporal multivariate driving series have high dimensionality, and we have made multi-step predictions for the defined target functions.

We study and test different machine learning architectures, ranging from simple multilayer perceptrons to convolutional models and Recurrent Neural Networks (RNNs), for temporal and spatio-temporal nowcasting. The Long Short-Term Memory (LSTM) encoder-decoder architecture outperforms the other models, achieving more accurate predictions for each station separately. Furthermore, to predict the targets on a spatio-temporal scale, we will deploy a 2-layer stacked spatio-temporal LSTM model consisting of independent LSTM models per location in the first layer and another LSTM layer to predict the targets multiple steps ahead. Results obtained with the different architectures applied to a dense network of sensors will be reported.
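
A minimal LSTM encoder-decoder sketch for multi-step nowcasting of the kind described; the horizon, dimensions and zero-initialised decoder input are illustrative assumptions.

```python
# Encoder summarises the past window; decoder unrolls the future horizon,
# feeding each prediction back in as the next decoder input.
import torch
import torch.nn as nn

class Seq2Seq(nn.Module):
    def __init__(self, n_features, n_targets, hidden=64, horizon=6):
        super().__init__()
        self.horizon = horizon
        self.encoder = nn.LSTM(n_features, hidden, batch_first=True)
        self.decoder = nn.LSTM(n_targets, hidden, batch_first=True)
        self.proj = nn.Linear(hidden, n_targets)

    def forward(self, x):                      # x: (batch, T_in, n_features)
        _, state = self.encoder(x)             # encode the past
        y = torch.zeros(x.size(0), 1, self.proj.out_features)
        outputs = []
        for _ in range(self.horizon):          # unroll future steps
            out, state = self.decoder(y, state)
            y = self.proj(out)                 # (batch, 1, n_targets)
            outputs.append(y)
        return torch.cat(outputs, dim=1)       # (batch, horizon, n_targets)
```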

How to cite: Chkeir, S., Anesiadou, A., and Biondi, R.: Multi-station Multivariate Multi-step Convection Nowcasting with Deep Neural Networks, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-8719, https://doi.org/10.5194/egusphere-egu22-8719, 2022.

09:06–09:12 | EGU22-3977 | ECS | Highlight | Virtual presentation
Hugo Frezat, Julien Le Sommer, Ronan Fablet, Guillaume Balarac, and Redouane Lguensat

Machine learning techniques are now ubiquitous in the geophysical science community. They have been applied in particular to the prediction of subgrid-scale parametrizations using data that describe small-scale dynamics from large-scale states. However, these models are then used to predict temporal trajectories, which is not covered by this instantaneous mapping. Following the model trajectory during training can be done using an end-to-end approach, where temporal integration is performed using a neural network. As a consequence, the approach optimizes a posteriori metrics, whereas classical instantaneous training is limited to a priori ones. When applied to a specific energy backscatter problem found in quasi-geostrophic turbulent flows, the strategy demonstrates long-term stability and high-fidelity statistical performance, without any increase in computational complexity during rollout. These improvements may question the future development of realistic subgrid-scale parametrizations in favor of differentiable solvers, as required by the a posteriori strategy.
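
Conceptually, a posteriori training replaces the instantaneous loss with a loss accumulated along a differentiable model rollout; the sketch below illustrates this idea only, with `step` a toy placeholder for one differentiable solver step, not an actual solver API.

```python
# Trajectory-level (a posteriori) loss: step the solver with the learned
# closure inside, accumulate the error along the rollout, backpropagate.
import torch

def step(x, closure, dt=0.1):
    # Toy "differentiable solver" step: linear decay plus closure term.
    return x - dt * x + dt * closure

def rollout_loss(closure_net, x0, reference_traj, n_steps):
    x, loss = x0, 0.0
    for k in range(n_steps):
        x = step(x, closure_net(x))           # solver + learned subgrid term
        loss = loss + torch.mean((x - reference_traj[k]) ** 2)
    return loss / n_steps

closure_net = torch.nn.Linear(8, 8)           # stand-in closure model
x0 = torch.randn(8)
reference = [torch.randn(8) for _ in range(5)]
rollout_loss(closure_net, x0, reference, n_steps=5).backward()
```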

How to cite: Frezat, H., Le Sommer, J., Fablet, R., Balarac, G., and Lguensat, R.: Learning quasi-geostrophic turbulence parametrizations from a posteriori metrics, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-3977, https://doi.org/10.5194/egusphere-egu22-3977, 2022.

09:12–09:18 | EGU22-7135 | Highlight | Virtual presentation
Blanka Balogh, David Saint-Martin, and Aurélien Ribes

Unlike the traditional subgrid scale parameterizations used in climate models, current neural network (NN) parameterizations are only tuned offline, by minimizing a loss function on outputs from high resolution models. This approach often leads to numerical instabilities and long-term biases. Here, we propose a method to design tunable NN parameterizations and calibrate them online. The calibration of the NN parameterization is achieved in two steps. First, some model parameters are included within the NN model input. This NN model is fitted at once for a range of values of the parameters, using an offline metric. Second, once the NN parameterization has been plugged into the climate model, the parameters included among the NN inputs are optimized with respect to an online metric quantifying errors on long-term statistics. We illustrate our method with two simple dynamical systems. Our approach significantly reduces long-term biases of the climate model with NN based physics.
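
The online step can be illustrated as a low-dimensional optimisation over the parameters fed to the NN; everything below is a toy stand-in (a noisy damped process replacing "climate model + NN physics"), not the authors' code.

```python
# Online calibration: optimise the NN-input parameters theta against a
# long-term statistic of the coupled model.
import numpy as np
from scipy.optimize import minimize

def run_model(theta, n_steps=1000):
    # Toy stand-in for the climate model with NN physics taking theta as input.
    rng = np.random.default_rng(0)
    x, xs = 0.0, []
    for _ in range(n_steps):
        x = (1 - theta[0]) * x + theta[1] * rng.normal()
        xs.append(x)
    return np.array(xs)

target_variance = 1.0                        # desired long-term statistic

def online_metric(theta):
    return (run_model(theta).var() - target_variance) ** 2

res = minimize(online_metric, x0=np.array([0.5, 0.5]), method="Nelder-Mead")
print("calibrated parameters:", res.x)
```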

How to cite: Balogh, B., Saint-Martin, D., and Ribes, A.: How to calibrate a climate model with neural network based physics?, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-7135, https://doi.org/10.5194/egusphere-egu22-7135, 2022.

09:18–09:24 | EGU22-6479 | ECS | On-site presentation
Benedict Roeder, Jakob Schloer, and Bedartha Goswami

Well-adapted parameters in climate models are essential to make accurate predictions for future projections. In climate science, the record of precise and comprehensive observational data is rather short, and parameters of climate models are often hand-tuned or learned from artificially generated data. Due to limited and noisy data, one wants to use Bayesian models to have access to uncertainties of the inferred parameters. Most popular algorithms for learning parameters from observational data, like the Kalman inversion approach, only provide point estimates of parameters.

In this work, we compare two Bayesian parameter inference approaches applied to the intermediate-complexity model for the El Niño-Southern Oscillation by Zebiak & Cane: i) the "Calibrate, Emulate, Sample" (CES) approach, an extension of the ensemble Kalman inversion which allows posterior inference by emulating the model via Gaussian processes and thereby enables efficient sampling; and ii) the simulation-based inference (SBI) approach, where the approximate posterior distribution is learned from simulated model data and observational data using neural networks.

We evaluate the performance of both approaches by comparing their run times and the number of required model evaluations, assess the scalability with respect to the number of inference parameters, and examine their posterior distributions.
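
For intuition, the simplest member of the simulation-based-inference family, rejection ABC, can be sketched as follows; this is a generic illustration with toy stand-ins for the simulator and prior, not the implementation of either CES or SBI compared in the study.

```python
# Rejection ABC: draw parameters from the prior, simulate, and keep draws
# whose outputs fall close to the observations.
import numpy as np

rng = np.random.default_rng(0)

def simulate(theta):             # toy stand-in for the climate model
    return theta + rng.normal(scale=0.5, size=theta.shape)

def prior_sample():
    return rng.normal(size=2)    # toy Gaussian prior over 2 parameters

def rejection_abc(x_obs, n_draws=100_000, eps=0.5):
    accepted = [theta for theta in (prior_sample() for _ in range(n_draws))
                if np.linalg.norm(simulate(theta) - x_obs) < eps]
    return np.array(accepted)    # samples approximating the posterior

posterior_samples = rejection_abc(x_obs=np.array([0.3, -0.1]))
```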

How to cite: Roeder, B., Schloer, J., and Goswami, B.: Parameter inference and uncertainty quantification for an intermediate complexity climate model, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-6479, https://doi.org/10.5194/egusphere-egu22-6479, 2022.

09:24–09:30 | EGU22-6674 | ECS | Virtual presentation
Ofer Shamir, L. Minah Yang, David S. Connelly, and Edwin P. Gerber

An essential step in implementing any new parameterization is calibration, where the parameterization is adjusted to work with an existing model and yield some desired improvement. In the context of gravity wave (GW) momentum transport, calibration is necessitated by the facts that: (i) Some GWs are always at least partially resolved by the model, and hence a parameterization should only account for the missing waves. Worse, the parameterization may need to correct for the misrepresentation of under-resolved GWs, i.e., coarse vertical resolution can bias GW breaking level, leading to erroneous momentum forcing. (ii) The parameterized waves depend on the resolved solution for both their sources and dissipation, making them susceptible to model biases. Even a "perfect" parameterization could then yield an undesirable result, e.g., an unrealistic Quasi-Biennial Oscillation (QBO).  While model-specific calibration is required, one would like a general "recipe" suitable for most models. From a practical point of view, the adoption of a new parameterization will be hindered by a too-demanding calibration process. This issue is of particular concern in the context of data-driven methods, where the number of tunable degrees of freedom is large (possibly in the millions). Thus, more judicious ways for addressing the calibration step are required. 

To address the above issues, we develop a 1D QBO model, where the "true" gravity wave momentum deposition is determined from a source distribution and critical level breaking, akin to a traditional physics-based GW parameterization. The control parameters associated with the source consist of the total wave flux (related to the total precipitation for convectively generated waves) and the spectrum width (related to the depth of convection). These parameters can be varied to mimic the variability in GW sources between different models, i.e., biases in precipitation variability. In addition, the model's explicit diffusivity and vertical advection can be varied to mimic biases in model numerics and circulation, respectively. The model thus allows us to assess the ability of a data-driven parameterization to (i) extrapolate, capturing the response of GW momentum transport to a change in the model parameters, and (ii) be calibrated, i.e., adjusted to maintain the desired simulation of the QBO in response to a change in the model parameters. The first property is essential for a parameterization to be used for climate prediction; the second, for a parameterization to be used at all. We focus in particular on emulators of the GW momentum transport based on neural networks and regression trees, contrasting their ability to satisfy both of these goals.

How to cite: Shamir, O., Yang, L. M., Connelly, D. S., and Gerber, E. P.: The gravity wave parameterization calibration problem: A 1D QBO model testbed, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-6674, https://doi.org/10.5194/egusphere-egu22-6674, 2022.

09:30–09:36 | EGU22-5766 | ECS | Virtual presentation
Lucia Yang and Edwin Gerber

With the goal of developing a data-driven parameterization of unresolved gravity wave (GW) momentum transport for use in general circulation models (GCMs), we investigate neural network architectures that emulate the Alexander-Dunkerton 1999 (AD99) scheme, an existing physics-based GW parameterization. We analyze the distribution of errors as functions of shear-related metrics in an effort to diagnose the disparity between online and offline performance of the trained emulators, and we develop a sampling algorithm to treat biases in the tails of the distribution without adversely impacting mean performance.

It has been shown in previous efforts [1] that stellar offline performance does not necessarily guarantee adequate online performance, or even stability. Error analysis reveals that the majority of the samples are learned quickly, while some stubborn samples remain poorly represented. We find that the more error-prone samples are those with wind profiles that have large shears; this is consistent with physical intuition, as gravity waves encounter a wider range of critical levels when experiencing large shear, and parameterizing gravity waves for these samples is therefore a more difficult, complex task. To remedy this, we develop a sampling strategy that performs a parameterized histogram equalization, a concept borrowed from 1D optimal transport.

The sampling algorithm uses a linear mapping from the original histogram to a more uniform histogram parameterized by $t \in [0,1]$, where $t=0$ recovers the original distribution and $t=1$ enforces a completely uniform distribution. A given value $t$ assigns each bin a new probability, which we then use to sample from each bin. If the new probability is smaller than the original, we invoke sampling without replacement, limited to a reduced number consistent with the new probability. If the new probability is larger than the original, we repeat all the samples in the bin up to some predetermined maximum repeat value (a threshold to avoid extreme oversampling at the tails). We optimize this sampling algorithm with respect to $t$, the maximum repeat value, and the number and distribution (uniform or not) of the histogram bins. The ideal combination of these parameters yields errors that are closer to a constant function of the shear metrics while maintaining high accuracy over the whole dataset. Although we study the performance of this algorithm in the context of training a gravity wave parameterization emulator, the strategy can be used for learning any dataset with a long-tailed distribution where the rare samples are associated with low accuracy. Such datasets are prevalent in Earth system dynamics: the launching of gravity waves and extreme events like hurricanes and heat waves are just a few examples.
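
A sketch of this parameterized histogram equalization for a 1-D shear metric, following the description above; the uniform-width bins and default values are assumptions to be optimized as discussed.

```python
# Resample indices so the binned distribution of s moves a fraction t of
# the way toward uniform, with oversampling capped by max_repeat.
import numpy as np

def resample_indices(s, t=0.5, n_bins=20, max_repeat=5, seed=0):
    rng = np.random.default_rng(seed)
    edges = np.linspace(s.min(), s.max(), n_bins + 1)
    which = np.clip(np.digitize(s, edges[1:-1]), 0, n_bins - 1)
    n = len(s)
    chosen = []
    for b in range(n_bins):
        members = np.where(which == b)[0]
        if len(members) == 0:
            continue
        p_new = (1 - t) * len(members) / n + t / n_bins   # toward uniform
        target = int(round(p_new * n))
        if target <= len(members):
            # Undersample without replacement.
            chosen.append(rng.choice(members, size=target, replace=False))
        else:
            # Oversample with replacement, capped to avoid extreme repeats.
            target = min(target, max_repeat * len(members))
            chosen.append(rng.choice(members, size=target, replace=True))
    return np.concatenate(chosen)
```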

[1] Espinosa, Z. I., A. Sheshadri, G. R. Cain, E. P. Gerber, and K. J. DallaSanta, 2021: A Deep Learning Parameterization of Gravity Wave Drag Coupled to an Atmospheric Global Climate Model, Geophys. Res. Lett., in review. [https://edwinpgerber.github.io/files/espinosa_etal-GRL-revised.pdf]

How to cite: Yang, L. and Gerber, E.: Sampling strategies for data-driven parameterization of gravity wave momentum transport, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-5766, https://doi.org/10.5194/egusphere-egu22-5766, 2022.

09:36–09:42
|
EGU22-6859
|
On-site presentation
Dmitri Kondrashov

All oceanic general circulation models (GCMs) include parametrizations of the unresolved subgrid-scale (eddy) effects on the large-scale motions, even at so-called eddy-permitting resolutions. Among the many problems associated with the development of accurate and efficient eddy parametrizations, one is a reliable decomposition of a turbulent flow into resolved and unresolved (subgrid) scale components. Finding an objective way to separate eddies is a fundamental, critically important and unresolved problem.
Here, a statistically consistent correlation-based flow decomposition method (CBD), which employs a Gaussian filtering kernel with geographically varying topology – consistent with the observed local spatial correlations – achieves the desired scale separation. CBD is demonstrated for an eddy-resolving solution of the classical midlatitude double-gyre quasigeostrophic (QG) circulation, which possesses two asymmetric gyres of opposite circulation and a strong meandering eastward jet, akin to the Gulf Stream in the North Atlantic and the Kuroshio in the North Pacific. CBD facilitates a comprehensive analysis of the feedbacks of eddies on the large-scale flow via the transient part of the eddy forcing. A 'product integral' based on the time-lagged correlation between the diagnosed eddy forcing and the evolving large-scale flow uncovers a robust 'eddy backscatter' mechanism. Data-driven augmentation of a non-eddy-resolving ocean model by stochastically emulated eddy fields makes it possible to restore the missing eddy-driven features, such as the merging western boundary currents, their eastward extension and the low-frequency variability of the gyres.

  • N. Agarwal, E. A. Ryzhov, D. Kondrashov, and P. S. Berloff, 2021: Correlation-based flow decomposition and statistical analysis of the eddy forcing, Journal of Fluid Mechanics, 924, A5, doi:10.1017/jfm.2021.604

  • N. Agarwal, D. Kondrashov, P. Dueben, E. A. Ryzhov, and P. S. Berloff, 2021: A comparison of data-driven approaches to build low-dimensional ocean models, Journal of Advances in Modeling Earth Systems, doi:10.1029/2021MS002537
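
For intuition only, the scale-separation step can be illustrated with a toy fixed-width Gaussian filter (CBD itself uses a geographically varying kernel topology tied to the observed local correlations, which this sketch does not attempt):

```python
# Split a 2D flow field into a large-scale part and an 'eddy' residual.
import numpy as np
from scipy.ndimage import gaussian_filter

psi = np.random.default_rng(1).standard_normal((256, 256))  # stand-in for a streamfunction
psi_large = gaussian_filter(psi, sigma=8)   # resolved, large-scale component
psi_eddy = psi - psi_large                  # unresolved (subgrid) eddy component
```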

 

How to cite: Kondrashov, D.: Towards physics-informed stochastic parametrizations of subgrid physics in ocean models, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-6859, https://doi.org/10.5194/egusphere-egu22-6859, 2022.

09:42–09:48
|
EGU22-8279
|
ECS
|
Virtual presentation
Ihor Hromov, Georgy Shapiro, Jose Ondina, Sanjay Sharma, and Diego Bruciaferri

For ocean models, increasing spatial resolution is a matter of significant importance and thorough research. Computational resources limit how far the model resolution can be increased. This constraint weighs especially heavily on traditional dynamical models, for which an increase of a factor of two in horizontal resolution results in simulation times that are approximately ten times longer. One potential way to relax this limitation is to use Artificial Intelligence methods, such as Neural Networks (NNs). In this research, NNs are applied to ocean circulation modelling; more specifically, they are used on data output from the dynamical model to increase the spatial resolution of the model output. The main dataset is Sea Surface Temperature data at 0.05- and 0.02-degree horizontal resolution for the Irish Sea.

Several NN architectures were applied to address the task: Generative Adversarial Networks (GANs), Convolutional Neural Networks (CNNs) and Multi-level Wavelet CNNs. These are used in other fields for problems related to increasing resolution. The work will contrast and compare the methods and present a provisional assessment of the efficiency of each.
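
As a rough sketch of the CNN-based variant (our own illustration; the layer sizes and the integer upscaling factor are hypothetical, not the configuration used in this work), a sub-pixel convolution network mapping a coarse SST tile to a finer grid could look like:

```python
# Minimal PyTorch sketch of a super-resolution CNN for SST fields.
import torch
import torch.nn as nn

class SRNet(nn.Module):
    def __init__(self, scale=2):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(1, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, scale * scale, 3, padding=1),
            nn.PixelShuffle(scale),   # rearranges channels into finer pixels
        )

    def forward(self, sst_coarse):
        return self.body(sst_coarse)

model = SRNet(scale=2)
coarse = torch.randn(8, 1, 64, 64)    # batch of coarse SST tiles
fine = model(coarse)                  # -> (8, 1, 128, 128)
```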

How to cite: Hromov, I., Shapiro, G., Ondina, J., Sharma, S., and Bruciaferri, D.: Using deep learning to improve the spatial resolution of the ocean model, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-8279, https://doi.org/10.5194/egusphere-egu22-8279, 2022.

09:48–10:00
Coffee break
Chairpersons: Julien Brajard, Alejandro Coca-Castro
10:20–10:26
|
EGU22-11420
|
ECS
|
On-site presentation
|
Redouane Lguensat, Julie Deshayes, and Venkatramani Balaji

The process of relying on experience and intuition to find good sets of parameters, commonly referred to as "parameter tuning", retains a central role in the roadmaps followed by the dozens of modeling groups involved in community efforts such as the Coupled Model Intercomparison Project (CMIP).

In this work, we study a tool from the Uncertainty Quantification community that has recently started to draw attention in climate modeling: History Matching, also referred to as "Iterative Refocussing". The core idea of History Matching is to run several simulations with different sets of parameters and then use observed data to rule out any parameter settings that are "implausible". Since climate simulation models are computationally heavy and do not allow testing every possible parameter setting, we employ an emulator as a cheap and accurate replacement; here, a machine learning algorithm, namely Gaussian Process Regression, is used for the emulation step. History Matching is thus a good example of how recent advances in machine learning can be of high interest to climate modeling.

One objective of this study is to evaluate the potential of History Matching to tune a climate system with multi-scale dynamics. Using a toy climate model, namely the Lorenz 96 model, and producing experiments in a perfect-model setting, we explore different types of applications of History Matching and highlight the strengths and challenges of using such a technique.
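
A minimal sketch of one history-matching wave is given below (our own illustration on a one-parameter toy function, not the authors' Lorenz 96 setup): a Gaussian Process emulator is fitted to a few expensive runs, and candidate parameter settings are ruled out where the implausibility exceeds the conventional threshold of 3.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

rng = np.random.default_rng(0)
f = lambda theta: np.sin(3 * theta) + theta          # stand-in for an expensive model metric
theta_train = rng.uniform(0, 2, size=(20, 1))        # a few expensive simulations
gp = GaussianProcessRegressor().fit(theta_train, f(theta_train).ravel())

z, var_obs = 1.2, 0.05 ** 2                          # observation and its error variance
theta_cand = np.linspace(0, 2, 500).reshape(-1, 1)
mean, std = gp.predict(theta_cand, return_std=True)
impl = np.abs(z - mean) / np.sqrt(std ** 2 + var_obs)  # implausibility I(theta)
not_ruled_out = theta_cand[impl < 3.0]               # settings kept for the next wave
```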

How to cite: Lguensat, R., Deshayes, J., and Balaji, V.: History Matching for the tuning of coupled models: experiments on the Lorenz 96 model, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-11420, https://doi.org/10.5194/egusphere-egu22-11420, 2022.

10:26–10:32
|
EGU22-5681
|
ECS
|
On-site presentation
|
Sebastian Hoffmann, Yi Deng, and Christian Lessig

The predictability of the atmosphere is a classical problem that has received much attention from both a theoretical and practical point of view. In this work, we propose to use a purely data-driven method based on a neural network to revisit the problem. The analysis is built upon the recently introduced AtmoDist network, which has been trained on high-resolution reanalysis data to provide a probabilistic estimate of the temporal difference between given atmospheric fields, represented by vorticity and divergence. We define the skill of the network for this task as a new measure of atmospheric predictability, hypothesizing that the prediction of the temporal differences by the network will be more susceptible to errors when the atmospheric state is intrinsically less predictable. Preliminary results show that for short timescales (3-48 hours) one sees enhanced predictability in the warm season compared to the cool season over northern midlatitudes, and lower predictability over the ocean compared to land. These findings support the hypothesis that, across short timescales, AtmoDist relies on the recurrence of mesoscale convection with coherent spatiotemporal structures to connect spatial evolutions to temporal differences. For example, the prevalence of mesoscale convective systems (MCSs) over the central US in the boreal warm season can explain the increase of mesoscale predictability there, and oceanic zones marked by greater predictability correspond well to regions of elevated convective activity such as the Pacific ITCZ. Given the dependence of atmospheric predictability on geographic location, season, and, most importantly, timescale, we further apply the method to synoptic scales (2-10 days), where the excitation and propagation of large-scale disturbances such as Rossby wave packets are expected to provide the connection between temporal and spatial differences. The design of the AtmoDist network is thereby adapted to the prediction range; for example, the size of the local patches that serve as input to AtmoDist is chosen based on the spatiotemporal atmospheric scales that provide the expected time and space connections.
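
Schematically, the pretext task can be read as a classification of the time lag between two atmospheric states; a minimal sketch (hypothetical shapes and layer sizes, not the actual AtmoDist architecture) is:

```python
# A CNN ingests two stacked fields and classifies the time lag between them;
# its per-sample skill is then read as a local predictability measure.
import torch
import torch.nn as nn

n_lags = 16                                      # e.g. 3-48 h in 3 h steps
net = nn.Sequential(
    nn.Conv2d(4, 32, 3, padding=1), nn.ReLU(),   # 2 states x (vorticity, divergence)
    nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(64, n_lags),
)
pair = torch.randn(8, 4, 64, 64)                 # batch of stacked field pairs
logits = net(pair)                               # probabilistic estimate of the lag
```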

By providing the community with a powerful, purely data-driven technique for quantifying, evaluating, and interpreting predictability, our work lays the foundation for efficiently detecting the existence of sub-seasonal to seasonal (S2S) predictability and, by further analyzing the mechanism of AtmoDist, understanding its physical origins, which bears major scientific and socioeconomic significance.

How to cite: Hoffmann, S., Deng, Y., and Lessig, C.: AtmoDist as a new pathway towards quantifying and understanding atmospheric predictability, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-5681, https://doi.org/10.5194/egusphere-egu22-5681, 2022.

10:32–10:38
|
EGU22-2058
|
ECS
|
Presentation form not yet defined
Rüdiger Brecht and Alexander Bihlo

Ensemble prediction systems are an invaluable tool for weather prediction. In practice, ensemble predictions are obtained by running several perturbed numerical simulations. However, these systems are associated with a high computational cost and often involve statistical post-processing steps to improve their quality.
Here we propose to use a deep-learning-based algorithm to learn the statistical properties of a given ensemble prediction system, such that this system will no longer be needed to simulate future ensemble forecasts. This way, the high computational costs of the ensemble prediction system can be avoided while the statistical properties are still obtained from a single deterministic forecast. We show preliminary results demonstrating the ensemble prediction properties for a shallow-water unstable-jet simulation on the sphere.

How to cite: Brecht, R. and Bihlo, A.: Deep learning for ensemble forecasting, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-2058, https://doi.org/10.5194/egusphere-egu22-2058, 2022.

10:38–10:44
|
EGU22-9734
|
On-site presentation
Cesar Beneti, Jaqueline Silveira, Leonardo Calvetti, Rafael Inouye, Lissette Guzman, Gustavo Razera, and Sheila Paz

In South America, the southern parts of Brazil, Paraguay and northeast Argentina are regions particularly prone to high impact weather (intense lightning activity, high precipitation, hail, flash floods and occasional tornadoes), mostly associated with extra-tropical cyclones, frontal systems and Mesoscale Convective Systems. In the south of Brazil, the agricultural industry and electrical power generation are the main economic activities. This region is responsible for 35% of all hydro-power energy production in the country, with long transmission lines to the main consumer regions, which are severely affected by these extreme weather conditions. Intense precipitation events are a common cause of electricity outages in southern Brazil, which also ranks among the regions of Brazil with the highest annual lightning incidence. Accurate precipitation forecasts can mitigate this kind of problem. Despite improvements in precipitation estimates and forecasts, some difficulties remain in increasing accuracy, mainly related to the temporal and spatial location of the events. Although several options are available, it is difficult to identify which deterministic forecast is the best or the most reliable. Probabilistic products from large ensemble prediction systems give forecasters guidance on how confident they should be about the deterministic forecast, and one approach is to use post-processing methods such as machine learning (ML), which has been used to identify patterns in historical data and correct systematic ensemble biases.

In this paper, we present a study in which we used 20 members from the Global Ensemble Forecast System (GEFS) and 50 members from the European Centre for Medium-Range Weather Forecasts (ECMWF) during 2019-2021, for seven daily precipitation thresholds: 0-1.0mm, 1.0-15mm, 15-40mm, 40-55mm, 55-105mm, 105-155mm and over 155mm. An ML algorithm was developed for each forecast day, up to 15 days ahead, and several skill scores were calculated for these daily precipitation thresholds. Initially, to select the best members of the ensembles, a gradient boosting algorithm was applied in order to improve the skill of the model and reduce processing time. After preprocessing the data, a random forest classifier was used to train the model. Based on hyperparameter sensitivity tests, the random forest required 500 trees, a maximum tree depth of 12 levels, at least 20 samples per leaf node, and the minimization of entropy for splits. To evaluate the models, we used cross-validation on a limited data sample; the procedure has a single parameter, the number of groups into which a given data sample is split. In our work we created a twenty-six-fold cross-validation with 30 days per fold to verify the forecasts. The results obtained by the random forest were evaluated by comparing estimated against observed values. Over the forecast range, we found precision above 75% in the first 3 days and around 68% on subsequent days. Recall was around 80% throughout the entire forecast range, with promising results for applying this technique operationally, which is our intent in the near future.
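
A sketch of the classifier stage with the hyperparameters quoted above is given below (the feature matrix and labels are synthetic placeholders for the ensemble-member precipitation and the observed precipitation class):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.gamma(2.0, 10.0, size=(5000, 70))   # 20 GEFS + 50 ECMWF members
y = rng.integers(0, 7, size=5000)           # 7 daily precipitation classes

rf = RandomForestClassifier(
    n_estimators=500,        # 500 trees
    max_depth=12,            # maximum tree depth of 12 levels
    min_samples_leaf=20,     # at least 20 samples per leaf node
    criterion="entropy",     # entropy minimized for splits
)
rf.fit(X, y)
```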

How to cite: Beneti, C., Silveira, J., Calvetti, L., Inouye, R., Guzman, L., Razera, G., and Paz, S.: High Impact Weather Forecasts in Southern Brazil using Ensemble Precipitation Forecasts and Machine Learning, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-9734, https://doi.org/10.5194/egusphere-egu22-9734, 2022.

10:44–10:50
|
EGU22-12765
|
ECS
|
Virtual presentation
|
Daniel Ayers, Jack Lau, Javier Amezcua, Alberto Carrassi, and Varun Ojha

Weather and climate are well-known exemplars of chaotic systems exhibiting extreme sensitivity to initial conditions. Initial condition errors are subject to exponential growth on average, but the rate and character of such growth are highly state-dependent. In an ideal setting where the degree of predictability of the system is known in real time, it may be possible and beneficial to take adaptive measures. For instance, a local decrease in predictability may be counteracted by increasing the time or space resolution of the model computation, or the ensemble size in the context of ensemble-based data assimilation or probabilistic forecasting.

Local Lyapunov exponents (LLEs) describe growth rates along a finite-time section of a system trajectory. This makes the LLEs ideal quantities for measuring the local degree of predictability, yet a main bottleneck for their real-time use in operational scenarios is their huge computational cost: calculating LLEs involves computing a long trajectory of the system, propagating perturbations with the tangent linear model, and repeatedly orthogonalising them. We investigate whether machine learning (ML) methods can estimate the LLEs based only on information from the system's solution, thus avoiding the need to evolve perturbations via the tangent linear model. We test the ability of four algorithms (regression tree, multilayer perceptron, convolutional neural network and long short-term memory network) to perform this task in two prototypical low-dimensional chaotic dynamical systems. Our results suggest that the accuracy of the ML predictions is highly dependent upon the nature of the distribution of the LLE values in phase space: large prediction errors occur in regions of the attractor where the LLE values are highly non-smooth. In line with classical dynamical systems studies, the neutral LLE is more difficult to predict. We show that a comparatively simple regression tree can achieve performance similar to sophisticated neural networks, and that the success of ML strategies for exploiting the temporal structure of data depends on the system dynamics.
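
For reference, the sketch below illustrates the expensive computation the ML models aim to bypass: Benettin-style local Lyapunov exponents for Lorenz-63 from tangent-linear propagation with repeated QR re-orthogonalization (our own minimal version, with forward Euler for brevity):

```python
import numpy as np

def lorenz(x, s=10.0, r=28.0, b=8.0 / 3.0):
    return np.array([s * (x[1] - x[0]), x[0] * (r - x[2]) - x[1], x[0] * x[1] - b * x[2]])

def jacobian(x, s=10.0, r=28.0, b=8.0 / 3.0):
    return np.array([[-s, s, 0.0], [r - x[2], -1.0, -x[0]], [x[1], x[0], -b]])

def local_lyapunov(x0, window=200, dt=0.01):
    x, Q = x0.copy(), np.eye(3)
    sums = np.zeros(3)
    for _ in range(window):
        x = x + dt * lorenz(x)               # forward Euler step of the system
        Q = Q + dt * jacobian(x) @ Q         # tangent-linear propagation
        Q, R = np.linalg.qr(Q)               # repeated re-orthogonalization
        sums += np.log(np.abs(np.diag(R)))   # accumulate local growth rates
    return sums / (window * dt)

print(local_lyapunov(np.array([1.0, 1.0, 1.0])))  # tends to ~(0.9, 0.0, -14.6) for long windows
```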

How to cite: Ayers, D., Lau, J., Amezcua, J., Carrassi, A., and Ojha, V.: Supervised machine learning to estimate instabilities in chaotic systems: computation of local Lyapunov exponents, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-12765, https://doi.org/10.5194/egusphere-egu22-12765, 2022.

10:50–10:56
|
EGU22-5980
|
Highlight
|
Virtual presentation
George Miloshevich, Valerian Jacques-Dumas, Pierre Borgnat, Patrice Abry, and Freddy Bouchet
Extreme events such as storms, floods, cold spells and heat waves are expected to have an increasing societal impact with climate change. However, the study of rare events is complicated by the computational cost of highly complex models and the lack of observations. With the help of machine learning, synthetic models for forecasting can be constructed and cheaper resampling techniques can be developed. Consequently, this may also clarify the more regional impacts of climate change.

In this work, we perform a detailed analysis of how deep neural networks (DNNs) can be used in intermediate-range forecasting of prolonged heat waves lasting several weeks over synoptic spatial scales. In particular, we train a convolutional neural network (CNN) on 7200 years of output from a climate model simulation. As such, we are interested in probabilistic prediction (the committor function of transition path theory). We thus discuss appropriate forecasting scores such as the Brier skill score, which is popular in weather prediction, and the cross-entropy skill, which is based on information-theoretic considerations. They allow us to measure the success of various architectures and to investigate more efficient pipelines for extracting predictions from physical observables such as geopotential, temperature and soil moisture. A priori, the committor is hard to visualize, as it is a high-dimensional function of its inputs, the grid points of the climate model for a given field. Fortunately, we can construct composite maps conditioned on its values, which reveal that the CNN is likely relying on the global teleconnection patterns of geopotential. The soil moisture signal, on the other hand, is more localized, with predictive capability at much longer lead times (at least a month); this relates to soil-atmosphere interactions. One expects the performance of DNNs to improve greatly with more data, and we provide a quantitative assessment of this. In addition, we offer more details on how the undersampling of negative events affects knowledge of the committor function. We show that transfer learning helps ensure that the committor is a smooth function along the trajectory, an important quality for when such a committor is applied in rare-event algorithms for importance sampling.
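
The two scores mentioned above take only a few lines (a minimal sketch with hypothetical inputs: q is the predicted committor, i.e., the heat-wave probability, and o the binary outcome, with climatology as the reference forecast):

```python
import numpy as np

def brier_skill_score(q, o):
    bs = np.mean((q - o) ** 2)
    bs_clim = np.mean((np.mean(o) - o) ** 2)     # climatology as reference
    return 1.0 - bs / bs_clim

def cross_entropy_skill(q, o, eps=1e-12):
    q = np.clip(q, eps, 1 - eps)
    ce = -np.mean(o * np.log(q) + (1 - o) * np.log(1 - q))
    p = np.mean(o)                               # climatological base rate
    ce_clim = -(p * np.log(p) + (1 - p) * np.log(1 - p))
    return 1.0 - ce / ce_clim
```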
 
While DNNs are universal function approximators, the issue of extrapolation can be somewhat problematic. To address this question, we train a CNN on a dataset generated from a simulation without a diurnal cycle, in which the feedbacks between soil moisture and heat waves appear to be significantly stronger. Nevertheless, when the CNN with the given weights is validated on a dataset generated from a simulation with a daily cycle, the predictions generalize relatively well, despite a small reduction in skill. This generality validates the approach.
 

How to cite: Miloshevich, G., Jacques-Dumas, V., Borgnat, P., Abry, P., and Bouchet, F.: Probabilistic forecasting of heat waves with deep learning, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-5980, https://doi.org/10.5194/egusphere-egu22-5980, 2022.

10:56–11:02
|
EGU22-12628
|
ECS
|
Virtual presentation
|
|
Clara Hauke, Bodo Ahrens, and Clementine Dalelane

Recently, an increase in the forecast skill of seasonal climate forecasts for winter in Europe has been achieved through an ensemble subsampling approach: the winter-mean North Atlantic Oscillation (NAO) index is predicted through linear regression (based on the autumn state of four predictors: sea surface temperature, Arctic sea-ice volume, Eurasian snow depth and stratospheric temperature), and the ensemble members that reproduce this NAO state are selected. This thesis shows that the statistical prediction of the NAO index can be further improved via nonlinear methods using the same predictor variables as in the linear approach, which likely also increases seasonal climate forecast skill. The data used for the calculations stem from the ECMWF ERA5 global reanalysis. The available time span covered only 40 years, from 1980 to 2020, so it was important to use a method that still yields statistically significant and meaningful results under those circumstances. The nonlinear method chosen was k-nearest neighbors, a simple yet powerful algorithm when little data are available; compared to other methods like neural networks, it is easy to interpret. The resulting method was developed and tested in a double cross-validation setting. While sea ice in the Barents-Kara Seas in September-October shows the most predictive capability for the NAO index in the subsequent winter as a single predictor, the highest forecast skill is achieved through a combination of different predictor variables.
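
A minimal sketch of the nonlinear step (our own illustration with a synthetic predictor matrix; the actual predictor construction and the outer validation loop are omitted): a k-nearest-neighbors regression of the winter NAO index on the four autumn predictors, with the neighbourhood size selected by inner cross-validation as in a double cross-validation setup:

```python
import numpy as np
from sklearn.neighbors import KNeighborsRegressor
from sklearn.model_selection import GridSearchCV, LeaveOneOut
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.standard_normal((40, 4))   # 40 autumns: SST, sea-ice volume, snow depth, strat. T
y = rng.standard_normal(40)        # winter-mean NAO index

model = GridSearchCV(
    make_pipeline(StandardScaler(), KNeighborsRegressor()),
    {"kneighborsregressor__n_neighbors": range(1, 11)},   # inner CV picks k
    cv=LeaveOneOut(), scoring="neg_mean_squared_error",
)
model.fit(X, y)
```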

How to cite: Hauke, C., Ahrens, B., and Dalelane, C.: Prediction of the North Atlantic Oscillation index for the winter months December-January-February via nonlinear methods, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-12628, https://doi.org/10.5194/egusphere-egu22-12628, 2022.

11:02–11:08
|
EGU22-13228
|
ECS
|
Highlight
|
On-site presentation
|
Rachel Furner, Peter Haynes, Dan Jones, Dave Munday, Brooks Paige, and Emily Shuckburgh

The recent boom in machine learning and data science has led to a number of new opportunities in the environmental sciences. In particular, process-based weather and climate models (simulators) represent the best tools we have to predict, understand and potentially mitigate the impacts of climate change and extreme weather. However, these models are incredibly complex and require huge amounts of High Performance Computing resources. Machine learning offers opportunities to greatly improve the computational efficiency of these models by developing data-driven emulators.

Here I discuss recent work to develop a data-driven model of the ocean, an integral part of the weather and climate system. Much recent progress has been made in developing data-driven forecast systems for atmospheric weather, highlighting the promise of such systems. These techniques can also be applied to the ocean; however, ocean modelling poses some fundamentally different challenges compared to modelling the atmosphere. For example, oceanic flow is bathymetrically constrained across a wide range of spatial and temporal scales.

We train a neural network on the output from an expensive process-based simulator of an idealised channel configuration of oceanic flow. We show that the model learns the complex dynamics of the system well, replicating the mean flow and details within the flow over single prediction steps. We also see that when the model is iterated, predictions remain stable and continue to match the 'truth' over a short-term forecast period, here around a week.

 

How to cite: Furner, R., Haynes, P., Jones, D., Munday, D., Paige, B., and Shuckburgh, E.: Developing a data-driven ocean forecast system, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-13228, https://doi.org/10.5194/egusphere-egu22-13228, 2022.

11:08–11:14
|
EGU22-654
|
On-site presentation
Said Ouala, Bertrand Chapron, Fabrice Collard, Lucile Gaultier, and Ronan Fablet

When considering the modeling of dynamical systems, the increasing interest in machine learning, artificial intelligence and more generally, data-driven representations, as well as the increasing availability of data, motivated the exploration and definition of new identification techniques. These new data-driven representations aim at solving modern questions regarding the modeling, the prediction and ultimately, the understanding of complex systems such as the ocean, the atmosphere and the climate. 

In this work, we focus on one question regarding the ability to define a (deterministic) dynamical model from a sequence of observations. We focus on sea surface observations and show that these observations typically relate to some, but not all, components of the underlying state space, making the derivation of a deterministic model in the observation space impossible. In this context, we formulate the identification problem as the definition, from data, of an embedding of the observations, parameterized by a differential equation. Compared to state-of-the-art techniques based on delay embedding and linear decomposition of the underlying operators, the proposed approach benefits from the advances in machine learning and dynamical systems theory to define, constrain and tune the reconstructed state space and the approximate differential equation. Furthermore, the proposed embedding methodology naturally extends to cases in which a dynamical prior (derived, for example, from physical principles) is known, leading to relevant physics-informed data-driven models.

How to cite: Ouala, S., Chapron, B., Collard, F., Gaultier, L., and Fablet, R.: On the derivation of data-driven models for partially observed systems, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-654, https://doi.org/10.5194/egusphere-egu22-654, 2022.

11:14–11:20
|
EGU22-5219
|
ECS
|
Highlight
|
Virtual presentation
Maximilian Gelbrecht and Niklas Boers

When predicting complex systems such as parts of the Earth system, one typically relies on differential equations that are often incomplete, missing unknown influences or higher-order effects. Using the universal differential equations framework, we can augment the equations with artificial neural networks that compensate for these deficiencies. We show that this can be used to predict the dynamics of high-dimensional spatiotemporally chaotic partial differential equations, such as the ones describing atmospheric dynamics. As a first step towards a hybrid atmospheric model, we investigate the Marshall-Molteni quasigeostrophic model in the form of a neural partial differential equation. We use it in synthetic examples where parts of the governing equations are replaced with artificial neural networks (ANNs) and demonstrate how the ANNs can recover those terms.
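
Schematically, a universal differential equation combines a known tendency with an ANN correction and trains through the integrator; the sketch below (hypothetical dimensions and placeholder physics, far simpler than the Marshall-Molteni model) shows the pattern:

```python
# PyTorch sketch: tendency = known physics + neural network stand-in for
# the missing terms; the whole right-hand side is integrated and trained end to end.
import torch
import torch.nn as nn

class HybridRHS(nn.Module):
    def __init__(self, n):
        super().__init__()
        self.nn_term = nn.Sequential(nn.Linear(n, 64), nn.Tanh(), nn.Linear(64, n))

    def known_physics(self, u):
        return -0.1 * u                      # placeholder for the resolved dynamics

    def forward(self, u):
        return self.known_physics(u) + self.nn_term(u)

def rollout(rhs, u0, dt=0.01, steps=100):    # simple Euler integration for brevity
    u, traj = u0, [u0]
    for _ in range(steps):
        u = u + dt * rhs(u)
        traj.append(u)
    return torch.stack(traj)

rhs = HybridRHS(n=8)
loss = ((rollout(rhs, torch.randn(8)) - torch.zeros(101, 8)) ** 2).mean()
loss.backward()                              # gradients flow through the integrator
```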

How to cite: Gelbrecht, M. and Boers, N.: Neural Partial Differential Equations for Atmospheric Dynamics, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-5219, https://doi.org/10.5194/egusphere-egu22-5219, 2022.

11:20–11:26
|
EGU22-4062
|
ECS
|
On-site presentation
Peter Mlakar, Davide Bonaldo, Antonio Ricchi, Sandro Carniel, and Matjaž Ličer

We present a numerically cheap machine-learning model which accurately emulates the performances of the surface wave model Simulating WAves Near Shore (SWAN) in the Adriatic basin (north-east Mediterranean Sea).

A ResNet50-inspired deep network architecture with customized spatio-temporal attention layers was used, the network being trained on a 1970-1997 dataset of time-dependent features based on wind fields retrieved from the COSMO-CLM regional climate model (the authors acknowledge Dr. Edoardo Bucchignani, Meteorology Laboratory, Centro Italiano Ricerche Aerospaziali (CIRA), Capua, Italy, for providing the COSMO-CLM wind fields). SWAN surface wave model outputs for the period 1970-1997 are used as labels. The period 1998-2000 is used to cross-validate that the network reproduces SWAN surface wave features (i.e. significant wave height, mean wave period, mean wave direction) very accurately at several locations in the Adriatic basin.

After successful cross-validation, a series of projections of ocean surface wave properties is performed, based on climate model projections for the end of the 21st century (under the RCP 8.5 scenario), and shifts in the emulated wave field properties are discussed.

How to cite: Mlakar, P., Bonaldo, D., Ricchi, A., Carniel, S., and Ličer, M.: Climatological Ocean Surface Wave Projections using Deep Learning, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-4062, https://doi.org/10.5194/egusphere-egu22-4062, 2022.

11:26–11:32
|
EGU22-2893
|
ECS
|
Highlight
|
On-site presentation
|
Paulina Tedesco, Jean Rabault, Martin Lilleeng Sætra, Nils Melsom Kristensen, Ole Johan Aarnes, Øyvind Breivik, and Cecilie Mauritzen

Storm surges can give rise to extreme floods in coastal areas. The Norwegian Meteorological Institute (MET Norway) produces 120-hour regional operational storm surge forecasts along the coast of Norway based on the Regional Ocean Modeling System (ROMS). Despite advances in the development of models and computational capability, forecast errors remain large enough to impact response measures and issued alerts, in particular during the strongest storm events. Reducing these errors will positively impact the efficiency of the warning systems while minimizing the effort and resources spent on mitigation.

Here, we investigate how forecasts can be improved with residual learning, i.e., training data-driven models to predict, and correct, the error in the ROMS output. For this purpose, sea surface height data from stations around Norway were collected and compared with the ROMS output.

We develop two different residual learning frameworks that can be applied on top of the ROMS output. In the first one, we perform a binning of the model error, conditioned on pressure, wind, and waves. Clear error patterns are visible when the error conditioned on the wind is plotted in a polar plot for each station. These error maps can be stored as correction lookup tables to be applied to the ROMS output. However, since wind, pressure, and waves are correlated, we cannot simultaneously correct the error associated with each variable using this method. To overcome this limitation, we develop a second method, which resorts to Neural Networks (NNs) to perform nonlinear modeling of the error pattern obtained at each station.

The residual NN method strongly outperforms the error map method and is a promising direction for correcting storm surge models operationally. Indeed, i) the method is applied on top of the existing model and requires no changes to it; ii) all predictors used for NN inference are available operationally; iii) prediction by the NN is very fast, typically a few seconds per station; and iv) the NN correction can be provided to a human expert who can inspect it, compare it with the ROMS output, and see how much correction is brought by the NN. Using this NN residual error correction method, the RMS error in the Oslofjord is reduced by typically 7% for lead times of 24 hours, 17% for 48 hours, and 35% for 96 hours.
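
The residual-learning idea reduces to a few lines (a minimal sketch with synthetic data and hypothetical predictor names, not the operational configuration): an MLP is trained to predict the ROMS error from the available predictors, and its output is added to the ROMS forecast:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.standard_normal((5000, 3))            # pressure, wind, wave predictors
roms = rng.standard_normal(5000)              # ROMS sea surface height forecast
observed = rng.standard_normal(5000)          # station measurements
residual = observed - roms                    # the error the NN learns to predict

nn = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=500).fit(X, residual)
corrected = roms + nn.predict(X)              # ROMS forecast + learned correction
```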

How to cite: Tedesco, P., Rabault, J., Sætra, M. L., Kristensen, N. M., Aarnes, O. J., Breivik, Ø., and Mauritzen, C.: Bias Correction of Operational Storm Surge Forecasts Using Neural Networks, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-2893, https://doi.org/10.5194/egusphere-egu22-2893, 2022.

11:32–11:38
|
EGU22-11924
|
ECS
|
On-site presentation
Stefan Niebler, Peter Spichtinger, Annette Miltenberger, and Bertil Schmidt

Automatic determination of fronts from atmospheric data is an important task for weather prediction as well as for research on synoptic-scale phenomena. We developed a deep neural network to detect and classify fronts from multi-level ERA5 reanalysis data. Model training and prediction are evaluated using two different regions covering Europe and North America, with data from two weather services. Thanks to a label deformation step performed during training, we are able to directly generate frontal lines, with no further thinning required during post-processing. Our network compares well against the weather service labels, with a Critical Success Index higher than 66.9% and an Object Detection Rate of more than 77.3%. Additionally, the frontal climatologies generated from our network's output are highly correlated (greater than 77.2%) with climatologies created from weather service data. Evaluation of cross-sections of our detection results provides further insight into the characteristics of our predicted fronts and shows that our network's classification is physically plausible.

How to cite: Niebler, S., Spichtinger, P., Miltenberger, A., and Schmidt, B.: Automated detection and classification of synoptic scale fronts from atmospheric data grids, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-11924, https://doi.org/10.5194/egusphere-egu22-11924, 2022.

11:38–11:50
Lunch break
Chairpersons: Alejandro Coca-Castro, Julien Brajard
13:20–13:26
|
EGU22-20
|
On-site presentation
Francesco Chianucci, Francesca Giannetti, Clara Tattoni, Nicola Puletti, Achille Giorcelli, Carlo Bisaglia, Elio Romano, Massimo Brambilla, Piermario Chiarabaglio, Massimo Gennaro, Giovanni d'Amico, Saverio Francini, Walter Mattioli, Domenico Coaloa, Piermaria Corona, and Gherardo Chirici

Poplar (Populus spp.) plantations are globally widespread in the Northern Hemisphere, and provide a wide range of benefits and products, including timber, carbon sequestration and phytoremediation. Because of poplar specific features (fast growth, short rotation) the information needs require frequent updates, which exceed the traditional scope of National Forest Inventories, implying the need for ad-hoc monitoring solutions.

Here we present a regional-level multi-scale monitoring system for poplar plantations, developed in the Lombardy region (Northern Italy), which integrates remotely sensed information at different spatial scales. The system is based on three levels of information: 1) at plot scale, terrestrial laser scanning (TLS) was used to develop non-destructive tree stem volume allometries in calibration sites; the produced allometries were then used to estimate plot-level stand parameters from field inventory, and additional canopy structure attributes were derived using field digital cover photography; 2) at farm level, unmanned aerial vehicles (UAVs) equipped with multispectral sensors were used to upscale the results obtained from field data; 3) finally, both field and UAV estimates were used to calibrate a regional-scale supervised continuous monitoring system based on multispectral Sentinel-2 imagery, implemented and updated on the Google Earth Engine platform.

The combined use of multi-scale information allowed effective management and monitoring of poplar plantations. From a top-down perspective, the continuous satellite monitoring system allowed the detection of early signs of poplar stress, suitable for variable-rate irrigation and fertilization scheduling. From a bottom-up perspective, the spatially explicit nature of TLS measurements allows better integration with remotely sensed data, enabling a multiscale assessment of poplar plantation structure with different levels of detail, enhancing conventional tree inventories, and supporting effective management strategies. Finally, the use of UAVs is key in poplar plantations, as their spatial resolution is suited to calibrating metrics from coarser remotely sensed products, reducing or avoiding the need for ground measurements, with a significant reduction of time and costs.

How to cite: Chianucci, F., Giannetti, F., Tattoni, C., Puletti, N., Giorcelli, A., Bisaglia, C., Romano, E., Brambilla, M., Chiarabaglio, P., Gennaro, M., d'Amico, G., Francini, S., Mattioli, W., Coaloa, D., Corona, P., and Chirici, G.: PRECISIONPOP: a multi-scale monitoring system for poplar plantations integrating field, aerial and satellite remote sensing, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-20, https://doi.org/10.5194/egusphere-egu22-20, 2022.

13:26–13:32
|
EGU22-5632
|
ECS
|
Virtual presentation
Joe Phillips, Ce Zhang, Bryan Williams, and Susan Jarvis

Despite being a vital part of ecosystems, insects are dying out at unprecedented rates across the globe. To help address this in the UK, the UK Centre for Ecology & Hydrology (UKCEH) is creating a tool to utilise insect species distribution models (SDMs) to better facilitate future conservation efforts via volunteer-led insect-tracking procedures. Based on these SDMs, we explored the inclusion of additional covariate information via 10-20 m² bands of temporally aggregated Sentinel-2 data taken over the North of England in 2017 to improve predictive performance. Here, we matched the 10-20 m² resolution of the satellite data to the coarse 100 m² insect observation data via four methodologies of increasing complexity. First, we considered standard pixel-based approaches, aggregating by taking both the mean and standard deviation over the 10 m² pixels. Second, we explored object-based approaches to address the modifiable areal unit problem by applying the SNIC superpixels algorithm over the extent, taking the mean and standard deviation of the pixels within each segment. The resulting dataset was then re-projected to a resolution of 100 m² by taking the modal values of the 10 m² pixels, which were provided with the aggregated values of their parent segment. Third, we took the UKCEH-created 2017 Land Cover Map (LCM) dataset and sampled 42,000 random 100 m² areas, evenly distributed over their modal land cover classes. We trained a U-Net deep learning model using the Sentinel-2 satellite images and LCM classes, by which data-driven features were extracted from the network over each 100 m² extent. Finally, as in the second approach, we used the superpixel segments as the units of analysis, sampling 21,000 segments and taking the smallest bounding box around each. An attention-based U-Net was then adopted to mask each segment from its background and extract deep features. As in the second approach, we then re-projected the resulting dataset to a resolution of 100 m², taking the modal segment values accordingly. Using cross-validated AUCs over various species of moths and butterflies, we found that the object-based deep learning approach achieved the best accuracy when used with the SDMs. As such, we conclude that the novel approach of spatially aggregating satellite data via object-based deep feature extraction has the potential to benefit similar model-based aggregation needs and catalyse a step-change in ecological and environmental applications in the future.

How to cite: Phillips, J., Zhang, C., Williams, B., and Jarvis, S.: Data-Driven Sentinel-2 Based Deep Feature Extraction to Improve Insect Species Distribution Models, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-5632, https://doi.org/10.5194/egusphere-egu22-5632, 2022.

13:32–13:38
|
EGU22-1992
|
Virtual presentation
|
Vasilisa Koshkina, Mikhail Krinitskiy, Nikita Anikin, Mikhail Borisov, Natalia Stepanova, and Alexander Osadchiev

Solar radiation is the main source of energy on Earth, and cloud cover is the main physical factor limiting the downward short-wave radiation flux. In modern climate and weather forecast models, physical schemes describing the passage of radiation through clouds may be used; this is a computationally extremely expensive way of estimating downward radiation fluxes. Instead, one may use parameterizations, i.e., simplified schemes for approximating environmental variables. The purpose of this work is to improve the accuracy of existing parameterizations of the downward shortwave radiation flux. We approach the problem with various machine learning (ML) models that approximate the downward shortwave radiation flux from all-sky optical imagery, assuming that an all-sky photo contains complete information about the downward shortwave radiation. We examine several types of ML models trained on a dataset of all-sky imagery accompanied by short-wave radiation flux measurements: the Dataset of All-Sky Imagery over the Ocean (DASIO), collected in the Indian, Atlantic and Arctic oceans during several expeditions from 2014 to 2021. The quality of the best classic ML model is better than that of existing parameterizations known from the literature. We will show the results of our study regarding classic ML models as well as the results of an end-to-end ML approach involving convolutional neural networks. Our results suggest that one may acquire downward shortwave radiation fluxes directly from all-sky imagery. We will also cover some downsides and limitations of the presented approach.

How to cite: Koshkina, V., Krinitskiy, M., Anikin, N., Borisov, M., Stepanova, N., and Osadchiev, A.: Approximating downward short-wave radiation flux using all-sky optical imagery using machine learning trained on the DASIO dataset, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-1992, https://doi.org/10.5194/egusphere-egu22-1992, 2022.

13:38–13:44
|
EGU22-8334
|
Presentation form not yet defined
Barak Fishbain, Ziv Mano, and Shai Kendler

Urbanization and industrialization processes are accompanied by adverse environmental effects, such as air pollution. The first step in reducing air pollution is the detection of its source(s), which is achievable through monitoring. When deploying a sensor array, one must balance the array's cost against its performance. This optimization problem is known as the location-allocation problem. Here, a new solution approach, which draws its foundation from information theory, is presented. The core of the method is the set of air-pollution levels computed by a dispersion model under various meteorological conditions. The sensors are then placed at the locations that information theory identifies as the most uncertain. The method is compared with two other heuristics typically applied to the location-allocation problem: in the first, sensors are randomly deployed; in the second, sensors are placed according to the maximal cumulative pollution levels (i.e., hot spots). For the comparison, two simulated scenes were evaluated, one containing point sources and buildings, and the other also containing line sources (i.e., roads). The entropy method resulted in a superior sensor deployment compared to the other two approaches, in terms of both source apportionment and the reconstruction of the dense pollution field from the sensor network measurements.
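
A rough sketch of the entropy criterion (our own illustration with synthetic dispersion-model output; the grid size and sensor count are hypothetical): for each grid cell, compute the entropy of its concentration distribution across meteorological scenarios, and place sensors at the most uncertain cells:

```python
import numpy as np

rng = np.random.default_rng(0)
conc = rng.gamma(2.0, 1.0, size=(500, 40, 40))     # scenarios x grid cells

def cell_entropy(samples, bins=16):
    p, _ = np.histogram(samples, bins=bins)
    p = p / p.sum()
    p = p[p > 0]
    return -(p * np.log(p)).sum()                  # Shannon entropy of the cell

H = np.apply_along_axis(cell_entropy, 0, conc)     # entropy map over the grid
k = 10
sensor_cells = np.argsort(H.ravel())[-k:]          # k most uncertain locations
```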

How to cite: Fishbain, B., Mano, Z., and Kendler, S.: Information theory solution approach for air-pollution sensors' location-allocation problem, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-8334, https://doi.org/10.5194/egusphere-egu22-8334, 2022.

13:44–13:50
|
EGU22-8852
|
ECS
|
Virtual presentation
|
Freddie Kalaitzis, Gonzalo Mateo-Garcia, Kevin Dobbs, Dolores Garcia, Jason Stoker, and Giovanni Marchisio

We show that machine learning models learn and perform better when they know where to expect shadows, through hillshades modeled to the time of imagery acquisition.

Shadows are detrimental to all machine learning applications on satellite imagery. Prediction tasks like semantic / instance segmentation, object detection, counting of rivers, roads, buildings, trees, all rely on crisp edges and colour gradients that are confounded by the presence of shadows in passive optical imagery, which rely on the sun’s illumination for reflectance values.

Hillshading is a standard technique for enriching a mapped terrain with relief effects, done by emulating the shadow caused by steep terrain and/or tall vegetation. A hillshade that is modeled to the time of day and year can easily be derived through a basic form of ray tracing on a Digital Terrain Model (DTM, also known as a bare-earth DEM) or Digital Surface Model (DSM), given the sun's altitude and azimuth angles. In this work, we use lidar-derived DSMs. A DSM-based hillshade conveys much more information on shadows than a bare-earth DEM alone, namely any non-terrain vertical features (e.g. vegetation, buildings) resolvable at a 1-m resolution. The use of this level of fidelity of DSM for hillshading, and its input to a machine learning model, is novel and the main contribution of our work. Any uncertainty over the angles can be captured through a composite multi-angle hillshade, which shows the range where shadows can appear throughout the day.
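
A minimal sketch of such a sun-position-dependent hillshade is given below (using the standard slope/aspect hillshade formula rather than full ray tracing, so it captures self-shading but not cast shadows; the array contents are placeholders):

```python
import numpy as np

def hillshade(dsm, cellsize, sun_azimuth_deg, sun_altitude_deg):
    az = np.radians(360.0 - sun_azimuth_deg + 90.0)   # compass to math convention
    alt = np.radians(sun_altitude_deg)
    dy, dx = np.gradient(dsm, cellsize)               # surface gradients
    slope = np.arctan(np.hypot(dx, dy))
    aspect = np.arctan2(-dx, dy)
    shaded = (np.sin(alt) * np.cos(slope)
              + np.cos(alt) * np.sin(slope) * np.cos(az - aspect))
    return np.clip(shaded, 0.0, 1.0)                  # 0 = fully shaded

dsm = np.random.default_rng(0).random((100, 100)) * 30.0   # placeholder 1 m DSM
hs = hillshade(dsm, cellsize=1.0, sun_azimuth_deg=135.0, sun_altitude_deg=40.0)
```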

We show the utility of time-dependent hillshades in the daily mapping of rivers from Very High Resolution (VHR) passive optical and lidar-derived terrain data [1]. Specifically, we leverage the acquisition timestamps within a daily 3m PlanetScope product over a 2-year period. Given a datetime and geolocation, we model the sun's azimuth and elevation relative to that geolocation at that time of day and year. We can then generate a time-dependent hillshade and therefore locate shadows at any given time within that 2-year period. In our ablation study we show that, out of all the lidar-derived products, the time-dependent hillshades contribute an 8-9% accuracy improvement in the semantic segmentation of rivers. This indicates that a semantic segmentation machine learning model is less prone to errors of commission (false positives), by better disambiguating shadows from dark water.

Time-dependent hillshades are not currently used in ML for EO use-cases, yet they can be useful. All that is needed to produce them is access to high-resolution bare-earth DEMs, like that of the US National 3D Elevation Program covering the entire continental U.S. at 1-meter resolution, or the creation of DSMs from the lidar point cloud data itself. As the coverage of DSM and/or DEM products expands to more parts of the world, time-dependent hillshades could become as commonplace as cloud masks in EO use cases.


[1] Dolores Garcia, Gonzalo Mateo-Garcia, Hannes Bernhardt, Ron Hagensieker, Ignacio G. Lopez-Francos, Jonathan Stock, Guy Schumann, Kevin Dobbs and Freddie Kalaitzis: Pix2Streams: Dynamic Hydrology Maps from Satellite-LiDAR Fusion. AI for Earth Sciences Workshop, NeurIPS 2020.

How to cite: Kalaitzis, F., Mateo-Garcia, G., Dobbs, K., Garcia, D., Stoker, J., and Marchisio, G.: Time-dependent Hillshades: Dispelling the Shadow Curse of Machine Learning Applications in Earth Observation, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-8852, https://doi.org/10.5194/egusphere-egu22-8852, 2022.

13:50–13:56
|
EGU22-9452
|
ECS
|
Virtual presentation
|
Adili Abulaitijiang, Eike Bolmer, Ribana Roscher, Jürgen Kusche, Luciana Fenoglio, and Sophie Stolzenberger

Eddies are circular rotating water masses, usually generated near large ocean currents, e.g., the Gulf Stream. Monitoring eddies and gaining knowledge of eddy statistics over a large region are important for fisheries, marine biology studies, and the testing of ocean models.

At the mesoscale, eddies are observed in radar altimetry, and methods have been developed to identify, track and classify them in gridded maps of sea surface height derived from multi-mission datasets. However, this procedure has drawbacks, since much information is lost in the gridded maps: inevitably, the spatial and temporal resolution of the original altimetry data degrades during the gridding process. Moreover, identifying eddies has so far been a post-analysis process on the gridded dataset, which is not suitable for near-real-time applications or forecasts. In the EDDY project at the University of Bonn, we aim to develop methods for identifying eddies directly from along-track altimetry data via a machine (deep) learning approach.

In the early stage of the project, we started with gridded altimetry maps to set up and test the machine learning algorithm. The gridded datasets are not limited to the multi-mission gridded maps from AVISO, but also include a high-resolution (~6 km) ocean model simulation dataset (e.g., FESOM, the Finite Element Sea ice Ocean Model). Later, the gridded maps are sampled along real altimetry ground tracks to obtain single-track altimetry data. Reference data, serving as the training set for machine learning, will be produced by an open-source geometry-based approach (e.g., py-eddy-tracker; Mason et al., 2014) with additional constraints such as the Okubo-Weiss parameter and Sea Surface Temperature (SST) profile signatures.

In this presentation, we introduce the EDDY project and show results from the machine learning approach based on gridded datasets for the Gulf Stream area for 2017, as well as first results of single-track eddy identification in the region.

How to cite: Abulaitijiang, A., Bolmer, E., Roscher, R., Kusche, J., Fenoglio, L., and Stolzenberger, S.: Eddy identification from along track altimeter data using deep learning: EDDY project, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-9452, https://doi.org/10.5194/egusphere-egu22-9452, 2022.

13:56–14:02
|
EGU22-10157
|
ECS
|
On-site presentation
Andreas Krause, Phillip Papastefanou, Konstantin Gregor, Lucia Layritz, Christian S. Zang, Allan Buras, Xing Li, Jingfeng Xiao, and Anja Rammig

Historically, many forests worldwide were cut down and replaced by agriculture. While this substantially reduced terrestrial carbon storage, the impacts of land-use change on ecosystem productivity have not yet been adequately resolved.

Here, we apply the machine learning algorithm Random Forests to predict the potential gross primary productivity (GPP) of forests, grasslands, and croplands around the globe using high-resolution datasets of satellite-derived GPP, land cover, and 20 environmental predictor variables.

With a mean potential GPP of around 2.0 kg C m⁻² yr⁻¹, forests are the most productive land cover on two thirds of the global suitable area, while grasslands and croplands are on average 23 and 9% less productive, respectively. These findings are robust against alternative input datasets and algorithms, even though results are somewhat sensitive to the underlying land cover map.

Combining our potential GPP maps with a land-use reconstruction from the Land-Use Harmonization project (LUH2), we estimate that historical agricultural expansion reduced global GPP by around 6.3 Gt C yr⁻¹ (4.4%). This reduction in GPP induced by land cover changes is amplified in some future scenarios as a result of ongoing deforestation, but partly reversed in other scenarios due to agricultural abandonment.

Finally, we compare our potential GPP maps to simulations from eight CMIP6 Earth System Models (ESMs) with an explicit representation of land management. While the mean GPP values of the ESM ensemble show reasonable agreement with our estimates, individual models exhibit large deviations, both in the mean GPP values of different land cover types and in their spatial variations. Reducing these model biases would lead to more reliable simulations of the potential of land-based mitigation policies.

How to cite: Krause, A., Papastefanou, P., Gregor, K., Layritz, L., Zang, C. S., Buras, A., Li, X., Xiao, J., and Rammig, A.: How land cover changes affect ecosystem productivity, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-10157, https://doi.org/10.5194/egusphere-egu22-10157, 2022.

14:02–14:08
|
EGU22-10519
|
ECS
|
Highlight
|
Virtual presentation
Soukayna Mouatadid, Paulo Orenstein, Genevieve Flaspohler, Miruna Oprescu, Judah Cohen, Franklyn Wang, Sean Knight, Maria Geogdzhayeva, Sam Levang, Ernest Fraenkel, and Lester Mackey

Improving our ability to forecast the weather and climate is of interest to all sectors of the economy and to government agencies from the local to the national level. In fact, weather forecasts 0-10 days ahead and climate forecasts seasons to decades ahead are currently used operationally in decision-making, and the accuracy and reliability of these forecasts have improved consistently in recent decades. However, many critical applications require subseasonal forecasts with lead times in between these two timescales. Subseasonal forecasting, i.e., predicting temperature and precipitation 2-6 weeks ahead, is indeed critical for effective water allocation, wildfire management, and drought and flood mitigation. Yet accurate forecasts for the subseasonal regime are still lacking due to the chaotic nature of weather.

While short-term forecasting accuracy is largely sustained by physics-based dynamical models, these deterministic methods have limited subseasonal accuracy due to chaos. Indeed, subseasonal forecasting has long been considered a “predictability desert” due to its complex dependence on both local weather and global climate variables. Nevertheless, recent large-scale research efforts have advanced the subseasonal capabilities of operational physics-based models, while parallel efforts have demonstrated the value of machine learning and deep learning methods in improving subseasonal forecasting.

To counter the systematic errors of dynamical models at longer lead times, we introduce an adaptive bias correction (ABC) method that combines state-of-the-art dynamical forecasts with observations using machine learning. We evaluate our adaptive bias correction method in the contiguous U.S. over the years 2011-2020 and demonstrate consistent improvement over standard meteorological baselines, state-of-the-art learning models, and the leading subseasonal dynamical models, as measured by root mean squared error and uncentered anomaly correlation skill. When applied to the United States’ operational climate forecast system (CFSv2), ABC improves temperature forecasting skill by 20-47% and precipitation forecasting skill by 200-350%. When applied to the leading subseasonal model from the European Centre for Medium-Range Weather Forecasts (ECMWF), ABC improves temperature forecasting skill by 8-38% and precipitation forecasting skill by 40-80%.

Overall, we find that de-biasing dynamical forecasts with our learned adaptive bias correction method yields an effective and computationally inexpensive strategy for generating improved subseasonal forecasts and building the next generation of subseasonal forecasting benchmarks. To facilitate future subseasonal benchmarking and development, we release our model code through the subseasonal_toolkit Python package and our routinely updated SubseasonalClimateUSA dataset through the subseasonal_data Python package.

How to cite: Mouatadid, S., Orenstein, P., Flaspohler, G., Oprescu, M., Cohen, J., Wang, F., Knight, S., Geogdzhayeva, M., Levang, S., Fraenkel, E., and Mackey, L.: Adaptive Bias Correction for Improved Subseasonal Forecasting, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-10519, https://doi.org/10.5194/egusphere-egu22-10519, 2022.

14:08–14:14
|
EGU22-11043
|
Virtual presentation
Fabian Romahn, Victor Molina Garcia, Ana del Aguila, Ronny Lutz, and Diego Loyola

In remote sensing, the quantities of interest (e.g. the composition of the atmosphere) are usually not directly observable and can only be inferred indirectly via the measured spectra. To solve these inverse problems, retrieval algorithms are applied that usually depend on complex physical models, so-called radiative transfer models (RTMs). RTMs are very accurate, but also computationally very expensive and therefore often not feasible given the strict time requirements of operational processing of satellite measurements. With the advances in machine learning, the methods of this field, especially deep neural networks (DNNs), have become very promising for accelerating and improving classical remote sensing retrieval algorithms. However, their application is not straightforward; it is quite challenging, as there are many aspects to consider and parameters to optimize in order to achieve satisfying results.

In this presentation we show a general framework for replacing the RTM used in an inversion algorithm with a DNN that offers sufficient accuracy while increasing the processing performance by several orders of magnitude. The different steps are explained in detail: sampling and generation of the training data, selection of the DNN hyperparameters, training, and finally integration of the DNN into an operational environment. We also focus on optimizing the efficiency of each step: optimizing the generation of training samples through smart sampling techniques, accelerating training data generation through parallelization and other optimizations of the RTM, applying tools for DNN hyperparameter optimization, and using automation tools (source code generation) and appropriate interfaces for efficient integration in operational processing systems.
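
Schematically, the surrogate step looks as follows (a minimal sketch with synthetic stand-ins for the RTM samples; the layer sizes and dimensions are hypothetical): a small MLP is trained offline on RTM input/output pairs and then replaces the RTM inside the inversion loop:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
params = rng.uniform(size=(5000, 6))      # e.g. cloud height/thickness, albedo, geometry
spectra = np.sin(params @ rng.uniform(size=(6, 200)))   # stand-in for expensive RTM runs

emulator = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=100)
emulator.fit(params, spectra)             # offline training on RTM samples
fast_spectrum = emulator.predict(params[:1])   # cheap replacement for an RTM call
```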

This procedure has been continuously developed over recent years. As a use case, we show how it has been applied in the operational retrieval of cloud properties for the Copernicus satellite sensors Sentinel-4 (S4) and TROPOMI/Sentinel-5 Precursor (S5P).

How to cite: Romahn, F., Molina Garcia, V., del Aguila, A., Lutz, R., and Loyola, D.: Framework for the deployment of DNNs in remote sensing inversion algorithms applied to Copernicus Sentinel-4 (S4) and TROPOMI/Sentinel-5 Precursor (S5P), EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-11043, https://doi.org/10.5194/egusphere-egu22-11043, 2022.

14:14–14:20
|
EGU22-11465
|
ECS
|
On-site presentation
|
Zhao-Yue Chen, Raul Méndez-Turrubiates, Hervé Petetin, Aleks Lacima, Albert Soret Miravet, Carlos Pérez García-Pando, and Joan Ballester

Air pollution is a major environmental risk factor for human health. Among the different air pollutants, Particulate Matter (PM) stands out as the most prominent one, with increasing health effects over the last decades. According to the Global Burden of Disease, PM contributed to 4.14 million premature deaths globally in 2019, over twice as many as in 1990 (2.04 million). With these numbers in mind, the assessment of ambient PM exposure becomes a key issue in environmental epidemiology. However, the limited number of ground-level sites measuring daily PM values is a major constraint for the development of large-scale, high-resolution epidemiological studies.

In the last five years, there has been a growing number of initiatives estimating ground-level PM concentrations from satellite Aerosol Optical Depth (AOD) data, a low-cost alternative offering higher spatial coverage than ground-level measurements. At present, the most popular AOD product is NASA’s MODIS (Moderate Resolution Imaging Spectroradiometer), but the data it provides are restricted to Total Aerosol Optical Depth (TAOD). Compared with TAOD, Fine-mode Aerosol Optical Depth (FAOD) better describes the distribution of small-diameter particles (e.g. PM10 and PM2.5), which are generally those associated with anthropogenic activity. Complementarily, AERONET (AErosol RObotic NETwork), a network of ground-based sun photometers, additionally provides Fine- and Coarse-mode Aerosol Optical Depth (FAOD and CAOD) products based on the Spectral Deconvolution Algorithm (SDA).

Within the framework of the ERC project EARLY-ADAPT (https://early-adapt.eu/), which aims to disentangle the association between human health, climate variability and air pollution in order to better estimate the early adaptation response to climate change, we here develop quantile machine learning models to further advance the modelling of the association between AERONET FAOD and satellite AOD over Europe during the last two decades. Because satellite estimates suffer from large amounts of missing data, we additionally included AOD estimates from ECMWF’s Copernicus Atmosphere Monitoring Service Global Reanalysis (CAMSRA) and NASA’s Modern-Era Retrospective Analysis for Research and Applications v2 (MERRA-2), together with atmosphere, land and ocean variables such as boundary layer height, downward UV radiation and cloud cover from ECMWF’s ERA5-Land.
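
A minimal sketch of a quantile regression model of the kind described, here with gradient boosting and toy data (the study's actual predictor set, model family and data are not shown):

```python
# Hedged sketch: quantile machine learning with one model per quantile.
# Toy predictors stand in for satellite AOD, reanalysis AOD and ERA5 covariates.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(2)
n = 1000
X = rng.normal(size=(n, 5))
y = 0.3 * X[:, 0] + 0.1 * X[:, 1] + 0.05 * rng.normal(size=n)   # toy AERONET FAOD

# The median model gives the point prediction; the outer quantiles
# give a per-sample prediction interval.
models = {q: GradientBoostingRegressor(loss="quantile", alpha=q, random_state=0).fit(X, y)
          for q in (0.05, 0.5, 0.95)}
median = models[0.5].predict(X)
lower, upper = models[0.05].predict(X), models[0.95].predict(X)
print("coverage of 5-95% interval:", np.mean((y >= lower) & (y <= upper)))
```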

The models were thoroughly validated with spatial cross-validation. Preliminary results show that the R2 of the three AOD estimates (TAOD, FAOD and CAOD) predicted with the quantile machine learning models ranges between 0.61 and 0.78, with RMSE between 0.02 and 0.03. The predicted FAOD shows the highest Pearson correlation with ground-level PM2.5 (0.38), compared with 0.18, 0.11 and 0.09 for satellite, MERRA-2 and CAMSRA AOD, respectively. This study provides three useful indicators for further estimating PM, which could improve our understanding of air pollution in Europe and open new avenues for large-scale, high-resolution environmental epidemiology studies.
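
Spatial cross-validation can be sketched by holding out whole stations, so the model is always evaluated at locations it has not seen during training (toy data and illustrative names again):

```python
# Hedged sketch: group-based (spatial) cross-validation, where each fold
# holds out entire stations rather than random observations.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import GroupKFold, cross_val_score

rng = np.random.default_rng(3)
n = 600
X = rng.normal(size=(n, 4))
y = X[:, 0] + 0.1 * rng.normal(size=n)
stations = rng.integers(0, 30, size=n)      # station id of each observation

scores = cross_val_score(RandomForestRegressor(n_estimators=100, random_state=0),
                         X, y, groups=stations,
                         cv=GroupKFold(n_splits=5), scoring="r2")
print("spatially cross-validated R2:", scores.mean())
```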

How to cite: Chen, Z.-Y., Méndez-Turrubiates, R., Petetin, H., Lacima, A., Soret Miravet, A., Pérez García-Pando, C., and Ballester, J.: Quantile machine learning models for predicting European-wide, high resolution fine-mode Aerosol Optical Depth (AOD) based on ground-based AERONET and satellite AOD data, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-11465, https://doi.org/10.5194/egusphere-egu22-11465, 2022.

14:20–14:26
|
EGU22-12043
|
ECS
|
Virtual presentation
|
Chandrabali Karmakar, Gottfried Schwartz, Corneliu Octavian Dumitru, and Mihai Datcu

For many years, image classification – mainly based on pixel brightness statistics – has been among the most popular remote sensing applications. In recent years, however, many users have become increasingly interested in application-oriented semantic labelling of the objects depicted in remotely sensed images.

In parallel, the development of deep learning algorithms has led to several powerful image classification and annotation tools that have become popular in the remote sensing community. In most cases, these publicly available tools combine efficient algorithms with expert knowledge and/or external information ingested during an initial training phase. Two alternative types of deep learning approaches are commonly encountered, namely Autoencoders (AEs) and Convolutional Neural Networks (CNNs); both try to convert the pixel data of remote sensing images into semantic maps of the imaged areas. Here, we attempt to provide an efficient new semantic annotation tool that helps in the semantic interpretation of newly recorded images with known and/or possibly unknown content.

Typical cases are remote sensing images depicting unexpected and hitherto uncharted phenomena such as flooding events or destroyed infrastructure. When we resort to the commonly applied AE or CNN software packages, we cannot expect that existing statistics, or a few initial ground-truth annotations made by an image interpreter, will automatically lead to a perfect understanding of the image content. Instead, we have to discover and combine a number of additional relationships that define the actual content of a selected image and many of its characteristics.

Our approach is a two-stage domain-change method in which we first convert an image into a purely mathematical ‘topic representation’ initially introduced by Blei [1]. This representation provides statistics-based topics that do not yet require final application-oriented labelling into physical categories or phenomena, and it supports the idea of explainable machine learning [2]. During a second stage, we derive physical image content categories by exploiting a weighted multi-level neural network approach that converts weighted topics into individual application-oriented labels. This domain-changing learning stage limits label noise and is initially supported by an image interpreter, allowing the joint use of pixel statistics and expert knowledge [3]. The activity of the image interpreter can be limited to a few image patches. We tested our approach on a number of different use cases (e.g., polar ice, agriculture, natural disasters) and found that our concept provides promising results.
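
The two-stage idea can be sketched as follows, assuming a bag-of-visual-words patch representation and a simple classifier standing in for the weighted multi-level network (hypothetical names and toy data throughout):

```python
# Hedged sketch of the two-stage idea: (1) an unsupervised topic representation
# of image patches, (2) a small supervised map from topic weights to labels
# trained on a few interpreter-annotated patches. Purely illustrative.
import numpy as np
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(4)
# Bag-of-"visual-words" counts for 200 patches over a 50-word codebook.
counts = rng.poisson(2.0, size=(200, 50))

# Stage 1: statistics-based topics; no physical labels needed yet.
lda = LatentDirichletAllocation(n_components=8, random_state=4)
topics = lda.fit_transform(counts)          # per-patch topic weights

# Stage 2: an interpreter labels a handful of patches; learn topics -> labels.
labelled_idx = rng.choice(200, size=20, replace=False)
labels = rng.integers(0, 3, size=20)        # e.g. water / urban / vegetation
clf = LogisticRegression(max_iter=1000).fit(topics[labelled_idx], labels)
predicted = clf.predict(topics)             # application-oriented labels for all patches
```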

[1] D.M. Blei, A.Y. Ng, and M.I. Jordan, (2003). Latent Dirichlet Allocation, Journal of Machine Learning Research, Vol. 3, pp. 993-1022.
[2] C. Karmakar, C.O. Dumitru, G. Schwarz, and M. Datcu (2020). Feature-free explainable data mining in SAR images using latent Dirichlet allocation, IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, Vol. 14, pp. 676-689.
[3] C.O. Dumitru, G. Schwarz, and M. Datcu (2021). Semantic Labelling of Globally Distributed Urban and Non-Urban Satellite Images Using High-Resolution SAR Data, IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, Vol. 15, pp. 6009-6068.

How to cite: Karmakar, C., Schwartz, G., Dumitru, C. O., and Datcu, M.: A Domain-Change Approach to the Semantic Labelling of Remote Sensing Images, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-12043, https://doi.org/10.5194/egusphere-egu22-12043, 2022.

14:26–14:32
|
EGU22-12549
|
ECS
|
On-site presentation
Quentin Febvre, Ronan Fablet, Julien Le Sommer, and Clément Ubelmann

Satellite radar altimeters are a key source of observations of ocean surface dynamics. However, current sensor technology and mapping techniques do not yet allow scales smaller than 100 km to be systematically resolved. With their new sensors, upcoming wide-swath altimeter missions such as SWOT should help resolve finer scales. Current mapping techniques rely on the quality of the input data, which is why the raw data go through multiple preprocessing stages before being used. These calibration stages are improved and refined over many years and represent a challenge when a new type of sensor starts acquiring data.

We show how a data-driven variational data assimilation framework can be used to jointly learn a calibration operator and an interpolator from non-calibrated data. The proposed framework significantly outperforms the operational state-of-the-art mapping pipeline and truly benefits from wide-swath data to resolve finer scales on the global map as well as in the SWOT sensor geometry.
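
A heavily simplified sketch of joint end-to-end training of a calibration operator and a mapping network under a single reconstruction loss (toy 1-D fields and plain convolutional networks; not the authors' variational assimilation architecture):

```python
# Hedged sketch: two networks, one for calibration and one for mapping,
# trained jointly against a single reconstruction objective.
import torch
import torch.nn as nn

calibrate = nn.Sequential(nn.Conv1d(1, 8, 5, padding=2), nn.ReLU(),
                          nn.Conv1d(8, 1, 5, padding=2))       # removes sensor errors
interpolate = nn.Sequential(nn.Conv1d(1, 16, 5, padding=2), nn.ReLU(),
                            nn.Conv1d(16, 1, 5, padding=2))    # fills sampling gaps

opt = torch.optim.Adam(list(calibrate.parameters()) + list(interpolate.parameters()))

for step in range(100):
    truth = torch.randn(16, 1, 128)                  # toy sea-surface height field
    noisy = truth + 0.3 * torch.randn_like(truth)    # toy sensor errors
    mask = (torch.rand_like(truth) > 0.7).float()    # sparse track-like sampling
    recon = interpolate(calibrate(noisy * mask))
    loss = ((recon - truth) ** 2).mean()             # single joint objective
    opt.zero_grad()
    loss.backward()
    opt.step()
```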

How to cite: Febvre, Q., Fablet, R., Le Sommer, J., and Ubelmann, C.: Joint calibration and mapping of satellite altimetry data using trainable variational models, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-12549, https://doi.org/10.5194/egusphere-egu22-12549, 2022.

14:32–14:38
|
EGU22-9578
|
Presentation form not yet defined
Alexander Barth, Aida Alvera-Azcárate, Charles Troupin, and Jean-Marie Beckers

DINCAE (Data INterpolating Convolutional Auto-Encoder) is a neural network for reconstructing missing data (e.g. obscured by clouds, or gaps between tracks) in satellite data. Contrary to standard image reconstruction (in-painting) with neural networks, this application requires a method to handle missing data (or data with variable accuracy) already in the training phase. Instead of using a cost function based on the mean square error, the neural network (a U-Net type of network) is optimized by minimizing the negative log likelihood assuming a Gaussian distribution (characterized by a mean and a variance). As a consequence, the neural network also provides an expected error variance of the reconstructed field (per pixel and per time instance).
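
The masked Gaussian negative log likelihood described above might be sketched as follows (constant term omitted; illustrative shapes in Python, not DINCAE's Julia implementation):

```python
# Hedged sketch: Gaussian NLL in which missing pixels are masked out, so the
# network learns both a mean and a per-pixel variance of the reconstruction.
import torch

def masked_gaussian_nll(mean, log_var, target, mask):
    """NLL of target under N(mean, exp(log_var)), counted only where mask == 1."""
    nll = 0.5 * (log_var + (target - mean) ** 2 / torch.exp(log_var))
    return (nll * mask).sum() / mask.sum()

mean = torch.zeros(4, 1, 32, 32, requires_grad=True)     # network output: mean
log_var = torch.zeros(4, 1, 32, 32, requires_grad=True)  # network output: log variance
target = torch.randn(4, 1, 32, 32)
mask = (torch.rand(4, 1, 32, 32) > 0.5).float()          # 1 where data were observed

loss = masked_gaussian_nll(mean, log_var, target, mask)
loss.backward()
```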

In the updated version DINCAE 2.0, the code was rewritten in Julia and a new type of skip connection has been implemented, which shows superior performance with respect to the previous version. The method has also been extended to handle multivariate data (an example will be shown with sea-surface temperature, chlorophyll concentration and wind fields). The improvement of this network is demonstrated in the Adriatic Sea.

Convolutional networks usually work with gridded data as input. This is, however, a limitation for some data types used in oceanography and in Earth Sciences in general, where observations are often irregularly sampled. The first layer of the neural network and the cost function have been modified so that unstructured data can also be used as inputs to obtain gridded fields as output. To demonstrate this, the neural network is applied to along-track altimetry data in the Mediterranean Sea. Results from a 20-year reconstruction are presented and validated. Hyperparameters are determined using Bayesian optimization, minimizing the error relative to a development dataset.
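
One simple way to present irregularly sampled observations to a convolutional network is to accumulate per-cell sums and counts on the target grid and feed them as input channels; a toy sketch of that idea (an assumption for illustration, not the modified DINCAE input layer):

```python
# Hedged sketch: bin scattered along-track observations into sum and count
# grids that a convolutional network can consume as input channels.
import numpy as np

def grid_observations(lon, lat, value, nx=64, ny=64):
    """Bin scattered (lon, lat, value) samples into sum and count grids."""
    ix = np.clip((lon * nx).astype(int), 0, nx - 1)
    iy = np.clip((lat * ny).astype(int), 0, ny - 1)
    sums = np.zeros((ny, nx))
    counts = np.zeros((ny, nx))
    np.add.at(sums, (iy, ix), value)       # accumulate observed values per cell
    np.add.at(counts, (iy, ix), 1.0)       # number of observations per cell
    return np.stack([sums, counts])        # two input channels for the network

rng = np.random.default_rng(5)
lon, lat = rng.random(1000), rng.random(1000)   # toy track positions in [0, 1)
channels = grid_observations(lon, lat, rng.normal(size=1000))
```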

How to cite: Barth, A., Alvera-Azcárate, A., Troupin, C., and Beckers, J.-M.: A multivariate convolutional autoencoder to reconstruct satellite data with an error estimate based on non-gridded observations: application to sea surface height, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-9578, https://doi.org/10.5194/egusphere-egu22-9578, 2022.

14:38–14:50