NP5.1

Inverse problems, Predictability, and Uncertainty Quantification in Geosciences using data assimilation and its combination with machine learning

Inverse problems are encountered in many fields of geosciences. One class of inverse problems, in the context of predictability, is the assimilation of observations into dynamical models of the system under study. Furthermore, objective quantification of the uncertainty arising during data assimilation, prediction and validation is the object of growing concern and interest.
This session will be devoted to the presentation and discussion of methods for inverse problems, data assimilation and associated uncertainty quantification, in ocean and atmosphere dynamics, atmospheric chemistry, hydrology, climate science, solid earth geophysics and, more generally, in all fields of geosciences.
We encourage presentations on advanced methods, and related mathematical developments, suitable for situations in which local linear and Gaussian hypotheses are not valid and/or in which significant model or observation errors are present. Specific problems arise where coupling exists between different components of the Earth system, giving rise to so-called coupled data assimilation.
We also welcome contributions dealing with algorithmic aspects and numerical implementation of the solution of inverse problems and quantification of the associated uncertainty, as well as novel methodologies at the crossroads between data assimilation and purely data-driven, machine-learning-type algorithms.

This year, our solicited speaker is Ross Bannister from University of Reading / UK National Centre for Earth Observation.

Convener: Javier Amezcua | Co-conveners: Alberto Carrassi, Sergey Frolov, Tijana Janjic, Lars Nerger, Olivier Talagrand
vPICO presentations | Tue, 27 Apr, 09:00–12:30 (CEST)


vPICO presentations: Tue, 27 Apr

Chairpersons: Javier Amezcua, Olivier Talagrand, Alberto Carrassi
09:00–09:05
Covariance matrices and covariance models
09:05–09:15
|
EGU21-1778
|
solicited
Ross Bannister and Ruth Petrie
Data assimilation systems are progressively getting better, resulting in improved analyses and forecasts. One important reason for this is thought to be the improved representation of the multivariate PDF of a priori errors seen by the assimilation. This means that observations can influence the trajectories of the numerical model in more physically meaningful ways. While some improvement is gained by modelling deviations of the PDF from Gaussianity, and by statistical modelling of Gaussian covariances with ensembles, there is still scope to improve the structure of the 'B-matrix' used in pure and hybrid versions of 3D/4D-Var.
Our hypothesis is that a good B-matrix for geophysical data assimilation applications should have multivariate structure functions that reflect the dynamics of the underlying physical system. So, if the underlying system is close to some balanced manifold, then the assimilation should not disturb that property. Existing practice is to impose any balances explicitly, but this is difficult when the balances are weak or difficult to determine, such as in convective-scale or tropical applications, etc. In this talk we look at how such covariances can be modelled, including an approach that uses the normal modes of the underlying dynamics.

How to cite: Bannister, R. and Petrie, R.: Dynamically informed covariance modelling in data assimilation, EGU General Assembly 2021, online, 19–30 Apr 2021, EGU21-1778, https://doi.org/10.5194/egusphere-egu21-1778, 2021.

09:15–09:17
|
EGU21-13764
|
ECS
Francesco Sardelli and Craig Bishop

Hybrid error covariance models construct the covariance matrix used in variational data assimilation methods as a linear combination Ph = αc Pc + αe Pl of the climatological error covariance matrix Pc and the localized ensemble covariance matrix Pl = C ∘ P (the Schur, i.e. elementwise, product of a localization matrix C with the ensemble covariance matrix P), with scalar weights αc and αe.
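For concreteness, the linear combination above can be sketched numerically as follows. All dimensions, correlation models and weights here are illustrative assumptions, not values from the abstract; the localized ensemble covariance is taken, as is standard, to be the Schur product of a localization matrix with the raw ensemble covariance.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 8, 5  # state dimension, ensemble size (illustrative)

# Climatological (static) covariance Pc: a simple exponential correlation model.
dist = np.abs(np.subtract.outer(np.arange(n), np.arange(n)))
Pc = np.exp(-dist / 3.0)

# Raw ensemble covariance P from m centred members.
X = rng.standard_normal((n, m))
X -= X.mean(axis=1, keepdims=True)
P = X @ X.T / (m - 1)

# Localization by Schur (elementwise) product with a Gaussian taper C.
C = np.exp(-((dist / 2.0) ** 2))
Pl = C * P

# The hybrid covariance: a weighted sum of static and localized ensemble parts.
alpha_c, alpha_e = 0.3, 0.7
Ph = alpha_c * Pc + alpha_e * Pl
```

Since Pc is positive definite and the Schur product of positive semidefinite matrices is positive semidefinite, Ph remains a valid covariance matrix for positive weights.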

This work aims to provide a theoretical justification for current hybrid error covariance models and to identify a critical issue in them, in order to improve them in future research. In the framework of Bayes' theorem, a theory is developed by modelling the climatological distribution of the true forecast error covariance matrix Pf as an inverse matrix gamma distribution (prior distribution), and the distribution of the localized ensemble covariance matrix Pl given a true forecast error covariance matrix Pf as a Wishart or matrix gamma distribution (likelihood distribution). The following formulas for the expected values of the prior and likelihood distributions are assumed: E[Pf] = Pc and E[Pl | Pf] = Pf, respectively. The posterior distribution of the true forecast error covariance matrix Pf given the localized covariance matrix Pl is derived: it turns out to be an inverse matrix gamma distribution. Within this theory, a formula for the expected value E[Pf | P] of the true forecast error covariance matrix Pf given the ensemble covariance matrix P is derived: E[Pf | P] = βc Pc + βe Pl (where βc and βe are scalar weights). This provides a theoretical justification for hybrid error covariance models. Moreover, expressions (and thus an interpretation) for the scalar weights βc and βe are obtained in terms of the relative variances of the diagonal elements of the prior and likelihood distributions.

Hence, the consistency of current hybrid covariance models with the assumption E[Pl | Pf] = Pf is shown. This assumption is, in turn, inconsistent with E[P | Pf] = Pf, which ensemble DA schemes are meant to satisfy, and it is falsifiable.

To illustrate the above theory, an experiment is run to simulate 3200 replicate Earths, all having the same true state trajectory, weather prediction system and observational network, but different realizations of the observations. Each replicate Earth is simulated with a 10-variable Lorenz '96 model and an ETKF data assimilation system. From the set of the true forecast errors of all replicate Earths, the (otherwise hidden) true forecast error covariance matrix Pf is computed at each time step, and the (dis)similarity of its climatological distribution to the best-fit inverse matrix gamma distribution is examined. It is found that (i) the inverse matrix gamma distribution overestimates the probability of significant error correlations between widely separated model variables; and (ii) it is the un-localized ETKF ensemble covariance matrix that equals the mean climatological covariance matrix, not the localized ensemble covariance matrix. These findings motivate research to discover more accurate approximations to the climatological distribution of the true forecast error covariance matrix and more accurate hybrid covariance models.
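The 10-variable Lorenz '96 model used in the replicate-Earth experiment can be sketched as follows. The integration scheme, step size and forcing value are generic textbook choices (F = 8 with RK4), not details taken from the abstract.

```python
import numpy as np

def lorenz96_tendency(x, forcing=8.0):
    """dx_i/dt = (x_{i+1} - x_{i-2}) * x_{i-1} - x_i + F, with cyclic indices."""
    return (np.roll(x, -1) - np.roll(x, 2)) * np.roll(x, 1) - x + forcing

def rk4_step(x, dt=0.05):
    """One fourth-order Runge-Kutta step of the Lorenz '96 dynamics."""
    k1 = lorenz96_tendency(x)
    k2 = lorenz96_tendency(x + 0.5 * dt * k1)
    k3 = lorenz96_tendency(x + 0.5 * dt * k2)
    k4 = lorenz96_tendency(x + dt * k3)
    return x + dt / 6.0 * (k1 + 2.0 * k2 + 2.0 * k3 + k4)

# 10-variable configuration; perturb one variable and spin up onto the attractor.
x = 8.0 * np.ones(10)
x[0] += 0.01
for _ in range(1000):
    x = rk4_step(x)
```

Replicate Earths would then correspond to many such trajectories sharing the truth but assimilating different noisy realizations of the observations.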

How to cite: Sardelli, F. and Bishop, C.: Insights about the hybrid error covariance models, EGU General Assembly 2021, online, 19–30 Apr 2021, EGU21-13764, https://doi.org/10.5194/egusphere-egu21-13764, 2021.

09:17–09:19
|
EGU21-9169
|
ECS
Diego Saul Carrio Carrio, Craig Bishop, and Shunji Kotsuki

The replacement of climatological background error covariance models with hybrid error covariance models, which linearly combine a localized ensemble covariance matrix and a climatological error covariance matrix, has led to significant forecast improvements at several forecasting centres. To deepen understanding of why the Hybrid's superficially ad hoc mix of ensemble-based covariances and climatological covariances yields such significant improvements, we derive the linear state estimation equations that minimize analysis error variance given an imperfect ensemble covariance. For high-dimensional models, the computational cost of the very large sample sizes required to empirically estimate the terms in these equations is prohibitive. However, a reasonable and computationally feasible approximation to these equations can be obtained from empirical estimates of the true error covariance between two model variables given an imperfect ensemble covariance between the same two variables. Here, using a Data Assimilation (DA) system featuring a simplified General Circulation Model (SPEEDY), pseudo-observations of known error variance and an ensemble data assimilation scheme (LETKF), we quantitatively demonstrate that the traditional Hybrid used by many operational centres is a much better approximation to the true covariance given the ensemble covariance than either the static climatological covariance or the localized ensemble covariance. These quantitative findings help explain why operational centres have found such large forecast improvements when switching from a static error covariance model to a Hybrid forecast error covariance model. Another fascinating finding of our empirical study is that the form of current Hybrid error covariance models is fundamentally incorrect, in that the weight given to the static covariance matrix is independent of the separation distance of model variables.
Our results show that this weight should be an increasing function of variable separation distance. It is found that for ensemble covariances significantly different from zero, the true error covariance of spatially separated variables is an approximately linear function of the corresponding ensemble covariance. However, for small ensemble sizes and ensemble covariances near zero, the true covariance is an increasing function of the magnitude of the ensemble covariance and reaches a local minimum at the precise point where the ensemble covariance is equal to zero. It is hypothesized that this behaviour is a consequence of small ensemble size and, specifically, of associated spurious fluctuations of the ensemble covariances and variances. Consistent with this hypothesis, this local minimum is almost eliminated by quadrupling the ensemble size.

How to cite: Carrio Carrio, D. S., Bishop, C., and Kotsuki, S.: Empirical determination of the covariance of forecast errors: an empirical justification and reformulation of Hybrid covariance models, EGU General Assembly 2021, online, 19–30 Apr 2021, EGU21-9169, https://doi.org/10.5194/egusphere-egu21-9169, 2021.

09:19–09:21
|
EGU21-13672
Koji Terasaki and Takemasa Miyoshi

Recent developments in sensing technology have increased the number of observations in both space and time. It is essential to utilize the information from observations effectively to improve numerical weather prediction (NWP). Observations measured with a single instrument, such as satellite radiances, are known to have correlated errors. Observations with horizontal error correlations are usually thinned to compensate for neglecting those correlations in data assimilation. This study explores explicitly including the horizontal observation error correlation of Advanced Microwave Sounding Unit-A (AMSU-A) radiances using a global atmospheric data assimilation system, NICAM-LETKF, which comprises the Nonhydrostatic ICosahedral Atmospheric Model (NICAM) and the Local Ensemble Transform Kalman Filter (LETKF). The data assimilation experiments are performed at 112-km horizontal resolution with 38 vertical layers up to 40 km, using 32 ensemble members.

In this study, we estimate the horizontal observation error correlation of AMSU-A radiances using innovation statistics. The computational cost of inverting the observation error covariance matrix increases when non-zero off-diagonal terms are included. We therefore assume uncorrelated observation errors between different instruments and observation variables, so that the observation error covariance matrix becomes block diagonal, with only horizontal error correlations included. The computation time of the entire LETKF analysis procedure increases by only up to 10% compared with the case using the diagonal observation error covariance matrix. The analyses and forecasts of temperature and zonal wind in the mid- and upper troposphere are improved by including the horizontal error correlations. We will present the most recent results at the workshop.
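The block-diagonal structure described above is what keeps the extra cost small: each instrument's block can be inverted independently. The sketch below uses illustrative block sizes and a generic exponential-decay correlation model, not the estimated AMSU-A correlations or the NICAM-LETKF implementation.

```python
import numpy as np

def corr_block(p, length_scale, variance=0.5):
    """Correlated observation-error covariance for one instrument's p observations,
    with correlations decaying exponentially with index separation (illustrative)."""
    d = np.abs(np.subtract.outer(np.arange(p), np.arange(p)))
    return variance * np.exp(-d / length_scale)

# Errors are assumed uncorrelated between instruments, so R is block diagonal.
blocks = [corr_block(4, 2.0), corr_block(3, 1.5)]

# Invert block by block: far cheaper than inverting the full matrix at once.
inv_blocks = [np.linalg.inv(B) for B in blocks]

# Assemble the full R and its inverse (for verification only).
sizes = [B.shape[0] for B in blocks]
ntot = sum(sizes)
R = np.zeros((ntot, ntot))
Rinv = np.zeros((ntot, ntot))
i = 0
for B, Binv, p in zip(blocks, inv_blocks, sizes):
    R[i:i + p, i:i + p] = B
    Rinv[i:i + p, i:i + p] = Binv
    i += p
```

Because inversion cost scales with the cube of the block size, a block-diagonal R with small per-instrument blocks adds little to the total analysis cost, consistent with the roughly 10% overhead reported.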

How to cite: Terasaki, K. and Miyoshi, T.: Including the spatial observation error correlation in data assimilation of AMSU-A radiances, EGU General Assembly 2021, online, 19–30 Apr 2021, EGU21-13672, https://doi.org/10.5194/egusphere-egu21-13672, 2021.

Mathematics and methods
09:21–09:23
|
EGU21-16125
|
ECS
Marie Turčičová, Jan Mandel, and Kryštof Eben

A widely popular group of data assimilation methods in the meteorological and geophysical sciences is formed by filters based on Monte Carlo approximation of the traditional Kalman filter, e.g. the Ensemble Kalman filter (EnKF), the Ensemble square-root filter, and others. Due to the computational cost, the ensemble size is usually small compared to the dimension of the state vector. The traditional EnKF implicitly uses the sample covariance, which is a poor estimate of the background covariance matrix: it is singular and contaminated by spurious correlations.

We focus on modelling the background covariance matrix by means of a linear model for its inverse. This is particularly useful for Gauss-Markov random fields (GMRF), where the inverse covariance matrix has a banded structure. The parameters of the model are estimated by the score matching method, which provides estimators in closed form that are cheap to compute. The resulting estimate is a key component of the proposed ensemble filtering algorithms. Under the assumption that the state vector is a GMRF at every time step, the Score matching filter with Gaussian resampling (SMF-GR) gives a consistent (in the large-ensemble limit) estimator of the mean and covariance matrix of the forecast and analysis distributions at every time step. Further, we propose a filtering method called the Score matching ensemble filter (SMEF), based on regularization of the EnKF. This filter performs well even for non-Gaussian systems with nonlinear dynamics. The performance of both filters is illustrated on a simple linear convection model and on Lorenz-96.

How to cite: Turčičová, M., Mandel, J., and Eben, K.: Score matching filters, EGU General Assembly 2021, online, 19–30 Apr 2021, EGU21-16125, https://doi.org/10.5194/egusphere-egu21-16125, 2021.

09:23–09:25
|
EGU21-2463
Lars Nerger

The second-order exact particle filter NETF (nonlinear ensemble transform filter) is combined with the local ensemble transform Kalman filter (LETKF) to build a hybrid filter scheme (LKNETF). The scheme combines the stability of the LETKF with the nonlinear properties of the NETF to obtain improved assimilation results for smaller ensembles. Both filter components are localized in a consistent way, so that the filter can be applied with high-dimensional models. The degree of filter nonlinearity is defined by a hybrid weight, which shifts the analysis between the LETKF and the NETF. Since the NETF is more sensitive to sampling errors than the LETKF, the latter should be preferred in linear cases. It is discussed how an adaptive hybrid weight can be defined based on the nonlinearity of the system, so that the adaptivity yields good filter performance in both linear and nonlinear situations. The filter behaviour is exemplified in experiments with the chaotic Lorenz-63 and Lorenz-96 models, in which the nonlinearity can be controlled by the length of the forecast phase.

How to cite: Nerger, L.: Ensemble data assimilation for systems with different degrees of nonlinearity with a hybrid nonlinear-Kalman ensemble transform filter, EGU General Assembly 2021, online, 19–30 Apr 2021, EGU21-2463, https://doi.org/10.5194/egusphere-egu21-2463, 2021.

09:25–09:27
|
EGU21-10591
|
ECS
Saori Nakashita and Takeshi Enomoto

Satellite observations have been a growing source of data for assimilation in operational numerical weather prediction. Remotely sensed observations require a nonlinear observation operator. Most ensemble-based data assimilation methods are formulated for tangent linear observation operators, for which the nonlinear operators are often substituted in practice. By contrast, the Maximum Likelihood Ensemble Filter (MLEF), which has features of both variational and ensemble approaches, is formulated for linear and nonlinear operators in an identical form and can use non-differentiable observation operators.

In this study, we investigate the performance of MLEF and Ensemble Transform Kalman Filter (ETKF) with the tangent linear and nonlinear observation operators in assimilation experiments of nonlinear observations with a one-dimensional Burgers model.

The ETKF analysis with the nonlinear operator diverges when the observation error is small, due to unrealistically large increments associated with the high-order observation terms. The filter divergence can be avoided by localizing the extent of observation influence, but the analysis error is still larger than that of MLEF. In contrast, MLEF is found to be more stable and accurate without localization, owing to the minimization of the cost function. Notably, MLEF can produce an accurate analysis even without covariance inflation, eliminating the labor of parameter adjustment. In addition, the smaller the observation error or the stronger the observation nonlinearity, the more effectively MLEF with the nonlinear operators assimilates observations compared with MLEF with the tangent linear operators. This result indicates that MLEF can incorporate nonlinear effects and evaluate the observation term in the cost function appropriately. These encouraging results imply that MLEF is suitable for the assimilation of satellite observations with high nonlinearity.
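As a point of reference for the comparison above, a minimal ETKF analysis step with a linear observation operator can be sketched as follows. This is a generic textbook ensemble-space formulation, not the authors' implementation; localization, inflation and the nonlinear-operator variants discussed in the abstract are omitted, and all dimensions in the usage example are illustrative.

```python
import numpy as np

def etkf_analysis(Xf, y, H, R):
    """Minimal ETKF analysis step in ensemble space.
    Xf: forecast ensemble (n x m), y: observations (p,),
    H: linear observation operator (p x n), R: observation error covariance (p x p)."""
    m = Xf.shape[1]
    xb = Xf.mean(axis=1, keepdims=True)
    Xp = Xf - xb                                   # forecast perturbations
    Yp = H @ Xp                                    # perturbations in observation space
    Rinv = np.linalg.inv(R)
    A = (m - 1) * np.eye(m) + Yp.T @ Rinv @ Yp     # inverse ensemble-space analysis covariance
    vals, vecs = np.linalg.eigh(A)
    w = (vecs / vals) @ vecs.T @ Yp.T @ Rinv @ (y - (H @ xb).ravel())
    W = np.sqrt(m - 1) * (vecs / np.sqrt(vals)) @ vecs.T  # symmetric square root transform
    return xb + Xp @ w[:, None] + Xp @ W           # analysis ensemble

# Toy usage: a biased 4-member ensemble, accurate observations of 2 of 3 variables.
rng = np.random.default_rng(3)
Xf = rng.standard_normal((3, 4)) + 2.0
H = np.eye(2, 3)
R = 0.01 * np.eye(2)
y = np.zeros(2)
Xa = etkf_analysis(Xf, y, H, R)    # observed variables are pulled toward y
```

With a nonlinear operator, Yp is typically replaced by perturbations of the nonlinearly mapped ensemble, which is the substitution whose breakdown at small observation error the abstract documents.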

How to cite: Nakashita, S. and Enomoto, T.: Assimilation of Nonlinear Observations with the Maximum Likelihood Ensemble Filter, EGU General Assembly 2021, online, 19–30 Apr 2021, EGU21-10591, https://doi.org/10.5194/egusphere-egu21-10591, 2021.

09:27–09:29
|
EGU21-129
Juan Restrepo and Jorge Ramirez

A homotopy schedule is proposed, wherein the normalization constant of an improper probability density function is found starting from a known probability distribution. An improper distribution is one whose functional form is known but whose normalization is not. In the statistical mechanics context this amounts to finding the canonical ensemble for the improper distribution. Along the way, the method generates samples from the target distribution.

This homotopy schedule can be adopted in particle filters used for Bayesian estimation, with the aim of improving estimates of the mean path and the uncertainty of a noisy dynamical system for which noisy observations are available. The method is useful when the dynamics are highly nonlinear, especially if the observations that inform the likelihood have low uncertainty. In the context of data assimilation we require that the stochastic dynamics of the system have an asymptotic stationary distribution, which we use as the known distribution in the homotopy procedure.

In this talk we present the methodology, apply it to the estimation of canonical ensembles and present numerical comparisons of the standard particle filter estimates with those of the homotopy data assimilation. 

 

How to cite: Restrepo, J. and Ramirez, J.: Homotopy Particle Filter and Data Assimilation, EGU General Assembly 2021, online, 19–30 Apr 2021, EGU21-129, https://doi.org/10.5194/egusphere-egu21-129, 2020.

09:29–09:31
|
EGU21-4414
|
ECS
Ieva Dauzickaite, Amos Lawless, Jennifer Scott, and Peter Jan van Leeuwen

There is growing awareness that errors in the model equations cannot be ignored in data assimilation methods such as four-dimensional variational assimilation (4D-Var). If they are allowed for, more information can be extracted from observations, longer time windows are possible, and the minimization process is easier, at least in principle. Weak constraint 4D-Var estimates the model error and minimizes a series of linear least-squares cost functions using the conjugate gradient (CG) method; minimizing each cost function is called an inner loop. CG needs preconditioning to improve its performance. In previous work, limited memory preconditioners (LMPs) have been constructed using approximations of the eigenvalues and eigenvectors of the Hessian in the previous inner loop. If the Hessian changes significantly in consecutive inner loops, the LMP may be of limited usefulness. To circumvent this, we propose using randomised methods for low-rank eigenvalue decomposition and using these approximations to cheaply construct LMPs from information in the current inner loop. Three randomised methods are compared. Numerical experiments in idealized systems show that the resulting LMPs perform better than the existing LMPs. Using these methods may allow more efficient and robust implementations of incremental weak constraint 4D-Var.
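A randomized low-rank eigendecomposition of the kind the abstract alludes to can be sketched along the following lines. This is a generic Halko-style randomized method applied to a synthetic Hessian-like matrix; the three specific variants compared in the work, and the exact LMP formula used there, are not reproduced here, so the spectral preconditioner at the end is an assumed illustrative form.

```python
import numpy as np

rng = np.random.default_rng(0)

def randomized_eigh(A, k, oversample=5, n_power=1):
    """Approximate the leading k eigenpairs of a symmetric PSD matrix A
    using a randomized range finder with optional power iterations."""
    n = A.shape[0]
    Y = A @ rng.standard_normal((n, k + oversample))
    for _ in range(n_power):
        Y = A @ Y                      # power iteration sharpens the range estimate
    Q, _ = np.linalg.qr(Y)
    vals, vecs = np.linalg.eigh(Q.T @ A @ Q)   # small dense eigenproblem
    idx = np.argsort(vals)[::-1][:k]
    return vals[idx], Q @ vecs[:, idx]

# Hessian-like test matrix: identity plus a rapidly decaying part, the typical
# spectrum that a limited-memory preconditioner targets.
n, k = 60, 5
U, _ = np.linalg.qr(rng.standard_normal((n, n)))
spectrum = 1.0 + 50.0 * np.exp(-np.arange(n) / 3.0)
A = (U * spectrum) @ U.T

vals, vecs = randomized_eigh(A, k)

# A spectral LMP built from the approximate pairs, M = I + V diag(1/vals - 1) V^T,
# maps the captured leading eigenvalues of A approximately to 1.
M = np.eye(n) + (vecs * (1.0 / vals - 1.0)) @ vecs.T
```

Because only matrix-vector products with A are needed, such methods fit naturally inside an inner loop where the Hessian is available only implicitly.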

How to cite: Dauzickaite, I., Lawless, A., Scott, J., and van Leeuwen, P. J.: Randomised preconditioning for the forcing formulation of weak constraint 4D-Var, EGU General Assembly 2021, online, 19–30 Apr 2021, EGU21-4414, https://doi.org/10.5194/egusphere-egu21-4414, 2021.

09:31–09:33
|
EGU21-7510
Alexey Penenko, Vladimir Penenko, Elena Tsvetova, Alexander Gochakov, Elza Pyanova, and Viktoriia Konopleva

Air quality monitoring systems vary in temporal and spatial coverage, in the composition of the observed chemicals, and in the accuracy of the data. The developed inverse modeling approach [1] is based on sensitivity operators and on ensembles of solutions of adjoint equations. An inverse problem is transformed into a quasi-linear operator equation with the sensitivity operator. The sensitivity operator is composed of the sensitivity functions, which are evaluated on the adjoint ensemble members. The members correspond to the measurement data elements.

This ensemble construction allows working in a unified way with heterogeneous measurement data in a single operator equation. The quasi-linear structure of the resulting operator equation allows both solving and analyzing the inverse problem. More specifically, by analyzing the singular structure of the sensitivity operator, we can estimate the informational content of the measurement data with respect to the considered process model. This type of analysis can characterize the inverse problem solution before it is actually computed, and can evaluate the efficiency of the monitoring system with respect to the considered inverse modeling task [1,2].

Numerical experiments on the emission source identification problem for an air pollution transport and transformation model were carried out to illustrate the developed framework. In the numerical experiments, we considered in-situ, image-type, and integral-type measurement data.

The work was supported by grant №075-15-2020-787 in the form of a subsidy for a major scientific project from the Ministry of Science and Higher Education of Russia (project "Fundamentals, methods and technologies for digital monitoring and forecasting of the environmental situation on the Baikal natural territory").

References

[1] Penenko, A. Convergence analysis of the adjoint ensemble method in inverse source problems for advection-diffusion-reaction models with image-type measurements // Inverse Problems & Imaging, American Institute of Mathematical Sciences (AIMS), 2020, 14, 757-782 doi: 10.3934/ipi.2020035

[2] Penenko, A.; Gochakov, A. & Penenko, V. Algorithms based on sensitivity operators for analyzing and solving inverse modeling problems of transport and transformation of atmospheric pollutants // IOP Conference Series: Earth and Environmental Science, IOP Publishing, 2020, 611, 012032 doi: 10.1088/1755-1315/611/1/012032

How to cite: Penenko, A., Penenko, V., Tsvetova, E., Gochakov, A., Pyanova, E., and Konopleva, V.: Sensitivity Operator Inverse Modeling Framework for Advection-Diffusion-Reaction Models with Heterogeneous Measurement Data, EGU General Assembly 2021, online, 19–30 Apr 2021, EGU21-7510, https://doi.org/10.5194/egusphere-egu21-7510, 2021.

09:33–09:35
|
EGU21-13994
|
ECS
Vishnu Kant Verma and Anand Singh

A geophysical model of the subsurface is built up from a combination of many units that reflect the distribution of a certain physical property in the Earth's subsurface. The physical property can be density, magnetic susceptibility, velocity, resistivity, or another property. All the quantities that describe a geophysical model are termed 'model parameters'. A geophysical model should explain the set of measurements recorded on the Earth's surface in order to reveal the subsurface structures. The set of all measurements is termed the 'data vector'. The present work deals with the inversion procedure for obtaining a reliable model from the measured data sets. Regular grid discretization is an obstacle to defining complex geological models and topography. In this context, complex geological models of any type can be represented using triangular grids, which is difficult with a common regular discretization approach. In the present work, we have used Delaunay triangulation to discretize the subsurface, overcoming the problems encountered with regular grid discretization. We have coded our forward formulation in such a way that multiple geophysical datasets can be generated on the same setup. Further, we have developed a common inversion framework that handles many geophysical datasets, such as gravity, magnetic, and VLF EM data. This framework uses the optimization scheme of the conjugate gradient method. Since potential field anomalies decay with increasing depth of the source, we precondition our kernel matrix to counteract the decay effect. We also note that the preconditioned conjugate gradient method deals effectively with large matrices, as it reduces storage space and computation time. We demonstrate the developed approach using synthetic and real field data sets.

 

Keywords: Gravity, Magnetic, VLF EM, Geophysical inversion, Subsurface discretization, Delaunay triangulation, Preconditioned Conjugate Gradient method

How to cite: Verma, V. K. and Singh, A.: Triangular grid-based common inversion framework for different geophysical data to improve subsurface imaging, EGU General Assembly 2021, online, 19–30 Apr 2021, EGU21-13994, https://doi.org/10.5194/egusphere-egu21-13994, 2021.

09:35–09:37
|
EGU21-16403
Christian Sampson, Alberto Carrassi, Ali Aydogdu, and Chris Jones

Numerical solvers using adaptive meshes can focus computational power on important regions of a model domain, capturing important or unresolved physics. The adaptation can be informed by the model state or by external information, or be made to depend on the model physics. In this latter case, one can think of the mesh configuration as part of the model state. If observational data are to be assimilated into the model, the question arises of updating the mesh configuration along with the physical values. Adaptive meshes present significant challenges for popular ensemble Data Assimilation (DA) methods. We develop a novel strategy for ensemble-based DA in which the adaptive mesh is updated along with the physical values. This involves including the node locations as part of the model state itself, allowing them to be updated automatically at the analysis step. This poses a number of challenges, which we resolve to produce an effective approach that promises to apply with some generality. We evaluate our strategy with two 1-D testbed models, comparing against a strategy we previously developed that does not update the mesh configuration. We find that updating the mesh improves the fidelity and convergence of the filter. An extensive analysis of the performance of our scheme beyond just the RMSE is also presented.

How to cite: Sampson, C., Carrassi, A., Aydogdu, A., and Jones, C.: Ensemble Kalman Filter for non-conservative moving mesh solvers with a joint physics and mesh location update, EGU General Assembly 2021, online, 19–30 Apr 2021, EGU21-16403, https://doi.org/10.5194/egusphere-egu21-16403, 2021.

Uncertainty Quantification
09:37–09:39
|
EGU21-13077
Tijana Janjic, Maria Lukacova, Yvonne Ruckstuhl, Peter Spichtinger, and Bettina Wiebe

Quantification of evolving uncertainties is required for both probabilistic forecasting and data assimilation in weather prediction. In current practice, the ensemble of model simulations is often used as primary tool to describe the required uncertainties. In this work, we explore an alternative approach, so called stochastic Galerkin method which integrates uncertainties forward in time using a spectral approximation in the stochastic space. 

In an idealized two-dimensional model that couples the compressible non-hydrostatic Navier-Stokes equations to cloud dynamics, we investigate the propagation of initial uncertainty. The propagation of initial perturbations is followed through time for all model variables during two types of forecast: the ensemble forecast and the stochastic Galerkin forecast. Since model simulations are very expensive in weather forecasting, our hypothesis is that stochastic Galerkin would provide more accurate and cheaper forecast statistics than the ensemble simulations. Results indicate that the uncertainty, as represented by the mean, the standard deviation and the evolution of the trace through time, is almost identical between the two approaches if a 10000-member ensemble is used and the stochastic Galerkin expansion is truncated at ten spectral modes. However, for coarser approximations, for example if 50 ensemble members are used or the stochastic Galerkin expansion is truncated at two modes, differences in standard deviations become significant in both approaches. A series of experiments indicates that the relative performance of the two methods depends on the system state. For example, for stable flows, the stochastic Galerkin method outperforms the ensemble of simulations for every truncation and every variable. In very unstable, turbulent flows the estimates of the mean from the two methods still remain similar. However, the ensemble of simulations needs more than 100 members (depending on the model variable), and the stochastic Galerkin a truncation with more than five spectral modes, to produce accurate results.

How to cite: Janjic, T., Lukacova, M., Ruckstuhl, Y., Spichtinger, P., and Wiebe, B.: A test of an alternative approach for uncertainty representation in weather forecasting, EGU General Assembly 2021, online, 19–30 Apr 2021, EGU21-13077, https://doi.org/10.5194/egusphere-egu21-13077, 2021.

09:39–09:41
|
EGU21-16199
|
ECS
Gregory Wagner, Andre Souza, Adeline Hillier, Ali Ramadhan, and Raffaele Ferrari

Parameterizations of turbulent mixing in the ocean surface boundary layer (OSBL) are key Earth System Model (ESM) components that modulate the communication of heat and carbon between the atmosphere and ocean interior. OSBL turbulence parameterizations are formulated in terms of unknown free parameters estimated from observational or synthetic data. In this work we describe the development and use of a synthetic dataset called the "LESbrary", generated by a large number of idealized, high-fidelity, limited-area large eddy simulations (LES) of OSBL turbulent mixing. We describe how the LESbrary design leverages a detailed understanding of OSBL conditions derived from observations and large-scale models to span a range of realistically diverse physical scenarios. The result is a diverse library of well-characterized "synthetic observations" that can be readily assimilated for the calibration of realistic OSBL parameterizations in isolation from other ESM components. We apply LESbrary data to calibrate free parameters, develop prior estimates of parameter uncertainty, and evaluate model errors in two OSBL parameterizations for use in predictive ESMs.

How to cite: Wagner, G., Souza, A., Hillier, A., Ramadhan, A., and Ferrari, R.: LESbrary: A library of large eddy simulation data for the calibration and uncertainty quantification of ocean surface boundary layer turbulence parameterizations, EGU General Assembly 2021, online, 19–30 Apr 2021, EGU21-16199, https://doi.org/10.5194/egusphere-egu21-16199, 2021.

09:41–09:43
|
EGU21-9254
Juan Ruiz, Maximiliano Sacco, Yicun Zhen, Pierre Tandeo, and Manuel Pulido

Quantifying forecast uncertainty is a key aspect of state-of-the-art data assimilation systems, with a large impact on the quality of the analysis and, in turn, of the subsequent forecast. In recent years, most operational data assimilation systems have incorporated state-dependent uncertainty quantification based on 4-dimensional variational approaches, ensemble-based approaches, or their combination. However, these quantifications of state-dependent uncertainties come at a large computational cost. Machine learning techniques consist of trainable statistical models that can represent complex functional dependencies among different groups of variables. In this work, we use a fully connected neural network with two hidden layers for the state-dependent quantification of forecast uncertainty in the context of data assimilation. The input to the network is a set of three consecutive forecasted states centered at the desired lead time, and the network’s output is a corrected forecasted state together with an estimate of its uncertainty. We train the network using a loss function based on the observation likelihood and a large database of forecasts and their corresponding analyses. We perform observing system simulation experiments using the Lorenz 96 model as a proof of concept and to evaluate the technique against classic ensemble-based approaches.

Results show that our approach can produce state-dependent estimates of the forecast uncertainty without the need for an ensemble of states (and thus at a much lower computational cost), particularly in the presence of model errors. This opens opportunities for the development of a new type of hybrid data assimilation system combining the capabilities of machine learning and ensembles.
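
A minimal sketch of the kind of loss function described above, assuming (as the abstract states) a network that outputs a corrected state together with an uncertainty estimate: a Gaussian negative log-likelihood in a predicted mean and log-variance. The function and names below are illustrative, not the authors' implementation.

```python
import numpy as np

def gaussian_nll(pred_mean, pred_logvar, obs):
    """Gaussian observation negative log-likelihood, up to an additive constant.

    A network that outputs a corrected state (pred_mean) and a per-variable
    log-variance (pred_logvar) can be trained by minimising this quantity,
    which rewards both accuracy and a well-calibrated spread.
    """
    sq_err = (obs - pred_mean) ** 2
    return 0.5 * np.mean(pred_logvar + sq_err / np.exp(pred_logvar))

# For a fixed mean, the loss is smallest when the predicted variance matches
# the actual mean squared error, i.e. when the spread is well calibrated.
obs = np.array([1.0, -1.0, 2.0])          # squared errors (1, 1, 4), MSE = 2
mean = np.zeros(3)
losses = [gaussian_nll(mean, np.full(3, lv), obs) for lv in (-1.0, np.log(2.0), 3.0)]
print(losses)   # the middle value (logvar = log 2) is the minimum
```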

How to cite: Ruiz, J., Sacco, M., Zhen, Y., Tandeo, P., and Pulido, M.: Machine learning-based uncertainty quantification for data assimilation: a simple model experiment, EGU General Assembly 2021, online, 19–30 Apr 2021, EGU21-9254, https://doi.org/10.5194/egusphere-egu21-9254, 2021.

09:43–09:45
|
EGU21-4807
Francesco Rizzi, Eric Parish, Patrick Blonigan, and John Tencer

This talk focuses on the application of projection-based reduced-order models (pROMs) to seismic elastic shear waves. Specifically, we present a method to efficiently propagate parametric uncertainties through the system using a novel formulation of the Galerkin ROM that exploits modern many-core computing nodes.

Seismic modeling and simulation is an active field of research because of its importance in understanding the generation, propagation and effects of earthquakes as well as artificial explosions. We stress two main challenges: (a) physical models contain a large number of parameters (e.g., anisotropic material properties, signal forms and parametrizations); and (b) simulating these systems at global scale with high accuracy entails a large computational cost, often requiring days or weeks on a supercomputer. Advancements in computing platforms have enabled researchers to exploit high-fidelity computational models, such as highly resolved seismic simulations, for certain types of analyses. Unfortunately, for analyses requiring many evaluations of the forward model (e.g., uncertainty quantification, engineering design), the use of high-fidelity models often remains impractical due to their high computational cost. Consequently, analysts often rely on lower-cost, lower-fidelity surrogate models for such problems.

Broadly speaking, surrogate models fall under three categories, namely (a) data fits, which construct an explicit mapping (e.g., using polynomials, Gaussian processes) from the system's parameters to the system response of interest, (b) lower-fidelity models, which simplify the high-fidelity model (e.g., by coarsening the mesh, employing a lower finite-element order, or neglecting physics), and (c) pROMs which reduce the number of degrees of freedom in the high-fidelity model by a projection process of the full-order model onto a subspace identified from high-fidelity data. The main advantage of pROMs is that they apply a projection process directly to the equations governing the high-fidelity model, thus enabling stronger guarantees (e.g., of structure preservation or of accuracy) and more accurate a posteriori error bounds.

State-of-the-art Galerkin ROM formulations express the state as a rank-1 tensor (i.e., a vector), leading to computational kernels that are memory bandwidth bound and, therefore, ill-suited for scalable performance on modern many-core and hybrid computing nodes. In this work, we introduce a reformulation, called rank-2 Galerkin, of the Galerkin ROM for linear time-invariant (LTI) dynamical systems which converts the nature of the ROM problem from memory bandwidth to compute bound, and apply it to elastic seismic shear waves in an axisymmetric domain. Specifically, we present an end-to-end demonstration of using the rank-2 Galerkin ROM in a Monte Carlo sampling study, showing that the rank-2 Galerkin ROM is 970 times more efficient than the full order model, while maintaining excellent accuracy in both the mean and statistics of the field.
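
A rank-1 Galerkin ROM of the kind the rank-2 formulation generalises can be sketched in a few lines on a toy linear time-invariant system (illustrative only; the seismic solver and its axisymmetric domain are not reproduced here):

```python
import numpy as np

# Sketch of a (rank-1) Galerkin ROM for an LTI system dx/dt = A x:
# build a basis V from snapshots via SVD (POD), project A onto it,
# and integrate the reduced system a' = (V^T A V) a.

rng = np.random.default_rng(1)
n, r, dt, steps = 20, 3, 1e-3, 2000

# Full-order operator whose dynamics live in an r-dimensional subspace
Q, _ = np.linalg.qr(rng.standard_normal((n, r)))
S = rng.standard_normal((r, r))
Ar_true = -np.eye(r) + 0.5 * (S - S.T)      # stable: decay plus rotation
A = Q @ Ar_true @ Q.T
x0 = Q @ rng.standard_normal(r)

# Full-order model (explicit Euler) and snapshot collection
X = np.empty((n, steps + 1))
X[:, 0] = x0
for k in range(steps):
    X[:, k + 1] = X[:, k] + dt * (A @ X[:, k])

# POD basis from snapshots, then Galerkin projection of the operator
V = np.linalg.svd(X, full_matrices=False)[0][:, :r]
Ar = V.T @ A @ V
a = V.T @ x0
for k in range(steps):
    a = a + dt * (Ar @ a)

rom_error = np.linalg.norm(V @ a - X[:, -1]) / np.linalg.norm(X[:, -1])
print(f"relative ROM error at final time: {rom_error:.2e}")
```

Because the toy dynamics are confined to a low-dimensional subspace that the POD basis captures, the reduced model reproduces the full-order trajectory; in realistic seismic problems the truncation instead introduces a controlled error in exchange for the speedup.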

How to cite: Rizzi, F., Parish, E., Blonigan, P., and Tencer, J.: Enabling efficient uncertainty quantification for seismic modeling via projection-based model reduction, EGU General Assembly 2021, online, 19–30 Apr 2021, EGU21-4807, https://doi.org/10.5194/egusphere-egu21-4807, 2021.

Novel applications
09:45–09:47
|
EGU21-10424
|
ECS
|
Highlight
Luyu Sun, Stephen Penny, and Matthew Harrison

Accurate forecasting of ocean circulation is important in many respects. A lack of direct ocean velocity observations has been one of the overarching issues in today's operational ocean data assimilation (DA) systems. Satellite-tracked surface drifters, which provide measurements of near-surface ocean currents, have become increasingly important in the global ocean observing system. In this work, the impact of an augmented-state Lagrangian data assimilation (LaDA) method using the Local Ensemble Transform Kalman Filter (LETKF) is investigated within a realistic ocean DA system. We use direct location data from 300 surface drifters released in the Gulf of Mexico (GoM) by the Consortium for Advanced Research on Transport of Hydrocarbon in the Environment (CARTHE) during the summer 2012 Grand Lagrangian Deployment (GLAD) experiment. These drifter observations are directly assimilated into a realistic eddy-resolving GoM configuration of the Modular Ocean Model version 6 (MOM6) of the Geophysical Fluid Dynamics Laboratory (GFDL). Ocean states (T/S/U/V) are updated both at the surface and at depth by utilizing dynamic forecast error covariance statistics. Four experiments are conducted: (1) a free run generated by MOM6; (2) a DA experiment assimilating temperature and salinity profile observations from the World Ocean Database 2018 (WOD18); (3) a DA experiment assimilating both drifter and profile observations; and (4) a traditional DA experiment assimilating the drifter-derived velocity field from the same GLAD database, against which the LaDA results are compared. In addition, we evaluate the impact of the LaDA algorithm at different eddy-permitting and eddy-resolving model resolutions to determine the most effective horizontal resolutions for assimilating drifter position data with LaDA.

How to cite: Sun, L., Penny, S., and Harrison, M.: Improving Ocean Circulations Using Lagrangian Data Assimilation of Surface Drifters During Grand Lagrangian Deployments, EGU General Assembly 2021, online, 19–30 Apr 2021, EGU21-10424, https://doi.org/10.5194/egusphere-egu21-10424, 2021.

09:47–09:49
|
EGU21-9947
|
ECS
Tarkeshwar Singh, Francois Counillon, Jerry F. Tjiputra, and Mohamad El Gharamti

Ocean biogeochemical (BGC) models utilize a large number of poorly constrained global parameters to mimic unresolved processes and to reproduce the observed complex spatio-temporal patterns. Large model errors stem primarily from inaccuracies in these parameters, whose optimal values can vary in both space and time. This study aims to demonstrate the ability of ensemble data assimilation (DA) methods to provide high-quality, improved BGC parameters within an Earth system model in an idealized twin-experiment framework. We use the Norwegian Climate Prediction Model (NorCPM), which combines the Norwegian Earth System Model with the dual-one-step-ahead smoothing-based ensemble Kalman filter (DOSA-EnKF). The work follows on from Gharamti et al. (2017), which successfully demonstrated the approach for one-dimensional idealized ocean BGC models. We aim to estimate five spatially varying BGC parameters by assimilating salinity and temperature hydrographic profiles and surface BGC (phytoplankton, nitrate, phosphorus, silicate, and oxygen) observations in a strongly coupled DA framework, i.e., jointly updating ocean and BGC states and parameters during the assimilation. The method converges quickly (in less than a year), largely reducing the errors in the BGC parameters, and is eventually shown to perform nearly as well as a system with the true parameter values. Optimal parameter values can also be recovered by assimilating climatological BGC observations and with challenging sparse observational networks. The findings of this study demonstrate the applicability of the approach for tuning the system in a real framework.

 

References:

Gharamti, M. E., Tjiputra, J., Bethke, I., Samuelsen, A., Skjelvan, I., Bentsen, M., & Bertino, L. (2017). Ensemble data assimilation for ocean biogeochemical state and parameter estimation at different sites. Ocean Modelling, 112, 65-89.

How to cite: Singh, T., Counillon, F., Tjiputra, J. F., and Gharamti, M. E.: Parameter estimation for ocean biogeochemical component in a global model using Ensemble Kalman Filter: a twin experiment, EGU General Assembly 2021, online, 19–30 Apr 2021, EGU21-9947, https://doi.org/10.5194/egusphere-egu21-9947, 2021.

09:49–09:51
|
EGU21-3851
|
ECS
Masaki Ito, Tatsu Kuwatani, Ryosuke Oyanagi, and Toshiaki Omori

Heterogeneous reactions are chemical reactions involving multiple phases, and their dynamics are intrinsically nonlinear owing to the effect of the surface area between the different phases. In earth science, it is important to understand heterogeneous reactions in order to elucidate the dynamics of rock formation near the Earth's surface. We apply a sparse modeling algorithm and a sequential Monte Carlo algorithm to the partial-observation problem, in order to simultaneously extract the substantial reaction terms and surface models from a number of candidates. Using our proposed method, we show that heterogeneous reactions can be estimated successfully from noisy observable data under conditions in which the number of observed variables is less than that of the hidden variables.
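
As a hedged illustration of the sparse-modeling ingredient alone (the sequential Monte Carlo treatment of hidden variables is not shown), the toy below recovers the few active terms of a model from a library of candidates using iterative soft-thresholding (ISTA); all names and numbers are invented for illustration:

```python
import numpy as np

# Toy term extraction by sparse regression: data y are generated by only a
# few columns of a candidate-term library Theta; ISTA (proximal gradient
# descent on the lasso objective) recovers a sparse coefficient vector.

rng = np.random.default_rng(2)
m, p = 200, 10                             # samples, candidate terms
Theta = rng.standard_normal((m, p))
coef_true = np.zeros(p)
coef_true[[1, 4]] = [2.0, -1.5]            # only two terms are active
y = Theta @ coef_true + 0.01 * rng.standard_normal(m)

lam = 1.0                                  # sparsity penalty
t = 1.0 / np.linalg.norm(Theta, 2) ** 2    # step = 1 / Lipschitz constant
coef = np.zeros(p)
for _ in range(500):
    z = coef - t * Theta.T @ (Theta @ coef - y)          # gradient step
    coef = np.sign(z) * np.maximum(np.abs(z) - t * lam, 0.0)  # soft threshold

print("recovered active terms:", np.flatnonzero(np.abs(coef) > 0.5))
```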

How to cite: Ito, M., Kuwatani, T., Oyanagi, R., and Omori, T.: Extraction of Nonlinear Dynamics of Heterogeneous Reactions Based on Sparse Modeling, EGU General Assembly 2021, online, 19–30 Apr 2021, EGU21-3851, https://doi.org/10.5194/egusphere-egu21-3851, 2021.

09:51–09:53
|
EGU21-7057
|
ECS
|
Highlight
Lachlan Astfalck, Daniel Williamson, Niall Gandy, Lauren Gregoire, and Ruza Ivanovic

Recent advances in geoscience and palaeoclimatic modelling have seen an increasing demand for spatio-temporal reconstructions of climatic variables. Satisfactory reconstructions should consider all sources of information: both numerical model ensembles and measured data. The difficulty in modelling climatic variables often gives rise to a multiplicity of models, owing to large uncertainty in the inputs. Climate proxy-based measurements are similarly uncertain due to both measurement noise and reconstruction error. It is thus vital to provide a reconstruction methodology in which these uncertainties are appropriately quantified. Instead of utilising probability-based approaches, which can be very computationally demanding for geospatio-temporal problems, we have developed a new approach utilising a second-order framework; namely, Bayes linear analysis. This framework avoids the explicit specification of probability distributions and allows reconstructions to be described simply by means and variances. Methodological advances are made to the traditional Bayes linear mechanics to allow for non-linearity. To demonstrate the methodology, average monthly spatial reconstructions of sea-surface temperature and sea-ice concentration are estimated for the Last Glacial Maximum (21 ka), combining PMIP3 and PMIP4 outputs and available palaeodata syntheses. The methodology presented is generalisable to many spatio-temporal quantities and is highly germane to the geoscience community.
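
The second-order update at the heart of Bayes linear analysis can be stated compactly. The sketch below implements the standard adjusted expectation and variance; the scalar example is illustrative, not the paper's spatio-temporal formulation:

```python
import numpy as np

# Bayes linear adjustment: beliefs about x are updated by data z using only
# means, variances and covariances -- no full probability distributions.
#   E_z(x)   = E(x) + Cov(x,z) Var(z)^{-1} (z - E(z))
#   Var_z(x) = Var(x) - Cov(x,z) Var(z)^{-1} Cov(z,x)

def bayes_linear_adjust(Ex, Ez, Vx, Vz, Cxz, z):
    K = Cxz @ np.linalg.inv(Vz)
    return Ex + K @ (z - Ez), Vx - K @ Cxz.T

# Example: x is an unobserved field value, z a noisy proxy z = x + noise
Ex = np.array([0.0]); Vx = np.array([[4.0]])
Ez = np.array([0.0]); Vz = np.array([[4.0 + 1.0]])   # Var(x) + noise var
Cxz = np.array([[4.0]])                              # Cov(x, x + noise)
adj_mean, adj_var = bayes_linear_adjust(Ex, Ez, Vx, Vz, Cxz, np.array([2.5]))
print(adj_mean, adj_var)   # adjusted mean 2.0, adjusted variance 0.8
```

In the jointly Gaussian case this coincides with conditioning, but the adjustment itself requires only the first two moments, which is what makes the approach computationally attractive for large spatio-temporal fields.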

How to cite: Astfalck, L., Williamson, D., Gandy, N., Gregoire, L., and Ivanovic, R.: A Statistical Reconstruction of Sea-Surface Temperature and Sea-Ice Concentration for the Last Glacial Maximum, EGU General Assembly 2021, online, 19–30 Apr 2021, EGU21-7057, https://doi.org/10.5194/egusphere-egu21-7057, 2021.

09:53–09:55
|
EGU21-3880
|
ECS
|
Highlight
Yue Ying and Laurent Bertino

A multiscale alignment (MSA) method was proposed by Ying (2019) for ensemble data assimilation to reduce the errors caused by displacement of coherent features. The MSA method decomposes a model state into components ranging from large to small spatial scales, then applies ensemble filters to update each scale component sequentially. After a larger scale component analysis increment is derived from the observations, displacement vectors are computed from the analysis increments through an optical flow algorithm. These displacement vectors are then used to warp the model mesh, which reduces position errors in the smaller scale components before the ensemble filter is applied again.

The MSA method is now applied to a sea ice prediction problem at NERSC, assimilating satellite-derived sea ice deformation observations into next-generation Sea Ice Model (neXtSIM) simulations. Preliminary results show that the MSA reduces the position errors of the linear kinematic features of sea ice more effectively than the traditional ensemble Kalman filter. The alignment step is shown to be a major contributor to the error reduction in our test case. We will also discuss the remaining challenges of tuning parameters in the MSA method and dealing with model deficiencies.
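
A 1-D periodic toy version of the alignment idea, with cross-correlation standing in for the optical-flow algorithm of the MSA method, shows why warping removes position error that an amplitude-only update would struggle with:

```python
import numpy as np

# A coherent feature is displaced in the forecast; estimate the displacement
# by circular cross-correlation and warp (roll) the field to align it.

def estimate_shift(a, b):
    """Signed integer displacement that best aligns b with a (periodic)."""
    corr = np.fft.ifft(np.fft.fft(a) * np.conj(np.fft.fft(b))).real
    shift = int(np.argmax(corr))
    n = len(a)
    return shift if shift <= n // 2 else shift - n

n = 128
x = np.arange(n)
truth = np.exp(-0.5 * ((x - 64) / 5.0) ** 2)   # coherent feature at x = 64
forecast = np.roll(truth, 9)                   # position error of 9 cells

shift = estimate_shift(truth, forecast)
aligned = np.roll(forecast, shift)             # warp before amplitude update

err_before = np.linalg.norm(forecast - truth)
err_after = np.linalg.norm(aligned - truth)
print(shift, err_before, err_after)
```

After alignment, the remaining (amplitude) error is what an ensemble filter is well suited to correct; without it, the filter would try to remove the displacement by damping and re-growing the feature.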

How to cite: Ying, Y. and Bertino, L.: Assimilating sea ice deformation observations using a multiscale alignment ensemble data assimilation method, EGU General Assembly 2021, online, 19–30 Apr 2021, EGU21-3880, https://doi.org/10.5194/egusphere-egu21-3880, 2021.

09:55–09:57
|
EGU21-14539
Alessandro Comunian and Mauro Giudici

Indirect inversion approaches are widely used in the geosciences, in particular for the identification of the hydraulic properties of aquifers. Nevertheless, their application requires a substantial number of model evaluation (forward problem) runs, a task that for complex problems can be computationally intensive. Reducing this computational burden is an active research topic, and many solutions, including hybrid optimization methods, physical proxies, and machine-learning tools, make it possible to avoid considering the full physics of the problem when running a numerical implementation of the forward problem.

Direct inversion approaches represent computationally frugal alternatives to indirect approaches, because in general they require a smaller number of runs of the forward problem. The classical drawbacks of these methods can be alleviated by some implementation approaches and in particular by using multiple sets of data, when available.

This work is an effort to improve the robustness of the Comparison Model Method (CMM), a direct inversion approach aimed at the identification of the hydraulic transmissivity of a confined aquifer. The robustness of the CMM is enhanced here by (i) improving the parameterization required to handle small hydraulic gradients; and (ii) investigating the role of different criteria for merging multiple data sets corresponding to different flow conditions.

On a synthetic case study, it is demonstrated that correcting a small percentage of the small hydraulic gradients (about 10%) yields reliable results, and that a criterion based on the geometric mean is adequate for merging the results coming from multiple data sets. In addition, the use of multiple data sets noticeably improves the robustness of the CMM when the input data are affected by noise.
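
The geometric-mean merging criterion is natural for transmissivity, which varies over orders of magnitude; a short illustration with invented numbers (not the CMM itself):

```python
import numpy as np

# Merge cell-wise transmissivity estimates from two flow conditions.
# The geometric mean (an arithmetic mean in log space) is less distorted
# by a single order-of-magnitude outlier than the arithmetic mean.

T_estimates = np.array([
    [1e-4, 2e-4, 1e-3],   # cell-wise T from data set 1 (m^2/s)
    [4e-4, 8e-4, 1e-5],   # cell-wise T from data set 2 (m^2/s)
])
T_geometric = np.exp(np.mean(np.log(T_estimates), axis=0))
T_arithmetic = np.mean(T_estimates, axis=0)
print(T_geometric)    # [2e-4, 4e-4, 1e-4]
print(T_arithmetic)   # third cell dominated by the larger estimate
```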

All the tests are performed by using open source and widely used tools like the USGS Modflow6 and its Python interface flopy to foster the application of the CMM. The scripts and corresponding package, named cmmpy, is available on the Python Package Index (PyPI) and on bitbucket at the following address: https://bitbucket.org/alecomunian/cmmpy.

How to cite: Comunian, A. and Giudici, M.: Approaches to improve the robustness of the Comparison Model Method for the inverse problem of groundwater hydrology, EGU General Assembly 2021, online, 19–30 Apr 2021, EGU21-14539, https://doi.org/10.5194/egusphere-egu21-14539, 2021.

09:57–10:30
Break
Chairpersons: Lars Nerger, Sergey Frolov, Tijana Janjic
Coupled data assimilation
11:00–11:10
|
EGU21-3316
|
ECS
|
solicited
Xingchao Chen

Air-sea interactions are critical to tropical cyclone (TC) energetics. However, oceanic state variables are still poorly initialized, and are inconsistent with atmospheric initial fields in most operational coupled TC forecast models. In this study, we first investigate the forecast error covariance across the oceanic and atmospheric domains during the rapid intensification of Hurricane Florence (2018) using a 200-member ensemble of convection-permitting forecasts from a coupled atmosphere-ocean regional model. Meaningful and dynamically consistent cross domain ensemble error correlations suggest that it is possible to use atmospheric and oceanic observations to simultaneously update model state variables associated with the coupled ocean-atmosphere prediction of TCs using strongly coupled data assimilation (DA). A regional-scale strongly coupled DA system based on the ensemble Kalman filter (EnKF) is then developed for TC prediction. The potential impacts of different atmospheric and oceanic observations on TC analysis and prediction are examined through observing system simulation experiments (OSSEs) of Hurricane Florence (2018). Results show that strongly coupled DA resulted in better analysis and forecast of both the oceanic and atmospheric variables than weakly coupled DA. Compared to weakly coupled DA in which the analysis update is performed separately for the atmospheric and oceanic domains, strongly coupled DA reduces the forecast errors of TC track and intensity. Results show promise in potential further improvement in TC prediction through assimilation of both atmospheric and oceanic observations using the ensemble-based strongly coupled DA system.

How to cite: Chen, X.: Air-sea strongly coupled data assimilation for tropical cyclone prediction, EGU General Assembly 2021, online, 19–30 Apr 2021, EGU21-3316, https://doi.org/10.5194/egusphere-egu21-3316, 2021.

11:10–11:12
|
EGU21-3170
|
ECS
Tsz Yan Leung, Polly J. Smith, Amos S. Lawless, Nancy K. Nichols, and Matthew J. Martin

In variational data assimilation, background-error covariance structures have the ability to spread information from an observed part of the system to unobserved parts.  Hence an accurate specification of these structures is crucially important for the success of assimilation systems and therefore of forecasts that their outputs initiate.  For oceanic models, background-error covariances have traditionally been modelled by parametrisations which mainly depend on macroscopic properties of the ocean and have limited dependence on local conditions.  This can be problematic during passage of tropical cyclones, when the spatial and temporal variability of the ocean state depart from their characteristic structures.  Furthermore, the traditional method of estimating oceanic background-error covariances could amplify imbalances across the air-sea interface when weakly coupled data assimilation is applied, thereby bringing a detrimental impact to forecasts of cyclones.  Using the case study of Cyclone Titli, which affected the Bay of Bengal in 2018, we explore hybrid methods that combine the traditional modelling strategy with flow-dependent estimates of the ocean's error covariance structures based on the latest-available short-range ensemble forecast.  This hybrid approach is investigated in the idealised context of a single-column model as well as in the UK Met Office’s state-of-the-art system.  The idealised model helps inform how the inclusion of ensemble information can improve coupled forecasts.  Different methods for producing the ensemble are explored, with the goal of generating a limited-sized ensemble that best represents the uncertainty in the ocean fields.  We then demonstrate the power of this hybrid approach in changing the analysed structure of oceanic fields in the Met Office system, and explain the difference between the traditional and hybrid approaches in light of the ways the assimilation systems respond to single synthetic observations.  
Finally, we discuss the benefits that the hybrid approach in ocean data assimilation can bring to atmospheric forecasts of the cyclone.
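
A generic hybrid background-error covariance of the kind described (a sketch of the common blending formula, not the Met Office formulation) can be written as:

```python
import numpy as np

# Blend a static, parametrised covariance B_static with a flow-dependent
# estimate built from short-range ensemble forecast anomalies.

rng = np.random.default_rng(3)
n, m = 8, 20                               # state size, ensemble size

B_static = np.eye(n)                       # stand-in for the parametrised B
ensemble = rng.standard_normal((n, m))     # short-range ensemble forecasts
anom = ensemble - ensemble.mean(axis=1, keepdims=True)
B_ens = anom @ anom.T / (m - 1)            # sample covariance

alpha = 0.5                                # blending weight, tuned in practice
B_hybrid = alpha * B_static + (1.0 - alpha) * B_ens

# B_hybrid stays symmetric and inherits full rank from the static part even
# for a small ensemble (rank of B_ens is at most m - 1).
print(np.linalg.matrix_rank(B_hybrid), np.allclose(B_hybrid, B_hybrid.T))
```

The ensemble term injects flow-dependent structure (e.g. sharpened gradients under a cyclone), while the static term guards against sampling noise and rank deficiency of a limited-size ensemble.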

How to cite: Leung, T. Y., Smith, P. J., Lawless, A. S., Nichols, N. K., and Martin, M. J.: The role of flow-dependent oceanic background-error covariance information in air-sea coupled data assimilation during tropical cyclones: a case study, EGU General Assembly 2021, online, 19–30 Apr 2021, EGU21-3170, https://doi.org/10.5194/egusphere-egu21-3170, 2021.

11:12–11:14
|
EGU21-14181
|
ECS
Qi Tang, Longjiang Mu, Helge Goessling, Tido Semmler, and Lars Nerger

We compare the results of strongly coupled data assimilation (SCDA) and weakly coupled data assimilation (WCDA) by analyzing the effect of the assimilation on the prediction of both ocean and atmosphere variables. The AWI climate model (AWI-CM), which couples the ocean model FESOM and the atmospheric model ECHAM, is coupled with the Parallel Data Assimilation Framework (PDAF, http://pdaf.awi.de). Satellite sea surface temperature (SST) is assimilated. In the WCDA, only the ocean variables are directly updated by the assimilation, while the atmospheric variables are influenced through the model; in the SCDA, both the ocean and the atmospheric variables are directly updated by the assimilation algorithm. The results are evaluated by comparing the estimated ocean variables with dependent/independent observational data, and the estimated atmospheric variables with the ERA-Interim data. In the ocean, both WCDA and SCDA improve the prediction of the temperature, and the two give the same RMS error of SST. In the atmosphere, WCDA gives slightly better results for the 2 m temperature and 10 m wind velocity than SCDA. In the free atmosphere, SCDA yields smaller errors for the temperature, wind velocity and specific humidity than WCDA in the Arctic region, while in the tropical region the errors are generally larger.
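
The essential difference between the two strategies can be reduced to a two-variable sketch (illustrative only, not AWI-CM/PDAF code): an SST observation updates the ocean in both cases, but only the strongly coupled update lets the sampled cross covariance carry the increment into the atmosphere:

```python
import numpy as np

# State [ocean SST, atmospheric 2m temperature]; observe SST only.
# Weakly coupled DA zeroes the cross-domain covariance block, so the
# atmosphere receives no direct analysis increment.

rng = np.random.default_rng(4)
m = 100
sst = 20.0 + rng.standard_normal(m)                              # ocean members
t2m = 15.0 + 0.6 * (sst - 20.0) + 0.3 * rng.standard_normal(m)   # correlated atmos.

X = np.vstack([sst, t2m])                     # ensemble of [ocean, atmosphere]
A = X - X.mean(axis=1, keepdims=True)
P = A @ A.T / (m - 1)                         # forecast error covariance

H = np.array([[1.0, 0.0]])                    # observation operator: SST only
R = np.array([[0.2]])                         # observation error variance
y = np.array([21.0])                          # the SST observation

def kalman_mean_update(P):
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)
    return X.mean(axis=1) + K @ (y - H @ X.mean(axis=1))

P_weak = P.copy()
P_weak[0, 1] = P_weak[1, 0] = 0.0             # cut the cross-domain term
strong = kalman_mean_update(P)
weak = kalman_mean_update(P_weak)
print("strongly coupled analysis:", strong)
print("weakly coupled analysis:  ", weak)
```

Zeroing the cross-domain block leaves the ocean analysis unchanged but suppresses the direct atmospheric increment, which then arrives only later through the coupled model dynamics.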

How to cite: Tang, Q., Mu, L., Goessling, H., Semmler, T., and Nerger, L.: Strongly coupled data assimilation with the coupled ocean-atmosphere model AWI-CM: comparison with the weakly coupled data assimilation, EGU General Assembly 2021, online, 19–30 Apr 2021, EGU21-14181, https://doi.org/10.5194/egusphere-egu21-14181, 2021.

11:14–11:16
|
EGU21-9306
Vassili Kitsios, Paul Sandery, Terence O'Kane, and Russell Fiedler

Coupled general circulation models (GCMs) of the atmosphere, ocean, land and sea-ice have many parameters. Some govern the numerics of the dynamical core, whilst others represent the influence of unresolved subgrid processes based on our current fundamental physical understanding. The spatio-temporal structures of many of these parameters are known with little precision, which contributes to the inherent model biases of the underlying GCM. To address this problem we use the CSIRO Climate re-Analysis and Forecast Ensemble (CAFE) system to estimate both the climate state (atmosphere, ocean, sea-ice) and spatio-temporally varying parameter maps of the ocean surface albedo and shortwave radiation e-folding length scale in a coupled climate GCM of CMIP resolution and complexity. The CAFE system adopts a 96-member ensemble transform Kalman filter within a strongly coupled data assimilation (DA) framework. The parameters (and states) are determined by minimising the error between short-term DA-cycle forecasts of the climate model and a network of real-world atmospheric, oceanic, and sea-ice observations. Several DA cycle lengths between 3 and 28 days are tested. The DA system has an improved fit to observations over the period from 2010 to 2012 when estimating the two ocean optical parameters either individually or simultaneously. However, only individually estimated maps of the shortwave e-folding length scale attain systematically reduced bias in multi-year climate forecasts during the out-of-sample period from 2012 to 2020. Parameter maps determined from longer DA cycle lengths also have further reduced multi-year forecast bias. Such improved climate forecasts would potentially enable policy makers to make better informed decisions on water, energy and agricultural infrastructure and planning.

How to cite: Kitsios, V., Sandery, P., O'Kane, T., and Fiedler, R.: Strongly coupled ensemble transform Kalman filter estimation of ocean optical parameters in a coupled GCM, EGU General Assembly 2021, online, 19–30 Apr 2021, EGU21-9306, https://doi.org/10.5194/egusphere-egu21-9306, 2021.

11:16–11:18
|
EGU21-8360
|
ECS
Luke Phillipson, Yi Li, and Ralf Toumi

The forecast of tropical cyclone (TC) intensity is a significant challenge.  In this study, we showcase the impact of strongly coupled data assimilation with hypothetical ocean currents on analyses and forecasts of Typhoon Hato (2017). 

Several observing system simulation experiments were undertaken with a regional coupled ocean-atmosphere model. We assimilated, in combination or individually, a hypothetical coastal-current HF radar network, a dense array of drifter floats, and minimum sea-level pressure. During the assimilation, instant updates of many important atmospheric variables (winds and pressure) are achieved from the assimilation of ocean-current observations using the cross-domain error covariance, significantly improving the track and intensity analysis of Typhoon Hato. Compared to a control experiment (with no assimilation), the error in minimum pressure decreased by up to 13 hPa (4 hPa / 57% on average). The maximum wind speed error decreased by up to 18 knots (5 knots / 41% on average).

By contrast, weakly coupled implementations cannot match these reductions (10% on average). Although traditional atmospheric observations were not assimilated, such improvements indicate there is considerable potential in assimilating ocean currents from coastal HF radar, and surface drifters within a strongly coupled framework for intense landfalling TCs.

How to cite: Phillipson, L., Li, Y., and Toumi, R.: Strongly Coupled Assimilation of a Hypothetical Ocean Current Observing Network within a Regional Ocean-Atmosphere Coupled Model: An OSSE Case Study of Typhoon Hato, EGU General Assembly 2021, online, 19–30 Apr 2021, EGU21-8360, https://doi.org/10.5194/egusphere-egu21-8360, 2021.

Machine learning
11:18–11:20
|
EGU21-10475
|
ECS
Jianyu Liang, Koji Terasaki, and Takemasa Miyoshi

The ‘observation operator’ is essential in data assimilation (DA) to derive the model equivalent of the observations from the model variables. For satellite radiance observations, it is usually based on a complex radiative transfer model (RTM) with a bias-correction procedure. Therefore, it usually takes time to start using new satellite data after a satellite is launched. Here we take advantage of the recent fast development of machine learning (ML), which is good at finding complex relationships within data. ML can potentially be used as the ‘observation operator’ to reveal the relationships between the model variables and the observations without knowing their physical relationships. In this study, we test with the numerical weather prediction system composed of the Nonhydrostatic Icosahedral Atmospheric Model (NICAM) and the Local Ensemble Transform Kalman Filter (LETKF). We focus on the satellite microwave brightness temperature (BT) from the Advanced Microwave Sounding Unit-A (AMSU-A). Conventional observations and AMSU-A data were assimilated every 6 hours. The reference DA system employed the observation operator based on RTTOV and an online bias-correction method.

We used this reference system to generate 1 month of data to train the machine learning model. Since the reference system includes running a physically-based RTM, we implicitly used information from the RTM when training the ML model in this study, although in future research we will explore methods that do not use an RTM. The machine learning model is an artificial neural network with 5 fully connected layers. The input of the ML model includes the NICAM model variables and predictors for bias correction, and the output of the ML model is the corresponding satellite BT in 3 channels from 5 satellites. Next, we ran the DA cycle for the same month of the following year to test the performance of the ML model. Two experiments were conducted. The control experiment (CTRL) was performed with the reference system. In the test experiment (TEST), the ML model was used as the observation operator, and there is no separate bias-correction procedure since the training includes the biased differences between the model and the observations. The results showed no significant bias of the BT simulated by the ML model. Using the ECMWF global atmospheric reanalysis (ERA-Interim) as a benchmark to evaluate the analysis accuracy, the global-mean RMSE, bias, and ensemble spread for temperature in TEST are 2% higher, 4% higher, and 1% lower, respectively, than those in CTRL. This result is encouraging since our ML model can emulate the RTM. The limitation of our study is that we rely on the physically-based RTM in the reference DA system, which is used for training the ML model. These are first and still preliminary results. We are currently considering other methods to train the ML model without using the RTM at all.
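
A much-simplified sketch of the idea, with a linear least-squares fit standing in for the 5-layer network and a synthetic affine map standing in for the RTM (all names and numbers are invented):

```python
import numpy as np

# Learn an "observation operator" mapping model variables to brightness
# temperature from (state, BT) training pairs. Including a constant
# predictor lets the fit absorb a systematic bias, mimicking how training
# on biased model-observation differences removes the need for a separate
# bias-correction step.

rng = np.random.default_rng(5)
n_train, n_vars = 500, 4

states = rng.standard_normal((n_train, n_vars))   # model-variable predictors
true_weights = np.array([1.5, -0.7, 0.2, 0.05])   # synthetic "RTM" response
bias = 1.2                                        # systematic offset
bt = states @ true_weights + bias + 0.05 * rng.standard_normal(n_train)

design = np.hstack([states, np.ones((n_train, 1))])   # append constant predictor
weights, *_ = np.linalg.lstsq(design, bt, rcond=None)

new_state = np.array([0.3, -1.0, 0.5, 0.0, 1.0])  # last entry: constant term
bt_pred = new_state @ weights
print("learned weights:", weights)
```

The fit recovers both the response and the offset from data alone; the abstract's neural network plays the same role for the strongly nonlinear, multichannel RTM case.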

How to cite: Liang, J., Terasaki, K., and Miyoshi, T.: A machine learning approach to the observation operator for satellite radiance data assimilation, EGU General Assembly 2021, online, 19–30 Apr 2021, EGU21-10475, https://doi.org/10.5194/egusphere-egu21-10475, 2021.

11:20–11:22
|
EGU21-7432
|
ECS
Quentin Malartic, Marc Bocquet, and Alban Farchi

In a recent methodological paper, we have shown how a (local) ensemble Kalman filter can be used to learn both the state and the dynamics of a system in an online framework. The surrogate model is fully parametrised (for example, this could be a neural network) and the update is a two-step process: (i) a state update, possibly localised, and (ii) a parameter update consistent with the state update. In this framework, the parameters of the surrogate model are assumed to be global.

In this presentation, we show how to extend the method to the case where the surrogate model, still fully parametrised, admits both global and local parameters (typically forcing parameters). In this case, localisation can be applied not only to the state update, but also to the local parameters update. This results in a collection of new algorithms, depending on the localisation method (covariance localisation or domain localisation) and on whether localisation is applied to the state update, or to both the state and local parameter update. The algorithms are implemented and tested with success on the 40-variable Lorenz model. Finally, we show a two-dimensional illustration of the method using a multi-layer Lorenz model with radiance-like non-local observations.
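
The augmented-state mechanism behind such joint state-parameter updates can be sketched with a scalar model and a single global parameter (illustrative; the localisation schemes and surrogate-model parametrisation of the presentation are not shown):

```python
import numpy as np

# Joint state-parameter estimation with a stochastic EnKF: the parameter
# theta is appended to the state, so the state analysis increment reaches
# it through the sampled cross covariance. Model: x' = theta * x + 1,
# with only x observed.

rng = np.random.default_rng(6)
m, theta_true, n_cycles = 50, 0.9, 40
obs_err = 0.1

x_true = 1.0
x_ens = 1.0 + 0.1 * rng.standard_normal(m)
theta_ens = 0.5 + 0.2 * rng.standard_normal(m)    # poor initial guess

for _ in range(n_cycles):
    # Forecast: each member uses its own parameter value
    x_true = theta_true * x_true + 1.0
    x_ens = theta_ens * x_ens + 1.0
    # Analysis: update the augmented state [x, theta] from an obs of x
    y = x_true + obs_err * rng.standard_normal()
    dx = x_ens - x_ens.mean()
    dth = theta_ens - theta_ens.mean()
    var_y = np.mean(dx * dx) + obs_err**2
    innov = y + obs_err * rng.standard_normal(m) - x_ens   # perturbed obs
    x_ens = x_ens + (np.mean(dx * dx) / var_y) * innov
    theta_ens = theta_ens + (np.mean(dx * dth) / var_y) * innov

print("estimated theta:", theta_ens.mean(), " truth:", theta_true)
```

Because members with larger theta produce larger forecasts of x, the cross covariance is informative and the parameter converges toward the truth over the cycles; local parameters would additionally require the localised variants discussed in the abstract.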

How to cite: Malartic, Q., Bocquet, M., and Farchi, A.: State, global and local parameter estimation using local ensemble Kalman filters: applications to online machine learning of chaotic dynamics, EGU General Assembly 2021, online, 19–30 Apr 2021, EGU21-7432, https://doi.org/10.5194/egusphere-egu21-7432, 2021.

11:22–11:24
|
EGU21-15678
Ronan Fablet, Bertrand Chapron, Lucas Drumetz, Etienne Memin, Olivier Pannekoucke, and François Rousseau

This paper addresses representation learning for the resolution of inverse problems with geophysical dynamics. Examples of inverse problems of interest include, among others, space-time interpolation, short-term forecasting, conditional simulation with respect to available observations, and downscaling problems. From a methodological point of view, we rely on a variational data assimilation framework. Data assimilation (DA) aims to reconstruct the time evolution of some state given a series of observations, possibly noisy and irregularly sampled. Here, we investigate DA from a machine learning point of view backed by an underlying variational representation. Using automatic differentiation tools embedded in deep learning frameworks, we introduce end-to-end neural network architectures for variational data assimilation. Each comprises two key components: a variational model and a gradient-based solver, both implemented as neural networks. A key feature of the proposed end-to-end learning architecture is that the neural network models may be trained using both supervised and unsupervised strategies. We first illustrate applications to the reconstruction of Lorenz-63 and Lorenz-96 systems from partial and noisy observations. Whereas the gain delivered by the supervised learning setting emphasizes the relevance of ground-truthed observation datasets for real-world case studies, these results also suggest new means to design data assimilation models from data. In particular, they suggest that learning task-oriented representations of the underlying dynamics may be beneficial. We further discuss applications to short-term forecasting and sampling design, along with preliminary results for the reconstruction of sea surface currents from satellite altimetry data.
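The variational backbone referred to above can be illustrated with a toy quadratic cost and a plain gradient-descent solver, which is the role the trained neural solver plays in the paper. All symbols below (xb, Binv, H, y, Rinv) are generic 3D-Var-style placeholders, not the authors' implementation.

```python
# Toy variational cost J(x) = 1/2 |x - xb|^2_{B^-1} + 1/2 |Hx - y|^2_{R^-1}
# minimised by fixed-step gradient descent. In the end-to-end approach, the
# update rule inside `solve` is itself replaced by a trained neural network.
import numpy as np

def make_cost(xb, Binv, H, y, Rinv):
    def J(x):
        dx, dy = x - xb, H @ x - y
        return 0.5 * dx @ Binv @ dx + 0.5 * dy @ Rinv @ dy
    def gradJ(x):
        return Binv @ (x - xb) + H.T @ Rinv @ (H @ x - y)
    return J, gradJ

def solve(J, gradJ, x0, lr=0.1, steps=500):
    x = x0.copy()
    for _ in range(steps):
        x = x - lr * gradJ(x)   # a learned solver would replace this rule
    return x
```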

This abstract is supported by a preprint available online: https://arxiv.org/abs/2007.12941

How to cite: Fablet, R., Chapron, B., Drumetz, L., Memin, E., Pannekoucke, O., and Rousseau, F.: Jointly learning variational data assimilation models and solvers for geophysical dynamics, EGU General Assembly 2021, online, 19–30 Apr 2021, EGU21-15678, https://doi.org/10.5194/egusphere-egu21-15678, 2021.

11:24–11:26
|
EGU21-2585
|
ECS
Sébastien Barthélémy, Julien Brajard, and Laurent Bertino

Going from low- to high-resolution models is an efficient way to improve the data assimilation process in three ways: it makes better use of high-resolution observations, it represents the small-scale features of the dynamics more accurately, and it provides a high-resolution field that can further be used as the initial condition of a forecast. Of course, the pitfall of such an approach is the cost of computing a forecast with a high-resolution numerical model. This drawback is even more acute when using an ensemble data assimilation approach, such as the ensemble Kalman filter, for which an ensemble of forecasts must be issued by the numerical model.

In our approach, we propose to use a cheap low-resolution model to provide the forecast while still performing the assimilation step in a high-resolution space. The principle of the algorithm is based on a machine learning approach: from a low-resolution forecast, a neural network (NN) emulates a high-resolution field that can then be used to assimilate high-resolution observations. This NN super-resolution operator is trained on a single high-resolution simulation. The new data assimilation approach, denoted "super-resolution data assimilation" (SRDA), is built on an ensemble Kalman filter (EnKF) algorithm.

We applied SRDA to a quasi-geostrophic model representing simplified ocean dynamics of the surface layer, with a resolution up to four times coarser than the reference high resolution (so that the cost of the model is divided by 64). We show that this approach outperforms both the standard low-resolution data assimilation approach and an SRDA variant using standard interpolation instead of a neural network as the super-resolution operator. For the reduced cost of a low-resolution model, SRDA provides a high-resolution field with an error close to that of the field that would be obtained using a high-resolution model.
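The structure of one SRDA cycle can be sketched as follows, with a fixed nearest-neighbour upsampling standing in for the trained neural network and a scalar nudging step standing in for the EnKF analysis. Everything here is an illustrative simplification of the algorithm's flow, not the authors' code.

```python
# Schematic SRDA cycle: forecast at low resolution, emulate a high-resolution
# field, assimilate high-resolution observations, project back. The upsampling
# and the nudging analysis are crude stand-ins for the NN and the EnKF.
import numpy as np

def upsample(x_lr, factor=4):
    """Stand-in for the NN super-resolution operator (nearest-neighbour)."""
    return np.repeat(x_lr, factor)

def downsample(x_hr, factor=4):
    return x_hr.reshape(-1, factor).mean(axis=1)

def srda_step(x_lr, y_hr, forecast_lr, alpha=0.5, factor=4):
    x_lr = forecast_lr(x_lr)                 # cheap low-resolution forecast
    x_hr = upsample(x_lr, factor)            # emulate the high-resolution field
    x_hr = x_hr + alpha * (y_hr - x_hr)      # assimilate high-res observations
    return downsample(x_hr, factor)          # back to low resolution for the next cycle
```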

How to cite: Barthélémy, S., Brajard, J., and Bertino, L.: High-resolution Ensemble Kalman Filter with a low-resolution model using a machine learning super-resolution approach, EGU General Assembly 2021, online, 19–30 Apr 2021, EGU21-2585, https://doi.org/10.5194/egusphere-egu21-2585, 2021.

11:26–11:28
|
EGU21-3560
|
Highlight
Georg Gottwald and Sebastian Reich

Data-driven prediction and physics-agnostic machine-learning methods have attracted increased interest in recent years, achieving forecast horizons going well beyond those to be expected for chaotic dynamical systems. In a separate strand of research, data assimilation has been successfully used to optimally combine forecast models, and their inherent uncertainty, with incoming noisy observations. The key idea of our work is to achieve increased forecast capabilities by judiciously combining machine-learning algorithms and data assimilation. We use the physics-agnostic, data-driven approach of random feature maps as a forecast model within an ensemble Kalman filter data assimilation procedure. The machine-learning model is learned sequentially by incorporating incoming noisy observations. We show that the obtained forecast model has remarkably good forecast skill while being computationally cheap once trained. Going beyond the task of forecasting, we show that our method can be used to generate reliable ensembles for probabilistic forecasting as well as to learn effective model closures in multi-scale systems.
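The random-feature-map idea can be sketched as follows: inner weights are drawn once and frozen, and only the linear readout is fitted, here by one-shot ridge regression rather than the sequential EnKF-based learning used in the paper. Feature counts, weight scales, and the target function are illustrative.

```python
# Random feature maps: phi(x) = tanh(Wx + b) with W, b fixed at random;
# only the linear readout C is fitted (here by ridge regression).
import numpy as np

rng = np.random.default_rng(2)

def random_features(X, W, b):
    """Features are never trained; W and b are drawn once."""
    return np.tanh(X @ W.T + b)

def fit_readout(X, Y, W, b, reg=1e-6):
    """Ridge-regression readout; the paper instead updates it sequentially."""
    Phi = random_features(X, W, b)
    return np.linalg.solve(Phi.T @ Phi + reg * np.eye(Phi.shape[1]), Phi.T @ Y)

# Illustrative 1-D regression task.
X = rng.uniform(-1.0, 1.0, (200, 1))
Y = np.sin(2.0 * X)
W = 2.0 * rng.standard_normal((50, 1))
b = rng.uniform(-1.0, 1.0, 50)
C = fit_readout(X, Y, W, b)
```

In the combined scheme, the readout weights are part of the quantities updated by the ensemble Kalman filter as observations arrive.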

How to cite: Gottwald, G. and Reich, S.: Supervised learning from noisy observations: Combining machine-learning techniques with data assimilation, EGU General Assembly 2021, online, 19–30 Apr 2021, EGU21-3560, https://doi.org/10.5194/egusphere-egu21-3560, 2021.

11:28–11:30
|
EGU21-14158
|
ECS
Doğukan Durdağ and Ertan Pekşen

Several parameters affect the resistivity values measured with the electrical resistivity method, one of the most fundamental methods in near-surface geophysics. One of these parameters is electrical anisotropy, defined as the change in resistivity depending on direction. The anisotropy coefficient is calculated as the square root of the ratio of the vertical resistivity to the horizontal resistivity of the layer, and the average resistivity in anisotropic media is the geometric mean of the vertical and horizontal resistivities of the layer. Artificial Neural Networks (ANN) are used in many different areas for learning, classification, generalization, optimization, etc., and can be used to estimate the thickness and the vertical and horizontal resistivity values of layers. In this study, a MATLAB code was developed for the inversion of one-dimensional electrical resistivity data in anisotropic media using artificial neural networks. The Neural Network Toolbox of MATLAB was utilized in the developed program. The code was tested on both noise-free and five-percent-noisy synthetic data. The thicknesses and the vertical and horizontal resistivities of the layers were estimated using the code, and the mean resistivity values and anisotropy coefficients of each layer were calculated from the estimated parameters. The estimated parameters and the parameters of the subsurface model were similar, with acceptable error rates.
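The two layer quantities defined above follow directly from the stated formulas; a minimal sketch (function names are ours; the authors work in MATLAB):

```python
# Anisotropy coefficient and mean resistivity of a layer, as defined in the
# abstract: sqrt(rho_v / rho_h) and the geometric mean sqrt(rho_v * rho_h).
import math

def anisotropy_coefficient(rho_v, rho_h):
    """Square root of vertical over horizontal resistivity."""
    return math.sqrt(rho_v / rho_h)

def mean_resistivity(rho_v, rho_h):
    """Geometric mean of vertical and horizontal resistivity."""
    return math.sqrt(rho_v * rho_h)

# e.g. rho_v = 400 ohm-m, rho_h = 100 ohm-m
# -> coefficient 2.0, mean resistivity 200.0 ohm-m
```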

How to cite: Durdağ, D. and Pekşen, E.: Inversion of One Dimensional Electrical Resistivity Data in Anisotropic Media via Artificial Neural Networks, EGU General Assembly 2021, online, 19–30 Apr 2021, EGU21-14158, https://doi.org/10.5194/egusphere-egu21-14158, 2021.

11:30–11:32
|
EGU21-913
A Neural Network-Based Observation Operator for Coupled Ocean-Acoustic Variational Data Assimilation
(withdrawn)
Andrea Storto, Giovanni De Magistris, Silvia Falchetti, and Paolo Oddo
11:32–11:34
|
EGU21-4350
|
ECS
Yvonne Ruckstuhl, Tijana Janjic, and Stephan Rasp

In previous work, it was shown that preserving physical properties in the data assimilation framework can significantly reduce forecast errors. Data assimilation methods that can impose such constraints on the calculation of the analysis, such as the quadratic programming ensemble (QPEns), are computationally more expensive, which severely limits their application to the high-dimensional prediction systems found in the earth sciences. We therefore propose to use a convolutional neural network (CNN), trained on the difference between the analyses produced by a standard ensemble Kalman filter (EnKF) and by the QPEns, to correct any violations of the imposed constraints. In this poster, we focus on conservation of mass and show in an idealized setup that the hybrid of a CNN and the EnKF is capable of reducing analysis and background errors to the same level as the QPEns.
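The mass constraint itself is easy to state. As a minimal illustration (not the authors' CNN), the sketch below uniformly redistributes any surplus so the analysis keeps a prescribed total mass; the trained CNN learns a spatially aware version of this kind of correction from EnKF-vs-QPEns analysis pairs.

```python
# Simplest possible mass-conserving correction of an analysis field: remove
# the total-mass surplus uniformly. A CNN would distribute the correction
# according to the spatial structure it learned instead.
import numpy as np

def enforce_total_mass(analysis, target_mass):
    """Shift the field uniformly so that sum(analysis) equals target_mass."""
    surplus = analysis.sum() - target_mass
    return analysis - surplus / analysis.size
```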

How to cite: Ruckstuhl, Y., Janjic, T., and Rasp, S.: Training a convolutional neural network to conserve mass in data assimilation, EGU General Assembly 2021, online, 19–30 Apr 2021, EGU21-4350, https://doi.org/10.5194/egusphere-egu21-4350, 2021.

11:34–11:36
|
EGU21-10932
Marcin Chrust, Massimo Bonavita, and Patrick Laloyaux

In both Numerical Weather Prediction and Climate Prediction, achieving improved accuracy and reliability is fundamentally dependent on identifying the sources and reducing the effects of model error. It has been recently demonstrated (Laloyaux et al., 2020) that weak constraint 4D-Var can estimate and correct for a large fraction of model error in the stratosphere, where the current global observing system is sufficiently dense and homogeneous. Accounting for the model error in the entire atmospheric column, specifically in the troposphere, remains challenging due to the difficulty in disentangling different sources of errors with similar spatial scales, and is the focus of current research.

In this work we demonstrate how Deep Learning techniques can be applied to the problem of estimation and online correction of model error. Recent results (Bonavita and Laloyaux, 2020) in the ECMWF Integrated Forecasting System (IFS) have shown that model error can be learned by an Artificial Neural Network (ANN) and applied in a weak constraint 4D-Var data assimilation framework as a model tendency forcing term. Moreover, the error estimation can extend to the whole atmospheric column and result in significantly improved analyses and forecasts. We have recently implemented in the ECMWF IFS the capability of applying such ANN-based model error corrections online. This allows us to extend the application of the ANN-based model error parameterization from the data assimilation cycle to the long forecast step, where a model error tendency correction is continuously estimated and applied as a model forcing. We show preliminary results of the experiments conducted in the IFS framework and discuss our current understanding of the advantages and limitations of these techniques.
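The way an online model-error term enters the forecast can be sketched as an additive tendency forcing in a single integration step; `model_tendency` and `ann_correction` below are hypothetical placeholders for the IFS tendency and the trained ANN, and the forward-Euler step is purely illustrative.

```python
# One time step with an online model-error forcing: the learned correction is
# added to the model tendency, exactly as a forcing term, at every step.
import numpy as np

def step_with_forcing(x, dt, model_tendency, ann_correction):
    """Forward-Euler step with an additive, state-dependent error correction."""
    return x + dt * (model_tendency(x) + ann_correction(x))
```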

How to cite: Chrust, M., Bonavita, M., and Laloyaux, P.: Correcting model error with an online Artificial Neural Network, EGU General Assembly 2021, online, 19–30 Apr 2021, EGU21-10932, https://doi.org/10.5194/egusphere-egu21-10932, 2021.

11:36–11:38
|
EGU21-9566
Truong-Vinh Hoang, Sebastian Krumscheid, and Raul Tempone

Filtering is an uncertainty quantification technique that refers to the inference of the states of dynamical systems from noisy observations. This work proposes a machine-learning-based filtering method for tracking high-dimensional non-Gaussian state-space models with non-linear dynamics and sparse observations. Our filter is based on the conditional mean filter and uses machine-learning techniques to approximate the conditional mean (CM). The contribution of this work is twofold: (i) we demonstrate theoretically that the assimilated ensembles obtained using the ensemble conditional mean filter (EnCMF) provide a correct prediction of the posterior mean and have the optimal variance, and (ii) we implement the EnCMF using artificial neural networks, which have a significant advantage in representing non-linear functions that map between high-dimensional domains, such as the CM. We implement the machine-learning-based EnCMF for tracking the states of the Lorenz-63 and Lorenz-96 systems in the chaotic regime. Numerical results show that the EnCMF outperforms the ensemble Kalman filter.
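The EnCMF update can be sketched with a linear stand-in for the learned conditional-mean map (the paper uses a neural network for it): each member is shifted by M(y) - M(y_i), where y_i is its perturbed predicted observation. Shapes and names below are our assumptions; with a linear map M this reduces to a stochastic EnKF-style update.

```python
# Sketch of an ensemble conditional-mean-filter update. `cm_map` approximates
# the conditional mean of the state given an observation; here it can be any
# callable mapping (m, k) observation columns to (n, k) state columns.
import numpy as np

rng = np.random.default_rng(3)

def encmf_update(X, y, H, R, cm_map):
    """X: (n, Ne) forecast ensemble, y: (m,) observation."""
    Ne = X.shape[1]
    # Perturbed predicted observations, one per member.
    Yp = H @ X + rng.multivariate_normal(np.zeros(len(y)), R, Ne).T
    # Shift each member by the conditional-mean increment.
    return X + cm_map(y[:, None]) - cm_map(Yp)
```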

How to cite: Hoang, T.-V., Krumscheid, S., and Tempone, R.: Machine learning based conditional mean filter: a non-linear extension of the ensemble Kalman filter, EGU General Assembly 2021, online, 19–30 Apr 2021, EGU21-9566, https://doi.org/10.5194/egusphere-egu21-9566, 2021.

11:38–11:40
|
EGU21-6036
|
ECS
Lucia Yang and Ian Grooms

We propose to use analogs of the forecast mean to generate an ensemble of perturbations for use in ensemble optimal interpolation (EnOI) or ensemble variational (EnVar) methods.  In addition to finding analogs from a library, we propose a new method of constructing analogs using autoencoders (a machine learning method).  To extend the scalability of constructed analogs for use in data assimilation on geophysical models, we propose using patching schemes to divide the global spatial domain into digestible chunks.  Using patches makes training the generative models possible and has the added benefit of being able to exploit parallel computing power.  The resulting analog methods using analogs from a catalog (AnEnOI), constructed analogs (cAnEnOI), and patched constructed analogs (p-cAnEnOI) are tested in the context of a multiscale Lorenz-96 model, with standard EnOI and an ensemble square root filter for comparison.  The use of analogs from a modestly sized catalog is shown to improve the performance of EnOI, with limited marginal improvements resulting from increases in the catalog size.  The method using constructed analogs is found to perform as well as a full ensemble square root filter, and to be robust over a wide range of tuning parameters.  Lastly, we find that p-cAnEnOI with larger patches produces the best data assimilation performance despite having larger reconstruction errors.  All patch variants except the one using the smallest patch size outperform cAnEnOI, as well as some traditional data assimilation methods such as the ensemble square root filter.
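The patching idea can be sketched on a one-dimensional field: the domain is split into fixed-size chunks that the generative model (the autoencoder) sees independently, then reassembled. The patch size and the absence of overlap are illustrative choices, not the authors' exact scheme.

```python
# Decompose a global field into non-overlapping patches and reassemble it.
# Each patch could be encoded/decoded independently (and in parallel) by an
# autoencoder trained only on patch-sized inputs.
import numpy as np

def to_patches(x, patch):
    """Split a 1-D field whose length is a multiple of `patch`."""
    return x.reshape(-1, patch)

def from_patches(P):
    """Concatenate patches back into the global field."""
    return P.reshape(-1)
```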

How to cite: Yang, L. and Grooms, I.: Using machine learning techniques to generate analog ensembles for data assimilation, EGU General Assembly 2021, online, 19–30 Apr 2021, EGU21-6036, https://doi.org/10.5194/egusphere-egu21-6036, 2021.

11:40–12:30