The analysis of the Earth's gravity and magnetic fields is becoming increasingly important in the geosciences. Modern satellite missions continue to provide data with ever-improving accuracy and nearly global, time-dependent coverage. The gravitational field plays an important role in climate research, as a record of and reference for the observation of mass transport. The study of the Earth's magnetic field and its temporal variations is yielding new insights into the behavior of its internal and external sources. Both gravity and magnetic data also constitute primary sources of information for the global characterization of other planets. Hence, there continues to be a need to develop new methods of analysis, at the global and local scales, and especially at their interface. For over two decades now, methods that combine global with local sensitivity, often in a multiresolution setting, have been developed: these include wavelets, radial basis functions, Slepian functions, splines, spherical cap harmonics, etc. One purpose of this session is to provide a forum for the exchange of research projects, whether related to forward or inverse modeling, theoretical, computational, or observational studies.
Besides monitoring the variations of the gravity and magnetic fields, space geodetic techniques deliver time series describing changes of the surface geometry, sea level variations, or fluctuations in the Earth's orientation. However, geodetic observation systems usually measure an integral effect. Analysis methods therefore have to be applied to the geodetic time series for a better understanding of the relations between and within the components of the Earth system. The combination of data from various space geodetic and remote sensing techniques may allow the integral measurements to be separated into the individual contributions of the Earth system components. Presentations on time-frequency analysis, on the detection of features of the temporal or spatial variability of signals present in geodetic data and in geophysical models, and on signal separation techniques, e.g., EOF, are highly welcome. We further solicit papers on different prediction techniques, e.g., least-squares, neural networks, Kalman filtering, or uni- or multivariate autoregressive methods, to forecast Earth Orientation Parameters, which are needed for the real-time transformation between celestial and terrestrial reference frames.

Co-organized by EMRP2
Convener: Volker Michel | Co-conveners: Katrin Bentel, Christian Gerhards, Wieslaw Kosek, Michael Schmidt
Attendance Tue, 05 May, 14:00–15:45 (CEST)

Chat time: Tuesday, 5 May 2020, 14:00–15:45

Chairperson: Christian Gerhards and Volker Michel
D1721 |
Balint Magyar, Ambrus Kenyeres, Sandor Toth, and Istvan Hajdu

GNSS velocity field filtering can be identified as a multi-dimensional unsupervised spatial outlier detection problem. In the discussed case, we jointly interpreted the horizontal and vertical velocity fields and their uncertainties as a six-dimensional space. To detect and classify the spatial outliers, we performed an orthogonal linear transformation technique called Principal Component Analysis (PCA) to dynamically project the data to a lower-dimensional subspace, while retaining most (~99%) of the explained variance of the input data.

Therefore, the resulting component space can be seen as an attribute function describing the investigated deformation patterns. We then constructed two subspace mapping functions: a k-nearest neighbor (k-NN), median-based neighbor function using the Haversine metric, and a samplewise comparison function that compares each sample with the properties of its k-NN environment. The resulting comparison scores highlight significantly different observations as outliers. Assuming that the data come from a multivariate Gaussian distribution (MVD), we evaluated the corresponding Mahalanobis distance using a robust estimate of the covariance matrix of the investigated area. Then, as the main result of the robust Mahalanobis distance (RMD) based approach, we implemented the binary classification via p-value and critical Mahalanobis distance thresholding.
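The RMD classification step can be sketched as follows, here on synthetic two-dimensional data rather than the six-dimensional velocity space of the abstract; the use of scikit-learn's Minimum Covariance Determinant estimator and a chi-square critical value is an illustrative assumption, not necessarily the authors' implementation:

```python
import numpy as np
from scipy.stats import chi2
from sklearn.covariance import MinCovDet

rng = np.random.default_rng(0)

# Synthetic 2-D "velocity" samples: an inlier cloud plus a few spatial outliers.
inliers = rng.normal(0.0, 1.0, size=(200, 2))
outliers = rng.normal(8.0, 1.0, size=(5, 2))
X = np.vstack([inliers, outliers])

# Robust covariance via the Minimum Covariance Determinant estimator.
mcd = MinCovDet(random_state=0).fit(X)
rmd2 = mcd.mahalanobis(X)  # squared robust Mahalanobis distances

# Chi-square critical value for the binary classification (p = 0.975, d.o.f. = 2).
threshold = chi2.ppf(0.975, df=X.shape[1])
is_outlier = rmd2 > threshold

print(is_outlier[-5:])  # the injected outliers are flagged
```

Because the covariance is estimated robustly, the outliers themselves do not inflate the distance metric that is used to detect them.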

Compared to the formerly investigated and applied One-Class Support Vector Machine (OCSVM) approach, the RMD-based solution gives ~17% more accurate results for European-scale velocity field filtering (such as EPN D1933), and it corrects the ambiguities and undesired features (such as overfitting) of the former OCSVM approach.

The results will also be presented as an interactive web page showing the velocity fields of the latest version of EPN D2050 filtered with the introduced RMD approach.

How to cite: Magyar, B., Kenyeres, A., Toth, S., and Hajdu, I.: Robust Mahalanobis-distance based spatial outlier detection on discrete GNSS velocity fields, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-3566, https://doi.org/10.5194/egusphere-egu2020-3566, 2020.

D1722 |
Yener Turen and Dogan Ugur Sanli

In this study, we assess the accuracy of deformation rates produced from GNSS campaign measurements sampled at different frequencies. The ideal sampling frequency appears to be 1 measurement per month; however, this is usually found to be cumbersome. Alternatively, sampling is performed at 3 measurements per year and time series analyses are carried out. We used continuous GPS time series from JPL, NASA for a global network of IGS stations and decimated the data down to four-monthly synthetic GNSS campaign time series. The minimum data period was taken to be 4 years, following suggestions from the literature. Furthermore, the effect of antenna set-up errors in campaign measurements on the estimated trend was taken into account. The accuracy of the deformation rates was then determined taking the site velocities from the ITRF14 solution as the truth. The RMS of monthly velocities agreed well with the white noise error from global studies previously reported in the literature. The RMS of four-monthly deformation rates for horizontal positioning was 0.45 and 0.50 mm/yr for the north and east components, respectively, whereas the accuracy of vertical deformation rates was found to be 1.73 mm/yr. This is slightly greater than the average level of the white noise error from a previously produced global solution, in which antenna set-up errors were not considered. Antenna set-up errors in campaign measurements modified the above error levels to 0.75 and 0.70 mm/yr for the north and east horizontal components, respectively, whereas the accuracy of the vertical component shifted slightly to 1.79 mm/yr.
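The decimation-and-trend experiment described above can be illustrated with a minimal synthetic sketch; the rate, noise level, and sampling interval below are arbitrary stand-ins, not values from the study:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic daily position series: linear trend (mm/yr) plus white noise (mm).
true_rate = 2.0          # mm/yr
noise_sigma = 3.0        # mm, per-epoch white noise
days = np.arange(4 * 365)            # 4 years of daily data
t_yr = days / 365.25
daily = true_rate * t_yr + rng.normal(0.0, noise_sigma, size=days.size)

def fitted_rate(t, y):
    """Least-squares linear trend (slope, here in mm/yr)."""
    A = np.column_stack([t, np.ones_like(t)])
    slope, _ = np.linalg.lstsq(A, y, rcond=None)[0]
    return slope

# Decimate to a campaign-style series: 3 epochs per year (every ~122 days).
idx = days[::122]
campaign_rate = fitted_rate(t_yr[idx], daily[idx])

print(round(fitted_rate(t_yr, daily), 2), round(campaign_rate, 2))
```

Running such a decimation over many stations and noise realizations gives the RMS of the campaign-style rates against the "true" (continuous) rates, which is the quantity the abstract reports.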

How to cite: Turen, Y. and Sanli, D. U.: Accuracy of GNSS campaign site velocities with respect to ITRF14 solution, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-2101, https://doi.org/10.5194/egusphere-egu2020-2101, 2020.

D1723 |
Muharrem Hilmi Erkoç, Uğur Doğan, Seda Özarpacı, Hasan Yildiz, and Erdinç Sezen

This study aims to estimate vertical land motion (VLM) at tide gauges (TG) located on the Mediterranean, Aegean, and Marmara Sea coasts of Turkey from differences between multimission satellite altimetry and TG sea level time series. Initially, relative sea level trends are estimated at 7 tide gauge stations operated by the Turkish General Directorate of Mapping over the period 2001-2019. Subsequently, absolute sea level trends independent of VLM are computed from multimission satellite altimetry data over the same period. To estimate the VLM at the tide gauges, we computed linear trends of the difference time series between altimetry and tide gauge sea level, after removing seasonal signals from each time series by harmonic analysis. The traditional way of determining VLM at tide gauges is to use GPS@TG or, preferably, CGPS@TG data. We therefore processed the GPS data, collected over the years by several TG-GPS campaigns and by continuous GPS stations close to the TGs, using the GAMIT/GLOBK software. Subsequently, the GPS and CGPS vertical coordinate time series are used to estimate VLM. These two different VLM estimates, one from the GPS and CGPS coordinate time series and the other from the altimetry-TG sea level differences, are compared.


Keywords: Vertical land motion, Sea Level Changes, Tide gauge, Satellite altimetry, GPS, CGPS

How to cite: Erkoç, M. H., Doğan, U., Özarpacı, S., Yildiz, H., and Sezen, E.: Estimation of Vertical Land Motion at the Tide Gauges in Turkey, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-7110, https://doi.org/10.5194/egusphere-egu2020-7110, 2020.

D1724 |
Reginald Muskett and Syun-Ichi Akasofu

Arctic sea ice is a key component of the Arctic hydrologic cycle. This cycle is connected, spatially and temporally, to land and ocean temperature variations and to Arctic snow cover variations. Arctic temperature variations from historical observations show an early 20th century increase (i.e., warming), followed by a period of Arctic temperature decrease (i.e., cooling) from the 1940s, which was followed by another period of Arctic temperature increase since the 1970s that continues into the first two decades of the 21st century. Evidence has been accumulating that Arctic sea ice extent can experience multi-decadal to centennial time scale variations, as it is a component of the Arctic Geohydrological System.

We investigate the multi-satellite, multi-sensor daily values of Arctic sea ice area extent from SMMR on Nimbus 7 (1978) to AMSR2 on GCOM-W1 (2019). From the daily time series we use the first year-cycle as a wave-pattern to compare with all subsequent year-cycles through April 2020 (in progress), and construct a derivative time series. In this time series we find the emergence of a multi-decadal cycle, showing a relative minimum during the period 2007 to 2014 and subsequently rising. This may be related to an 80-year cycle (hypothesis). The Earth's weather system is principally driven by solar radiation and its variations. If the multi-decadal cycle in Arctic sea ice area extent that we interpret continues, it may be linked physically to the Wolf-Gleissberg cycle, a factor in the variations of terrestrial cosmogenic isotopes, ocean sediment layering and glacial varves, ENSO, and the aurora.

Our hypothesis and results provide further evidence that the multi-decadal variation of Arctic sea ice area extent is controlled by natural physical processes of the Sun-Earth system.

How to cite: Muskett, R. and Akasofu, S.-I.: Arctic Ocean Sea Ice Area Extent Cyclicity and Non-Stationarity, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-5915, https://doi.org/10.5194/egusphere-egu2020-5915, 2020.

D1725 |
Christian Gerhards

Recovering the full underlying magnetization from geomagnetic potential field measurements is known to be highly nonunique. Localization constraints on the magnetization can improve this to a certain extent. We present some analytic background as well as some examples of what is meant by 'to a certain extent'. In particular, if no constraints other than spatial localization are imposed, only two out of three components of the vectorial magnetization can be reconstructed uniquely. If it is additionally assumed that the magnetization is induced by an (unknown) ambient dipole field, then the susceptibility and the direction of the ambient dipole can be reconstructed.

How to cite: Gerhards, C.: Hidden Magnetizations and Localization Constraints, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-1891, https://doi.org/10.5194/egusphere-egu2020-1891, 2020.

D1726 |
Florian Faucher, Otmar Scherzer, and Hélène Barucq

We consider the quantitative inverse problem of recovering the subsurface Earth's properties, which relies on an iterative minimization algorithm. Due to the scale of the domains and the lack of a priori information, the problem is severely ill-posed. In this work, we reduce the ill-posedness by using the "regularization by discretization" approach: the wave speed is described by specific bases, which limits the number of coefficients in the representation. These bases are associated with the eigenvectors of a diffusion equation, and we investigate several choices of the PDE, drawn from the field of image processing. We first compare the efficiency of these model descriptors in accurately capturing the variation with a minimal number of coefficients. In the context of subsurface reconstruction, we demonstrate that the method can be employed to overcome the lack of low-frequency content in the data. We illustrate with two- and three-dimensional acoustic experiments.
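The core idea, representing a model in the leading eigenvectors of a diffusion operator so that few coefficients suffice, can be sketched in one dimension; the discrete Laplacian, grid size, and test profile below are illustrative assumptions, not the paper's actual operators:

```python
import numpy as np

n = 200
# Discrete 1-D Laplacian with Neumann-like boundaries: its eigenvectors act
# as a smooth, global basis (the 1-D analogue of a diffusion-equation basis).
L = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
L[0, 0] = L[-1, -1] = 1.0
eigvals, eigvecs = np.linalg.eigh(L)  # columns sorted by increasing eigenvalue

# A smooth "wave-speed" profile to represent.
x = np.linspace(0.0, 1.0, n)
model = 1.5 + 0.5 * np.cos(2 * np.pi * x)

# Keep only the k leading (smoothest) eigenvectors: the truncation itself
# regularizes, since only k coefficients remain to be inverted for.
k = 10
B = eigvecs[:, :k]
coeffs = B.T @ model          # model descriptors
approx = B @ coeffs           # reconstruction from k coefficients

rel_err = np.linalg.norm(approx - model) / np.linalg.norm(model)
print(f"relative error with {k} of {n} coefficients: {rel_err:.4f}")
```

A smooth profile is captured by a handful of coefficients, which is exactly the compression that makes the discretized inverse problem better posed.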

How to cite: Faucher, F., Scherzer, O., and Barucq, H.: Eigenvector Model Descriptors for the Seismic Inverse Problem, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-22224, https://doi.org/10.5194/egusphere-egu2020-22224, 2020.

D1727 |
Lev Chepigo, Lygin Ivan, and Andrey Bulychev

Currently, the most common method of gravity data interpretation is manual fitting. In this case, the density model is divided into many polygons of constant density, and each polygon is edited manually by the interpreter. This approach has two main disadvantages:

- a significant amount of time is needed to build a high-quality density model;

- if the density is not constant within an anomalous object or a layer, the object must be divided into many blocks, which requires additional time, and editing the model during the interpretation process becomes more complicated.

To solve these problems, we can use methods of automatic fitting of the density model (inversion). In this case, it is convenient to divide the model into many identical cells of constant density (a grid), so that solving the inverse problem of gravity reduces to solving a system of linear algebraic equations. To solve this system, it is necessary to construct a loss function that includes terms responsible for the difference between the observed gravity field and the theoretical field, as well as for the difference between the model and a priori data (a regularizer). The problem is then solved using iterative gradient optimization methods (gradient descent, Newton's method, etc.).

However, in this case a problem arises: the final fitted model differs from the initial one by a contrasting near-surface layer, due to the greater influence of the near-surface cells on the loss function, and the deep sources of gravity field anomalies are not included in the inversion. Such models can be used in the processing of gravity data (source-based continuation, filtering), but are useless for solving geological problems.

To take into account the influence of the deep cells of the model, the following solution is proposed: multiply the gradient of the loss function by a normalizing depth function that increases with depth. For example, such a function can be quadratic (this choice is motivated by the fact that gravity is inversely proportional to the square of the distance).

The use of inversion with a normalizing depth function makes it possible to:

- take into account both near-surface and deep sources of gravity anomalies;

- account for the density gradient within layers (since a layer is divided into many cells, whose densities can be different);

- reliably determine singular points of anomalous objects;

- significantly reduce the time of the density model fitting.
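A minimal numerical sketch of the idea, a toy 1-D column of cells under a line of surface stations, with a quadratic depth weight applied to the gradient, can look as follows (the geometry, step size, and iteration count are illustrative assumptions, not the authors' configuration):

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy geometry: a vertical column of density cells below the origin,
# observed by gravity stations along a surface profile.
n_cells, n_obs = 20, 30
depths = np.linspace(1.0, 20.0, n_cells)
x_obs = np.linspace(-10.0, 10.0, n_obs)

# Point-mass forward operator: sensitivity falls off quickly with depth.
G = depths[None, :] / (x_obs[:, None] ** 2 + depths[None, :] ** 2) ** 1.5

# True density contrast: a single deep source.
m_true = np.zeros(n_cells)
m_true[15] = 1.0
d = G @ m_true + rng.normal(0.0, 1e-4, size=n_obs)

def invert(weight, n_iter=1000):
    """Gradient descent on ||G m - d||^2 + alpha ||m||^2; the gradient is
    multiplied elementwise by a (normalized) depth weight."""
    m = np.zeros(n_cells)
    alpha = 1e-8
    step = 1.0 / np.linalg.norm(G, 2) ** 2   # stable step for the plain problem
    w = weight / weight.max()
    for _ in range(n_iter):
        grad = G.T @ (G @ m - d) + alpha * m
        m -= step * w * grad
    return m

m_plain = invert(np.ones(n_cells))
m_depth = invert(depths ** 2)  # quadratic depth normalization

# Mass-weighted mean depth of each recovered model: depth normalization
# places the recovered mass noticeably deeper.
mean_depth = lambda m: float(np.sum(depths * np.abs(m)) / np.sum(np.abs(m)))
print(mean_depth(m_plain), mean_depth(m_depth))
```

Without the weight, early iterations pile density into the shallow cells that dominate the gradient; the quadratic weight counteracts this bias.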

How to cite: Chepigo, L., Ivan, L., and Bulychev, A.: Gravity inversion with depth normalization, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-430, https://doi.org/10.5194/egusphere-egu2020-430, 2020.

D1728 |
Lucia Seoane, Benjamin Beirens, and Guillaume Ramillien

We propose to combine complementary gravity data, i.e., geoid heights and (radial) free-air gravity anomalies, to evaluate the 3-D shape of the sea floor more precisely. For this purpose, an Extended Kalman Filtering (EKF) scheme has been developed that constructs the topographic solution by injecting gravity information progressively. The main advantage of this sequential accumulation of data is the reduction of the dimensions of the inverse problem. Non-linear Newtonian operators have been re-evaluated from their original forms, and elastic compensation of the topography is also taken into account. The efficiency of the method is demonstrated by inverting simulated gravity observations, which converge to a stable topographic solution with an accuracy of only a few meters. Real geoid and gravity data are also inverted to estimate the bathymetry around the New England and Great Meteor seamount chains. The error analysis consists of comparing our topographic solutions to accurate single-beam ship tracks for validation.
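The benefit of injecting observations progressively can be sketched with a plain linear Kalman (recursive least squares) update; the abstract's actual scheme is an EKF with non-linear Newtonian operators, so the linear model below is only an illustrative stand-in:

```python
import numpy as np

rng = np.random.default_rng(3)

# Linear observation model d = A x + noise, processed one observation at a time.
n_par, n_obs = 4, 50
x_true = rng.normal(size=n_par)
A = rng.normal(size=(n_obs, n_par))
d = A @ x_true + rng.normal(0.0, 0.01, size=n_obs)

# Sequential (Kalman-style) update: start from a loose prior and inject each
# observation in turn; no large normal-equation system is ever formed.
x = np.zeros(n_par)
P = 1e6 * np.eye(n_par)     # prior covariance
r2 = 0.01 ** 2              # observation variance
for a, obs in zip(A, d):
    k = P @ a / (a @ P @ a + r2)        # gain
    x = x + k * (obs - a @ x)           # state update
    P = P - np.outer(k, a @ P)          # covariance update

print(np.round(x - x_true, 3))
```

Each update only involves matrices of the size of the state, which is the dimension reduction the sequential scheme exploits.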

How to cite: Seoane, L., Beirens, B., and Ramillien, G.: Sea floor topography modeling by cumulating different types of gravity information, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-3580, https://doi.org/10.5194/egusphere-egu2020-3580, 2020.

D1729 |
Naomi Schneider and Volker Michel

A fundamental problem in the geosciences is the downward continuation of the gravitational potential. It enables us to learn more about the Earth system and, in particular, climate change.

Mathematically, we can model a (downward continued) signal in a 'best basis' consisting of local and global trial functions from a dictionary. In practice, our dictionaries include spherical harmonics, Slepian functions and radial basis functions. The expansion in dictionary elements is obtained by one of the Inverse Problem Matching Pursuit (IPMP) algorithms.

However, the choice of the dictionary remains to be discussed. To address this, we further developed the IPMP algorithms by introducing a learning technique. With this approach, they automatically select a finite number of optimized dictionary elements from infinitely many possible ones. We present the details of our method and give numerical examples.
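The greedy core of a matching pursuit over a mixed global/local dictionary can be sketched as follows; this is a plain 1-D matching pursuit with cosine and Gaussian atoms, an illustrative simplification, whereas the IPMP algorithms operate on the inverse problem itself with spherical trial functions:

```python
import numpy as np

n = 128
x = np.linspace(0.0, 1.0, n)

# A toy dictionary mixing "global" (low-frequency cosine) and "local"
# (narrow Gaussian bump) trial functions.
atoms = [np.cos(k * np.pi * x) for k in range(8)]
atoms += [np.exp(-0.5 * ((x - c) / 0.03) ** 2) for c in np.linspace(0.1, 0.9, 17)]
D = np.array([a / np.linalg.norm(a) for a in atoms])  # normalized dictionary

# Target signal: a smooth trend plus one localized feature.
signal = 0.8 * np.cos(2 * np.pi * x) + np.exp(-0.5 * ((x - 0.3) / 0.03) ** 2)

# Greedy matching pursuit: repeatedly pick the atom best matching the residual.
residual = signal.copy()
approx = np.zeros(n)
for _ in range(10):
    corr = D @ residual
    j = np.argmax(np.abs(corr))
    approx += corr[j] * D[j]
    residual -= corr[j] * D[j]

rel_err = np.linalg.norm(residual) / np.linalg.norm(signal)
print(f"relative residual after 10 iterations: {rel_err:.3f}")
```

Because the dictionary mixes global and local atoms, the expansion captures the smooth trend and the localized bump with very few terms.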

See also: V. Michel and N. Schneider, A first approach to learning a best basis for gravitational field modelling, arXiv: 1901.04222v2

How to cite: Schneider, N. and Michel, V.: Dictionary learning algorithms for the downward continuation of the gravitational potential, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-2367, https://doi.org/10.5194/egusphere-egu2020-2367, 2020.

D1730 |
Guillaume Ramillien and Lucia Seoane

Approaches based on Stokes coefficient filtering and "mass concentration" representations have been proposed for recovering changes of the surface water mass density from accurate along-track GRACE K-Band Range Rate (KBRR) measurements of geopotential change. The number of parameters, i.e., surface triangular tiles of water mass, to be determined remains large, and the choice of the regularization strategy, as the gravimetric inverse problem is non-unique, remains open. In this study, we propose to use regional sets of orthogonal surface functions to image the structure of the surface water mass density variations. Since the number of coefficients of the development is much smaller than the number of tiles, the computation of daily GRACE solutions for continental hydrology, e.g., obtained by Extended Kalman Filtering (EKF), is greatly accelerated and eased by the matrix dimensions and conditioning. The proposed decomposition scheme is applied to the African continent, where it enables the detection of very localized sources of (sub-)monthly water mass change.

How to cite: Ramillien, G. and Seoane, L.: Fast determination of surface water mass changes using regional orthogonal functions, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-2692, https://doi.org/10.5194/egusphere-egu2020-2692, 2020.

D1731 |
Qing Liu, Michael Schmidt, and Laura Sánchez

The objective of this study is the combination of different types of basis functions applied separately to different kinds of gravity observations. We use two types of regional data sets: terrestrial gravity data and airborne gravity data, covering an area of about 500 km × 800 km in Colorado, USA. These data are available within the “1 cm geoid experiment” (also known as the “Colorado Experiment”). We apply an approach for regional gravity modeling based on series expansions in terms of spherical radial basis functions (SRBF). Two types of basis functions covering the same spectral domain are used, one for the terrestrial data and another one for the airborne measurements. To be more specific, the non-smoothing Shannon function is applied to the terrestrial data to avoid the loss of spectral information. The Cubic Polynomial (CuP) function is applied to the airborne data as a low-pass filter, and the smoothing features of this type of SRBF are used for filtering the high-frequency noise in the airborne data. In the parameter estimation procedure, these two modeling parts are combined to calculate the quasi-geoid.

The performance of our regional quasi-geoid model is validated by comparing the results with the mean solution of independent computations delivered by fourteen institutions from all over the world. The comparison shows that the low-pass filtering of the airborne gravity data by the CuP function improves the model accuracy by 5% compared to that using the Shannon function. This result also makes evident the advantage of combining different SRBFs covering the same spectral domain for different types of observations.

How to cite: Liu, Q., Schmidt, M., and Sánchez, L.: The use of different spherical radial basis functions to combine terrestrial and airborne measurements for regional gravity field refinement, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-9565, https://doi.org/10.5194/egusphere-egu2020-9565, 2020.

D1732 |
Vadim Vyazmin and Yuri Bolotin

Airborne gravimetry is capable of providing Earth's gravity data of high accuracy and spatial resolution for any area of interest, in particular for hard-to-reach areas. An airborne gravimetry measuring system consists of a stable-platform or strapdown gravimeter and GNSS receivers. In traditional (scalar) airborne gravimetry, the vertical component of the gravity disturbance vector is measured. In actively developing vector gravimetry, all three components of the gravity disturbance vector are measured.

In this research, we aim at developing new postprocessing algorithms for estimating gravity from airborne data taking into account a priori information about spatial behavior of the gravity field in the survey area. We propose two algorithms for solving the following two problems:

1) In scalar gravimetry:  Mapping gravity at the flight height using the gravity disturbances estimated along the flight lines (via low-pass or Kalman filtering), taking into account spatial correlation of the gravity field in the survey area and statistical information on the along-line gravity estimate errors.

2) In vector gravimetry:  Simultaneous determination of three components of the gravity disturbance vector from airborne measurements at the flight path.

Both developed algorithms use an a priori spatial gravity model based on parameterizing the disturbing potential in the survey area by three-dimensional harmonic spherical scaling functions (SSFs). The algorithm developed for solving Problem 1 provides estimates of the unknown coefficients of the a priori gravity model using a least squares technique. Due to the assumption that the along-line gravity estimate errors at any two lines are not correlated, the algorithm has a recursive (line-by-line) implementation. At the last step of the recursion, regularization is applied due to ill-conditioning of the least squares problem. Numerical results of processing the GT-2A airborne gravimeter data are presented and discussed.

To solve Problem 2, one needs to separate the gravity horizontal component estimates from the systematic errors of the gravimeter's inertial navigation system (INS) (attitude errors, inertial sensor biases). The standard method of gravity estimation, based on modelling gravity over time, is not capable of providing accurate results, and additional corrections should be applied. The developed algorithm uses a spatial gravity model based on the SSFs. The coefficients of the gravity model and the INS systematic errors are estimated simultaneously from airborne measurements along the flight path via Kalman filtering, with regularization at the last time moment. Results of simulation tests show a significant increase in the accuracy of gravity vector estimation compared to the standard method.

This research was supported by RFBR (grant number 19-01-00179).

How to cite: Vyazmin, V. and Bolotin, Y.: Using spherical scaling functions in scalar and vector airborne gravimetry, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-11745, https://doi.org/10.5194/egusphere-egu2020-11745, 2020.

D1733 |
Gonca Ahi, Yunus Aytaç Akdoğan, and Hasan Yıldız

For quasi-geoid determination by 3-D Least Squares Collocation (LSC) in the context of Molodensky's approach, there is no need for a measured or modelled vertical gravity gradient (VGG), as the 3-D LSC takes the varying heights of the gravity observation points into account. However, the use of a measured or modelled VGG instead of the theoretical value is expected to improve the quasigeoid-geoid separation term, particularly in mountainous areas. The VGG measurements are found to differ from the theoretical value in the range of −25% to +39% in western Turkey. Previously, there had been no study using modelled VGGs for gravimetric geoid modelling in Turkey. VGGs are modelled by 3-D LSC in a remove-restore approach and validated with terrestrial VGG measurements in western Turkey. The effect of using the modelled VGG instead of the theoretical one in the quasigeoid-to-geoid separation term is found to be significant. The quasi-geoid computed by 3-D LSC in western Turkey is converted to geoids using the theoretical or modelled VGG values and compared with GPS/levelling geoid undulations.


How to cite: Ahi, G., Akdoğan, Y. A., and Yıldız, H.: Vertical Gravity Gradient Modeling by 3-D Least Squares Collocation and its impact on quasigeoid-geoid separation term, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-680, https://doi.org/10.5194/egusphere-egu2020-680, 2020.

D1734 |
Tieding Lu

Uncertainties usually exist in the acquisition of measurement data and affect the results of parameter estimation. Solving the uncertainty adjustment model can effectively improve the validity and reliability of parameter estimation. When the coefficient matrix of the observation equation has a singular value close to zero, i.e., the coefficient matrix is ill-posed, ridge estimation can effectively suppress the influence of the ill-posed observation equation on the parameter estimation. When the uncertainty adjustment model is ill-posed, it is more seriously affected by errors in the coefficient matrix and the observation vector. In this paper, the ridge estimation method is applied to the ill-posed uncertainty adjustment model, and an iterative algorithm is derived to improve the stability and reliability of the results. The derived algorithm is verified with two examples, and the results show that the new method is effective and feasible.
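The role of ridge estimation for an ill-posed coefficient matrix can be sketched as follows; the iterative stopping heuristic for the ridge parameter below is a simple stand-in, not the paper's derived algorithm:

```python
import numpy as np

rng = np.random.default_rng(5)

# Ill-posed observation equations: two nearly collinear columns in A.
n = 50
t = np.linspace(0.0, 1.0, n)
A = np.column_stack([t, t + 1e-4 * rng.normal(size=n), np.ones(n)])
x_true = np.array([2.0, -1.0, 0.5])
y = A @ x_true + 0.01 * rng.normal(size=n)

def ridge(A, y, k):
    """Ridge estimate: (A^T A + k I)^(-1) A^T y."""
    return np.linalg.solve(A.T @ A + k * np.eye(A.shape[1]), A.T @ y)

# Iteratively shrink the ridge parameter until the solution norm starts to
# blow up along the near-null direction of A^T A.
k = 1.0
x = ridge(A, y, k)
norm0 = np.linalg.norm(x)
for _ in range(30):
    x_new = ridge(A, y, k / 2.0)
    if np.linalg.norm(x_new) > 10.0 * norm0:
        break  # instability sets in: keep the previous, stable estimate
    k, x = k / 2.0, x_new

print(f"chosen ridge parameter: {k:.2e}")
```

The individual coefficients of the collinear columns remain weakly determined, but the ridge term keeps the estimate finite and the fitted observations accurate.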

How to cite: lu, T.: Ridge estimation iterative algorithm to ill-posed uncertainty adjustment model, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-18906, https://doi.org/10.5194/egusphere-egu2020-18906, 2020.